Understanding of Neural Networks Class 12 Notes
A network of neurons is called a "neural network". "Neural" comes from the word neuron, the basic cell of the human nervous system. In this chapter you are going to learn about advanced concepts of neural networks.
What is a neural network?
Artificial neural networks help to generate the best possible outcome without explicit reprogramming, much as the human brain does. Neural networks can extract features from data automatically, without a programmer having to specify them. A network of neurons is called a "neural network". Examples include auto-replying to emails, suggesting email replies, spam filtering, Facebook image tagging, showing items of our interest on e-shopping portals, and many more. One of the best-known examples of a neural network is Google's search algorithm.
Definition: A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain, by using processes that mimic the way biological neurons work together to identify phenomena, weigh options, and arrive at conclusions.
Parts of a Neural Network
Every neural network comprises layers of interconnected nodes — an input layer, hidden layer(s), and an output layer.
- Input Layer: The input layer is the first layer of a neural network; it receives data from the outside world.
- Hidden Layers: Hidden layers sit between the input and output layers and process the input data. Each hidden layer contains nodes (artificial neurons), and these nodes are interconnected.
- Output Layer: The last layer of the neural network is the output layer, which generates the final predictions or outcomes of the neural network.

An Artificial Neural Network (ANN) with two or more hidden layers is known as a Deep Neural Network. The process of training deep neural networks is called Deep Learning. The term "deep" in deep learning refers to the number of hidden layers (also called the depth) of a neural network. A neural network with only three layers (input, one hidden, and output) is just a basic neural network.
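Below is a minimal sketch (not from the textbook) of this idea in Python: a network is described by a made-up list of layer sizes, and the hidden layers are counted to decide whether the network is "deep".

```python
# Minimal sketch: describe a network by its layer sizes (hypothetical numbers).
# The first entry is the input layer, the last is the output layer,
# and everything in between is a hidden layer.
layer_sizes = [4, 8, 6, 2]          # 4 inputs, two hidden layers, 2 outputs

num_hidden = len(layer_sizes) - 2   # layers between input and output
print("Hidden layers:", num_hidden)       # 2
print("Deep network?", num_hidden >= 2)   # True: "deep" means 2 or more hidden layers
```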
Components of a neural network
The key components of a neural network are as follows:
What do you mean by Neurons in AI?
- Neurons are also called nodes.
- Neurons are the fundamental building blocks of a neural network.
- Neurons can receive inputs from external sources.
- Each neuron computes a weighted sum of its inputs.
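As a rough illustration (with made-up numbers), the weighted sum a single neuron computes can be sketched in Python as follows; the weights used here are explained in the next subsection.

```python
# Minimal sketch of a single artificial neuron: a weighted sum of its inputs.
# All numbers are made up for illustration.
inputs  = [0.5, 0.8, 0.2]       # values received from the previous layer
weights = [0.9, -0.4, 0.3]      # one weight per input connection

weighted_sum = sum(w * x for w, x in zip(weights, inputs))
print(round(weighted_sum, 2))   # 0.45 - 0.32 + 0.06 = 0.19
```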
What are weights in neurons?
- Weights represent the strength of connections between neurons.
- Each connection (synapse) has an associated weight.
- Weights convey the importance of that feature in predicting the final output.
- During training, neural networks learn optimal weights to minimise error.
Why we use activation functions?
- Activation functions in neural networks act like decision-makers.
- The activation function decides whether a neuron should be activated (send a signal forward) based on the input it receives.
- There are different types of activation functions, like the Sigmoid Function, Tanh Function, ReLU (Rectified Linear Unit), etc.
- Activation functions introduce non-linearity into the model, which helps it learn complex patterns in data and supports decision-making.
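Here is a minimal Python sketch of the three activation functions named above; the input value is arbitrary and only for illustration.

```python
import math

# Minimal sketches of three common activation functions.
def sigmoid(z):
    return 1 / (1 + math.exp(-z))   # squashes z into the range (0, 1)

def tanh(z):
    return math.tanh(z)             # squashes z into the range (-1, 1)

def relu(z):
    return max(0.0, z)              # keeps positive values, zeroes out negatives

z = 0.5                             # arbitrary input, for illustration only
print(sigmoid(z), tanh(z), relu(z))
```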
What is the purpose of bias?
- Bias terms are constants added to the weighted sum before applying the activation function.
- They allow the network to shift the activation function horizontally.
- Bias helps account for any inherent bias in the data.
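A small sketch (made-up numbers, step activation) of how adding a bias shifts the point at which a neuron becomes active:

```python
# Made-up numbers with a step activation: the bias shifts the point
# at which the neuron "turns on".
def step(z):
    return 1 if z >= 0 else 0

weighted_sum = -0.3                 # the same weighted sum in both cases
print(step(weighted_sum + 0.0))     # bias = 0.0 -> 0 (neuron stays inactive)
print(step(weighted_sum + 0.5))     # bias = 0.5 -> 1 (the bias shifted the cut-off)
```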
What is the purpose of connections in neural networks?
- Connections represent the synapses between neurons.
- Each connection has an associated weight.
- Biases (constants) are also associated with each neuron, affecting its activation threshold.
What is the purpose of Learning Rule?
- Neural networks learn by adjusting their weights and biases.
- The learning rule specifies how these adjustments occur during training.
- Backpropagation, a common learning algorithm, computes gradients and updates weights to minimize the network’s error.
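The following is only a simplified sketch of the idea behind weight updates, using one neuron, one made-up training example, and plain gradient descent; real backpropagation applies the same gradient idea layer by layer across the whole network.

```python
# Simplified sketch: one neuron, one weight, squared error, gradient descent.
# All numbers are illustrative.
x, target = 2.0, 1.0      # one input and its desired output
w, b = 0.1, 0.0           # initial weight and bias
learning_rate = 0.05

for _ in range(20):
    prediction = w * x + b            # forward pass (no activation, to keep it simple)
    error = prediction - target
    w -= learning_rate * error * x    # gradient of 0.5 * error**2 with respect to w
    b -= learning_rate * error        # gradient of 0.5 * error**2 with respect to b

print(round(w * x + b, 3))            # the prediction has moved close to the target (1.0)
```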
What do you mean by propagation Functions?
Propagation moves data from one layer to the next, where the output of one layer becomes the input of the next layer. The most common types are "forward propagation", where data flows from the input layer to the output layer to make predictions, and "backpropagation", where the error signal is calculated and used to update the network weights during training, allowing the machine to learn from its mistakes.
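A minimal sketch of forward propagation in Python, assuming a step activation and made-up weights and biases; each layer's output becomes the next layer's input.

```python
# Minimal forward-propagation sketch with a step activation.
# Weights, biases and inputs are made-up numbers.
def step(z):
    return 1 if z >= 0 else 0

def layer(inputs, weights, biases):
    # 'weights' holds one list of connection weights per neuron in this layer
    return [step(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [1.0, 0.5]                                              # input layer values
hidden = layer(x, [[0.4, -0.6], [0.7, 0.1]], [0.0, -0.5])   # hidden layer (2 neurons)
output = layer(hidden, [[0.5, 0.5]], [-0.6])                # output layer (1 neuron)
print(hidden, output)                                       # [1, 1] [1]
```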
How does a Neural Network work?
A neural network is a type of machine learning model that works in a way similar to the human brain. A neural network contains layers of nodes, and each node performs a simple mathematical operation. The layers in a neural network are:
- Input Layer: Takes input from the real world.
- Hidden Layer: Processes the inputs using weights, biases and activation functions.
- Output Layer: Generates the final output.
Let's understand how a neural network performs a task:
- Input Layer: Each node takes an input value (x).
- Weights and Biases: Each input is multiplied by a weight (w), and a bias (b) is added to the weighted sum to give the result z:
z = ∑wixi + b = (w1 × x1) + (w2 × x2) + (w3 × x3) + b
- Activation function: The result (z) passes through an activation function to determine the output (a). For example, with a step function:
a = 1 if z >= 0
a = 0 if z < 0
- Outputs from one layer become inputs to the next layer.
Let's see an example.
We have converted the textbook example into a restaurant analogy.
Think of a neural network as a restaurant where:
Ingredients = Input values (𝑥)
Chefs = Nodes/Neurons
Recipe = Weights (𝑤) and Bias (𝑏)
Taste Test = Activation function
Serving = Output
Let us see a simple problem.
CASE I: Let the features be represented as x1, x2 and x3.
Input Layer:
Feature 1, x1 = 2 (2 tomatoes)
Feature 2, x2 = 3 (3 onions)
Feature 3, x3 = 1 (1 carrot)
Hidden Layer:
Weight 1, w1 = 0.4 (0.4 teaspoons of salt per tomato)
Weight 2, w2 = 0.2 (0.2 teaspoons of salt per onion)
Weight 3, w3 = 0.6 (0.6 teaspoons of salt per carrot)
bias = 0.1 (extra seasoning)
threshold = 3.0
Output: Using the formula (the total seasoning added by the chefs can be calculated as):
∑wixi + bias = w1x1 + w2x2 + w3x3 + bias
= (0.4 × 2) + (0.2 × 3) + (0.6 × 1) + 0.1
= 0.8 + 0.6 + 0.6 + 0.1
= 2.1
Now, we apply the threshold value (Activation function: The chefs taste the dish)
If output > threshold, then output = 1 (active)
If output < threshold, then output = 0 (inactive)
In this case:
Output (2.1) < threshold (3.0)
So, the output of the hidden layer is:
Output = 0
This means that the neuron in the hidden layer is inactive: the dish does not pass the taste test, and the output is 0.
CASE II
Let’s say we have another neuron in the output layer with the following weights and bias:
w1 = 0.7
w2 = 0.3
bias = 0.2
The output of the hidden layer (0) is passed as input to the output layer:
Output = (w1 × x1) + (w2 × x2) + bias
= (0.7 × 0) + (0.3 × 0) + 0.2
= 0.2
Let’s assume the threshold value for the output layer is 0.1:
Output (0.2) > threshold (0.1)
So, the final output of the neural network is:
Output = 1
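To check the arithmetic of both cases, here is a small Python sketch that reproduces the restaurant example above; the threshold rule and the reuse of the hidden output on both inputs of the output neuron follow the worked example.

```python
# Small sketch reproducing the restaurant example.
def neuron(inputs, weights, bias, threshold):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return (1 if total > threshold else 0), total

# CASE I: the hidden-layer neuron (chefs adding seasoning)
hidden_out, seasoning = neuron([2, 3, 1], [0.4, 0.2, 0.6], bias=0.1, threshold=3.0)
print(round(seasoning, 2), hidden_out)   # 2.1 and 0: below the threshold, neuron inactive

# CASE II: the output-layer neuron, fed the hidden output (0) on both inputs,
# as in the worked example above
final_out, z = neuron([hidden_out, hidden_out], [0.7, 0.3], bias=0.2, threshold=0.1)
print(round(z, 2), final_out)            # 0.2 and 1: above the threshold, final output 1
```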
Types of Neural Networks
Neural networks are classified into different types based on their purposes. The most common types of neural networks are:
- Standard Neural Network (Perceptron)
- Feed Forward Neural Network (FFNN)
- Convolutional Neural Network (CNN)
- Recurrent Neural Network (RNN)
- Generative Adversarial Network (GAN)
1. Standard Neural Network (Perceptron)
The Standard Neural Network, also known as the Perceptron, is a fundamental concept in machine learning. A standard neural network does not have a hidden layer. Created by Frank Rosenblatt in 1958, the perceptron is a simple neural network with a single layer of input nodes connected to a layer of output nodes. It uses the threshold logic unit as its artificial neuron.
Application: Useful in spam detection and basic decision-making.
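As an illustration of the perceptron idea, here is a minimal Python sketch of a single-layer perceptron (a threshold logic unit) learning the logical AND function with the classic perceptron learning rule; the data, learning rate and number of passes are made up for illustration.

```python
# Minimal perceptron sketch: a threshold logic unit learning logical AND
# with the classic perceptron learning rule. Settings are illustrative.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, lr = [0, 0], 0, 1          # weights, bias, learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0   # threshold logic unit

for _ in range(10):                                  # a few passes over the data
    for x, target in data:
        error = target - predict(x)
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error

print([predict(x) for x, _ in data])                 # expected: [0, 0, 0, 1]
```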
2. Feed Forward Neural Network (FFNN)
FFNN is also known as the multilayer perceptron. An FFNN has an input layer, one or more hidden layers, and an output layer. In an FFNN, data flows in one direction, from the input layer to the output layer. FFNNs use activation functions and weights to process information and are capable of handling noisy data.
Application: Used in tasks like image recognition, natural language processing (NLP), and regression.
3. Convolutional Neural Network (CNN)
CNNs are primarily used for visual data. CNNs can extract features from an image, which helps to recognise patterns and objects in the image. CNNs work with three-dimensional data for image classification and object recognition tasks.

Application: Dominant in computer vision for tasks such as object detection, image recognition, style transfer, and medical imaging.
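A minimal sketch of the core operation behind CNNs: sliding a small filter over a tiny made-up "image" to produce a feature map. Real CNNs learn many such filters automatically.

```python
# Tiny sketch of the core CNN operation: sliding a 2x2 filter over a
# made-up grayscale "image" to produce a feature map.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]          # dark on the left, bright on the right
kernel = [[-1, 1],
          [-1, 1]]              # responds to a dark-to-bright (vertical) edge

feature_map = [[sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(2) for b in range(2))
                for j in range(3)]      # valid horizontal positions
               for i in range(3)]       # valid vertical positions
for row in feature_map:
    print(row)                          # the column containing the edge lights up
```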
4. Recurrent Neural Network (RNN)
A recurrent neural network is designed for sequential data, that is, data that has an order, like a sentence or a time series. RNNs have connection loops that feed a layer's output back into the network, known as feedback connections.
Application: Used in NLP for language modeling, machine translation, chatbots, as well as in speech recognition, time series prediction, and sentiment analysis.
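A minimal sketch of the recurrent idea with made-up weights and inputs: the hidden state computed at one time step is fed back in at the next step.

```python
import math

# Minimal sketch of recurrence: the hidden state from one time step is fed
# back in at the next. Weights and the input sequence are made up.
w_x, w_h, b = 0.8, 0.5, 0.0     # input weight, feedback weight, bias
hidden = 0.0                    # initial hidden state

for x_t in [1.0, 0.0, -1.0, 0.5]:                     # a short input sequence
    hidden = math.tanh(w_x * x_t + w_h * hidden + b)  # previous state is reused here
    print(round(hidden, 3))
```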
5. Generative Adversarial Network (GAN)
GANs consist of two neural networks, a generator and a discriminator, trained simultaneously. The generator creates new data instances, while the discriminator evaluates them for authenticity. They are used for unsupervised learning and can generate new data samples. GANs are used for generating realistic data, such as images and videos.
Application: Widely employed in generating synthetic data for various tasks like image generation, style transfer, and data augmentation.
Future of NN and its impact on society
Neural networks improve themselves over time; they are advanced algorithms that can raise productivity in various industries such as manufacturing, finance, and healthcare. Neural networks are also capable of analysing large amounts of data and generating recommendations for individuals or for industry. Neural networks contribute to economic growth and can create new job opportunities in fields like data science and artificial intelligence.
Neural networks also raise ethical concerns such as data privacy, bias in algorithms, and job displacement. It is critical for practitioners to handle these issues carefully and ensure that neural networks benefit society as a whole.
For more information, refer to the official CBSE textbooks available at cbseacademic.nic.in