Artificial Intelligence Class 11 – these notes have been designed by a team of teachers and examiners. The team has collected notes, all important subjective questions and answers, and MCQs from the textbook.

The AI Class 11 NCERT Handbook Notes are designed to align with the CBSE syllabus and provide comprehensive coverage of all key topics. These notes are created by experts and provide a clear and concise summary of each chapter in the textbook. Students seeking AI Class 11 NCERT Book Notes can find them here, with each unit presented in a simple and easy-to-understand format.

What is AI Bias – Artificial intelligence (AI) has the potential to be applied for the benefit of society as a whole, enhancing individual lives and tackling global issues. Some of the ways AI can be beneficial include:

Healthcare: By analysing vast amounts of medical data, finding disease trends, and assisting in the creation of new medicines, AI can help improve healthcare results.

Education: AI can help personalise education by offering specialised learning experiences and enhancing student performance.

Climate change: Through energy efficiency, waste reduction, and the facilitation of more sustainable habits, AI can contribute significantly to addressing climate change.

Social good: By analysing vast amounts of data and offering insights to guide decision-making, AI can be used to address social concerns including poverty, inequality, and injustice.


Principles for ethical AI

Organizations and people can adhere to the following guidelines to guarantee that artificial intelligence (AI) is created and used in an ethical and responsible manner:

Transparency: AI systems should be transparent, with clear and intelligible explanations of how decisions are made, so that people can understand and trust the technology.

Responsibility: Those who develop and utilise AI should be held accountable for the results and effects of the technology, as well as for ensuring that AI systems are used in ways that are consistent with moral principles.

Fairness: AI systems should be created to eliminate prejudice and discrimination and should treat every person equally, regardless of their colour, gender, or other traits.

Types of bias

There are several types of bias that can occur in artificial intelligence (AI) systems:

Algorithmic bias: This occurs when the data and algorithms used to train AI systems are biased, resulting in the AI system making biased decisions.

Representational bias: This occurs when the data used to train AI systems is unrepresentative or incomplete, leading the AI system to make incorrect or unfair decisions.

Confirmation bias: This occurs when AI systems only consider data that supports pre-existing beliefs or hypotheses, leading to a narrow and skewed view of the data.

Attribution bias: This occurs when AI systems make incorrect assumptions about the causes of events, leading to incorrect decisions.

Feedback loop bias: This occurs when AI systems reinforce existing biases by using biased data to make decisions, which then becomes part of the training data for future AI systems, perpetuating the bias.

Human bias in AI design: This occurs when AI systems are designed and developed by humans who hold biases, leading to those biases being baked into the AI system.

How bias influences AI based decisions

Bias in artificial intelligence (AI) systems can have a significant impact on the decisions these systems make. The data used to train the AI system, the methods employed, and the people involved in its development can all introduce bias.

An AI system may learn from biased data and then reproduce that bias in unfair or inaccurate conclusions. For instance, an AI system trained on data that incorporates gender or racial prejudices may act in a discriminatory way in the real world.

The algorithms used to train the AI system can also introduce bias. For instance, some algorithms may be biased by design, or they may be made to emphasise some outcomes over others, producing biased results.

Finally, bias can also be introduced by the people involved in the development and application of AI systems. A human designer's pre-existing bias, for instance, might influence the design and development of the AI system and lead to biased judgements.

How data driven decisions can be debiased

Several techniques can be used to debias data-driven decisions:

Training data that is diverse and representative: Lessening bias in AI systems can be achieved by using a diverse and representative set of training data. This contains information from many populations with various backgrounds, experiences, and viewpoints.

Transparency of algorithms: Transparency of algorithms used in AI systems is necessary to audit decision-making processes, identify bias, and correct it.

Human oversight: Human monitoring of AI systems can assist in identifying and resolving the system’s biases. This can require including various teams in the development of AI systems and conducting ongoing evaluations of their decision-making procedures.

Fairness and accountability: Achieving fairness and accountability in AI systems is essential for reducing bias. Clear goals and objectives for the AI system should be established, and they should be frequently monitored and evaluated to make sure they are being met.

Data quality: To guarantee that AI systems are unbiased, high-quality data is necessary. This includes information that is accurate, unique, and comprehensive, as well as information that is representative of the population under study.
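To make the "diverse and representative training data" point concrete, one common mitigation is to reweight examples so that under-represented groups are not drowned out during training. A minimal Python sketch (the group labels and the inverse-frequency weighting scheme are illustrative, not a prescribed method):

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Give each example a weight inversely proportional to how often
    its group appears, so rare groups are not drowned out in training."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # weight = n / (k * count) -> weights average to 1.0 across the dataset
    return [n / (k * counts[g]) for g in groups]

groups = ["A", "A", "A", "B"]          # group B is under-represented
weights = inverse_frequency_weights(groups)
print(weights)  # B's single example gets a larger weight than each A example
```

Such weights can then be passed to a training routine that supports per-example weighting, so errors on rare groups cost more.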

Storytelling Artificial Intelligence – Storytelling has been a fundamental aspect of human communication and culture for thousands of years. The power of storytelling lies in its ability to evoke emotions and convey complex information in a way that is memorable, engaging, and easy to understand. This makes storytelling an effective tool for communicating data and insights, especially in situations where dry statistics or numbers alone might not have the desired impact.

Cross-cultural storytelling is particularly powerful as it transcends language and cultural barriers, connecting people from different backgrounds through shared experiences and emotions. This makes storytelling an ideal tool for data storytelling, as it can help convey complex information in a way that is accessible and understandable to a diverse range of audiences.

In the context of data storytelling, the power of storytelling lies in its ability to make data more accessible, engaging, and memorable. By combining data and story, data storytellers can bring data to life, revealing insights, patterns, and trends in a way that is both informative and emotionally resonant.


The Need for Storytelling

The need for storytelling arises from the need to convey information and ideas in a way that is effective, engaging, and simple to understand. In today's world of excess information, it is essential for people, organisations, and enterprises to express their insights and ideas skillfully in order to stand out and leave a lasting impression.

By turning complicated knowledge into a simple, memorable tale, storytelling makes it more understandable. As a result, people comprehend and remember the information better, which enhances their capacity to recall and use it in the future.

Storytelling with data

Telling a story with data involves effectively communicating insights and information through visualizations and narrative. The following are some steps to tell a great story with your data:

Identify the story: Start by defining the story you want to tell, your audience, and the key insights you want to convey.

Gather the data: Collect the data relevant to your story and ensure that it is accurate and reliable.

Clean and Organize the data: Clean up the data, removing any duplicates or errors, and organize it in a way that makes it easy to work with.

Visualize the data: Choose the appropriate visualization methods that best represent the data and insights you want to convey.

Add context: Provide context and background information to help your audience understand the data and the story you’re telling.

Craft the narrative: Develop a compelling narrative that ties together the data and the insights you want to convey.

Test and iterate: Test your story with a small group of people to see how they react and make changes as necessary.

Present and share: Present your story in a way that is engaging and accessible, and share it with your intended audience.

Conflict and Resolution

Conflict resolution in AI refers to the processes and techniques used to resolve conflicts between AI systems, algorithms, or models when there are conflicting or inconsistent results. These conflicts can arise in various AI applications such as recommendation systems, autonomous decision-making, and multi-agent systems.

To resolve these conflicts, various approaches can be used, including:

Rule-based approaches: Conflicts can be resolved by using predefined rules or decision trees that dictate the order of priority for the conflicting results.

Negotiation-based approaches: Conflicts can be resolved through negotiation between the conflicting AI systems or algorithms, allowing them to reach a mutually acceptable solution.

Machine learning-based approaches: AI systems can be trained to learn from past conflict resolution outcomes and use that knowledge to resolve future conflicts.

Human-in-the-loop approaches: Conflicts can be resolved with the assistance of a human expert who can make a final decision or provide guidance to the AI systems.
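The rule-based approach above can be sketched in a few lines of Python: when sources disagree, a fixed priority order decides which answer wins. The source names and answers here are hypothetical:

```python
def resolve_by_priority(results, priority):
    """Rule-based conflict resolution: when AI systems disagree, pick the
    answer from the highest-priority source that produced one.
    `results` maps source name -> its answer (or None if it abstained)."""
    for source in priority:
        answer = results.get(source)
        if answer is not None:
            return answer
    return None  # no source produced an answer

# Hypothetical example: two systems disagree; the rules say the
# safety filter outranks the personalisation model.
results = {"safety_filter": "block", "personaliser": "recommend"}
print(resolve_by_priority(results, ["safety_filter", "personaliser"]))  # block
```

In practice the priority list encodes the predefined rules; negotiation-based or learned approaches replace this fixed ordering with something more flexible.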

Storytelling for audience

AI storytelling is the technique of leveraging AI technology and data to develop engaging stories for varied audiences. This method combines the strength of data analysis and visualisation with the storytelling abilities of journalists, designers, and other content producers to create compelling and insightful narratives.

The aims of AI storytelling are to surface insights and patterns and to make complicated information accessible and comprehensible to a broad range of audiences. It works well in a variety of contexts and can be used to engage and inform audiences, including:

Business and marketing: AI storytelling can assist businesses in creating more effective marketing efforts by helping them better understand their target audiences and market trends.

News & journalism: AI storytelling can aid in the discovery of obfuscated patterns and trends as well as the presentation of information in a more interesting and approachable manner.

Education and training: AI storytelling can be used to design dynamic and engaging learning experiences, enhancing the effectiveness and memorability of education and training.

Insights from storytelling

Storytelling can yield numerous significant insights that guide innovation, inform decision-making, and enhance comprehension of complicated challenges. The following are some important lessons that can be learned through storytelling:

Understanding of human behaviour and emotions: Storytelling offers a window into the reasons behind people’s actions as well as their feelings and experiences. This insight can be used to guide decisions and increase customer engagement.

Patterns and trends: Through data analysis and visualisation, storytelling may help reveal unseen patterns and trends and offer important new perspectives on challenging problems.

Empathy and connection: Storytelling can build empathy and connections among people, promoting comprehension and cooperation.

Innovation: Storytelling may challenge preconceived notions and offer fresh viewpoints and concepts, fostering innovation.

AI Values Class 11 Notes – AI values refer to the ethical and moral principles that guide the development and deployment of artificial intelligence (AI) technologies.


Issues and Concerns around AI

There are several issues and concerns around artificial intelligence (AI), including:

Bias and Discrimination: AI systems can perpetuate existing biases and discriminatory practices if they are trained on biased data sets. This can result in unfair treatment of certain groups of people.

Job Loss: AI systems have the potential to automate many jobs, leading to significant job losses and increased unemployment.

Security and Privacy: AI systems store and process vast amounts of personal and sensitive data, making them vulnerable to hacking and privacy breaches.

Lack of Regulation: The development and deployment of AI systems is largely unregulated, leading to concerns about accountability and transparency.

Ethical Concerns: AI raises complex ethical questions, such as the use of autonomous weapons, the development of AI systems that can cause harm, and the loss of privacy and control over personal data.

Lack of Human Understanding: AI systems can be difficult for humans to understand, leading to a lack of trust in the systems and the decisions they make.

Economic Disparity: The benefits of AI are likely to accrue to those who own and control the technology, leading to increased economic disparity.

To address these concerns, it’s important for governments, industry, and the academic community to work together to establish ethical principles for the development and deployment of AI, and to ensure that the benefits of AI are shared equitably.

AI and Ethical Concerns

Autonomous Weapons: The development of autonomous weapons raises ethical questions about accountability and the potential for harm.

Bias and Discrimination: AI systems can perpetuate existing biases and discriminatory practices if they are trained on biased data sets. This can result in unfair treatment of certain groups of people.

Privacy and Control over Personal Data: AI systems store and process vast amounts of personal and sensitive data, leading to concerns about privacy and control over personal information.

Responsibility and Accountability: It can be difficult to determine who is responsible when an AI system causes harm, making it challenging to hold anyone accountable.

Job Loss: AI systems have the potential to automate many jobs, leading to significant job losses and increased unemployment.

Lack of Human Understanding: AI systems can be difficult for humans to understand, leading to a lack of trust in the systems and the decisions they make.

The Potential for AI to Cause Harm: AI systems have the potential to cause harm, whether through errors in decision-making or malicious intent.

AI and Bias

Bias in artificial intelligence (AI) refers to systematic errors or unfairness in the algorithms and models used to develop AI systems. Bias can arise from a number of sources, including:

Data Bias: AI systems are trained on data sets, and if the data used to train the system is biased, the resulting AI system will also be biased. For example, if a facial recognition system is trained on a predominantly white and male data set, it may have difficulty accurately recognizing individuals from other racial or ethnic groups.

Algorithm Bias: Bias can also be introduced into AI systems through the algorithms and models used to develop them. For example, an algorithm that relies on historical data may perpetuate existing biases and discriminatory practices.

Human Bias: Humans involved in the development and deployment of AI systems can also introduce bias into the systems, either knowingly or unknowingly.

AI: Ethics, Bias, and Trust

Artificial Intelligence (AI) raises important ethical, bias, and trust issues.

Ethics: AI systems can raise complex ethical questions, such as the use of autonomous weapons, the development of AI systems that can cause harm, and the loss of privacy and control over personal data.

Bias: AI systems can perpetuate existing biases and discriminatory practices if they are trained on biased data sets. This can result in unfair treatment of certain groups of people.

Trust: AI systems can be difficult for humans to understand, leading to a lack of trust in the systems and the decisions they make. The potential for bias and discrimination in AI systems further undermines trust in the technology.

Employment and AI

Job Automation: AI can automate numerous tasks and jobs, causing significant job losses in certain industries and higher unemployment rates.

New Job Opportunities: AI is also creating new jobs in fields such as data science, machine learning, and AI development and deployment.

Job Changes: AI is transforming many jobs and requiring workers to acquire new skills and knowledge to stay competitive in the evolving job market.

Income Inequality: AI can exacerbate existing income disparities, with highly skilled workers benefiting more from the technology compared to lower-skilled workers who face a greater risk of job loss.

Labor Market Effects: The influence of AI on the labor market is likely to have significant effects on workers, employers, and society as a whole.

Teachers and Examiners (CBSESkillEduction) collaborated to create the Clustering Notes. All the important information is taken from the NCERT Textbook Artificial Intelligence (417).

Clustering Notes

What is Clustering?

Clustering is an unsupervised learning technique whose goal is to find patterns in a set of unlabelled data. It is a strategy for grouping comparable data so that the data and objects within a group are more similar to one another than to the data and objects in other groups.

Imagine that you have a sizeable library of books that you must categorise and arrange in a bookcase. You haven't read them all, so you know nothing about their contents. You begin by gathering the books with related titles – this is clustering.


Clustering algorithms can be applied in many fields, for instance:

Marketing – It’s critical to target the appropriate demographic if you’re a business. Clustering algorithms can identify people who are likely to buy your goods or service and group them together. To enhance the likelihood of a sale, focus your messaging on the groups you’ve identified.

Biology – Classification of plants and animals given their features.

Libraries – Book ordering

Insurance – Identifying groups of motor insurance policy holders with a high average claim cost; Identifying frauds

City-planning – Identifying groups of houses according to their house type, value and geographical location

Earthquake studies – Clustering observed earthquake epicenters to identify dangerous zones

WWW – Document classification; clustering weblog data to discover groups of similar access patterns

Identifying Fake News – Fake news is being created and spread at a rapid rate due to technological innovations such as social media, but clustering algorithms are being used to identify fake news based on its content.


Clustering Workflow

To cluster the data, the following steps are required:

Prepare the data – Data preparation means defining the set of features that will be made available to the clustering method. For the clustering approach to be effective, the data representation must include descriptive features from the input set (feature selection), or new features must be generated from the original set (feature extraction). At this stage we normalise, scale, and transform the feature data.

Create similarity metrics – To determine how similar two examples are, you must aggregate all of their feature data into a single numeric value. Consider a set of shoe data where the only attribute is "shoe size". By calculating the size difference between two pairs of shoes, you can determine how similar they are: the smaller the numerical gap between sizes, the greater the similarity. A similarity measure created by hand like this is called a manual similarity measure. Every clustering technique relies on its similarity measure, so it must be chosen carefully.

Run the clustering algorithm – In machine learning, you occasionally encounter datasets with millions of examples, and ML algorithms must scale effectively to handle them. Many clustering techniques are not scalable, however, because they must compute the similarity between every pair of points. Clustering can be done in a variety of ways; broadly, clustering algorithms can be divided into hierarchical and partitioning categories.

Interpret the results – Since clustering is unsupervised, there is no “truth” to validate findings. The absence of truth makes evaluating quality more difficult. The interpretation of the findings in this case is essential.
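The shoe-size illustration from the similarity-metrics step can be written as a tiny manual similarity measure in Python (the feature and the scaling constant are illustrative):

```python
def shoe_similarity(size_a, size_b, max_diff=10.0):
    """Manual similarity measure on a single numeric feature.
    The smaller the size difference, the closer the score is to 1."""
    return 1.0 - min(abs(size_a - size_b), max_diff) / max_diff

print(shoe_similarity(8, 8))    # identical sizes -> 1.0
print(shoe_similarity(8, 13))   # a 5-size gap  -> 0.5
```

With more than one feature, the per-feature similarities would be combined (for example, averaged) into the single numeric value the clustering algorithm needs.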


Types of Clustering

In fact, there are more than 100 clustering algorithms known. But few of the algorithms are used popularly, let’s look at them in detail:

Centroid-based clustering – The most popular centroid-based clustering algorithm is k-means, which groups the data into non-hierarchical clusters. Although efficient, centroid-based algorithms are sensitive to initial conditions and outliers. K-means is the primary clustering algorithm covered here because it is straightforward, effective, and efficient.

Density-based clustering – Density-based clustering links regions of high example density together into clusters. This permits clusters of any shape, provided that dense regions can be connected. These algorithms struggle with high-dimensional data and with regions of differing density, and by design they do not assign outliers to clusters.

Distribution-based clustering – This approach presupposes that the data is made up of distributions, such as Gaussian distributions; a distribution-based algorithm might, for example, group the data into three Gaussian distributions. The likelihood that a point belongs to a distribution decreases as the point moves away from the distribution's centre. If you don't know what kind of distribution your data follow, use a different approach.

Hierarchical clustering – Hierarchical clustering produces a tree of clusters. Unsurprisingly, hierarchical data, such as taxonomies, are ideally suited to it. For an illustration, see Oksana Lukjancenko, Trudy Wassenaar, and Dave Ussery's Comparison of 61 Sequenced Escherichia coli Genomes. Another benefit is that any number of clusters can be selected by pruning the tree at the appropriate level.


K-means Clustering

K-means is an iterative clustering algorithm that converges to a local optimum. The algorithm has two steps –

Step 1: Cluster Assignment

In this step, the algorithm visits each data point and assigns it to one of the cluster centroids. A data point is assigned to the cluster whose centroid is nearest to it.

Step 2: Move centroid

In the move-centroid step, k-means relocates each centroid to the average of the points in its cluster. In other words, the algorithm calculates the average of all the points in a cluster and moves the centroid to that location. The process is repeated until the centroids no longer move and the cluster assignments stop changing. The initial centroid positions are chosen at random.

Applying the K-Means Algorithm

Iteration 1 – To begin, we construct two centroids using random numbers, and then we place each data point in the cluster of the nearest centroid. We want to generate two clusters in this instance since we are utilising two centroids, which corresponds to K=2.

Iteration 2 – The centroids from the first iteration are not yet well placed. In the next iteration, the new centroid values are computed as the average values of the two clusters.

Iterations 3-5 – We repeat the process until there is no further change in the value of the centroids.
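The two k-means steps above can be sketched in plain Python for one-dimensional data (a minimal illustration, not a production implementation; the sample points and starting centroids are made up):

```python
def kmeans_1d(points, centroids, iterations=10):
    """Minimal 1-D k-means sketch: assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    for _ in range(iterations):
        clusters = {c: [] for c in range(len(centroids))}
        for p in points:                       # Step 1: cluster assignment
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        for i, members in clusters.items():    # Step 2: move centroid
            if members:
                centroids[i] = sum(members) / len(members)
    return centroids

points = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
print(kmeans_1d(points, centroids=[1.0, 10.0]))  # -> [2.0, 11.0]
```

After the first iteration the centroids land on the means of the two natural groups and stop moving, which is exactly the convergence condition described above.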

k-Means Clustering: Advantages and Disadvantages

Advantages of k-means

Relatively simple to implement.

Scales to large data sets.

Guarantees convergence.

Can warm-start the positions of centroids.

Easily adapts to new examples.

Generalizes to clusters of different shapes and sizes, such as elliptical clusters.

Disadvantages of k-means

Choosing k manually – The number of clusters k must be specified in advance, and the best value is often not obvious.

Being dependent on initial values – For a low k, you can reduce this reliance by running k-means multiple times with various initial values and selecting the best outcome. As “k” rises, more sophisticated k-means algorithms are required (referred to as “k-means seeding”) to select better initial centroids.

Data clustering with varying cluster sizes and densities – K-means struggles to cluster data with varying cluster sizes and densities. To cluster such data, you must generalise k-means, as described in the k-means Generalization section below.

Clustering outliers – Outliers can drag centroids, or they may get their own cluster instead of being ignored. Consider removing or clipping outliers before clustering.

Scaling with the number of dimensions – As the number of dimensions rises, a distance-based similarity measure between any two examples converges to a constant value. Dimensionality can be reduced by applying PCA to the feature data, or the clustering technique can be modified using spectral clustering.

k-means Generalization

The k-groups procedure generalises the k-means approach, which partitions data into clusters with different means. Two k-groups algorithms have been proposed: k-groups by first variation and k-groups by second variation. The k-groups implementation is based in part on Hartigan and Wong's k-means algorithm.

[Figure: two graphs side by side – the first shows a dataset with fairly obvious clusters; the second shows the odd grouping of examples produced by running k-means on it.]

To cluster naturally imbalanced clusters like the ones shown in the figure, you can adapt (generalize) k-means. The lines show the cluster boundaries after generalizing k-means as follows:
a. Left plot – No generalization, resulting in a non-intuitive cluster boundary.
b. Centre plot – Allowing different cluster widths results in more intuitive clusters of different sizes.
c. Right plot – Besides different cluster widths, allowing different widths per dimension results in elliptical instead of spherical clusters, improving the result.

Why is it Unsupervised?

We cluster data points by dividing them into various groups based on the similarity of their features rather than on a target or labels – no labels are involved, which is why clustering is unsupervised. Clustering uses a similarity function to assess how similar two data points are (k-means, for example, uses Euclidean distance). And because the features you give the algorithm determine the kind of groups you get, feature engineering is important in clustering.

How do you classify it? The usual approach requires a set of labelled data or a person to annotate the clusters:
1. Decide the features
2. Cluster the data
3. Use labelled data or human evaluators to annotate the clusters

Teachers and Examiners (CBSESkillEduction) collaborated to create the Difference between Classification and Clustering. All the important information is taken from the NCERT Textbook Artificial Intelligence (417).

Difference between Classification and Clustering

Classification

We deal with classification problems almost every day. Here are a few compelling examples to show how frequently classification problems arise.

Before starting classification, we first have to understand supervised learning –

Artificial intelligence (AI) can be created through the process of “supervised learning,” which involves training a computer algorithm on input data that has been labelled for a specific output.

Imagine receiving a basket full of several fruit varieties. The machine must now be trained by feeding it each different fruit one at a time, as shown below:

a. If the shape of the object is rounded with a depression at the top and red in colour, then it will be labelled as – Apple.
b. If the shape of the object is a long curving cylinder and green in colour, then it will be labelled as – Banana.

Supervised learning is further classified into two categories of algorithms:

a. Classification: A classification problem is when the output variable is a category, such as "red" or "blue", or "disease" and "no disease".
b. Regression: A regression problem is when the output variable is a real value, such as an amount in INR, a weight in kilograms, or a temperature in Fahrenheit.


What is classification in Artificial Intelligence / Machine Learning (AI/ML)

Classification is the process of recognising and sorting objects or concepts into predefined groups. In data management, classification permits the separation and sorting of data in accordance with predetermined requirements for various business or personal objectives.

For example, if you reside in a gated community, there are specific trash cans for different types of waste, such as food, paper, and plastic. Here, you are essentially labelling each category after categorising the waste into various groups.

For instance, we can assign the labels 'paper', 'metal', 'plastic', and so on to different types of waste.

Types of Classification Algorithm

Classification is a type of supervised learning. It labels the examples of input data and is best used when the output has finite and discrete values.

Examples of classification problems include:
a. Given an email, classify if it is spam or not.
b. Given a handwritten character, classify it as one of the known characters.
c. Given recent user behavior, classify as churn or not.

There are two main types of classification tasks that you may encounter, they are:

i) Binary Classification: Classification with only 2 distinct classes or with 2 possible outcomes

Example: Male and Female

Example: Classification of spam email and non-spam email

Example: Results of an exam: pass/fail

Example: Positive and Negative sentiment

ii) Multi Class Classification: Classification with more than two distinct classes.

Example: classification of types of soil

Example: classification of types of crops

Example: classification of mood/feelings in songs/music

Binary Classification

Binary classification often involves two classes: one representing the normal state and the other the abnormal state. For example, the normal condition is “not spam,” while the abnormal state is “spam.” Another example is when a task involving a medical test has a normal condition of “cancer not identified” and an abnormal state of “cancer detected.”

The class for the normal state is assigned the class label 0 and the class with the abnormal state is assigned the class label 1.

Popular algorithms that can be used for binary classification include:
a. Logistic Regression
b. k-Nearest Neighbors
c. Decision Trees
d. Support Vector Machine

Logistic Regression

Logistic regression is a binary classification algorithm used to categorise observations into a finite set of classes. It can be used with binary data, where an event either occurs (1) or does not occur (0).

Given a feature x, the model attempts to determine whether an event y occurs or not, so y can only be either 0 or 1. y is assigned the value 1 when the event occurs and 0 when it does not. For instance, if y stands for whether a sports team wins a game, then y will be 1 if they win and 0 if they lose.
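At its core, logistic regression passes a weighted sum of the features through the sigmoid function and thresholds the resulting probability at 0.5. A minimal Python sketch (the weight and bias values are made up; in practice they are learned from training data):

```python
import math

def sigmoid(z):
    """Map any real number into (0, 1), interpreted as P(y = 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, weight, bias):
    """Predict y = 1 if the modelled probability exceeds 0.5."""
    probability = sigmoid(weight * x + bias)
    return 1 if probability >= 0.5 else 0

# Hypothetical model: weight and bias would normally be learned from data.
print(predict(x=3.0, weight=2.0, bias=-4.0))  # z = 2  -> p ≈ 0.88 -> 1
print(predict(x=1.0, weight=2.0, bias=-4.0))  # z = -2 -> p ≈ 0.12 -> 0
```

The sigmoid is what makes the output interpretable as a probability of the event y = 1 rather than an unbounded score.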


True positives, true negatives, false positives and false negatives

In machine learning and artificial intelligence, the effectiveness of a classification model's predictions is evaluated using a confusion matrix, an N×N table where N is the number of target classes. The confusion matrix contrasts the model's predicted values with the actual target values, revealing how well the model performs and the types of errors it is committing.

Let’s understand the matrix:

a. The target variable has two values: Positive or Negative b. The columns represent the actual values of the target variable c. The rows represent the predicted values of the target variable

But wait – what are TP, FP, FN and TN here? These are the key terms to understand in a confusion matrix. Let's understand each term below.

True Positive (TP)

a. The predicted value matches the actual value b. The actual value was positive and classification model also predicts positive c. There is no error

True Negative (TN)

a. The predicted value matches the actual value b. The actual value was negative and classification model also forecasts negative c. There is no error

False Positive (FP)

a. The predicted value doesn’t match the actual value b. The actual value was negative but the model predicted a positive value c. This is Type 1 Error

False Negative (FN)

a. The predicted value doesn’t match the actual value b. The actual value was positive but the model predicted a negative value c. This is Type 2 Error
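These four counts can be computed directly from a pair of label lists; the labels below are made up for illustration:

```python
# Hypothetical actual and predicted labels (1 = positive, 0 = negative)
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 1, 0, 1, 0]

pairs = list(zip(actual, predicted))
tp = sum(1 for a, p in pairs if a == 1 and p == 1)  # True Positives
tn = sum(1 for a, p in pairs if a == 0 and p == 0)  # True Negatives
fp = sum(1 for a, p in pairs if a == 0 and p == 1)  # False Positives (Type 1 error)
fn = sum(1 for a, p in pairs if a == 1 and p == 0)  # False Negatives (Type 2 error)
```

For this toy data the counts are TP = 3, TN = 3, FP = 1, FN = 1.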

False Positive or False Negative in Medical Science

False positives in medical testing, and more generally in binary classification, occur when a test result incorrectly indicates the presence of a condition, such as a disease (the result is positive), when in reality it is not present. False negatives, on the other hand, occur when a test result incorrectly indicates the absence of a disease, when in fact it is present. These two types of errors can occur in a binary test.

Practice exercise on simple binary classification models

A set of 1,000 test examples, of which 50% are negative, was used to evaluate a binary classifier. The classifier was discovered to have a 60% sensitivity and 70% accuracy. For this example, create the confusion matrix.
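One way to work through this exercise: sensitivity = TP / (TP + FN) and accuracy = (TP + TN) / total. The sketch below applies those definitions to the given numbers:

```python
total = 1000
positives = total // 2            # 50% of examples are negative, so 500 are positive
negatives = total - positives

tp = int(0.60 * positives)        # sensitivity = TP / (TP + FN) = 60%
fn = positives - tp
correct = int(0.70 * total)       # accuracy = (TP + TN) / total = 70%
tn = correct - tp
fp = negatives - tn
```

This gives TP = 300, FN = 200, TN = 400, FP = 100 for the confusion matrix.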

Undoubtedly one of the most well-known shipwrecks in history is the sinking of the Titanic. The RMS Titanic struck an iceberg and sank on April 15, 1912, while on her first voyage. Unfortunately, there were not enough lifeboats to accommodate everyone, and 1502 out of 2224 passengers and staff perished.

While survival involved an element of luck, it appears that some groups of people had a higher chance of surviving than others. You must use passenger data (i.e. name, age, gender, socioeconomic class, etc.) to create a predictive model that answers the question: “What kinds of people were more likely to survive?”

Teachers and Examiners (CBSESkillEduction) collaborated to create the Regression Class 11 Notes. All the important information is taken from the NCERT Textbook Artificial Intelligence (417).

Regression Class 11 Notes

Regression is a technique or algorithm used in machine learning to model a target value using independent predictors. It is essentially a statistical tool for determining the correlation between two variables, one of which depends on the other. This approach is useful for forecasting and for determining the causal connection between variables.

Regression techniques differ based on: 1. The number of independent variables 2. The type of relationship between the independent and dependent variable

Correlation is a measure of the strength of a linear relationship between two quantitative variables

a. Correlation is positive when the values increase together b. Correlation is negative when one value decreases as the other increases

Correlation can have a value: a. 1 is a perfect positive correlation b. 0 is no correlation (the values don’t seem linked at all) c. -1 is a perfect negative correlation
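A minimal sketch of computing this coefficient from two lists of values (this is the same quantity as Pearson's r):

```python
from statistics import mean

def correlation(xs, ys):
    # Pearson correlation: covariance divided by the product of the spreads
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

For example, correlation([1, 2, 3], [2, 4, 6]) is 1.0 (perfect positive) and correlation([1, 2, 3], [6, 4, 2]) is -1.0 (perfect negative).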

Crosstabs and Scatterplots

Crosstabs

We can build a link between two variables with the aid of crosstabs. This relationship is shown in a table. The crosstab below displays, by age, whether a person has an unlisted phone number.

a. This table displays the number of observations for each possible combination of the two variables’ values in each table cell.

b. For example, we can see that 185 people between the ages of 18 and 34 do not have an unlisted phone number.

c. Additionally, percentages within the columns are displayed, with each column’s percentages adding up to 100%. For instance, 24% of all those without an unlisted phone number are between the ages of 18 and 34.

d. People without unlisted numbers have a different age distribution from those with unlisted numbers. In other words, the crosstab shows a correlation between the two variables: younger persons are more likely to have unlisted phone numbers.

e. As a result, it is also possible to conclude that the variables used to make this table are connected. We would state that these two categorical variables were not correlated if there was no connection between them.
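A crosstab can be sketched in plain Python by counting combinations of two variables. The observations below are invented for illustration, not the phone-number table discussed above:

```python
from collections import Counter

# Hypothetical paired observations: age group and whether the number is unlisted
ages     = ["18-34", "18-34", "35-54", "55+", "18-34", "35-54"]
unlisted = ["yes",   "no",    "no",    "no",  "yes",   "yes"]

# Each cell of the crosstab counts one (age group, unlisted?) combination
crosstab = Counter(zip(ages, unlisted))
```

Here crosstab[("18-34", "yes")] is 2, i.e. two 18-34-year-olds in this toy data have unlisted numbers.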

Scatterplots

The values for two different numerical variables are represented by dots in a scatter plot (also known as a scatter chart or scatter graph). Each dot’s location on the horizontal and vertical axes represents a data point’s values. To view relationships between variables, utilise scatter plots.

Pearson’s r

The Pearson correlation coefficient, where r = 1 denotes a perfect positive correlation and r = -1 denotes a perfect negative correlation, is used to assess the strength of a linear link between two variables. Therefore, you may use this test to determine whether there is a correlation between people’s height and weight (the taller a person is, the more likely it is that they will be heavier).

The following conditions must be met for Pearson’s correlation coefficient: The scale should be either interval or ratio.

a. Variables should be approximately normally distributed b. The association should be linear c. There should be no outliers in the data

Equation

What does this test do?

The symbol ‘r’ stands for the Pearson product-moment correlation coefficient (also known simply as the Pearson correlation coefficient), a measure of the strength of a linear link between two variables. Essentially, a Pearson product-moment correlation seeks to draw a line of best fit through the data of two variables, and r indicates how far the data points lie from this line of best fit (i.e., how well the data points fit this new model/line of best fit).

What values can the Pearson correlation coefficient take?

Between +1 and -1 are the possible values for the Pearson correlation coefficient, or r. There is no link between the two variables, as indicated by a value of 0. Positive associations have values greater than 0, meaning that if one variable’s value rises, so does the value of the other. A result that is less than 0 denotes a negative connection, meaning that when one variable’s value rises, the value of the other variable falls. The illustration below demonstrates this:

How can we determine the strength of association based on the Pearson correlation coefficient?

The stronger the association between the two variables, the closer the Pearson correlation coefficient, r, will be to +1 or -1, depending on whether the relationship is positive or negative.

A score of +1 or -1 indicates that all of your data points are on the line of best fit; no data points deviate from this line in any manner. Variation around the line of best fit is indicated by r values between +1 and -1, such as 0.8 or -0.4. The variation around the line of best fit increases as the value of r approaches zero. The graphic below displays many relationships and their correlation coefficients –

Are there guidelines to interpreting the Pearson’s correlation coefficient?

Yes, the following guidelines have been proposed:

Assumptions

A Pearson’s correlation is supported by four “assumptions”. The results of your data analysis utilising a Pearson’s correlation may not be accurate if any one of these four conditions is not satisfied.

Assumption # 1 – It is best to measure the two variables continuously. These continuous variables include, for instance, height (measured in feet and inches), temperature (measured in °C), salary (measured in dollars/INR), revision time (measured in hours), IQ score (intelligence), reaction time (measured in milliseconds), test performance (measured from 0 to 100), sales (measured in number of transactions per month), and so on.

Assumption # 2 – Your two variables must be linearly related to one another. While there are several techniques to check for linearity, we advise using Stata to create a scatterplot so that you can plot your two variables. The scatterplot can then be visually inspected for linearity. Your scatterplot might resemble one of the examples below:

Assumption #3 – There shouldn’t be any notable exceptions. Outliers are merely individual data points in your data that deviate from the pattern that is typically observed (for instance, in a study of 100 students’ IQ scores, where the mean score was 108 with only a small variation between students, one student had a score of 156, which is extremely unusual and may even place her in the top 1% of IQ scores globally). The possible impact of outliers is highlighted in the scatterplots below:

Assumption # 4 – The distribution of your variables should be roughly normal. Bivariate normality is required to evaluate the statistical significance of the Pearson correlation, but because this assumption is challenging to evaluate, a more straightforward approach is more frequently used.

Regression – Finding The line

Regression analysis refers to the process of modelling the relationship between two or more variables. It is typically used to find, or rather predict, the value of the variable that depends on the other.

Let x and y be two different variables. If y relies on x, the outcome takes the form of a straightforward regression. We also give the variables x and y the following names:

y – Dependent Variable (also called the Regressand or Explained Variable) x – Independent Variable (also called the Predictor or Explanatory Variable)
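A minimal least-squares sketch for finding that line, assuming a simple y = a + b*x model and made-up data points:

```python
def fit_line(xs, ys):
    # Ordinary least squares for a simple regression line y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx  # the fitted line always passes through the point of means
    return a, b
```

For points lying exactly on y = 2x + 1, such as (1, 3), (2, 5), (3, 7), fit_line recovers a = 1 and b = 2.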

Teachers and Examiners (CBSESkillEduction) collaborated to create the Data Analysis Class 11 Notes. All the important information is taken from the NCERT Textbook Artificial Intelligence (417).

Data Analysis Class 11 Notes

Structured Data

Structured data is data organised in a standardised, clearly defined format, for example names, dates, addresses, credit card numbers, stock data, and other everyday items.

It has a clear structure and is highly organised in a structured repository. Because it fits neatly into fixed fields and columns, it can easily be stored and retrieved in a relational database management system (RDBMS).

Common sources of structured data are: a. Excel files b. SQL databases c. Medical device logs d. Online Forms

Characteristics of Structured Data a. Highly organized b. Clearly defined c. Easy to access d. Easy to analyze

Examples of Structured Data a. Name b. Age c. Gender d. Address e. Phone Number f. Currency g. Date h. Billing info

Sources of Structured Data a. SQL database b. Spreadsheet c. Sensors d. Medical Device e. Online Forms f. Point of Sales Systems g. Web and Server Logs

Date and Time Datatype

Date and Time datatypes are used to hold values with both date and time information. Date-time information can be stored in a variety of formats.

String Data Type

A string is a structured data type, frequently built as an array of bytes (or words) that stores a succession of elements. A string can contain [A – Z], [a – z], [0 – 9], and all special characters; all of these are treated as text because strings store alphanumeric data. Spaces are also allowed. String values have to be enclosed in quotes (" " or ' ').

Examples: Address = “9th Floor, SAS Tower, Gurgaon” “Hamburger” “I ate 3 hamburgers”.

Categorical Data Types

Categorical data is a collection of information that can be divided into categories, such as the report cards of all students. This data is known as categorical because it can be grouped based on the variables contained in the report card, such as class, subject, section, school house, etc.

There are four different types of Categorical Data Type – a. Nominal b. Continuous c. Ordinal d. Binary

Representation of Data

The study of statistics focuses on gathering, organizing, analyzing, interpreting, and presenting data. The observations are turned into useful knowledge via data science. To complete this work, statisticians condense a lot of data into a format that is manageable and yields useful information.

Data representation techniques are broadly classified in two ways –

Non-Graphical technique – Tabular form and case form. This traditional format is not well suited to large datasets, and non-graphical techniques are less suitable when our goal is to make decisions after analysing a set of data.

Graphical Technique – Pie Chart, Bar graphs, line graphs, etc. The most typical visual representation of statistical data is in the form of points, lines, dots, and other geometric shapes. Due to time restrictions, it would not be feasible to describe the creation techniques for all sorts of diagrams and maps.

for example – a. Line graphs b. Bar diagrams c. Pie diagram d. Scatter Plots

Line Graphs

A line graph, often known as a line chart, is a visual representation of data that changes continuously over time. A line graph connects data points to display a continuous change. Depending on the data points they represent, the lines in a line graph can either rise or fall.

The advantage of using a line graph is that it is useful for making comparisons between different datasets, and it makes it easy to see both long-term and short-term changes, even small changes over time.

Bar Diagram

The bars in a bar graph, commonly referred to as a bar chart or bar diagram, are used to compare data between categories. The bar's length is directly proportional to the value it stands for. Simply put, the longer the bar, the greater the value it represents. The graph's bars, which can run either horizontally or vertically, are all the same width.

Following rules should be observed while constructing a bar diagram: (a) The width of all the bars or columns should be similar. (b) All the bars should be placed on equal intervals/distance. (c) Bars may be shared with colours or patterns to make them distinct and attractive.

Bar diagrams are helpful for comparing data, offer a visual representation for quick comparison of amounts in various categories, and make it simple to determine relationships. Bar graphs also depict large changes over time.

Pie Chart

A circular graph with numerous parts or sections is known as a pie chart. Each sector (segment) of the pie represents the relative size, i.e., the percentage or contribution that each category made to the overall pie. Each part of the diagram resembles a slice of a pie, and the whole thing looks like one. Data from a short table can often be visualised using pie charts.

The advantage of a pie chart is that it is simple, easy to understand, and provides data comparison at a glance.

Scatter Plots

Scatter plots are used to show the relationship between two variables (or aspects) for a collection of paired data. They consist of a set of data points plotted along the x and y axes. The various shapes that the data points take tell a tale all their own, most frequently indicating the connection (positive or negative) in a lot of data.

Types of Correlation

The statistical concept of correlation describes how closely two variables move in parallel with one another. Two variables are considered to have a positive correlation if they move in the same direction. They have a negative correlation if they travel in the opposite directions.

Positive Correlation – Both variables are seen to be moving in the same direction. In other words, with the increase in one variable, the other variable also increases.

Negative Correlation – Both the variables are seen to be moving in opposite directions. While one variable increases, the other variable decreases.

Exploring Data

Exploring data entails “getting to know” the data, including its values and their typical, unusual, focused, or extreme characteristics. More significantly, throughout the exploration process, one has the chance to spot and fix any issues with their data that could perhaps influence the findings they come to during analysis. This is the first step in data analysis and involves summarizing the main characteristics of a dataset, such as its size, accuracy, initial patterns in the data and other attributes.

Case, Variables and Levels of Measurement

Cases and Variables

A variable is a quality that may be measured and can take several values, or anything that varies depending on the circumstance. In contrast, a constant is the same in every situation in a research study. A case is a sampling point or experimental unit.

Cases are simply the objects being studied; a dataset is said to consist of cases.

Levels of Measurement

The level of measurement refers to the method used to measure a set of data. Data cannot be treated equally in all cases, so it makes sense to categorise data sets using several standards. Some data are qualitative while others are quantitative; there are discrete and continuous data sets. Qualitative data can be nominal or ordinal, and quantitative data can be further separated into interval and ratio data.

Nominal Level

Nominal-level data are qualitative data. The four seasons of winter, spring, summer, and autumn are examples of nominal variables, as are product categories like Mercedes, BMW, or Audi. Since they are not numbers, they cannot be ranked or utilised in calculations. The easiest or lowest of the four ways to characterise data is the nominal level of measurement.

Ordinal Level

Ordinal data is composed of groups and categories that are arranged in a specific order. For instance, suppose you were asked to rate a restaurant meal and had the choice between unpleasant, unappetizing, just acceptable, tasty, and delicious. Although the restaurant used words rather than numbers to judge the quality of its meals, these preferences are clearly ranked from low to high, or from negative to positive, making the data ordinal. The differences between the data values, however, cannot be quantified. Like nominal scale data, ordinal scale data cannot be used in calculations.

Interval Level

Because it has a clear ordering, data measured using the interval scale is similar to data recorded using the ordinal scale, but there are some differences as well. Even though the data does not have a starting point, or a zero value, the differences between interval scale data can still be measured.

Ratio Scale Level

Similar to interval scale data, ratio scale data has a zero point, and ratios can be calculated. For instance, the scores of four students on the final multiple-choice test in statistics were 80, 68, 20, and 92 (out of a possible 100). The grades are produced by a computer. The numbers 20, 68, 80, and 92 can be arranged from lowest to highest, or vice versa. The differences in the data are meaningful: the score 92 is 24 points higher than the score 68. Ratios can also be computed; the lowest possible score is 0, and 80 divided by 20 equals 4, so an 80 is four times better than a 20, for example.

Data Matrix and Frequency Tables

What is Data Matrix?

The data matrix is a tabular representation of the cases and variables used in your statistical analysis. In a data matrix, each row denotes a case and each column a variable. A complete data matrix may contain hundreds, lakhs, or even more cases.

Frequency Tables

The frequency of a data value is the number of times that value occurs in a given set of data. In cricket, if four players each score 90 runs, the score of 90 is said to occur four times. A data value's frequency is often denoted by 'f'.
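The cricket example can be sketched with Python's Counter, which builds exactly such a frequency table:

```python
from collections import Counter

# Runs scored by several players; the score 90 occurs four times
runs = [90, 45, 90, 120, 90, 45, 90]
freq = Counter(runs)  # maps each value to its frequency f
```

Here freq[90] is 4 and freq[45] is 2.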

Graphs and Shapes of Distributions

Statisticians or machine learning engineers often want to summarize the data they have. They can do it by various available methods like data matrix, frequency tables or by graphical representation. When graphed, the data in a set is arranged to show how the points are distributed throughout the set.

Mean, Median and Mode

Mean

Imagine yourself returning home with your report card after the announcement of your final grades in class. Your parents will ask, “What is your average score?” regarding your grades and general performance. In actuality, they’re looking for your MEAN score.

The most often used and well-known index of central tendency is the mean (or average). The mean is determined by dividing the sum of all the values in the data set by the number of values. Here, the mean is calculated by adding up all of your marks and dividing the total by the number of subjects.

M = ∑fx / n Where M = Mean ∑ = Sum total of the scores f = Frequency of the distribution x = Scores n = Total number of cases

Median

The median is the value of an observation for which half the observations are larger and half are smaller. If the number of data points is even, the mean of the two middle points is taken. For the median difference, the median of each of the two samples is calculated, and then their difference is taken.

For grouped data, calculation of the median in a continuous series involves the following steps: (i) The data are arranged in ascending order of their class intervals (ii) Frequencies are converted into cumulative frequencies (iii) The median class of the series is identified (iv) A formula is used to find the actual median value

Mode

Another crucial indicator of a statistical series' central tendency is its mode, the value that appears most frequently in the data series. The mode is the most frequent score in our data set and is represented by the highest bar on a histogram or bar chart. Therefore, you can think of the mode as the most common choice.
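All three measures can be computed with Python's statistics module; the marks below are invented for illustration:

```python
from statistics import mean, median, mode

marks = [72, 85, 85, 60, 90, 85, 78]  # hypothetical subject marks

avg = mean(marks)    # sum of values divided by their count
mid = median(marks)  # middle value of the sorted data
top = mode(marks)    # most frequent value
```

For this data the median and mode are both 85, while the mean is about 79.29.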

Z – score (For Advance Learners)

The Z-score provides us with a notion of how far our particular data point is from the mean. Technically speaking, it’s a measurement of how far the data point deviates from the population mean by standard deviations. If a value is above the mean, the z-score is positive; if it is below the mean, it is negative.

How do we interpret a z-score?

The value of the z-score tells you how many standard deviations your data point is away from the mean. If a z-score is equal to 0, it is on the mean. A positive z-score indicates the raw score is higher than the mean; for example, if a z-score is equal to +1, it is 1 standard deviation above the mean. A negative z-score reveals the raw score is below the mean; for example, if a z-score is equal to -2, it is 2 standard deviations below the mean.
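A z-score is simply (x - mean) / standard deviation. A minimal sketch, with an assumed population mean and standard deviation:

```python
def z_score(x, mu, sigma):
    # How many standard deviations x lies from the population mean mu
    return (x - mu) / sigma
```

With an assumed mean of 70 and standard deviation of 10, a score of 85 gives z = 1.5 (above the mean) and a score of 50 gives z = -2.0 (below it).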

Teachers and Examiners (CBSESkillEduction) collaborated to create the Creative and Critical Thinking Class 11 Notes. All the important information is taken from the NCERT Textbook Artificial Intelligence (417).

Creative and Critical Thinking Class 11 Notes

Design Thinking Framework

Design thinking is defined as the cognitive, strategic, and practical process by which design concepts (ideas for new goods, structures, machines, etc.) are generated. In corporate and social contexts, design thinking is also linked to recommendations for the invention of goods and services.

The goal of design thinking, a solution-centred framework, is to generate as many ideas and potential solutions as you can. Ideation is both a fundamental principle of design thinking and a step in the process.

There are five stages of Design Thinking – a. Empathize b. Define c. Ideate d. Prototype e. Test

Empathize

Empathize is the first step in design thinking. To better grasp the issue, one must let go of any preconceived views and immerse oneself in the situation. Simply said, empathy is the ability to put oneself in another person's shoes and understand how they might be feeling about their issue, circumstance, or situation.

Define

The data gathered during the Empathize stage is put to use in the Define stage to generate insights and to identify the issue that needs to be fixed. It’s a chance for the design thinker to frame the problem or challenge in terms of unmet user demands, with a human-centered approach to problem-solving.

Ideate

The issue has now become clear, so it is time to come up with solutions. As part of the problem-solving process, many ideas are being generated. Idea generation is the main focus of ideation. One shouldn’t worry about whether the ideas developed during brainstorming are realistic, doable, or even viable.

The thinkers' sole responsibility is to generate as many ideas as they can. Thinking broadly in terms of concepts and outcomes is necessary. There are numerous brainstorming tools available for use at this point.

Prototype

In the prototype stage, a model intended to address the users' problems is created; it is then tested in the subsequent step of the process. Creating a prototype is not a meticulous procedure. A simple sketch, poster, group role-playing exercise, homemade “gadget,” or 3D-printed object are some examples.

The prototypes must be quick to develop, simple, and affordable. Prototypes are therefore thought of as crude representations of what the finished product is supposed to look like.

Test

Testing prototypes with end users is one of the most crucial steps in the design thinking process. Often, this process is seen running concurrently with prototyping. The designers get feedback on the prototype(s) and another chance to engage with and sympathise with the people for whom they are developing solutions during testing. The goal of testing is to discover as much as possible about the user, the issue, and any viable solutions.

Right Questioning

Asking outstanding questions leads to a greater comprehension of the problem, which in turn produces excellent solutions. As part of the process of designing solutions using the design thinking framework, designers are expected to connect with consumers and users frequently in order to collect specific information about the issues and the users' expectations. The best way to handle the situation is to analyse these facts thoroughly.

5 W’s and One H question

The What, When, Where, Who, and Why of an incident are simply referred to as the 5 Ws, and the How of the incident is the 1 H. If the answers to these questions are found, the situation will be understood well enough to close the case.

Identifying the problem to solve

The process of solving an issue involves defining it, identifying its root cause, generating potential solutions through brainstorming, and narrowing down the options to find the best one.

Many people spend their entire working day dealing with problems. You may encounter difficulties that are big or tiny, straightforward or complex, whether you’re working to solve a problem for an internal or external client.

Ideate

Ideation is the process of coming up with ideas and solutions through exercises like brainstorming, prototyping, and sketching. Design thinkers produce ideas during the ideation stage through imaginative and inquisitive actions, which take the shape of questions and solutions.

Ideation Will Help You –

a. Ask the right questions and innovate with a strong focus on your users, their needs, and your insights about them. b. Bring together perspectives and strengths of your team members. c. Get obvious solutions out of your heads, and drive your team beyond them.

Ideation Techniques –

Brainstorm – Participants should feel free to express their opinions without fear of judgment, so that numerous solutions are available for resolving the problem.

Brain dump – Similar to brainstorming, a brain dump involves individuals writing down their thoughts on paper or post-it notes and then sharing them with the larger group.

Brain writing – Brain writing, commonly referred to as “individual brainstorming,” is quite similar to a brainstorming session. The participants brainstorm ideas on paper, then hand off their own piece of paper to another participant after a short while so that they can build on one another’s ideas.

A Focus on Empathy

The first step in design thinking is empathy because it enables designers to understand, communicate with, and even share the sentiments of consumers. We may connect with how other people could be feeling about their issue, circumstance, or situation by empathising with them.

What’s an Empathy Map?

The empathy map is a very helpful tool for identifying user needs and getting a better knowledge of the current issue. Additionally, it aids in strengthening that comprehension by providing understanding of user behaviour.

You can utilise the empathy map activity to generate a truthful broad depiction of the user or users in order to create a “persona” or profile for the user. A user’s education, lifestyle, hobbies, values, objectives, wants, thoughts, desires, attitudes, and activities can all be included in a persona.

Teachers and Examiners (CBSESkillEduction) collaborated to create the Math for AI Class 11 Notes. All the important information is taken from the NCERT Textbook Artificial Intelligence (417).

Math for AI Class 11 Notes

Introduction to Matrices

A matrix is a set of numbers arranged in rows and columns; in other words, a rectangular grid of numbers organised into rows and columns. Since computers only understand numbers (binary, hexadecimal, etc.), matrices help to store image pixels in a row-and-column format.

Example of AI Matrix –

The mathematics of data is frequently referred to as linear algebra, the algebra of matrices. This subject is suggested as a prerequisite before beginning the study of artificial intelligence because it is arguably the foundation of that field.

Terminology related to Matrices
1. Order of matrix – If a matrix has 3 rows and 4 columns, the order of the matrix is 3*4, i.e. row*column
2. Square matrix – A matrix in which the number of rows is equal to the number of columns
3. Diagonal matrix – A matrix in which all the non-diagonal elements are equal to 0
4. Upper triangular matrix – A square matrix where all the elements below the diagonal are equal to 0
5. Lower triangular matrix – A square matrix where all the elements above the diagonal are equal to 0
6. Scalar matrix – A square matrix where all the diagonal elements are equal to some constant k
7. Identity matrix – A square matrix where all the diagonal elements are equal to 1 and all the non-diagonal elements are equal to 0
8. Column matrix – A matrix which consists of only 1 column; sometimes used to represent a vector
9. Row matrix – A matrix consisting of only 1 row
10. Trace – The sum of all the diagonal elements of a square matrix
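A few of these terms can be checked for a small matrix using nested Python lists:

```python
A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]

rows, cols = len(A), len(A[0])            # order of the matrix is rows*cols
is_square = rows == cols                  # square: rows equal columns
trace = sum(A[i][i] for i in range(rows)) # sum of the diagonal elements
```

For this 3*3 square matrix the trace is 1 + 5 + 9 = 15.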


How do you define a Matrix?

When we represent a set of numbers in the form of ‘m’ horizontal lines (called rows) and ‘n’ vertical lines (called columns), the arrangement is called an m × n (m by n) matrix.

Example –

If A =

| 1 2 3 |
| 4 5 6 |
| 7 8 9 |
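The same 3 × 3 matrix can be written in NumPy, and its order read off from `.shape`:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

m, n = A.shape
print(m, n)   # 3 3, so A is a 3 x 3 (square) matrix
```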


Types of Matrix

1. Row Matrix – A matrix with only one row, e.g. A = [1 3 -5].

2. Column Matrix – A matrix with only one column.

3. Square Matrix – A matrix in which the number of rows is equal to the number of columns. For example, a matrix with 3 rows and 3 columns is a square matrix.

4. Diagonal Matrix – A matrix with all elements zero except those on its leading diagonal.

5. Scalar Matrix – A matrix in which all the diagonal elements are equal and all other elements are zero.

If every diagonal element is unity (1) and every non-diagonal element is equal to zero, the matrix is called a Unit (Identity) matrix.
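A scalar matrix is simply a constant k times the unit matrix; a small sketch with an assumed constant k = 4:

```python
import numpy as np

I = np.eye(3)   # unit (identity) matrix of order 3
S = 4 * I       # scalar matrix: every diagonal element equals 4

print(np.diag(S))                      # [4. 4. 4.]
print(S[0, 1], S[1, 2], S[2, 0])       # off-diagonal elements are all 0
```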


Matrix Operations

Transpose Matrix

The transpose of a matrix creates a new matrix with the rows and columns flipped. It is denoted by the superscript T next to the matrix, e.g. A^T.
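A quick sketch of the transpose in NumPy, using an assumed 2 × 3 matrix:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # order 2 x 3

T = A.T                     # transpose: order 3 x 2

print(A.shape, T.shape)     # (2, 3) (3, 2)
print(T[0, 1])              # row/column indices swap: equals A[1, 0]
```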

Determinant

Every square matrix can be expressed by a number known as its determinant. If A = [a_ij] is a square matrix of order n, then the determinant of A is denoted by det A or |A|.
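A minimal sketch computing a determinant with NumPy, using an assumed 2 × 2 matrix (for [[a, b], [c, d]] the determinant is ad − bc):

```python
import numpy as np

A = np.array([[3, 1],
              [2, 4]])

det = np.linalg.det(A)   # 3*4 - 1*2 = 10 (returned as a float)
print(round(det))
```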


Vector and Vector Arithmetic

The vector is the foundation of linear algebra. Vectors are widely used in machine learning to describe algorithms and processes, for example the target variable (y) when training an algorithm.

What is a Vector?

A two-dimensional array of numbers enclosed in brackets is a matrix, while a one-dimensional array in brackets is a column vector, or simply a vector.

We begin by defining a vector: a set of n numbers, which we shall write in column form.
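A small sketch: the same set of n numbers written as a one-dimensional NumPy vector and as an n × 1 column vector (the values are assumed):

```python
import numpy as np

v = np.array([2, 4, 6])   # a vector of n = 3 numbers
col = v.reshape(3, 1)     # the same numbers as a 3 x 1 column vector

print(v.shape)     # (3,)
print(col.shape)   # (3, 1)
```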


Vector Arithmetic

Vector Addition

Vectors of equal length can be added to create a new vector: x = y + z. The new vector has the same length as the other two, and each element is the sum of the elements at the same indices.

Vector Subtraction

A vector can be subtracted from another vector of equal length to create a new, third vector: x = y − z. As with addition, the new vector has the same length as the parent vectors, and each element of the new vector is calculated as the difference of the elements at the same indices.
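Both operations work element by element; a sketch with assumed vectors y and z of equal length:

```python
import numpy as np

y = np.array([5, 7, 9])
z = np.array([1, 2, 3])

x_add = y + z   # element-wise addition:    [6, 9, 12]
x_sub = y - z   # element-wise subtraction: [4, 5, 6]

print(x_add, x_sub)
```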

Vector Multiplication

With scalar multiplication there is only one type of operation: multiply a scalar by a scalar and obtain a scalar result, a × b = c. Vectors are a different story: there are two kinds of multiplication, one in which the result of the product is a scalar (the dot product) and one in which the result is a vector (the cross product). (A third kind gives a tensor result, but that is out of scope for now.) To begin, let us represent vectors as column vectors and define the vectors A and B as column vectors.
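A sketch of the two kinds of vector product, with assumed 3-element vectors A and B: the dot product gives a scalar, while the cross product gives a vector:

```python
import numpy as np

A = np.array([1, 2, 3])
B = np.array([4, 5, 6])

dot = np.dot(A, B)       # scalar result: 1*4 + 2*5 + 3*6 = 32
cross = np.cross(A, B)   # vector result, perpendicular to both A and B

print(dot)
print(cross)             # [-3  6 -3]
```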

Physical quantities are of two types:

Scalar: has only magnitude, no direction.
Vector: has both magnitude and direction.

Matrix and Matrix Arithmetic

If A and B are two matrices of order m × n (each has m rows and n columns), they can be added, element by element, to form a new matrix of order m × n.

Multiplication of a matrix by a scalar

Let A = [a_ij] be an m × n matrix and k be any number, called a scalar. The matrix obtained by multiplying every element of A by the scalar k is denoted by kA.
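A short sketch of both operations, matrix addition and scalar multiplication, with assumed 2 × 2 matrices and k = 3:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

C = A + B    # element-wise addition; same order 2 x 2
kA = 3 * A   # scalar multiplication: every element multiplied by k = 3

print(C)     # [[ 6  8] [10 12]]
print(kA)    # [[ 3  6] [ 9 12]]
```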

Multiplication of Matrices

Two matrices of the same size can be multiplied element by element; this is often called element-wise (Hadamard) matrix multiplication. For the matrix product AB, however, two matrices A and B can be multiplied only if the number of columns in A (the pre-multiplier) is the same as the number of rows in B (the post-multiplier).

To multiply A and B (each 2 × 2), each entry of the product AB is a row of A times a column of B:

(first row of A) × (first column of B)
(first row of A) × (second column of B)
(second row of A) × (first column of B)
(second row of A) × (second column of B)
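The row-times-column rule above can be checked with NumPy’s matrix product, using assumed 2 × 2 matrices; element-wise multiplication is shown alongside for contrast:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

product = A @ B       # matrix product: entry (0,0) = 1*5 + 2*7 = 19
elementwise = A * B   # Hadamard product: entry (0,0) = 1*5 = 5

print(product)        # [[19 22] [43 50]]
print(elementwise)    # [[ 5 12] [21 32]]
```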