Evaluating Models Class 10 Questions and Answers

Evaluating Models Class 10 Questions and Answers – The CBSE has changed the syllabus of Std. X. These questions and answers are based on the new syllabus, the CBSE textbook, sample papers and board papers. All the important information is taken from the Artificial Intelligence Class X textbook based on the CBSE board pattern.


Q1. What will happen if you deploy an AI model without evaluating it with known test set data?

Answer: If you deploy an AI model without evaluating it, several issues may occur.
The model can produce incorrect results: it will have been fitted only to its training data, so it may fail when it sees new data. Without evaluation, the model can also make unfair or harmful decisions that go unnoticed.

Q2. Do you think evaluating an AI model is that essential in an AI project cycle?

Answer: Model evaluation is like giving your AI model a report card. It helps you understand its strengths, weaknesses, and suitability for the task at hand. This feedback loop is essential for building trustworthy and reliable AI systems.

Q3. Explain train-test split with an example.

Answer: The train-test split is a technique for evaluating the performance of a machine learning algorithm.

  • It can be used for any supervised learning algorithm
  • The procedure involves taking a dataset and dividing it into two subsets: The training dataset and the testing dataset
  • The train-test procedure is appropriate when there is a sufficiently large dataset available
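The split above can be sketched in plain Python. This is a minimal illustration with a made-up toy dataset of 10 samples (in practice a library routine such as scikit-learn's `train_test_split` is typically used):

```python
import random

# A toy dataset of 10 samples: (feature, label) pairs.
data = [(i, i % 2) for i in range(10)]

random.seed(42)  # fixed seed so the split is reproducible
random.shuffle(data)

# Hold out 20% of the data for testing; train on the remaining 80%.
split = int(len(data) * 0.8)
train, test = data[:split], data[split:]
print(len(train), len(test))  # 8 2
```

The model is then fitted only on `train`, and its performance is measured only on `test`, which it has never seen.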

Q4. “Understanding both error and accuracy is crucial for effectively evaluating and improving AI models.” Justify this statement.

Answer:

Error –

  • Error can be described as an action that is inaccurate or wrong.
  • In Machine Learning, error is used to see how accurately our model can predict on new, unseen data.
  • Based on our error, we choose the machine learning model which performs best for a particular dataset.

Accuracy –

  • Accuracy is an evaluation metric that allows you to measure the total number of predictions a model gets right.
  • The accuracy and the performance of a model are directly proportional: the better the performance of the model, the more accurate its predictions.

Q5. What is classification accuracy? Can it be used all times for evaluating AI models?

Answer: Classification accuracy is the number of correct predictions made as a ratio of all predictions made. It cannot be used in all cases: it is only suitable when there is an equal number of observations in each class, i.e., a balanced dataset (which is rarely the case), and when all predictions and prediction errors are equally important.

Assertion and reasoning-based questions:

Q6. Assertion: Accuracy is an evaluation metric that allows you to measure the total number of predictions a model gets right.
Reasoning: The accuracy of the model and performance of the model is directly proportional, and hence better the performance of the model, the more accurate are the predictions.

Choose the correct option:
(a) Both A and R are true and R is the correct explanation for A
(b) Both A and R are true and R is not the correct explanation for A
(c) A is True but R is False
(d) A is false but R is True

Answer: (b) Both A and R are true and R is not the correct explanation for A

Q7. Assertion: The sum of the values in a confusion matrix’s row represents the total number of instances for a given actual class.
Reasoning: This enables the calculation of class-specific metrics such as precision and recall, which are essential for evaluating a model’s performance across different classes.

Choose the correct option:
(a) Both A and R are true and R is the correct explanation for A
(b) Both A and R are true and R is not the correct explanation for A
(c) A is True but R is False
(d) A is false but R is True

Answer: (a) Both A and R are true and R is the correct explanation for A

Case study-based questions:

Q8. Identify which metric (Precision or Recall) is to be used in the following cases and why?

a) Email Spam Detection

Answer: Precision. In spam detection, a false positive (a legitimate email wrongly marked as spam) means the user may miss an important message, so false positives are costly. Precision is generally used for unbalanced datasets when dealing with false positives becomes important, and the model needs to reduce the FPs as much as possible.

b) Cancer Diagnosis

Answer: Recall. Recall is the measure of our model correctly identifying True Positives: of all the patients who actually have cancer, recall tells us how many were correctly identified as having cancer. Recall is generally used for unbalanced datasets when dealing with false negatives becomes important, and the model needs to reduce the FNs as much as possible.

c) Legal Cases(Innocent until proven guilty)

Answer: Precision. Precision is generally used for unbalanced datasets when dealing with false positives becomes important (here, convicting an innocent person), and the model needs to reduce the FPs as much as possible.

d) Fraud Detection

Answer: Recall. Recall is the measure of our model correctly identifying True Positives: of all the transactions that are actually fraudulent, recall tells us how many were correctly flagged. A missed fraud (false negative) is costly, so the model needs to reduce the FNs as much as possible.

e) Safe Content Filtering (like Kids YouTube)

Answer: Recall. Recall is the measure of our model correctly identifying True Positives: of all the videos that are actually unsafe, recall tells us how many were correctly filtered out. Letting unsafe content through (a false negative) is the worst outcome here, so the model needs to reduce the FNs as much as possible.

Q9. People of a village are totally dependent on the farmers for their daily food items. Farmers grow new seeds by checking the weather conditions every year. An AI model is being deployed in the village which predicts the chances of heavy rain to alert farmers which helps them in doing the farming at the right time. Which evaluation parameter out of precision, recall and F1 Score is best to evaluate the performance of this AI model? Explain.

Answer: Let us consider each metric in turn. If only precision is considered, FN cases are ignored: if the model predicts no heavy rain but rain occurs, there will be a big monetary loss due to damage to crops.

If only recall is considered, FP cases are ignored. This situation also causes a big loss: since all the people of the village depend on the farmers for food, if the model predicts heavy rain and the farmers do not grow crops, the basic needs of the people are affected.

Hence the F1 score, which balances precision and recall, is the best-suited parameter to evaluate this AI model.
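As a sketch of how the F1 score balances the two metrics, the snippet below uses hypothetical confusion-matrix counts (the numbers are illustrative, not part of the question):

```python
# Hypothetical counts for the rain-prediction model.
TP, FP, FN = 40, 10, 20

precision = TP / (TP + FP)  # 40/50 = 0.8   (penalises false alarms)
recall = TP / (TP + FN)     # 40/60 ≈ 0.667 (penalises missed rains)
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.8 0.667 0.727
```

The F1 score sits between precision and recall and drops sharply if either one is poor, which is why it suits cases where both FP and FN are costly.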

Q10. What is a confusion matrix? What is it used for?

Answer: A confusion matrix is used to store the results of comparing the model's predictions against reality. From the confusion matrix, we can calculate parameters like recall, precision and F1 score, which are used to evaluate the performance of an AI model.
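The four cells of a 2×2 confusion matrix can be counted directly from predictions versus reality. The lists below are made-up examples for illustration:

```python
# 1 = positive class, 0 = negative class.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 0, 0, 1, 0, 1, 1, 0]

pairs = list(zip(actual, predicted))
TP = sum(a == 1 and p == 1 for a, p in pairs)  # predicted 1, actually 1
TN = sum(a == 0 and p == 0 for a, p in pairs)  # predicted 0, actually 0
FP = sum(a == 0 and p == 1 for a, p in pairs)  # predicted 1, actually 0
FN = sum(a == 1 and p == 0 for a, p in pairs)  # predicted 0, actually 1
print(TP, TN, FP, FN)  # 3 3 1 1
```

Precision, recall and F1 are then computed from these four counts.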

Q11. What should be the value of F1 score if the model needs to have 100% accuracy?

Answer: The model will have an F1 score of 1 if it is 100% accurate, since perfect accuracy means there are no false positives or false negatives, so precision = recall = 1.

Q12. Why should we avoid using the training data for evaluation?

Answer: Because the model may simply memorise the whole training set and predict the correct label for every training point, evaluating on training data gives a misleadingly high score that says nothing about performance on new, unseen data.

Q13. What is a corpus?

Answer: A corpus is the term used to describe the whole textual data from all the documents taken together.

Q14. What is F1 Score in Evaluation?

Answer: F1 score can be defined as the measure of balance between precision and recall.


Q15. Examine the following case studies. Draw the confusion matrix and calculate metrics such as accuracy, precision, recall, and F1-score for each one of them.

a. Case Study 1:

A spam email detection system is used to classify emails as either spam (1) or not spam (0). Out of 1000 emails:

  • True Positives (TP): 150 emails were correctly classified as spam.
  • False Positives (FP): 50 emails were incorrectly classified as spam.
  • True Negatives (TN): 750 emails were correctly classified as not spam.
  • False Negatives (FN): 50 emails were incorrectly classified as not spam.

Answer:


Accuracy=(TP+TN) / (TP+TN+FP+FN)
=(150+750)/(150+750+50+50)
=900/1000
=0.90

Precision=TP/(TP+FP)
=150/(150+50)
=150/200
=0.75

Recall=TP/(TP+FN)
=150/(150+50)
=150/200
=0.75

F1 Score = 2 * Precision * Recall / ( Precision + Recall )
=2 * 0.75 * 0.75 / (0.75+0.75)
=0.75
=75%
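The arithmetic above can be checked with a short Python sketch using the same TP/FP/TN/FN counts as the case study:

```python
# Counts from Case Study 1 (spam detection).
TP, FP, TN, FN = 150, 50, 750, 50

accuracy = (TP + TN) / (TP + TN + FP + FN)  # 900/1000
precision = TP / (TP + FP)                  # 150/200
recall = TP / (TP + FN)                     # 150/200
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, f1)  # 0.9 0.75 0.75 0.75
```

Substituting the counts from the other case studies into the same four lines reproduces their metrics as well.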

b. Case Study 2:

A credit scoring model is used to predict whether an applicant is likely to default on a loan (1) or not (0). Out of 1000 loan applicants:

  • True Positives(TP): 90 applicants were correctly predicted to default on the loan.
  • False Positives(FP): 40 applicants were incorrectly predicted to default on the loan.
  • True Negatives(TN): 820 applicants were correctly predicted not to default on the loan.
  • False Negatives (FN): 50 applicants were incorrectly predicted not to default on the loan.
    Calculate metrics such as accuracy, precision, recall, and F1-score.

Answer:


Accuracy=(TP+TN) / (TP+TN+FP+FN)
=(90+820)/(90+820+40+50)
=910/1000
=0.91

Precision=TP/(TP+FP)
=90/(90+40)
=90/130
=0.692

Recall=TP/(TP+FN)
=90/(90+50)
=90/140
=0.643

F1 Score = 2 * Precision * Recall / ( Precision + Recall )
=2 * 0.692 * 0.643 / (0.692+0.643)
=0.667
=66.7%

c. Case Study 3:

A fraud detection system is used to identify fraudulent transactions(1) from legitimate ones (0). Out of 1000 transactions:

  • True Positives (TP): 80 transactions were correctly identified as fraudulent.
  • False Positives (FP): 30 transactions were incorrectly identified as fraudulent.
  • True Negatives (TN): 850 transactions were correctly identified as legitimate.
  • False Negatives (FN): 40 transactions were incorrectly identified as legitimate.

Answer:


Accuracy=(TP+TN) / (TP+TN+FP+FN)
=(80+850)/(80+850+30+40)
=930/1000
=0.93

Precision=TP/(TP+FP)
=80/(80+30)
=80/110
=0.727

Recall=TP/(TP+FN)
=80/(80+40)
=80/120
=0.667

F1 Score = 2 * Precision * Recall / ( Precision + Recall )
=2 * 0.727 * 0.667 / (0.727+0.667)
=0.696
=69.6%

d. Case Study 4:

A medical diagnosis system is used to classify patients as having a certain disease (1) or not having it (0). Out of 1000 patients:

  • True Positives(TP): 120 patients were correctly diagnosed with the disease.
  • False Positives(FP): 20 patients were incorrectly diagnosed with the disease.
  • True Negatives(TN): 800 patients were correctly diagnosed as not having the disease.
  • False Negatives(FN): 60 patients were incorrectly diagnosed as not having the disease.

Answer:


Accuracy=(TP+TN) / (TP+TN+FP+FN)
=(120+800)/(120+800+20+60)
=920/1000
=0.92

Precision=TP/(TP+FP)
=120/(120+20)
=120/140
=0.857

Recall=TP/(TP+FN)
=120/(120+60)
=120/180
=0.667

F1 Score = 2 * Precision * Recall / ( Precision + Recall )
=2 * 0.857 * 0.667 / (0.857+0.667)
=0.75
=75%

e. Case Study 5:

An inventory management system is used to predict whether a product will be out of stock (1) or not (0) in the next month. Out of 1000 products:

  • True Positives (TP): 100 products were correctly predicted to be out of stock.
  • False Positives (FP): 50 products were incorrectly predicted to be out of stock.
  • True Negatives(TN): 800 products were correctly predicted not to be out of stock.
  • False Negatives(FN): 50 products were incorrectly predicted not to be out of stock.

Answer:


Accuracy=(TP+TN) / (TP+TN+FP+FN)
=(100+800)/(100+800+50+50)
=900/1000
=0.90

Precision=TP/(TP+FP)
=100/(100+50)
=100/150
=0.667

Recall=TP/(TP+FN)
=100/(100+50)
=100/150
=0.667

F1 Score = 2 * Precision * Recall / ( Precision + Recall )
=2 * 0.667 * 0.667 / (0.667+0.667)
=0.667
=66.7%

Disclaimer: We have taken effort to provide you with an accurate handout of "Evaluating Models Class 10 Questions and Answers". If you feel that there is any error or mistake, please contact me at anuraganand2017@gmail.com. The above CBSE study material present on our website is for educational purposes only; we do not hold its copyright. All the above content and screenshots are taken from the Artificial Intelligence Class 10 CBSE Textbook, Sample Papers, Old Sample Papers, Board Papers and Support Material available on the CBSEACADEMIC website. The Textbook and Support Material are legally copyrighted by the Central Board of Secondary Education. We are only providing a medium to help students improve their performance in the examination.

Images and content shown above are the property of individual organizations and are used here for reference purposes only.

For more information, refer to the official CBSE textbooks available at cbseacademic.nic.in
