Generative AI Class 12 Notes
Introduction to Generative AI
Generative AI is a subset of artificial intelligence that creates new content such as text, images, audio, and video. It uses machine learning algorithms to learn from existing datasets and generate similar content. Generative AI is used in many fields, such as education, art and design, film and animation, music, gaming, and healthcare. However, the technology also brings challenges: it can be used to create fake images, misuse content, and so on. Examples of generative AI tools include ChatGPT, Gemini, Copilot, and DALL-E.
Working of Generative AI
Generative AI uses neural networks to create new content based on existing data: it can generate text, images, audio, and video by learning from previous data. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) are the most popular models in the field. Both are powerful models that play an important role in generating text, images, audio, and video.
1. Generative Adversarial Networks (GANs): A Generative Adversarial Network is a deep learning architecture that consists of two neural networks, a generator and a discriminator, which compete against each other to produce increasingly authentic data based on a training dataset. The generator creates new data samples such as images or text, while the discriminator learns to differentiate between real and fake data. GANs are applied in domains such as image generation, style transfer, and data augmentation (a minimal code sketch follows this list).


2. Variational Autoencoders (VAEs): Variational autoencoders are machine learning models designed to generate new data; they can, for example, improve image quality or help create an image from text. A VAE consists of two parts: an encoder and a decoder. The encoder compresses the data into a hidden, lower-dimensional representation called the latent space, and the decoder translates the information back from this latent space into its original form. VAEs are used in applications such as data generation, anomaly detection, and filling in missing information.
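To make the generator-versus-discriminator interplay described in point 1 concrete, here is a minimal training-loop sketch in PyTorch. It is an illustrative toy, not a production recipe: the layer sizes, noise dimension, optimiser settings, and the stand-in "real" data are all assumptions chosen for brevity.

```python
# Minimal GAN training sketch (illustrative only; sizes and data are assumptions).
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 16, 2   # toy sizes for a 2-D example dataset

# Generator: maps random noise to a fake data sample.
generator = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(),
                          nn.Linear(32, DATA_DIM))

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, DATA_DIM) * 0.5 + 2.0   # stand-in for a real dataset
    noise = torch.randn(64, NOISE_DIM)
    fake = generator(noise)

    # 1) Train the discriminator: label real samples 1, generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: try to make the discriminator label fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The two losses pull in opposite directions: the discriminator is rewarded for telling real from fake, while the generator is rewarded for fooling it. Real GANs for images follow the same loop but use convolutional networks and large image datasets.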
Differences between Generative AI and Discriminative AI
| Aspect | Generative AI | Discriminative AI |
|---|---|---|
| Purpose | Used to generate new content such as text, images, audio, and video. | Used to categorize data into specific groups based on existing data. |
| Models | Learn the underlying patterns in the data in order to create new, similar samples. | Learn rules or boundaries that separate and recognize patterns. |
| Training Focus | Tries to understand what makes the data unique and how to create new, similar data. | Focuses on learning how to draw boundaries between categories. |
| Applications | Creating new artwork, generating ideas for stories, finding unusual patterns in data. | Powers facial and speech recognition and decisions such as whether an email is spam. |
| Example Algorithms | Naïve Bayes, Gaussian discriminant analysis, GANs, VAEs, LLMs, DBMs, autoregressive models | Logistic Regression, Decision Trees, SVM, Random Forest |
Applications of Generative AI
- Image Generation: It involves creating new images based on patterns learned from existing datasets. These models analyse the characteristics of input images and generate new ones.
- Text Generation: Text generation is when computers write sentences that sound like people wrote them. It involves creating written content that mimics human language patterns.
- Video Generation: It involves creating new videos by learning from existing ones, including animations and visual effects. These models learn from videos to create realistic and unique visuals.
- Audio Generation: Audio generation involves computers producing new sounds, such as music or voices, based on sounds they have heard.
LLM: Large Language Model
A Large Language Model is a deep learning algorithm that can generate and classify text, answer questions, and translate text from one language to another. LLMs are called "large" because they are trained on very large datasets of text and code.

Transformers in LLMs
A transformer is a type of neural network that processes sequential data such as text (and, in some variants, images). Transformers are the foundation of LLMs, enabling efficient and effective learning of complex language patterns and relationships within vast amounts of text data.
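The core operation inside a transformer is self-attention: every token compares itself with every other token and decides how much to "attend" to it. The sketch below shows scaled dot-product attention with NumPy; the sequence length, embedding size, and random projection matrices are illustrative assumptions (real models learn these weights during training).

```python
# Scaled dot-product self-attention, the core operation of a transformer.
# Sizes and random inputs are illustrative; real models use learned weights.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

seq_len, d_model = 4, 8                  # 4 tokens, 8-dimensional embeddings (toy sizes)
x = np.random.randn(seq_len, d_model)    # token embeddings for one short sentence

# Projection matrices (random here) map embeddings to queries, keys, and values.
W_q, W_k, W_v = (np.random.randn(d_model, d_model) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Each token's query is compared with every key; the scaled scores are softmaxed
# into attention weights, which then mix the value vectors.
scores = Q @ K.T / np.sqrt(d_model)
weights = softmax(scores, axis=-1)       # shape (seq_len, seq_len)
output = weights @ V                     # context-aware representation of each token

print(weights.round(2))                  # how much each token attends to the others
```

Stacking many such attention layers with feed-forward layers, and training them on huge text corpora, is what gives LLMs their ability to capture long-range language patterns.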
Some leading Large Language Models (LLMs) are:
- OpenAI’s GPT-4o: GPT models work on machine learning principles. GPT-4o is a pretrained transformer model that can understand and generate content in the form of text and images.
- Google’s Gemini 1.5 Pro: Gemini has multimodal capabilities: it can process and analyse complex audio inputs, generate new code, suggest improvements, and more.
- Meta’s LLaMA 3.1: LLaMA is a powerful open-source large language model used for tasks such as reading handwriting and creating graphs and charts, and it is available for both research and commercial purposes.
- Anthropic’s Claude 3.5: Claude is a powerful and intelligent AI model that supports more natural and engaging dialogue; it can generate text and write code.
- Mistral AI’s Mixtral 8x7B: Mistral AI models are powerful language processing tools used in chatbots, customer support, language translation, and content creation.
Applications of LLMs:
- Text Generation: AI applications can generate meaningful text based on user input, such as dialogue generation, story writing, content creation, and poetry generation (see the code sketch after this list). Other examples include:
  - Natural language translation
  - Code writing
  - Autocompleting text and generating continuations for sentences or paragraphs
  - Email auto-completion
- Audio Generation: LLMs cannot generate audio signals directly, but they can power text-to-speech (TTS) systems, enabling them to synthesise natural-sounding speech from text inputs.
- Image Generation: LLMs can help generate new images from text prompts; they interpret visual content to produce textual descriptions and relevant images.
- Video Generation: LLMs can generate video based on a script or textual description, and they are used for generating subtitles, captions, or scene summaries for videos.
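As a concrete illustration of LLM-based text generation, the sketch below uses the Hugging Face transformers library’s text-generation pipeline. The model name "gpt2" is only an example of a small, openly available model; output from such a small model will be far less fluent than that of the large commercial LLMs listed earlier.

```python
# Minimal text-generation sketch using the Hugging Face `transformers` library.
# "gpt2" is just a small, openly available example model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is a branch of artificial intelligence that"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])   # the prompt followed by the model's continuation
```

Running this downloads the model weights the first time; swapping in a larger model generally improves output quality at the cost of memory and speed.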
Limitations of LLMs:
- LLMs are expensive and slow; they need a lot of computing power to run.
- LLMs can generate incorrect or misleading information.
- LLMs cannot adapt to new situations easily and do not truly understand the real world.
Risks associated with LLMs:
- Because LLMs learn from internet data, they can pick up harmful biases.
- LLMs can disclose a user’s personal information.
- If an LLM is trained on sensitive information, it may accidentally reveal confidential details.
Future of Generative AI
Generative AI will address complex challenges in fields like healthcare and education, enhance NLP tasks like multilingual translation, and expand in multimedia content creation. Collaboration between humans and AI will deepen, emphasizing AI’s role as a supportive partner across domains.
- Ethical and Social Implications of Generative AI: Generative AI, with its ability to create realistic content such as images, videos, and text, brings with it a multitude of ethical and social considerations. Understanding these implications is crucial for ensuring responsible development and deployment.
- Deepfake Technology: The emergence of deepfake AI technology, such as DeepFaceLab and FaceSwap, raises concerns about the authenticity of digital content. Deepfake algorithms can generate convincing fake images, audio, and videos. Example: deepfake AI tools, such as DeepArt’s style transfer algorithms, can seamlessly manipulate visual content, creating deceptive and misleading media.
- Bias and Discrimination: Generative AI models, exemplified by Clearview AI’s facial recognition algorithms, have demonstrated biases that disproportionately affect certain demographic groups. Example: the AI-powered hiring platform developed by HireVue has faced criticism for perpetuating recruitment bias.
- Plagiarism: Presenting AI-generated content as one’s own work, whether intentionally or unintentionally, raises ethical questions about intellectual property rights and academic integrity.
- Transparency: Transparency in the use of generative AI is paramount to maintaining trust and accountability. Disclosing the use of AI-generated content, particularly in academic and professional settings, is essential to uphold ethical standards and prevent instances of academic dishonesty.
Points to Remember:
- Be cautious and transparent when using generative AI.
- Respect copyright and avoid presenting AI output as your own.
- Consult your teacher/institution for specific guidelines.
Citing Sources with Generative AI:
- Intellectual Property: Ensure proper attribution for AI-generated content to respect original creators and comply with copyright laws.
- Accuracy: Verify the reliability of AI-generated information and cite primary data sources whenever possible to maintain credibility.
- Ethical Use: Acknowledge AI tools and provide context for generated content to promote transparency and ethical use.
Citation Example:
- Treat the AI as author: Cite the tool name (e.g., Bard) & “Generative AI tool” in the author spot.
- Date it right: Use the date you received the AI-generated content, not any tool release date.
- Show your prompt: Briefly mention the prompt you gave the AI for reference (optional).