Microsoft AI Fundamentals AI-900 Exam Prep

AI Certification Exam Prep — Beginner

Pass AI-900 with clear Azure AI fundamentals and mock practice

Beginner ai-900 · microsoft · azure ai fundamentals · azure

Prepare for Microsoft AI-900 with a clear beginner roadmap

Microsoft AI-900: Azure AI Fundamentals is one of the most accessible ways to enter the world of artificial intelligence certification. It is designed for learners who want to understand core AI concepts, Azure AI services, and practical business use cases without needing a deep technical background. This course blueprint is built specifically for non-technical professionals preparing for Microsoft's AI-900 exam, and it follows the official exam domains in a structured, confidence-building format.

If you are new to certification exams, this course begins with the essentials: what the AI-900 exam covers, how registration works, what to expect from Microsoft question styles, and how to create a realistic study plan. From there, the course moves through the exam objectives in a logical sequence so you can build understanding step by step instead of memorizing disconnected facts.

Aligned to the official AI-900 exam domains

The course structure maps directly to the official AI-900 objective areas listed by Microsoft:

  • Describe Artificial Intelligence workloads and considerations
  • Describe fundamental principles of machine learning on Azure
  • Describe features of computer vision workloads on Azure
  • Describe features of Natural Language Processing (NLP) workloads on Azure
  • Describe features of generative AI workloads on Azure

Each chapter is designed to help you understand what Microsoft expects you to know at the fundamentals level. Rather than assuming prior cloud or AI expertise, the lessons explain concepts in plain language and connect them to realistic business and workplace scenarios. This makes the material easier to remember and much easier to apply during the exam.

What makes this course effective for non-technical professionals

Many beginners struggle not because the AI-900 exam is too advanced, but because technical terminology can feel overwhelming at first. This course addresses that challenge by using a guided six-chapter book format. Chapter 1 introduces the exam experience, scoring, scheduling, and study strategy. Chapters 2 through 5 cover the official domains in depth, combining conceptual understanding with service recognition and scenario-based reasoning. Chapter 6 brings everything together with a full mock exam and final review.

Throughout the course, you will learn how to distinguish AI workloads, identify which Azure AI services fit common use cases, and recognize the exam language Microsoft uses to test foundational understanding. You will also review responsible AI principles, machine learning basics, computer vision use cases, natural language processing scenarios, and generative AI concepts including prompts, copilots, and Azure OpenAI Service at a fundamentals level.

Practice in the style of the real exam

Success on AI-900 requires more than reading definitions. You must be able to interpret short scenarios, eliminate distractors, and choose the best answer based on what the question is actually testing. That is why this course includes exam-style practice at the chapter level and finishes with a full mock exam chapter. These activities help you identify weak areas early, reinforce official objectives, and build exam-day confidence.

You will practice skills such as:

  • Matching workloads to Azure AI services
  • Recognizing machine learning concepts like regression, classification, and clustering
  • Understanding image, text, speech, and document processing use cases
  • Identifying responsible AI principles in business scenarios
  • Explaining generative AI workloads and Azure OpenAI fundamentals

A practical path to exam readiness

By the end of this course, you will have a complete blueprint for preparing effectively for the Microsoft AI-900 exam. Whether you are exploring AI for your career, supporting cloud projects, or adding a recognized Microsoft credential to your profile, this training is designed to help you study efficiently and avoid common beginner mistakes.

Ready to start your certification journey? Register free to begin building your AI-900 study plan, or browse all courses to explore more certification prep options on Edu AI.

With focused domain coverage, clear explanations, and realistic practice, this course gives you a strong foundation for passing Microsoft Azure AI Fundamentals and understanding the core ideas behind modern AI on Azure.

What You Will Learn

  • Describe AI workloads and considerations for responsible AI aligned to the AI-900 exam domain
  • Explain fundamental principles of machine learning on Azure, including common ML concepts and Azure ML capabilities
  • Identify computer vision workloads on Azure and choose the right Azure AI services for image, video, OCR, and face-related scenarios
  • Describe natural language processing workloads on Azure, including text analytics, language understanding, translation, and speech solutions
  • Explain generative AI workloads on Azure, including copilots, prompt concepts, foundation models, and Azure OpenAI Service use cases
  • Apply exam-ready reasoning to AI-900 question formats through chapter quizzes, scenario review, and a full mock exam

Requirements

  • Basic IT literacy and comfort using web applications
  • No prior certification experience is needed
  • No programming background is required
  • Interest in Microsoft Azure and AI concepts for business or career growth
  • A stable internet connection for course access and practice exams

Chapter 1: AI-900 Exam Orientation and Winning Study Plan

  • Understand the AI-900 exam format and objectives
  • Plan registration, scheduling, and exam logistics
  • Build a beginner-friendly study strategy
  • Set up for practice, review, and confidence building

Chapter 2: Describe AI Workloads and Responsible AI

  • Recognize core AI workloads and business scenarios
  • Differentiate machine learning, computer vision, NLP, and generative AI
  • Explain responsible AI principles in practical terms
  • Answer AI workload questions in AI-900 exam style

Chapter 3: Fundamental Principles of Machine Learning on Azure

  • Understand machine learning fundamentals without coding
  • Compare supervised, unsupervised, and reinforcement learning
  • Explore Azure machine learning capabilities and model lifecycle basics
  • Practice AI-900 questions on ML principles and Azure tools

Chapter 4: Computer Vision and NLP Workloads on Azure

  • Identify key computer vision workloads and Azure services
  • Understand NLP workloads for text, speech, and translation
  • Match business needs to Azure AI Vision and Azure AI Language solutions
  • Master mixed-domain practice questions for vision and NLP

Chapter 5: Generative AI Workloads on Azure

  • Understand generative AI concepts in beginner-friendly language
  • Explore copilots, prompts, and foundation model use cases
  • Learn Azure OpenAI Service basics and responsible use
  • Practice AI-900 questions on generative AI workloads

Chapter 6: Full Mock Exam and Final Review

  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist

Daniel Mercer

Microsoft Certified Trainer specializing in Azure AI

Daniel Mercer is a Microsoft Certified Trainer with extensive experience teaching Azure AI and fundamentals-level certification pathways. He has coached beginners and business professionals through Microsoft exam preparation with a focus on practical understanding, exam confidence, and clear objective-by-objective instruction.

Chapter 1: AI-900 Exam Orientation and Winning Study Plan

The AI-900: Microsoft Azure AI Fundamentals exam is designed as an entry-level certification, but candidates should not mistake “fundamentals” for “easy.” Microsoft expects you to recognize core AI workloads, understand responsible AI principles, distinguish between major Azure AI services, and apply basic exam reasoning to short business scenarios. This chapter gives you the orientation you need before diving into technical content. A strong start matters because many candidates do not fail due to lack of intelligence; they fail because they study the wrong topics, underestimate wording traps, or arrive at the exam without a strategy.

This course is built around the AI-900 exam objectives and the practical thinking the test rewards. Across later chapters, you will learn how to describe AI workloads and responsible AI considerations, explain machine learning fundamentals on Azure, identify computer vision and natural language processing scenarios, and understand generative AI concepts such as copilots, prompts, foundation models, and Azure OpenAI Service use cases. In this opening chapter, however, the goal is not deep technical mastery. The goal is exam orientation: what the certification validates, how the exam is delivered, how registration and logistics work, how to build a study plan, and how to approach practice and review with confidence.

AI-900 questions often look simple on the surface, but the exam rewards precision. You may be asked to identify the most appropriate Azure AI service for a scenario, recognize whether a statement describes machine learning or conversational AI, or distinguish between a general AI concept and a specific Azure product capability. To succeed, you need both conceptual clarity and disciplined answer selection. That is why this chapter emphasizes common exam traps, elimination strategies, and a beginner-friendly plan you can actually follow.

Think of this chapter as your map and checklist. By the end, you should know what Microsoft is testing, how this course aligns to the official domains, how to schedule your exam intelligently, and how to prepare in a way that builds confidence instead of panic. If you study with structure, practice with intent, and review weak areas honestly, AI-900 becomes a very manageable certification milestone.

  • Understand what AI-900 validates and what it does not.
  • Learn the exam structure, question styles, and realistic passing mindset.
  • Prepare for registration, ID checks, test center or online delivery, and rescheduling rules.
  • See how the official exam domains map to this course’s lessons and outcomes.
  • Create a study system with notes, repetition, and short review cycles.
  • Use practice questions effectively, eliminate distractors, and manage time under pressure.

Exam Tip: The AI-900 exam is broad rather than deep. Candidates often lose points not because the content is advanced, but because they confuse similar services or overlook a keyword in the scenario. Train yourself to read for intent: what business need is being described, and which Azure AI capability directly fits that need?

Approach the rest of the course as a sequence of exam domains, not disconnected lessons. Every topic you study should answer three questions: What concept is Microsoft testing? How is it likely to appear in a scenario? What clues help me identify the right answer quickly? Keeping those questions in mind from Chapter 1 onward will make your preparation more efficient and much more exam-focused.

Practice note for this chapter's milestones (understanding the exam format and objectives; planning registration, scheduling, and logistics; building a beginner-friendly study strategy): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 1.1: What the AI-900 Azure AI Fundamentals certification validates
Section 1.2: Microsoft exam structure, question types, scoring, and passing mindset
Section 1.3: Registration process, testing options, identification rules, and rescheduling basics
Section 1.4: Official exam domains overview and how this course maps to them
Section 1.5: Study planning for beginners with note-taking and review cycles
Section 1.6: How to use practice questions, eliminate distractors, and manage exam time

Section 1.1: What the AI-900 Azure AI Fundamentals certification validates

AI-900 validates foundational understanding of artificial intelligence concepts and the Azure services that support common AI workloads. It is not a role-based expert exam, and Microsoft does not expect you to build production-grade models or engineer full AI solutions from scratch. Instead, the certification confirms that you can identify major AI workload categories, recognize when Azure offers a service for a given scenario, and understand basic responsible AI principles. This matters because the exam is intended for a broad audience: students, business users, technical beginners, and professionals entering cloud AI topics for the first time.

From an exam perspective, “fundamentals” means you should be able to describe what machine learning is, what computer vision can do, how natural language processing differs from speech, and where generative AI fits in modern Azure offerings. You should also know that responsible AI is not an optional side note. Microsoft consistently emphasizes fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. If an answer choice conflicts with these principles, it is usually a poor choice.

The certification validates recognition more than configuration. For example, you may need to know that image classification, object detection, OCR, translation, sentiment analysis, speech-to-text, or chatbot scenarios map to specific categories of Azure AI capability. You are less likely to be tested on deep implementation details such as advanced SDK code or fine-tuning procedures. The exam is checking whether you can think like an informed practitioner or stakeholder who understands the AI landscape on Azure.

Exam Tip: A common trap is overthinking the level of detail. If two answer options seem technical, step back and ask which one aligns with the business task named in the question. AI-900 usually rewards the most direct service-to-scenario match, not the most advanced-sounding technology.

Another important point is what AI-900 does not validate. It does not prove deep machine learning engineering skill, advanced data science expertise, or solution architect-level Azure design. Candidates sometimes import assumptions from higher-level certifications and make AI-900 questions harder than they are. Stay at the fundamentals level. Know the purpose of services, the meaning of common AI terminology, and the basic decision logic for selecting the correct tool.

As you move through this course, anchor every lesson to this core objective: identify workloads, understand principles, and choose appropriately in context. That is the heart of what AI-900 validates.

Section 1.2: Microsoft exam structure, question types, scoring, and passing mindset

Microsoft certification exams can vary slightly over time, so you should always expect some flexibility in exact question count and presentation. For AI-900, candidates commonly encounter a mix of standard multiple-choice items, multiple-response items, drag-and-drop style matching, and short scenario-based questions. Some items are straightforward definition checks, while others ask you to apply concepts to a business use case. The exam may also include unscored questions used by Microsoft to evaluate future content, which means you should treat every question seriously because you will not know which ones count.

The passing score is reported on a scale of 1 to 1000, with 700 required to pass. That does not mean 70 percent correct in a simple one-to-one way. Microsoft uses scaled scoring, and question difficulty may vary. The key lesson is psychological: do not try to calculate your score during the exam. Focus on answering each item accurately and consistently. Many candidates damage their performance by panicking after a few difficult questions and assuming they are failing.

Question wording matters. Microsoft often tests whether you can distinguish between similar but not identical concepts. Words such as “best,” “most appropriate,” “should,” or “can” are meaningful. In AI-900, one answer may be technically possible, but another is the intended Azure-native service for the exact scenario. The exam rewards precision rather than broad plausibility.

Exam Tip: If a question asks for the best service, do not choose an answer simply because it could work with extra customization. Prefer the option designed specifically for the task described.

Build a passing mindset around calm pattern recognition. Your job is not to prove mastery of every AI topic in the world. Your job is to identify what domain the question belongs to, spot the scenario clues, eliminate wrong options, and choose the strongest remaining answer. This course will help you build that habit repeatedly.

Common traps include reading too quickly, overlooking qualifiers, and confusing product families with workload categories. Another trap is assuming that all AI questions are about machine learning models. In reality, many AI-900 items are about selecting managed Azure AI services for vision, language, speech, search, or generative AI use cases. Do not let the broad term “AI” push you toward unnecessarily technical thinking.

Finally, remember that fundamentals exams are confidence exams as much as knowledge exams. If you understand the domains, use process-of-elimination, and avoid second-guessing yourself without evidence, you give yourself an excellent chance to pass.

Section 1.3: Registration process, testing options, identification rules, and rescheduling basics

Before you can pass AI-900, you need a clean registration and test-day experience. Microsoft certification exams are typically scheduled through Microsoft’s certification dashboard with an authorized delivery provider. You will choose the AI-900 exam, select your preferred language if available, and then decide whether to test at a physical center or use online proctoring. Your choice should match your environment and stress profile. Some candidates perform better at a quiet test center; others prefer the convenience of testing from home or office.

If you choose online delivery, prepare your space carefully. You will usually need a reliable internet connection, a working webcam and microphone, and a clean desk area that complies with exam security requirements. System checks should be completed well before exam day. If your environment is unstable, noisy, or cluttered, online testing can become a distraction. Do not assume convenience equals lower stress.

Identification rules matter. Your registration name should match your government-issued identification as required by the testing provider. Mismatches in spelling, missing middle names when required, or expired identification can create major problems at check-in. Review the latest ID policy in advance rather than guessing. Candidates sometimes study for weeks and then lose their exam appointment over preventable identity issues.

Exam Tip: Schedule your exam only after you have checked your legal name in the certification profile, confirmed your identification documents, and verified whether your preferred testing option has any special requirements.

Rescheduling and cancellation rules can change, but there is usually a defined window in which changes are allowed without major penalty. Read the current policy at the time you book. Do not assume you can move your exam at the last minute. A smart strategy is to schedule early enough to create commitment, but not so early that you set yourself up for avoidable pressure. Many beginners do well by booking a date two to four weeks after beginning focused study, then adjusting only if practice results clearly show they are not ready.

Also plan the practical details: time zone, arrival or check-in time, confirmation emails, and contingency for technical issues. Exam success begins before the first question appears. Smooth logistics protect your concentration and reduce anxiety, which directly improves performance.

Section 1.4: Official exam domains overview and how this course maps to them

The AI-900 exam blueprint is organized around major Azure AI knowledge domains. Although Microsoft can update objective wording over time, the core tested areas are stable: AI workloads and responsible AI principles, machine learning fundamentals on Azure, computer vision workloads, natural language processing workloads, and generative AI workloads on Azure. Understanding this structure is essential because it tells you what to study, how to categorize questions, and where your weak points are if practice performance is uneven.

This course maps directly to those domains. First, you will learn to describe AI workloads and responsible AI considerations aligned to the exam. This includes recognizing the difference between common AI solution types and understanding Microsoft’s responsible AI principles. Next, you will study machine learning fundamentals, including common ML concepts and Azure Machine Learning capabilities at a foundational level. You will then move into computer vision, where the focus is choosing suitable Azure AI services for image analysis, video understanding, OCR, and face-related scenarios as permitted by current service guidance.

After vision, the course covers natural language processing. Here, you will study workloads such as text analytics, language understanding, translation, and speech solutions. Finally, the course addresses generative AI, including copilots, prompt concepts, foundation models, and Azure OpenAI Service use cases. These are increasingly important on the exam because Microsoft expects candidates to recognize what generative AI is and how it differs from more traditional predictive or analytical AI tasks.

Exam Tip: When a question feels vague, classify it first by domain. Is this about machine learning, vision, language, speech, or generative AI? Once you identify the domain, the correct answer becomes easier to spot because the plausible service choices narrow quickly.

The official domains are not just a list; they are your study framework. If you finish a lesson and cannot explain where it fits on the blueprint, your understanding is not yet exam-ready. Organize your notes by domain, not by random page order. This makes review faster and helps you notice repeated patterns in Microsoft’s question style.

Throughout this book, chapter quizzes, scenario reviews, and the full mock exam are designed to reinforce these same domains. That means your practice is not generic. It is aligned to the objectives Microsoft is actually testing.

Section 1.5: Study planning for beginners with note-taking and review cycles

If you are new to AI or Azure, the best study plan is simple, consistent, and structured around repetition. Beginners often make one of two mistakes: trying to memorize everything in one long session, or jumping between videos, documentation, flashcards, and practice questions without a system. A better method is to study one domain at a time, make short and organized notes, and revisit material in planned review cycles. This course is written to support exactly that kind of progression.

Start by setting a realistic timeline. For many candidates, seven to twenty-one days of focused preparation is enough, depending on background and available study time. Break your plan into sessions of manageable length. For example, you might study one primary topic per day, spend part of the next day reviewing it, and then test yourself briefly before moving on. This keeps earlier content active while adding new material. Your goal is not just exposure; it is retention.

Use note-taking strategically. Do not copy paragraphs from Microsoft documentation. Instead, create compact notes with headings such as workload, purpose, common clues, likely distractors, and related services. If you can write a one- or two-line distinction between similar services, you are studying in an exam-smart way. For example, your notes should help you quickly remember why a service is used, not just what its marketing description says.

Exam Tip: The best notes for AI-900 are comparison notes. Microsoft often tests whether you can distinguish one service or concept from another, so your review sheets should emphasize differences, triggers, and boundaries.

Build review cycles into your plan. A strong pattern is same-day recap, next-day review, and end-of-week consolidation. During recap, summarize the lesson in your own words. During next-day review, revisit the key distinctions and any confusing points. At the end of the week, scan all domains covered so far and identify weak areas. This process prevents the common beginner problem of understanding a topic once and forgetting it by exam day.
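As an illustration only, the same-day / next-day / end-of-week pattern above can be sketched as a small script. The function name and the seven-day consolidation interval are assumptions chosen for this example, not part of any official study guidance:

```python
from datetime import date, timedelta

def review_dates(study_day: date) -> dict:
    """Return the three review checkpoints for a topic studied on study_day.

    Follows the pattern described above: same-day recap, next-day review,
    and an end-of-week consolidation (modeled here as 7 days later).
    """
    return {
        "same_day_recap": study_day,
        "next_day_review": study_day + timedelta(days=1),
        "weekly_consolidation": study_day + timedelta(days=7),
    }

# Example: a topic studied on 1 March gets reviewed on 2 March and 8 March.
plan = review_dates(date(2025, 3, 1))
print(plan["next_day_review"])        # 2025-03-02
print(plan["weekly_consolidation"])   # 2025-03-08
```

Generating these dates up front for each study session turns the review cycle into a checklist rather than a vague intention.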

Confidence grows from evidence. As you progress, track what you can explain without looking at notes. If you cannot describe a concept simply, you likely do not understand it well enough for scenario questions. Study to the point of recognition plus explanation. That combination is what makes fundamentals knowledge durable and test-ready.

Section 1.6: How to use practice questions, eliminate distractors, and manage exam time

Practice questions are most useful when they train reasoning, not just memory. Many candidates misuse them by chasing a high score through repetition alone. That creates false confidence because they remember answer patterns instead of understanding why an answer is right. For AI-900, every practice set should be treated as a decision-making drill. After each question, ask yourself what clues pointed to the correct answer, why the distractors were wrong, and which domain the question belonged to. This transforms practice into exam skill.

Eliminating distractors is one of the most powerful tactics on a fundamentals exam. Wrong answers are often attractive because they sound modern, broad, or technically impressive. Your task is to reject answers that are too general, too advanced, or mismatched to the specific workload. If the scenario is about extracting printed or handwritten text from images, you should think OCR-related capability, not generic machine learning. If the scenario is about detecting sentiment in customer reviews, that points toward language analysis rather than translation or speech.

Exam Tip: Underline or mentally isolate the workload verb in the scenario: classify, detect, translate, transcribe, analyze, generate, recognize, extract. That single verb often reveals the correct service family.
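To make that habit concrete, the verb-to-family association can be written out as a revision aid. This mapping is a simplified study heuristic assumed for this example, not an official Microsoft taxonomy; real questions require the full scenario context:

```python
# Simplified study aid: map the workload verb in a scenario to the Azure AI
# service family it usually signals. A revision heuristic only, not an
# official Microsoft mapping.
VERB_TO_FAMILY = {
    "classify": "machine learning or vision (classification)",
    "detect": "vision (object detection) or anomaly detection",
    "translate": "Azure AI Translator (language)",
    "transcribe": "Azure AI Speech (speech to text)",
    "extract": "Azure AI Vision OCR or Document Intelligence",
    "generate": "generative AI (Azure OpenAI Service)",
}

def service_family(verb: str) -> str:
    """Look up the service family hinted at by a workload verb."""
    return VERB_TO_FAMILY.get(verb.lower(), "re-read the scenario for more clues")

print(service_family("Transcribe"))  # Azure AI Speech (speech to text)
```

Building and extending a table like this in your own notes is a fast way to internalize the verb-spotting habit before exam day.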

Time management also matters, even on an entry-level exam. Move steadily. If a question is unclear, eliminate what you can, choose the best current answer, and mark it for review if the interface allows. Do not spend excessive time wrestling with one item early in the exam. The opportunity cost is too high, and later questions may be easier points. Good candidates maintain momentum and return with a calmer mind if time remains.

Avoid the trap of changing answers without a clear reason. Your first answer is not always right, but random second-guessing usually hurts more than it helps. Change an answer only when you identify a specific misread keyword, remember a relevant concept, or realize a better service-to-scenario fit. Evidence-based correction is good; anxiety-based switching is dangerous.

Finally, use practice exams near the end of your preparation to simulate pacing and confidence. Review not just what you missed, but also what you answered correctly for the wrong reason. That distinction matters. Exam readiness means you can consistently identify the right answer because you understand the concept and can defeat the distractors.

Chapter milestones
  • Understand the AI-900 exam format and objectives
  • Plan registration, scheduling, and exam logistics
  • Build a beginner-friendly study strategy
  • Set up for practice, review, and confidence building
Chapter quiz

1. You are beginning preparation for the Microsoft AI-900 exam. Which study approach best aligns with the purpose and coverage of this certification?

Correct answer: Focus on broad understanding of AI workloads, responsible AI principles, and Azure AI service selection rather than deep implementation details
AI-900 is a fundamentals exam that validates recognition of core AI workloads, responsible AI concepts, and major Azure AI services across official domains. Option A matches that scope. Option B is incorrect because AI-900 is not primarily a coding or developer-implementation exam. Option C is incorrect because the exam remains broad and includes machine learning, computer vision, NLP, conversational AI, and responsible AI—not just generative AI.

2. A candidate says, "Because AI-900 is an entry-level certification, I can probably pass by skimming definitions the night before." Which response is most accurate?

Correct answer: That approach is risky because the exam often tests precise distinctions between similar concepts and services in short scenarios
The chapter emphasizes that 'fundamentals' does not mean 'easy.' AI-900 commonly uses short business scenarios and wording traps that require precision, especially when distinguishing related AI services and workloads. Option B is correct. Option A is wrong because the exam does include scenario-style reasoning. Option C is wrong because deep Azure administration knowledge is not the deciding factor for AI-900 success; focused understanding of the exam objectives is more relevant.

3. A company wants an exam preparation plan that reduces stress and increases the chance of passing AI-900 on the first attempt. Which plan is the most appropriate?

Correct answer: Map study sessions to the official exam domains, review weak areas in short cycles, and use practice questions to learn elimination strategies
The chapter recommends treating the course as a sequence of exam domains, building a structured study system, reviewing weak areas honestly, and using practice to improve answer selection and time management. Therefore, Option B is correct. Option A is wrong because random study and delayed practice reduce exam readiness and confidence. Option C is wrong because marketing language does not reliably map to the objective-based reasoning tested on AI-900.

4. You are taking a practice test for AI-900. A scenario asks for the most appropriate Azure AI capability, and two answer choices seem similar. According to the Chapter 1 guidance, what should you do first?

Correct answer: Look for the business need and keywords in the scenario, then eliminate answers that do not directly match that need
Chapter 1 stresses reading for intent: identify the business need being described and determine which Azure AI capability directly fits it. Elimination of distractors is a key exam strategy, so Option B is correct. Option A is wrong because exams do not reward choosing the most impressive-sounding service. Option C is wrong because the best exam answer is the most appropriate and direct fit, not the broadest or most feature-rich option.

5. A candidate is deciding when to register for the AI-900 exam. Which action best reflects good exam logistics planning?

Show answer
Correct answer: Plan registration and scheduling early, confirm whether the exam will be taken online or at a test center, and review ID and policy requirements in advance
The chapter explicitly highlights planning for registration, ID checks, test center versus online delivery, and rescheduling rules. Handling these details early reduces stress and prevents avoidable exam-day issues, so Option B is correct. Option A is wrong because delaying logistics can create preventable problems. Option C is wrong because while logistics are not scored content, poor exam readiness can still negatively affect performance and access to the exam.

Chapter 2: Describe AI Workloads and Responsible AI

This chapter maps directly to one of the most tested introductory areas of the AI-900 exam: recognizing AI workloads, matching them to business scenarios, and understanding responsible AI principles at a practical level. Microsoft expects candidates to identify what kind of AI problem is being described before they choose a service, model type, or solution approach. In other words, this chapter is about classification of the problem itself. If a scenario mentions predicting future sales, that points to forecasting. If it mentions extracting text from scanned forms, that points to optical character recognition. If it describes a chatbot answering user questions, that indicates conversational AI or natural language processing. The exam often rewards this first-level pattern recognition.

You should leave this chapter able to differentiate machine learning, computer vision, natural language processing, and generative AI at a high level. You should also understand where knowledge mining fits. AI-900 is not a deep implementation exam, but it does test your ability to recognize what Azure capability would make sense for a given need. The most common mistake is choosing a familiar buzzword instead of the workload that actually matches the business goal. For example, many learners choose machine learning for every predictive problem, even when the scenario is really about document search, semantic retrieval, or content generation.

Another major theme is responsible AI. Microsoft frames AI systems as tools that must be designed and used with fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability in mind. On the exam, these ideas are usually tested through short scenario statements rather than abstract philosophy. You may be asked which principle is violated if a model performs worse for one demographic group, or which principle is supported by documenting model limitations and explaining output behavior. Learn the principles in plain language and associate each one with a realistic consequence.

This chapter also supports later course outcomes. The distinctions you learn here prepare you for machine learning on Azure, computer vision services, language workloads, and generative AI. Think of this chapter as your exam framework: first identify the workload category, then reason toward the likely Azure service family, then check whether responsible AI concerns change what a correct answer should be. Exam Tip: On AI-900, the fastest route to the correct answer is often to identify the business objective first and ignore distracting implementation details.

As you read, focus on the phrases Microsoft likes to test. Forecasting is predicting numeric values over time. Anomaly detection is finding unusual patterns. Ranking is ordering results by relevance or preference. Conversational AI enables natural interactions through bots or virtual agents. Knowledge mining extracts insights from large stores of documents. Generative AI creates new content such as text, code, or images based on prompts. These terms are not interchangeable, and exam questions are designed to see whether you can distinguish them cleanly.

Finally, remember the scope of AI-900: foundational understanding, not engineering depth. You are not expected to train complex neural networks by hand or architect enterprise-scale systems. You are expected to identify what an AI workload is, recognize common use cases, understand responsible AI principles, and make sensible high-level Azure choices. If you can explain why a scenario is machine learning rather than NLP, or why transparency matters for a decision-support model, you are thinking at the right level for this exam.

Practice note for this chapter's lesson goals (recognizing core AI workloads and business scenarios, and differentiating machine learning, computer vision, NLP, and generative AI): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 2.1: Official domain focus: Describe AI workloads
Section 2.2: Common AI workloads including forecasting, anomaly detection, ranking, and conversational AI
Section 2.3: When to use machine learning versus knowledge mining versus generative AI
Section 2.4: Responsible AI concepts including fairness, reliability, privacy, inclusiveness, transparency, and accountability
Section 2.5: Microsoft Azure AI service families and choosing the right solution at a high level
Section 2.6: Exam-style scenarios and practice questions for Describe AI workloads

Section 2.1: Official domain focus: Describe AI workloads

The AI-900 objective “Describe AI workloads” sounds broad, but the exam usually tests it in a structured way. You are given a business problem and asked to recognize the category of AI involved. The key categories you should know are machine learning, computer vision, natural language processing, conversational AI, knowledge mining, and generative AI. Your job is not to design the full solution. Your job is to identify the type of problem being solved and connect that to the right Azure capability at a high level.

An AI workload is essentially a class of tasks where AI techniques help solve a business need. For example, predicting customer churn is a machine learning workload. Detecting defects in product images is a computer vision workload. Determining whether a review is positive or negative is an NLP workload. Building a virtual assistant that responds to customer questions is a conversational AI workload. Searching thousands of documents for insights is knowledge mining. Creating draft content from prompts is generative AI.

The exam often includes distractors that sound technically impressive but do not match the stated need. If the scenario is about understanding and extracting value from existing content, that is not automatically machine learning. It may be knowledge mining. If the scenario is about creating new text rather than classifying text, that points to generative AI rather than traditional NLP. Exam Tip: Look for action verbs in the scenario. Predict, classify, detect, extract, translate, rank, search, summarize, and generate each suggest different AI workloads.

Microsoft also expects you to understand that AI workloads support business scenarios such as recommendation, forecasting, visual inspection, transcription, translation, chatbots, and document analysis. The exam may describe the business outcome without naming the AI category directly. That means you must infer the workload from context. A strong exam technique is to ask yourself, “What is the system trying to do with the data?” If it is learning patterns from historical data to predict future outcomes, think machine learning. If it is interpreting images or video, think computer vision. If it is understanding or producing language, think NLP or generative AI depending on whether content is analyzed or created.

Be careful not to overcomplicate introductory questions. AI-900 wants foundational categorization. It is usually enough to identify the primary workload and avoid answers that describe unrelated AI techniques. Simpler, business-aligned reasoning is often rewarded more than technical depth.

Section 2.2: Common AI workloads including forecasting, anomaly detection, ranking, and conversational AI

Several AI workload types appear repeatedly on AI-900 because they represent common business uses of AI. Forecasting involves predicting a future numeric value based on historical patterns. Typical examples include sales forecasting, demand planning, energy usage prediction, or estimating future call volumes. When you see time-based data and future estimates, forecasting should come to mind immediately. This usually falls under machine learning.
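To make the idea concrete, here is a deliberately simple sketch in plain Python with hypothetical sales figures (the AI-900 exam itself requires no coding). A naive moving average is not what an Azure service would use, but it shows what "predicting a future numeric value from historical patterns" means:

```python
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Hypothetical monthly sales for one store (the historical data)
sales = [100, 120, 110, 130, 125, 135]

# Forecast next month's sales from the three most recent months
print(moving_average_forecast(sales))  # → 130.0
```

The point for the exam is the shape of the problem, not the algorithm: time-ordered numeric history in, estimated future value out.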

Anomaly detection is about identifying unusual events, rare behaviors, or unexpected deviations from normal patterns. Common scenarios include fraud detection, equipment failure alerts, network intrusion monitoring, and abnormal sensor readings in IoT systems. The exam may use phrases such as “identify unusual transactions” or “detect outliers.” That language strongly suggests anomaly detection rather than simple classification. Exam Tip: If the system is looking for what is different from normal rather than assigning one of several known labels, anomaly detection is often the better answer.
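Anomaly detection can likewise be sketched with a toy z-score check over hypothetical transaction amounts (real services use far more sophisticated models). Notice that nothing here assigns one of several known labels; the code only asks what deviates from normal:

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Flag observations more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    return [v for v in values if abs(v - mean) > threshold * stdev]

# Daily transaction amounts; one value deviates sharply from the normal pattern
amounts = [20, 22, 19, 21, 20, 23, 500]
print(flag_anomalies(amounts))  # → [500]
```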

Ranking means ordering items by relevance, quality, probability, or user preference. Search engines rank results. E-commerce sites rank products for recommendations. Customer support systems may rank suggested solutions. Ranking can use machine learning, but on the exam, focus on the business behavior: the system is deciding what should appear first or in what order. A common trap is confusing ranking with classification. Classification assigns categories, while ranking orders alternatives.
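A minimal illustration of the ranking-versus-classification distinction, using hypothetical help articles and relevance scores: the system does not assign category labels, it orders the alternatives:

```python
# Hypothetical relevance scores a model might assign to help articles
articles = [
    ("Reset your password", 0.42),
    ("Update billing info", 0.17),
    ("Troubleshoot login errors", 0.88),
]

# Ranking orders items by score; classification would assign labels instead
ranked = sorted(articles, key=lambda item: item[1], reverse=True)
print([title for title, score in ranked])
# → ['Troubleshoot login errors', 'Reset your password', 'Update billing info']
```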

Conversational AI enables human-like interactions through chat or voice. Typical solutions include chatbots, virtual agents, support assistants, and speech-enabled systems. In exam scenarios, conversational AI may involve answering user questions, guiding users through tasks, or integrating speech recognition and text-to-speech. It is often supported by NLP, but the workload label may be conversational AI because the business value centers on interactive dialogue rather than one-time text analysis.

  • Forecasting: predicts future numeric outcomes from historical data.
  • Anomaly detection: flags unusual or suspicious observations.
  • Ranking: orders results by relevance or preference.
  • Conversational AI: supports interactive user communication through bots or assistants.

These workload names matter because exam questions frequently describe functionality rather than theory. If you memorize definitions only, you may miss scenario-based wording. Practice translating everyday business language into workload categories. For example, “find fraudulent claims” suggests anomaly detection. “Show the most relevant help articles first” suggests ranking. “Answer customer questions through a web chat window” suggests conversational AI. Recognizing these patterns quickly is a major score booster.
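As a toy study aid only (these mappings are simplified and are not an official Microsoft list), you could drill the translation habit with a small lookup table of scenario verbs:

```python
# Toy drill: map the scenario's action verb to a likely workload category
VERB_TO_WORKLOAD = {
    "predict": "machine learning (forecasting)",
    "detect outliers": "anomaly detection",
    "rank": "ranking",
    "answer questions": "conversational AI",
    "search documents": "knowledge mining",
    "generate": "generative AI",
}

print(VERB_TO_WORKLOAD["detect outliers"])  # → anomaly detection
```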

Section 2.3: When to use machine learning versus knowledge mining versus generative AI

This distinction is one of the most useful exam skills because these three categories can sound similar in business conversations. Machine learning is used when you want a system to learn patterns from data and make predictions, classifications, or decisions. Examples include predicting loan risk, classifying support tickets, recommending products, or forecasting sales. The output is usually a score, label, forecast, or ranking based on learned patterns.

Knowledge mining is used when you want to extract, organize, and discover insights from large collections of content such as documents, forms, PDFs, images, or enterprise records. A classic example is making thousands of documents searchable and extracting key information from them. The emphasis is not on creating new content or building predictive models from labeled training data. The emphasis is on unlocking value from existing information. On the exam, if the scenario mentions indexing documents, enriching searchable content, extracting entities, or enabling document search across a large repository, knowledge mining is likely the best fit.

Generative AI is used when the goal is to create new content such as answers, summaries, drafts, code, or images in response to prompts. It can support copilots, content generation, summarization, transformation, and question answering grounded in provided data. The phrase “generate” is important, but the exam may not state it directly. Watch for wording such as “draft,” “compose,” “create a summary,” “produce a response,” or “assist users by writing.”

A common exam trap is choosing generative AI when the real task is retrieval from existing documents. Another is choosing machine learning when the task is document search and extraction. Exam Tip: Ask what the system primarily does. If it predicts from patterns, choose machine learning. If it extracts and organizes insights from existing content, choose knowledge mining. If it creates new content based on prompts or context, choose generative AI.

There can be overlap in real solutions, but AI-900 usually expects the primary workload. For example, a chatbot that only retrieves FAQs may lean toward knowledge mining plus conversational AI, while a copilot that drafts personalized replies points more clearly to generative AI. A fraud detection system is machine learning, not generative AI, even if a generated summary is later added for analysts. Focus on the central business value described in the question stem.

Section 2.4: Responsible AI concepts including fairness, reliability, privacy, inclusiveness, transparency, and accountability

Responsible AI is a core conceptual area for AI-900, and Microsoft expects you to know the six principles in practical terms: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These are not just ethical labels. On the exam, they appear in scenario form, so you need to connect each principle to a concrete issue or design choice.

Fairness means AI systems should treat people equitably and avoid harmful bias. If a hiring model consistently disadvantages one group, fairness is the issue. Reliability and safety mean the system should perform consistently and minimize harm, especially in high-impact situations. If a model behaves unpredictably under real-world conditions, reliability and safety are the concern. Privacy and security focus on protecting personal data and securing systems against misuse or unauthorized access. If a scenario mentions sensitive customer data, consent, or data protection, think privacy and security.

Inclusiveness means systems should be designed for people with a wide range of abilities, languages, backgrounds, and contexts. A voice assistant that works poorly for users with different accents raises inclusiveness concerns. Transparency means users and stakeholders should understand how AI is used, what data influences outcomes, and what the system can or cannot do. If users need explanations, documentation, confidence indicators, or clear disclosure that they are interacting with AI, transparency is involved. Accountability means humans and organizations remain responsible for AI outcomes and governance. If a company must assign oversight, review decisions, or establish policies for AI use, that reflects accountability.

  • Fairness: avoid biased outcomes across groups.
  • Reliability and safety: ensure dependable, safe behavior.
  • Privacy and security: protect data and access.
  • Inclusiveness: design for broad accessibility and usability.
  • Transparency: explain AI usage and limitations.
  • Accountability: maintain human responsibility and governance.

Exam Tip: The exam often tests the difference between transparency and accountability. Transparency is about understanding and explainability; accountability is about responsibility and oversight. Do not confuse them. Another common trap is mixing fairness and inclusiveness. Fairness is about equitable outcomes; inclusiveness is about designing so different people can use and benefit from the system.

When in doubt, identify the harm or risk described in the scenario. If the risk is unequal treatment, choose fairness. If the risk is data exposure, choose privacy and security. If the need is explaining model behavior, choose transparency. This practical mapping is exactly what the exam is designed to measure.

Section 2.5: Microsoft Azure AI service families and choosing the right solution at a high level

AI-900 does not expect deep product mastery in this chapter, but it does expect you to recognize Azure AI service families at a high level and match them to workloads. Azure AI services support prebuilt capabilities for vision, language, speech, document intelligence, and related tasks. Azure Machine Learning supports building, training, deploying, and managing machine learning models. Azure AI Search is commonly associated with knowledge mining and intelligent search experiences. Azure OpenAI Service is associated with generative AI workloads using large language models and related foundation models.

Choosing the right family begins with identifying whether the need is custom prediction, prebuilt perception, retrieval from content, or generated output. If a company wants to build a custom model to predict customer churn from historical records, Azure Machine Learning is the high-level fit. If it wants OCR, image analysis, speech transcription, or sentiment analysis using prebuilt AI capabilities, Azure AI services are the likely fit. If it wants to search and extract value from a large corpus of unstructured documents, Azure AI Search aligns well. If it wants a copilot, text generation, summarization, or conversational assistance based on prompts, Azure OpenAI Service is the likely answer.

A common trap is choosing Azure Machine Learning for every AI problem because it sounds broad and powerful. In many exam questions, Microsoft wants you to notice that a prebuilt service is more appropriate than building and training a custom model. Another trap is assuming every chatbot requires generative AI. Some bots use traditional conversational flows or retrieve knowledge without generating novel responses. Exam Tip: Prefer the simplest Azure solution that satisfies the requirement described. AI-900 often rewards “right-sized” service choices.

At this level, think in families, not detailed SKU comparisons. Machine learning equals custom predictive modeling. Azure AI services equal prebuilt capabilities for common vision, language, speech, and document tasks. Azure AI Search equals searchable insight from content. Azure OpenAI Service equals generative and copilot-style experiences. Once you categorize the business need correctly, the solution family usually becomes much clearer.

Section 2.6: Exam-style scenarios and practice questions for Describe AI workloads

Although this section does not include actual quiz items, you should know how AI-900 presents workload questions. The exam typically gives you a short scenario packed with clues and then asks for the most appropriate workload, principle, or service family. Success depends on filtering out irrelevant details. Company size, cloud migration history, and industry background may be included as distractors. The correct answer usually depends on one or two key requirements such as predicting values, analyzing text, extracting document content, or generating responses.

For workload identification, train yourself to look for the primary verb. Predict, detect, classify, extract, search, translate, converse, and generate each point in different directions. For responsible AI, look for the stated risk or governance need. Unequal outcomes indicate fairness. Need for explanation indicates transparency. Protecting sensitive records indicates privacy and security. Need for human review and policy ownership indicates accountability.

When two answers seem plausible, ask which one most directly solves the scenario. For example, ranking and recommendation can be related, but if the text emphasizes ordering results by relevance, ranking is the cleaner choice. Machine learning and generative AI can both appear in customer service solutions, but if the requirement is to create draft responses from prompts, generative AI is the stronger fit. If the requirement is to route tickets to categories, traditional machine learning or NLP classification is more appropriate.

Exam Tip: Beware of overreading. AI-900 questions are usually simpler than they first appear. If the scenario says “identify handwritten text in scanned forms,” do not jump to generative AI or custom ML. That is fundamentally a document/OCR-style workload. If it says “predict next month’s demand,” that is forecasting. If it says “provide users with a natural language assistant,” think conversational AI, possibly supported by language or generative services depending on the wording.

To prepare effectively, practice converting plain business statements into AI categories. This chapter’s lesson goals support that skill directly: recognize core AI workloads and business scenarios, differentiate machine learning, computer vision, NLP, and generative AI, explain responsible AI principles in practical terms, and answer AI workload questions in AI-900 exam style. If you can do those four things consistently, you will be well positioned for this exam domain and for the Azure-specific chapters that follow.

Chapter milestones
  • Recognize core AI workloads and business scenarios
  • Differentiate machine learning, computer vision, NLP, and generative AI
  • Explain responsible AI principles in practical terms
  • Answer AI workload questions in AI-900 exam style
Chapter quiz

1. A retail company wants to predict the numeric value of next month's sales for each store based on historical sales data. Which AI workload does this scenario describe?

Show answer
Correct answer: Forecasting
Forecasting is the machine learning workload used to predict numeric values over time, such as future sales. Computer vision is used for analyzing images or video, so it does not fit a sales prediction scenario. Conversational AI is used for chatbots and virtual agents, not for time-based numeric prediction. On AI-900, identifying the business objective first helps distinguish forecasting from unrelated AI workloads.

2. A company needs to process thousands of scanned invoices and extract printed text such as invoice numbers, dates, and totals. Which AI capability should they identify for this requirement?

Show answer
Correct answer: Optical character recognition (OCR)
OCR is the correct capability because it extracts text from scanned documents and images. Natural language generation is used to create new text, not read text from invoices. Anomaly detection is used to find unusual patterns in data, such as suspicious transactions, and does not address text extraction. AI-900 commonly tests recognition of document-processing scenarios as computer vision workloads.

3. A customer support team wants to deploy a virtual agent that can answer common user questions through a website chat interface using natural language. Which workload best matches this scenario?

Show answer
Correct answer: Conversational AI
Conversational AI is the best match because the solution involves a bot or virtual agent interacting with users in natural language. Knowledge mining focuses on extracting and organizing insights from large document collections, which may support search experiences but is not the primary workload described here. Image classification is a computer vision task for labeling images, so it is unrelated to a website chat assistant. AI-900 often expects candidates to map chatbots directly to conversational AI or NLP.

4. A loan approval model is found to approve applications accurately for most applicants but performs significantly worse for one demographic group. Which responsible AI principle is most directly affected?

Show answer
Correct answer: Fairness
Fairness is the principle most directly affected because the model performs unequally across demographic groups. Transparency relates to explaining how a system works and documenting its limitations, which is important but not the primary issue in this scenario. Inclusiveness focuses on designing AI systems that can be used effectively by people with different needs and abilities; while related to broad accessibility, it is not the best answer for biased model performance. On AI-900, performance disparities between groups usually indicate a fairness concern.

5. A legal firm wants users to search a large collection of contracts, emails, and case files to find relevant information and extract insights quickly. Which AI workload best fits this business objective?

Show answer
Correct answer: Knowledge mining
Knowledge mining is the correct answer because it focuses on extracting insights from large volumes of documents and enabling search, discovery, and semantic retrieval. Generative AI creates new content such as text or images based on prompts, which is different from searching and organizing existing documents. Regression is a machine learning technique for predicting numeric values, so it does not fit a document search and insight extraction scenario. AI-900 frequently tests the distinction between document search workloads and predictive modeling.

Chapter 3: Fundamental Principles of Machine Learning on Azure

This chapter targets one of the most testable AI-900 areas: understanding what machine learning is, when it should be used, and how Azure supports the machine learning lifecycle. For the exam, Microsoft does not expect you to build models with code. Instead, you need to recognize machine learning workloads, identify common terminology, distinguish major learning approaches, and match Azure tools to the correct scenario. That means you should be able to look at a business problem and decide whether it is supervised learning, unsupervised learning, or a broader AI workload that belongs to a different Azure AI service.

At a high level, machine learning is about using data to train a model that can make predictions or discover patterns. In AI-900, the exam often tests your understanding through plain-language business examples rather than mathematical definitions. You may see scenarios about predicting house prices, classifying customer emails, grouping similar products, or identifying unusual transactions. Your task is to map those scenarios to the correct machine learning concept and then identify which Azure capability best fits. This chapter is designed to help you do that quickly and accurately under exam pressure.

One common exam trap is confusing machine learning with rule-based programming. If the problem can be solved by explicit rules written in advance, it may not require machine learning. Machine learning becomes useful when patterns are too complex to express as fixed rules, or when the system needs to improve based on historical data. Another trap is mixing up machine learning with Azure AI services such as vision, speech, or language. Some of those services are powered by machine learning behind the scenes, but on the exam you must answer based on the service the customer would use, not the hidden implementation.

In this chapter, you will first understand machine learning fundamentals without coding. Then you will compare supervised, unsupervised, and reinforcement learning, learn the core terms that appear repeatedly in the AI-900 objective domain, and connect those ideas to Azure Machine Learning capabilities. You will also review model lifecycle basics such as training, validation, and inference, along with practical no-code options in Azure. Finally, you will sharpen your exam-ready reasoning so that you can eliminate distractors and identify the best answer even when multiple options sound technically possible.

  • Know the difference between features and labels.
  • Recognize when a scenario is regression, classification, clustering, or anomaly detection.
  • Understand training, validation, testing, and inference in plain language.
  • Identify overfitting and basic model quality concerns.
  • Match Azure Machine Learning workspace, automated machine learning, and designer to the right use case.
  • Stay alert to wording that distinguishes machine learning from other Azure AI workloads.
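The lifecycle terms in this list can be illustrated with a minimal, entirely hypothetical sketch of the split-train-evaluate-infer pattern (no Azure involved, and the exam requires no coding):

```python
# Hypothetical labeled examples: (feature, label) pairs
data = [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10), (6, 12)]

# Split: earlier examples train the model; held-out examples validate it
train, validation = data[:4], data[4:]

# "Training": learn a simple ratio from the training examples
ratio = sum(y for x, y in train) / sum(x for x, y in train)

# "Validation": measure error on data the model never saw during training
errors = [abs(y - ratio * x) for x, y in validation]
print(ratio, max(errors))  # → 2.0 0.0

# "Inference": apply the trained model to a brand-new input
print(ratio * 7)  # → 14.0
```

A large gap between training error and validation error is the basic signal of overfitting, which is why the held-out split exists at all.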

Exam Tip: AI-900 questions are usually about selecting the most appropriate concept or Azure tool, not the most advanced one. If an option sounds powerful but overly complex for the described need, it is often a distractor.

Approach this domain like an exam coach would: translate the scenario into keywords, match those keywords to the machine learning task, and then confirm whether Azure Machine Learning is the intended platform. If you master that process, this chapter can become one of the highest-scoring parts of your exam.

Practice note for this chapter's lesson goals (understanding machine learning fundamentals without coding, comparing supervised, unsupervised, and reinforcement learning, and exploring Azure Machine Learning capabilities and model lifecycle basics): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 3.1: Official domain focus: Fundamental principles of ML on Azure

Section 3.1: Official domain focus: Fundamental principles of ML on Azure

The AI-900 exam blueprint expects you to explain foundational machine learning ideas and identify Azure services used to build and manage machine learning solutions. This is not a data scientist exam. You are not being tested on coding libraries, algorithm tuning, or advanced statistics. Instead, Microsoft wants you to show conceptual understanding: what machine learning is, what kinds of problems it solves, and how Azure provides managed capabilities for model development and deployment.

Machine learning uses data to train models that can predict outcomes, classify items, find patterns, or detect unusual behavior. On the exam, this is usually framed through business-friendly examples. For example, predicting the future sales of a product from historical data is a machine learning scenario. Deciding if an email is spam or not spam is another. Grouping similar customers without predefined categories is also a machine learning task, but it belongs to a different learning type.

The official domain also expects you to understand Azure-based machine learning options at a broad level. Azure Machine Learning is the main platform service you should associate with building, training, managing, and deploying models on Azure. Questions may mention a workspace, automated machine learning, the designer experience, models, datasets, endpoints, or pipelines. You do not need to memorize every interface detail, but you do need to know the purpose of each capability and when a no-code or low-code option makes sense.

A major distinction tested in this domain is the difference between machine learning as a custom model-building approach and prebuilt AI services that expose ready-made intelligence. If the scenario asks you to train a model using your own tabular data, Azure Machine Learning is usually the right direction. If the scenario is specifically about analyzing images, extracting text from documents, or translating speech, it may belong to Azure AI services rather than a custom ML workflow.

Exam Tip: When the question says a company wants to use its own historical data to predict or classify outcomes, think Azure Machine Learning first. When the question emphasizes a prebuilt capability such as OCR or sentiment analysis, think Azure AI services.

Another frequently tested area is understanding that machine learning on Azure includes the full lifecycle, not just training. You should recognize that organizations need a place to store assets, run experiments, track models, and deploy endpoints for predictions. Azure Machine Learning helps with those tasks through a managed workspace and related tools. In short, the exam domain is about practical recognition: knowing what ML is, what Azure offers, and how to match business needs to the correct Azure approach.

Section 3.2: Core ML concepts including features, labels, training, validation, and inference

The AI-900 exam repeatedly uses a small set of machine learning terms. If you know these clearly, many questions become much easier. Start with features. Features are the input variables used by a model to learn patterns. In a house price scenario, features might include square footage, location, number of bedrooms, and property age. They are the known facts supplied to the model.

A label is the answer the model is trying to learn from during supervised learning. In the same house example, the label would be the actual sale price. If you are training a spam filter, the features might include words or sender patterns, while the label would be spam or not spam. A common exam trap is reversing these two ideas. Remember: features go in, labels are what the model tries to predict.

Training is the process of using historical data to teach the model relationships between inputs and outcomes. The model looks at examples and adjusts itself so that it can make useful predictions on new data. Validation involves checking how well the model performs during development so you can compare models or settings and reduce the chance of building something that only works well on the training data. Some questions may also mention a test dataset, which is used for final evaluation after the model has been developed.

Inference happens after a model is trained and deployed. This is when the model receives new data and produces a prediction. On the exam, if a question asks about using a trained model to generate a result for a new input, that is inference. Do not confuse inference with training. Training builds the model; inference uses the model.

It is also important to understand datasets in practical terms. You generally divide data to support training and evaluation. A model that performs well only on training data but poorly on new data is not useful. That is why validation and testing matter. AI-900 does not expect detailed mathematical metrics, but it does expect you to understand the purpose of each stage.

  • Features: input columns or variables.
  • Labels: target outcomes in supervised learning.
  • Training: learning from known examples.
  • Validation: checking model performance during development.
  • Inference: using a trained model to make predictions on new data.

Exam Tip: If the question asks what data is required for supervised learning, the key clue is labeled data. If there are no predefined outcomes, the scenario is probably unsupervised learning instead.

Questions in this area often reward precise vocabulary. Learn the plain-English meaning of these terms, and you will be able to decode many scenario-based items quickly.
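The vocabulary above can be made concrete with a minimal sketch in plain Python (no ML libraries; the house data and the "model" are invented purely for illustration): features and labels go in during training, held-out data checks the result, and inference applies the trained model to a new input.

```python
# Toy supervised-learning walkthrough: features in, labels learned from.
# Each row: square feet is the feature; price is the label.
train = [(1000, 200_000), (1500, 300_000), (2000, 400_000)]
validation = [(1200, 250_000), (1800, 350_000)]

# Training: learn a relationship from labeled examples.
# Here the "model" is just the average price per square foot.
rate = sum(price / sqft for sqft, price in train) / len(train)

def predict(sqft):
    """Inference: apply the trained model to a new input."""
    return sqft * rate

# Validation: measure error on data the model never trained on.
errors = [abs(predict(sqft) - price) for sqft, price in validation]
mae = sum(errors) / len(errors)
print(f"Learned rate: {rate:.0f} per sq ft, validation MAE: {mae:.0f}")

# Inference on a brand-new house (no label known yet).
print(f"Predicted price for 1,600 sq ft: {predict(1600):.0f}")
```

The point of the sketch is the separation of stages, not the model itself: training consumes labeled rows, validation uses rows the model never saw, and inference receives a feature with no label at all.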

Section 3.3: Regression, classification, clustering, and anomaly detection explained simply

The exam strongly emphasizes the ability to identify the type of machine learning problem from a scenario. Four names appear often: regression, classification, clustering, and anomaly detection. The easiest way to answer correctly is to focus on the kind of output the business wants.

Regression predicts a numeric value. If the answer is a number on a continuous scale, such as price, revenue, temperature, or demand, the scenario is usually regression. A classic AI-900 example is predicting the cost of a taxi trip or the future sales amount for a store. Even if the model is complex, the exam keyword is still simple: numeric prediction means regression.

Classification predicts a category or class. If the goal is yes or no, true or false, fraud or not fraud, churn or no churn, disease A or disease B, then classification is the likely answer. Sometimes there are only two classes, and sometimes there are many. The core idea is that the output is a label rather than a continuous number.

Clustering groups similar data items when there are no predefined labels. The system discovers natural groupings based on patterns in the data. For example, segmenting customers into groups based on purchasing behavior without already knowing the segment names is clustering. This is an unsupervised learning task because there are no labels to train on.

Anomaly detection identifies unusual items or events that do not fit normal patterns. Common examples include suspicious credit card transactions, abnormal sensor readings, or unusual login behavior. On the exam, anomaly detection can sound similar to classification, especially when the anomaly could be called fraud. The clue is whether the question emphasizes finding unusual deviations from normal behavior, often when there are few or no labeled examples.

AI-900 may also expect you to compare broader learning categories. Regression and classification are typically supervised learning because they use labeled outcomes. Clustering is unsupervised because it searches for hidden patterns without labels. Reinforcement learning is different again: it learns through rewards and penalties as an agent interacts with an environment, such as a robot learning to navigate or a system learning an optimal sequence of actions.

Exam Tip: Do not overcomplicate the question. Ask yourself, “Is the answer a number, a category, a group, or an unusual event?” That shortcut often leads directly to regression, classification, clustering, or anomaly detection.

A frequent trap is choosing classification just because the business wants to decide something. If the result is an estimated amount, it is regression. If the business wants to discover groups with no existing labels, it is clustering. If the business wants to spot rare, unusual behavior, anomaly detection is often the best fit.
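The "number, category, group, or unusual event" shortcut is mechanical enough to write down as a lookup. This is a study aid, not exam wording; the output phrases are invented stand-ins for scenario cues.

```python
# Map the kind of output a business wants to the ML workload type.
WORKLOAD_BY_OUTPUT = {
    "numeric value": "regression",           # price, revenue, temperature, demand
    "category": "classification",            # spam / not spam, churn / no churn
    "groups without labels": "clustering",   # customer segments, no names yet
    "unusual event": "anomaly detection",    # fraud-like deviations from normal
}

def identify_workload(desired_output):
    """Apply the exam shortcut: ask what the answer looks like."""
    return WORKLOAD_BY_OUTPUT.get(desired_output, "re-read the scenario")

print(identify_workload("numeric value"))          # regression
print(identify_workload("groups without labels"))  # clustering
```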

Section 3.4: Overfitting, data quality, model evaluation, and responsible ML considerations

AI-900 does not go deeply into model tuning, but it does expect you to understand why some models perform poorly in the real world. One key concept is overfitting. A model is overfit when it learns the training data too closely, including noise and accidental patterns, so it performs well on known data but poorly on new data. On the exam, overfitting is usually recognized through a description such as “high accuracy during training but poor performance after deployment.”

The opposite idea, sometimes discussed more indirectly, is underfitting. That happens when a model is too simple or not trained well enough to capture useful patterns. However, overfitting is the more common exam focus. Validation and test datasets help reveal whether a model generalizes well beyond training examples.

Data quality matters just as much as the algorithm. If the data is incomplete, inaccurate, outdated, biased, or not representative of the real population, the model results will be unreliable. Exam questions may describe duplicated records, missing values, skewed sampling, or labels that were assigned inconsistently. In each case, the issue is that poor-quality data leads to poor-quality predictions. This is often summarized as “garbage in, garbage out.”

Model evaluation in AI-900 is mostly conceptual. You should understand that a model must be assessed using data not used in training and that evaluation helps compare model performance. You do not need to master formulas, but you should recognize that there are metrics appropriate to different tasks, and that success cannot be judged by training performance alone. A model should be useful, accurate enough for the scenario, and reliable on unseen data.

Responsible machine learning also appears in this domain and connects to the broader responsible AI themes introduced earlier in the course. If a model is used in hiring, lending, healthcare, or other impactful decisions, fairness, transparency, accountability, privacy, and reliability become especially important. Bias in data can lead to unfair predictions. Lack of explanation can make results hard to trust. Poor governance can create compliance risks.

Exam Tip: If a question mentions biased historical data or unfair outcomes across groups, the issue is not just technical performance; it is also a responsible AI concern. On AI-900, responsible AI principles can be the real target of the question even when the wording sounds like a machine learning issue.

Watch for distractors that focus on choosing a more complex algorithm when the actual problem is data quality or overfitting. Better tools cannot fix bad data automatically. The exam often rewards candidates who identify the root cause, not the flashiest technology.
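Overfitting can be demonstrated in a few lines of plain Python with invented numbers: a model that simply memorizes its training rows scores perfectly on them but fails completely on new data, while a simpler rule that captured the general pattern holds up on both.

```python
# Overfitting illustrated: memorization beats generalization only on known data.
train = {1000: 200_000, 1500: 300_000, 2000: 400_000}    # sqft -> price
new_data = {1200: 240_000, 1800: 360_000}                # unseen examples

def memorizer(sqft):
    """"Overfit" model: a pure lookup table of the training set."""
    return train.get(sqft, 0)  # knows nothing about unseen inputs

def simple_rule(sqft):
    """Simpler model that captured the underlying pattern."""
    return sqft * 200

def accuracy(model, data):
    """Fraction of exact predictions on a dataset."""
    hits = sum(model(sqft) == price for sqft, price in data.items())
    return hits / len(data)

print("memorizer   train:", accuracy(memorizer, train), "new:", accuracy(memorizer, new_data))
print("simple rule train:", accuracy(simple_rule, train), "new:", accuracy(simple_rule, new_data))
```

This is exactly the exam signature described above: high accuracy during training, poor performance after deployment. Only evaluation on held-out data reveals the difference between the two models.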

Section 3.5: Azure Machine Learning workspace, automated machine learning, designer, and no-code capabilities

Azure Machine Learning is Microsoft’s cloud platform for creating, managing, and deploying machine learning solutions. For AI-900, you need to know the purpose of its core capabilities rather than the step-by-step configuration details. The central organizational unit is the workspace. An Azure Machine Learning workspace provides a place to manage assets such as datasets, experiments, models, compute resources, endpoints, and related artifacts. If a question asks where a team manages machine learning resources in Azure, the workspace is the key term.

Automated machine learning, often called automated ML or AutoML, helps users train and compare models with limited coding. It can try multiple algorithms and preprocessing approaches to find a strong candidate model for a given dataset and prediction task. On the exam, AutoML is a strong answer when the scenario emphasizes quickly building a model, reducing manual algorithm selection, or enabling users who are not expert data scientists to train predictive models.

The designer is a drag-and-drop visual interface for building machine learning pipelines. It allows users to connect data preparation, training, and evaluation components visually. If the exam describes a low-code or visual workflow for creating an ML pipeline, designer is the likely answer. A common trap is confusing designer with automated ML. Designer gives visual control over the pipeline, while automated ML focuses on automatically selecting and optimizing models.

Azure Machine Learning also supports the model lifecycle more broadly: preparing data, training models, tracking experiments, registering models, and deploying them for inference. Questions may mention endpoints, which are deployed interfaces that applications can call to get predictions from a trained model. Again, AI-900 tests recognition, not implementation detail.

No-code and low-code capabilities are especially important in this exam chapter because the lessons focus on understanding machine learning fundamentals without coding. Microsoft wants candidates to know that machine learning on Azure is accessible beyond traditional programming-heavy workflows. Automated ML and designer are the main examples you should remember.

  • Workspace: central place to manage ML assets and activities.
  • Automated ML: automatically trains and compares models.
  • Designer: visual drag-and-drop pipeline authoring.
  • Deployment endpoint: used to perform inference with a trained model.

Exam Tip: If the scenario says “visual interface,” think designer. If it says “automatically identify the best model,” think automated ML. If it says “manage experiments, models, and compute,” think workspace.

Do not confuse Azure Machine Learning with prebuilt Azure AI services. Azure Machine Learning is the right answer when the organization wants to build or customize models using its own data and control the lifecycle. That distinction appears often on the exam.
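Conceptually, automated ML runs a loop like the following sketch: train several candidate models, score each on held-out data, and keep the best. This is plain Python illustrating the idea with deliberately trivial candidates, not the Azure Machine Learning SDK.

```python
# What automated ML does conceptually: compare candidates on validation data.
train = [(1000, 200_000), (1500, 300_000), (2000, 400_000)]
validation = [(1200, 250_000), (1800, 350_000)]

# Candidate "models" are trivial fixed price-per-sqft rates for illustration.
candidates = {f"rate_{r}": (lambda sqft, r=r: sqft * r) for r in (150, 200, 250)}

def mae(model, data):
    """Mean absolute error on held-out examples."""
    return sum(abs(model(sqft) - price) for sqft, price in data) / len(data)

# Score every candidate on validation data and select the lowest-error one.
scores = {name: mae(model, validation) for name, model in candidates.items()}
best = min(scores, key=scores.get)
print("validation scores:", scores)
print("selected model:", best)
```

Real automated ML varies algorithms and preprocessing rather than a single rate, but the shape is the same: try, evaluate, select, with no manual algorithm picking.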

Section 3.6: Exam-style scenarios and practice questions for Fundamental principles of ML on Azure

This chapter closes with exam-style reasoning guidance rather than direct quiz items. On AI-900, scenario wording is often simple, but the distractors are chosen to exploit common misunderstandings. Your strategy should be to identify the required output, determine whether labels exist, and then match the need to the correct Azure capability.

For example, when a scenario describes predicting a future numeric amount such as sales, cost, temperature, or demand, anchor immediately on regression. If the scenario describes assigning one of several categories such as approved or denied, fraud or not fraud, or premium or standard, anchor on classification. If there are no predefined outcomes and the task is to find similar groups, choose clustering. If the task is to find rare deviations or unusual patterns, anomaly detection is the better fit.

Then ask whether the question is about a machine learning concept or an Azure product. If the company wants to build a custom predictive model from its own business data, Azure Machine Learning is usually the platform. If the company wants a no-code or low-code path, examine the wording carefully. “Automatically find the best model” points to automated ML. “Use a visual drag-and-drop pipeline” points to designer. “Central place to manage assets” points to a workspace.

Questions may also test lifecycle thinking. If a model is being created from historical examples, that is training. If the model is being checked using held-out data, that is validation or testing. If a deployed model is being called to make predictions on new records, that is inference. These distinctions are simple but heavily tested.

Another useful exam habit is watching for hidden responsible AI cues. If the scenario mentions unfair treatment of groups, skewed historical decisions, or concerns about transparency and trust, the correct answer may involve responsible AI principles rather than a pure modeling choice. Similarly, if performance is poor only on new data, suspect overfitting instead of assuming the wrong Azure service was selected.

Exam Tip: In AI-900, the best answer is usually the one that fits the stated business requirement most directly. Avoid reading advanced assumptions into the scenario. If the problem can be solved by a standard machine learning concept or a named Azure capability from the objective domain, that is typically the intended answer.

As you move into practice assessment, review this chapter until you can classify scenarios in seconds. The exam rewards fast pattern recognition: output type, label presence, lifecycle stage, and Azure tool match. If you can apply those four checks consistently, you will handle most machine learning questions in this domain with confidence.
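The lifecycle check in particular can be drilled as a tiny keyword triage. The cue words below are invented stand-ins for typical scenario phrasing, not an official rubric; the order of checks matters because a sentence can mention more than one cue.

```python
def lifecycle_stage(description):
    """Classify a scenario sentence into a lifecycle stage by its cue words."""
    text = description.lower()
    if "new record" in text or "deployed" in text:
        return "inference"
    if "held-out" in text or "unseen" in text:
        return "validation or testing"
    if "historical" in text or "examples" in text:
        return "training"
    return "unclear"

print(lifecycle_stage("A model is created from historical examples"))  # training
print(lifecycle_stage("A deployed model scores new records"))          # inference
```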

Chapter milestones
  • Understand machine learning fundamentals without coding
  • Compare supervised, unsupervised, and reinforcement learning
  • Explore Azure machine learning capabilities and model lifecycle basics
  • Practice AI-900 questions on ML principles and Azure tools
Chapter quiz

1. A retail company wants to use historical sales data to predict next month's revenue for each store. Which type of machine learning workload should they use?

Show answer
Correct answer: Regression
Regression is correct because the goal is to predict a numeric value, which is a core AI-900 concept for supervised learning. Classification would be used to predict a category, such as whether a store is high-risk or low-risk. Clustering is an unsupervised technique used to group similar items when no labeled outcome is provided.

2. You are reviewing a dataset used to train a model that predicts whether a customer will cancel a subscription. The dataset includes customer age, monthly spend, and support ticket count, along with a column named Churned with values Yes or No. In this scenario, what is the label?

Show answer
Correct answer: The Churned column
The Churned column is correct because the label is the value the model is trying to predict. Customer age, monthly spend, and support ticket count are features, not labels. The full dataset contains both features and labels, but the label specifically refers to the target outcome used in supervised learning.

3. A company wants to group customers into segments based on purchasing behavior, but it does not have predefined categories for those customers. Which approach should you identify as most appropriate?

Show answer
Correct answer: Unsupervised learning using clustering
Unsupervised learning using clustering is correct because the goal is to discover natural groupings in data without labeled outcomes. Supervised classification requires known categories in advance, which the scenario explicitly says are not available. Reinforcement learning is used for decision-making based on rewards and is not the best fit for customer segmentation.

4. A business analyst with no coding experience wants to train and compare multiple machine learning models in Azure by using historical data and having Azure automatically select the best model. Which Azure capability should you recommend?

Show answer
Correct answer: Automated machine learning in Azure Machine Learning
Automated machine learning in Azure Machine Learning is correct because AI-900 expects you to recognize it as the Azure option for training and evaluating models with minimal manual model-selection effort. Azure AI Language and Azure AI Vision are prebuilt AI services for language and image scenarios, not general-purpose tools for training custom predictive models from tabular business data.

5. A data science team reports that a model performs extremely well on training data but poorly on new, unseen data. Which issue does this most likely indicate?

Show answer
Correct answer: Overfitting
Overfitting is correct because the model has learned the training data too closely and does not generalize well to new data, which is a common model quality concern in the AI-900 domain. Inference refers to using a trained model to make predictions, not to a quality problem. Clustering is an unsupervised learning method and does not describe this train-versus-new-data performance issue.

Chapter 4: Computer Vision and NLP Workloads on Azure

This chapter targets one of the highest-value parts of the AI-900 exam: recognizing common computer vision and natural language processing workloads and matching them to the correct Azure AI service. On the exam, Microsoft rarely asks for deep implementation details. Instead, it tests whether you can identify the business need, classify the AI workload, and choose the most appropriate Azure offering. That means your success depends on understanding service boundaries, not memorizing code or SDK syntax.

In this chapter, you will connect exam objectives to real-world scenarios involving images, video, documents, text, speech, and translation. You will learn how to distinguish Azure AI Vision from Azure AI Language, when to think about OCR versus document extraction, and how to avoid common traps involving services that sound similar but solve different problems. The chapter also reinforces mixed-domain reasoning, because AI-900 frequently presents scenario-based questions where multiple services appear plausible at first glance.

The first major exam focus in this chapter is computer vision workloads on Azure. These include analyzing image content, detecting objects, reading text from images, analyzing faces within allowed Azure capabilities, and deriving insights from video. The second major focus is NLP workloads on Azure, including sentiment analysis, key phrase extraction, entity recognition, conversational language understanding, question answering, translation, and speech-related solutions. You should expect the exam to test both direct definitions and business-matching scenarios.

Exam Tip: For AI-900, begin every scenario by asking, “What is the input?” If the input is an image, scanned page, or video, start with computer vision services. If the input is text, spoken language, or multilingual communication, start with NLP services. This simple rule eliminates many distractors.
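The input-first rule is simple enough to express as a router. The input-type strings are illustrative, and the returned service families summarize the AI-900 objective areas rather than exhaustive product lists.

```python
def starting_service_family(input_type):
    """First-pass routing by input modality: vision for visual input, NLP for language."""
    vision_inputs = {"image", "scanned page", "video"}
    language_inputs = {"text", "speech", "multilingual text"}
    if input_type in vision_inputs:
        return "computer vision services (e.g., Azure AI Vision)"
    if input_type in language_inputs:
        return "NLP services (e.g., Azure AI Language, Speech, Translator)"
    return "identify the input modality first"

print(starting_service_family("image"))
print(starting_service_family("speech"))
```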

Another recurring exam skill is separating broad platform names from specific use cases. Azure AI Vision is associated with image analysis, OCR, object detection, and some visual understanding tasks. Azure AI Language supports text analytics and language understanding tasks. Azure AI Speech handles speech-to-text, text-to-speech, speech translation, and speaker-related speech features. Azure AI Translator focuses on language translation. Azure AI Document Intelligence specializes in extracting structure and fields from forms, invoices, receipts, and similar documents. The exam often rewards candidates who can identify which service is purpose-built rather than merely possible.

As you work through the sections, pay attention to the wording of business requests. Phrases like “identify objects in an image,” “read printed text,” “extract invoice fields,” “determine sentiment,” “translate spoken conversation,” or “answer questions from a knowledge base” point strongly to specific services. The exam may also include responsible AI considerations, such as choosing appropriate solutions, understanding sensitivity around face-related capabilities, and recognizing that AI outputs are probabilistic rather than guaranteed to be correct.

This chapter is designed as a practical exam-prep guide, not just a conceptual overview. Each section maps directly to exam objectives, highlights common answer traps, and explains how to reason through Microsoft-style scenarios. If you can confidently match business needs to Azure AI Vision and Azure AI Language solutions after this chapter, you will be well prepared for a significant portion of the AI-900 skills measured.

Practice note for "Identify key computer vision workloads and Azure services": document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Practice note for "Understand NLP workloads for text, speech, and translation": document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Practice note for "Match business needs to Azure AI Vision and Azure AI Language solutions": document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 4.1: Official domain focus: Computer vision workloads on Azure

Section 4.1: Official domain focus: Computer vision workloads on Azure

The AI-900 exam expects you to recognize the main categories of computer vision workloads and the Azure services aligned to them. Computer vision refers to AI systems that interpret visual input such as photographs, scanned images, camera feeds, or video. In exam terms, this usually means identifying whether a business scenario needs image analysis, text extraction from images, face-related analysis, object detection, spatial understanding, or video insight generation.

A strong exam approach is to break computer vision into workload types. First, image analysis includes describing an image, detecting visual features, tagging content, identifying objects, or generating captions. Second, OCR focuses on reading printed or handwritten text embedded in images. Third, face-related workloads involve detecting human faces and analyzing facial attributes within Azure’s supported and governed capabilities. Fourth, video understanding extends image analysis over time, such as identifying scenes, extracting text from frames, or indexing video content. Finally, document-focused extraction overlaps with vision but often belongs more specifically to Document Intelligence when the goal is to pull structured fields from business forms.

On the AI-900 exam, the official domain focus is not on building custom convolutional neural networks or tuning image models. Instead, it emphasizes choosing from Azure AI services that provide prebuilt capabilities. That means exam questions often describe a company requirement such as monitoring product photos, reading signs in uploaded images, or analyzing media content, then ask which Azure service fits best.

Exam Tip: If the task is general image understanding, think Azure AI Vision. If the task is extracting structured fields from business documents like invoices and receipts, think Azure AI Document Intelligence. Many candidates lose points by selecting Vision for a form-processing requirement that is actually document extraction.

A common trap is confusing image classification with object detection. Image classification answers “What is in this image?” at a broad level. Object detection answers “Where are the objects, and what are they?” If an exam question mentions bounding boxes, locating multiple items, or finding where objects appear, object detection is the better fit. Another trap is assuming every camera or video requirement needs a custom solution. AI-900 generally expects you to know Azure’s managed services first, not to design advanced bespoke models.

The exam also tests your ability to identify visual workload inputs and outputs. Inputs might include photos, scanned PDFs, surveillance footage, or mobile-captured images. Outputs might be tags, captions, OCR text, extracted entities, timestamps, or confidence scores. Microsoft may phrase the scenario in business language rather than technical language, so translate it mentally into an AI task category before choosing a service.

Finally, remember that responsible AI applies here too. Visual analysis systems can be affected by image quality, lighting, angle, and bias-related concerns. Exam questions may not go deep into policy, but they do expect you to understand that AI outputs are probabilistic and should be evaluated appropriately in real solutions.

Section 4.2: Image classification, object detection, OCR, facial analysis, and video understanding with Azure AI Vision

Azure AI Vision is the central service to remember for many computer vision questions on AI-900. It supports common scenarios such as analyzing image content, recognizing objects, reading text in images through OCR-related capabilities, and enabling broader visual understanding tasks. The exam expects you to map requirements to capabilities, not to memorize API names.

Start with image classification and tagging. If a company wants to determine whether an uploaded image contains a car, dog, mountain, or food item, that is a vision classification-style workload. Azure AI Vision can analyze visual content and return descriptive tags or captions. If the business wants a general description of image contents for search, organization, or accessibility, this is another clue that Vision is appropriate.

Object detection goes a step further by locating items inside the image. If the scenario mentions identifying multiple products on a shelf, finding each bicycle in a street photo, or drawing rectangles around objects, that indicates object detection rather than simple classification. On the exam, words like “where,” “locate,” and “bounding box” should immediately push you toward object detection reasoning.

OCR is another frequent AI-900 topic. Optical character recognition extracts text from images, screenshots, scanned documents, or photos of signs. If the task is to read text from a street sign, receipt photo, scanned page, or screenshot, Vision is often the right answer unless the scenario emphasizes structured field extraction from forms. This distinction matters: OCR reads the text; Document Intelligence interprets the layout and business meaning of fields such as invoice number or total due.

Face-related scenarios appear on the exam, but be careful. AI-900 may reference facial analysis at a high level, such as detecting the presence of faces or supporting identity-related or image-analysis scenarios. However, candidates should avoid overgeneralizing face capabilities. Read the wording closely and stick to what is explicitly being asked. The exam is more likely to test service recognition than controversial or restricted implementation detail.

Video understanding extends visual analysis across sequences of frames. If an organization wants to search video archives, identify scenes, extract visible text in videos, or generate metadata from media content, think of Azure video analysis capabilities associated with the Azure AI Vision family. The key exam idea is that video understanding is not fundamentally separate from vision; it is vision applied over time.

Exam Tip: If the requirement says “analyze images” or “read text from images,” Azure AI Vision is usually correct. If it says “extract fields from invoices, receipts, IDs, or forms,” Azure AI Document Intelligence is more precise. The exam often tests this exact contrast.

A common trap is selecting Azure AI Language because the output of OCR is text. Remember: if the source data is an image, the first service you need is usually Vision to obtain the text. Language services may then analyze that text afterward, but they are not the primary answer for reading it from the image in the first place.

Section 4.3: Document intelligence and common information extraction scenarios

Document intelligence sits at the intersection of vision and business process automation, and it is a favorite source of exam distractors. Many candidates see a scanned file and immediately choose Azure AI Vision because it can perform OCR. But AI-900 distinguishes between simply reading text and extracting structured information from business documents. That second use case is where Azure AI Document Intelligence belongs.

Document Intelligence is designed for forms and business records such as invoices, receipts, tax forms, identification documents, and purchase orders. The goal is not just to return raw text. The goal is to identify fields, values, tables, key-value pairs, and layout structure. For example, a business may want invoice number, vendor name, line items, subtotal, tax, and total extracted automatically from thousands of incoming invoices. That is a classic document intelligence scenario.

On the exam, pay close attention to wording. If the requirement says “extract text from a scanned brochure,” OCR in Vision is likely enough. If it says “process receipts and capture merchant, date, and total,” choose Document Intelligence. If it says “read passport data fields” or “pull account numbers from forms,” that is again document intelligence. The presence of business fields, forms, and structured extraction is the clue.

Document processing questions may also include concepts like prebuilt models and information extraction from layout. AI-900 does not expect deep training workflows, but it may expect you to know that Azure provides purpose-built capabilities for common document types. This is important because Microsoft likes to test whether you understand when a specialized AI service is more appropriate than a general one.

Exam Tip: Ask yourself whether the organization needs text only or text plus meaning and structure. Text only suggests OCR. Text plus named fields, tables, or form values suggests Document Intelligence.
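The "text only vs. text plus structure" rule in the tip above can be written down as a tiny decision helper. This is a study aid, not an Azure API: the clue words are simplifications chosen for illustration.

```python
# Hedged study aid, not a routing algorithm: encodes the "text only vs. text
# plus named fields" rule from this section. Clue words are illustrative.

STRUCTURE_CLUES = {"invoice", "receipt", "form", "field", "table", "passport"}

def pick_document_service(requirement: str) -> str:
    req = requirement.lower()
    if any(clue in req for clue in STRUCTURE_CLUES):
        return "Azure AI Document Intelligence"   # named fields, forms, tables
    return "Azure AI Vision (OCR)"                # raw text from an image is enough

print(pick_document_service("extract text from a scanned brochure"))
print(pick_document_service("process receipts and capture merchant, date, and total"))
```

Running the helper on the two exam-style requirements from this section returns Vision OCR for the brochure and Document Intelligence for the receipts, which mirrors the reasoning the exam rewards.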

A common trap is choosing Azure AI Language for extracting invoice entities because invoices contain text. Language can identify entities in natural text, but business forms with positional layouts and labeled fields are better handled by Document Intelligence. Another trap is assuming OCR and document extraction are interchangeable. They are related, but not identical. OCR is a capability; document intelligence is a business solution for structured extraction.

In real solutions, services can be combined. A company might use Document Intelligence to extract fields and then Azure AI Language to analyze comments or free-text notes contained in the document. For AI-900, however, the exam usually wants the primary best-fit service. Choose the service that directly solves the main business problem described in the scenario.

Section 4.4: Official domain focus: NLP workloads on Azure

The second official domain focus in this chapter is natural language processing workloads on Azure. NLP deals with language in text or speech form. For AI-900, this domain includes analyzing written text, understanding user intent, extracting useful information, answering questions, translating between languages, and converting speech to and from text. The exam often tests whether you can identify the right Azure AI service for each language-related need.

A practical way to study NLP for the exam is to divide it into service families. Azure AI Language is used for text analytics and language understanding tasks such as sentiment analysis, key phrase extraction, named entity recognition, conversational language understanding, and question answering. Azure AI Translator is used for text translation between languages. Azure AI Speech is used for speech-to-text, text-to-speech, speech translation, and related speech scenarios. These boundaries are the foundation of most exam questions in this area.

On AI-900, many NLP scenarios are business-driven. A company may want to analyze customer reviews, route support requests by intent, build a multilingual application, transcribe calls, or create a voice-enabled assistant. Your job is to identify the workload type first and the Azure service second. For example, customer review analysis points to text analytics. Voice transcription points to Speech. Multilingual text conversion points to Translator.

Exam Tip: If users are typing or uploading text, start with Azure AI Language or Translator. If users are speaking or listening, start with Azure AI Speech. The input and output modality is often the fastest route to the correct answer.
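The modality-first rule in the tip above can be sketched as a small routing function. The categories and their mapping are a simplification for study purposes, not an official Microsoft taxonomy.

```python
# Illustration of the "modality first, task second" rule. A study aid only;
# real service selection involves more nuance than two parameters.

def pick_nlp_service(input_modality: str, task: str) -> str:
    if input_modality == "audio":
        return "Azure AI Speech"          # speaking or listening scenarios
    if task == "translation":
        return "Azure AI Translator"      # text-to-text language conversion
    return "Azure AI Language"            # sentiment, entities, Q&A, intent

print(pick_nlp_service("audio", "transcription"))
print(pick_nlp_service("text", "translation"))
print(pick_nlp_service("text", "sentiment"))
```

Note that the audio check comes first: that ordering is exactly the exam heuristic, since a speech-translation scenario still starts with Speech even though translation is involved.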

One common trap is confusing question answering with general search. In Azure AI Language, question answering is intended to provide answers from a defined knowledge source such as FAQs, manuals, or curated content. It is not the same as internet search. Another trap is confusing conversational language understanding with sentiment analysis. Understanding intent in utterances like “book a flight tomorrow” is different from determining emotional tone in a product review.

You should also be ready for combined scenarios. For example, a multilingual customer support bot might use Speech for spoken input, Translator for language conversion, Language for question answering or text analytics, and speech synthesis for spoken output. The exam may ask for the single best component for one step in that pipeline. Read carefully and avoid selecting the broadest-sounding option if the scenario points to a specific capability.

As with vision, responsible AI matters. Language systems can misunderstand context, sarcasm, accents, and domain-specific vocabulary. AI-900 expects you to understand that these services are useful but not infallible, and that human oversight and evaluation are part of real deployment.

Section 4.5: Sentiment analysis, key phrase extraction, entity recognition, question answering, translation, and speech services

This section brings together the core NLP capabilities most likely to appear on the AI-900 exam. Start with sentiment analysis. This workload determines whether text expresses positive, negative, neutral, or mixed sentiment. Typical examples include customer feedback, app reviews, survey comments, and social media posts. If the goal is to understand opinion or emotional tone in text, Azure AI Language is the likely answer.

Key phrase extraction identifies the main terms or phrases in a body of text. This is useful for summarizing topics in documents, tickets, or reviews. Named entity recognition, often called entity recognition, identifies real-world items such as people, organizations, locations, dates, phone numbers, and more. If the exam asks how to pull names, places, or other recognized entities from text, think Azure AI Language.

Question answering is another important AI-900 topic. This capability supports systems that respond to user questions using a curated knowledge base, FAQ repository, or reference content. The exam may describe a customer self-service portal that answers common support questions from existing documentation. That points to question answering in Azure AI Language. The key clue is that answers come from known content sources, not open-ended reasoning.

Translation is simpler from a service-selection perspective. If the requirement is to convert text from one language to another, use Azure AI Translator. If the requirement is to translate spoken conversations or transcribe and translate speech, Azure AI Speech may be involved because the source modality is audio. Distinguish text translation from speech workflows carefully.

Azure AI Speech handles speech-to-text, text-to-speech, and speech translation scenarios. If a company wants live captions for meetings, transcribed call recordings, or synthetic voice output from an application, Speech is the service family to remember. If the scenario mentions spoken input, voice commands, or audio output, that is your strongest clue.

Exam Tip: Sentiment, key phrases, entities, and question answering belong with Azure AI Language. Translation belongs with Translator unless speech is central to the scenario. Speech-to-text and text-to-speech belong with Azure AI Speech.

A common trap is choosing Language for translation because translation deals with language. Microsoft separates this into a dedicated Translator service. Another trap is confusing entity recognition with OCR. OCR extracts characters from images; entity recognition identifies meaning in text that has already been captured. These can work together, but they are not the same task.

When matching business needs to solutions, focus on the primary deliverable. If the deliverable is “understand customer sentiment,” choose Language. If it is “support multilingual text,” choose Translator. If it is “convert meetings to text,” choose Speech. This precise matching is exactly what AI-900 rewards.

Section 4.6: Exam-style scenarios and practice questions for Computer vision workloads on Azure and NLP workloads on Azure

By this point, your goal is not just knowing definitions but developing exam-style reasoning. AI-900 frequently uses short business scenarios with several plausible Azure services. The best candidates identify the main input, the expected output, and whether the need is general analysis or specialized extraction. This method works especially well for mixed-domain questions involving both vision and language.

For a vision scenario, first ask whether the source is an image, video, or document. Then ask what the organization wants from that source. If it wants a description, tags, object locations, or text read from an image, Azure AI Vision is usually the lead answer. If it wants invoice totals, receipt merchants, or structured form values, Azure AI Document Intelligence is better. If text has already been extracted and now needs sentiment or entity analysis, then Azure AI Language may become the correct next step.

For an NLP scenario, ask whether the source is text or audio. Then identify the function: sentiment, key phrases, entities, question answering, translation, or speech processing. This removes much of the ambiguity. Many wrong answers on AI-900 come from recognizing the domain generally but selecting a neighboring service rather than the exact one needed.

Exam Tip: Watch for layered scenarios. A scanned complaint form might require OCR first and sentiment analysis second. The exam may ask for the first service needed, the best overall service for the main task, or the service for a specific stage. Read every verb carefully: read, extract, analyze, translate, transcribe, answer, detect, classify, or identify.

Common traps include confusing OCR with document extraction, translation with speech translation, and image analysis with text analytics. Another trap is overthinking implementation. AI-900 is a fundamentals exam. If a managed Azure AI service clearly fits the scenario, that is usually preferable to answers implying custom model training or complex architecture.

As you review practice material, build a mental map of trigger phrases. “Scanned invoice fields” means Document Intelligence. “Text in a photo” means Vision OCR. “Customer review mood” means sentiment analysis in Language. “FAQ chatbot from documents” means question answering. “Live meeting transcription” means Speech. “Convert website text between languages” means Translator. This pattern recognition is one of the fastest ways to improve your exam score.
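The mental map above can be captured literally as a lookup table for flashcard-style review. This is a study artifact, not deployable logic.

```python
# The trigger-phrase "mental map" from this section, written down as a lookup
# table. A revision aid only.

TRIGGER_MAP = {
    "scanned invoice fields":                 "Azure AI Document Intelligence",
    "text in a photo":                        "Azure AI Vision (OCR)",
    "customer review mood":                   "Azure AI Language (sentiment)",
    "faq chatbot from documents":             "Azure AI Language (question answering)",
    "live meeting transcription":             "Azure AI Speech",
    "convert website text between languages": "Azure AI Translator",
}

for phrase, service in TRIGGER_MAP.items():
    print(f"{phrase} -> {service}")
```

Drilling these pairings until they are automatic is one of the fastest ways to speed up scenario questions.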

Finally, remember that Microsoft tests selection discipline. The correct answer is the one that best aligns with the stated business need, not the one that might be adapted with extra work. In other words, choose the most direct managed service for the scenario. That exam-ready mindset will serve you well as you move into broader mixed-domain review and your full mock exam.

Chapter milestones
  • Identify key computer vision workloads and Azure services
  • Understand NLP workloads for text, speech, and translation
  • Match business needs to Azure AI Vision and Azure AI Language solutions
  • Master mixed-domain practice questions for vision and NLP
Chapter quiz

1. A retail company wants to process photos taken in stores to identify whether shelves contain products such as bottles, boxes, and cans. The solution must classify visual content in images rather than analyze text. Which Azure service should the company use?

Correct answer: Azure AI Vision
Azure AI Vision is correct because object detection and image analysis are core computer vision workloads tested on AI-900. Azure AI Language is for text-based NLP tasks such as sentiment analysis, entity recognition, and question answering, so it is not the best fit for identifying items in photos. Azure AI Translator is specifically for language translation and does not analyze image content.

2. A business wants to extract vendor names, invoice totals, and due dates from scanned invoices. The goal is to capture structured fields from documents, not just read lines of text. Which Azure service is the most appropriate?

Correct answer: Azure AI Document Intelligence
Azure AI Document Intelligence is correct because AI-900 expects you to distinguish simple OCR from document field extraction. Document Intelligence is purpose-built for forms, invoices, receipts, and similar structured documents. Azure AI Vision OCR can read text from images, but it is not the best answer when the requirement is to extract specific fields and structure. Azure AI Speech is for spoken language scenarios such as speech-to-text and text-to-speech, so it is unrelated.

3. A support team wants to analyze customer feedback submitted in text form and determine whether each comment is positive, negative, or neutral. Which Azure service should they use?

Correct answer: Azure AI Language
Azure AI Language is correct because sentiment analysis is a standard NLP workload in the AI-900 exam domain. Azure AI Speech focuses on audio-based tasks such as transcription and speech synthesis, not text sentiment classification. Azure AI Vision is intended for image and visual content analysis, so it would be the wrong service for analyzing written feedback.

4. A global call center needs a solution that can listen to spoken English during a live conversation and provide translated speech output in Spanish. Which Azure service best matches this requirement?

Correct answer: Azure AI Speech
Azure AI Speech is correct because speech translation is part of the Azure AI Speech service. This is a common exam distinction: if the input is spoken language and the output also involves speech, start with Speech. Azure AI Translator handles language translation, but it is typically the best match for text translation rather than end-to-end speech scenarios. Azure AI Language supports text analytics and language understanding, not live speech translation.

5. A company has a knowledge base of internal policy documents and wants employees to ask natural language questions and receive relevant answers. Which Azure service should be selected?

Correct answer: Azure AI Language
Azure AI Language is correct because question answering is an NLP workload associated with Azure AI Language in the AI-900 skills measured. Azure AI Vision is for images and visual analysis, so it does not fit a text-based knowledge base question-answering scenario. Azure AI Document Intelligence extracts fields and structure from documents, but it is not the primary service for answering natural language questions from a knowledge source.

Chapter 5: Generative AI Workloads on Azure

This chapter prepares you for one of the most visible and fast-growing parts of the AI-900 exam: generative AI workloads on Azure. Microsoft expects you to recognize what generative AI does, how it differs from predictive or analytical AI workloads, and which Azure services support common scenarios such as chat, summarization, content drafting, and copilots. The exam does not require deep engineering implementation, but it does expect clear conceptual understanding and the ability to choose the right Azure offering for a business need.

From an exam-prep perspective, this domain often appears in scenario-based questions. A question may describe a company that wants to generate marketing text, summarize customer conversations, build a conversational assistant, or ground model responses in organizational data. Your task is usually to identify the most appropriate Azure capability, understand why a foundation model is suitable, and recognize responsible AI concerns. This means you should know the language of the domain: foundation models, large language models, prompts, tokens, completions, copilots, and Azure OpenAI Service.

One common trap is confusing generative AI with classic machine learning or other Azure AI services. If a system must classify images, detect sentiment, extract key phrases, or translate text, the best answer may be a specialized vision or language service rather than a generative model. By contrast, if the scenario emphasizes creating new text, answering open-ended questions, drafting content, or supporting conversational interaction, generative AI is more likely the correct fit.

This chapter also connects directly to responsible AI. The AI-900 exam expects you to understand that generative systems can produce useful and natural-sounding content, but they can also generate inaccurate, harmful, or inappropriate output if not carefully designed and governed. Microsoft wants candidates to recognize safeguards such as content filtering, human oversight, prompt design, and transparency in AI-assisted experiences.

Exam Tip: When a question asks what a solution should do and the wording includes generate, draft, summarize, converse, or answer in natural language, think generative AI first. When the wording includes classify, detect, label, score, or predict from structured training data, think traditional machine learning or a specialized AI service.

In the sections that follow, you will learn beginner-friendly generative AI concepts, explore copilots and prompt-based experiences, review Azure OpenAI Service basics, and strengthen your exam reasoning. Read with two goals in mind: understanding what the technology does in practical terms and recognizing how the exam phrases correct and incorrect answer choices.

Practice note: for each learning objective in this chapter (understanding generative AI concepts in beginner-friendly language; exploring copilots, prompts, and foundation model use cases; learning Azure OpenAI Service basics and responsible use; and practicing AI-900 questions on generative AI workloads), document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 5.1: Official domain focus: Generative AI workloads on Azure

On the AI-900 exam, the generative AI domain is focused on awareness and service selection, not deep model training. Microsoft wants you to identify common generative AI workloads and understand the Azure tools associated with them. In practical terms, you should be able to recognize when an organization needs a generative AI solution for text generation, summarization, conversational interaction, or question answering, and connect that need to Azure OpenAI Service and related Azure patterns.

This exam area also checks whether you can distinguish generative AI workloads from other AI workloads already covered in earlier chapters. For example, image analysis, OCR, speech recognition, translation, entity extraction, and sentiment analysis are all valuable AI scenarios, but they are not the core of this domain unless they are part of a larger generative experience. The test may present answer options that are all Microsoft AI services, and the challenge is choosing the one that matches the scenario most directly.

A useful way to organize your thinking is by business outcome. If the customer wants the system to create original text, answer user questions in a chat interface, summarize documents or meetings, or help users complete tasks through natural language, the workload belongs in the generative AI category. If the customer wants a model to detect fraud, forecast sales, or classify emails, that is more aligned with machine learning or language analytics rather than generative AI.

Exam Tip: The AI-900 exam often rewards identifying the most natural service fit rather than every service that could technically help. Azure OpenAI Service is usually the strongest answer when the prompt describes natural language generation or chat-based completion.

Another tested concept is the role of copilots. Microsoft uses the term copilot to describe AI-assisted experiences that help users create, search, summarize, reason, or perform actions with natural language interaction. You are not expected to build a full copilot on the exam, but you should understand that copilots are common generative AI applications built on foundation models and often paired with enterprise data and safety controls.

Finally, this domain includes responsible AI. Questions may ask how to reduce harmful outputs, support safe deployment, or apply governance to generative systems. The exam is not asking for legal policy design, but it does expect awareness that generative AI must be monitored, filtered, and used with transparency and human judgment.

Section 5.2: What generative AI is and how it differs from traditional AI workloads

Generative AI refers to AI systems that can create new content. In AI-900 terms, this usually means generating text, but the broader idea can include code, images, summaries, and conversational responses. Instead of simply identifying patterns or assigning labels, a generative model produces output that did not exist previously. This is why it feels different from many traditional AI workloads.

Traditional AI and machine learning often focus on prediction, classification, detection, or recommendation. For example, a traditional model might determine whether a loan application is high risk, predict future demand, or classify a support ticket. The output is usually a category, a score, or a specific predicted value. By contrast, a generative AI model can draft an email response, summarize a long report, answer a question conversationally, or rewrite text in a different style.

This distinction matters on the exam because Microsoft may include tempting but wrong answer options based on familiar older AI patterns. If the scenario says, “Create a chatbot that answers broad user questions in natural language and can generate human-like responses,” that points to generative AI. If the scenario says, “Determine whether feedback is positive or negative,” that points to sentiment analysis, which is a language workload rather than a generative one.

Another important difference is flexibility. Generative AI models can perform many tasks from prompts alone, without task-specific training in the way many traditional ML solutions require. That does not mean they are always the best tool. Specialized AI services can be more accurate, cost-effective, and predictable for narrow tasks like OCR, translation, or named entity recognition.

  • Traditional AI: classify, predict, detect, recommend
  • Generative AI: draft, summarize, transform, converse, create
  • Traditional AI outputs: labels, scores, probabilities
  • Generative AI outputs: natural language or other created content

Exam Tip: Watch the verbs in the question. Verbs are often the fastest way to identify the workload type. Generate and summarize suggest generative AI; analyze and classify suggest non-generative AI.
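The verb heuristic in the tip above can be sketched as a toy classifier. The word lists are illustrative, not exhaustive, and real scenarios need full-sentence judgment.

```python
# The "watch the verbs" heuristic as a toy classifier. Word lists are
# illustrative only; real exam questions require reading the whole scenario.

GENERATIVE_VERBS = {"generate", "draft", "summarize", "converse", "create", "rewrite"}
TRADITIONAL_VERBS = {"classify", "detect", "predict", "score", "label", "recommend"}

def workload_type(requirement: str) -> str:
    words = set(requirement.lower().split())
    if words & GENERATIVE_VERBS:
        return "generative AI"
    if words & TRADITIONAL_VERBS:
        return "traditional ML / specialized AI service"
    return "unclear - reread the scenario"

print(workload_type("summarize long support tickets"))
print(workload_type("classify emails as spam or not spam"))
```

The fallback branch matters too: when no verb is decisive, the exam expects you to look for other clues rather than defaulting to the newest-sounding service.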

A final exam trap is assuming generative AI is always more advanced and therefore always the best answer. AI-900 tests practical service selection. If a narrow Azure AI service directly solves the problem, it is often the better exam answer than a broad generative model.

Section 5.3: Foundation models, large language models, tokens, prompts, and completions

To answer AI-900 questions confidently, you need a solid vocabulary for how generative AI systems work. A foundation model is a large pre-trained model that can support many downstream tasks. It is trained on broad patterns from large amounts of data and then used for tasks such as question answering, summarization, drafting, classification by instruction, and chat. A large language model, or LLM, is a type of foundation model designed primarily for language tasks.

The exam does not expect you to explain model architecture in detail, but it does expect you to understand the practical implication: foundation models are flexible. Instead of training a brand-new model for every text task, you can provide instructions and examples through prompts. This is a major reason generative AI has become widely used.

Tokens are another important test concept. A token is a unit of text processed by the model. It is not always the same as a word; it may be a whole word, part of a word, punctuation, or another text fragment depending on the tokenizer. On exam questions, you mainly need to know that prompts and responses consume tokens, and token usage affects limits and cost.
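A rough numerical sketch can make the token-budget idea concrete. Real models use subword tokenizers, so a word can be more than one token; the whitespace count below is only a crude approximation used for illustration, not how any actual tokenizer works.

```python
# Rough illustration of token budgeting. Real subword tokenizers split text
# differently (a word may be several tokens); this whitespace count is a
# crude stand-in for study purposes only.

def approx_tokens(text: str) -> int:
    return len(text.split())

prompt = "Summarize the attached meeting transcript in three bullet points."
completion = "1. Budget approved. 2. Launch moved to May. 3. Hiring paused."

# Both the prompt AND the completion consume tokens, which is the exam point:
# token usage affects limits and cost.
total = approx_tokens(prompt) + approx_tokens(completion)
print(total)
```

The takeaway for AI-900 is the single fact in the comment: both directions of the exchange count against token limits and billing.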

A prompt is the input instruction or context you give to the model. Good prompts help guide the model toward the desired output. A completion is the generated response from the model. In a chat setting, the completion may be the assistant’s reply. The exam may describe prompts as instructions, context, or examples supplied to the model to shape behavior.
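The prompt/completion relationship is often structured as a list of role-tagged messages in chat settings. The dict shape below follows a common chat-API convention and is illustrative only; consult the actual service documentation for the exact request format it expects.

```python
# Minimal sketch of a common chat-prompt structure: a "system" instruction
# sets the assistant's role and behavior, and "user" messages carry the actual
# request. The field names follow a widely used convention and are
# illustrative, not a specific service's schema.

def build_chat_prompt(system_instruction: str, user_question: str) -> list[dict]:
    return [
        {"role": "system", "content": system_instruction},  # role and behavior
        {"role": "user", "content": user_question},         # the user's prompt
    ]

messages = build_chat_prompt(
    "You are a concise assistant that answers HR policy questions.",
    "How many vacation days do new employees get?",
)
print(messages[0]["role"], messages[1]["role"])
```

In this framing, the model's reply to `messages` is the completion; the exam only expects you to recognize that prompts shape behavior and completions are the generated output.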

Exam Tip: If an answer choice mentions training a custom model from scratch for a simple text-generation task, be cautious. AI-900 usually emphasizes using an existing foundation model through prompting rather than full model development.

Prompt quality matters. Clear prompts generally produce better results than vague ones. You should understand high-level prompt ideas such as giving the model a role, stating the task clearly, specifying format, and providing relevant context. However, AI-900 is not a prompt-engineering certification. The exam objective is awareness, not advanced optimization.

Common traps include confusing prompts with training data or assuming completions are guaranteed to be factual. A model can generate fluent but incorrect content. This is why responsible use and grounding matter, especially in enterprise scenarios. If the question hints at improving relevance or accuracy by using trusted organizational information, that points toward retrieval-based patterns discussed in the next section.

Section 5.4: Copilots, chat experiences, content generation, summarization, and retrieval-augmented patterns at a high level

Many AI-900 generative AI questions are framed around user-facing scenarios. A company may want a virtual assistant for employees, a customer support chat experience, document summarization, or automatic drafting of emails and reports. These are all examples of practical generative AI workloads. Microsoft often refers to assistant-style experiences as copilots because they help users perform tasks rather than operate fully independently.

A copilot typically combines a generative model with user context, application context, and business rules. For example, a sales copilot might summarize customer history and draft follow-up emails. An internal HR copilot might answer benefits questions and provide policy summaries. On the exam, remember that the key value of a copilot is natural language assistance grounded in a work context.

Chat experiences are another common exam topic. In a chat workload, users submit messages and the model generates conversational replies. Unlike a simple keyword bot, a generative chat system can respond more flexibly to varied language. Still, it needs controls. Without limits or grounding, it may answer outside its intended scope or provide inaccurate information.

Summarization and content generation are especially important because they are easy to recognize in exam wording. If a scenario asks for concise meeting notes from a transcript, executive summaries from long documents, or first-draft marketing copy, generative AI is the likely answer. If the scenario asks for extracting exact entities or identifying sentiment, that is more likely a specialized language service.

At a high level, retrieval-augmented patterns help improve relevance by bringing trusted data into the prompt before the model generates a response. You do not need implementation details for AI-900. What you should know is the reason: instead of relying only on the model’s general training, the solution retrieves current or organization-specific information and uses it to support better answers.

Exam Tip: If a scenario says a chatbot must answer using company documents or knowledge bases, think of a retrieval-augmented approach rather than a model answering from general knowledge alone.

A common trap is believing the model “already knows” a company’s private policies or latest data. Foundation models do not automatically have access to internal enterprise content. If organizational grounding is required, the solution must connect to that data in a controlled way.
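The retrieval-augmented idea can be sketched in a few lines: fetch trusted text first, then place it in the prompt so the model answers from it. The keyword-overlap retrieval below is deliberately naive and purely illustrative; real systems use search services or vector indexes, and a `generate()` step (omitted here) would call the generative model with the grounded prompt.

```python
# High-level sketch of retrieval augmentation. The retrieval here is naive
# keyword overlap, purely for illustration; production systems use search or
# vector indexes. The sample documents are invented for this example.

DOCS = {
    "travel policy": "Flights over 6 hours may be booked in premium economy.",
    "expense policy": "Receipts are required for all expenses over 25 dollars.",
}

def retrieve(question: str) -> str:
    """Pick the document whose words overlap most with the question."""
    q_words = set(question.lower().split())
    best = max(DOCS, key=lambda t: len(q_words & set(DOCS[t].lower().split())))
    return DOCS[best]

def build_grounded_prompt(question: str) -> str:
    context = retrieve(question)  # trusted organizational content, fetched first
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_grounded_prompt("Are receipts required for expenses?"))
```

The structure is the exam-relevant part: the model is instructed to answer from retrieved organizational content rather than from its general training alone.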

Section 5.5: Azure OpenAI Service basics, responsible generative AI, and safety considerations

Azure OpenAI Service is the main Azure offering associated with generative AI on the AI-900 exam. It provides access to powerful generative models in the Azure environment, allowing organizations to build solutions such as chat assistants, summarizers, and content generation tools. From an exam standpoint, you should know the service at a functional level: it enables natural language generation and conversational AI use cases while integrating with Azure’s enterprise capabilities.

You are not expected to memorize deployment steps or programming details. Instead, focus on what the service is for and when to choose it. If the business wants a model to generate text, answer open-ended questions, or support chat and copilot experiences, Azure OpenAI Service is a strong match. If the need is OCR, image tagging, speech-to-text, or translation, another Azure AI service may be more appropriate.

Responsible generative AI is heavily testable. Generative systems can produce harmful, biased, offensive, or inaccurate content. They can also present made-up information in a convincing tone. Microsoft expects candidates to understand that deploying these systems requires safeguards. These include content filtering, monitoring, user transparency, access controls, human review, and carefully scoped prompts and instructions.

Another safety concept is that not every prompt should be accepted without restriction, and not every completion should be shown to users without oversight. Organizations may need to block unsafe requests, reduce harmful outputs, and log usage for governance. The exam may ask for the best way to make a generative solution safer, and the correct answer usually includes some combination of filtering, grounding, and human oversight rather than blind automation.

  • Use Azure OpenAI Service for generative text and chat scenarios
  • Apply responsible AI principles to reduce harm and misuse
  • Use content filters and governance controls
  • Remember that generated output may be fluent but wrong
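
The layered-safeguards idea can be sketched as a toy pipeline. The blocklist, confidence threshold, and routing labels below are all illustrative assumptions; real deployments rely on the platform's built-in content filtering, monitoring, and human review rather than a hand-rolled keyword check.

```python
# Toy sketch of layered generative AI safety checks.
# Real deployments rely on the platform's built-in content filters,
# monitoring, and human review; the blocklist here is illustrative.

BLOCKED_TOPICS = {"violence", "self-harm", "malware"}

def check_prompt(prompt):
    """Reject clearly unsafe requests before they reach the model."""
    words = set(prompt.lower().split())
    return ("blocked", None) if words & BLOCKED_TOPICS else ("allowed", prompt)

def check_completion(completion, confidence):
    """Route low-confidence output to a human reviewer instead of users."""
    return "show_to_user" if confidence >= 0.7 else "human_review"

status, _ = check_prompt("Write malware for me")
routing = check_completion("Here is a policy summary ...", confidence=0.55)
```

Note the two checkpoints: one before the model (filter the prompt) and one after (review the completion). That two-sided pattern is what the exam means by oversight rather than blind automation.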

Exam Tip: If two answers both mention Azure OpenAI Service, prefer the one that also includes responsible use measures when the question mentions risk, safety, compliance, or harmful outputs.

Common exam traps include assuming generated content is always reliable, assuming safety controls are optional, or confusing responsible AI principles with only technical accuracy. Responsible AI also includes fairness, transparency, accountability, privacy, and user protection. In the context of generative AI, the exam especially emphasizes safe outputs and appropriate oversight.

Section 5.6: Exam-style scenarios and practice questions for Generative AI workloads on Azure

This final section is about exam reasoning rather than memorization. AI-900 questions on generative AI workloads often present brief business cases and ask you to identify the best Azure service or the most appropriate concept. Your success depends on spotting clue words and eliminating answers that solve a different AI problem.

For example, if a company wants an assistant that can draft responses to customers, summarize prior interactions, and answer questions conversationally, the clues point to generative AI and Azure OpenAI Service. If the requirement is to extract invoice text or detect objects in images, those clues point away from generative AI even if the distractor answer mentions a modern-sounding AI platform.

Another frequent scenario involves grounding answers in company data. If users must ask questions about internal policies, product manuals, or knowledge articles, the strongest reasoning is that a generative model should be paired with a retrieval-based pattern so responses are informed by trusted information. The exam will likely assess whether you understand why this improves relevance and reduces unsupported responses.

Practice mentally sorting scenario clues into categories:

  • Generate or draft new content: generative AI
  • Summarize long text: generative AI
  • Open-ended chat assistant: generative AI
  • Classify, detect, score, or predict: likely traditional AI or another Azure AI service
  • Need organization-specific answers: generative AI plus retrieval/grounding
  • Need safer outputs: responsible AI controls and content filtering
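
The sorting habit above can be expressed as a small lookup. The clue keywords and category strings are illustrative, not an official mapping:

```python
# Sketch of the clue-sorting habit: map scenario verbs to the AI-900
# category they usually indicate. Keywords are illustrative only.

CLUE_MAP = {
    "generate": "generative AI",
    "draft": "generative AI",
    "summarize": "generative AI",
    "chat": "generative AI",
    "classify": "traditional AI / other Azure AI service",
    "detect": "traditional AI / other Azure AI service",
    "predict": "traditional AI / other Azure AI service",
}

def sort_scenario(description):
    """Return the first matching category for a scenario description."""
    for clue, category in CLUE_MAP.items():
        if clue in description.lower():
            return category
    return "re-read the scenario for the core task"

result = sort_scenario("Draft replies to customer emails")
```

On the real exam you do this mentally, of course; the value of writing it out is seeing that the decision usually hinges on a single business verb.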

Exam Tip: On scenario questions, first identify the task type, then the Azure service, then the safety requirement. This three-step method helps you avoid distractors.

Do not overthink the exam. AI-900 is a fundamentals exam, so answers are usually based on straightforward alignment between problem and service. The test is checking whether you can think like a solution selector at a high level. If you can clearly distinguish generation from analysis, prompts from training, and enterprise grounding from unsupported chat, you are well prepared for this chapter’s domain. Use the chapter quiz and later mock exam to reinforce these patterns until the right service choice feels automatic.

Chapter milestones
  • Understand generative AI concepts in beginner-friendly language
  • Explore copilots, prompts, and foundation model use cases
  • Learn Azure OpenAI Service basics and responsible use
  • Practice AI-900 questions on generative AI workloads
Chapter quiz

1. A company wants to build an internal assistant that can draft email responses, answer employee questions in natural language, and summarize long policy documents. Which Azure capability is the best fit for this requirement?

Show answer
Correct answer: Azure OpenAI Service
Azure OpenAI Service is the best fit because the scenario focuses on generative AI tasks such as drafting text, conversational question answering, and summarization. These are common foundation model and large language model use cases in the AI-900 exam domain. Azure AI Vision is designed for image-based tasks such as object detection or image analysis, not for generating natural language responses. Azure AI Document Intelligence can extract and analyze content from forms and documents, but it does not primarily provide open-ended text generation or conversational drafting capabilities.

2. You are reviewing a proposed AI solution. The business says it needs to classify customer emails into categories such as billing, technical support, and sales. Which option should you identify as the most appropriate approach?

Show answer
Correct answer: Use a traditional language classification capability rather than a generative AI model
A classification requirement points to a specialized language AI capability or traditional machine learning approach, not necessarily generative AI. AI-900 expects you to distinguish between generating new content and classifying existing content. Option A is incorrect because not every language scenario requires a large language model; classification is usually better handled by a specialized service. Option C is incorrect because Azure AI Vision is intended for image and visual workloads, not text email classification.

3. A retail organization wants a copilot that answers questions about company products by using both a foundation model and the organization's product catalog. What is the main reason to include the organization's data in the solution?

Show answer
Correct answer: To ground responses in relevant company information
Including organizational data helps ground model responses in trusted, relevant business content, which improves usefulness and reduces unsupported answers. This aligns with AI-900 concepts around using foundation models with enterprise data for copilots and question-answering scenarios. Option B is incorrect because adding product data does not change the workload into computer vision. Option C is incorrect because responsible AI safeguards such as content filtering, oversight, and transparency are still required even when the model uses company data.

4. A company plans to deploy a chat-based generative AI solution for customer support. Which practice best supports responsible AI use?

Show answer
Correct answer: Use content filtering, human oversight, and clear disclosure that AI is being used
AI-900 expects candidates to recognize that generative AI can produce inaccurate or inappropriate content, so safeguards are necessary. Content filtering, human oversight, and transparency are core responsible AI practices for Azure generative AI workloads. Option A is incorrect because natural-sounding output is not guaranteed to be factual or safe. Option C is incorrect because prompt design and controlled experiences are important safeguards; unrestricted responses generally increase risk rather than reduce it.

5. A marketing team wants an AI solution that can generate first drafts of product descriptions from short prompts such as product name, features, and audience. Which concept best describes how users guide the model's output?

Show answer
Correct answer: Prompting
Prompting is the process of providing instructions or context to guide a foundation model's generated output. In AI-900, prompts are a core concept for generative AI workloads such as drafting and summarization. Option A is incorrect because object detection is a computer vision task used to identify items in images. Option C is incorrect because feature engineering for supervised classification relates to traditional machine learning, not prompt-driven text generation by a large language model.

Chapter 6: Full Mock Exam and Final Review

This final chapter brings the entire AI-900 exam-prep course together into one exam-focused review experience. Up to this point, you have studied the major domains that Microsoft tests in AI Fundamentals: AI workloads and responsible AI considerations, machine learning principles on Azure, computer vision workloads, natural language processing workloads, and generative AI concepts and Azure OpenAI Service scenarios. In this chapter, the goal is not to introduce brand-new material. Instead, it is to sharpen exam-ready reasoning, reinforce service selection patterns, and help you avoid the wording traps that appear frequently in Microsoft fundamentals exams.

The AI-900 exam rewards candidates who can recognize what a scenario is really asking. Many items are not testing whether you can build a model or configure a service in the Azure portal. Instead, they test whether you can identify the correct Azure AI capability for a business problem, distinguish between related concepts, and apply foundational responsible AI principles. This is why a full mock exam matters: it trains your judgment under time pressure and reveals weak spots across the official objective domains.

In the lessons for this chapter, you will work through Mock Exam Part 1 and Mock Exam Part 2 as a full-length, mixed-domain simulation. After that, the Weak Spot Analysis lesson helps you interpret your results by objective, not just by total score. Finally, the Exam Day Checklist lesson turns knowledge into execution. Strong candidates often miss easy points because they rush, overread, or choose answers that sound advanced rather than answers that best fit the stated requirement.

As you review this chapter, keep the exam blueprint in mind. AI-900 is broad rather than deep. You should expect to compare services, identify likely use cases, understand high-level machine learning terminology, and recognize core generative AI concepts such as prompts, copilots, foundation models, and Azure OpenAI use cases. You are not expected to memorize code syntax. You are expected to know what each service is for, when to use it, and which answer most directly solves the stated problem.

Exam Tip: In fundamentals exams, the simplest correct answer is often the best answer. If one option clearly matches the business need with a managed Azure AI service, and another option sounds more complex or custom, the managed service is often the intended choice.

This chapter is organized to mirror the way expert exam coaches think: simulate the test, review answer logic, diagnose recurring mistakes, perform a rapid domain review, and finish with a practical confidence checklist. Use it as your final tuning session before the real exam.

Practice note for the lessons in this chapter (Mock Exam Part 1, Mock Exam Part 2, Weak Spot Analysis, and Exam Day Checklist): set a clear objective for each session, define a measurable success check, and review your results before moving on. Capture what you missed, why you missed it, and what you will test next. This discipline makes your preparation measurable and transferable to the real exam.

Sections in this chapter
  • Section 6.1: Full-length AI-900 mock exam covering all official domains
  • Section 6.2: Answer review with rationale and objective-by-objective mapping
  • Section 6.3: Common traps in Microsoft fundamentals exams and how to avoid them
  • Section 6.4: Rapid review of Describe AI workloads and Fundamental principles of ML on Azure
  • Section 6.5: Rapid review of Computer vision workloads on Azure, NLP workloads on Azure, and Generative AI workloads on Azure
  • Section 6.6: Final confidence checklist, exam-day strategy, and next-step certification planning

Section 6.1: Full-length AI-900 mock exam covering all official domains

Your full mock exam should feel like a realistic cross-domain experience, because the actual AI-900 exam does not isolate topics neatly. A question about responsible AI may appear next to one on OCR, followed by a machine learning concept item and then a generative AI scenario. The purpose of Mock Exam Part 1 and Mock Exam Part 2 is to build stamina and force rapid recognition of the exam objective being tested. Before checking any answers, complete the entire mock under timed conditions. That means no notes, no pausing to research, and no changing your pace simply because you feel unsure.

As you work, classify each item mentally into one of the main domains. Ask yourself: is this testing AI workload identification, responsible AI, ML basics, Azure ML, computer vision, NLP, or generative AI? This habit helps because many fundamentals questions become easier once you identify the domain. For example, if the scenario is about extracting printed text from scanned forms, you can quickly place it in computer vision and narrow your choices toward OCR-related services rather than language analysis or machine learning model training.

The exam also tests your ability to distinguish similar-sounding capabilities. You may need to separate classification from regression, computer vision from OCR, translation from sentiment analysis, or traditional conversational AI from generative AI copilots. During the mock exam, do not just choose what sounds familiar. Choose what directly satisfies the requirement stated in the scenario.

  • Read the final line of the question first to identify the decision being requested.
  • Mentally underline the business verb: classify, predict, detect, extract, translate, summarize, generate, or recommend.
  • Notice whether the question asks for a concept, a workload type, or a specific Azure service.
  • Eliminate answers that are technically related but do not solve the exact problem.

Exam Tip: If a mock item seems difficult, do not assume it is advanced. Often the challenge is wording, not content depth. Rephrase the scenario in plain language and ask what the user actually wants the AI system to do.

A full mock exam is not only about score prediction. It is about pattern recognition. By the end of both mock parts, you should see recurring themes: Azure AI services are purpose-built, responsible AI principles are practical rather than abstract, and exam questions reward precise matching between need and capability. Treat the mock as diagnostic training for the real exam environment.

Section 6.2: Answer review with rationale and objective-by-objective mapping

After completing the mock exam, the most important work begins: answer review. Many learners make the mistake of checking only which questions they missed. Expert preparation goes further. For every item, especially the ones you guessed correctly, ask why the right answer is right and why the other options are wrong. This is how you improve transfer of learning from one scenario to another.

Map each question to its AI-900 objective. If you missed an item about fairness, transparency, privacy, or accountability, log it under the responsible AI objective. If you confused classification with regression or misunderstood what Azure Machine Learning is used for, record that under machine learning principles on Azure. If you mixed up OCR, image classification, object detection, or face-related scenarios, place that under computer vision. Do the same for NLP and generative AI. Your goal is to identify not random mistakes, but clusters of weakness.

When reviewing rationale, pay attention to exam language. Microsoft often tests distinctions such as “best service,” “most appropriate capability,” or “identify the workload.” These are not the same. A “workload” answer might be computer vision or natural language processing, while a “service” answer might be Azure AI Vision, Language, Speech, or Azure OpenAI Service. Missing that distinction can turn a familiar topic into an incorrect response.

Exam Tip: If two answer choices both sound plausible, look for the one that matches the exact scope of the requirement. For example, one option may describe a broad category while another describes the specific Azure service that implements it.

Objective-by-objective mapping also helps you decide what to review next. Suppose your mock exam shows strong performance in vision and NLP but repeated errors in responsible AI and ML terminology. That tells you not to waste time rereading everything equally. Focus your final review where point gains are most likely. Effective candidates study selectively at this stage.

A strong answer review process includes three notes for each missed item: the tested objective, the clue you overlooked, and the rule you will use next time. For example: “Objective: NLP on Azure. Missed clue: requirement was translation, not sentiment analysis. Rule: when language must be converted between languages, choose translation capability first.” This transforms mistakes into exam strategy.

Section 6.3: Common traps in Microsoft fundamentals exams and how to avoid them

Microsoft fundamentals exams are designed to test understanding, not memorization alone, and that means wording traps are common. One of the biggest traps is choosing an answer because it sounds more sophisticated. In AI-900, the exam usually rewards the Azure service or concept that most directly addresses the need, not the one that implies the most custom engineering. If a managed Azure AI service solves the stated problem, that is often the correct direction.

Another frequent trap is confusing related AI tasks. Candidates may mix up image classification with object detection, OCR with document understanding, intent recognition with generative text generation, or model training with model consumption. The exam expects you to know the difference at a practical level. If the scenario needs labels for an entire image, think classification. If it needs to identify and locate multiple items within an image, think object detection. If it needs to read text from an image, think OCR.
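
The vision-task distinctions in that paragraph can be sketched as a simple rule of thumb. The keyword rules below are illustrative assumptions, not how any Azure service actually decides:

```python
# Sketch of the vision-task distinctions: what must happen to the
# image decides the workload. The keyword rules are illustrative.

def vision_task(requirement):
    """Map a plain-language requirement to the tested vision task."""
    req = requirement.lower()
    if "read" in req or "text" in req:
        return "OCR"                   # read printed or handwritten text
    if "locate" in req or "where" in req:
        return "object detection"      # identify items and their location
    return "image classification"      # label the image as a whole

task = vision_task("Read the serial number printed on each label photo")
```

The takeaway is the order of questions: does the system read text, locate items, or label the whole image? Answering that first eliminates most distractors.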

A third trap is overreading requirements. Fundamentals questions often provide extra context that is not necessary for selecting the answer. Focus on the core action requested. Is the business trying to forecast a numeric value, assign a category, extract text, translate speech, or generate content from a prompt? Strip away background details and center the required capability.

  • Watch for broad terms versus specific product names.
  • Separate AI principles from technical implementation details.
  • Do not assume every scenario requires machine learning model training.
  • Be careful when multiple options are all Azure-related but only one fits the exact use case.

Exam Tip: On fundamentals exams, “recognize and choose” matters more than “build and configure.” If an answer requires unnecessary complexity, it is often a distractor.

Finally, beware of partial matches. A distractor may be technically adjacent to the real answer. For example, speech services may analyze spoken audio, but if the requirement is language translation of text already provided, a speech-focused option is incomplete. Likewise, generative AI can create text, but if the requirement is identifying key phrases from existing text, an NLP analytics service is the better fit. Train yourself to reject answers that are related but not exact.

Section 6.4: Rapid review of Describe AI workloads and Fundamental principles of ML on Azure

Start your rapid review with the broadest domain: AI workloads and responsible AI. On the exam, you should recognize common AI workload categories such as machine learning, computer vision, natural language processing, conversational AI, and generative AI. You should also understand the responsible AI principles in practical terms:

  • Fairness: systems should avoid harmful bias.
  • Reliability and safety: systems should perform consistently and minimize risk.
  • Privacy and security: protect data and control access.
  • Inclusiveness: consider diverse users and accessibility.
  • Transparency: people should understand AI system behavior at an appropriate level.
  • Accountability: humans remain responsible for outcomes.

When the exam presents a scenario involving ethical concerns, do not overcomplicate it. Ask which principle is most directly affected. If a hiring model disadvantages a group, fairness is the issue. If users cannot understand how outputs are produced, transparency is central. If personal data is mishandled, privacy and security are implicated.

For machine learning on Azure, know the foundational concepts. Classification predicts categories or labels. Regression predicts numeric values. Clustering groups similar items without predefined labels. You should also recognize training data, features, labels, and the distinction between supervised and unsupervised learning. The exam may test these ideas in business language rather than academic terminology.

Azure Machine Learning is the core platform for building, training, deploying, and managing machine learning models on Azure. At the AI-900 level, you are not expected to know deep implementation detail, but you should know the service purpose and where it fits. Automated machine learning helps identify suitable algorithms and models. Training uses data to create a model, while inferencing applies the model to new data.

Exam Tip: If the question is about predicting a yes/no or category outcome, lean toward classification. If it is about predicting a number, think regression. If there are no labels and the goal is grouping similar records, think clustering.
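
That tip is essentially a two-question decision rule, sketched below. The function and parameter names are hypothetical shorthand for reasoning through a scenario:

```python
# Sketch of the exam tip as a decision rule: pick the ML problem type
# from what the target looks like. Purely illustrative.

def ml_problem_type(has_labels, target_is_numeric=False):
    """Map a scenario's target description to the AI-900 problem type."""
    if not has_labels:
        return "clustering"        # no labels: group similar records
    if target_is_numeric:
        return "regression"        # labeled numeric target: predict a number
    return "classification"        # labeled category: predict a label

churn = ml_problem_type(has_labels=True)                          # Yes/No target
price = ml_problem_type(has_labels=True, target_is_numeric=True)  # dollar amount
segments = ml_problem_type(has_labels=False)                      # no target column
```

Ask "are there labels?" first and "is the target a number?" second, and the three core problem types fall out on their own.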

Also remember the exam may contrast machine learning with rule-based solutions. Not every AI problem needs custom ML. If a requirement maps directly to an Azure AI service that already provides the needed capability, that may be preferable to training a model from scratch.

Section 6.5: Rapid review of Computer vision workloads on Azure, NLP workloads on Azure, and Generative AI workloads on Azure

For computer vision on Azure, focus on task recognition and service alignment. Common tested capabilities include image classification, object detection, OCR, facial analysis scenarios, and analysis of image content. The exam wants you to identify what the system must do with visual input. If the requirement is to read printed or handwritten text from images, think OCR-related capabilities. If the goal is to detect objects and their location, think object detection. If the question involves describing or tagging image content broadly, think image analysis. Be careful with face-related scenarios, because the exam may test awareness of Azure AI capabilities while also expecting sensitivity to responsible AI and appropriate use.

For natural language processing, know the main tasks: sentiment analysis, key phrase extraction, entity recognition, translation, question answering, conversational understanding, and speech-related capabilities such as speech-to-text and text-to-speech. A frequent exam trap is choosing a general language option when the need is clearly speech-specific, or vice versa. Always identify the input and output form first: text in, text out; speech in, text out; text in, speech out; or text in, translated text out.
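
The "identify input and output form first" habit maps neatly onto a lookup table. The pairings below reflect the capability names used in this section; the function itself is an illustrative study aid, not an Azure API:

```python
# Sketch of the input/output-form habit for NLP scenarios.
# The mapping reflects typical AI-900 capability names.

CAPABILITY = {
    ("text", "text"): "text analytics / translation / question answering",
    ("speech", "text"): "speech-to-text",
    ("text", "speech"): "text-to-speech",
    ("speech", "speech"): "speech translation",
}

def pick_capability(input_form, output_form):
    """Return the NLP capability family for an input/output pairing."""
    return CAPABILITY.get((input_form, output_form), "re-check the scenario")

capability = pick_capability("speech", "text")
```

If a scenario mentions audio anywhere, pause and decide whether audio is the input, the output, or both before you look at the answer choices.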

Generative AI is now a major part of AI-900 preparation. You should understand foundation models as large pre-trained models that can be adapted or prompted for many tasks. You should know that prompts guide model behavior and that prompt quality affects output relevance. Copilots are AI assistants embedded into workflows to help users generate content, summarize information, answer questions, or automate parts of tasks. Azure OpenAI Service provides access to generative AI capabilities in Azure with enterprise-oriented governance, security, and integration scenarios.

  • Use computer vision when the primary input is image or video content.
  • Use NLP when the core task is understanding or transforming human language.
  • Use speech capabilities when audio is central to the requirement.
  • Use generative AI when the goal is creating, summarizing, rewriting, or conversationally producing content from prompts.

Exam Tip: Generative AI creates new content; traditional analytics services extract structure or meaning from existing content. If the exam asks for summarization or content generation, generative AI may be the better fit. If it asks for entities, sentiment, or key phrases, choose the analytics-oriented language capability.
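
To make the analytics-versus-generative distinction concrete, here is a toy frequency-based key phrase pass: analytics pulls structure out of existing text, whereas a generative model would produce new text instead. The stopword list is an illustrative assumption; a real solution would use the managed capability in Azure AI Language.

```python
# Toy key phrase extraction: analytics extracts structure from existing
# text rather than generating new text. Stopword list is illustrative;
# Azure AI Language provides this as a managed capability.
from collections import Counter

STOPWORDS = {"the", "a", "is", "and", "of", "to", "it", "this", "was"}

def key_phrases(text, top_k=3):
    """Return the most frequent non-stopword terms as rough key phrases."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    words = [w for w in words if w and w not in STOPWORDS]
    return [word for word, _ in Counter(words).most_common(top_k)]

review = "The battery life is great and the battery charges fast."
phrases = key_phrases(review)
```

Notice that every output word already existed in the input. That is the signature of an analytics task, and it is exactly what separates it from generation on the exam.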

In final review, make sure you can explain not just what each service does, but how to tell it apart from the nearest distractor.

Section 6.6: Final confidence checklist, exam-day strategy, and next-step certification planning

Your final preparation should now shift from studying to execution. A strong Exam Day Checklist starts with readiness basics: confirm your exam appointment, identification requirements, testing environment, and technical setup if testing remotely. Reduce avoidable stress. Last-minute panic hurts recall more than it helps. The night before, review your weak-spot notes, service comparison summaries, and the responsible AI principles. Avoid cramming details that were never central to the exam objectives.

During the exam, pace yourself. Read carefully, but do not overanalyze. If a question seems ambiguous, identify the exact skill being tested and eliminate partial matches. Mark difficult items and move on instead of spending too long early. Fundamentals exams often include enough straightforward questions to build momentum if you stay calm.

Use this final confidence checklist:

  • I can distinguish AI workload categories quickly.
  • I can explain responsible AI principles with practical examples.
  • I can tell classification, regression, and clustering apart.
  • I know the purpose of Azure Machine Learning at a high level.
  • I can map image, text, speech, and generative scenarios to the right Azure AI service family.
  • I can separate analytics tasks from generative tasks.
  • I know how to reject answers that are related but not exact.

Exam Tip: Confidence on exam day does not mean knowing everything. It means trusting your process: identify the objective, isolate the requirement, eliminate distractors, and choose the best fit.

After the exam, think beyond the score. AI-900 is a foundation credential. If you plan to continue in Azure AI, data, or machine learning, use your results to guide your next step. Strong interest in model building may lead toward deeper Azure machine learning study. Strong interest in language, vision, or generative AI solutions may point you toward role-based Azure AI certifications and hands-on service implementation. This chapter, and this course, are designed to make you not only test-ready but direction-ready for the next stage of your Microsoft AI learning journey.

Chapter milestones
  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist
Chapter quiz

1. A company wants to add a chatbot to its customer support portal. The solution must generate natural-sounding responses to open-ended questions by using a large language model. Which Azure service should you recommend?

Show answer
Correct answer: Azure OpenAI Service
Azure OpenAI Service is correct because AI-900 expects you to recognize that generative AI scenarios involving natural-language response generation are best matched to Azure OpenAI models. Azure AI Document Intelligence is used to extract data from forms and documents, not to generate conversational answers. Azure AI Vision is used for image analysis tasks such as tagging, OCR, and object detection, not for open-ended text generation.

2. You are reviewing a practice exam question that asks which principle of responsible AI focuses on ensuring an AI system does not treat similar users differently based on irrelevant characteristics. Which principle is being described?

Show answer
Correct answer: Fairness
Fairness is correct because the responsible AI principle addresses avoiding bias and ensuring comparable outcomes for people in similar situations. Transparency is about making AI systems and their decisions understandable, so it does not directly address unequal treatment. Reliability and safety focuses on consistent and dependable operation under expected conditions, not on whether outcomes are biased across groups.

3. A retailer wants to predict whether a customer is likely to cancel a subscription. The historical data includes a column that indicates either Yes or No for cancellation. What type of machine learning problem is this?

Show answer
Correct answer: Classification
Classification is correct because the target outcome is a discrete label, such as Yes or No. Regression would be used if the business wanted to predict a numeric value, such as monthly revenue or time until cancellation. Clustering is an unsupervised technique used to group similar records when no labeled target is provided, so it does not fit a scenario with known historical cancellation labels.

4. A company needs to process scanned invoices and extract vendor names, invoice totals, and due dates by using a prebuilt AI capability. Which Azure AI service should you choose?

Show answer
Correct answer: Azure AI Document Intelligence
Azure AI Document Intelligence is correct because AI-900 commonly tests recognition of managed document-processing services for extracting structured fields from forms and invoices. Azure Machine Learning could be used to build custom models, but the exam often favors the simplest managed service that directly fits the requirement. Azure AI Speech is for speech-to-text, text-to-speech, and related audio workloads, so it does not match invoice extraction.

5. During a final mock exam review, you see the following requirement: 'Recommend the Azure AI solution that most directly identifies the key phrases and sentiment in product reviews.' Which service should you select?

Show answer
Correct answer: Azure AI Language
Azure AI Language is correct because sentiment analysis and key phrase extraction are core natural language processing capabilities in that service. Azure AI Vision is designed for image-related analysis rather than text analytics. Azure Bot Service helps build conversational bots, but it is not the primary service for extracting sentiment or key phrases from text. On the AI-900 exam, the best answer is the managed service that most directly provides the required NLP capability.