
AI-900 Practice Test Bootcamp

AI Certification Exam Prep — Beginner

Master AI-900 with targeted practice and clear exam-ready reviews

Beginner · AI-900 · Microsoft · Azure AI Fundamentals · Azure

Prepare for Microsoft AI-900 with a focused beginner bootcamp

AI-900: Azure AI Fundamentals is one of the most approachable Microsoft certification exams for learners entering the world of artificial intelligence and Azure. This course is designed for beginners who want a structured, exam-focused path to success using realistic practice questions, domain-based review, and a final mock exam experience. If you are aiming to validate your understanding of AI concepts without needing deep coding skills or prior certification experience, this bootcamp gives you a practical way to prepare.

The course follows the official AI-900 exam direction from Microsoft and organizes your study into six clear chapters. You will begin by learning how the exam works, how registration and scheduling typically happen, what scoring looks like, and how to build a study strategy that fits your time and experience level. From there, the course moves into the major exam domains with targeted explanations and exam-style practice.

Built around the official AI-900 exam domains

The course blueprint maps directly to the core skills tested in the Microsoft Azure AI Fundamentals exam. The course covers:

  • Describe Artificial Intelligence workloads and considerations
  • Describe fundamental principles of machine learning on Azure
  • Describe features of computer vision workloads on Azure
  • Describe features of Natural Language Processing (NLP) workloads on Azure
  • Describe features of generative AI workloads on Azure

Instead of presenting these topics as theory alone, the course uses a practice-driven structure. Each domain chapter includes concept framing, Azure service recognition, common exam traps, and multiple-choice question practice with explanations. This helps you learn the content in the same style you will encounter on test day.

How the 6-chapter structure helps you pass

Chapter 1 introduces the certification journey and helps you understand the exam before you begin memorizing services or definitions. This is especially helpful for first-time certification candidates who may not know what to expect from Microsoft exams.

Chapters 2 through 5 are aligned to the official objective areas. You will review what different AI workloads are, how machine learning concepts appear in Azure, how computer vision and natural language processing scenarios are described, and what generative AI means in the Azure ecosystem. Because AI-900 often tests recognition, comparison, and scenario matching, the chapters emphasize selecting the right service for the right need.

Chapter 6 serves as your final checkpoint with a full mock exam chapter, answer analysis, weak-area review, and an exam-day checklist. This final stage helps turn passive understanding into exam readiness.

Why this course works for beginners

This bootcamp assumes basic IT literacy but no prior Microsoft certification experience. The lessons are structured to reduce overwhelm, explain essential cloud AI vocabulary, and reinforce learning through repetition. You will not just see the right answer—you will understand why the other choices are wrong, which is critical for improving your score on multiple-choice exams.

By the end of the course, you should be able to interpret AI-900 question wording more confidently, connect official exam objectives to Azure services, and identify the most likely correct answer under time pressure. Whether you are a student, career switcher, business professional, or early-stage cloud learner, this course helps create a reliable path toward Microsoft Azure AI Fundamentals success.

Who should enroll now

  • Beginners preparing for the Microsoft AI-900 exam
  • Learners exploring Azure AI services for the first time
  • Professionals who want a vendor-recognized AI fundamentals credential
  • Anyone who prefers practice-question-based exam preparation

If you are ready to begin, register for free and start building your AI-900 exam confidence. You can also browse all courses to continue your Microsoft certification path after completing this bootcamp.

What You Will Learn

  • Describe AI workloads and common considerations for responsible AI on Azure
  • Explain the fundamental principles of machine learning on Azure, including regression, classification, clustering, and Azure Machine Learning concepts
  • Identify computer vision workloads on Azure and select the right Azure AI Vision and related services for exam scenarios
  • Recognize natural language processing workloads on Azure, including text analysis, speech, translation, and conversational AI
  • Describe generative AI workloads on Azure, including Azure OpenAI concepts, copilots, prompts, and responsible use
  • Apply AI-900 exam strategy using domain-based practice questions, explanations, and full mock exams

Requirements

  • Basic IT literacy and comfort using web applications
  • No prior certification experience is needed
  • No prior Azure or AI experience is required
  • Willingness to practice multiple-choice questions and review explanations

Chapter 1: AI-900 Exam Foundations and Study Strategy

  • Understand the AI-900 exam blueprint
  • Plan your registration and scheduling path
  • Build a beginner-friendly study strategy
  • Set a baseline with diagnostic practice

Chapter 2: Describe AI Workloads

  • Recognize core AI workload categories
  • Differentiate common real-world AI scenarios
  • Connect workloads to Azure AI services
  • Practice exam-style workload selection questions

Chapter 3: Fundamental Principles of ML on Azure

  • Learn machine learning fundamentals
  • Compare regression, classification, and clustering
  • Understand model training and evaluation basics
  • Answer AI-900 ML domain practice questions

Chapter 4: Computer Vision and NLP Workloads on Azure

  • Understand key computer vision scenarios
  • Master NLP service capabilities and use cases
  • Map Azure services to vision and language tasks
  • Strengthen accuracy with mixed-domain practice

Chapter 5: Generative AI Workloads on Azure

  • Explain generative AI concepts for beginners
  • Understand Azure OpenAI and copilot scenarios
  • Review prompts, grounding, and safety concepts
  • Solve AI-900 generative AI practice sets

Chapter 6: Full Mock Exam and Final Review

  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist

Daniel Mercer

Microsoft Certified Trainer and Azure AI Engineer Associate

Daniel Mercer is a Microsoft Certified Trainer with years of experience preparing learners for Azure certification exams. He specializes in Microsoft AI, cloud fundamentals, and exam-focused instruction that helps beginners understand objectives and answer confidently.

Chapter 1: AI-900 Exam Foundations and Study Strategy

The AI-900: Microsoft Azure AI Fundamentals exam is designed to test whether you understand the core ideas behind artificial intelligence workloads and how Microsoft Azure services map to those workloads. This first chapter gives you the framework for the entire course. Before you memorize service names or compare computer vision with natural language processing, you need a reliable exam strategy. Candidates often fail not because the content is too advanced, but because they do not understand the blueprint, underestimate the exam format, or study in a way that does not match the objectives Microsoft actually measures.

This chapter focuses on four practical starting points: understanding the AI-900 exam blueprint, planning your registration and scheduling path, building a beginner-friendly study strategy, and setting a baseline with diagnostic practice. These are not administrative details; they are exam-performance skills. The AI-900 exam is broad rather than deeply technical, so success depends on recognizing categories of AI workloads, identifying the most appropriate Azure service for a scenario, and avoiding distractors that sound plausible but do not match the requirement.

One of the most important mindset shifts is to treat AI-900 as an objective-mapping exam, not just a terminology exam. Microsoft expects you to distinguish machine learning from generative AI, computer vision from document intelligence, and speech capabilities from broader language capabilities. The exam also expects familiarity with responsible AI principles, common Azure AI services, and the kinds of business problems each service is built to solve. When a question describes a scenario, your job is to identify the workload first, then the service, then eliminate answers that belong to a different domain.

Exam Tip: On fundamentals exams, Microsoft often tests whether you can match the simplest correct service to the requirement. Do not over-engineer your answer. If a scenario asks for image tagging, look for Azure AI Vision rather than a broader platform unless the wording specifically requires model training, custom labeling, or another specialized capability.

Another key point is that AI-900 is beginner-friendly, but it is not an exam you can walk into unprepared. The exam rewards structured preparation. You should know the official domains, understand how the objectives are weighted, and use those weights to allocate study time. You should also establish your baseline early through diagnostic practice. A baseline score tells you where you are weak before you become emotionally attached to a study plan that may not be effective.

Throughout this book, the strategy will be consistent: learn the concept, connect it to the exam objective, identify common traps, and practice recognizing the wording Microsoft uses. That is especially important in AI-900 because many answers are close cousins. Azure Machine Learning, Azure AI services, Azure AI Vision, Azure AI Language, Azure AI Speech, Azure OpenAI, and responsible AI principles all live in the same broad ecosystem, but each has a distinct exam role. In this chapter, you will build the foundation for mastering those distinctions efficiently and confidently.

  • Learn who the exam is for and why the certification matters.
  • Plan registration, scheduling, and account setup before your study momentum fades.
  • Understand the exam format, scoring expectations, and question styles.
  • Use Microsoft domain weightings to prioritize study time.
  • Create a realistic study plan with repeatable note-taking and revision habits.
  • Use diagnostic and review cycles to turn practice questions into exam readiness.

If you approach AI-900 as a guided pattern-recognition exam, your preparation becomes much more efficient. The goal of this chapter is to help you start correctly so that every hour you invest later produces higher retention and stronger exam decisions.


Sections in this chapter
Section 1.1: AI-900 exam overview, audience, and certification value
Section 1.2: Exam registration, delivery options, and account setup
Section 1.3: Exam format, scoring model, question types, and retake policy
Section 1.4: Official exam domains and how Microsoft weights objectives
Section 1.5: Study planning, time management, and note-taking method
Section 1.6: How to use practice questions, explanations, and review cycles

Section 1.1: AI-900 exam overview, audience, and certification value

The AI-900 exam is Microsoft’s entry-level certification for Azure AI concepts. It is designed for learners who want to demonstrate foundational knowledge of artificial intelligence workloads and the Azure services that support them. The target audience includes students, career changers, business analysts, cloud newcomers, non-developer technical professionals, and anyone beginning an Azure AI learning path. It does not assume deep data science or software engineering experience, which makes it approachable, but it still expects clear conceptual understanding.

From an exam-objective perspective, AI-900 covers several major knowledge areas: AI workloads and responsible AI considerations, machine learning fundamentals on Azure, computer vision workloads, natural language processing workloads, and generative AI workloads on Azure. The exam tests recognition, comparison, and service selection more than implementation. In other words, you are usually being asked what a service does, when it should be used, or which AI category a business scenario belongs to.

The certification has value because it validates AI literacy in an Azure context. For many candidates, it acts as a confidence-building first certification before more specialized exams. For employers, it signals that you can speak accurately about Azure AI solutions, understand the differences between workload types, and participate in solution discussions without confusing key concepts.

A common trap is assuming a fundamentals exam only tests definitions. In reality, Microsoft often frames questions as business requirements. That means you must recognize signals in the wording. If a scenario describes predicting a numerical value, think regression. If it describes grouping similar items without labeled outcomes, think clustering. If it describes extracting text sentiment, key phrases, or entities, think language analysis. If it describes prompt-based content generation or copilots, think generative AI and Azure OpenAI-related concepts.

Exam Tip: Read the scenario for the business need first, not the technology words. Microsoft may include familiar but misleading terms in the answer choices. The correct answer is the one that best fits the required outcome, not the most advanced-sounding service.

As you move through this course, remember that AI-900 is not about proving that you can build a full AI solution from scratch. It is about demonstrating that you can identify the right category, understand the purpose of Azure offerings, and make sound foundational decisions. That is exactly the level of thinking Microsoft expects from a successful candidate.

Section 1.2: Exam registration, delivery options, and account setup

Registration may seem like a simple administrative task, but poor planning here can disrupt your study rhythm. The best approach is to choose a realistic exam window early. Pick a target date that creates urgency without causing panic. Most beginners benefit from scheduling the exam after they have reviewed all domains once and completed at least one diagnostic practice cycle. If you wait until you “feel ready,” you may drift. If you schedule too early, you may study reactively instead of systematically.

Microsoft exams are typically scheduled through the Microsoft certification portal and delivered through an authorized exam provider. Candidates usually choose between in-person testing at a center and online proctored delivery. Each option has tradeoffs. A testing center reduces home-setup risks, while online delivery offers convenience. However, online delivery requires a quiet room, identity verification, rule compliance, and dependable internet and hardware. If your environment is unpredictable, in-person testing may reduce stress.

Make sure your Microsoft Learn account, certification profile, and personal identification details match exactly. Name mismatches are a frequent and avoidable problem. Also verify your time zone, confirmation email, and login credentials in advance. If accommodations are needed, research those requirements early rather than close to exam day.

A useful scheduling strategy is to work backward from your exam date. Divide your preparation into phases: objective review, first practice cycle, weak-area reinforcement, second practice cycle, and final revision. This turns the registration date into a study anchor. The date should motivate consistency, not create uncertainty.

Exam Tip: If you choose online proctoring, do a full technical readiness check before exam week. Candidates sometimes know the material but lose focus because of camera, microphone, browser, or room-compliance issues. Remove avoidable test-day friction.

The exam itself measures knowledge, but your logistics determine whether you can demonstrate that knowledge calmly. Treat registration, delivery selection, and account setup as part of your readiness plan. A well-prepared candidate knows not only what to study, but when and how the exam will be taken.

Section 1.3: Exam format, scoring model, question types, and retake policy

Understanding the exam format helps you manage time and reduce anxiety. Microsoft certification exams commonly include a range of question styles rather than one fixed format. You may encounter standard multiple-choice items, multiple-select items, matching-style prompts, scenario-based questions, or other structured response types. Even on a fundamentals exam, the question format can influence difficulty. A simple concept becomes harder when several answer choices are technically true but only one is the best fit for the stated requirement.

The scoring model is scaled rather than a simple raw percentage. Candidates often focus too much on guessing the exact number of questions needed to pass. That is not the most useful way to prepare. Instead, focus on consistency across all measured domains. A scaled passing score means performance is interpreted through the exam’s scoring framework, so your best strategy is balanced competence, not gaming the score.

Question wording matters. Watch for qualifiers such as “best,” “most appropriate,” “should use,” or “requires the least effort.” These words narrow the acceptable answer. In AI-900, Microsoft often tests service recognition through small wording differences. For example, a generic AI service answer may be wrong if a specific workload service is the better fit. Likewise, an answer involving custom model development may be wrong if the scenario only needs a prebuilt Azure AI capability.

Retake policy awareness also matters psychologically. If you do not pass on the first attempt, you can regroup and return with a stronger plan. Knowing there is a formal retake path can reduce pressure, but do not use that fact as an excuse for under-preparation. The better mindset is to aim for a first-pass result by understanding patterns, not memorizing isolated facts.

Exam Tip: When two answers seem correct, ask which one maps most directly to the scenario with the least extra assumption. Fundamentals exams reward precise matching. The broad platform answer is not always better than the focused service answer.

Your goal is to become fluent in Microsoft’s style of asking foundational questions. Study the content, but also train your eye to recognize qualifiers, distractors, and scope. That skill improves your score even before your technical knowledge increases.

Section 1.4: Official exam domains and how Microsoft weights objectives

The AI-900 blueprint is your roadmap. Microsoft publishes official skills measured, and those domains tell you what the exam is designed to evaluate. For this course, the major domains align with the expected outcomes: describing AI workloads and responsible AI considerations, explaining machine learning fundamentals on Azure, identifying computer vision workloads, recognizing natural language processing workloads, and describing generative AI workloads on Azure. These domains are not all weighted equally, so your study plan should reflect that reality.

One of the smartest moves a candidate can make is to study proportionally. If a domain carries greater exam weight, it deserves more of your time, more practice, and more revision cycles. That does not mean ignoring lower-weighted areas. It means avoiding the common mistake of overstudying a favorite topic while underpreparing for a larger section of the blueprint. Microsoft measures breadth across the exam, so blind spots are risky.

You should also break each domain into likely exam tasks. For example, machine learning content may test the differences among regression, classification, and clustering, as well as broad Azure Machine Learning concepts. Computer vision may test image analysis, OCR-related capabilities, face-related awareness, or document intelligence use cases. Natural language processing may involve sentiment analysis, key phrase extraction, language detection, speech services, translation, or conversational AI. Generative AI may involve Azure OpenAI concepts, prompts, copilots, and responsible usage boundaries.

Responsible AI is especially important because candidates sometimes treat it as a soft topic and skim it. That is a mistake. Microsoft expects you to understand principles such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. On the exam, these ideas may appear as scenario judgments rather than simple definition recall.

Exam Tip: Turn each official objective into a study question for yourself: “Can I explain this workload, identify the right Azure service, and eliminate similar but incorrect options?” If not, that objective is not exam-ready yet.

Domain weightings are more than percentages. They are a study-prioritization tool. In this bootcamp, you should use the blueprint to decide what to review first, what to practice most often, and what to revisit during your final revision week.
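The proportional-study idea above can be made concrete with a small sketch. The domain weights below are illustrative placeholders, not official figures; always check the current "skills measured" page for AI-900 before planning:

```python
# Sketch: split a study-hour budget proportionally across exam domains.
# The weights below are ASSUMED placeholders for illustration only --
# verify the real weightings on Microsoft's official exam page.
DOMAIN_WEIGHTS = {
    "AI workloads and considerations": 0.20,
    "Machine learning fundamentals": 0.20,
    "Computer vision workloads": 0.15,
    "Natural language processing workloads": 0.20,
    "Generative AI workloads": 0.25,
}

def allocate_hours(total_hours, weights):
    """Return hours per domain, proportional to each domain's weight."""
    return {domain: round(total_hours * w, 1) for domain, w in weights.items()}

plan = allocate_hours(20, DOMAIN_WEIGHTS)
for domain, hours in plan.items():
    print(f"{domain}: {hours} h")
```

Even done on paper, this simple calculation keeps a heavier domain from quietly receiving the same study time as a lighter one.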

Section 1.5: Study planning, time management, and note-taking method

A beginner-friendly AI-900 study plan should be simple, consistent, and objective-based. Many candidates fail because they consume too many disconnected resources. A stronger approach is to choose a small number of trusted sources, align them to the Microsoft blueprint, and study in short focused sessions. For example, you might assign specific days to machine learning, vision, language, and generative AI, with one review session each week devoted to responsible AI and service comparison.

Time management matters more than intensity. A sustainable plan of regular sessions usually beats occasional long cramming blocks. Build your schedule around three activities: learning, recall, and review. Learning means reading or watching content. Recall means explaining concepts from memory without looking at notes. Review means checking weak areas and correcting misunderstandings. If your plan only includes learning, you may feel productive while retaining very little.

A practical note-taking method for AI-900 is the comparison-table approach. Create concise notes with columns such as workload, purpose, Azure service, common use case, and common confusion. This is especially helpful because AI-900 tests distinctions. For example, you want notes that clearly separate text analysis from speech, prebuilt vision analysis from custom model development, and traditional machine learning from prompt-based generative AI. Keep another page for responsible AI principles with one sentence and one scenario example for each principle.

Another effective method is the “objective card” system. For every exam objective, write a short card or digital note answering three prompts: what it is, when to use it, and what it is commonly confused with. This builds exam judgment rather than passive recognition.

Exam Tip: If your notes are too detailed to review quickly, they are not optimized for a fundamentals exam. Your final notes should help you compare services and concepts fast, because that is how the exam tests you.

The best study plan is not the most ambitious one. It is the one you can follow consistently until exam day. Build momentum early, track progress by objective, and revise based on evidence from practice rather than guesswork.
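The "objective card" system described above can be sketched as a small data structure. The field names and example content here are assumptions chosen to mirror the three prompts in the text, not an official template:

```python
from dataclasses import dataclass

@dataclass
class ObjectiveCard:
    # Hypothetical card structure: one field per study prompt.
    objective: str       # the exam objective being studied
    what_it_is: str      # "what it is"
    when_to_use: str     # "when to use it"
    confused_with: str   # "what it is commonly confused with"

card = ObjectiveCard(
    objective="Identify computer vision workloads",
    what_it_is="Analyzing images and video (tagging, OCR, object detection)",
    when_to_use="Scenario mentions images, cameras, or extracting text from pictures",
    confused_with="Document intelligence for structured form extraction",
)
print(card.objective)
```

A paper index card or a note-taking app works just as well; the point is that every card forces you to articulate the confusion risk, not just the definition.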

Section 1.6: How to use practice questions, explanations, and review cycles

Practice questions are most useful when they are treated as a diagnostic tool, not a score-chasing tool. Your first goal is to establish a baseline. Take an early set of practice questions after initial exposure to the domains, then analyze the results by objective. Do not be discouraged by a low first score. The value of diagnostic practice is that it reveals where your thinking is weak, where your terminology is fuzzy, and which Azure services you are confusing.

The explanation matters more than whether you got the item right. A correct answer for the wrong reason is still a weakness. When reviewing explanations, ask three questions: Why is the correct answer correct? Why are the other options wrong? What wording in the scenario should have pointed me to the right domain or service? This process trains pattern recognition, which is essential for AI-900.

Use review cycles instead of one-time practice. After each practice session, categorize mistakes into types: concept gap, service confusion, careless reading, or overthinking. Then revise accordingly. Concept gaps require re-study. Service confusion requires side-by-side comparison notes. Careless reading requires slower question analysis. Overthinking requires reminding yourself that fundamentals questions often seek the simplest fitting solution.

A strong cycle looks like this: diagnostic set, review explanations, targeted revision, shorter follow-up set, and then mixed-domain review. As exam day approaches, increase the number of mixed-domain sessions because the real exam does not isolate topics neatly. You need to recognize whether a scenario belongs to machine learning, vision, language, or generative AI without being told.

Exam Tip: Keep an error log. For every missed practice item, write the objective, the trap you fell for, and the clue you missed. Reviewing this log in the final week is often more valuable than rereading all your notes.

Do not memorize practice items. Microsoft changes wording and scenarios. What transfers to the real exam is your ability to identify the workload, map it to the appropriate Azure service, and reject plausible distractors. Used correctly, practice questions become a mirror for your reasoning, and that is exactly what you need to improve before sitting the AI-900 exam.
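The error log recommended above can be kept in any format; as a minimal sketch, each missed item becomes one record, and tallying the mistake types tells you which revision activity to schedule next. The field names and sample entries are assumptions for illustration:

```python
from collections import Counter

# Sketch of an error log: one entry per missed practice item.
# Field names ("objective", "trap", "clue_missed") are assumed, not official.
error_log = [
    {"objective": "NLP workloads", "trap": "service confusion", "clue_missed": "sentiment"},
    {"objective": "ML fundamentals", "trap": "concept gap", "clue_missed": "unlabeled data"},
    {"objective": "NLP workloads", "trap": "careless reading", "clue_missed": "speech-to-text"},
]

# Tally mistakes by type to decide the next revision step:
# concept gaps -> re-study; service confusion -> comparison notes;
# careless reading -> slower analysis; overthinking -> simplest-fit reminder.
by_trap = Counter(entry["trap"] for entry in error_log)
for trap, count in by_trap.most_common():
    print(f"{trap}: {count}")
```

Reviewing the tallies weekly turns a pile of missed questions into a concrete, prioritized revision agenda.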

Chapter milestones
  • Understand the AI-900 exam blueprint
  • Plan your registration and scheduling path
  • Build a beginner-friendly study strategy
  • Set a baseline with diagnostic practice

Chapter quiz

1. You are beginning preparation for the AI-900 exam. Which study approach best aligns with how the exam objectives are measured?

Correct answer: Study according to the official exam domains and their weightings, then use practice results to adjust your focus
The correct answer is to study according to the official exam domains and weightings, then refine your plan using practice results. AI-900 is an objective-mapping exam, so candidates should align study time to measured skills rather than treat all topics equally. Memorizing product names first is insufficient because the exam tests matching workloads and scenarios to the correct service. Focusing only on the newest Azure AI features is also incorrect because fundamentals exams emphasize core concepts and service fit, not just recent announcements.

2. A candidate plans to 'study until ready' and wait to register for AI-900 later. Based on recommended exam strategy, what is the best guidance?

Correct answer: Plan registration, scheduling, and account setup early so logistics do not interrupt study momentum
The best guidance is to plan registration, scheduling, and account setup early. Chapter 1 emphasizes that these are part of exam readiness, not optional administrative tasks. Waiting until all studying is complete can create avoidable delays and reduce momentum. Skipping account setup until exam day is especially poor advice because identity, account, or scheduling issues can interfere with the test experience and are not something candidates should leave to the last minute.

3. A learner takes a diagnostic quiz at the start of AI-900 preparation and scores lower than expected in questions about service selection. What should the learner do next?

Correct answer: Use the diagnostic result to identify weak domains and update the study plan before continuing
The correct response is to use the diagnostic to identify weak areas and adjust the study plan. A baseline is intended to reveal gaps early so the learner can spend time where it matters most. Ignoring the result defeats the purpose of diagnostic practice. Memorizing every service definition may help somewhat, but it is too narrow and does not directly address the broader exam skill of mapping scenarios, workloads, and objectives.

4. A company wants to improve employee readiness for AI-900. The training lead tells candidates: 'When you see a scenario on the exam, first identify the business requirement and AI workload, then choose the Azure service that best fits.' Why is this advice effective?

Correct answer: Because AI-900 questions often require distinguishing similar Azure AI offerings by workload and intended use
This advice is effective because AI-900 commonly tests whether candidates can distinguish related services by workload and use case. For example, the exam may require separating vision, language, speech, machine learning, or generative AI scenarios. Answers emphasizing deep implementation steps are incorrect because the exam does not primarily focus on implementation. Answers emphasizing speed are also wrong because, although time management matters, the exam rewards correct scenario mapping rather than simple speed-based recall.

5. A practice question asks for the best Azure solution to tag objects in images. One learner selects a broad machine learning platform because it seems more powerful. Based on AI-900 exam strategy, what is the best answer approach?

Correct answer: Choose the simplest service that directly matches the requirement unless the scenario specifically calls for custom model training
The correct approach is to choose the simplest service that directly matches the requirement unless the question explicitly requires custom training or advanced customization. On AI-900, Microsoft often tests whether candidates can select the most appropriate out-of-the-box service rather than over-engineer the solution. Choosing the broadest platform is a common distractor and is incorrect if a more direct Azure AI service fits the scenario. Avoiding service-specific answers is also wrong because AI-900 explicitly includes mapping Azure services to AI workloads.

Chapter 2: Describe AI Workloads

This chapter targets one of the most frequently tested AI-900 skill areas: recognizing AI workload categories, matching business scenarios to the correct type of AI solution, and identifying the Azure services most likely to appear in exam questions. Microsoft does not expect you to build models or write code for this exam. Instead, the exam tests whether you can read a short scenario, identify the AI workload being described, and choose the best-fit Azure service or approach. That means your success depends on pattern recognition.

At a high level, AI workloads in AI-900 usually fall into a few major families: machine learning, computer vision, natural language processing, conversational AI, anomaly detection, knowledge mining, and generative AI. The exam often blends these categories with real-world business needs such as predicting sales, extracting text from invoices, analyzing customer sentiment, translating speech, or generating draft content. Your job is to determine what the business is actually asking for. Many wrong answers sound plausible because Azure has several services with overlapping themes. The exam rewards precision.

The first lesson in this chapter is to recognize core AI workload categories. If a scenario asks you to predict a numeric value such as house price, revenue, or time-to-failure, think machine learning and more specifically regression. If it asks you to place an item into a group such as approve or deny, spam or not spam, or churn or no churn, think classification. If it asks you to find patterns in unlabeled data, think clustering. If the scenario involves images, videos, facial features, optical character recognition, or object detection, think computer vision. If it involves text, sentiment, key phrases, entities, translation, speech, or question answering from language, think NLP. If it involves creating new text, code, or images from prompts, think generative AI.

The second lesson is to differentiate common real-world AI scenarios. The AI-900 exam often hides the workload behind business language. A retailer wanting to group customers by purchasing behavior is a clustering scenario, not classification, because no labeled category is given in advance. A bank wanting to decide whether a loan applicant is low, medium, or high risk is classification because the target categories are known. A manufacturer using cameras to detect defects on a production line is computer vision. A company that wants a bot to answer common customer questions may be using conversational AI, and if the bot relies on large language models to generate natural responses, the scenario may also involve generative AI.

The third lesson is to connect workloads to Azure AI services. This is where many candidates lose points. Azure Machine Learning is associated with building, training, and managing machine learning models. Azure AI Vision supports image analysis, OCR, and related visual workloads. Azure AI Language supports text analytics and language understanding tasks. Azure AI Speech supports speech-to-text, text-to-speech, translation in speech scenarios, and speaker-related capabilities. Azure AI Search is often used when the task involves indexing and retrieving information across content repositories. Azure OpenAI Service is associated with generative AI, prompt-driven applications, and copilots.

Exam Tip: On AI-900, do not overcomplicate the scenario. Start by asking, “What is the input? What is the output?” Image in, labels out usually points to vision. Text in, sentiment or entities out points to NLP. Historical data in, future value out points to machine learning. Prompt in, novel content out points to generative AI.
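The input-in, output-out heuristic from this tip can be sketched as a simple lookup. This is purely an illustration of the study technique, not an Azure API; the category strings and the `guess_workload` function are hypothetical names invented for this example.

```python
# Hypothetical sketch of the "What is the input? What is the output?" heuristic.
# The rules mirror the Exam Tip above; real exam stems still require judgment.

def guess_workload(input_kind: str, output_kind: str) -> str:
    """Map a scenario's input and output to a likely AI-900 workload family."""
    rules = {
        ("image", "labels"): "computer vision",
        ("text", "sentiment"): "natural language processing",
        ("historical data", "future value"): "machine learning",
        ("prompt", "novel content"): "generative AI",
    }
    # Anything outside the known patterns means the stem needs a closer read.
    return rules.get((input_kind, output_kind), "re-read the scenario")

print(guess_workload("image", "labels"))          # computer vision
print(guess_workload("prompt", "novel content"))  # generative AI
```

The point of the sketch is the shape of the reasoning: classify the input and output first, and the workload family usually follows mechanically.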

This chapter also prepares you for exam-style workload selection questions. These questions are usually less about implementation details and more about correctly classifying the problem. A common trap is confusing prebuilt AI services with custom machine learning. If the task is common and well-defined, such as OCR, translation, or sentiment analysis, Microsoft often expects you to choose a prebuilt Azure AI service rather than Azure Machine Learning. Another trap is confusing conversational AI with generative AI. A traditional chatbot that follows defined intents and flows is not automatically a generative AI solution. Likewise, a generative copilot that drafts responses is not merely text analytics.

You should also connect workload recognition to responsible AI. AI-900 includes common considerations such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Questions may ask which principle is most relevant when an AI model treats similar groups differently, when users need explanations, or when systems must protect personal data. Even in workload questions, responsible AI can affect the best answer.

By the end of this chapter, you should be able to read a business requirement, identify the workload category, narrow the likely Azure service, eliminate distractors, and make a confident exam choice. That is exactly what AI-900 tests in this domain.

Sections in this chapter
Section 2.1: Describe AI workloads and artificial intelligence concepts
Section 2.2: Identify machine learning, computer vision, NLP, and generative AI scenarios
Section 2.3: Common Azure AI services mapped to workload types
Section 2.4: Responsible AI principles and trustworthy AI considerations
Section 2.5: Choosing the best Azure solution from business requirements
Section 2.6: Exam-style MCQs on Describe AI workloads

Section 2.1: Describe AI workloads and artificial intelligence concepts

For AI-900, an AI workload is a type of business problem that artificial intelligence techniques can help solve. The exam is not asking whether you can engineer a production system. It is asking whether you understand the difference between workload categories and the kinds of tasks each category handles best. Think of workloads as patterns: prediction, classification, pattern discovery, visual interpretation, language understanding, speech processing, and content generation.

Artificial intelligence is the broad umbrella. Under that umbrella, machine learning is the most common method for discovering patterns from data and making predictions. Computer vision deals with interpreting images and video. Natural language processing focuses on understanding or generating human language. Speech AI handles spoken input and output. Generative AI creates new content based on learned patterns and prompts. Conversational AI combines language technologies to interact with users through chat or voice. On the exam, these terms may appear separately or in blended scenarios.

A core concept is that the same organization may use multiple workloads at once. For example, a retail solution might use computer vision to count store traffic, machine learning to forecast demand, natural language processing to analyze reviews, and generative AI to draft product descriptions. If an exam question asks for the best service for one specific requirement, answer only that requirement, not the broader solution architecture.

Exam Tip: When a question includes words like predict, forecast, estimate, score, classify, detect, extract, recognize, translate, summarize, or generate, treat those as workload clues. These verbs often point directly to the answer category.

Common exam traps include mistaking automation for AI and mistaking analytics for AI. A rules-based system is not necessarily AI. A dashboard that reports historical sales is analytics, not machine learning, unless it predicts future outcomes. Another trap is assuming all chat experiences use generative AI. Some are intent-based conversational systems with defined responses. The exam tests whether you can identify the underlying capability rather than the marketing label.

  • Machine learning: learns from data to predict or classify.
  • Computer vision: interprets images, faces, text in images, and objects.
  • NLP: analyzes text, sentiment, entities, language, and meaning.
  • Speech AI: transcribes, synthesizes, translates, and recognizes speech.
  • Generative AI: produces new content from prompts.

Your goal for this section is to be fluent in the vocabulary. If you can correctly label the workload category from a short business description, you have already solved much of the exam question.

Section 2.2: Identify machine learning, computer vision, NLP, and generative AI scenarios

This section maps common scenario wording to exam objectives. Machine learning scenarios usually involve historical data and an outcome to learn. The test commonly distinguishes regression, classification, and clustering. Regression predicts a numeric value such as sales amount, temperature, or delivery time. Classification predicts a category such as fraud or not fraud, pass or fail, or product type. Clustering groups similar items when no labels are provided, such as discovering customer segments. If the question says “group similar” or “find patterns without predefined labels,” choose clustering, not classification.

Computer vision scenarios involve understanding visual content. Typical AI-900 examples include detecting objects in photos, identifying whether an image contains unsafe content, extracting printed or handwritten text with OCR, analyzing faces for attributes, or identifying landmarks and image tags. If the input is an image, scanned document, or video feed, vision is the likely workload. A frequent trap is seeing text extraction from an image and thinking NLP because the output is text. The correct workload is still computer vision because the input is visual.

Natural language processing scenarios work with written or spoken language content. Text analytics tasks include sentiment analysis, language detection, key phrase extraction, named entity recognition, and summarization. Translation changes text from one language to another. Speech scenarios include speech-to-text, text-to-speech, and real-time translation of spoken audio. If the business problem is to understand opinions in customer reviews, identify important terms in documents, or detect the language of submitted text, think NLP.

Generative AI scenarios involve creating new content rather than only classifying or extracting existing information. Examples include drafting emails, summarizing large bodies of content in a conversational style, generating code suggestions, producing natural answers from prompts, or building copilots that interact in flexible language. The key clue is novelty: the system is producing original output based on user instructions and context.

Exam Tip: Ask whether the system is predicting, interpreting, or generating. Predicting points to machine learning. Interpreting images points to vision. Interpreting text or speech points to NLP. Producing new text or code points to generative AI.

Another exam trap is blended scenarios. For example, “an app that reads a receipt aloud” combines OCR and speech synthesis. “A customer support assistant that searches internal manuals and drafts answers” may combine Azure AI Search with generative AI. In those questions, focus on the capability the stem specifically asks you to select.

Section 2.3: Common Azure AI services mapped to workload types

Once you identify the workload, the next exam skill is mapping that workload to the correct Azure service. This is a major scoring area because AI-900 often presents two or three believable options. Azure Machine Learning is the platform for creating, training, deploying, and managing custom machine learning models. If a scenario involves model training, experimentation, data science workflows, or MLOps concepts, Azure Machine Learning is a strong signal.

For computer vision, look for Azure AI Vision. This service aligns with image analysis, OCR, object detection, tagging, and related prebuilt visual capabilities. If the problem is to extract text from scanned images or analyze the contents of pictures, Azure AI Vision is usually the right answer. Do not select Azure Machine Learning unless the question specifically calls for building a custom model beyond the scope of prebuilt services.

For NLP, Azure AI Language is central. It supports text analytics capabilities such as sentiment analysis, key phrase extraction, entity recognition, question answering, and summarization. For speech-based scenarios, Azure AI Speech is the best match. It covers speech-to-text, text-to-speech, speech translation, and other voice capabilities. On the exam, candidates sometimes confuse language and speech because both process human language. Use the input form as your guide: written text usually means Language; spoken audio usually means Speech.

Azure AI Search appears in scenarios where organizations need to index documents, enable search experiences, or extract value from large stores of content. It is especially relevant in knowledge mining-style questions and in architectures that support grounding generative AI responses with enterprise data.

Azure OpenAI Service maps to generative AI workloads. If the scenario mentions prompts, chat completions, copilots, content generation, summarization through large language models, or responsible generative AI use, Azure OpenAI is the likely answer. This service is also a common exam target for distinguishing traditional NLP from modern generative AI.

  • Azure Machine Learning: custom ML lifecycle and model management.
  • Azure AI Vision: image analysis, OCR, visual detection tasks.
  • Azure AI Language: text analytics and language understanding.
  • Azure AI Speech: speech recognition, synthesis, translation.
  • Azure AI Search: indexing and searching enterprise content.
  • Azure OpenAI Service: LLM-based generative AI applications.

Exam Tip: Prefer prebuilt Azure AI services when the scenario describes a common capability that Microsoft already provides, such as OCR, sentiment analysis, or translation. Choose Azure Machine Learning when the scenario clearly emphasizes building or training a custom predictive model.
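The service mapping in the list above can be restated as a lookup table, which is a useful memorization device. Only the Azure service names come from this section; the workload labels and the `pick_service` helper are informal names chosen for this sketch.

```python
# Minimal restatement of the workload-to-service mapping from this section.
# The keys are informal study labels, not official Azure terminology.

WORKLOAD_TO_SERVICE = {
    "custom ML lifecycle": "Azure Machine Learning",
    "image analysis / OCR": "Azure AI Vision",
    "text analytics": "Azure AI Language",
    "speech recognition / synthesis": "Azure AI Speech",
    "indexing and search": "Azure AI Search",
    "LLM-based generation": "Azure OpenAI Service",
}

def pick_service(workload: str) -> str:
    # An unmapped workload means the scenario has not been classified yet.
    return WORKLOAD_TO_SERVICE.get(workload, "clarify the requirement first")

print(pick_service("text analytics"))  # Azure AI Language
```

If you can reproduce this table from memory before the exam, the service-selection questions in this domain become a two-step lookup: classify the workload, then read off the service.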

Section 2.4: Responsible AI principles and trustworthy AI considerations

AI-900 does not only test what AI can do; it also tests how AI should be used responsibly. Microsoft frames responsible AI around principles such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These principles appear both as standalone questions and as scenario-based judgment calls.

Fairness means AI systems should not produce unjustified different outcomes for similar people or groups. If a hiring or lending system disadvantages one demographic without valid reason, fairness is the concern. Reliability and safety mean the system should perform consistently and avoid harmful failures. This matters in healthcare, manufacturing, and autonomous systems where errors can create real-world harm.

Privacy and security focus on protecting personal data and preventing unauthorized access or misuse. If a scenario involves customer records, voice recordings, images of individuals, or sensitive documents, this principle is especially relevant. Inclusiveness means designing AI that works for people with varied abilities, languages, and backgrounds. Transparency refers to making AI behavior understandable, including when AI is being used and how decisions are made. Accountability means humans and organizations remain responsible for AI outcomes.

The exam may test these principles indirectly. For example, if users need to understand why a model denied a loan, transparency is the key principle. If a facial recognition system performs poorly for some groups, fairness and inclusiveness may be implicated. If an organization must ensure only authorized staff can access prompts and generated outputs, privacy and security are central.

Exam Tip: Match the principle to the problem symptom. Biased outcomes suggest fairness. Need for explanation suggests transparency. Protecting sensitive data suggests privacy and security. Human oversight and ownership suggest accountability.

Generative AI brings extra concerns: hallucinations, harmful content, prompt injection, misuse, and overreliance on generated output. On the exam, responsible use of generative AI often includes content filtering, human review, grounding outputs on trusted data, and clear user expectations. Do not assume accurate-sounding generated content is automatically reliable. Microsoft wants candidates to recognize that trustworthy AI requires both technical controls and governance.

Section 2.5: Choosing the best Azure solution from business requirements

This is where exam strategy matters most. AI-900 questions often present short business requirements rather than naming the workload directly. Your task is to translate the requirement into the correct AI category and then into the best Azure service. Start with three filters: input type, desired output, and whether the need is prebuilt or custom.

If the input is tabular historical data and the output is a prediction, the problem usually belongs to machine learning. If the input is an image or scanned page and the output is extracted or interpreted visual information, choose a vision service. If the input is text or speech and the output is understanding, translation, sentiment, or transcription, choose language or speech services. If the output is newly created content in response to a prompt, choose generative AI.

The next filter is whether the business needs a common, ready-made capability or a custom-trained model. AI-900 often expects you to select prebuilt Azure AI services for standard needs because they reduce complexity and time-to-value. If a company wants sentiment analysis on reviews, Azure AI Language is more appropriate than building a custom sentiment model in Azure Machine Learning. If a company wants OCR on receipts, Azure AI Vision is a better fit than designing a custom model.

A common trap is being distracted by words like “AI” or “chatbot” without reading the actual requirement. A chatbot that answers FAQs from a fixed set of intents might be conversational AI but not necessarily generative AI. A search portal across policy documents might call itself an “AI assistant,” but if the core need is indexing and retrieval, Azure AI Search is essential. Read for function, not branding.

Exam Tip: Eliminate wrong answers by asking what they do not do. Azure AI Speech does not analyze images. Azure AI Vision does not forecast sales. Azure Machine Learning is not the default answer for every AI scenario. Azure OpenAI is not the right tool just because text is involved.

On the exam, the best answer is usually the simplest Azure service that satisfies the stated requirement. If Microsoft gives you a built-in service that directly solves the problem, that is often the intended choice. Reserve custom ML and complex architectures for questions that explicitly demand them.

Section 2.6: Exam-style MCQs on Describe AI workloads

This chapter closes by preparing you for the style of multiple-choice thinking you will face, even though the actual practice questions appear elsewhere in the course. In this domain, Microsoft commonly tests whether you can identify the workload category first and the Azure service second. The fastest route to the correct answer is to spot the clue words, classify the task, and then verify the service fit.

When reviewing MCQs, avoid jumping straight to Azure product names. First decide: Is this machine learning, computer vision, NLP, speech, or generative AI? Then ask whether the requirement is prebuilt or custom. This two-step method dramatically reduces confusion between similar answers. For instance, OCR belongs to vision even if the final result is text. Sentiment belongs to language even if it could be modeled in custom ML. Prompt-based drafting belongs to generative AI even if it resembles summarization.

Pay attention to the exact wording of answer choices. The exam often includes one answer that is technically possible but not the best or most direct Azure solution. AI-900 measures appropriateness, not just possibility. A custom model might solve a problem, but if a managed Azure AI service solves it directly, that service is usually the better answer.

Exam Tip: In scenario MCQs, underline the noun and verb mentally. “Invoice image” plus “extract text” points to vision. “Customer reviews” plus “determine positive or negative” points to language. “Historical transactions” plus “predict fraud” points to machine learning classification. “Prompt” plus “draft response” points to Azure OpenAI.

Another useful exam habit is identifying distractor patterns. Distractors often come from adjacent categories: Azure AI Language versus Azure AI Speech, Azure AI Vision versus Azure Machine Learning, or conversational AI versus generative AI. If you can explain why each wrong answer is wrong, your understanding is strong enough for the real test.

As you move into practice questions, keep your focus on workload recognition, service mapping, and responsible AI cues. Those three skills together cover the heart of what AI-900 expects when it asks you to describe AI workloads.

Chapter milestones
  • Recognize core AI workload categories
  • Differentiate common real-world AI scenarios
  • Connect workloads to Azure AI services
  • Practice exam-style workload selection questions
Chapter quiz

1. A retail company wants to predict next month's sales revenue for each store by using historical sales, promotions, and seasonal trends. Which AI workload best fits this requirement?

Correct answer: Regression machine learning
This is a regression machine learning scenario because the goal is to predict a numeric value: future sales revenue. Clustering would be used to group stores or customers based on similar patterns when no labeled outcome is provided. Computer vision is incorrect because the scenario does not involve images, video, or visual analysis. On AI-900, predicting a number from historical data is a strong signal for regression.

2. A manufacturer uses cameras on a production line to identify damaged packaging before products are shipped. Which Azure AI service is the best fit for this scenario?

Correct answer: Azure AI Vision
Azure AI Vision is the best fit because the input is images from cameras and the task is visual inspection. Azure AI Language is for text-based workloads such as sentiment analysis, entity extraction, and key phrase detection, so it does not match an image scenario. Azure AI Speech is used for speech-to-text, text-to-speech, and related audio workloads, which are not part of this requirement. AI-900 commonly tests the pattern of image in, labels or detection out = vision.

3. A bank wants to assign each loan applicant to one of three known risk categories: low, medium, or high. Which type of machine learning problem is this?

Correct answer: Classification
This is classification because the model must choose from predefined labels: low, medium, or high risk. Clustering would be appropriate only if the bank wanted to discover natural groupings in applicant data without predefined categories. Optical character recognition is a computer vision capability used to extract text from images or documents, which is unrelated to assigning risk classes. On the AI-900 exam, known categories usually indicate classification.

4. A company wants users to type prompts and receive draft marketing copy generated for new product launches. Which Azure service is most appropriate?

Correct answer: Azure OpenAI Service
Azure OpenAI Service is the best choice because the scenario involves prompt-based generation of new content, which is a generative AI workload. Azure AI Search is primarily used to index, retrieve, and search across content repositories; it does not by itself generate draft copy. Azure Machine Learning is used to build and manage custom machine learning models, but the exam typically maps prompt-in, novel content-out scenarios to Azure OpenAI Service. This aligns with official AI-900 domain knowledge around generative AI services.

5. A support team wants to analyze thousands of customer reviews to determine whether each review expresses a positive, neutral, or negative opinion. Which AI workload is being described?

Correct answer: Natural language processing
This is natural language processing because the input is text and the desired output is sentiment. Computer vision is incorrect because there is no image or video data involved. Anomaly detection focuses on identifying unusual patterns or outliers, such as fraudulent transactions or equipment failures, rather than determining sentiment in written text. In AI-900 scenarios, sentiment analysis is a standard NLP use case and is commonly associated with Azure AI Language.

Chapter 3: Fundamental Principles of ML on Azure

This chapter targets one of the highest-value areas on the AI-900 exam: understanding the fundamental principles of machine learning and recognizing how Azure supports common machine learning scenarios. On the test, Microsoft is not trying to turn you into a data scientist. Instead, the exam measures whether you can identify the right machine learning approach for a business problem, understand the meaning of basic training and evaluation concepts, and recognize where Azure Machine Learning fits in the Azure AI ecosystem.

You should expect scenario-based questions that describe a business need and ask you to choose whether the solution is regression, classification, or clustering. You may also see questions that test whether you understand the role of features, labels, datasets, training, validation, and testing. In addition, the exam often checks whether you can distinguish core Azure Machine Learning capabilities from other Azure AI services. That means this chapter is not just about memorizing definitions. It is about learning how to decode exam wording and eliminate plausible but incorrect answer choices.

The first lesson in this chapter is to learn machine learning fundamentals in a practical Azure context. Machine learning is a technique that uses data to train models that can make predictions or identify patterns. In Azure, this idea appears through Azure Machine Learning, automated machine learning, data labeling, model training pipelines, and responsible AI practices. The exam usually presents machine learning as a problem-solving tool: predict a numeric value, assign a category, or discover natural groupings in data. Your job is to map the wording in the scenario to the correct concept.

The second lesson is to compare regression, classification, and clustering. This comparison is heavily tested because the three can sound similar under time pressure. A regression model predicts a numeric value, such as future sales or the price of a house. A classification model predicts a category, such as whether a loan application is approved or whether an email is spam. A clustering model groups similar items when no predefined labels are available, such as segmenting customers by behavior. Many AI-900 candidates miss points here because they focus on the industry use case rather than the prediction target.

The third lesson is to understand model training and evaluation basics. The exam wants you to know what data is used for training versus validation versus testing, what features and labels are, and why evaluation metrics matter. It also helps to recognize overfitting at a conceptual level. If a model performs extremely well on training data but poorly on new data, it may have learned patterns that do not generalize. This is a classic exam trap because answer choices may use general business language to describe what is really a model quality issue.
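The overfitting idea in this lesson can be made concrete with a toy comparison: a "model" that memorizes its training examples scores perfectly on training data but fails on anything new, while a simple rule that captures the underlying pattern transfers to unseen data. Everything here (the data, the `memorizer` and `general_rule` functions) is invented for illustration.

```python
# Conceptual sketch of overfitting with toy odd/even data.
train = {1: "odd", 2: "even", 3: "odd", 4: "even"}
unseen = {5: "odd", 6: "even"}

def memorizer(x):
    # Overfit model: only knows the exact training examples.
    return train.get(x, "unknown")

def general_rule(x):
    # Generalizing model: learned the underlying pattern.
    return "even" if x % 2 == 0 else "odd"

def accuracy(model, data):
    return sum(model(x) == y for x, y in data.items()) / len(data)

print(accuracy(memorizer, train))      # 1.0  -> perfect on training data
print(accuracy(memorizer, unseen))     # 0.0  -> fails on new data
print(accuracy(general_rule, unseen))  # 1.0  -> generalizes
```

That gap between training performance and unseen-data performance is exactly the symptom the exam describes: patterns that do not generalize.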

The fourth lesson is exam application: answer AI-900 ML domain practice questions by identifying clue words. Words like forecast, estimate, and predict amount suggest regression. Words like approve, detect fraud, and categorize suggest classification. Words like group customers, discover segments, and find similarities suggest clustering. On exam day, this clue-word strategy can save time and reduce second-guessing.
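The clue-word strategy above can be sketched as a keyword scanner. The clue lists come from this lesson; the `classify_scenario` function is a hypothetical study aid, and real exam stems need judgment rather than string matching.

```python
# Hypothetical clue-word scanner for the strategy described above.
CLUES = {
    "regression": ["forecast", "estimate", "predict amount", "predict price"],
    "classification": ["approve", "detect fraud", "categorize", "spam"],
    "clustering": ["group customers", "discover segments", "find similarities"],
}

def classify_scenario(stem: str) -> str:
    """Return the ML problem type suggested by clue words in an exam stem."""
    stem = stem.lower()
    for problem_type, words in CLUES.items():
        if any(w in stem for w in words):
            return problem_type
    return "unclear"

print(classify_scenario("Forecast next month's energy usage"))      # regression
print(classify_scenario("Group customers by purchasing behavior"))  # clustering
```

Used as a drill, this mirrors the exam-day habit: spot the verb, name the problem type, then verify against the full scenario.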

Azure Machine Learning is especially important because it is the main Azure platform for building, training, managing, and deploying machine learning models. You do not need deep implementation knowledge for AI-900, but you should understand the service at a high level. Know that Azure Machine Learning supports data preparation, automated machine learning, training, model management, deployment, and monitoring. Also understand that automated machine learning helps users automatically try multiple algorithms and preprocessing options to find a strong model for a given prediction task.

Exam Tip: If a question asks which Azure service data scientists use to train, manage, and deploy machine learning models at scale, the answer is usually Azure Machine Learning, not Azure AI Vision, Azure AI Language, or Azure OpenAI.

Another concept that appears in foundational exam questions is responsible machine learning. Even though AI-900 is introductory, Microsoft expects you to recognize that model quality is not the only concern. Fairness, transparency, reliability, privacy, and accountability matter. If a question asks what organizations should consider when building or deploying models, do not choose an answer that focuses only on accuracy. Responsible AI principles are a recurring exam theme across the course.

As you work through this chapter, think like an exam candidate. For each concept, ask yourself three things: What is the definition? What clues identify it in a scenario? What wrong answer is Microsoft likely to place beside it? That approach will help you move from passive reading to active recognition, which is exactly what this type of certification exam requires.

  • Map numeric prediction problems to regression.
  • Map category prediction problems to classification.
  • Map unlabeled grouping problems to clustering.
  • Know the purpose of training, validation, and testing datasets.
  • Recognize common evaluation language such as accuracy and root mean squared error at a conceptual level.
  • Associate Azure Machine Learning with end-to-end ML lifecycle capabilities.
  • Remember that responsible AI considerations are part of good ML practice and can appear in scenario questions.

By the end of this chapter, you should be ready to explain fundamental machine learning principles on Azure in plain language and identify the best answer in AI-900-style scenarios. That is the real target of this domain: not advanced model building, but confident, accurate recognition of machine learning concepts and Azure service fit.

Sections in this chapter
Section 3.1: Fundamental principles of machine learning on Azure
Section 3.2: Regression, classification, and clustering use cases

Section 3.1: Fundamental principles of machine learning on Azure

Machine learning is a branch of AI in which systems learn patterns from data and use those patterns to make predictions or decisions. For AI-900, the exam objective is not to test mathematical depth. Instead, it tests whether you understand the purpose of machine learning and can match common problem types to the correct ML approach. In Azure, the main platform associated with building and operationalizing machine learning solutions is Azure Machine Learning.

At a foundational level, machine learning starts with data. A model is trained using historical examples so it can make predictions on new data. This is why many exam questions describe a business wanting to use past transactions, prior outcomes, sensor readings, or customer records to anticipate future results. If the solution learns from examples rather than following only hard-coded rules, you are likely in machine learning territory.

Azure Machine Learning provides capabilities that support the ML lifecycle, including data preparation, training, experiment tracking, model management, and deployment. The service is designed for data scientists, developers, and ML engineers who need a managed platform for creating and operationalizing models. On the exam, you should recognize Azure Machine Learning as the right service when the scenario mentions training custom models or managing the model lifecycle.

Exam Tip: AI-900 questions often contrast Azure Machine Learning with prebuilt Azure AI services. If the requirement is to use a prebuilt capability such as image analysis or sentiment analysis, Azure AI services may fit better. If the requirement is to build a custom predictive model from your own dataset, think Azure Machine Learning.

A common trap is assuming that all AI on Azure is the same. It is not. Machine learning is one category of AI workload. Computer vision, natural language processing, and generative AI are separate categories, even though they can overlap in real solutions. The exam tests your ability to separate these domains. If the problem is “predict the likely sales amount,” that is ML. If the problem is “identify objects in an image,” that is computer vision.

Another principle to remember is that ML models generalize from examples. They should perform well not only on the data used in training, but also on unseen data. That is why the exam includes concepts like validation, testing, and overfitting. The core principle is simple: a useful ML model must learn patterns that transfer to new situations.

Section 3.2: Regression, classification, and clustering use cases

This section is one of the most tested areas in the AI-900 machine learning objective. You must be able to compare regression, classification, and clustering quickly and accurately. The easiest way to do that is to focus on the output the model is expected to produce.

Regression predicts a numeric value. Typical examples include forecasting sales revenue, estimating delivery time, predicting energy usage, or calculating insurance cost. If the answer must be a number on a scale rather than a category, regression is usually correct. The exam often uses words such as predict price, estimate amount, or forecast value as clues.
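
The idea can be shown in a few lines of Python. This is a toy least-squares fit on hypothetical data, not an Azure Machine Learning workflow; the point is only that a regression model's output is a number on a scale.

```python
# Toy regression: fit y = a*x + b by least squares on past data,
# then predict a numeric value for a new input.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope and intercept from the closed-form least-squares solution.
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical history: ad spend (thousands) -> revenue (thousands).
spend = [1.0, 2.0, 3.0, 4.0]
revenue = [3.0, 5.0, 7.0, 9.0]

a, b = fit_line(spend, revenue)
forecast = a * 5.0 + b  # predict revenue for a new spend level
print(round(forecast, 2))  # a number on a scale, not a category
```

Notice that the answer is a quantity; that is the signature of regression on the exam.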

Classification predicts a category or class label. Common examples include determining whether a transaction is fraudulent, deciding if a customer will churn, categorizing emails as spam or not spam, or assigning a medical image to a diagnostic category. Even if there are only two categories, such as yes/no, approve/deny, or fraud/not fraud, the problem is still classification. Many exam takers confuse binary classification with regression because both can look predictive. The distinction is the output: category versus number.
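
As a contrast, here is a toy nearest-centroid classifier on hypothetical transaction data. The output is always a category label, never a number, which is the distinction the exam keeps testing.

```python
# Toy binary classification: label a transaction "fraud" or "ok" by
# comparing it to the average of labeled historical examples.

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def classify(x, centroids):
    # Return the class whose centroid is closest (nearest-centroid rule).
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Hypothetical labeled history: [amount, hour-of-day] -> known label.
history = {
    "fraud": [[900.0, 3.0], [850.0, 2.0]],
    "ok":    [[40.0, 14.0], [60.0, 12.0]],
}
centroids = {label: centroid(rows) for label, rows in history.items()}

print(classify([870.0, 1.0], centroids))  # a category, not a number
```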

Clustering groups data items based on similarity when labels are not already provided. For example, a company might want to group customers by purchasing behavior, identify usage patterns, or discover naturally occurring segments in data. Clustering is unsupervised because the algorithm is not learning from known correct labels. On the exam, words like group, segment, organize by similarity, and discover patterns are strong clustering indicators.
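
Clustering can be sketched with a few iterations of one-dimensional k-means on hypothetical spend values. Note that no labels appear anywhere in the input: the segments are discovered, not predefined.

```python
# Toy clustering: group unlabeled spend values into k segments with
# a few iterations of 1-D k-means.

def kmeans_1d(values, centers, iters=10):
    for _ in range(iters):
        # Assign each value to its nearest center.
        groups = {c: [] for c in centers}
        for v in values:
            nearest = min(centers, key=lambda c: abs(v - c))
            groups[nearest].append(v)
        # Move each center to the mean of its group.
        centers = [sum(g) / len(g) for g in groups.values() if g]
    return sorted(centers)

# Hypothetical monthly spend for ten customers; no segment labels exist.
spend = [10, 12, 11, 95, 102, 99, 13, 98, 9, 100]
segments = kmeans_1d(spend, centers=[0.0, 50.0])
print(segments)  # two discovered segment centers
```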

Exam Tip: Ask yourself, “Is the goal a number, a label, or a grouping?” Number means regression. Label means classification. Grouping without known labels means clustering.

A common trap is answer choices that all sound reasonable from a business perspective. For example, customer segmentation might sound like classification because customers end up in categories, but if those categories are discovered from the data rather than predefined, the correct concept is clustering. Likewise, predicting whether a customer will buy a product is classification, not regression, because the output is a category such as yes or no.

When the exam asks you to compare these methods, think functionally rather than technically. Microsoft is testing whether you can identify the business use case fit, not whether you can derive the algorithm. Keep your focus on the nature of the prediction target and you will avoid many of the most common mistakes.

Section 3.3: Features, labels, datasets, training, validation, and testing

To answer ML questions correctly, you need a clean mental model of the data terms the exam uses. Features are the input variables used by a model to make a prediction. For example, if you are predicting house price, features might include square footage, number of bedrooms, and location. The label is the value the model is trying to predict. In the house-price example, the label is the sale price.
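
One way to keep the vocabulary straight is to picture a single training row. The field names and values below are hypothetical; the structure is what matters: features go in, the label is what comes out.

```python
# One supervised-learning example row: features in, label out.
# (Hypothetical house data for illustration.)

row = {
    "features": {           # inputs the model is allowed to see
        "square_feet": 1800,
        "bedrooms": 3,
        "location": "suburb",
    },
    "label": 315_000,       # the known sale price the model learns to predict
}

feature_names = sorted(row["features"])
print(feature_names, row["label"])
```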

In supervised learning scenarios such as regression and classification, the training dataset includes both features and known labels. The model learns relationships between the inputs and the known outcomes. If the data does not contain the correct output column, then a supervised model cannot learn in the usual way. That is why clustering differs: it typically uses unlabeled data to discover groups.

Training is the process of fitting the model to the data. Validation is used during model development to compare models, tune settings, or make decisions about model performance before final testing. Testing is used to evaluate how well the final model performs on unseen data. Although the exact workflow can vary, the exam expects you to know the basic role of each dataset split.
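
A minimal sketch of the three splits, assuming an illustrative 60/20/20 proportion (the exam does not mandate specific ratios):

```python
# Shuffle once, then split into training (fit the model), validation
# (compare/tune models), and test (final check on unseen data).

import random

def split_dataset(rows, train=0.6, val=0.2, seed=42):
    rows = rows[:]                      # copy so the caller's list is untouched
    random.Random(seed).shuffle(rows)   # deterministic shuffle for the example
    n_train = int(len(rows) * train)
    n_val = int(len(rows) * val)
    return (rows[:n_train],                     # used to fit the model
            rows[n_train:n_train + n_val],      # used to tune/compare models
            rows[n_train + n_val:])             # used once, for final evaluation

data = list(range(100))                 # stand-in for 100 labeled examples
train_set, val_set, test_set = split_dataset(data)
print(len(train_set), len(val_set), len(test_set))
```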

Exam Tip: If a question asks which dataset should be used to assess final model performance on new data, choose the test dataset. If it asks which dataset is used to train the model, choose the training dataset.

A frequent exam trap is wording that suggests the model should be evaluated on the same data used for training. That sounds efficient, but it does not provide a reliable measure of generalization. Good evaluation requires separate data, especially for testing. Another trap is confusing features with labels. Remember: features go in, prediction comes out.

From a practical perspective, the exam also expects you to understand that data quality matters. Missing values, inconsistent formats, duplicate records, or biased sampling can reduce model performance or fairness. While AI-900 does not go deep into data engineering, it does expect you to recognize that machine learning outcomes depend heavily on the quality and representativeness of the training data.

Section 3.4: Model evaluation metrics, overfitting, and responsible ML basics

Model evaluation is about determining how well a trained model performs. On AI-900, you are expected to know this conceptually rather than mathematically. For regression, common evaluation language includes error-based measures such as mean absolute error or root mean squared error, which reflect how close predicted numeric values are to actual values. For classification, common language includes accuracy, precision, recall, and confusion matrix concepts. You do not need advanced formulas, but you should know that different metrics help measure model effectiveness.
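
The metrics named above can be computed from first principles. A short illustration with made-up values shows what each one measures:

```python
# Conceptual metric calculations: error measures for regression and
# accuracy/precision/recall for classification.

def mae(actual, predicted):
    # Mean absolute error: average size of the numeric miss.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def classification_metrics(actual, predicted, positive="fraud"):
    tp = sum(1 for a, p in zip(actual, predicted) if a == positive and p == positive)
    fp = sum(1 for a, p in zip(actual, predicted) if a != positive and p == positive)
    fn = sum(1 for a, p in zip(actual, predicted) if a == positive and p != positive)
    correct = sum(1 for a, p in zip(actual, predicted) if a == p)
    return {
        "accuracy": correct / len(actual),
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # of flagged, how many right
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # of real positives, how many caught
    }

err = mae([10, 20, 30], [12, 18, 33])
scores = classification_metrics(
    actual=["fraud", "ok", "ok", "fraud"],
    predicted=["fraud", "ok", "fraud", "ok"])
print(err, scores)
```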

Accuracy alone is not always enough. In some classification problems, especially where classes are imbalanced, a model can appear accurate while still performing poorly on the outcomes that matter most. The exam may not dive deeply into class imbalance, but it can still test your understanding that model quality should be evaluated thoughtfully rather than by one simplistic number.

Overfitting occurs when a model learns the training data too closely, including noise or random patterns, and then performs poorly on new data. This is a core concept because it explains why training performance is not the same as real-world performance. If a scenario says a model performs extremely well during training but poorly after deployment or on test data, overfitting is a likely answer.
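
An exaggerated toy example makes the symptom concrete: a "model" that simply memorizes its training rows is perfect on training data and poor on data it has never seen.

```python
# Overfitting in miniature: memorization yields perfect training
# performance and weak performance on unseen data.

def train_memorizer(rows):
    lookup = dict(rows)      # memorize every (input, label) pair
    default = rows[0][1]     # crude fallback for unseen inputs
    return lambda x: lookup.get(x, default)

train = [(1, "ok"), (2, "fraud"), (3, "ok"), (4, "fraud")]
test = [(5, "fraud"), (6, "ok"), (7, "fraud")]

model = train_memorizer(train)

def accuracy(model, rows):
    return sum(1 for x, y in rows if model(x) == y) / len(rows)

train_acc = accuracy(model, train)   # perfect: it memorized these rows
test_acc = accuracy(model, test)     # poor: nothing generalized
print(train_acc, test_acc)
```

The gap between the two numbers is exactly the pattern the exam describes when it hints at overfitting.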

Exam Tip: High training performance and low test performance usually point to overfitting. The exam may describe the symptom instead of using the term directly.

Responsible ML basics are also part of the objective domain. A model should not only be accurate; it should also be fair, reliable, transparent, and privacy-conscious. If a dataset underrepresents certain groups, the resulting model may produce biased outcomes. Questions in this area may ask what an organization should consider before deploying an ML solution. Look for answers that include fairness, interpretability, accountability, and ongoing monitoring.

A common trap is choosing the answer that focuses only on speed or accuracy while ignoring ethical or governance concerns. Microsoft’s certification exams consistently reinforce responsible AI principles. If the question asks about good ML practice in production, the best answer often includes both technical quality and responsible use considerations.

Section 3.5: Azure Machine Learning capabilities and automated machine learning concepts

Azure Machine Learning is Azure’s cloud platform for building, training, managing, and deploying machine learning models. For AI-900, you should understand the broad capabilities of the service rather than implementation details. It supports the end-to-end machine learning lifecycle, including preparing data, running experiments, tracking models, registering assets, deploying models to endpoints, and monitoring them after deployment.

The service is useful when an organization wants to create custom models using its own business data. This is a key distinction from Azure AI services that provide prebuilt intelligence for tasks such as vision or language. In exam scenarios, if the requirement is to train a custom predictive model for business outcomes, Azure Machine Learning is usually the best fit.

Automated machine learning, often called automated ML or AutoML, is another important exam concept. Automated ML helps users train and tune models automatically by trying different algorithms, preprocessing options, and parameter settings to identify a strong-performing model for a particular task. This is especially useful when teams want to accelerate model development or when they do not want to manually test many combinations.
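
Conceptually, automated ML is a search loop: fit each candidate, score it on validation data, keep the best. The sketch below uses trivial stand-in "models" to show the shape of the loop; it is not the Azure automated ML API or real algorithms.

```python
# Conceptual AutoML loop: try candidates, score on validation data,
# keep the best performer. The candidates are trivial stand-ins.

candidates = {
    "always_mean": lambda xs, ys: (lambda x: sum(ys) / len(ys)),
    "scaled":      lambda xs, ys: (lambda x: x * (sum(ys) / sum(xs))),
}

def mae(model, xs, ys):
    return sum(abs(model(x) - y) for x, y in zip(xs, ys)) / len(xs)

train_x, train_y = [1, 2, 3], [2, 4, 6]
val_x, val_y = [4, 5], [8, 10]

scores = {}
for name, trainer in candidates.items():
    model = trainer(train_x, train_y)        # fit on training data
    scores[name] = mae(model, val_x, val_y)  # compare on validation data

best = min(scores, key=scores.get)
print(best, scores[best])
```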

Exam Tip: If the question asks for a way to reduce the manual effort of model selection and hyperparameter tuning, automated machine learning is a strong answer.

Do not confuse automated ML with a no-code AI service that simply consumes text or images. Automated ML is still part of machine learning model development; it just automates much of the trial-and-error involved in selecting and optimizing models. The user still defines the prediction task, provides data, and evaluates results.

Another common exam trap is assuming Azure Machine Learning is only for training. In reality, the service also supports deployment and operational management. If a scenario mentions managing models through their lifecycle, tracking experiments, or deploying models as services, Azure Machine Learning remains the central answer. Learn to associate the service with the full ML workflow, not only with training.

Section 3.6: Exam-style MCQs on Fundamental principles of ML on Azure

This chapter ends with an exam-prep strategy section focused on how to answer AI-900-style multiple-choice questions in the machine learning domain. The most effective approach is pattern recognition. Most questions in this objective can be solved by identifying what the organization is trying to predict, what kind of data is available, and whether the requirement points to a machine learning method or an Azure service.

Start by finding the target outcome in the scenario. If the output is a numeric amount, lean toward regression. If the output is a predefined category, lean toward classification. If the task is to discover natural groups without known labels, lean toward clustering. This simple framework eliminates many distractors immediately and is one of the highest-return strategies for this exam domain.
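
The framework is simple enough to write down as a lookup, which can double as a flash-card drill. The output-type labels here are informal study aids, not exam terminology.

```python
# The output-type framework as a lookup: ask what the model must
# produce, then read off the ML approach.

def ml_approach(output_type):
    return {
        "number": "regression",      # e.g., forecast a sales amount
        "label": "classification",   # e.g., approve/deny a loan
        "grouping": "clustering",    # e.g., discover customer segments
    }[output_type]

print(ml_approach("number"), ml_approach("grouping"))
```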

Next, identify the lifecycle clue words. Words like “train,” “deploy,” “manage,” “track experiments,” and “custom model” point toward Azure Machine Learning. Phrases like “automatically select the best algorithm” or “reduce manual tuning effort” point toward automated machine learning. When the exam combines business language with service selection, this clue-word approach is especially useful.

Exam Tip: Do not overthink introductory exam questions. AI-900 usually tests broad understanding and service fit, not deep implementation detail. If one answer clearly matches the task at a foundational level, it is often the right choice.

Be careful with partial truths. Many wrong answers describe something related to AI but not specific enough for the requirement. For example, a scenario about grouping unlabeled customers may include classification as a distractor because both involve categories. Your job is to ask whether the categories already exist or must be discovered. That distinction often determines the correct answer.

Finally, use elimination aggressively. Remove answers that mismatch the output type, confuse features and labels, or select a prebuilt AI service when the requirement is to build a custom model. In AI-900, disciplined elimination can raise your score significantly because many items are designed to test concept clarity more than technical depth.

Chapter milestones
  • Learn machine learning fundamentals
  • Compare regression, classification, and clustering
  • Understand model training and evaluation basics
  • Answer AI-900 ML domain practice questions
Chapter quiz

1. A retail company wants to build a model that predicts the total dollar amount a customer will spend next month based on previous purchases, location, and loyalty status. Which type of machine learning should they use?

Correct answer: Regression
Regression is correct because the target is a numeric value: the total dollar amount a customer will spend. Classification would be used if the company wanted to predict a category such as high-value or low-value customer. Clustering would be used to group similar customers when no predefined label or target value exists.

2. A bank wants to use historical loan application data to predict whether a new application should be approved or denied. The historical dataset includes applicant income, credit score, and a column that indicates approved or denied. In this scenario, what is the approved/denied column?

Correct answer: A label
The approved/denied column is the label because it is the known outcome the model is being trained to predict. Features are the input variables such as income and credit score. A cluster is not a column in labeled supervised training; clustering is an unsupervised technique used when predefined outcomes are not available.

3. You train a machine learning model and find that it performs very well on the training dataset but poorly on new, unseen data. What is the most likely explanation?

Correct answer: The model is overfitting
Overfitting is correct because the model has learned patterns too specific to the training data and does not generalize well to new data. Clustering versus classification is a separate problem type decision and does not explain strong training performance with weak test performance. Having many labels is not, by itself, the reason for this pattern; the core exam concept here is poor generalization.

4. A marketing team has customer purchase data but no predefined categories. They want to discover natural segments of customers with similar buying behavior so they can tailor campaigns. Which machine learning approach should they use?

Correct answer: Clustering
Clustering is correct because the goal is to find natural groupings in unlabeled data. Regression is used to predict numeric values, not to group records. Classification requires known categories in the training data, but the scenario states that no predefined categories are available.

5. A company wants to build, train, manage, and deploy machine learning models in Azure. It also wants the option to use automated machine learning to try multiple algorithms and preprocessing steps. Which Azure service should the company choose?

Correct answer: Azure Machine Learning
Azure Machine Learning is correct because it is the primary Azure service for building, training, managing, deploying, and monitoring machine learning models, including automated machine learning capabilities. Azure AI Search is used for indexing and querying content, not end-to-end ML lifecycle management. Azure AI Document Intelligence is for extracting data from forms and documents, not general-purpose model training and automated ML.

Chapter 4: Computer Vision and NLP Workloads on Azure

This chapter targets one of the most testable domains in AI-900: recognizing common computer vision and natural language processing workloads and mapping them to the correct Azure service. On the exam, Microsoft rarely asks you to build a model or write code. Instead, you are expected to identify the business scenario, match it to the appropriate Azure AI service, and avoid common distractors that sound similar but solve different problems. That means your success depends on understanding what each service is designed to do, what inputs it accepts, what outputs it provides, and where candidates often confuse one Azure service with another.

From an exam-objective perspective, this chapter supports several course outcomes at once. You will identify computer vision workloads on Azure, recognize natural language processing workloads such as text analysis, speech, translation, and conversational AI, and apply exam strategy through scenario-based reasoning. The exam often combines topics, so a question may mention scanned forms, multilingual customer chat, spoken commands, or image tagging all in the same scenario. Your task is to isolate the core requirement. If the requirement is extracting printed text from an image, think OCR. If the requirement is understanding sentiment or key phrases in text, think Azure AI Language. If the requirement is converting speech to text or text to speech, think Azure AI Speech.

A major trap in this exam domain is overthinking architecture. AI-900 is a fundamentals certification. The test is designed to verify that you can distinguish core workloads such as image classification, object detection, facial analysis concepts, optical character recognition, language detection, entity extraction, translation, speech recognition, and question answering. It is not primarily testing advanced model tuning or production engineering. When you see a scenario, begin by asking: Is the input an image, document, spoken audio, or text? Is the task to classify, detect, extract, summarize, translate, answer, or converse? That process will usually eliminate most wrong answer choices quickly.

Exam Tip: Learn to separate the workload from the implementation detail. “Analyze images,” “extract text from receipts,” “detect spoken language,” and “build a chatbot using a knowledge base” are all workload clues. The exam often rewards the candidate who spots the verb in the scenario more than the candidate who memorized product marketing language.

For computer vision, focus on image analysis scenarios, OCR, object detection, and document-focused extraction. For NLP, focus on text analytics, speech services, translation, conversational language understanding, and question answering. Also remember responsible AI considerations: systems that process faces, voices, or customer text can involve privacy, fairness, transparency, and accountability concerns. Even when the exam question is technical, responsible use remains part of the AI-900 blueprint.

  • Computer vision workloads commonly involve analyzing images, detecting objects, reading text in images, and extracting information from forms or documents.
  • NLP workloads commonly involve sentiment analysis, key phrase extraction, entity recognition, translation, speech-to-text, text-to-speech, conversational bots, and question answering.
  • Correct service selection depends on the workload, not the industry. Retail, healthcare, manufacturing, and finance scenarios may use the same Azure AI service if the underlying task is the same.
  • Mixed-domain questions are common. A single solution might require Vision for image content, Language for sentiment or entities, and Speech for spoken interaction.

As you work through this chapter, keep asking the exam-coach question: “What is the one capability the customer actually needs?” Once you identify that, the right answer becomes much easier to spot. The sections that follow map directly to the skills measured and emphasize the distinctions that exam writers like to test.

Practice note for Understand key computer vision scenarios: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Practice note for Master NLP service capabilities and use cases: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 4.1: Computer vision workloads on Azure and image analysis scenarios

Computer vision workloads involve enabling applications to interpret visual input such as photographs, video frames, and screenshots. On AI-900, the exam usually expects you to recognize broad use cases rather than memorize implementation steps. Azure AI Vision is the core service family you should associate with image analysis tasks. Typical scenarios include describing image content, generating tags, identifying objects, detecting brands, reading text from images, and analyzing visual features for accessibility or cataloging.

The key exam skill is scenario matching. If a company wants to organize a large photo library by automatically labeling content such as “dog,” “car,” or “outdoor,” that points to image analysis. If a retailer wants to detect whether an image contains a backpack or a bicycle, that also fits vision analysis, though you must pay attention to whether the question needs simple tagging or precise object location. Some questions use business language like “index product photos for search,” “moderate uploaded content,” or “generate captions for images.” These are all hints that the task is visual analysis rather than text analytics or machine learning model building in Azure Machine Learning.

A common trap is confusing image classification, object detection, and OCR. Image classification answers the question, “What is in this image?” Object detection answers, “What objects are present and where are they located?” OCR answers, “What text appears in the image?” The exam may present answer choices that are all plausible if you only read quickly. Slow down and identify whether the output should be labels, bounding boxes, or extracted text.

Exam Tip: If the scenario emphasizes identifying the presence of general visual content, think image analysis. If it emphasizes locating items within the image, think object detection. If it emphasizes reading printed or handwritten text, think OCR or a document-focused service.

Another tested distinction is between general-purpose prebuilt AI services and custom model training. AI-900 more often emphasizes prebuilt Azure AI services. If the question says the company wants to quickly add image tagging or captioning without building its own model, Azure AI Vision is the likely choice. If the scenario suggests a highly specialized image set and custom labels, the exam may hint at a custom vision approach in older wording, but modern fundamentals questions still center on selecting the correct managed service family for the workload.

Be prepared for responsible AI angles as well. Computer vision systems can affect privacy, especially when images contain people, sensitive locations, or personal identifiers. Questions may not ask for policy detail, but they may expect you to recognize that image analysis should be used responsibly and transparently, especially when automated decisions could affect users.

Section 4.2: Face, OCR, object detection, and document intelligence concepts

This section covers several related but distinct concepts that exam writers like to place side by side. Face-related capabilities involve detecting human faces in images and identifying attributes or comparing faces, depending on the service features available and the scenario framing. OCR, or optical character recognition, focuses on extracting text from images. Object detection locates and identifies objects within an image. Document intelligence focuses on extracting structured information from forms, invoices, receipts, IDs, and other documents.

Start with OCR. If the business requirement is to read street signs, scanned pages, screenshots, or photographed menus, the exam likely expects OCR. The essential clue is that the important output is text. OCR is not the same as document intelligence. OCR can extract readable text, but document intelligence is more specialized: it is designed to understand document structure and fields such as invoice number, total amount, vendor name, table values, and key-value pairs. If a question says “extract data from invoices” or “process receipts at scale,” document intelligence is the better match because the goal is not merely reading text but turning a document into usable structured data.

Object detection is another frequently tested concept. If a warehouse wants to identify and locate forklifts in images, or a traffic system must detect and locate cars and pedestrians, object detection is the correct concept. The location element matters. Many wrong answers on the exam describe image tagging or classification, which can identify that a car exists somewhere in the image but may not provide the exact position.

Face scenarios require caution. Historically, fundamentals exams referenced Azure Face capabilities for detecting and analyzing faces. However, exam questions may also reflect responsible AI sensitivity around facial recognition. Read carefully. If the scenario only requires detecting that a human face is present in an image, that is different from verifying identity or analyzing demographic attributes. Do not assume every face-related requirement means broad facial recognition should be used, and remember that responsible AI concerns are especially relevant here.

Exam Tip: “Extract text” points to OCR. “Extract fields from forms” points to document intelligence. “Find objects and their positions” points to object detection. “Detect or compare faces” points to face-related capabilities, but watch for policy and responsible AI wording.

A common trap is selecting a language service for document problems because documents contain words. But the first problem in scanned forms is visual extraction, not language interpretation. Another trap is choosing OCR when the real requirement is to parse totals, dates, and vendor names from receipts. On AI-900, the test often checks whether you can distinguish raw text extraction from structured document understanding.

Section 4.3: NLP workloads on Azure and text analytics fundamentals

Natural language processing workloads involve enabling systems to work with human language in written form. In Azure, the Azure AI Language service is central to many text analytics scenarios. The exam commonly tests whether you can identify tasks such as sentiment analysis, key phrase extraction, entity recognition, language detection, summarization concepts, and classification of text into categories. These are classic fundamentals questions because the workload can be described in plain business terms.

Sentiment analysis determines whether text expresses positive, negative, neutral, or mixed opinion. If a company wants to analyze customer reviews or social media posts to understand satisfaction trends, sentiment analysis is the likely answer. Key phrase extraction identifies important terms or concepts in text, which is useful for indexing support tickets or summarizing topics in survey responses. Entity recognition identifies people, organizations, locations, dates, and similar named items. If the scenario says “extract product names, locations, and dates from text,” that is your cue.
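
To make the concept concrete, here is a toy word-list sentiment scorer. The real Azure AI Language service uses trained models, not keyword counting, and the word lists below are invented for illustration; only the input/output shape matches the workload.

```python
# Toy sentiment scoring by word lists (illustration only; the real
# service uses trained models, and these word lists are invented).

POSITIVE = {"great", "love", "fast"}
NEGATIVE = {"slow", "broken", "terrible"}

def sentiment(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

review_1 = sentiment("The delivery was fast and I love it")
review_2 = sentiment("Support was slow and the app is broken")
print(review_1, review_2)
```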

Language detection is another frequent objective. If input text may arrive in English, Spanish, French, or Japanese and the application must identify the language before further processing, Azure AI Language supports that kind of capability. The exam may combine this with translation or sentiment analysis in a multi-step workflow. In such cases, do not lose sight of which service performs which step.

A common exam trap is confusing text analytics with search or conversational bots. Text analytics extracts insights from text; it does not by itself create a chatbot interface. Similarly, translation is a language task but is distinct from sentiment analysis. If the scenario is about converting one language into another, choose translation, not language detection or sentiment.

Exam Tip: Look for verbs such as analyze, extract, detect, classify, and summarize. These often signal Azure AI Language capabilities. If the scenario is about written text rather than spoken audio, start by considering Language before Speech.

The exam also expects practical understanding of outputs. Sentiment analysis gives opinion polarity. Key phrase extraction returns important phrases. Entity recognition returns identified entities and categories. These distinctions help you eliminate distractors. For example, if a question asks which service can identify company names in support emails, sentiment analysis is wrong because it measures opinion, not named items.

Responsible AI also matters in NLP. Text analytics systems can reflect bias in training data, mishandle minority dialects, or expose sensitive customer information. While AI-900 will not dive deeply into mitigations, it may expect you to recognize that NLP solutions should be evaluated for fairness, privacy, and transparency.

Section 4.4: Speech, translation, language understanding, and question answering

Beyond text analytics, AI-900 expects you to recognize additional language-related workloads: speech services, translation, language understanding for intent, and question answering. These capabilities are often presented in customer service, accessibility, and multilingual application scenarios. Your job is to match the scenario to the service category that solves the main problem.

Azure AI Speech is used when the input or output involves audio. Speech-to-text converts spoken words into written text. Text-to-speech converts written text into natural-sounding audio. Speech translation can translate spoken input across languages. Common exam scenarios include transcribing meetings, enabling voice commands in an app, generating spoken output for accessibility, and creating call center transcripts. The main clue is that the workload starts with voice or ends with voice.

Translation workloads focus on converting text from one language to another. If a global support portal must display content in multiple languages or translate incoming chat messages, translation is the relevant capability. Watch for the distinction between translation and language detection: detecting a language tells you what the source language is, but it does not convert it.

Language understanding scenarios involve determining user intent from natural language inputs. In fundamentals wording, this may appear as identifying what a user wants to do, such as booking a flight, checking an order status, or resetting a password. The exam may reference conversational applications or bots. Intent detection is different from question answering. Intent detection tries to understand the user’s goal. Question answering tries to return an answer from a knowledge base, FAQ, or set of documents.

Question answering is commonly tested with support scenarios. If a company has a list of frequently asked questions and wants a bot to return the best answer to user questions, question answering is the likely fit. If the company wants to infer free-form user goals and gather required parameters, that points more toward language understanding in a conversational workflow.

Exam Tip: If the scenario says users will speak, think Speech. If the system must convert between languages, think Translation. If the app must infer intent, think language understanding. If it must answer from known content, think question answering.

A classic trap is choosing a chatbot option for every conversational scenario. A bot is only the interface. The underlying AI capability could be question answering, intent recognition, translation, or speech. The exam often tests whether you can identify the underlying service rather than the front-end application style.

Section 4.5: Choosing between Azure AI Vision, Language, Speech, and related services

This section is where exam performance improves the most, because AI-900 repeatedly asks you to choose the right service for a stated requirement. The fastest method is to classify the input type first. If the primary input is an image or video frame, start with Azure AI Vision. If it is a scanned form or receipt and the goal is field extraction, think Document Intelligence. If the input is written text and you need insights such as sentiment, key phrases, or entities, think Azure AI Language. If the input or output is spoken audio, think Azure AI Speech. If the requirement is converting text or speech between languages, think Translation capabilities. If the scenario is FAQ-style response generation from curated content, think question answering.

Many exam distractors are based on adjacent capabilities. For example, an invoice-processing question may offer OCR, Vision, Language, and Document Intelligence. OCR sounds tempting because invoices contain text, but the best answer is usually Document Intelligence if the task is to capture structured fields. Likewise, a customer review analysis question may offer Speech because customers “say” opinions in real life, but if the scenario explicitly uses written reviews, Language is correct.

Use a simple elimination framework:

  • Image content understanding = Vision
  • Text in images/documents = OCR or Document Intelligence depending on whether structure matters
  • Written text insight = Language
  • Spoken audio processing = Speech
  • Convert between languages = Translation
  • Intent or FAQ response = language understanding or question answering
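As a study aid, the elimination framework above can be sketched as a small clue-matching helper. This is purely a mnemonic device, not an Azure API; the keyword lists and the `suggest_service` function name are illustrative choices, not official mappings.

```python
# Toy study helper: map scenario clue words to the Azure AI service family
# suggested by the elimination framework. A mnemonic aid, not an official mapping.
CLUE_MAP = {
    "vision": ["image", "photo", "video", "object", "caption", "face"],
    "document intelligence": ["receipt", "invoice", "form", "scanned", "field"],
    "language": ["sentiment", "review", "key phrase", "entity"],
    "speech": ["spoken", "voice", "audio", "transcribe"],
    "translation": ["translate", "multilingual"],
    "question answering": ["faq", "knowledge base"],
}

def suggest_service(scenario: str) -> list[str]:
    """Return the service families whose clue words appear in the scenario."""
    text = scenario.lower()
    return [family for family, clues in CLUE_MAP.items()
            if any(clue in text for clue in clues)]

print(suggest_service("Extract the total amount from a scanned receipt"))
```

Running the helper on an invoice-style scenario surfaces the document intelligence family, mirroring the mental "circle the clue word" habit the exam tip describes.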

Exam Tip: The exam often hides the core clue in one phrase: “spoken,” “image,” “receipt,” “review,” “FAQ,” or “translate.” Train yourself to circle that clue mentally before reading the options.

You should also recognize when multiple services could be part of a complete solution. A multilingual voice bot, for example, could use Speech for audio input, Translation for language conversion, Language for text analysis, and question answering for FAQ responses. But if the exam asks for the best service for one named task, answer only that task. Do not choose a broad architecture when a single capability is requested.

Another trap is choosing Azure Machine Learning for standard AI workloads that already have prebuilt services. Fundamentals questions usually prefer managed Azure AI services when the requirement is common and well-defined. Machine Learning is more appropriate when you need to build, train, and manage custom models beyond the scope of prebuilt capabilities.

Section 4.6: Exam-style MCQs on Computer vision workloads on Azure and NLP workloads on Azure

This chapter ends with strategy for mixed-domain multiple-choice questions, because AI-900 commonly blends computer vision and NLP in the same item set. You are not being tested on memorizing product names in isolation; you are being tested on identifying the requirement hidden inside business wording. The best candidates read the final sentence first, find the exact requested capability, and only then evaluate the options. This is especially helpful when a scenario includes several irrelevant details such as industry, company size, compliance concerns, or user interface preferences.

When practicing MCQs, classify each question using three filters. First, identify the input type: image, document, text, or audio. Second, identify the action: detect, extract, classify, translate, answer, or speak. Third, identify whether the need is general-purpose prebuilt AI or a custom model. In many AI-900 questions, just applying these three filters removes two or three answer choices immediately.
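The three filters can also be written down as a checklist. The sketch below is a hypothetical study routine, not an Azure SDK call; it simply encodes the input-type, action, and custom-model filters as a lookup table so you can see how quickly they narrow the options.

```python
# Toy three-filter checklist for mixed-domain MCQs.
# (input_type, action) -> likely answer family when no custom model is needed.
# The table is a study mnemonic, not an official Microsoft mapping.
FILTER_TABLE = {
    ("image", "detect"): "Azure AI Vision",
    ("document", "extract"): "Azure AI Document Intelligence",
    ("text", "classify"): "Azure AI Language",
    ("text", "translate"): "Azure AI Translator",
    ("audio", "speak"): "Azure AI Speech",
}

def apply_filters(input_type: str, action: str, needs_custom: bool) -> str:
    if needs_custom:
        # Filter three: a custom model beyond prebuilt capabilities
        # points toward Azure Machine Learning.
        return "Azure Machine Learning"
    return FILTER_TABLE.get((input_type, action), "re-read the scenario")

print(apply_filters("document", "extract", False))
```

Applying the first two filters resolves most fundamentals questions; the third filter only flips the answer when the scenario explicitly requires building and training a custom model.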

Watch for wording traps. “Analyze customer feedback” often means sentiment analysis, not translation. “Read passport details from a scanned image” usually means document extraction, not generic image tagging. “Allow users to ask spoken questions and hear spoken answers” may involve both Speech and question answering, but if the question asks specifically about converting spoken words to text, then Speech is the target answer.

Exam Tip: On mixed-domain questions, do not choose the first service that seems related. Choose the service that matches the output the business wants. Output-centered reading is one of the most reliable AI-900 strategies.

Also practice distinguishing between similar nouns. A “document” on the exam often implies structure, forms, or fields. An “image” may imply visual content without structure. “Text” implies already-readable language content. “Speech” implies audio. These distinctions are simple, but they drive many exam items. If you keep them straight, you will score well in this objective domain.

Finally, review explanations after each practice set, especially for questions you got right by guessing. Fundamentals exams reward pattern recognition. The more often you map clues like receipt, review, transcript, face, caption, invoice, and FAQ to their correct Azure service families, the more quickly you will recognize the right answers under timed conditions.

Chapter milestones
  • Understand key computer vision scenarios
  • Master NLP service capabilities and use cases
  • Map Azure services to vision and language tasks
  • Strengthen accuracy with mixed-domain practice
Chapter quiz

1. A retail company wants to process photos of store shelves to identify products, detect the presence of objects, and generate descriptive tags for each image. The company does not need to train a custom model. Which Azure service should you recommend?

Correct answer: Azure AI Vision
Azure AI Vision is the best choice for analyzing image content, detecting objects, and generating tags from images. Azure AI Language is designed for text-based workloads such as sentiment analysis, entity recognition, and key phrase extraction, so it does not fit an image-analysis requirement. Azure AI Document Intelligence is focused on extracting structured information from forms and documents, not general-purpose image tagging and object detection.

2. A support center wants to analyze thousands of customer reviews to determine whether feedback is positive, negative, or neutral and to identify the main topics mentioned in the text. Which Azure service should you use?

Correct answer: Azure AI Language
Azure AI Language is the correct service because it supports common NLP workloads such as sentiment analysis and key phrase extraction. Azure AI Speech is for speech-to-text, text-to-speech, and related audio scenarios, so it is not the best fit for analyzing written reviews. Azure AI Translator focuses on language translation rather than understanding sentiment or extracting topics from text.

3. A logistics company scans delivery receipts and wants to extract printed text, tables, and key fields such as invoice number and total amount from the documents. Which Azure service should you recommend?

Correct answer: Azure AI Document Intelligence
Azure AI Document Intelligence is designed for document-focused extraction, including OCR, forms, receipts, invoices, and structured fields. Azure AI Vision can perform OCR on images, but the scenario specifically requires extracting key fields and tables from documents, which is a document intelligence workload. Azure AI Translator only translates text between languages and does not extract structured information from scanned receipts.

4. A multinational company wants users to speak commands into a mobile app and have the commands transcribed into text. The app must also read back confirmations aloud to the user. Which Azure service should you choose?

Correct answer: Azure AI Speech
Azure AI Speech is the correct choice because it supports both speech-to-text and text-to-speech, which directly match the scenario. Azure AI Language analyzes written text for meaning, sentiment, entities, and similar tasks, but it does not provide core speech synthesis and recognition capabilities. Azure AI Vision is for image and visual-document analysis, so it is unrelated to spoken command processing.

5. A company is building a customer self-service bot that should answer common questions by using a curated set of FAQs and support articles. Which Azure service capability is the best fit?

Correct answer: Question answering in Azure AI Language
Question answering in Azure AI Language is the best fit because the requirement is to respond to user questions by using an existing knowledge base of FAQs and documents. Object detection in Azure AI Vision is for identifying items within images and is unrelated to conversational FAQ retrieval. Speech synthesis in Azure AI Speech converts text to spoken audio, but by itself it does not provide knowledge-base-driven answers to customer questions.

Chapter 5: Generative AI Workloads on Azure

This chapter focuses on one of the highest-interest areas on the AI-900 exam: generative AI workloads on Azure. Microsoft expects candidates to recognize what generative AI is, how Azure OpenAI fits into Azure AI solutions, how copilots use prompts and grounding data, and how responsible AI principles apply when systems can generate new content. You are not expected to build advanced production architectures for this exam, but you are expected to distinguish core concepts, identify the most appropriate Azure service in scenario questions, and avoid common distractors that mix generative AI with classic natural language processing, machine learning, or search services.

At the exam level, generative AI refers to AI systems that can create new content such as text, code, images, and conversational responses based on learned patterns from large datasets. In Azure-focused questions, this usually points to Azure OpenAI Service and related solution patterns rather than traditional predictive models. A common test objective is understanding that generative AI does not simply classify or extract information; it synthesizes responses. That distinction matters because exam items often present several plausible services, including Azure AI Language, Azure AI Search, Azure Machine Learning, and Azure OpenAI. Your task is to identify whether the scenario is about generation, retrieval, analysis, or training.

This chapter also connects to earlier course outcomes. Generative AI overlaps with responsible AI, natural language processing, and conversational AI. However, the exam usually tests these as separate categories. For example, if a question asks which service can analyze sentiment in text, that is not generative AI. If the question asks which service can draft an email, summarize a report, or answer questions in natural language using a powerful language model, then Azure OpenAI becomes the likely answer. Watch for wording like generate, compose, summarize, rewrite, chat, or create natural language responses.

Another area the exam targets is beginner-level vocabulary: foundation models, prompts, completions, tokens, chat-based interactions, grounding, copilots, and safety filters. You do not need to memorize every implementation detail, but you should understand the role each concept plays. A foundation model is a large pre-trained model that can be adapted for many tasks. A prompt is the instruction or context given to the model. A completion is the generated output. Grounding adds trusted external context so responses are more relevant to a specific business domain. Copilots are generative AI assistants embedded into applications and workflows.

Exam Tip: If the question focuses on creating human-like text, answering questions conversationally, summarizing, drafting, transforming, or generating content from natural language instructions, start by considering Azure OpenAI Service. If the question focuses on finding documents, indexing enterprise content, or retrieving data, Azure AI Search may be part of the solution, but by itself it is not the generative model.

Expect distractors that blur the line between AI services. Azure Machine Learning is for building, training, and managing machine learning models and workflows. Azure OpenAI provides access to advanced generative models through Azure. Azure AI Language handles capabilities like sentiment analysis, entity recognition, and language understanding tasks. Azure AI Search can help retrieve relevant information to ground a generative response. The exam often rewards service selection accuracy more than deep implementation detail.

As you work through this chapter, focus on four recurring exam skills: identify generative AI scenarios, select Azure OpenAI for appropriate use cases, understand prompts and grounding, and apply responsible AI and safety thinking. The last section then prepares you for exam-style multiple-choice reasoning without placing actual quiz items in the chapter text. If you can explain why one service is correct and why the others are not, you are thinking like a high-scoring AI-900 candidate.

Practice note for the milestone “Explain generative AI concepts for beginners”: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.


Section 5.1: Generative AI workloads on Azure and foundation model basics

Generative AI workloads involve systems that produce new content rather than only analyzing existing data. For AI-900, this usually means recognizing workloads such as drafting documents, summarizing reports, generating code suggestions, answering user questions in conversational form, creating product descriptions, and supporting virtual assistants with natural language responses. These workloads are powered by large models trained on broad datasets and capable of general-purpose language understanding and generation.

A foundation model is a large pre-trained model that serves as the base for many downstream tasks. On the exam, think of a foundation model as a reusable starting point. Instead of training a model from scratch for every task, organizations can use a powerful pre-trained model and guide it through prompting, fine-tuning concepts, or grounding with enterprise data. You are not likely to be tested on low-level training mathematics, but you should know that foundation models are broad, versatile, and adaptable.

Generative AI workloads differ from traditional machine learning workloads. A classification model predicts a category. A regression model predicts a numeric value. A clustering model groups similar items. A generative model creates new outputs. This distinction is a classic exam trap. If the scenario asks for a system that can produce a summary or compose a response in natural language, that is not a classification workload and not a standard text analytics task.

On Azure, generative AI scenarios are commonly associated with Azure OpenAI Service. Microsoft also tests your awareness that generative AI solutions are often part of larger architectures that may include search, data storage, application logic, and monitoring. Still, at the fundamentals level, the key concept is simple: Azure provides managed access to advanced generative models so organizations can build AI experiences securely and at scale.

  • Use generative AI when the goal is to create text, code, or conversational output.
  • Use traditional AI analysis services when the goal is to classify, extract, detect, or translate.
  • Look for scenario verbs such as generate, draft, rewrite, summarize, or answer.

Exam Tip: If a question asks for a model that can perform many language tasks from natural language instructions, that points to a foundation model in a generative AI context, not to a narrowly trained custom classifier.

A common trap is assuming generative AI is always the best answer just because the scenario mentions text. Read the business need carefully. If the requirement is to identify key phrases or detect sentiment, Azure AI Language is usually a better fit. If the requirement is to generate a customer reply based on a support ticket, then generative AI becomes more appropriate. The exam tests your ability to separate content analysis from content generation.

Section 5.2: Azure OpenAI Service capabilities and common use cases

Azure OpenAI Service gives organizations access to advanced generative AI models through Azure. For the AI-900 exam, you should recognize it as the primary Azure service for language generation, conversational experiences, content transformation, and related generative tasks. The service is used to build solutions such as chat assistants, content drafting tools, summarization systems, knowledge assistants, and coding support experiences.

Microsoft exam questions often describe business scenarios in plain language rather than naming the service directly. For example, a company may want to create a helpdesk assistant that answers questions in natural language, summarize long meeting transcripts, or generate first-draft marketing copy. Those are classic Azure OpenAI scenarios. Another common use case is transforming content, such as rewriting technical instructions into simpler language or converting bullet points into an email draft.

It is important to understand what Azure OpenAI is not. It is not primarily a data indexing service, a document database, or a classic machine learning training platform. It also does not replace every other Azure AI service. Questions may include Azure Machine Learning, Azure AI Language, or Azure AI Search as distractors. Choose Azure OpenAI when the value comes from the model generating or composing a response.

Azure OpenAI also supports chat-based experiences, which are very common in exam wording. If users interact conversationally with a system that maintains context across messages and generates human-like answers, Azure OpenAI is usually central to the architecture. In some scenarios, Azure AI Search is combined with it to retrieve relevant organizational data before generating a response. That combination supports more accurate enterprise copilots.

Exam Tip: When the scenario says users will ask questions in natural language and receive generated answers based on a large language model, Azure OpenAI is the core service. If the scenario says the solution must search a document index, Azure AI Search may be a supporting service, not the generator.

Common exam traps include selecting Azure AI Language simply because the workload uses text, or selecting Azure Machine Learning because the scenario mentions a model. Remember that AI-900 emphasizes service purpose. Azure OpenAI is for generative capabilities. Azure AI Language focuses on language analysis tasks such as sentiment, entities, and question answering, which are not centered on large-scale generative text creation. Azure Machine Learning focuses on building and managing machine learning workflows. Match the service to the business outcome, not just the presence of language data.

Section 5.3: Prompts, completions, chat, summarization, and content generation

Prompting is one of the most tested beginner concepts in generative AI. A prompt is the instruction, question, or context you provide to the model. The completion is the generated output. In a chat scenario, the conversation history helps shape the model’s next response. On the exam, you should be able to identify how prompts guide behavior and why prompt quality affects output quality.

Prompt-based use cases include asking a model to summarize a report, classify a piece of text into a custom business format, draft a response to a customer inquiry, rewrite text in a more formal tone, or extract a concise explanation from a long technical document. Although some of these tasks resemble traditional NLP, the exam distinguishes them by the use of a generative model that creates the output in natural language form.

Summarization is especially important. If a scenario asks for reducing long content into a shorter, readable overview, that is a strong sign of a generative AI task. Content generation also includes writing product descriptions, producing email drafts, creating FAQ answers, and generating code suggestions. Chat refers to interactive, multi-turn use where the user and model exchange messages. The model considers the user prompt and often prior conversation context to generate the next reply.

Good prompting provides clear instructions, relevant context, desired format, and constraints. Poor prompts can produce vague, inaccurate, or off-topic outputs. You do not need advanced prompt engineering frameworks for AI-900, but you should understand the basic principle that better instructions generally improve results. Questions may ask why generated content is inconsistent or how to make outputs more aligned to a business need; the answer often involves improving prompts or grounding data.

  • Prompt: what you ask the model to do.
  • Completion: the model’s generated result.
  • Chat: multi-turn interaction with conversational context.
  • Summarization: condensing long content into shorter form.
  • Content generation: creating new text or other output from instructions.
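To make the prompt, chat, and completion vocabulary concrete, the sketch below assembles the kind of chat message list used by OpenAI-style chat APIs, which Azure OpenAI also follows. No service is called here; the helper function and example strings are illustrative, and in a real solution the assembled messages would be sent to a deployed model, whose reply is the completion.

```python
# Illustrative chat request construction (no API call is made here).
# The messages format with system/user/assistant roles follows the
# OpenAI-style chat schema that Azure OpenAI also uses.
def build_chat_messages(system_instruction: str,
                        history: list[tuple[str, str]],
                        user_prompt: str) -> list[dict]:
    messages = [{"role": "system", "content": system_instruction}]
    for user_turn, assistant_turn in history:
        # Prior turns supply conversational context (the "chat" concept).
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    # The new prompt is runtime input; the model's generated reply
    # would be the completion.
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_chat_messages(
    "You summarize support tickets in one sentence.",
    [("Summarize ticket 101.", "Customer reports a login failure.")],
    "Summarize ticket 102.",
)
print(len(msgs))  # system message + two history turns + new prompt
```

Notice that the prompt is assembled at runtime from instructions, context, and the latest user turn; none of this changes the model itself, which is exactly the distinction between prompting and training that the exam tests.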

Exam Tip: If the question asks how to influence model output without retraining a model, look for prompt-related concepts. The AI-900 exam often tests this at a conceptual level.

A frequent trap is confusing prompts with training data. A prompt is runtime input, not the original model training corpus. Another trap is assuming every natural language response is factual. Generative models can produce fluent but incorrect responses, which is why grounding and safety measures matter in the next sections.

Section 5.4: Copilots, grounding data, and retrieval-augmented generation concepts

A copilot is an AI assistant embedded in an application or workflow to help users complete tasks more efficiently. On the AI-900 exam, copilots are typically described as systems that answer questions, generate drafts, provide recommendations, or assist with productivity tasks while working alongside the user. The key word is assistance. A copilot does not merely automate a fixed workflow; it uses generative AI to support user decision-making and content creation.

Grounding means providing the model with relevant, trustworthy context from a specific data source so that its response is based on domain-relevant information. This is crucial in enterprise scenarios. For example, if an organization wants a copilot to answer employee questions about internal policies, the model should be grounded in the organization’s approved policy documents rather than relying only on general model knowledge.

Retrieval-augmented generation, often shortened to RAG, is the pattern of retrieving relevant information from a knowledge source and then using that information to help generate the response. For the exam, you should recognize the concept even if deep implementation is not tested. Azure AI Search is commonly paired with Azure OpenAI in these solutions. The search component retrieves relevant content, and the generative model produces a useful answer using that content.

This matters because unguided generation can be inaccurate or too generic. Grounding improves relevance, accuracy, and trustworthiness. In scenario-based questions, if the goal is to create a copilot that answers based on company documents, manuals, or product catalogs, the likely concept is grounding with retrieved data. The exam may not always use the term RAG, so focus on the pattern described.

Exam Tip: If a question mentions a copilot using company documents to answer user questions, think of Azure OpenAI plus a retrieval source such as Azure AI Search. The retrieval source provides context; the generative model creates the response.

Common traps include assuming the model already knows internal company data or choosing a pure search service when the requirement is to generate a natural language answer. Search finds information; generative AI composes the answer. In many enterprise copilots, you need both. Another trap is thinking grounding guarantees correctness. It improves relevance, but responsible AI practices and validation are still necessary.
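The retrieval-augmented generation pattern can be sketched end to end with a toy retriever. In a real Azure solution, retrieval would come from Azure AI Search and generation from an Azure OpenAI deployment; here both the word-overlap scoring and the prompt template are deliberately simplified stand-ins that only demonstrate the pattern.

```python
# Toy RAG sketch: retrieve the most relevant document by word overlap,
# then build a grounded prompt for a generative model. The retriever and
# template are simplified stand-ins for Azure AI Search + Azure OpenAI.
def retrieve(question: str, documents: list[str]) -> str:
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def grounded_prompt(question: str, context: str) -> str:
    # Grounding: instruct the model to answer only from retrieved context.
    return (f"Answer using only the context below.\n"
            f"Context: {context}\n"
            f"Question: {question}")

policies = [
    "Remote work requires manager approval and a signed agreement.",
    "Expense reports must be filed within 30 days of purchase.",
]
prompt = grounded_prompt("How do I file an expense report?",
                         retrieve("How do I file an expense report?", policies))
print(prompt)
```

The split of responsibilities mirrors the exam's framing: the retrieval step finds information, and the generative model composes the answer from it.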

Section 5.5: Responsible generative AI, safety, and governance fundamentals

Responsible AI remains a major exam theme, and generative AI makes it even more important. Because these systems create content, they can also generate harmful, biased, misleading, unsafe, or confidential output if not properly designed and monitored. On AI-900, you should connect responsible AI principles to practical controls such as content filtering, human oversight, transparency, grounding, access controls, and monitoring.

Safety in generative AI includes reducing harmful responses, preventing misuse, and limiting exposure to inappropriate content. Governance includes deciding who can access models, which data sources can be used, how prompts and outputs are logged, and how organizational policies are enforced. Exam questions may describe an organization concerned about inaccurate answers, offensive content, or sensitive information leakage. In those cases, the correct reasoning usually involves responsible AI safeguards rather than only choosing a bigger model.

One key concern is hallucination, where a model generates content that sounds plausible but is incorrect or unsupported. Grounding can help reduce this risk, but it does not eliminate it. Human review may still be required for high-stakes uses. Another concern is prompt abuse or misuse, where users attempt to get the system to produce unsafe or policy-violating content. Safety filters and policy controls are relevant here.

Transparency is also important. Users should understand when they are interacting with AI-generated content and should know the limits of the system. Fairness matters as well because generated outputs can reflect biases present in training data or prompts. Privacy and security concerns arise when the system has access to business data, customer records, or proprietary documents.

  • Use safety controls to reduce harmful or inappropriate outputs.
  • Use grounding to improve relevance and reduce unsupported answers.
  • Use human oversight for sensitive or high-impact decisions.
  • Use governance to manage access, policies, and monitoring.
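As one concrete illustration of the safety controls listed above, a content check can sit between the model and the user. The blocklist below is a deliberately tiny sketch; real Azure solutions rely on the built-in content filtering of Azure OpenAI and related moderation policies rather than a hand-rolled term list, and the policy terms shown are hypothetical.

```python
# Minimal illustration of a runtime output safety check. Real deployments
# use Azure OpenAI's built-in content filtering; this hand-rolled blocklist
# only demonstrates the concept of a guardrail between model and user.
BLOCKED_TERMS = {"confidential", "password"}  # illustrative policy terms

def review_output(generated_text: str) -> tuple[bool, str]:
    """Return (allowed, text). Blocked outputs are replaced with a refusal."""
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False, "This response was withheld by the content policy."
    return True, generated_text

allowed, text = review_output("Your password is hunter2.")
print(allowed)  # blocked by the policy check
```

Even this trivial check shows why governance is operational: someone must decide what the policy terms are, who can change them, and how blocked outputs are logged and reviewed.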

Exam Tip: If the question asks how to make a generative AI solution more trustworthy, look for answers involving grounding, content filtering, human review, transparency, and responsible AI practices. Be cautious of answer choices that suggest generative models can be left fully unsupervised in regulated or sensitive scenarios.

A common exam trap is assuming responsible AI is only about ethics statements. Microsoft tests practical application. If a system generates customer-facing content, safety and governance are operational design requirements, not optional extras. Expect scenario wording that asks what the organization should do to reduce risk while still benefiting from generative AI.

Section 5.6: Exam-style MCQs on Generative AI workloads on Azure

This section prepares you for the style of AI-900 multiple-choice questions you will face on generative AI topics. The exam generally rewards service recognition, workload identification, and elimination of distractors. Rather than memorizing isolated facts, practice asking three things for every question: What is the business goal? Is the task generation, analysis, retrieval, or model development? Which Azure service best matches that goal at a fundamentals level?

For example, if a scenario requires drafting responses, summarizing long documents, or answering users conversationally, your first instinct should be Azure OpenAI. If the scenario is about searching documents or indexing content for retrieval, Azure AI Search is likely involved. If the scenario is about sentiment or entity extraction, that points to Azure AI Language. If the scenario is about training and managing custom machine learning pipelines, Azure Machine Learning becomes more likely. Many exam questions are designed to test whether you can resist choosing a service that sounds generally related but is not the best fit.

Pay attention to trigger words. Words like generate, compose, rewrite, summarize, chat, and copilot usually indicate generative AI. Phrases like based on company documents suggest grounding and possibly a retrieval component. Phrases like reduce harmful output or ensure trustworthy use indicate responsible AI and safety concepts. A strong test-taking strategy is to identify the core workload before reading every answer choice in detail.

Exam Tip: Eliminate answers that solve only part of the problem. If the user needs generated answers from enterprise documents, a search-only answer is incomplete and a model-only answer may lack grounding. Think in terms of the full scenario requirement.

Another practical strategy is to watch for overengineered distractors. AI-900 is a fundamentals exam. If one answer introduces unnecessary complexity while another aligns cleanly with the described Azure service capability, the simpler and more direct match is often correct. Also remember that Microsoft may test responsible AI by asking which action improves safety, transparency, or reliability rather than asking for a specific service alone.

Finally, during practice sets, do not just mark right or wrong. Explain why the wrong options are wrong. That habit is one of the fastest ways to raise your score because the real exam often presents several technically possible tools, but only one is the best fit for the described requirement. Master that distinction and you will handle the generative AI domain with confidence.

Chapter milestones
  • Explain generative AI concepts for beginners
  • Understand Azure OpenAI and copilot scenarios
  • Review prompts, grounding, and safety concepts
  • Solve AI-900 generative AI practice sets
Chapter quiz

1. A company wants to build a customer support assistant that can draft answers to user questions in natural language and summarize previous chat conversations. Which Azure service should you select first for this generative AI requirement?

Correct answer: Azure OpenAI Service
Azure OpenAI Service is the best choice because the scenario requires generating new natural language responses and summaries, which is a core generative AI capability tested in AI-900. Azure AI Language is used for language analysis tasks such as sentiment analysis, entity recognition, and other NLP features, but it is not the primary service for drafting conversational responses. Azure AI Search helps retrieve and index content, which can support a solution, but by itself it does not generate the response.

2. You are reviewing a proposed copilot solution. The design states that the model will answer employee questions by using company policy documents as additional context at runtime. Which concept does this describe?

Correct answer: Grounding
Grounding is the process of supplying trusted external data, such as company documents, so the model can generate responses that are more relevant and accurate for a specific domain. Classification is a predictive task that assigns labels to data and does not describe providing business context to a generative model. Tokenization relates to how text is split into smaller units for processing, which is important technically but does not describe the use of company policy documents to guide answers.
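As a study aid only (not official exam material), the grounding idea can be sketched in a few lines of Python. The function name and the sample policy text below are illustrative assumptions; the point is simply that trusted document excerpts are supplied to the model as context alongside the user's question:

```python
def build_grounded_prompt(question, policy_snippets):
    """Combine trusted policy excerpts with the user's question.

    Grounding means the model answers using this supplied context
    rather than relying only on what it learned during training.
    """
    context = "\n".join(f"- {snippet}" for snippet in policy_snippets)
    return (
        "Answer using only the company policy excerpts below.\n"
        f"Policy excerpts:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical policy snippet, for illustration only.
snippets = ["Employees accrue 20 vacation days per year."]
prompt = build_grounded_prompt("How many vacation days do I get?", snippets)
print(prompt)
```

In a real solution the snippets would typically come from a retrieval step (for example, Azure AI Search), and the assembled prompt would be sent to a generative model; this sketch only shows where the grounding data fits.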

3. A team is comparing Azure services for an AI-900 exam practice scenario. They need a service that can retrieve relevant documents from an indexed knowledge base to support a chat application, but not generate the final answer by itself. Which service best fits this requirement?

Correct answer: Azure AI Search
Azure AI Search is the correct choice because the requirement is to retrieve relevant content from indexed documents. On the exam, this is a common distinction: retrieval points to search, while generation points to Azure OpenAI. Azure OpenAI Service generates responses, but the question explicitly says the service should retrieve supporting documents rather than generate the final answer by itself. Azure Machine Learning is for building, training, and managing custom machine learning workflows and is not the best fit for enterprise document retrieval in this scenario.

4. A company wants to reduce the risk of harmful or inappropriate outputs from a generative AI application built on Azure. Which approach best aligns with responsible AI concepts for generative workloads?

Correct answer: Use safety filters and content moderation controls
Using safety filters and content moderation controls is the best answer because AI-900 expects candidates to recognize that responsible AI for generative systems includes applying safeguards to reduce harmful, unsafe, or inappropriate content. Replacing prompts with a larger training dataset does not directly address runtime safety in an Azure generative AI solution and is more related to model development. Storing responses in Azure AI Search may support retention or retrieval scenarios, but it does not mitigate harmful output generation.

5. A business analyst asks what a prompt is in the context of Azure OpenAI and copilots. Which answer is correct?

Correct answer: The instruction and context provided to the model
A prompt is the instruction, question, or context given to the model to guide its response. This is a core vocabulary item in the AI-900 generative AI domain. The generated output is called the completion or response, not the prompt. Indexed files used by Azure AI Search may be part of a grounding strategy, but they are not the definition of a prompt.

Chapter 6: Full Mock Exam and Final Review

This final chapter brings the entire AI-900 Practice Test Bootcamp together into one exam-focused review experience. By this point, you should already recognize the major objective areas tested on the exam: AI workloads and responsible AI, machine learning fundamentals on Azure, computer vision workloads, natural language processing workloads, and generative AI concepts on Azure. The purpose of this chapter is not to introduce brand-new material. Instead, it is to help you convert knowledge into exam performance by practicing mixed-domain thinking, recognizing common distractors, and refining your last-minute review strategy.

The AI-900 exam rewards candidates who can identify the best-fit Azure AI service for a business scenario and distinguish between similar but not identical concepts. In many questions, the challenge is not whether you have heard of the service, but whether you can map the scenario to the precise capability being tested. For example, the exam may expect you to separate computer vision from custom vision, speech from text analytics, or classical machine learning from generative AI. This chapter uses the structure of a full mock exam and final review to strengthen that mapping process.

Mock Exam Part 1 and Mock Exam Part 2 are represented here as two full-length mixed-domain practice sets. They are designed to simulate the mental context switching that occurs on the live exam. You may move from a responsible AI scenario to a regression question, then to image analysis, then to Azure OpenAI concepts. That transition is deliberate. The real test often measures whether you can stay calm and identify the core workload regardless of topic order. Weak Spot Analysis then shows you how to review your misses in a structured way. Instead of simply memorizing correct answers, you will learn how to diagnose why a wrong option looked tempting and how to avoid repeating the same error under time pressure.

As you read, keep one principle in mind: AI-900 is a fundamentals exam, but fundamentals does not mean easy. It means the exam targets high-level understanding, terminology precision, and scenario-based service selection. You do not need deep implementation detail, but you do need clarity. If you confuse classification with regression, OCR with image tagging, language detection with translation, or Azure Machine Learning with Azure OpenAI, the exam will expose that confusion quickly.

Exam Tip: On final review, spend more time on distinction-making than memorization. Ask yourself: what is this service for, what is it not for, and what alternative answer would sound plausible but be wrong?

This chapter closes with an Exam Day Checklist focused on pacing, confidence control, and last-minute preparation. The goal is to ensure that you enter the exam ready not only in content knowledge, but also in decision quality. A candidate who knows 80 percent of the material but manages time well and avoids trap answers can outperform a candidate who knows more but second-guesses every item.

Use the six sections that follow as your final pass through the blueprint. Read actively, compare services mentally, and pay close attention to the recurring exam patterns. By the end of this chapter, you should be ready to approach a full mock exam with discipline and the real AI-900 exam with confidence.

Practice note for Mock Exam Part 1, Mock Exam Part 2, and Weak Spot Analysis: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
  • Section 6.1: Full-length mixed-domain mock exam set A
  • Section 6.2: Full-length mixed-domain mock exam set B
  • Section 6.3: Answer explanations and distractor analysis by domain
  • Section 6.4: Final review of Describe AI workloads and ML on Azure
  • Section 6.5: Final review of Computer vision, NLP, and Generative AI workloads on Azure
  • Section 6.6: Exam-day strategy, pacing, confidence control, and final checklist

Section 6.1: Full-length mixed-domain mock exam set A

Your first full-length mixed-domain mock exam should be treated as a realistic rehearsal, not as a casual exercise. Set A should combine all core exam objective areas in an unpredictable order: AI workloads and responsible AI, machine learning fundamentals, computer vision, natural language processing, and generative AI on Azure. The value of a mixed-domain set is that it trains rapid recognition. On the real exam, you are rarely warned in advance which domain comes next, so your skill is not just knowing concepts but classifying the scenario quickly and accurately.

As you work through a set like this, the main task is to identify the workload first. Ask: is the question about predicting a numeric value, assigning categories, grouping similar items, extracting text from images, analyzing sentiment, converting speech to text, translating content, or generating new content from prompts? Many wrong answers become easier to eliminate once you label the workload correctly. For instance, if a scenario requires assigning emails to categories, that points to classification rather than regression. If it asks for detecting objects or reading printed text from an image, that signals computer vision capabilities rather than language services.

Set A should also force you to distinguish Azure service families. Azure Machine Learning supports building and managing machine learning solutions. Azure AI Vision supports image analysis tasks. Azure AI Language supports text-based language workloads. Azure AI Speech supports speech recognition and speech synthesis. Azure OpenAI focuses on generative AI models and prompt-driven experiences. The exam repeatedly tests whether you can match these services to the business need without overcomplicating the problem.

Exam Tip: During a mock set, avoid spending too long on any one item early in the exam. If two options seem close, mark the item mentally, choose your best answer based on the core workload, and move on. Fundamentals exams reward broad consistency more than perfection on a single hard question.

After completing Set A, do not review only the questions you missed. Also review the ones you answered correctly but felt unsure about. Those are often your real weak spots because they reveal unstable understanding. Keep a list of recurring confusion areas such as OCR versus object detection, translation versus summarization, classification versus clustering, or responsible AI principles such as fairness, transparency, reliability and safety, privacy and security, inclusiveness, and accountability.

The best use of Set A is diagnostic. It should show not just your score, but your thinking patterns. Did you rush through service-selection questions? Did you confuse broad AI concepts with Azure product names? Did generative AI scenarios cause you to rely on buzzwords instead of clear definitions? This section is your first mirror. Use it to see how you actually perform under mixed-topic pressure.

Section 6.2: Full-length mixed-domain mock exam set B

Set B should be taken after you have reviewed the patterns from Set A, because its purpose is not simply more practice but improved decision-making. A second mixed-domain exam set helps confirm whether you are correcting your original mistakes or merely repeating them in new wording. Since AI-900 questions often restate the same concepts through different scenarios, progress is measured by better recognition of the concept beneath the wording.

In Set B, pay special attention to scenario qualifiers. Terms such as identify, detect, classify, predict, generate, summarize, translate, cluster, and extract usually signal the tested concept. A common trap on AI-900 is reading too fast and responding to a familiar term instead of the actual business requirement. For example, a question may mention text, but the true need could be sentiment analysis rather than translation. A question may mention images, but the requirement could be OCR rather than image tagging. A question may mention chat, but the intended answer could involve conversational AI or generative AI depending on whether the system follows scripted intents or produces flexible model-generated responses.

Set B is also the right time to refine your handling of responsible AI. The exam expects you to recognize the principles at a practical level. If a scenario concerns biased outputs across user groups, think fairness. If it concerns explaining how a model reached a decision, think transparency. If it concerns safe and dependable operation, think reliability and safety. If it concerns safeguarding personal data, think privacy and security. If it concerns accessibility or broad usability, think inclusiveness. If it concerns human oversight and responsibility, think accountability.
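The concern-to-principle mapping above can be written out as a small lookup table, which is how many candidates memorize it. This is purely a study aid; the concern phrases are paraphrases of the scenarios described in this section, not official exam wording:

```python
# Study aid: map the concern a scenario emphasizes to the
# Microsoft responsible AI principle it points to.
PRINCIPLE_BY_CONCERN = {
    "biased outputs across user groups": "fairness",
    "explaining how a model reached a decision": "transparency",
    "safe and dependable operation": "reliability and safety",
    "safeguarding personal data": "privacy and security",
    "accessibility and broad usability": "inclusiveness",
    "human oversight and responsibility": "accountability",
}

print(PRINCIPLE_BY_CONCERN["safeguarding personal data"])  # privacy and security
```

If you can recite both directions of this table (concern to principle and principle to concern), the responsible AI distractors in Set B become much easier to eliminate.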

Exam Tip: If two answer choices both sound technically possible, choose the one that most directly satisfies the stated requirement with the least unnecessary complexity. Fundamentals exams usually favor the clearest fit over an advanced but less targeted option.

Another goal of Set B is confidence calibration. You should notice whether your uncertainty is shrinking. By now, you want fewer guesses based on vague familiarity and more answers based on exact domain mapping. Review time spent per item as well. If your pace is inconsistent, that is a warning sign for exam day. This second mock set should leave you with a realistic picture of readiness: not only what you know, but how steadily you can apply it.

Section 6.3: Answer explanations and distractor analysis by domain

This section is where score improvement actually happens. Many candidates waste mock exams by checking only whether an answer was right or wrong. The more useful approach is distractor analysis: understanding why each wrong option was included and what misunderstanding it targets. The AI-900 exam is built around plausible alternatives, so your ability to reject distractors is just as important as your ability to identify the correct answer.

In the AI workloads and responsible AI domain, distractors often swap one principle for another. A scenario about model explanation may tempt you toward accountability, but the more precise principle is transparency. A scenario about data protection may sound like fairness or trustworthiness, but privacy and security is the better fit. The exam tests whether you know the principles as functional ideas, not just as a memorized list.

In machine learning, the most common distractors confuse regression, classification, and clustering. Regression predicts a numeric value. Classification predicts a label or category. Clustering groups items by similarity without labeled outcomes. If you miss these, the issue is usually not Azure knowledge but ML fundamentals. Another trap is confusing training a model with consuming a prebuilt AI service. Azure Machine Learning is for building and managing ML models, while many Azure AI services provide ready-made capabilities for common tasks without custom model training.
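The regression/classification/clustering heuristic can be condensed into a tiny decision rule. The function name and category strings below are illustrative assumptions, chosen to mirror the exam question pattern of "what kind of answer does the scenario ask for?":

```python
def ml_task_for(required_answer):
    """Map the kind of answer a scenario asks for to the ML task type.

    Exam heuristic: a numeric value signals regression, a label from
    known categories signals classification, and grouping records
    without predefined labels signals clustering.
    """
    rules = {
        "numeric value": "regression",
        "category label": "classification",
        "groups without labels": "clustering",
    }
    return rules[required_answer]

print(ml_task_for("numeric value"))         # regression
print(ml_task_for("category label"))        # classification
print(ml_task_for("groups without labels")) # clustering
```

When a question's verb is "predict a sales amount," translate it to "numeric value" before looking at the answer choices, and the distractors lose their pull.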

For computer vision, distractors often blur together image classification, object detection, face-related capabilities, OCR, and general image analysis. The key is to focus on the required output. If the scenario needs text extracted from an image, think OCR. If it needs identification of objects and possibly their location, think object detection. If it needs a general description or tags for image content, think image analysis. Exam writers frequently rely on candidates choosing a broad image-related answer rather than the precise capability.
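The "focus on the required output" rule for computer vision can be sketched the same way. The output labels here are simplified paraphrases for study purposes, not product terminology:

```python
def vision_capability_for(required_output):
    """Map a vision scenario's required output to the capability.

    Exam heuristic: text from an image points to OCR, objects and
    their locations point to object detection, and descriptive tags
    point to general image analysis.
    """
    if required_output == "text from an image":
        return "OCR"
    if required_output == "objects and their locations":
        return "object detection"
    if required_output == "tags or a description":
        return "image analysis"
    return "re-read the scenario requirement"

print(vision_capability_for("text from an image"))  # OCR
```

Notice that every branch starts from the output the business needs, not from the word "image" in the question stem; that is exactly the habit the distractors are written to exploit.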

In NLP, common distractors include language detection versus translation, key phrase extraction versus summarization, and speech services versus text services. If the input is audio and the requirement is transcription or spoken output, Azure AI Speech is central. If the input is text and the goal is sentiment, entities, key phrases, or classification, think Azure AI Language. Translation can overlap across modalities in scenarios, so read carefully.
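The modality rule for NLP questions also reduces to a short routing sketch. This is a simplified study heuristic, not a complete service matrix; the goal strings are assumptions chosen to match the examples in this section:

```python
def nlp_service_for(input_modality, goal):
    """Route an NLP scenario to a service family by modality.

    Exam heuristic: audio-centric work points to Azure AI Speech,
    while text analysis tasks point to Azure AI Language.
    """
    if input_modality == "audio":
        return "Azure AI Speech"
    if goal in {"sentiment", "entities", "key phrases", "summarization"}:
        return "Azure AI Language"
    return "identify the input and output modality first"

print(nlp_service_for("audio", "transcription"))  # Azure AI Speech
print(nlp_service_for("text", "sentiment"))       # Azure AI Language
```

Translation deliberately falls through to the fallback branch here, because on the exam it can be a text scenario or a speech scenario; as the paragraph above says, you must identify the modality before choosing.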

Generative AI distractors frequently play on confusion between classical AI and foundation-model use cases. If the task is generating, rewriting, summarizing, extracting with prompt guidance, or supporting copilot experiences, Azure OpenAI concepts are more likely. But if the task is traditional prediction from structured historical data, that is still machine learning, not generative AI.

Exam Tip: When reviewing explanations, write down the exact clue that should have led you to the correct answer. This trains pattern recognition far better than re-reading the answer key.

Section 6.4: Final review of Describe AI workloads and ML on Azure

This final review section covers the first major block of exam content: general AI workloads, responsible AI, and machine learning on Azure. At the broadest level, AI workloads include machine learning, computer vision, natural language processing, conversational AI, and generative AI. The exam expects you to recognize these categories from scenarios and understand that Azure offers both prebuilt AI services and platforms for custom model development.

Responsible AI is not a side topic. It is a recurring exam objective. You should be able to identify the principles and apply them to real-world concerns. Fairness focuses on avoiding harmful bias. Reliability and safety concern dependable performance and risk reduction. Privacy and security deal with protection of sensitive data and controlled access. Inclusiveness means designing for diverse user needs and abilities. Transparency involves understanding and explaining AI behavior. Accountability refers to human responsibility and governance over AI systems. Questions may test these through business outcomes rather than definitions, so always translate the scenario into the principle being emphasized.

For machine learning, know the core model types. Regression predicts continuous numeric values, such as prices or demand. Classification predicts categories, such as approved or denied, spam or not spam, or product type. Clustering groups similar data points without known labels. The exam often includes quick scenario descriptions where one verb reveals the answer if you slow down enough to notice it.

You should also understand the role of Azure Machine Learning at a fundamentals level. It supports creating, training, evaluating, deploying, and managing machine learning models. You do not need deep implementation detail, but you do need to know it is the Azure platform for ML lifecycle tasks. Distinguish this from prebuilt services that solve common AI tasks directly without full custom model development.

Exam Tip: If the scenario centers on historical labeled data being used to predict outcomes, think machine learning. If it centers on ready-made recognition of text, images, or speech, think Azure AI services first.

One final trap in this domain is over-reading technical depth into the question. AI-900 rarely asks for advanced data science detail. It tests whether you can identify the right concept and Azure capability at a business-solution level. Keep your answers aligned to fundamentals, not implementation complexity.

Section 6.5: Final review of Computer vision, NLP, and Generative AI workloads on Azure

Computer vision, NLP, and generative AI form a large and highly testable portion of the AI-900 exam because they include many recognizable business scenarios. Your task in final review is to sharpen the boundaries between these workloads so that similar-sounding answer choices no longer confuse you.

For computer vision on Azure, remember the distinction between analyzing visual content, extracting text, and identifying objects or features in images. Azure AI Vision supports common image analysis scenarios, including tagging and describing image content. OCR-related tasks focus on reading text from images or documents. Object-focused tasks involve detecting and locating items within an image. The exam may phrase these in everyday business terms, so convert the requirement into the technical output being asked for.

For NLP on Azure, split the area into text workloads, speech workloads, translation, and conversational AI. Azure AI Language is associated with text analysis tasks such as sentiment analysis, entity recognition, key phrase extraction, summarization, and other language understanding capabilities. Azure AI Speech is used when audio is central, including speech-to-text, text-to-speech, and speech translation scenarios. Translation may appear as either text or speech depending on the prompt, so always identify the input and output modality.

Conversational AI can also appear in exam scenarios. Be careful not to assume every bot scenario is generative AI. Traditional conversational solutions may rely on structured intents and predefined flows, while generative AI solutions use large language models to produce flexible, context-aware output. That distinction matters.

Generative AI on Azure centers on foundation models, prompts, copilots, content generation, summarization, rewriting, and question-answering with model-generated responses. Azure OpenAI concepts belong here. You should understand what a prompt is, what a copilot experience is at a high level, and why responsible use is essential. Generative AI can produce helpful outputs quickly, but it can also introduce risks such as inaccuracies, harmful content, and unintended disclosure if not governed carefully.

Exam Tip: On generative AI questions, watch for answer choices that sound futuristic but do not match the actual need. If the requirement is prediction from structured business data, that is not automatically a generative AI use case.

The exam tests your ability to select the right service family and to avoid collapsing multiple AI domains into one. Keep each workload distinct in your mind, and many questions become much easier.

Section 6.6: Exam-day strategy, pacing, confidence control, and final checklist

Strong content knowledge can still be undermined by poor exam execution. Your final preparation should therefore include a practical exam-day strategy. Start with pacing. The AI-900 exam is a fundamentals exam, but that can lead candidates to move either too slowly from overthinking or too quickly from overconfidence. Your target should be steady progress with enough time left to review marked items. Do not let a handful of ambiguous questions consume energy early.

Confidence control is equally important. Many candidates lose points not because they lack knowledge, but because they change correct answers after second-guessing themselves. If you selected an answer based on a clear keyword-to-concept match, trust that reasoning unless you later notice a specific detail you missed. Avoid emotional answer changes. Replace them with evidence-based review.

On the day of the exam, read each question for the requirement before looking at the answer choices. This reduces the effect of distractors. Then eliminate options that belong to the wrong domain. If the scenario is speech-based, text-only services are likely distractors. If the requirement is generative output, a classical ML answer may be off target. If the issue is responsible AI governance, a technical service name may not answer the question at all.

  • Review the core differences between regression, classification, and clustering.
  • Review responsible AI principles and how they appear in practical scenarios.
  • Review Azure service mapping: Azure Machine Learning, Azure AI Vision, Azure AI Language, Azure AI Speech, and Azure OpenAI.
  • Sleep well and avoid cramming unfamiliar details at the last minute.
  • Use calm breathing if you hit a difficult question set.
  • Check flagged items only if time remains and your review has a clear purpose.

Exam Tip: Your final checklist should fit on one page or in one mental summary. If your review notes are too long to scan quickly, they are no longer helping you on exam day.

Approach the exam as a pattern-recognition exercise grounded in fundamentals. You do not need expert-level implementation detail. You need clear distinctions, disciplined reading, and enough confidence to choose the best-fit answer and move forward. That is the mindset that turns preparation into a passing score.

Chapter milestones
  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist
Chapter quiz

1. A company wants to build a solution that predicts the future sales amount for each retail store based on historical transaction data, seasonality, and local events. Which type of machine learning problem is this?

Correct answer: Regression
Regression is correct because the goal is to predict a numeric value, the future sales amount. Classification would be used to predict a category or label, such as whether a store will meet a target or not. Clustering is used to group similar records without predefined labels, which does not match the requirement to predict a specific numeric outcome.

2. A support team needs to process customer emails and identify the overall sentiment of each message as positive, neutral, or negative. Which Azure AI capability is the best fit?

Correct answer: Azure AI Language sentiment analysis
Azure AI Language sentiment analysis is correct because it is designed to evaluate text and determine emotional tone such as positive, neutral, or negative. Azure AI Vision image analysis is for visual content such as images, not email text. Azure AI Speech text-to-speech converts written text into spoken audio and does not analyze sentiment.

3. A retailer wants an application that reads text from scanned receipts so the text can be stored and searched. Which capability should you choose?

Correct answer: Optical character recognition (OCR)
Optical character recognition (OCR) is correct because the requirement is to extract readable text from scanned documents or images. Image classification would assign an overall label to an image, such as receipt or invoice, but would not return the text content. Object detection identifies and locates objects within an image, which is different from reading printed or handwritten text.

4. A startup wants to build a chatbot that can generate draft marketing copy from prompts. The team specifically wants a generative AI service on Azure rather than a traditional predictive model. Which service should they use?

Correct answer: Azure OpenAI Service
Azure OpenAI Service is correct because it provides generative AI models for scenarios such as text generation, summarization, and conversational experiences. Azure Machine Learning is a broader platform for building and managing machine learning models, but it is not itself the best answer when the exam asks for a specific Azure generative AI service. Azure AI Vision is intended for image-based workloads and does not match text generation requirements.

5. During final review for the AI-900 exam, a candidate notices they often miss questions because two answer choices seem similar, such as OCR versus image tagging or classification versus regression. According to good exam strategy, what should the candidate do next?

Correct answer: Spend review time identifying what each service or concept is for and what it is not for
Spending review time on distinctions is correct because AI-900 commonly tests best-fit service selection and closely related concepts. Understanding what a service does and does not do helps eliminate plausible distractors. Memorizing names without scenario mapping is weaker because the exam is scenario-based and often uses similar-sounding options. Skipping mixed-topic practice is also wrong because the real exam intentionally switches domains, and practicing that transition improves exam readiness.