Microsoft AI-900 Azure AI Fundamentals Prep

AI Certification Exam Prep — Beginner

Clear, beginner-friendly AI-900 prep built for exam success.

Beginner · AI-900 · Microsoft · Azure AI Fundamentals · Azure

Prepare with confidence for Microsoft AI-900

Microsoft AI-900: Azure AI Fundamentals is one of the most approachable entry points into AI certification, but beginners still need a clear roadmap. This course is designed specifically for non-technical professionals who want to understand the exam, learn the official objectives in plain language, and build enough confidence to pass on exam day. Whether you work in business, operations, sales, project management, education, or administration, this blueprint helps you study the Microsoft way without assuming a programming background.

The course aligns directly to the official AI-900 exam domains from Microsoft: Describe AI workloads; Fundamental principles of ML on Azure; Computer vision workloads on Azure; NLP workloads on Azure; and Generative AI workloads on Azure. Each chapter is structured to support certification-focused learning, with clear milestones, objective-based sectioning, and exam-style reinforcement. If you are ready to start your certification journey, you can register for free and begin building your study plan.

What this course covers

Chapter 1 introduces the certification itself. You will learn how the AI-900 exam is organized, what kinds of questions to expect, how registration works, how scoring is generally interpreted, and how to create a realistic study schedule as a beginner. This chapter is especially useful for learners with no prior certification experience.

Chapters 2 through 5 focus on the official exam domains. The material is grouped to help you move from broad AI concepts to more specific Azure AI service scenarios. You will start by understanding common AI workloads and Microsoft’s Responsible AI principles, then move into core machine learning concepts such as regression, classification, clustering, training, validation, and evaluation. From there, the course explores computer vision workloads on Azure, natural language processing workloads on Azure, and finally generative AI workloads on Azure, including Azure OpenAI concepts, prompts, copilots, and responsible generative AI practices.

  • Explain AI workloads in business-friendly terms
  • Understand machine learning fundamentals without coding
  • Recognize Azure services used for vision, language, speech, and document scenarios
  • Understand generative AI concepts relevant to the latest AI-900 scope
  • Practice with exam-style reasoning and answer elimination strategies

Why this blueprint helps you pass

Many beginners struggle not because the AI-900 content is too advanced, but because the exam expects you to distinguish between similar terms, services, and use cases. This course solves that problem by organizing the material around the exact objective names used by Microsoft and by emphasizing scenario recognition. Instead of overwhelming you with deep engineering detail, it focuses on the conceptual understanding needed to select the best answer in certification questions.

Every chapter includes milestones that reflect measurable progress, so you always know what you are expected to understand before moving forward. The section structure also makes review easier, especially if you need to revisit a specific domain such as computer vision or generative AI shortly before your exam date. Chapter 6 then brings everything together with a full mock exam chapter, weak-spot analysis, final review guidance, and exam-day tactics.

Built for non-technical learners

This course is intentionally beginner-friendly. You do not need prior certifications, cloud experience, or machine learning knowledge. Basic IT literacy is enough. The explanations are designed to help you build practical exam understanding first, then reinforce that knowledge through domain-based practice. That makes this blueprint ideal for professionals changing careers, students exploring cloud certifications, or team members who need foundational AI literacy with Microsoft Azure.

If you want to continue exploring related training after AI-900, you can also browse all courses on the Edu AI platform. This course gives you a complete structure to prepare effectively, stay focused on the official objectives, and approach the Microsoft AI-900 exam with clarity.

Course outcome

By the end of this exam-prep course, you will understand the Microsoft AI-900 objective set, know how to interpret common exam scenarios, and have a repeatable review strategy for final preparation. Most importantly, you will be able to connect AI concepts to the right Azure services and answer certification questions with greater confidence and accuracy.

What You Will Learn

  • Describe AI workloads and considerations, including common AI scenarios and responsible AI concepts tested on the AI-900 exam
  • Explain the fundamental principles of machine learning on Azure, including regression, classification, clustering, and model evaluation
  • Identify computer vision workloads on Azure and choose the right Azure AI services for image analysis, OCR, facial analysis, and custom vision scenarios
  • Identify natural language processing workloads on Azure, including sentiment analysis, key phrase extraction, language understanding, speech, and translation
  • Describe generative AI workloads on Azure, including copilots, prompts, foundation models, Azure OpenAI concepts, and responsible generative AI practices
  • Apply exam strategy, question analysis, and mock exam practice across all official Microsoft AI-900 exam domains

Requirements

  • Basic IT literacy and comfort using the internet and web applications
  • No prior certification experience is needed
  • No programming or data science background is required
  • An interest in Microsoft Azure and AI concepts is helpful but not mandatory

Chapter 1: AI-900 Exam Foundations and Study Strategy

  • Understand the AI-900 exam format and objectives
  • Plan your registration, scheduling, and test-day logistics
  • Build a beginner-friendly study strategy by domain
  • Establish your baseline with readiness checkpoints

Chapter 2: Describe AI Workloads and Responsible AI

  • Recognize core AI workloads and business use cases
  • Differentiate machine learning, computer vision, NLP, and generative AI
  • Explain responsible AI principles in Microsoft exam language
  • Practice exam-style questions on Describe AI workloads

Chapter 3: Fundamental Principles of Machine Learning on Azure

  • Understand foundational machine learning concepts for AI-900
  • Compare regression, classification, and clustering
  • Interpret training, validation, metrics, and overfitting basics
  • Practice exam-style questions on ML on Azure

Chapter 4: Computer Vision Workloads on Azure

  • Identify common computer vision tasks and outcomes
  • Map computer vision scenarios to Azure AI services
  • Understand image analysis, OCR, face, and custom vision concepts
  • Practice exam-style questions on Computer vision workloads on Azure

Chapter 5: NLP and Generative AI Workloads on Azure

  • Explain core NLP workloads and language AI capabilities
  • Choose Azure services for speech, text, translation, and conversational AI
  • Understand generative AI workloads, prompts, and Azure OpenAI concepts
  • Practice exam-style questions on NLP and Generative AI workloads on Azure

Chapter 6: Full Mock Exam and Final Review

  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist

Daniel Mercer

Microsoft Certified Trainer and Azure AI Engineer Associate

Daniel Mercer is a Microsoft Certified Trainer with extensive experience teaching Azure, AI, and cloud fundamentals to first-time certification candidates. He specializes in translating Microsoft exam objectives into simple, practical learning paths that help beginners build confidence and pass certification exams.

Chapter 1: AI-900 Exam Foundations and Study Strategy

The Microsoft AI-900 Azure AI Fundamentals exam is designed to validate foundational knowledge of artificial intelligence concepts and the Azure services that support them. This is not an exam for deep coding, advanced mathematics, or solution architecture at expert level. Instead, it tests whether you can recognize core AI workloads, match business scenarios to the correct Azure AI capabilities, and distinguish between related concepts such as machine learning, computer vision, natural language processing, and generative AI. Because the exam is fundamentals-level, many candidates underestimate it. That is a mistake. Microsoft often rewards careful reading, precise terminology, and an understanding of what a service is intended to do.

This chapter gives you the foundation for the rest of the course by showing you how the exam is organized, how to prepare like a beginner without feeling overwhelmed, and how to avoid the most common traps that cause preventable score loss. Throughout the AI-900 exam, success depends less on memorizing obscure facts and more on recognizing patterns. For example, if a scenario is about extracting printed or handwritten text from an image, the exam is testing OCR-related understanding. If the scenario is about predicting a numeric value, it is probably testing regression. If the prompt asks about fairness, reliability, transparency, accountability, privacy, or security, it is likely probing responsible AI concepts. Learning to identify those signals is one of the most valuable exam skills you can build.

The objectives of AI-900 align well with the major Azure AI workload families. You are expected to describe AI workloads and considerations, explain basic machine learning principles, identify computer vision scenarios, identify natural language processing scenarios, and understand generative AI concepts in Azure. The exam may also test whether you can tell the difference between prebuilt Azure AI services and custom model development options. In other words, this is a matching exam as much as a knowledge exam: match the need to the concept, the concept to the Azure service, and the service to the scenario.

Exam Tip: Do not study AI-900 as a list of disconnected definitions. Study it as a set of business problems and Azure solutions. Microsoft frequently describes needs in plain business language rather than textbook terminology.

This chapter also helps you plan registration, scheduling, and test-day logistics. Those items seem administrative, but they matter. Many candidates perform worse because they are rushed, unsure about identification requirements, or unfamiliar with the online proctoring process. Good preparation includes removing logistical uncertainty before exam day.

Finally, this chapter establishes your baseline through readiness checkpoints. A strong exam strategy begins with honest assessment: Which domains already make sense to you, and which ones blur together? Can you explain, in simple language, the difference between classification and clustering? Can you identify when to use sentiment analysis versus key phrase extraction? Can you distinguish traditional AI workloads from generative AI workloads? If not yet, that is normal. The purpose of this chapter is to create a plan so that each domain becomes manageable, reviewable, and test-ready.

As you move through this course, keep one principle in mind: fundamentals exams reward clarity. If you can explain an Azure AI concept simply, compare it to nearby concepts, and identify where it fits in a scenario, you are studying the right way. This chapter is your roadmap for doing exactly that.

Practice note for each milestone in this chapter (understanding the exam format, planning your registration and test-day logistics, and building a domain-based study strategy): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 1.1: Introduction to Microsoft Azure AI Fundamentals and the AI-900 certification
Section 1.2: Official exam domains, weighting logic, and how Microsoft frames objectives
Section 1.3: Registration process, exam delivery options, identification, and rescheduling rules
Section 1.4: Scoring model, question types, passing strategy, and time management
Section 1.5: Study planning for beginners using domain-based review and repetition
Section 1.6: Common exam pitfalls, confidence building, and readiness self-assessment

Section 1.1: Introduction to Microsoft Azure AI Fundamentals and the AI-900 certification

AI-900 is Microsoft’s entry-level certification for candidates who want to demonstrate basic knowledge of AI concepts and related Azure services. It is suitable for students, business stakeholders, technical beginners, and IT professionals expanding into cloud AI. The exam does not assume that you are a data scientist or software engineer, but it does expect accurate understanding of terminology and service purpose. That distinction matters. You may not need to build models in code, yet you must still know what types of models exist, what kinds of problems they solve, and how Azure products map to those problems.

The exam is fundamentally about AI workloads and recognition. Microsoft wants to know whether you can identify common scenarios such as image classification, text analysis, speech recognition, conversational AI, anomaly detection, or generative AI assistance. You should also understand responsible AI concepts because Microsoft consistently frames AI within ethical and operational guardrails. Topics such as fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability are not side notes; they are part of the tested foundation.

From an exam-prep perspective, think of AI-900 as a vocabulary-and-scenarios exam. A business statement like “predict future sales” points toward regression. A requirement to “group customers by similarity” points toward clustering. A need to “extract text from receipts” points toward OCR. A prompt about “creating content from a natural language request” points toward generative AI and foundation model usage. If you can spot these clues quickly, you will answer more confidently.
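The clue-spotting habit described above can be turned into a simple self-study aid. The sketch below is purely illustrative: the signal phrases and workload labels are our own study shorthand, not an official Microsoft taxonomy, and a real exam question will paraphrase these signals rather than quote them.

```python
# Illustrative study aid: a toy mapping from scenario wording to the
# AI-900 workload it usually signals. Phrases and labels are our own
# shorthand, not official Microsoft terminology.

SIGNALS = {
    "predict future sales": "regression (machine learning)",
    "group customers by similarity": "clustering (machine learning)",
    "extract text from": "OCR (computer vision)",
    "positive or negative": "sentiment analysis (NLP)",
    "creating content from a natural language request": "generative AI",
}

def classify_scenario(description: str) -> str:
    """Return the first workload whose signal phrase appears in the text."""
    text = description.lower()
    for signal, workload in SIGNALS.items():
        if signal in text:
            return workload
    return "unclassified: reread the scenario for its core task"

print(classify_scenario("We need to extract text from scanned receipts"))
```

Drilling yourself with flashcards built this way (scenario on the front, workload on the back) reinforces exactly the recognition skill the exam rewards.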

Exam Tip: Fundamentals-level exams often include distractors that sound advanced but are not appropriate for the stated need. Choose the simplest correct Azure AI capability that directly solves the scenario.

Another key point is that AI-900 is not only about technology names. It tests conceptual understanding. You should know what machine learning is, what computer vision is, what NLP is, and how generative AI differs from traditional predictive AI. If you only memorize service labels without understanding the problem types they address, the wording of scenario-based questions can still mislead you. Build from concepts first, then attach Azure services to those concepts.

Section 1.2: Official exam domains, weighting logic, and how Microsoft frames objectives

Microsoft organizes AI-900 around several major domains, and each domain represents a category of skills rather than a single topic. These domains typically include describing AI workloads and considerations, describing fundamental machine learning principles on Azure, describing features of computer vision workloads on Azure, describing features of natural language processing workloads on Azure, and describing features of generative AI workloads on Azure. While exact percentages may change over time, Microsoft publishes objective areas and relative weightings so you can prioritize study.

The weighting logic matters because candidates often over-study familiar material and under-study broad domains with many scenario variations. For example, a learner with some machine learning exposure may spend too much time on regression and classification while neglecting NLP and generative AI service identification. On the exam, however, weighted domains can produce a larger number of questions than expected, especially when subtopics are spread across multiple scenario types.

Microsoft also frames objectives with action verbs such as describe, identify, recognize, and select. Those verbs are clues. They tell you that the exam usually emphasizes recognition and applied understanding rather than implementation detail. If the objective says describe responsible AI principles, you should expect to explain or identify examples of fairness, transparency, and accountability. If the objective says identify Azure AI services for a workload, expect scenario matching rather than coding syntax.

A practical study strategy is to break each domain into three layers: concept, scenario, and service. First, define the concept in plain language. Second, identify the business scenarios that belong to it. Third, connect the relevant Azure offering. This method mirrors how Microsoft writes questions.

  • Concept: What is the core idea being tested?
  • Scenario: What business need or data type signals this idea?
  • Service: Which Azure AI capability best fits?
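One way to apply the three-layer method is to keep your notes as structured records rather than loose prose. The sketch below is one possible format, not a prescribed one; the two example entries are ours, and while Azure AI Language and Azure AI Vision are current Azure service names, you should always verify service names against the official exam objectives before relying on them.

```python
# A minimal sketch of concept/scenario/service study cards, assuming
# the reader keeps notes in this three-layer format. Example entries
# are illustrative; verify service names against the official objectives.
from dataclasses import dataclass

@dataclass
class StudyCard:
    """One concept-scenario-service note, mirroring the three study layers."""
    concept: str   # core idea being tested
    scenario: str  # business need or data type that signals it
    service: str   # Azure AI capability that fits

cards = [
    StudyCard(
        concept="Sentiment analysis",
        scenario="Determine whether customer feedback is positive or negative",
        service="Azure AI Language",
    ),
    StudyCard(
        concept="OCR",
        scenario="Extract printed or handwritten text from an image",
        service="Azure AI Vision",
    ),
]

for card in cards:
    print(f"{card.concept}: {card.scenario} -> {card.service}")
```

Reviewing cards layer by layer (cover the service, recall it from the scenario) mirrors how Microsoft writes questions.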

Exam Tip: When two answer choices seem similar, ask which one matches the exact objective wording. Microsoft often distinguishes between understanding a workload category and selecting a specific service within that category.

Be careful with broad wording. “Analyze text” could involve several NLP tasks, but “determine whether customer feedback is positive or negative” specifically signals sentiment analysis. “Extract important terms” signals key phrase extraction. “Translate speech” combines speech and translation concepts. Train yourself to notice the precise action being requested.

Section 1.3: Registration process, exam delivery options, identification, and rescheduling rules

Administrative readiness is part of exam readiness. Registering for AI-900 typically begins through Microsoft’s certification portal, where you select the exam, choose your delivery method, and schedule an appointment with the testing provider. Delivery options commonly include in-person testing at an authorized center or online proctored testing from an approved location. Both are valid, but your choice should reflect your test-taking habits and environment.

In-person testing is often preferable for candidates who want a controlled setting with minimal home distractions. Online testing can be convenient, but it requires strict compliance with room, desk, device, and identity rules. Before scheduling, read current policies carefully rather than relying on assumptions from other exams. Requirements can change, and fundamentals candidates sometimes lose time or appointments because they overlook technical checks or check-in rules.

Identification requirements are especially important. The name in your certification profile should match your government-issued identification exactly enough to satisfy the provider’s rules. Mismatches involving middle names, abbreviations, or ordering can create major problems on exam day. If anything looks inconsistent, resolve it early instead of hoping it will be fine.

Rescheduling and cancellation rules also deserve attention. Providers usually allow changes before a stated cutoff window, but late changes may be restricted or penalized. Do not schedule your exam casually. Choose a date that gives you enough study time and a small buffer for review. If possible, schedule your test when you know your calendar is stable.

Exam Tip: Treat test-day logistics as part of your study plan. A well-prepared candidate can still underperform if they begin the exam stressed by ID issues, software setup, or last-minute timing confusion.

For online exams, complete any required system checks well in advance. Understand whether external monitors, notes, phones, watches, or background noise are prohibited. For test-center delivery, know the travel time, parking situation, and check-in expectations. The more predictable the logistics, the more mental energy you save for the actual exam. Certification success is not only about knowledge; it is also about reducing avoidable friction.

Section 1.4: Scoring model, question types, passing strategy, and time management

Microsoft certification exams use scaled scoring, and AI-900 commonly uses a passing score threshold of 700 on a scale that reports up to 1000. Candidates should understand that scaled scoring does not necessarily mean every question contributes equally or in the same visible way. The practical lesson is simple: aim well above the minimum instead of trying to calculate a narrow passing line. Build a margin through balanced preparation across all domains.

Question types may include traditional multiple-choice items, multiple-select items, scenario-based prompts, matching-style formats, or statement evaluation formats. Because the exam is fundamentals-level, the real challenge is often not technical depth but precision. A single overlooked phrase such as “best service,” “most appropriate workload,” or “numeric prediction” can change the answer.

Your passing strategy should focus on disciplined reading. First, identify the core task: classify, predict, extract, detect, translate, summarize, generate, or evaluate. Second, identify the data type involved: text, image, speech, tabular data, or prompts to a generative model. Third, eliminate answers that belong to a different workload category. This process reduces confusion quickly.
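The elimination step in that three-part discipline can be sketched as a tiny filter. This is a study-aid illustration only: the answer options and category labels below are invented, and real exam items will require judgment within a category, not just between categories.

```python
# A sketch of the elimination step described above: discard answer
# options that belong to a different workload category than the task.
# Options and category labels are invented for illustration.

def eliminate(options, task_category):
    """Keep only answer options whose workload category matches the task."""
    return [name for name, category in options if category == task_category]

# Scenario: "predict next month's energy usage" -> numeric prediction,
# so the task category is machine learning.
options = [
    ("Sentiment analysis", "nlp"),
    ("Regression model", "machine learning"),
    ("Image classification", "computer vision"),
    ("Clustering", "machine learning"),
]

remaining = eliminate(options, "machine learning")
print(remaining)  # two ML options survive; "numeric prediction" then points to regression
```

Notice that elimination rarely finishes the job: it narrows four options to two, after which the precise wording ("numeric value") decides between regression and clustering.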

Time management matters even on fundamentals exams. Many candidates move too quickly because questions seem easier at first glance. That leads to avoidable mistakes on subtle wording. On the other hand, spending too long on one uncertain item can create end-of-exam panic. Move steadily. If a question is unclear, eliminate obvious wrong choices, make the best supported choice, and continue.

Exam Tip: Watch for answer choices that are technically related to AI but not appropriate for the scenario. The exam often tests whether you can pick the most direct fit, not merely a plausible technology.

A strong exam-day approach is to reserve a little time for final review if the platform allows it. Use that time to revisit questions where two Azure services seemed close. In many cases, the correct answer becomes clearer once you re-check the exact business requirement. Good scoring on AI-900 comes from consistent accuracy across many medium-difficulty decisions, not from solving a few difficult items perfectly.

Section 1.5: Study planning for beginners using domain-based review and repetition

Beginners often fail not because the material is too advanced, but because they study without structure. The best approach for AI-900 is domain-based review. Instead of jumping randomly between topics, organize your study around the official objective areas. This keeps your preparation aligned with the exam and helps you notice connections between concepts. For example, after studying machine learning basics, it becomes easier to contrast those ideas with generative AI workloads. After studying computer vision, it becomes easier to separate image analysis from OCR and facial analysis scenarios.

A practical weekly method is to assign one main domain per study block and revisit previous domains through spaced repetition. On the first pass, focus on definitions and examples. On the second pass, focus on comparing similar concepts. On the third pass, focus on scenario recognition and service selection. This layered repetition is especially effective for AI-900 because many wrong answers are built from partially correct concepts used in the wrong context.
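The one-domain-per-block, three-pass method above can be laid out mechanically. The sketch below is one possible plan under the assumptions stated in its comments, not an official study calendar; the domain names follow the AI-900 objective areas, and the pass labels are ours.

```python
# A minimal sketch of the domain-per-block, three-pass schedule described
# above. Domain names follow the AI-900 objective areas; the block layout
# is one possible plan, not an official study calendar.

DOMAINS = [
    "AI workloads and considerations",
    "Machine learning on Azure",
    "Computer vision workloads",
    "NLP workloads",
    "Generative AI workloads",
]

PASSES = [
    "definitions and examples",
    "compare similar concepts",
    "scenario recognition and service selection",
]

def build_schedule(domains, passes):
    """One study block per (pass, domain) pair; every domain is revisited each pass."""
    return [
        (block, domain, focus)
        for focus in passes
        for block, domain in enumerate(domains, start=1)
    ]

for block, domain, focus in build_schedule(DOMAINS, PASSES)[:3]:
    print(f"Block {block}: {domain} -- {focus}")
```

The point of writing it out is the spaced-repetition guarantee: every domain reappears on every pass, so nothing is studied once and forgotten.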

For example, do not merely memorize that classification predicts categories and regression predicts numbers. Learn to recognize how Microsoft describes each one in business language. Likewise, do not only memorize service names. Learn what problem each service solves, what input it expects, and what output it produces. That is the level at which the exam tends to operate.

  • Review one domain deeply before moving on.
  • Create simple comparison notes between related concepts.
  • Revisit earlier domains every few days.
  • Use short self-checks to confirm you can explain concepts without notes.

Exam Tip: If you cannot explain a concept in plain language, you probably do not understand it well enough for scenario-based questions.

Keep your study plan realistic. A beginner-friendly schedule is better than an ambitious plan you abandon after three days. Consistency matters more than intensity. Even 30 to 60 minutes of focused, repeated review can build strong retention when your study is organized by domain and reinforced through frequent recall.

Section 1.6: Common exam pitfalls, confidence building, and readiness self-assessment

One of the most common AI-900 pitfalls is confusing adjacent concepts. Candidates mix up regression and classification, OCR and image analysis, sentiment analysis and key phrase extraction, or traditional AI prediction and generative AI creation. Another major pitfall is overthinking. Because the exam covers modern AI topics, some candidates assume the answer must be the most sophisticated option. In reality, Microsoft often rewards selecting the straightforward service that directly matches the stated need.

A second trap is ignoring responsible AI. Some learners treat it as a softer topic and focus only on technical services. That is a mistake. Responsible AI principles are part of the exam’s foundation and can appear in direct or scenario-based wording. If a question involves bias, user trust, explainability, or safe model behavior, the tested skill may be ethical understanding rather than service memorization.

Confidence building comes from measurable readiness checkpoints. Ask yourself whether you can do the following without looking at notes: define each major AI workload, distinguish machine learning model types, identify common Azure AI services by use case, explain responsible AI principles, and recognize when a scenario belongs to generative AI. If any area feels vague, return to concept-plus-scenario review instead of trying to memorize more isolated terms.

Readiness self-assessment should be honest and domain-specific. Do not tell yourself “I understand NLP” if you really mean “I recognize one or two services.” Break confidence down into skills: can you identify the data type, the task, and the best Azure fit? Can you eliminate distractors that belong to neighboring workloads? Can you spot wording that changes a problem from analysis to generation?

Exam Tip: Readiness is not the feeling that you know everything. Readiness is the ability to consistently choose the best answer from realistic exam wording.

As you continue through this course, use each lesson to strengthen one part of that readiness model. The goal is not perfection. The goal is reliable recognition, disciplined reading, and enough breadth across all official domains to score confidently on exam day.

Chapter milestones
  • Understand the AI-900 exam format and objectives
  • Plan your registration, scheduling, and test-day logistics
  • Build a beginner-friendly study strategy by domain
  • Establish your baseline with readiness checkpoints
Chapter quiz

1. You are beginning preparation for the Microsoft AI-900 exam. Which study approach is MOST aligned with the way the exam typically measures skills?

Correct answer: Study business scenarios and practice matching needs to AI workload types and Azure services
AI-900 is a fundamentals exam that commonly tests whether you can recognize AI workloads, identify scenario signals, and match business needs to the correct Azure AI capabilities. Studying business scenarios and mappings is therefore the best approach. Option A is weaker because the exam often uses plain business language rather than direct textbook definitions, so isolated memorization is not enough. Option C is incorrect because AI-900 does not emphasize deep coding or expert-level implementation.

2. A candidate plans to take AI-900 through online proctoring. Which action is MOST likely to reduce preventable score loss on exam day?

Correct answer: Review identification requirements, test environment rules, and check-in steps before the exam
The chapter emphasizes that registration, scheduling, and test-day logistics matter because candidates can underperform when rushed or unfamiliar with the proctoring process. Reviewing ID requirements and check-in procedures helps remove avoidable stress. Option B is wrong because logistical uncertainty can affect performance even when technical knowledge is sufficient. Option C is also wrong because AI-900 is not centered on advanced mathematics, and ignoring exam-day readiness is a poor tradeoff.

3. A study group is creating readiness checkpoints for AI-900. Which checkpoint BEST measures whether a learner is building the kind of understanding the exam expects?

Correct answer: The learner can explain the difference between classification and clustering in simple terms and identify example use cases
AI-900 readiness is best assessed by whether a candidate can clearly distinguish related concepts and apply them to scenarios. Explaining classification versus clustering and recognizing use cases directly reflects exam expectations. Option B is incorrect because advanced mathematical derivations are outside the intended depth of AI-900. Option C is also incorrect because memorizing names without understanding purpose does not prepare you for scenario-based questions.

4. A company wants to train employees on how to identify common AI-900 question patterns. Which signal-to-concept mapping is correct?

Correct answer: Extracting printed or handwritten text from an image -> optical character recognition (OCR)
Extracting printed or handwritten text from images is a classic OCR scenario and is exactly the kind of pattern recognition AI-900 expects. Option B is wrong because predicting a numeric value is typically a regression problem, not clustering. Option C is wrong because fairness and accountability are responsible AI considerations, not object detection tasks.

5. A beginner says, "I am overwhelmed because AI-900 seems like a huge list of unrelated terms." What is the BEST recommendation based on Chapter 1?

Correct answer: Build a study plan by exam domains and connect each concept to business problems and nearby concepts
Chapter 1 recommends a beginner-friendly strategy built by domain, with emphasis on comparing related concepts and matching them to business scenarios. That approach makes the material manageable and aligned with the exam. Option A is wrong because AI-900 is not best approached as disconnected definitions. Option C is wrong because readiness improves through honest assessment and targeted work on unclear domains, not by avoiding them.

Chapter 2: Describe AI Workloads and Responsible AI

This chapter maps directly to one of the most tested entry-level areas on the Microsoft AI-900 exam: recognizing AI workloads, matching them to business scenarios, and explaining responsible AI concepts in Microsoft’s preferred language. On the exam, you are often not asked to build a solution. Instead, you must identify what kind of AI problem is being described, decide which broad Azure AI capability fits, and avoid distractors that sound technical but do not match the workload. That makes this chapter foundational for the rest of the course.

The first skill to master is workload recognition. Microsoft expects you to distinguish between machine learning, computer vision, natural language processing, conversational AI, document intelligence, and generative AI. In many questions, the scenario is short and business-oriented: a company wants to predict sales, extract text from scanned forms, analyze customer reviews, translate speech, create a chatbot, or generate a draft summary. Your job is to classify the request correctly before thinking about specific Azure services.

A common exam trap is confusing a business goal with an implementation detail. For example, if a scenario mentions invoices and forms, candidates may jump to OCR alone, but the better answer may involve document intelligence because the task is not just reading text; it is extracting structured fields from documents. Similarly, if a question involves predicting a numeric value such as future revenue, delivery time, or energy usage, the workload is likely regression in machine learning, not simple reporting or dashboard analytics.

Another key exam objective in this chapter is responsible AI. Microsoft consistently emphasizes six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. On AI-900, these principles are tested conceptually, often through examples of what a team should do when designing or deploying AI. You are not expected to memorize legal frameworks, but you should be able to connect each principle to practical behavior, such as monitoring bias, securing personal data, explaining AI limitations, and assigning human oversight.

Exam Tip: When a question describes an organization that wants to automate repetitive decisions using fixed if-then logic, that is usually not machine learning. The exam often checks whether you can distinguish true AI workloads from standard software automation or business intelligence. AI is most useful when patterns must be learned from data, language must be interpreted, images must be analyzed, or content must be generated.

This chapter also supports later domains in the course. If you cannot identify the workload type, you will struggle to choose between Azure AI services in later chapters. Think of this material as your mental sorting framework. Start by asking: Is the task predicting from data, understanding images, understanding language, interacting through conversation, extracting from documents, or generating new content? Then ask: what responsible AI concerns apply? That two-step process will help you answer many AI-900 questions accurately and quickly.

  • Recognize common AI workloads and business use cases in Microsoft exam wording.
  • Differentiate machine learning from rules-based automation and standard analytics.
  • Identify computer vision, NLP, conversational AI, document intelligence, and generative AI scenarios.
  • Explain Microsoft’s six responsible AI principles using practical examples.
  • Apply scenario analysis to eliminate distractors and choose the best exam answer.

As you read the six sections in this chapter, focus on pattern recognition. The exam is not trying to trick you with deep mathematics. It is testing whether you can hear a scenario and say, “That is classification,” “That is OCR plus document extraction,” “That is sentiment analysis,” or “That raises privacy and transparency concerns.” Candidates who learn the language of the exam usually perform much better than those who only memorize service names.

Practice note for this chapter's milestones: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 2.1: Describe AI workloads and common real-world business scenarios

In AI-900, the phrase AI workload refers to a broad category of tasks that artificial intelligence systems can perform. Microsoft typically organizes these workloads into machine learning, computer vision, natural language processing, conversational AI, document intelligence, and generative AI. Your exam task is often to look at a business need and identify which workload category best matches it.

Common business scenarios appear repeatedly in exam-style language. Predicting loan default risk, estimating house prices, forecasting inventory demand, and recommending actions based on data are machine learning scenarios. Reading handwritten forms, identifying objects in an image, or detecting whether an image contains unsafe content points to computer vision. Determining whether a product review is positive or negative, extracting key phrases from support tickets, translating text, and converting speech to text are natural language processing scenarios. Building a virtual agent that answers user questions is conversational AI. Extracting names, totals, and dates from invoices or receipts is document intelligence. Drafting email responses, summarizing long text, or generating code suggestions indicates generative AI.

A major exam skill is separating similar-sounding scenarios. For example, a company may want to “understand customer feedback.” That could mean sentiment analysis if the goal is positive/negative opinion, key phrase extraction if the goal is identifying main topics, or language understanding if the goal is detecting user intent in messages. Likewise, “analyzing documents” might mean OCR when the need is simply reading text from an image, but it might mean document intelligence when the need is to extract labeled fields and preserve structure.

Exam Tip: Focus on the verb in the scenario. Words like predict, classify, detect, extract, translate, answer, and generate usually reveal the workload more clearly than the industry context.

Another trap is overthinking the business domain. Whether the scenario is healthcare, retail, manufacturing, or finance does not usually change the AI category being tested. The exam is measuring foundational understanding, not industry-specific regulations or implementation architecture. If a retailer wants to forecast sales, it is still a prediction problem. If a hospital wants to extract fields from intake forms, it is still a document extraction problem.

When answering scenario questions, ask yourself three things: what is the input, what is the expected output, and what kind of pattern must the system recognize? If the input is tabular business data and the output is a future value or category, think machine learning. If the input is images or video, think computer vision. If the input is text or speech, think NLP. If the output is newly created text, code, or media, think generative AI. This simple framework is one of the most useful tools for this exam domain.
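The three-question framework above can be sketched as a simple lookup table. This is purely a study aid with invented category names and signal pairs drawn from this section; it is not an Azure API.

```python
# Study aid: map (input type, expected output) to the most likely AI-900
# workload category. Invented lookup for illustration; not an Azure API.
WORKLOAD_BY_SIGNAL = {
    ("tabular data", "numeric prediction"): "machine learning (regression)",
    ("tabular data", "category label"): "machine learning (classification)",
    ("images", "labels or detected objects"): "computer vision",
    ("images of documents", "structured fields"): "document intelligence",
    ("text or speech", "meaning or sentiment"): "natural language processing",
    ("user dialogue", "interactive answers"): "conversational AI",
    ("a prompt", "newly created content"): "generative AI",
}

def triage(input_type: str, expected_output: str) -> str:
    """Return the workload category for an (input, output) pair."""
    return WORKLOAD_BY_SIGNAL.get(
        (input_type, expected_output), "unclear: re-read the scenario"
    )

print(triage("tabular data", "numeric prediction"))  # machine learning (regression)
print(triage("a prompt", "newly created content"))   # generative AI
```

Working through a few scenarios with this kind of table is a quick way to drill the input/output habit before attempting full practice questions.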

Section 2.2: Machine learning workloads versus rule-based automation and analytics

Machine learning is one of the most important AI workloads on the AI-900 exam, but Microsoft also expects you to know what machine learning is not. Machine learning uses data to train models that find patterns and make predictions or decisions without being explicitly programmed for every possible case. In contrast, rule-based automation follows fixed instructions defined by a human, and traditional analytics reports what has happened based on existing data.

This distinction appears frequently in exam wording. Suppose a company uses a simple business rule such as “if invoice total exceeds a threshold, require approval.” That is automation, not machine learning. If the company instead wants a model to predict whether an invoice is likely fraudulent based on patterns in historical transactions, that is machine learning. Similarly, a dashboard showing last quarter’s sales is analytics. A model predicting next quarter’s sales is machine learning.

For exam purposes, understand the major machine learning workload types. Regression predicts a numeric value, such as price, temperature, or demand. Classification predicts a category or label, such as approve/deny, spam/not spam, or churn/no churn. Clustering groups similar data points without pre-labeled categories, such as segmenting customers by behavior. You do not need advanced formulas for AI-900, but you must match scenario wording to the correct learning type.

A common trap is confusing classification with regression. If the output is a number, it is generally regression. If the output is one of several labels, it is classification. Another trap is choosing machine learning when deterministic logic is sufficient. The exam may describe a scenario that can be solved by straightforward business rules. In that case, machine learning is not the best answer unless the scenario explicitly involves learning patterns from historical data.

Exam Tip: Look for phrases such as historical data, train a model, predict likelihood, forecast, or identify patterns. These are strong signals that the workload is machine learning rather than standard reporting or automation.

Microsoft also expects a basic understanding of model evaluation, even in a workload-identification chapter. If a model predicts well on training data but poorly on new data, it may be overfitting. When a model classifies outcomes, metrics such as accuracy, precision, and recall, often read from a confusion matrix, appear in broader exam coverage. Here, remember the higher-level exam objective: machine learning is about generalizing from data, not memorizing rules. When you can explain that difference clearly, you will avoid many distractor answers.
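To make those evaluation terms concrete, here is a minimal sketch of how accuracy, precision, and recall fall out of confusion-matrix counts. The counts are invented for illustration only.

```python
# Toy confusion-matrix metrics for a binary classifier.
# The four counts are invented for illustration only.
tp, fp, fn, tn = 40, 10, 5, 45  # true/false positives and negatives

accuracy = (tp + tn) / (tp + fp + fn + tn)  # correct predictions / all predictions
precision = tp / (tp + fp)                  # of predicted positives, how many were right
recall = tp / (tp + fn)                     # of actual positives, how many were found

print(accuracy, precision, recall)  # accuracy 0.85, precision 0.8, recall ≈0.889
```

Notice that precision and recall answer different questions about the same model, which is exactly the distinction scenario questions probe.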

Section 2.3: Computer vision, NLP, conversational AI, and document intelligence overview

This exam domain requires you to distinguish several language- and perception-related workloads that are easy to blur together. Computer vision works with images and video. Natural language processing works with text and speech. Conversational AI focuses on dialogue between a user and a system. Document intelligence sits at the intersection of vision and structured extraction, because documents often contain both text and layout that must be understood.

Computer vision scenarios include image classification, object detection, facial analysis concepts, OCR, and image description. If a question asks about identifying products in shelf images, recognizing handwritten text, or detecting whether a photo contains specific visual elements, computer vision is the right category. On the exam, OCR is especially important: it means reading text from images or scanned documents. However, if the goal is to capture specific fields such as invoice number, total amount, vendor name, or due date, document intelligence is usually the better fit because structure matters, not just raw text.

NLP scenarios include sentiment analysis, key phrase extraction, entity recognition, language detection, translation, and speech services. If a company wants to analyze product reviews for satisfaction, think sentiment analysis. If it wants to pull out important topics from articles, think key phrase extraction. If it wants to detect names, organizations, or locations, think entity recognition. If the scenario mentions spoken input or output, think speech-to-text, text-to-speech, or translation.
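To make the sentiment-analysis task concrete, here is a deliberately naive keyword-based scorer. Real NLP services such as Azure AI Language use trained models rather than word lists; this sketch, with invented word sets, only illustrates the shape of the task: text in, sentiment label out.

```python
# Deliberately naive sentiment scorer. Real services use trained models,
# but the input (text) and output (a label) have the same shape.
# Word lists are invented for illustration.
POSITIVE = {"great", "love", "excellent", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "terrible", "refund", "frustrated"}

def sentiment(review: str) -> str:
    words = set(review.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("great product and fast delivery"))  # positive
print(sentiment("terrible support want a refund"))   # negative
```

On the exam you will never implement this; the point is recognizing that a "positive/negative/neutral opinion" requirement is sentiment analysis, an NLP workload.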

Conversational AI is often tested in broad terms. A chatbot or virtual agent that engages in dialogue, answers common questions, and routes requests is a conversational AI solution. The trap is assuming any text-related scenario is just NLP. In reality, conversational AI may combine NLP for understanding with dialogue management for interaction. On AI-900, if the emphasis is on an interactive bot experience, choose conversational AI over a single isolated text-analysis task.

Exam Tip: When you see forms, receipts, invoices, tax documents, or ID cards, pause before choosing OCR. The exam often wants you to recognize that extracting structured fields from business documents is a document intelligence scenario.

To identify the best answer, think about the expected outcome. Is the system reading what is on a page, understanding what a sentence means, speaking with a user, or extracting business fields from semi-structured documents? These are different workloads even though they may use related technologies. Microsoft tests whether you can pick the most precise category rather than the broadest possible one.

Section 2.4: Generative AI workloads, copilots, and content creation use cases

Generative AI is now a major part of Azure AI Fundamentals. Unlike traditional predictive AI, which classifies, detects, or forecasts, generative AI creates new content based on patterns learned from large datasets. On the exam, you should recognize scenarios involving text generation, summarization, code assistance, question answering over content, image generation concepts, and copilots that help users complete tasks more efficiently.

A copilot is typically an AI assistant embedded into an application or workflow. It does not just chat for conversation’s sake; it helps users perform actions such as drafting emails, summarizing meetings, generating reports, answering questions from enterprise content, or suggesting next steps. In exam scenarios, if the system assists a user interactively by generating content or recommendations in context, the workload likely involves generative AI and copilot-style functionality.

One common trap is confusing generative AI with search or retrieval alone. If a system only finds existing documents, that is not generative AI by itself. If it creates a summary, draft, or response based on retrieved information, then generative AI is involved. Another trap is confusing a rules-based chatbot with a generative AI assistant. A fixed decision-tree bot can answer predefined questions but does not necessarily generate novel responses.

The exam may also use terms such as foundation models, prompts, and Azure OpenAI concepts. You do not need deep implementation detail, but you should understand that a foundation model is a large pretrained model that can be adapted to many tasks, and a prompt is the input instruction that guides the model’s output. Prompt quality matters because generative systems respond to the instructions and context they receive.
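The idea that a prompt is an instruction plus context can be shown with plain string assembly. The template below is invented for illustration; it is not an Azure OpenAI format or requirement.

```python
# A prompt is simply the instruction and context handed to the model.
# This template is invented for illustration, not an Azure OpenAI format.
def build_prompt(instruction: str, context: str) -> str:
    return (
        f"Instruction: {instruction}\n"
        f"Context: {context}\n"
        "Answer using only the context above."
    )

prompt = build_prompt(
    "Summarize the refund policy in one sentence.",
    "Refunds are issued within 14 days of purchase with a receipt.",
)
print(prompt)
```

The final line of the template hints at grounding, the practice of constraining generative output to approved data, which reappears in the responsible generative AI discussion below.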

Exam Tip: If the output is newly created language such as a summary, draft, explanation, rewrite, or suggested response, think generative AI first. If the output is a predicted label or score, think traditional machine learning instead.

Responsible generative AI is also part of this workload. Generative systems can hallucinate, reflect bias, produce unsafe content, or reveal sensitive information if not controlled. Therefore, scenario questions may ask what additional consideration applies when deploying a content-generation solution. The best answer often involves safety filters, human review, grounding responses in approved data, transparency with users, and privacy protection. For AI-900, keep the distinction clear: generative AI creates content; other AI workloads mostly analyze, predict, or extract.

Section 2.5: Responsible AI principles including fairness, reliability, privacy, inclusiveness, transparency, and accountability

Microsoft’s six responsible AI principles are highly testable on AI-900, and you should know both their names and their practical meaning. Fairness means AI systems should treat people equitably and avoid harmful bias. Reliability and safety means systems should perform consistently and minimize unintended harm. Privacy and security means data should be protected and used appropriately. Inclusiveness means solutions should be designed for people with diverse needs and abilities. Transparency means stakeholders should understand how and why AI is used, including limitations. Accountability means humans remain responsible for AI outcomes and governance.
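As a memorization aid, each principle can be paired with the risk it most directly answers. The risk phrasings below are paraphrases written for this sketch, not official Microsoft wording.

```python
# Study aid pairing each responsible AI principle with the risk it addresses.
# Risk phrasings are paraphrases, not official Microsoft wording.
PRINCIPLE_FOR_RISK = {
    "model disadvantages a group": "fairness",
    "unsafe behavior on unusual input": "reliability and safety",
    "personal data exposed or misused": "privacy and security",
    "inaccessible to users with disabilities": "inclusiveness",
    "users unaware AI is involved": "transparency",
    "no human owns oversight": "accountability",
}

print(PRINCIPLE_FOR_RISK["users unaware AI is involved"])  # transparency
```

Quizzing yourself from risk to principle, rather than principle to definition, mirrors how the exam actually frames these questions.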

The exam often tests these principles through short scenarios. For example, if a hiring model disadvantages applicants from certain groups, the issue is fairness. If a system makes unsafe decisions when unusual input appears, reliability and safety is the concern. If personal customer data is exposed or used without proper controls, privacy and security is the principle involved. If an application is difficult for users with disabilities to access, inclusiveness is at stake. If users are not told that AI is generating recommendations, transparency is lacking. If no team owns oversight of the system after deployment, accountability is missing.

A common trap is mixing transparency with explainability in a narrow technical sense. Explainability can support transparency, but on AI-900, transparency is broader: users should know when AI is being used, what it is intended to do, and what its limitations are. Another trap is assuming privacy and security are the same thing. They are related, but privacy focuses on appropriate use and protection of personal data, while security focuses on defending systems and data from unauthorized access or attack.

Exam Tip: When two answer choices both seem ethical, choose the one that most directly addresses the specific risk in the scenario. Bias points to fairness; hidden AI usage points to transparency; lack of human oversight points to accountability.

Microsoft’s exam language is practical rather than philosophical. You may be asked what an organization should do when deploying AI. Strong answer patterns include testing for bias, monitoring model performance, protecting sensitive data, designing for accessibility, documenting system behavior, communicating limitations, and assigning human review or escalation processes. Responsible AI is not a separate afterthought; it applies across machine learning, vision, NLP, and generative AI workloads. If you remember that the exam wants principle-to-example matching, you will handle this objective effectively.

Section 2.6: AI workload selection and exam-style scenario analysis for the Describe AI workloads domain

The final skill for this chapter is exam-style scenario analysis: selecting the correct AI workload when multiple options sound plausible. On AI-900, success often comes from disciplined elimination. Start by identifying the input type: tabular data, images, text, speech, documents, or user prompts. Next, identify the required output: prediction, label, extraction, translation, conversational response, or generated content. Finally, ask whether the task relies on learned patterns, visual interpretation, language understanding, structured document extraction, or content generation.

For example, if a company wants to assign incoming support emails to categories such as billing, technical issue, or cancellation, that is a classification scenario, framed either as machine learning or as text classification depending on the wording. If the same company wants to determine whether each email expresses frustration, that is sentiment analysis. If it wants a bot to interact with customers and answer questions, that is conversational AI. If it wants the system to draft a reply for an agent to review, that is generative AI. Notice how small wording changes alter the correct answer.

Another common exam pattern is choosing between broad and specific answers. If a scenario is about extracting totals and dates from scanned invoices, “computer vision” is broadly related, but “document intelligence” is more precise and therefore usually better. If a system must predict a numeric amount, “machine learning” is broad but “regression” is more accurate when offered. Always prefer the answer that matches the business requirement most exactly.

Exam Tip: Read the last line of the scenario carefully. Microsoft often places the true requirement there, such as predict the value, extract fields, analyze sentiment, or generate a summary. Earlier details may just provide business context.

Also watch for distractors involving technologies that are possible but unnecessary. The exam typically asks for the best fit, not every tool that could contribute. A report dashboard is not the best answer for a forecasting problem. OCR alone is not the best answer for structured invoice extraction. A rules engine is not the best answer when the organization wants predictions from historical data. A traditional chatbot is not the best answer when the goal is dynamic content generation.

As a final strategy, translate each scenario into plain language. “They want to know something before it happens” suggests prediction. “They want to read what is in an image” suggests OCR or vision. “They want to understand what text means” suggests NLP. “They want to talk with users” suggests conversational AI. “They want to create new content” suggests generative AI. “They want to do this responsibly” points back to the six responsible AI principles. That translation habit is one of the strongest exam-prep techniques for this domain.

Chapter milestones
  • Recognize core AI workloads and business use cases
  • Differentiate machine learning, computer vision, NLP, and generative AI
  • Explain responsible AI principles in Microsoft exam language
  • Practice exam-style questions on Describe AI workloads
Chapter quiz

1. A retail company wants to predict next month's sales revenue for each store based on historical sales, promotions, and seasonal trends. Which AI workload should they use?

Show answer
Correct answer: Machine learning regression
This scenario requires predicting a numeric value from historical data, which is a regression task in machine learning. Computer vision object detection is used to locate and classify objects in images, so it does not fit a sales forecasting scenario. Rules-based process automation uses fixed logic and does not learn patterns from data, which is a common AI-900 exam distractor when the requirement is prediction.

2. A bank wants to process scanned loan application forms and extract fields such as applicant name, address, income, and application ID into a structured system. Which workload best matches this requirement?

Show answer
Correct answer: Document intelligence
Document intelligence is the best answer because the requirement is not just reading text from scanned documents, but extracting structured fields from forms. OCR only identifies printed or handwritten text and is too limited for field extraction and document understanding. Conversational AI is used for chatbot or voice assistant interactions, so it does not apply to form processing.

3. A company wants to analyze thousands of customer reviews to determine whether each review expresses a positive, negative, or neutral opinion. Which AI workload should they choose?

Show answer
Correct answer: Natural language processing for sentiment analysis
Sentiment analysis is a natural language processing workload because it interprets the meaning and emotional tone of text. Computer vision image classification applies to images, not written reviews. Generative AI creates new content such as summaries or drafts, but the scenario is about analyzing existing text rather than generating new text.

4. A team is deploying an AI system that helps approve insurance claims. They want to ensure affected customers can understand the factors that influenced each recommendation and the limits of the model. Which responsible AI principle does this most directly support?

Show answer
Correct answer: Transparency
Transparency is the correct answer because it focuses on helping users and stakeholders understand how an AI system works, what influenced its outputs, and what its limitations are. Inclusiveness is about designing AI systems that can be used effectively by people with diverse needs and abilities, which is not the main concern in this scenario. Reliability and safety relates to dependable performance and avoiding harmful failures, but the question specifically emphasizes explainability and understanding.

5. A company uses fixed if-then rules to route support tickets: if a message contains the word 'billing,' it goes to the finance queue; if it contains 'password,' it goes to IT support. An employee says this is machine learning. How should you classify the solution?

Show answer
Correct answer: It is rules-based automation, not machine learning
This is rules-based automation because the behavior is determined by predefined if-then logic rather than patterns learned from training data. On AI-900, Microsoft often tests the distinction between true AI workloads and standard software automation. It is not machine learning simply because it automates decisions. It is also not generative AI, because it is not creating new content such as text, images, or summaries.

Chapter 3: Fundamental Principles of Machine Learning on Azure

This chapter covers one of the most heavily tested conceptual areas on the AI-900 exam: the fundamental principles of machine learning on Azure. Microsoft does not expect you to build production-grade data science solutions for this exam, but you do need to recognize the types of machine learning problems, understand how models are trained and evaluated, and identify where Azure tools fit into the process. Many exam items are written to test whether you can match a business scenario to the correct machine learning approach rather than whether you can write code.

At this level, machine learning is best understood as a way to learn patterns from data so that a model can make predictions, assign categories, discover groups, or identify unusual behavior. The exam blueprint emphasizes regression, classification, clustering, training and validation concepts, and Azure Machine Learning capabilities such as automated machine learning and designer-based options. You should be able to distinguish core ideas clearly and quickly, because several wrong answers on the exam look plausible if you only partially understand the terminology.

A strong exam strategy is to first identify what the organization is trying to predict or discover. If the output is a number, think regression. If the output is a category, think classification. If there are no predefined labels and the goal is to find structure in data, think clustering. If the scenario focuses on evaluating model quality, metrics and overfitting concepts become the key considerations. Azure-specific questions then often ask which service or feature helps you perform that task with minimal code, guided experimentation, or managed lifecycle support.

Exam Tip: AI-900 questions often hide the correct answer in the wording of the outcome. Focus on the target result: numeric value, category label, or grouped similarity. Do not let Azure branding distract you from the underlying machine learning concept.

This chapter naturally integrates the lessons you must master for the exam: understanding foundational machine learning concepts, comparing regression, classification, and clustering, interpreting training and validation basics, and practicing scenario-based reasoning about machine learning on Azure. As you read, pay special attention to common exam traps, especially confusing classification with clustering, confusing evaluation metrics across model types, and assuming Azure Machine Learning always requires coding.

  • Machine learning workflow basics: data, training, validation, evaluation, deployment
  • Supervised learning: regression and classification
  • Unsupervised learning: clustering and related pattern-discovery thinking
  • Model quality concepts: overfitting, underfitting, metrics, and validation
  • Azure Machine Learning features: automated ML, designer, and managed experimentation
  • Exam reasoning: identify the best answer from realistic business scenarios

By the end of this chapter, you should be able to read an AI-900 machine learning question and determine not only the right category of problem, but also the most likely Azure-oriented solution approach Microsoft expects. That combination of conceptual understanding and exam-focused pattern recognition is what separates a guess from a confident answer.

Practice note for this chapter's milestones: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 3.1: Fundamental principles of machine learning on Azure and ML workflow basics

Machine learning is the process of using data to train a model that can generalize patterns and make predictions on new data. For AI-900, you are not expected to perform advanced mathematics, but you are expected to understand the standard workflow and the role Azure plays in supporting it. The typical workflow includes collecting data, preparing data, selecting features, training a model, validating it, evaluating its performance, and then deploying it for use. Azure Machine Learning provides a cloud-based environment to support these activities.
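The workflow steps above (prepare data, train, validate, evaluate) can be walked through end to end on toy data. The sketch below fits a one-variable line with the closed-form least-squares formulas in plain Python so no libraries are needed; the data points are invented.

```python
# Toy end-to-end workflow: prepare data, train, validate, evaluate.
# One-variable least-squares fit in plain Python; data is invented.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1), (5, 9.8), (6, 12.1)]

# 1. Prepare: split into training and validation sets.
train, valid = data[:4], data[4:]

# 2. Train: closed-form slope and intercept for y ≈ w*x + b.
n = len(train)
mean_x = sum(x for x, _ in train) / n
mean_y = sum(y for _, y in train) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in train) / sum(
    (x - mean_x) ** 2 for x, _ in train
)
b = mean_y - w * mean_x

# 3. Validate and evaluate: mean absolute error on held-out points.
mae = sum(abs((w * x + b) - y) for x, y in valid) / len(valid)
print(f"model: y = {w:.2f}x + {b:.2f}, validation MAE = {mae:.2f}")
```

The key exam idea is step 3: the model is judged on data it never saw during training, which is exactly what separates generalization from memorization.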

One of the most important principles is that a model learns from examples. If the examples include correct answers, such as known sales totals or approved loan statuses, the learning process is supervised. If the system is looking for patterns without predefined labels, such as grouping customers by similarity, the process is unsupervised. The exam often tests whether you understand that machine learning starts with data and that the quality of outcomes depends heavily on the quality and relevance of that data.

Azure adds practical value by offering managed tools for experimentation, data handling, model training, and tracking. In exam terms, Azure Machine Learning is often the best answer when the scenario mentions building, training, comparing, and operationalizing machine learning models in a structured cloud platform. The exam may also reference no-code or low-code experiences, which are designed for users who want to build models without writing extensive code.

Exam Tip: If a question asks about the general lifecycle of creating predictive models in Azure, think beyond just training. Microsoft likes to test the full workflow: ingest data, train, validate, evaluate, and deploy.

A common trap is confusing machine learning with simple rule-based automation. If the scenario says a system follows fixed if-then logic created by a developer, that is not machine learning. Another trap is assuming that all AI workloads are machine learning. AI-900 also covers vision, language, and generative AI services, but this chapter focuses on the machine learning foundations that support predictive and pattern-discovery scenarios.

When reading exam questions, ask yourself three things: What is the input data? What is the desired output? Is the model learning from labeled examples or discovering patterns independently? Those three checks will usually point you to the correct concept quickly.

Section 3.2: Supervised learning concepts with regression and classification examples

Section 3.2: Supervised learning concepts with regression and classification examples

Supervised learning uses labeled data, meaning each training example includes the correct outcome. On the AI-900 exam, the two main supervised learning categories you must know are regression and classification. These are frequently tested because they represent the most common predictive machine learning patterns.

Regression is used when the model predicts a numeric value. Typical examples include forecasting house prices, estimating delivery time, predicting energy consumption, or calculating future revenue. The key sign is that the output is a continuous number rather than a label. If a scenario asks you to predict how much, how many, or what value, regression is usually the right answer.

Classification is used when the model predicts a category or class label. Examples include determining whether an email is spam or not spam, whether a customer will churn or stay, or whether a transaction is fraudulent or legitimate. Classification may be binary, with two possible outcomes, or multiclass, with more than two categories such as product type, sentiment category, or document class.

The exam often uses subtle wording to test whether you can distinguish the two. For example, predicting the probability that a customer will leave is still associated with classification because the underlying target is a class outcome, even if a confidence score is produced. By contrast, predicting the exact amount a customer will spend is regression.

Exam Tip: Do not focus only on the presence of numbers in the data. Many classification problems use numeric features such as age, income, or transaction amount. What matters is the form of the output, not the type of input feature.

  • Regression output: numeric value
  • Classification output: category label
  • Supervised learning requires labeled training data
  • Binary classification: two classes
  • Multiclass classification: more than two classes
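To make the output distinction concrete, here is a small illustrative sketch: the same customer data can feed a regression-style function (numeric output) and a classification-style function (label output). The functions, scoring rules, and threshold are invented stand-ins for trained models, not taken from any Azure service.

```python
# Same customer, two supervised outputs. Functions and numbers are invented
# stand-ins for trained models, purely to contrast the output types.

def predict_spend(visits_per_month, avg_basket):
    """Regression-style output: a continuous numeric value."""
    return visits_per_month * avg_basket * 1.05  # hypothetical trend factor

def predict_churn(visits_per_month, days_since_last_visit):
    """Classification-style output: one label from a fixed set."""
    score = days_since_last_visit / (visits_per_month + 1)  # internal score...
    return "churn" if score > 10 else "stay"                # ...label comes out

print(predict_spend(4, 32.50))   # -> 136.5 (a number: regression)
print(predict_churn(4, 60))      # -> churn (a label: classification)
```

Note that `predict_churn` computes an internal score but still returns a class label, which mirrors the point that scoring language in a scenario can still indicate classification.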

A common trap is confusing classification with ranking or scoring language. On the exam, if the final business decision is to assign one of several labels, it is classification even if the system internally calculates a score. Another trap is choosing clustering because the problem mentions customer segments. If the segments are already known and labeled in the data, that is classification. If the segments must be discovered from unlabeled data, that is clustering.

Azure Machine Learning supports both regression and classification scenarios. In AI-900, you mainly need to know that Azure provides tools to train and compare these models, rather than needing to know specific algorithms in detail.

Section 3.3: Unsupervised learning concepts with clustering and anomaly-style thinking

Section 3.3: Unsupervised learning concepts with clustering and anomaly-style thinking

Unsupervised learning works with data that does not include known labels. Instead of learning from correct answers, the model looks for hidden patterns, structures, or relationships. The primary unsupervised concept tested on AI-900 is clustering. Clustering groups data points based on similarity, helping organizations identify natural segments in their data.

A classic clustering example is customer segmentation. A company may have data about purchasing behavior, geography, or usage frequency, but no predefined labels such as premium, casual, or high-risk. A clustering algorithm can group similar customers together so the organization can analyze and act on the patterns. Other examples include grouping similar support tickets, organizing products by usage patterns, or finding naturally similar devices in telemetry data.

Clustering does not predict a known category from labeled examples. That is the key exam distinction. Instead, clustering discovers structure. On test day, if the scenario says the organization wants to find groups, identify patterns, or detect naturally occurring segments without pre-labeled outcomes, clustering is usually the right answer.

AI-900 questions sometimes include anomaly-style wording, such as identifying unusual transactions or detecting outliers. While full anomaly detection is not always presented as a major standalone algorithm category in introductory exam content, you should understand the underlying thinking: the system is looking for data points that do not fit typical patterns. This is conceptually closer to unsupervised pattern analysis than to standard classification unless labeled examples of fraud or defect are provided.

Exam Tip: If the question says there are no existing labels and the goal is to discover hidden groups, choose clustering. If it says the organization already knows the categories and wants to predict which category a new item belongs to, choose classification.

Common traps include mistaking clustering for classification because both produce groups. The difference is whether the groups are known in advance. Another trap is assuming unsupervised learning means no business objective. It still has a clear objective, but the objective is discovery rather than prediction of a known target. Azure Machine Learning can support clustering experiments just as it supports supervised learning tasks, and the exam may expect you to recognize it as the appropriate Azure platform for that workflow.
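The discovery idea above can be sketched with a tiny hand-rolled k-means. Everything here is illustrative (made-up customer data, two clusters, starting centroids chosen by hand); in practice Azure Machine Learning or a library implementation would handle this at scale.

```python
# Hand-rolled k-means for intuition only: group unlabeled customer records
# (monthly visits, average basket value) into two discovered segments.
# Data and starting centroids are invented.

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

customers = [(1, 10), (2, 12), (1, 11), (9, 80), (10, 85), (8, 78)]
centroids, clusters = kmeans(customers, centroids=[(1, 10), (9, 80)])
print(clusters)  # two discovered groups; no labels were ever provided
```

The key exam point is visible in the data itself: the input has no label column, and the groups emerge from similarity alone.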

Section 3.4: Training data, feature engineering basics, validation, and model evaluation metrics

Section 3.4: Training data, feature engineering basics, validation, and model evaluation metrics

Even at the fundamentals level, the AI-900 exam expects you to know that model performance depends on more than choosing a learning type. Good training data, meaningful features, and proper evaluation are central to machine learning success. Training data is the data used to teach the model. Validation and testing concepts are used to assess how well the model performs on data it has not already seen. This matters because a model that only memorizes training data is not useful in real-world scenarios.

Features are the input variables used by the model to learn patterns. In a house-price prediction model, features might include square footage, location, age of property, and number of bedrooms. Feature engineering refers to selecting, transforming, or deriving useful inputs from raw data. For AI-900, just understand that better features often improve results and that irrelevant or poor-quality features can reduce model quality.
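As a hedged illustration of feature engineering, the sketch below derives two plausible features (property age and square footage per bedroom) from raw columns. The field names and values are invented for this example.

```python
# Feature engineering in miniature: deriving more informative inputs from
# hypothetical raw columns for a house-price model. Field names are invented.

raw_rows = [
    {"sqft": 1500, "bedrooms": 3, "built": 1990},
    {"sqft": 2000, "bedrooms": 4, "built": 2015},
]

def engineer(row, current_year=2024):
    """Derive features a model may learn from more easily than raw columns."""
    return {
        "age": current_year - row["built"],          # derived from 'built'
        "sqft_per_bedroom": row["sqft"] / row["bedrooms"],
    }

features = [engineer(r) for r in raw_rows]
print(features[0])  # -> {'age': 34, 'sqft_per_bedroom': 500.0}
```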

Overfitting is a major exam concept. An overfit model performs very well on training data but poorly on new data because it has learned noise or overly specific patterns. Underfitting is the opposite problem: the model is too simple to capture important relationships, so it performs poorly even on training data. Microsoft likes to test whether you can recognize these ideas in scenario language.

Evaluation metrics depend on the task. For regression, common metrics include mean absolute error, mean squared error, or root mean squared error, all of which measure prediction error for numeric outputs. For classification, common metrics include accuracy, precision, recall, and F1 score. Accuracy measures overall correctness, but it can be misleading if classes are imbalanced. Precision focuses on how many predicted positives were actually positive, while recall focuses on how many actual positives were correctly identified.
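A short worked example shows why accuracy can mislead on imbalanced classes, using invented counts: a model that always predicts the majority class still reaches 95% accuracy while its recall for the rare class is zero.

```python
# Invented counts: 100 transactions, 5 truly fraudulent. A lazy model that
# predicts "legit" every time still reaches 95% accuracy, yet catches no fraud.

actual    = ["fraud"] * 5 + ["legit"] * 95
predicted = ["legit"] * 100  # the always-majority model

pairs = list(zip(actual, predicted))
tp = sum(1 for a, p in pairs if a == "fraud" and p == "fraud")
fp = sum(1 for a, p in pairs if a == "legit" and p == "fraud")
fn = sum(1 for a, p in pairs if a == "fraud" and p == "legit")
tn = sum(1 for a, p in pairs if a == "legit" and p == "legit")

accuracy = (tp + tn) / len(pairs)                  # 0.95 -- looks impressive
recall = tp / (tp + fn) if (tp + fn) else 0.0      # 0.0  -- misses all fraud
precision = tp / (tp + fp) if (tp + fp) else 0.0   # no positive predictions
print(accuracy, recall)  # -> 0.95 0.0
```

This is exactly the fraud and medical-detection pattern the exam likes: when positives are rare and missing them is costly, recall matters more than raw accuracy.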

Exam Tip: Match metrics to the model type. Numeric prediction errors belong to regression. Accuracy, precision, and recall belong to classification. The exam may include these as distractors across answer choices.

Common traps include choosing accuracy for every classification problem. In fraud or medical detection scenarios, false negatives may matter greatly, so recall may be especially important. Another trap is forgetting that validation data exists to estimate generalization performance before deployment. The exam does not require deep statistical detail, but it does expect you to know why training data alone is not enough.

Section 3.5: Azure Machine Learning concepts, automated machine learning, and no-code options

Section 3.5: Azure Machine Learning concepts, automated machine learning, and no-code options

Azure Machine Learning is Microsoft’s cloud platform for creating, training, managing, and deploying machine learning models. On the AI-900 exam, you should recognize it as the central Azure service for end-to-end machine learning workflows. Questions often test high-level use cases rather than technical implementation. If a scenario involves experimenting with models, tracking runs, managing data science workflows, or deploying predictive models at scale, Azure Machine Learning is often the best answer.

One core feature is automated machine learning, commonly called automated ML or AutoML. This capability helps users identify the best model and preprocessing approach for a dataset by automating much of the trial-and-error process. It is especially useful when the goal is to build regression or classification models efficiently without manually testing every algorithm. On the exam, if the wording emphasizes quickly training and comparing many model options, automated ML is a strong choice.

Another important concept is the no-code or low-code experience. Azure Machine Learning includes visual tools that allow users to build workflows and train models without extensive programming. This is helpful for AI-900 because Microsoft wants candidates to know that Azure supports a range of users, from developers and data scientists to analysts and technical decision-makers.

Azure Machine Learning also supports responsible operational practices such as model management, repeatable workflows, and deployment. You do not need deep MLOps knowledge for AI-900, but you should know that Azure Machine Learning is more than a training tool. It is a managed environment for the machine learning lifecycle.

Exam Tip: If a question asks for an Azure service specifically designed to build custom machine learning models, compare experiments, and deploy them, choose Azure Machine Learning rather than a prebuilt AI service such as Vision or Language.

A common trap is selecting Azure AI services when the scenario actually requires custom model training with your own dataset. Prebuilt AI services are best when you want ready-made capabilities such as OCR or sentiment analysis. Azure Machine Learning is the stronger answer when the task is to build and train your own predictive model from business data.

Section 3.6: Exam-style machine learning scenarios, terminology traps, and best-answer reasoning

Section 3.6: Exam-style machine learning scenarios, terminology traps, and best-answer reasoning

The AI-900 exam rewards disciplined reading. Machine learning questions are rarely hardest because of technical depth; they are hardest because several answers sound reasonable. Your job is to identify the best answer by isolating the business need, the expected output, the presence or absence of labels, and whether Azure is being used for custom modeling or prebuilt AI functions.

Start with the target outcome. If a company wants to estimate next month’s sales amount, that is regression. If it wants to determine whether a support case is urgent or non-urgent, that is classification. If it wants to discover natural customer segments from unlabeled data, that is clustering. If it wants to compare models and automate model selection in Azure, think automated ML in Azure Machine Learning.

Terminology traps are common. The words classify, categorize, group, segment, predict, score, and detect can overlap in everyday business language. On the exam, you must translate them into technical meaning. Group and segment often suggest clustering when labels do not exist. Categorize and assign a label usually indicate classification. Predict a value points to regression. Detect unusual behavior may suggest anomaly-oriented pattern analysis, especially if there are no labeled examples.

Another trap is overreading Azure product names. The exam sometimes places Azure Machine Learning next to Azure AI services in answer choices. Ask whether the scenario requires a custom model trained on the organization’s own data or a prebuilt capability. Custom predictive modeling usually belongs to Azure Machine Learning. Prebuilt tasks like OCR or sentiment analysis generally belong to Azure AI services covered in later chapters.

Exam Tip: Eliminate answers in two passes: first by machine learning type, then by Azure service fit. This greatly improves your odds even when two choices seem close.

Finally, watch for evaluation wording. If a model performs well on training data but poorly on new data, think overfitting. If the question asks which metric applies to binary prediction quality, think precision, recall, or accuracy rather than mean squared error. The best-answer mindset is not about memorizing isolated definitions. It is about seeing the scenario pattern that Microsoft is testing and mapping it cleanly to the correct machine learning principle on Azure.

Chapter milestones
  • Understand foundational machine learning concepts for AI-900
  • Compare regression, classification, and clustering
  • Interpret training, validation, metrics, and overfitting basics
  • Practice exam-style questions on ML on Azure
Chapter quiz

1. A retail company wants to predict the total dollar amount a customer is likely to spend next month based on historical purchase data. Which type of machine learning should they use?

Show answer
Correct answer: Regression
Regression is correct because the goal is to predict a numeric value: total dollar amount. Classification would be used if the company wanted to predict a category such as whether a customer will churn or not. Clustering would be used to group similar customers when no predefined label exists. On AI-900, Microsoft commonly tests whether you can identify the target output first: number means regression.

2. A healthcare organization wants to assign incoming support requests to one of three categories: billing, appointment scheduling, or prescription refill. Which machine learning approach best fits this requirement?

Show answer
Correct answer: Classification
Classification is correct because the model must choose from predefined categories. Clustering is incorrect because clustering is used when labels are not already defined and the goal is to discover natural groupings. Regression is incorrect because the output is not a continuous numeric value. AI-900 questions often distinguish classification from clustering by asking whether known labels already exist.

3. A bank has a large dataset of customers and wants to discover groups of customers with similar spending behaviors without using any existing labels. Which approach should be used?

Show answer
Correct answer: Clustering
Clustering is correct because the bank wants to find structure in unlabeled data by grouping similar customers. Classification is incorrect because there are no predefined classes to predict. Regression is incorrect because the goal is not to predict a numeric output. In the AI-900 exam domain, unlabeled data and similarity-based grouping are key indicators of clustering.

4. You train a machine learning model in Azure Machine Learning and observe that it performs very well on the training data but poorly on validation data. What does this most likely indicate?

Show answer
Correct answer: The model is overfitting
Overfitting is correct because strong performance on training data combined with weak validation performance indicates the model has learned the training set too closely and does not generalize well. Unsupervised learning is unrelated to this pattern; the symptom concerns how well a trained model generalizes, not which learning type was used. Clustering versus classification is also not the issue, because the symptom described is about model generalization and evaluation, not problem type selection.

5. A company wants to build and compare machine learning models on Azure with minimal coding effort. The team wants Azure to help automate algorithm selection and training. Which Azure Machine Learning feature should they use?

Show answer
Correct answer: Azure Machine Learning automated ML
Azure Machine Learning automated ML is correct because it is specifically designed to help users train and compare models with minimal code by automating tasks such as algorithm selection and experimentation. Azure AI Vision is incorrect because it is a prebuilt AI service for vision scenarios, not a general machine learning model-building feature. The custom Python SDK option is incorrect because Azure Machine Learning does not always require coding; AI-900 expects you to know that automated ML and the designer support low-code or no-code workflows.

Chapter 4: Computer Vision Workloads on Azure

Computer vision is a core AI-900 exam topic because Microsoft expects candidates to recognize common image-based AI scenarios and match them to the correct Azure service. On the exam, you are rarely asked to implement code. Instead, you are tested on whether you can identify the business need, determine the type of computer vision workload involved, and select the most appropriate Azure AI capability. This chapter focuses on the services and concepts most likely to appear: Azure AI Vision, OCR, face-related analysis concepts, custom vision scenarios, and document intelligence. If a question mentions images, printed text in images, object recognition, face attributes, or extracting fields from forms, you should immediately think in terms of computer vision workloads.

A reliable exam strategy is to classify the scenario before choosing the service. Ask yourself: Is the goal to analyze a general image, read text from an image, detect or classify objects, analyze face-related visual features, or extract structured data from documents such as invoices and receipts? The AI-900 exam rewards this kind of categorization. Many wrong answers are plausible because they are all Azure AI services, but only one aligns cleanly with the stated requirement. For example, extracting values from a receipt is not just OCR; it is a document extraction problem best associated with document intelligence. Likewise, identifying whether an image contains a dog, bicycle, or tree in a general-purpose way points to Azure AI Vision, while training a model to recognize your company’s specific product defects points to a custom vision approach.

Another common exam pattern is the distinction between prebuilt AI capabilities and custom-trained models. If the scenario requires broad, out-of-the-box analysis of common visual content, Microsoft usually expects you to choose a prebuilt service. If the scenario involves domain-specific image classes, niche objects, or a need to train on your own labeled dataset, then a custom model is a better fit. Questions may also test whether you understand responsible AI boundaries, especially around face-related capabilities. Read the wording carefully: image analysis, OCR, facial analysis, and identity verification are not interchangeable terms.

Exam Tip: On AI-900, the fastest path to the right answer is often to map the verb in the scenario to the workload. “Describe” or “tag” an image suggests image analysis. “Read” text suggests OCR. “Extract fields from forms” suggests document intelligence. “Train using your own images” suggests custom vision. “Analyze facial attributes” suggests face-related analysis concepts, but be careful with identity-related uses and responsible AI limitations.
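The verb-to-workload mapping in the tip above can be turned into a tiny self-quizzing helper. The cue phrases below are study heuristics taken from this section, not official Azure terminology, and the substring matching is deliberately naive.

```python
# Verb-to-workload cues from this section, encoded for self-quizzing.
# These are study heuristics, not official service names or API terms.

WORKLOAD_CUES = {
    "describe": "image analysis",
    "tag": "image analysis",
    "read": "OCR",
    "extract fields": "document intelligence",
    "train using your own images": "custom vision",
    "analyze facial attributes": "face-related analysis",
}

def guess_workload(scenario):
    """Return the first workload whose cue phrase appears in the scenario."""
    text = scenario.lower()
    for cue, workload in WORKLOAD_CUES.items():
        if cue in text:
            return workload
    return "unclear -- reread the scenario"

print(guess_workload("Read the text printed on a shipping label"))   # -> OCR
print(guess_workload("Extract fields from scanned invoices"))
```

On the real exam you would apply this mapping mentally, then double-check for responsible AI wording before committing to a face-related answer.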

As you work through this chapter, focus on the decision rules the exam tests: choosing the right service, understanding what each service is designed to do, avoiding overlap traps, and recognizing when a question is testing conceptual understanding rather than implementation detail. That is exactly how computer vision appears on the AI-900 blueprint.

Practice note for this chapter’s milestones (identify common computer vision tasks and outcomes; map computer vision scenarios to Azure AI services; understand image analysis, OCR, face, and custom vision concepts; practice exam-style questions on computer vision workloads on Azure): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 4.1: Computer vision workloads on Azure and image-based AI use cases

Section 4.1: Computer vision workloads on Azure and image-based AI use cases

Computer vision workloads involve deriving meaning from images, video frames, and scanned documents. For AI-900, you should understand the common task categories rather than deep model architecture. The exam expects you to recognize use cases such as image classification, object detection, optical character recognition, face-related analysis, and document data extraction. These categories matter because Azure offers different services for different outcomes, and exam questions often describe the business problem first and never directly name the service.

Typical image-based AI use cases include generating captions for an image, tagging visible objects or scenes, detecting the location of objects within an image, reading printed or handwritten text, and identifying document fields such as invoice totals or receipt merchants. The key exam skill is separating these outcomes. For instance, “What is in the image?” is different from “Where is the object in the image?” and both are different from “What text appears in the image?”

You should also know that some workloads use prebuilt models and others rely on custom training. Prebuilt services are appropriate when the AI task involves common objects, scenes, text, or standard document layouts. Custom-trained solutions are appropriate when the organization needs to recognize its own products, defects, logos, packaging states, or specialized visual categories. This distinction appears frequently in exam scenarios.

  • Image analysis: describe, tag, categorize, or detect common visual content.
  • OCR: extract printed or handwritten text from images and documents.
  • Face-related analysis: detect human faces and infer visual attributes, subject to responsible AI considerations.
  • Custom vision: train a model using your labeled images for specialized classes or object locations.
  • Document intelligence: extract structured fields, key-value pairs, tables, and layout from forms and business documents.

Exam Tip: If the scenario emphasizes “business documents” or “forms,” do not stop at OCR. The exam often expects you to recognize that extracting structured fields is more than simply reading raw text.

A common exam trap is choosing a service based on a familiar keyword instead of the full scenario. For example, a scanned invoice does contain text, but if the requirement is to capture invoice number, vendor name, and total amount into a system, that is a document extraction workload, not just generic image OCR. Another trap is assuming all image recognition tasks are the same. Classification answers the question “which category best fits this image?” while object detection identifies and locates objects within the image. The AI-900 exam often tests these distinctions in straightforward but easy-to-rush scenario wording.

Section 4.2: Azure AI Vision for image analysis, tagging, detection, and OCR

Section 4.2: Azure AI Vision for image analysis, tagging, detection, and OCR

Azure AI Vision is the key service to associate with general-purpose image analysis on the AI-900 exam. It is designed to extract information from images using prebuilt capabilities. You should connect this service with tasks such as generating image captions, detecting common objects, identifying tags, and reading text through OCR-related features. When a question describes a broad, out-of-the-box need to analyze image content without custom training, Azure AI Vision is usually the leading answer.

Image analysis features help applications understand the contents of an image. For exam purposes, that means recognizing common objects and scenes, assigning descriptive tags, and producing natural-language descriptions or captions. If a retailer wants to automatically tag uploaded product lifestyle images with labels like “outdoor,” “bicycle,” or “person,” image analysis is a natural fit. If a media company wants searchable tags on a large image library, the same reasoning applies.

Object detection is related but more specific. Detection not only identifies an object class but also finds its location within the image. That is useful when the scenario mentions bounding boxes, locating multiple items, or counting instances. The exam may compare image classification and object detection. Remember that classification labels the whole image or predicts the main class, while detection identifies one or more objects and their positions.

OCR is another major tested concept. Optical character recognition extracts text from images, screenshots, signs, and scanned pages. On AI-900, if the question is simply about reading text embedded in an image, OCR is the right concept. If the prompt goes further and asks for structured extraction from receipts, tax forms, or invoices, that points more strongly to document intelligence, which is covered later in this chapter.

Exam Tip: Watch for wording such as “analyze photos uploaded by users,” “generate tags,” “detect common objects,” or “read text from an image.” These are strong Azure AI Vision clues. By contrast, “train with our own labeled images” is a clue that the scenario is not purely prebuilt image analysis.

A frequent trap is overcomplicating a simple requirement. If the scenario says a company wants to identify landmarks, common objects, or text in everyday photos, you do not need a custom model. Another trap is confusing OCR with language analysis. OCR reads the text from the image; it does not determine sentiment or extract key phrases as a language service would. The exam tests your ability to stop at the correct layer of capability and not choose a service that solves a different downstream problem.

Section 4.3: Face analysis concepts, responsible use, and identity-related distinctions

Section 4.3: Face analysis concepts, responsible use, and identity-related distinctions

Face-related AI appears on AI-900 primarily as a concepts topic rather than a deployment tutorial. You should know that face analysis can involve detecting a face in an image and analyzing visual characteristics. However, you must also understand the responsible AI concerns and the distinction between analyzing a face and establishing identity. Microsoft exams often use this topic to test both technical understanding and ethical awareness.

The first distinction is face detection versus identity verification or recognition. Detecting that a face exists in an image, or identifying visual landmarks and attributes, is not the same as determining who the person is. Identity-related tasks involve comparing faces or associating a face with a known person. On the exam, read carefully for phrases like “verify a user matches their ID photo” versus “detect whether an image contains a face.” Those are different problem statements, and the latter is less identity-oriented.

Responsible use is especially important. Face-related AI can have significant privacy, fairness, transparency, and accountability implications. AI-900 may test your awareness that not every technically possible facial scenario is automatically appropriate. Questions may point toward limiting use, requiring human oversight, or recognizing that sensitive use cases demand caution. The exam does not expect legal analysis, but it does expect awareness that responsible AI principles matter strongly here.

Exam Tip: If a scenario includes identity-sensitive decisions, surveillance implications, or high-impact use, pause before selecting a face-related answer just because the word “face” appears. Microsoft often tests whether you can recognize responsible AI concerns and the difference between visual analysis and identity determination.

A common trap is assuming face services are the default answer anytime people appear in photos. If the business need is simply to understand scene content in a photo library, general image analysis may be enough. Another trap is ignoring the distinction between authentication scenarios and descriptive analysis. “Count faces in a crowd” is not the same as “confirm this employee’s identity.” The test may also include plausible distractors such as OCR or custom vision simply because an image is involved; your job is to identify the exact type of visual intelligence being requested.

For exam readiness, remember the big idea: face analysis is a specialized computer vision area with stronger responsible AI implications than many other image tasks. When these questions appear, Microsoft often wants you to show both service recognition and ethical judgment.

Section 4.4: Custom vision scenarios, model training concepts, and classification versus detection

Section 4.4: Custom vision scenarios, model training concepts, and classification versus detection

Custom vision is the right conceptual answer when an organization needs to train an image model on its own labeled examples. This is a major exam distinction: prebuilt vision services handle common image analysis, while custom vision addresses specialized categories or objects unique to a business. If the prompt mentions uploading labeled images, training a model to recognize proprietary products, or detecting manufacturing defects specific to the company, think custom vision.

AI-900 commonly tests the difference between classification and object detection in a custom vision setting. Classification predicts the category of an image. For example, a company may want to classify photos of fruit as apples, bananas, or oranges. Object detection goes further by locating one or more objects within the image, often with bounding boxes. For example, a warehouse may need to detect and locate damaged packages on a conveyor image.

The exam also expects a basic understanding of training concepts. A custom model requires labeled training images. Better quality labels and representative images generally improve usefulness. You do not need to know advanced tuning details for AI-900, but you should recognize that custom vision depends on data collection, labeling, training, and testing. If no custom dataset exists and the scenario only involves common visual categories, then prebuilt Azure AI Vision is usually more appropriate than custom training.

  • Choose classification when the goal is to assign an overall label to an image.
  • Choose detection when the goal is to identify and locate instances of objects.
  • Choose custom vision when the classes are specialized or business-specific.
  • Choose prebuilt vision when the task is general-purpose and can be handled out of the box.

Exam Tip: The phrase “using our own images” is one of the strongest clues for custom vision. The phrase “locate each object” is one of the strongest clues for object detection rather than classification.

A common exam trap is selecting custom vision for every image problem because it sounds more powerful. That is usually wrong when the requirement is simple and common, such as tagging beach photos or reading street-sign text. Another trap is confusing classification with detection. If the image contains several items and the requirement is to know where each appears, classification alone is insufficient. Microsoft often writes answer choices so that both seem almost correct unless you pay attention to whether location information is required.

Section 4.5: Document intelligence concepts for forms, receipts, and structured data extraction

Document intelligence is the service area to associate with extracting structured information from documents. This includes forms, invoices, receipts, business cards, and similar files where the goal is not merely to read text, but to identify meaningful fields and organize them into usable data. On AI-900, this topic often appears in scenario-based questions that contrast simple OCR with more advanced form understanding.

Think of the distinction this way: OCR returns text that appears in the document. Document intelligence goes further by understanding document layout and extracting key-value pairs, tables, and labeled fields. For example, if a company wants to process receipts and automatically capture merchant name, transaction date, and total amount, document intelligence is the better conceptual match. If the requirement is just to digitize a scanned page of text, OCR may be enough.
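
To make the contrast concrete, here is a tiny Python sketch (not the Azure SDK; the receipt text and the regular expressions are illustrative assumptions). OCR stops at the raw string; a document-intelligence-style step turns that string into labeled fields:

```python
import re

# OCR output: just the text that appears in the document.
ocr_text = "CONTOSO COFFEE\nDate: 2024-03-15\nTotal: $8.50"

def extract_receipt_fields(text: str) -> dict:
    """Turn raw OCR text into structured key-value pairs (toy parser, not a real service)."""
    merchant = text.splitlines()[0].title()
    date = re.search(r"Date:\s*(\S+)", text).group(1)
    total = float(re.search(r"Total:\s*\$([\d.]+)", text).group(1))
    return {"merchant": merchant, "date": date, "total": total}

print(extract_receipt_fields(ocr_text))
# {'merchant': 'Contoso Coffee', 'date': '2024-03-15', 'total': 8.5}
```

The structured dictionary, not the raw text, is what business systems consume, which is why the exam treats document intelligence as more than OCR.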

This service category is highly relevant to business automation. Common examples include invoice processing, expense management, claims handling, and application form intake. The exam may describe a workflow where employees currently type values from forms into a database and ask which Azure AI capability can automate the process. When the requirement includes fields, tables, or structured output, document intelligence is the concept to recognize.

Exam Tip: When you see forms, receipts, invoices, IDs, or tables, ask whether the organization needs raw text or structured data. If it is structured extraction, choose document intelligence over generic OCR.

A classic trap is choosing Azure AI Vision purely because documents are images. While technically true, the exam is testing the intended workload. Document intelligence is specialized for extracting structured business information from documents. Another trap is choosing custom vision because the company uses its own forms. Unless the question specifically emphasizes training an image model to identify custom visual categories, forms and field extraction still point toward document intelligence. Read carefully for words such as “extract,” “fields,” “key-value pairs,” “layout,” and “tables.” Those are the signals Microsoft uses to guide you to the correct answer.

For exam success, remember the hierarchy: if the task is reading text from an image, think OCR; if the task is understanding the structure and meaning of business documents, think document intelligence.

Section 4.6: Exam-style scenario questions for Computer vision workloads on Azure

In the AI-900 exam, computer vision questions are usually short scenario prompts with answer options that all sound credible. Your job is not to memorize every product detail, but to quickly identify the workload category and eliminate mismatched services. Start by underlining the outcome the organization wants. Is it general image tagging, text extraction, facial analysis, custom recognition, or structured document processing? Once you know the outcome, the correct Azure service usually becomes much clearer.

A strong exam method is to use a three-step filter. First, determine whether the scenario is prebuilt or custom. Second, determine whether the input is a general image or a business document. Third, determine whether the desired output is labels, object locations, text, facial information, or structured fields. This process is especially useful when Microsoft includes distractors such as language services or machine learning options that are related to AI but not the best fit for the specific task.

Look out for wording patterns. “Users upload photos and the app must generate descriptive tags” maps to image analysis. “The company must read serial numbers from photographed equipment labels” maps to OCR. “The system must train on thousands of labeled product images to detect packaging defects” maps to custom vision with detection. “The accounting team wants totals and vendor names captured from receipts” maps to document intelligence. “The application needs face-related analysis” requires extra caution and awareness of responsible AI distinctions.
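
Those wording patterns can be captured as a simple lookup. The Python snippet below is a hypothetical study aid only; the trigger phrases and category names are assumptions, not an Azure API:

```python
# Hypothetical study aid: map exam trigger phrases to the workload they usually signal.
TRIGGER_MAP = {
    "descriptive tags": "image analysis",
    "read text": "OCR",
    "labeled images": "custom vision",
    "bounding boxes": "object detection",
    "receipts": "document intelligence",
    "face": "face analysis",
}

def suggest_workload(scenario: str) -> str:
    """Return the first matching workload category, or flag the scenario for rereading."""
    for phrase, workload in TRIGGER_MAP.items():
        if phrase in scenario.lower():
            return workload
    return "unclear - reread the scenario"

print(suggest_workload("The app must read text from photographed labels"))  # OCR
```

A real exam question needs human judgment, of course, but drilling these phrase-to-workload pairs builds the instant recognition the section describes.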

Exam Tip: The exam often hides the clue in one noun phrase. “Bounding boxes” suggests detection. “Labeled images” suggests training. “Scanned forms” suggests document intelligence. “Read text in images” suggests OCR. Train yourself to recognize these trigger phrases instantly.

Common mistakes include answering with the broadest service instead of the most precise one, confusing OCR with document extraction, and ignoring whether a solution needs custom training. Another frequent error is being distracted by the industry context. Whether the scenario is healthcare, retail, manufacturing, or government matters less than the actual AI task being described. Focus on the task, not the business domain.

As a final review for this chapter, remember the core exam map: Azure AI Vision for general image analysis and OCR-related reading tasks, face analysis concepts for face-related scenarios with responsible AI awareness, custom vision for training image models on your own labeled data, and document intelligence for extracting structured data from forms and receipts. If you can consistently classify scenarios into those buckets, you will perform well on this AI-900 objective area.

Chapter milestones
  • Identify common computer vision tasks and outcomes
  • Map computer vision scenarios to Azure AI services
  • Understand image analysis, OCR, face, and custom vision concepts
  • Practice exam-style questions on Computer vision workloads on Azure

Chapter quiz

1. A retail company wants to analyze photos from its online catalog to generate captions and identify common objects such as chairs, tables, and lamps without training a custom model. Which Azure AI service should the company use?

Show answer
Correct answer: Azure AI Vision
Azure AI Vision is correct because it provides prebuilt image analysis capabilities such as tagging, captioning, and detecting common objects in images. Azure AI Document Intelligence is designed for extracting structured data from forms, invoices, and receipts rather than general image content. Azure AI Custom Vision would be more appropriate if the company needed to train a model on its own labeled images for domain-specific categories instead of using out-of-the-box image analysis.

2. A company scans paper receipts and wants to extract merchant names, transaction totals, and purchase dates into a business system. Which Azure AI service is the best fit?

Show answer
Correct answer: Azure AI Document Intelligence
Azure AI Document Intelligence is correct because the scenario is not just about reading text; it is about extracting structured fields from documents such as receipts. Azure AI Vision image analysis can perform OCR, but it is not the best choice when the requirement is to identify document fields and key-value pairs. Azure AI Face is unrelated because the scenario does not involve facial analysis or identity-related image processing.

3. A manufacturer wants to detect whether images of products on an assembly line contain one of three specific defect types unique to its products. The company has a labeled image dataset for training. Which approach should it use?

Show answer
Correct answer: Use Azure AI Custom Vision to train a model on the defect images
Azure AI Custom Vision is correct because the requirement is domain-specific and uses a labeled dataset to train a model to recognize custom defect classes. Azure AI Vision is intended for broad, prebuilt analysis of common visual content and is not the best fit for niche product defects unique to one manufacturer. Azure AI Document Intelligence focuses on document and form extraction, not object or defect classification in product images.

4. You need to build a solution that reads printed text from street signs in photos taken by a mobile app. Which computer vision capability should you select?

Show answer
Correct answer: OCR
OCR is correct because the requirement is to read printed text from images. Face analysis is used for detecting and analyzing face-related visual features, which is not relevant to street signs. Object tracking focuses on following objects across frames or images and does not address extracting text content. On the AI-900 exam, verbs such as read or extract text strongly indicate OCR.

5. A company wants an application to analyze photos and determine whether a face is present and identify visual facial attributes. Which Azure AI capability most closely matches this requirement?

Show answer
Correct answer: Azure AI Face
Azure AI Face is correct because it is the Azure AI capability associated with detecting faces and analyzing facial attributes. Azure AI Document Intelligence is for extracting structured information from documents and forms, not analyzing faces in images. Azure AI Language is used for text-based AI workloads such as sentiment analysis or entity recognition, so it does not match a face-related computer vision scenario. This reflects the AI-900 objective of distinguishing image analysis, OCR, document extraction, and face-related analysis.

Chapter 5: NLP and Generative AI Workloads on Azure

This chapter maps directly to a major AI-900 exam objective: identifying natural language processing workloads, selecting the correct Azure AI service for language and speech scenarios, and recognizing the fundamentals of generative AI on Azure. On the exam, Microsoft often tests whether you can match a business requirement to the most appropriate capability rather than whether you can configure a service in detail. That means you must recognize the difference between analyzing text, understanding spoken audio, translating content, answering questions from a knowledge source, and generating new content from prompts.

Natural language processing, or NLP, focuses on helping systems work with human language in text or speech form. In Azure, exam questions commonly revolve around services that analyze sentiment, extract key phrases, identify entities, summarize text, convert speech to text, synthesize speech, translate language, and build conversational experiences. The test also expects basic awareness of generative AI concepts such as copilots, prompts, foundation models, grounding, and responsible AI safeguards. You are not being tested as an engineer implementing production architectures, but you are expected to understand the use cases and choose the right service.

A common exam trap is confusing traditional NLP with generative AI. If the scenario asks you to detect whether customer feedback is positive or negative, that is a text analytics task, not a generative AI task. If the scenario asks you to draft an email, summarize a document conversationally, or generate code or content in response to a prompt, that points toward generative AI. Another common trap is mixing up speech translation with text translation. Read the input carefully: if users are speaking into a microphone, think speech capabilities; if the input is already text, think language translation capabilities.

Exam Tip: When two answer choices sound similar, identify the input, the required output, and whether the goal is analysis, prediction, retrieval, or generation. That three-part check often reveals the correct Azure AI service.

This chapter integrates the lesson goals you need for exam success: explaining core NLP workloads and language AI capabilities, choosing Azure services for speech, text, translation, and conversational AI, understanding generative AI workloads and Azure OpenAI concepts, and applying exam strategy to scenario-based questions. As you read, focus on recognizing patterns in wording. AI-900 questions are often short but packed with clues.

By the end of this chapter, you should be able to distinguish core language workloads, identify speech and translation solutions, understand conversational AI and question answering scenarios, explain foundation model and prompt basics, and navigate responsible generative AI concepts such as grounding and content safety. These are exactly the kinds of distinctions Microsoft expects an Azure AI Fundamentals candidate to make.

Practice note for Explain core NLP workloads and language AI capabilities: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Practice note for Choose Azure services for speech, text, translation, and conversational AI: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Practice note for Understand generative AI workloads, prompts, and Azure OpenAI concepts: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Practice note for Practice exam-style questions on NLP and Generative AI workloads on Azure: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
  • Section 5.1: NLP workloads on Azure including sentiment analysis, key phrases, entities, and summarization concepts
  • Section 5.2: Speech workloads on Azure including speech to text, text to speech, and translation
  • Section 5.3: Conversational AI, question answering, and language understanding scenarios
  • Section 5.4: Generative AI workloads on Azure including copilots, prompt engineering, and foundation model basics
  • Section 5.5: Azure OpenAI concepts, responsible generative AI, grounding, and content safety basics
  • Section 5.6: Exam-style scenario questions for NLP workloads on Azure and Generative AI workloads on Azure

Section 5.1: NLP workloads on Azure including sentiment analysis, key phrases, entities, and summarization concepts

In AI-900, NLP questions frequently test whether you can recognize common text analysis workloads. These include sentiment analysis, key phrase extraction, entity recognition, and summarization. The exam objective is not to make you memorize every API detail, but to ensure that you understand what each capability does and when it should be used. If a company wants to process customer reviews, support tickets, social media posts, articles, or internal documents, you should immediately think about Azure language capabilities.

Sentiment analysis determines whether a piece of text expresses a positive, negative, mixed, or neutral opinion. A classic exam scenario is a company that wants to monitor customer satisfaction from online reviews. If the business wants an emotional or opinion-based assessment, sentiment analysis is the right fit. Do not confuse it with general-purpose text classification: sentiment analysis is a specialized language workload focused specifically on opinion in text.

Key phrase extraction identifies the main talking points in text. If a scenario says a business wants to quickly identify the important terms in a large set of comments, articles, or case notes, key phrase extraction is likely the answer. Entity recognition goes a step further by detecting and categorizing named items such as people, locations, organizations, dates, quantities, or other structured references. If the requirement is to find company names, addresses, products, or dates in unstructured text, entity recognition is a better match than key phrase extraction.

Summarization condenses long text into shorter content while preserving the important meaning. On the exam, summarization may appear in two forms: extractive summarization, which selects the most important sentences from a document, and abstractive summarization, which generates new, concise wording. The key idea is reducing volume while retaining essential information. If users need fast review of long reports, articles, or transcripts, summarization is usually the best choice.

  • Sentiment analysis: opinion or emotional tone
  • Key phrase extraction: main topics or notable terms
  • Entity recognition: people, places, organizations, dates, and similar items
  • Summarization: shorter version of long text
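
To see how these workloads differ in practice, here is a deliberately tiny rule-based sketch in Python. Real Azure AI Language models are trained systems, not word lists; the lists and the "capitalized words as key phrases" shortcut are assumptions made purely for illustration:

```python
# Toy word lists (assumptions for the demo; real sentiment models are far richer).
POSITIVE = {"great", "love", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "terrible", "refund"}

def toy_sentiment(text: str) -> str:
    """Count positive vs negative words to mimic an opinion assessment."""
    words = set(text.lower().replace(".", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def toy_key_phrases(text: str) -> list:
    """Pretend key phrases are just the capitalized words (illustration only)."""
    return [w.strip(".,") for w in text.split() if w[0].isupper()]

review = "Delivery was slow and the Contoso blender arrived broken."
print(toy_sentiment(review))    # negative
print(toy_key_phrases(review))  # ['Delivery', 'Contoso']
```

Even this toy version shows the exam distinction: one capability judges opinion, the other surfaces what the text is about.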

Exam Tip: If the requirement says “identify what the text is about,” think key phrases. If it says “identify names, locations, dates, or organizations,” think entities. If it says “determine how the customer feels,” think sentiment.

A common trap is choosing generative AI for summarization just because modern chat experiences can summarize. In AI-900, Microsoft may still expect you to recognize summarization as a language workload concept, especially when the scenario emphasizes extracting meaning from text rather than creating broad open-ended content. Always use the simplest service that directly matches the requirement.

Another exam pattern is combining multiple capabilities in one scenario. A retailer may want to analyze reviews for customer mood, identify mentioned products, and create a short overview for managers. That means sentiment analysis, entities, and summarization may all be involved. When the question asks for the “best service,” look for the Azure service family that supports language analysis rather than a custom machine learning approach.

The exam tests understanding at the scenario level: what kind of text problem is being solved, and which capability fits best. Learn the vocabulary, but more importantly, learn the intent behind each workload.

Section 5.2: Speech workloads on Azure including speech to text, text to speech, and translation

Speech workloads are another core AI-900 area. Microsoft expects you to know the difference between converting spoken language into text, generating spoken audio from text, and translating content across languages. These sound similar in conversation, but on the exam they are distinct capabilities with distinct use cases.

Speech to text transcribes spoken words into written text. Typical scenarios include captioning meetings, transcribing call center audio, enabling voice commands, and making audio searchable. If the input is live or recorded speech and the output is text, speech to text is the right concept. Text to speech does the opposite: it synthesizes natural-sounding speech from written text. This is useful for voice assistants, automated reading of notifications, accessibility tools, and interactive systems that speak back to users.

Translation can appear in either text or speech scenarios. Text translation converts written text from one language to another. Speech translation handles spoken input and translates it, sometimes into text and sometimes as spoken output depending on the design. AI-900 questions often simplify this into identifying whether the scenario begins with speech or text. That clue matters.

For example, a multinational event platform that displays subtitles for speakers in different languages points to speech translation. A web app that converts product descriptions from English to French points to text translation. A customer service bot that reads responses aloud to a caller points to text to speech. A compliance team that needs written transcripts of recorded calls needs speech to text.

Exam Tip: Focus first on modality. Is the source input speech, text, or both? Many wrong answers become obvious once you identify the input and desired output.

  • Speech to text: spoken input to text output
  • Text to speech: text input to spoken output
  • Translation: language conversion for text or speech content
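
The modality check above can be written out as a small Python routine. This is a study sketch only, not an Azure service call; the function and parameter names are invented for the example:

```python
def speech_workload(input_mod: str, output_mod: str, translate: bool = False) -> str:
    """Name the workload from input modality, output modality, and a translation flag."""
    if translate:
        return "speech translation" if input_mod == "speech" else "text translation"
    if input_mod == "speech" and output_mod == "text":
        return "speech to text"
    if input_mod == "text" and output_mod == "speech":
        return "text to speech"
    return "reexamine the scenario"

print(speech_workload("speech", "text"))                  # speech to text
print(speech_workload("text", "speech"))                  # text to speech
print(speech_workload("speech", "text", translate=True))  # speech translation
```

The `translate` flag captures the extra question you must always ask: does the language change, or only the modality?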

A frequent trap is assuming translation always requires generative AI. It does not. Translation is a standard AI language workload and is tested as such. Another trap is confusing voice bots with language understanding. If the scenario emphasizes recognizing spoken words, that is speech recognition. If it emphasizes determining user intent from what was said, that enters language understanding territory. In many real solutions both are used, but exam questions usually ask you to identify the primary required capability.

Speech services also connect to accessibility and user experience scenarios. Converting lectures to captions supports accessibility. Converting written instructions to spoken audio helps users in hands-free environments. Microsoft likes practical business examples, so prepare to interpret requirements in context rather than just memorizing terms.

When reviewing answer choices, ask three questions: What is the input? What is the output? Is the main challenge recognition, synthesis, or translation? This method is highly effective for AI-900 speech questions.

Section 5.3: Conversational AI, question answering, and language understanding scenarios

Conversational AI on Azure includes solutions that interact with users through natural language. On the AI-900 exam, this usually appears in scenarios involving chatbots, virtual agents, support assistants, FAQ systems, and applications that need to interpret what a user wants. The key is to distinguish between question answering and language understanding, since the exam often places them side by side.

Question answering is appropriate when a system needs to respond to user questions using a known source of information, such as FAQs, manuals, policy documents, or knowledge bases. If the scenario says users ask common support questions and the organization already has answers in existing documents, question answering is likely the correct concept. The system is not inventing answers from scratch; it is using available knowledge to return relevant responses.

Language understanding focuses on identifying user intent and extracting useful information from user utterances. If a user says, “Book me a flight to Seattle next Tuesday,” the system may need to identify the intent as booking travel and extract entities such as destination and date. On the exam, language understanding is the right fit when the application must interpret commands, route requests, or collect structured information from free-form text.
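
The flight example can be mimicked with a toy rule-based pass in Python. Real conversational language understanding uses trained models, so the intent name and the regular expressions here are assumptions for illustration only:

```python
import re

def understand(utterance: str) -> dict:
    """Toy intent and entity extraction: intent from a keyword pattern, entities from regexes."""
    intent = "BookFlight" if re.search(r"\bbook\b.*\bflight\b", utterance.lower()) else "None"
    dest = re.search(r"\bto ([A-Z][a-z]+)", utterance)
    day = re.search(r"\b(Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)\b", utterance)
    return {
        "intent": intent,
        "entities": {
            "destination": dest.group(1) if dest else None,
            "day": day.group(1) if day else None,
        },
    }

print(understand("Book me a flight to Seattle next Tuesday"))
# {'intent': 'BookFlight', 'entities': {'destination': 'Seattle', 'day': 'Tuesday'}}
```

The output shape, an intent plus entities, is the concept the exam tests, regardless of how the real model computes it.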

Conversational AI often combines multiple capabilities. A chatbot may use question answering for FAQ responses, language understanding for task-oriented actions, and speech services for voice input and output. However, exam questions usually narrow the requirement to one main need. Read carefully. If the scenario emphasizes “answer questions from documents,” choose question answering. If it emphasizes “determine what the user wants to do,” choose language understanding.

Exam Tip: Question answering retrieves an answer from known content. Language understanding interprets intent and entities in user input. If you confuse retrieval with intent detection, you may miss easy points.

A common trap is choosing generative AI whenever a chatbot is mentioned. Not every chatbot is generative. Traditional conversational AI can be built around predefined knowledge sources, intent recognition, and orchestration. If the requirement is reliability, structured workflows, and responses from approved content, question answering or language understanding may be more appropriate than open-ended generation.

Microsoft also likes scenarios involving customer support. For example, a company may want a bot that answers return-policy questions from an internal FAQ and routes warranty claims to the correct process. That scenario suggests both question answering and language understanding. If asked for the best service for the FAQ part, choose question answering. If asked for interpreting “I want to return my laptop” as a return request, choose language understanding.

The exam objective here is practical selection, not architecture design. Know the role of each capability and look for wording cues such as “FAQ,” “knowledge base,” “intent,” “entities,” “user utterance,” and “virtual agent.”

Section 5.4: Generative AI workloads on Azure including copilots, prompt engineering, and foundation model basics

Generative AI is now a visible part of the AI-900 blueprint, and Microsoft expects foundational understanding rather than deep model science. Generative AI workloads involve creating new content such as text, code, summaries, chat responses, and other outputs based on user prompts. On the exam, you should recognize scenarios involving copilots, content drafting, document generation, conversational assistance, and prompt-based interaction.

A copilot is an AI assistant embedded in an application or workflow to help users complete tasks. Examples include drafting emails, summarizing meeting notes, helping with code, or assisting customer service agents. In exam wording, if the solution supports a human user by proposing, drafting, or accelerating work, “copilot” is a strong clue. The AI is assisting rather than fully automating every decision.

Foundation models are large pre-trained models that can be adapted or prompted for many tasks. The exam does not require knowledge of model internals, but you should know that a foundation model can support multiple use cases without training a separate model from scratch for each one. This flexibility is one reason generative AI is so powerful. Instead of building separate systems for every wording variation, a prompt can guide the model to perform the needed task.

Prompt engineering refers to designing inputs that guide model behavior. Prompts can specify the role, task, tone, format, constraints, and context for a response. For AI-900, the key point is that better prompts usually produce more useful outputs. If the exam asks how to improve output quality without retraining the model, prompt design is an important concept. You may see references to giving instructions, examples, or context.

  • Copilots assist users with tasks and workflows
  • Foundation models are broadly capable pre-trained models
  • Prompts guide the model toward the desired output
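
Prompt engineering can be pictured as structured text assembly. The sketch below is plain Python with no model actually called, and every name in it is invented for the demo; it simply shows how role, task, tone, format, and context combine into one prompt:

```python
def build_prompt(role: str, task: str, tone: str, output_format: str, context: str) -> str:
    """Assemble the instruction elements a prompt typically specifies."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Tone: {tone}\n"
        f"Respond as: {output_format}\n"
        f"Context:\n{context}"
    )

prompt = build_prompt(
    role="a helpful customer-support copilot",
    task="draft a short apology email about a delayed order",
    tone="professional and empathetic",
    output_format="three sentences of plain text",
    context="Order 1042 shipped five days late.",
)
print(prompt)
```

Changing any one element, such as the tone line, would steer the same foundation model toward a different output without any retraining, which is the key exam idea.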

Exam Tip: If the requirement is to create new text or assist interactively with broad language tasks, think generative AI. If the requirement is to classify, detect sentiment, or extract entities, think standard NLP.

A common trap is assuming generative AI is always the best answer because it seems more advanced. On the exam, the correct answer is the service that best fits the requirement, not the one with the most impressive technology. If a business only needs keyword extraction from documents, a language analytics capability is more direct and controlled than a broad generative model.

Generative AI scenarios may also mention grounding, safety, and limitations. Remember that generated content can be fluent but incorrect. Microsoft wants candidates to understand the opportunities and the need for responsible controls. The exam may not ask for algorithmic details, but it will expect awareness that prompts, context, and safeguards affect output quality and trustworthiness.

Section 5.5: Azure OpenAI concepts, responsible generative AI, grounding, and content safety basics

Azure OpenAI brings powerful generative AI models into the Azure ecosystem, and AI-900 tests conceptual understanding of how these models are used responsibly. At this level, you should know that Azure OpenAI can support chat, content generation, summarization, and similar generative tasks. More importantly, you need to understand the ideas of responsible generative AI, grounding, and content safety.

Grounding means providing relevant, trusted context so the model can generate responses based on approved information. This is especially important in enterprise scenarios, such as answering questions from company policies, product manuals, or internal documents. Without grounding, a model may produce plausible but inaccurate responses. On the exam, if a scenario asks how to make responses more relevant to organizational data or how to reduce unsupported answers, grounding is a key concept.
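
Grounding can be sketched as "retrieve approved passages, then prepend them to the prompt." In the toy Python example below, retrieval is a naive keyword overlap and the knowledge snippets are invented; production systems use proper search or vector retrieval:

```python
# Invented "approved content" store for the demo.
KNOWLEDGE = [
    "Returns are accepted within 30 days with a receipt.",
    "Standard shipping takes 3 to 5 business days.",
]

def ground(question: str, top_n: int = 1) -> str:
    """Rank passages by keyword overlap, then build a grounded prompt from the best ones."""
    q_words = {w.strip("?.,") for w in question.lower().split()}
    ranked = sorted(
        KNOWLEDGE,
        key=lambda doc: len(q_words & {w.strip("?.,") for w in doc.lower().split()}),
        reverse=True,
    )
    context = "\n".join(ranked[:top_n])
    return f"Answer ONLY from the context below.\nContext:\n{context}\nQuestion: {question}"

print(ground("How many days do customers have for returns?"))
```

The point for the exam is the pattern: the model is constrained to approved context, which is how grounding reduces plausible but unsupported answers.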

Responsible generative AI includes designing systems that are fair, reliable, safe, transparent, and accountable. In AI-900 language, this often appears as avoiding harmful content, reducing bias, protecting privacy, and ensuring outputs are reviewed appropriately. Microsoft wants candidates to recognize that generative AI is not just about capability; it is also about governance and trust.

Content safety refers to mechanisms that help detect or filter harmful, unsafe, or disallowed content in prompts and outputs. If a question mentions preventing abusive, violent, hateful, or inappropriate responses, think content safety controls. This may also include moderating user prompts, restricting outputs, and enforcing organizational policies.
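
Conceptually, a content safety gate checks both the user prompt and the generated output before anything reaches the user. The Python sketch below uses a placeholder blocklist; the real Azure AI Content Safety service uses trained classifiers with severity levels, so treat every name here as an assumption:

```python
# Placeholder blocklist (real content safety is classifier-based, not a term list).
BLOCKED_TERMS = {"hateful-term-1", "violent-term-1"}

def is_allowed(text: str) -> bool:
    """Return False if the text contains any blocked term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def safe_respond(prompt: str, generate) -> str:
    """Gate the prompt before generation and the reply after generation."""
    if not is_allowed(prompt):
        return "Your request was blocked by content policy."
    reply = generate(prompt)
    return reply if is_allowed(reply) else "The response was withheld by content policy."

print(safe_respond("Summarize our return policy",
                   lambda p: "Returns accepted within 30 days."))
```

Note the two checkpoints: moderating the input and moderating the output are separate controls, and exam scenarios may emphasize either one.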

Exam Tip: If the scenario asks how to improve factual relevance, choose grounding. If it asks how to reduce harmful or inappropriate responses, choose content safety. If it asks for broader ethical use, think responsible AI principles.

A common trap is treating grounding as training. Grounding does not necessarily mean building or retraining a model from scratch. It means supplying relevant context so the model can answer more accurately within a task. Another trap is thinking content safety guarantees correctness. It helps manage harmful content, but factual quality still depends on context, prompts, and design.

Azure OpenAI questions may also frame responsible use in terms of human oversight. For high-impact decisions, generated content should not be accepted blindly. The exam may reward the answer that includes review, transparency, and use of approved sources. Microsoft frequently aligns exam questions with practical governance, not just technical capability.

When evaluating answer choices, separate three ideas clearly: model capability, contextual grounding, and safety controls. The strongest exam candidates can explain why these are related but not identical. That distinction is exactly the kind of reasoning AI-900 assesses.

Section 5.6: Exam-style scenario questions for NLP workloads on Azure and Generative AI workloads on Azure


This final section is about exam approach rather than memorization. AI-900 scenario questions on NLP and generative AI are often short, but they rely on precision. Microsoft commonly presents a business need and asks which capability or Azure AI service is most appropriate. Your job is to identify the primary task being described and avoid being distracted by extra words.

Start by classifying the workload. Ask yourself whether the scenario is about analyzing existing language, understanding user intent, working with speech, translating between languages, answering from known content, or generating new content. This first classification eliminates many wrong answers immediately. For example, sentiment analysis, entity recognition, and key phrase extraction all belong to text analysis; they do not generate new content. Chat-based drafting and copilot assistance belong to generative AI. Voice transcription belongs to speech recognition.

Next, identify the input and output. This is especially useful for speech and translation questions. If the input is audio and the output is text, think speech-to-text. If the input is text and the output is spoken audio, think text-to-speech. If the core need is changing one language into another, determine whether the source is text or speech before selecting the answer.
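The input-to-output check above can be captured as a tiny lookup table for self-quizzing. This is a memorization aid, not Azure code; the function name and the form labels are invented for this sketch.

```python
def speech_capability(input_form, output_form):
    """Map a scenario's input and output forms to the AI-900 capability
    they usually suggest. Invented study aid, not an Azure API."""
    mapping = {
        ("audio", "text"): "speech-to-text",
        ("text", "audio"): "text-to-speech",
        ("audio", "translated audio"): "speech translation",
        ("text", "translated text"): "text translation",
    }
    return mapping.get((input_form, output_form), "classify the workload further")
```

For example, a call center transcribing Spanish calls into English audio is ("audio", "translated audio"), which points to speech translation rather than text translation.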

Then look for clues about source knowledge. If the solution must answer from manuals, FAQs, or policy documents, that points toward question answering or grounding in a generative AI context. If the system must decide what action a user wants, that suggests language understanding. If the system must draft or summarize content in flexible ways, that suggests generative AI.

Exam Tip: Microsoft often includes one answer that is technically possible but not the best fit. Choose the most direct and purpose-built capability for the stated requirement.

Watch for these frequent traps:

  • Choosing generative AI when a standard NLP feature exactly matches the requirement
  • Confusing speech translation with text translation
  • Confusing question answering with language understanding
  • Assuming content safety solves factual accuracy problems
  • Assuming grounding means retraining a model

During the exam, mentally underline keywords such as sentiment, summarize, transcribe, translate, intent, FAQ, copilot, prompt, grounded, and safe content. These words often map directly to tested concepts. Also pay attention to whether the organization wants analysis, assistance, automation, or generation. The verbs in the scenario matter.

As part of your preparation, practice rewriting each scenario in one plain sentence: “This company wants to detect opinion,” or “This app needs to answer from known documents,” or “This tool should draft content for users.” If you can restate the need clearly, the answer becomes much easier to spot. That is the core exam skill for this chapter: matching real-world language and generative AI scenarios to the correct Azure AI concept quickly and accurately.

Chapter milestones
  • Explain core NLP workloads and language AI capabilities
  • Choose Azure services for speech, text, translation, and conversational AI
  • Understand generative AI workloads, prompts, and Azure OpenAI concepts
  • Practice exam-style questions on NLP and Generative AI workloads on Azure
Chapter quiz

1. A company wants to analyze thousands of customer review comments to determine whether each comment expresses a positive, negative, or neutral opinion. Which Azure AI capability should you choose?

Show answer
Correct answer: Sentiment analysis in Azure AI Language
Sentiment analysis in Azure AI Language is the correct choice because the requirement is to classify opinion in text as positive, negative, or neutral, which is a core NLP analysis workload measured in AI-900. Question answering is designed to return answers from a knowledge source, not evaluate emotional tone. Azure OpenAI text generation creates new content from prompts, which is generative AI and not the best fit for straightforward sentiment detection.

2. A business wants to build a solution in which callers speak into a phone system in Spanish and receive real-time spoken responses in English. Which Azure service capability best matches this requirement?

Show answer
Correct answer: Speech translation in Azure AI Speech
Speech translation in Azure AI Speech is correct because the input is spoken audio and the scenario requires translation across languages in real time. Text Translation is used when the input is already text, which is a common exam trap. Key phrase extraction analyzes important terms in text and does not handle spoken language translation.

3. A support team wants a chatbot that can return answers from a curated set of FAQs and product manuals. The bot should retrieve the most relevant answer rather than generate a creative response. Which Azure AI capability should you recommend?

Show answer
Correct answer: Question answering in Azure AI Language
Question answering in Azure AI Language is the best fit because the scenario is based on retrieving answers from an existing knowledge source such as FAQs and manuals. Azure OpenAI completions are intended for generative scenarios and may produce novel responses instead of grounded retrieval-based answers. Speech-to-text only converts audio into text and does not answer questions from documents.

4. A marketing team wants to provide a prompt such as "Draft a professional product announcement based on these bullet points" and have AI produce new text. Which Azure service is most appropriate for this workload?

Show answer
Correct answer: Azure OpenAI Service
Azure OpenAI Service is correct because the requirement is to generate new content from a prompt, which is a generative AI workload and a key AI-900 concept. Named entity recognition identifies people, places, organizations, and similar items in existing text; it does not create drafts. Azure AI Translator converts text between languages, but the scenario is about content generation, not translation.

5. You are designing a generative AI solution that answers employee questions by using an internal policy library as reference material. You want to reduce unsupported or fabricated responses by ensuring the model uses the provided source content. Which concept does this describe?

Show answer
Correct answer: Grounding
Grounding is correct because it refers to providing relevant source data so a generative AI model can base its responses on trusted content, an important Azure OpenAI and responsible AI concept in the AI-900 exam domain. Sentiment analysis evaluates the emotional tone of text and is unrelated to reducing fabricated answers. Optical character recognition extracts text from images and does not address response quality in generative AI systems.

Chapter 6: Full Mock Exam and Final Review

This final chapter brings the entire Microsoft AI-900 Azure AI Fundamentals Prep course together into one exam-focused review experience. By this point, you have studied the major objective areas that Microsoft expects candidates to recognize at a foundational level: AI workloads and responsible AI concepts, machine learning principles on Azure, computer vision services, natural language processing workloads, and generative AI concepts including Azure OpenAI. The purpose of this chapter is not to introduce brand-new material, but to convert what you already know into exam performance. That means practicing how Microsoft phrases questions, identifying distractors, and strengthening weak areas before test day.

The AI-900 exam is designed to assess conceptual understanding rather than hands-on engineering implementation. Many candidates lose points not because they do not know the technology, but because they misread the task, confuse similar Azure services, or overthink the level of detail required. In this chapter, the mock exam material is organized in two broad parts, followed by weak spot analysis and an exam day checklist. The structure mirrors how successful candidates prepare in the final days before the exam: first simulate domain switching, then review service mapping, then tighten strategy and pacing.

When working through a full mock exam, think in terms of objective alignment. Ask yourself what the exam is really testing in each item. Is the task asking you to identify an AI workload category, select the most appropriate Azure AI service, distinguish classification from regression, or apply responsible AI principles? This mindset helps you ignore unnecessary wording and focus on the tested skill. AI-900 often rewards precision in matching business scenarios to service capabilities. For example, recognizing when a requirement calls for OCR rather than image classification, or when language detection is different from sentiment analysis, can quickly separate a correct answer from a tempting distractor.

Exam Tip: Microsoft fundamentals exams often include answer choices that are technically related but not the best fit. Your job is not to find something that could work in real life; your job is to choose the answer that most directly matches the stated requirement using Microsoft terminology.

The mock exam portions of this chapter should be used as performance diagnostics. In Mock Exam Part 1, focus on broad domain coverage and your ability to switch rapidly among AI workloads, ML principles, and responsible AI. In Mock Exam Part 2, increase attention to service differentiation in computer vision, NLP, and generative AI. After each practice block, perform weak spot analysis immediately. Do not just count your score. Identify patterns such as mixing up Azure AI Language capabilities, forgetting evaluation metrics, or confusing foundation models with copilots. These patterns are far more valuable than any single question result.

As you complete your final review, prioritize high-yield comparisons and memorization aids. The AI-900 exam consistently tests recognition-level understanding of what each service does best. Build mental maps: regression predicts numeric values, classification predicts categories, clustering groups unlabeled data; OCR extracts text from images, image analysis describes and tags images, facial analysis concerns face-related attributes, and custom vision handles task-specific trained image models. Likewise, for NLP, remember that sentiment analysis detects opinion polarity, key phrase extraction pulls important terms, entity recognition identifies known categories of information, and translation converts language while speech services handle spoken input and output.

Exam Tip: If two answer choices seem close, look for the operational verb in the question stem. Words like classify, detect, extract, group, translate, generate, summarize, and predict usually point directly to a specific AI workload or Azure service category.

The final part of the chapter centers on exam day readiness. A strong AI-900 performance is built on calm reading, disciplined elimination, and reliable pacing. Do not rush early questions just because they seem easy, and do not let one difficult item drain your confidence. Use the review screen strategically if your exam delivery option provides it. Mark any item where you had to guess between two services or where a responsible AI principle felt uncertain. These are often recoverable points during final review if you return with a clearer head.

Remember that AI-900 tests foundational literacy in Azure AI, not deep architecture design. Keep your attention on business needs, service fit, core ML concepts, and responsible use. If you can explain why one answer is correct and why the common distractors are less suitable, you are operating at the right exam level. Use the sections that follow as your final full-domain blueprint, targeted mock review, weak spot diagnostic guide, and exam day confidence checklist.

Section 6.1: Full-domain mock exam blueprint aligned to Microsoft AI-900 objectives

Your final mock exam should feel like the real AI-900 experience: mixed topics, changing contexts, and frequent service-mapping decisions. The exam does not stay within one domain for long. A question about responsible AI may be followed immediately by one about clustering, then by one about OCR, then by one about copilots or prompts. For that reason, a good blueprint for Mock Exam Part 1 should deliberately alternate domains instead of grouping everything into isolated blocks. This trains the exact mental switching the real exam demands.

Map your review to the official-style domains covered by this course's outcomes. First, be ready to describe AI workloads and considerations, including common scenarios and responsible AI concepts. Second, review machine learning on Azure: regression, classification, clustering, model training, and evaluation. Third, ensure strong recognition of computer vision workloads and the Azure services used for image analysis, OCR, facial analysis, and custom vision. Fourth, confirm your understanding of natural language processing tasks such as sentiment analysis, key phrase extraction, language understanding, translation, and speech. Fifth, review generative AI concepts including copilots, foundation models, prompts, Azure OpenAI, and responsible generative AI practices.

A practical blueprint should assign more attention to high-confusion areas rather than only high-familiarity areas. Many candidates are comfortable with general definitions of AI but lose points in service selection. Others remember the names of services but forget what the exam is asking the service to do. Your mock design should therefore mix conceptual and scenario-based items. Include tasks where you must identify the workload, choose the service, recognize the output expected, and eliminate near-match distractors.

  • AI workloads and responsible AI: distinguish AI categories, fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
  • Machine learning on Azure: identify when a scenario represents regression, classification, or clustering; understand training versus inference; recognize common evaluation ideas.
  • Computer vision: separate image analysis, OCR, face-related capabilities, and custom image models.
  • NLP: separate sentiment, key phrase extraction, entity recognition, translation, speech, and conversational understanding.
  • Generative AI: recognize prompts, completions, copilots, grounding concepts, and safe, responsible use.

Exam Tip: Build your mock exam in a way that forces you to explain each answer choice aloud or in notes. If you cannot say why the wrong options are wrong, you may be memorizing labels rather than understanding exam logic.

Finally, score your mock by objective, not just by total percentage. The point of a blueprint is diagnostic precision. A score report that says you missed three ML items, two NLP items, and one responsible AI item is much more useful than a single overall score. That directly sets up the Weak Spot Analysis section later in this chapter.

Section 6.2: Mixed exam-style questions covering Describe AI workloads and ML on Azure

Mock Exam Part 1 should begin with the broadest and most foundational material: AI workloads and machine learning concepts on Azure. This part of the exam commonly tests whether you can identify the type of problem before selecting the Azure approach. In real exam wording, Microsoft may describe a business need such as predicting sales totals, assigning incoming messages to categories, or grouping customers based on behavior. Your first task is to classify the problem itself. If the output is a number, think regression. If the output is a category label, think classification. If the data is unlabeled and the system is finding patterns or groups, think clustering.

One common trap is mistaking similarity-based grouping for classification. Classification requires known labels during training. Clustering does not. Another trap is assuming every prediction problem is classification. The exam often uses subtle wording such as estimate, forecast, or predict a value, which points to regression instead. Train yourself to locate the expected output type in the scenario. That is usually the fastest route to the correct answer.

For Azure-specific ML questions, the exam may test high-level understanding of model lifecycle concepts rather than implementation detail. Know the difference between training a model and using a trained model for inference. Know that model evaluation assesses performance, and that different tasks use different metrics. You do not need deep mathematics for AI-900, but you do need conceptual recognition that evaluation matters because a model that fits training data poorly or generalizes badly is not useful.

Responsible AI also appears in this area because Microsoft expects foundational candidates to understand that AI systems must be designed and used ethically. Fairness concerns whether outcomes affect groups equitably. Reliability and safety concern consistent, dependable behavior. Privacy and security address the protection of data and systems. Inclusiveness means designing for diverse users. Transparency means understanding how AI is used and, at a suitable level, how decisions are made. Accountability means humans remain responsible for outcomes and governance.

Exam Tip: When a scenario mentions sensitive personal information, user consent, secure handling, or preventing unauthorized access, your best clue is usually privacy and security, not transparency.

Another trap is to over-technicalize AI workloads. AI-900 may ask about conversational AI, anomaly detection, forecasting, recommendation, or computer vision at a general level. If the stem is asking for the workload category, do not jump immediately to a product name. First identify the workload, then map to the most likely Azure service if needed. Candidates often lose easy points by skipping that first step.

In your review notes, create a quick decision map for this domain: output number equals regression, output category equals classification, unknown groups equals clustering, policy and ethics concerns map to responsible AI principles, and business scenarios should be converted into workload types before you look at answer choices. That habit makes mixed-domain mock items much easier to manage.
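The decision map described above can be written down as a small Python helper for drilling yourself. Everything here is an invented review aid; the function name and the output-description strings are not Microsoft terminology.

```python
def ml_task(expected_output):
    """Quick AI-900 decision map: infer the ML task type from the kind
    of output the scenario expects. Invented self-study helper."""
    if expected_output == "number":
        return "regression"          # estimate, forecast, predict a value
    if expected_output == "category":
        return "classification"      # assign a known label
    if expected_output == "unknown groups":
        return "clustering"          # find patterns in unlabeled data
    return "re-read the scenario for the output type"
```

If a stem says "forecast next quarter's demand," the expected output is a number, so the map returns regression before you even look at the answer choices.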

Section 6.3: Mixed exam-style questions covering Computer vision and NLP workloads on Azure

Mock Exam Part 2 should strongly emphasize service differentiation in computer vision and natural language processing because this is where many AI-900 candidates encounter close-answer distractors. Microsoft frequently describes a realistic business scenario and then asks you to select the most appropriate Azure AI capability. In computer vision, always begin by asking what the system must do with the image. If it must detect and extract printed or handwritten text, think OCR. If it must describe, tag, or analyze general image content, think image analysis. If it must identify face-related information, think face analysis capabilities. If the organization needs a model trained for its own specialized image categories, think custom vision-style custom image modeling.

A classic trap is confusing OCR with document understanding in a broad sense. On AI-900, the tested concept is usually simpler: if the requirement is text extraction from visual input, OCR is the key idea. Another trap is choosing a custom model when a built-in vision capability already matches the need. If the scenario is generic, the exam often expects the built-in service. If it emphasizes organization-specific categories or unique images, then a custom approach is more likely.

In NLP, focus on the action being performed on text or speech. Sentiment analysis determines positive, negative, or neutral opinion. Key phrase extraction identifies important terms. Entity recognition finds items such as places, people, dates, or organizations. Language detection identifies the language used. Translation converts content from one language to another. Speech services handle speech-to-text, text-to-speech, and speech translation scenarios. If the scenario is about understanding user intent in a conversation, the exam may point toward language understanding or conversational AI concepts rather than basic sentiment or keyword extraction.

Exam Tip: Read nouns and verbs carefully. If the scenario says extract the main topics, that suggests key phrases. If it says determine whether a review is favorable, that suggests sentiment. If it says convert spoken audio into written words, that suggests speech-to-text.

One of the most frequent exam traps in this domain is selecting a service because its name sounds more advanced. AI-900 rewards the best fit, not the most sophisticated option. If the requirement is straightforward translation, do not choose a broader language service solely because it seems powerful. Likewise, if the need is OCR, do not drift toward image classification. Keep the problem statement and expected output at the center of your decision.

To prepare effectively, build side-by-side comparison notes for vision and NLP. Put similar capabilities next to each other and write one-line distinctions. That approach is especially useful in weak spot analysis because most missed questions in this domain come from capability overlap rather than lack of familiarity with the service names themselves.

Section 6.4: Mixed exam-style questions covering Generative AI workloads on Azure

The generative AI portion of AI-900 is conceptually straightforward, but it introduces modern terminology that candidates sometimes mix up. The exam expects you to understand what generative AI does, what prompts are, what foundation models are, how copilots support users, and how Azure OpenAI concepts fit into responsible deployment. Generative AI systems create new content such as text, code, summaries, or other outputs based on patterns learned from large data sets. A prompt is the instruction or context provided to guide the output. A foundation model is a large pre-trained model that can be adapted or prompted for many tasks. A copilot is an application experience that uses AI to assist a user in completing tasks.

The exam may test whether you understand that generative AI is not automatically factual, unbiased, or context-aware. This leads directly to responsible generative AI topics. Candidates should recognize issues such as harmful content, hallucinations, data leakage, prompt misuse, and the need for human oversight. Microsoft expects foundational awareness that safeguards, filtering, grounding, and review processes matter in enterprise AI use. You do not need deep architecture knowledge, but you do need to know why these controls exist.

Common distractors include confusing a model with the application built on the model, or confusing a prompt with training. On AI-900, a prompt is not model training. It is an input that guides generation at inference time. Likewise, a copilot is not the same thing as the underlying model. It is the user-facing assistant experience built with AI capabilities.

Exam Tip: If an answer choice describes the user experience that helps complete tasks, think copilot. If it describes the reusable large model behind many tasks, think foundation model. If it describes the instruction entered by the user, think prompt.

Another important tested idea is fit-for-purpose service use. If a question asks about Azure OpenAI concepts, focus on language generation, summarization, transformation, and conversational assistance rather than classic NLP extraction tasks like key phrase extraction unless the stem clearly blends them. Generative AI can overlap with other AI services, but the exam usually expects you to identify the primary workload. If the task is to create original text or summarize large content, that points toward generative AI. If the task is to detect sentiment or extract entities from existing text, that points more directly toward Azure AI Language capabilities.

During final review, make a small glossary for this domain: prompt, completion, grounding, foundation model, copilot, safety filter, hallucination, and responsible AI. This vocabulary often unlocks questions that seem difficult at first glance but are actually testing simple distinctions.

Section 6.5: Final review of high-yield terms, service mapping, and last-minute memorization aids

Your Weak Spot Analysis should now turn into a final high-yield review list. The goal is not to reread everything. The goal is to memorize distinctions that repeatedly appear on the exam and to reinforce the mappings most likely to create hesitation. Start with workload-type clues. Numeric prediction means regression. Category assignment means classification. Pattern-based grouping without labels means clustering. Text from images means OCR. Image content description means image analysis. Opinion detection means sentiment analysis. Important terms means key phrase extraction. Spoken audio conversion means speech-to-text. New content generation means generative AI.

Next, review responsible AI principles because these often appear in wording-based questions. Fairness is about equitable treatment and outcomes. Reliability and safety are about dependable performance. Privacy and security protect data and systems. Inclusiveness accounts for users with varied needs and backgrounds. Transparency concerns explainability and disclosure of AI use. Accountability means humans remain responsible for decisions and governance. These principles are simple to memorize, but candidates often miss them because scenario language feels broad. Match the clue words in the stem to the principle, and avoid selecting a principle just because it sounds generally positive.

  • Regression = number
  • Classification = label
  • Clustering = groups
  • OCR = read text from image
  • Image analysis = describe or tag image
  • Custom vision = train for specific image categories
  • Sentiment = opinion polarity
  • Key phrase extraction = important terms
  • Translation = language conversion
  • Speech = spoken input or output
  • Generative AI = create or summarize content
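The mappings above can be turned into a simple flashcard dictionary for last-minute drilling. This is purely a self-study helper with invented names; it is not Azure code, and the one-line definitions are the course's shorthand, not official wording.

```python
import random

# Flashcard deck built from the high-yield list above (study aid only)
HIGH_YIELD = {
    "regression": "predicts a number",
    "classification": "predicts a label",
    "clustering": "finds groups in unlabeled data",
    "OCR": "reads text from images",
    "image analysis": "describes or tags images",
    "custom vision": "trains for specific image categories",
    "sentiment analysis": "detects opinion polarity",
    "key phrase extraction": "pulls important terms",
    "translation": "converts between languages",
    "speech": "handles spoken input or output",
    "generative AI": "creates or summarizes content",
}

def quiz_one():
    """Return one randomly chosen flashcard as a question/answer string."""
    term, meaning = random.choice(list(HIGH_YIELD.items()))
    return f"What does {term} do? -> {meaning}"
```

Reviewing the deck until every definition is instant recall is a quick way to close out the near-miss pairs listed later in this section.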

Exam Tip: Memorization works best when tied to output type. Ask, “What is the system producing?” The output often tells you the service or workload immediately.

For last-minute review, avoid deep implementation notes unless they directly support service identification. AI-900 is a fundamentals exam. It is more useful to know which service solves a problem than to know every configuration detail. Also review near-miss pairs: OCR versus image analysis, sentiment versus key phrase extraction, classification versus clustering, prompt versus training, foundation model versus copilot. These pairings account for a large share of avoidable mistakes.

If you have only a short amount of time before the exam, spend it on service mapping tables, responsible AI principles, and output-based workload identification. Those are among the highest-yield review targets in the course.

Section 6.6: Exam day strategy, answer elimination techniques, pacing, and confidence checklist

On exam day, your job is to convert knowledge into disciplined execution. Begin by reading each question stem fully before examining the answer choices. Many AI-900 mistakes happen because candidates notice a familiar service name and answer too quickly. Slow down just enough to identify the actual requirement: classify, predict, extract, translate, detect, generate, or analyze. Once you know the verb and expected output, the correct answer is usually much easier to isolate.

Use elimination aggressively. First remove answers that belong to the wrong AI domain. If the scenario is about extracting text from an image, eliminate speech and sentiment-related options immediately. Next remove answers that are too broad or not the best fit. Then compare the final two choices by asking which one most directly satisfies the requirement using Microsoft terminology. This process is especially effective in AI-900 because distractors are often plausible but imprecise.

Pacing matters. Do not let one difficult item consume excessive time. If the exam platform allows marking for review, use it for questions where you are down to two plausible answers. Your later recall may improve after seeing related items elsewhere in the exam. Maintain steady momentum and avoid score panic. Fundamentals exams often contain a mix of very direct items and more nuanced scenario questions; encountering a hard one does not mean you are performing badly.

Exam Tip: If you feel stuck, restate the scenario in plain language. For example: “They want to read text from photos,” or “They want to predict a number,” or “They want AI to draft content.” That simplification often reveals the correct workload instantly.

Before submitting, run a quick confidence checklist. Did you review marked questions? Did you confirm distinctions among similar services? Did you avoid changing answers without a clear reason? In final review, trust evidence, not anxiety. Change an answer only if you can identify the exact exam concept that makes another option better.

Your final exam day checklist should include practical readiness as well: confirm your testing appointment details, identification requirements, system readiness for online proctoring if applicable, and a quiet testing environment. Mentally, commit to a calm sequence: read carefully, identify the workload, map to the best Azure service or principle, eliminate distractors, and move on. This chapter is your final bridge from studying to certification performance. If you can consistently identify what the exam is testing and why the distractors are weaker, you are ready for AI-900.

Chapter milestones
  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist
Chapter quiz

1. A retail company wants to process photos of paper receipts submitted by customers and extract the printed store name, date, and purchase total into a database. Which Azure AI capability should you choose?

Show answer
Correct answer: Optical character recognition (OCR)
OCR is the best match because the requirement is to extract printed text from images. On AI-900, this is a classic service-mapping question: extract text points to OCR, not general image labeling. Image classification predicts a category for an entire image, such as receipt versus invoice, but it does not return the text content. Face detection identifies human faces and related attributes, which is unrelated to receipt text extraction.

2. You are reviewing a mock exam question that asks which machine learning approach should be used to predict next month's sales revenue based on historical sales data. Which type of machine learning should you select?

Show answer
Correct answer: Regression
Regression is correct because the target value, sales revenue, is numeric. AI-900 frequently tests the distinction between predicting numbers and predicting categories. Classification would be appropriate if the goal were to predict a label such as high, medium, or low sales band. Clustering groups unlabeled data into similar segments and is not used when you already have a known numeric outcome to predict.
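To build intuition for why this is regression, here is a minimal sketch in pure Python (no ML library, and the monthly revenue figures are made-up illustration data, not from the exam): it fits a straight line to historical numbers and predicts a numeric value, which is exactly what distinguishes regression from classification.

```python
# Toy regression sketch: ordinary least squares with one feature.
# The target (revenue) is a continuous number, so the model outputs
# a number -- not a category label as classification would.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

months = [1, 2, 3, 4, 5]                   # past five months
revenue = [10.0, 12.0, 13.5, 15.0, 17.0]   # revenue in $ thousands (hypothetical)

slope, intercept = fit_line(months, revenue)
prediction = slope * 6 + intercept          # forecast for month 6: a number
print(round(prediction, 2))
```

If the business instead wanted a label such as "high" or "low" sales, the output would be a category and the correct answer would be classification.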

3. A support center wants to analyze incoming customer emails and determine whether each message expresses a positive, neutral, or negative opinion. Which Azure AI Language capability best fits this requirement?

Show answer
Correct answer: Sentiment analysis
Sentiment analysis is correct because the task is to determine opinion polarity such as positive, neutral, or negative. Language detection only identifies which language the text is written in, such as English or Spanish, and does not evaluate tone. Key phrase extraction pulls important terms or topics from text, but it does not classify emotional sentiment.
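For intuition only, here is a toy sketch of what sentiment analysis does conceptually. A real service such as Azure AI Language uses trained models, not word lists; the tiny hand-made lexicon below is a hypothetical stand-in to show that the task maps text to a polarity label.

```python
import re

# Hypothetical mini-lexicons for illustration -- a real sentiment model
# is trained on labeled data, not a hard-coded word list.
POSITIVE = {"great", "love", "excellent", "helpful", "fast"}
NEGATIVE = {"broken", "slow", "terrible", "refund", "angry"}

def toy_sentiment(text):
    """Classify text as positive, neutral, or negative by counting cue words."""
    words = re.findall(r"[a-z]+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(toy_sentiment("The support team was great and helpful"))    # positive
print(toy_sentiment("My order arrived broken, I want a refund"))  # negative
```

Notice the output is an opinion label, not a language name or a list of key phrases, which is why the distractors fail.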

4. A company is building an AI solution that generates draft marketing copy from user prompts. During review, the team decides the system must avoid harmful or offensive outputs and clearly communicate limitations to users. Which responsible AI principle is MOST directly being applied?

Show answer
Correct answer: Reliability and safety
Reliability and safety is the best answer because the primary focus in the scenario is preventing harmful or unsafe generated output. On AI-900, safety-related controls for generative AI align closely to building dependable systems that minimize harmful responses. Transparency is relevant to communicating limitations, but it is not the main requirement described. Fairness concerns avoiding biased treatment across people or groups, which is not the central issue in this scenario.

5. During final review for AI-900, you see this question: A business wants a conversational solution that uses a foundation model to generate summaries of internal documents when users ask natural language questions. Which service is the MOST appropriate choice?

Show answer
Correct answer: Azure OpenAI Service
Azure OpenAI Service is correct because the scenario requires generative AI capabilities based on a foundation model to create summaries from prompts. Azure AI Vision is used for image-related workloads such as OCR, tagging, and visual analysis, not document-based text generation. Azure AI Speech focuses on speech-to-text, text-to-speech, and speech translation, which does not directly address prompt-based summarization with a generative model.