Microsoft AI Fundamentals AI-900 Exam Prep

AI Certification Exam Prep — Beginner

Pass AI-900 with beginner-friendly Microsoft exam prep.

Beginner · AI-900 · Microsoft · Azure AI Fundamentals · AI Certification

Prepare for Microsoft AI-900 with a Clear, Beginner-Friendly Roadmap

Microsoft AI-900: Azure AI Fundamentals is one of the best entry points into artificial intelligence certification, especially for learners who are new to Microsoft exams and do not come from a technical background. This course blueprint is designed specifically for non-technical professionals who want to understand the exam, master the official objectives, and build enough confidence to pass on their first attempt. The structure follows the published AI-900 domains from Microsoft and turns them into a practical six-chapter study path that is easy to follow.

The course begins with a complete orientation to the AI-900 exam itself. Before diving into technical concepts, Chapter 1 explains registration, scheduling, scoring expectations, question formats, and study strategy. This is important for first-time candidates because many people lose confidence not from the content, but from not knowing how Microsoft certification exams work. By addressing the exam process first, learners can start with clarity and a realistic plan.

Mapped Directly to the Official AI-900 Exam Domains

Chapters 2 through 5 are aligned to the official AI-900 objectives published by Microsoft:

  • Describe AI workloads and considerations
  • Describe fundamental principles of machine learning on Azure
  • Describe features of computer vision workloads on Azure
  • Describe features of natural language processing workloads on Azure
  • Describe features of generative AI workloads on Azure

Each chapter focuses on what beginners actually need to know for the exam: definitions, business scenarios, Azure service recognition, responsible AI principles, and exam-style comparison skills. Rather than overwhelming you with implementation detail, the course emphasizes conceptual understanding and service selection, which is exactly what AI-900 candidates need.

In the AI workloads chapter, you will learn how to identify common AI solution categories and match them to real-world business needs. In the machine learning chapter, you will study foundational terms like features, labels, training, validation, and inference, along with Azure Machine Learning basics. The computer vision chapter covers image analysis, OCR, facial capabilities, and document intelligence. The NLP and generative AI chapter explains text analytics, translation, speech, conversational AI, prompts, copilots, and responsible use of generative models in Azure.

Designed for Non-Technical Professionals

This course is intentionally built for professionals with basic IT literacy but no coding background and no prior certification experience. Every chapter uses plain language, guided progression, and exam-style reinforcement. The goal is not just to teach terminology, but to help you recognize how Microsoft presents questions. That means you will practice spotting keywords, eliminating distractors, and choosing the most appropriate Azure AI service for a given business scenario.

Because the AI-900 exam often tests understanding through short cases and service-matching questions, this course blueprint includes practice-focused milestones throughout the middle chapters. You will not just read about the domains; you will rehearse the decision-making process required on the exam.

Full Mock Exam and Final Review

Chapter 6 is dedicated to final readiness. It combines a full mixed-domain mock exam, answer review by domain, weak-spot analysis, and a practical exam-day checklist. This final chapter helps close knowledge gaps and gives learners a structured way to review before test day. It also supports better retention by revisiting each official domain in one last consolidated pass.

If you are ready to start your certification path, register for free and begin building your AI-900 study routine. If you want to explore related learning paths first, you can also browse all courses on Edu AI.

Why This Course Helps You Pass

This blueprint helps learners succeed because it is focused, official-objective aligned, and beginner appropriate. It avoids unnecessary complexity while still covering the concepts Microsoft expects you to understand. By combining exam orientation, domain-based learning, and realistic practice, the course supports both knowledge and confidence.

  • Built around the official Microsoft AI-900 exam domains
  • Structured as a six-chapter progression from orientation to full mock exam
  • Suitable for non-technical learners and first-time certification candidates
  • Includes exam-style practice focus in every core content chapter
  • Reinforces responsible AI, Azure service recognition, and scenario analysis

Whether you work in business, sales, operations, project management, or simply want a recognized Microsoft credential in AI fundamentals, this course gives you a practical and approachable path to AI-900 exam readiness.

What You Will Learn

  • Describe AI workloads and common business scenarios tested on the AI-900 exam
  • Explain fundamental principles of machine learning on Azure, including training, inference, and responsible AI concepts
  • Identify computer vision workloads on Azure and match them to the right Azure AI services
  • Understand natural language processing workloads on Azure, including text analysis, translation, and speech capabilities
  • Describe generative AI workloads on Azure, including copilots, prompts, foundation models, and responsible use
  • Apply exam strategies, question analysis techniques, and mock exam practice to improve AI-900 readiness

Requirements

  • Basic IT literacy and comfort using web applications
  • No prior Microsoft certification experience needed
  • No programming background required
  • Interest in AI concepts, business use cases, and Azure services

Chapter 1: AI-900 Exam Foundations and Study Plan

  • Understand the AI-900 exam format and objectives
  • Set up registration, scheduling, and candidate accounts
  • Build a beginner-friendly study strategy
  • Learn scoring, question styles, and test-day expectations

Chapter 2: Describe AI Workloads

  • Recognize core AI workloads and business value
  • Differentiate AI, machine learning, and generative AI use cases
  • Connect AI workloads to Azure service categories
  • Practice exam-style scenarios for Describe AI workloads

Chapter 3: Fundamental Principles of ML on Azure

  • Understand machine learning concepts at a beginner level
  • Explain supervised, unsupervised, and reinforcement learning basics
  • Identify Azure machine learning capabilities and workflows
  • Practice exam-style questions on ML principles and Azure services

Chapter 4: Computer Vision Workloads on Azure

  • Identify key computer vision tasks and outcomes
  • Match vision workloads to Azure AI services
  • Understand image, video, and document intelligence use cases
  • Practice AI-900 style questions for computer vision workloads

Chapter 5: NLP and Generative AI Workloads on Azure

  • Explain natural language processing fundamentals and service options
  • Recognize speech, language, and translation workloads on Azure
  • Understand generative AI concepts, prompts, and copilots
  • Practice exam-style questions on NLP and generative AI workloads

Chapter 6: Full Mock Exam and Final Review

  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist

Daniel Mercer

Microsoft Certified Trainer for Azure AI

Daniel Mercer is a Microsoft Certified Trainer who specializes in Azure AI and cloud fundamentals certification preparation. He has guided beginner and non-technical learners through Microsoft certification pathways with a strong focus on exam alignment, clear explanations, and confidence-building practice.

Chapter 1: AI-900 Exam Foundations and Study Plan

The AI-900: Microsoft Azure AI Fundamentals exam is designed as an entry-level certification, but candidates should not confuse “fundamentals” with “effortless.” Microsoft uses this exam to verify that you can recognize core AI workloads, understand the basic principles behind machine learning and responsible AI, and match common business needs to the correct Azure AI services. This chapter lays the foundation for the rest of the course by showing you what the exam is really testing, how the objective domains are organized, how to register and schedule your exam correctly, and how to build a study plan that fits a beginner-friendly path without leaving gaps.

One of the biggest mistakes candidates make is studying AI concepts in isolation. The AI-900 exam expects pattern recognition across categories: if a business wants image tagging, that points toward computer vision; if it wants language translation, that maps to natural language processing; if it wants content generation or copilots, that enters the generative AI domain. Even when the exam asks about definitions, it often wraps those definitions inside a practical scenario. That means your study plan must connect terminology, Azure service names, and real-world use cases.

This chapter also introduces the exam experience itself. Many first-time certification candidates underestimate logistics: registration steps, scheduling windows, identification rules, testing center vs. online delivery, and question format expectations. These procedural details matter because avoidable mistakes can create stress before the exam even begins. A strong candidate prepares both academically and operationally.

Exam Tip: Treat AI-900 as a business-to-technology mapping exam. Many questions are easier when you first ask, “What workload is this scenario describing?” and only then decide which Azure AI capability fits best.

As you move through this course, keep the course outcomes in view. You are not only learning what AI workloads are; you are learning how Microsoft frames them on the test. That includes AI workloads and business scenarios, machine learning concepts such as training and inference, responsible AI principles, computer vision services, natural language workloads, and generative AI concepts such as foundation models, prompts, and copilots. This first chapter turns those broad outcomes into a practical study system so you can approach the rest of the book with confidence and structure.

Practice note for the Chapter 1 milestones (understanding the exam format and objectives; setting up registration, scheduling, and candidate accounts; building a beginner-friendly study strategy; and learning scoring, question styles, and test-day expectations): for each one, document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 1.1: Overview of Microsoft Azure AI Fundamentals and the AI-900 certification
Section 1.2: Official exam domains and how Describe AI workloads maps across the test
Section 1.3: Registration process, exam delivery options, identification, and scheduling policies
Section 1.4: Scoring model, passing expectations, retake rules, and exam question formats
Section 1.5: Study strategy for beginners using domain weighting, notes, and revision cycles
Section 1.6: How to approach scenario-based questions, eliminate distractors, and manage time

Section 1.1: Overview of Microsoft Azure AI Fundamentals and the AI-900 certification

AI-900 is Microsoft’s foundational certification for candidates who want to demonstrate basic knowledge of artificial intelligence concepts and Azure AI services. It is appropriate for beginners, business stakeholders, students, career changers, and technical professionals who want a broad understanding before moving into role-based Azure certifications. The exam does not require you to build full machine learning solutions, write production code, or design enterprise architectures. Instead, it checks whether you can identify AI workloads, describe what Azure services do, and distinguish between closely related concepts.

The certification is broad by design. You should expect coverage of machine learning, computer vision, natural language processing, conversational AI, generative AI, and responsible AI. Microsoft also expects you to recognize the difference between a concept and a service. For example, machine learning is a discipline, while Azure Machine Learning is a platform service. Computer vision is a workload category, while Azure AI Vision is a service family used to address that workload. Many exam traps rely on confusing these levels.

Another important point is that AI-900 is not purely theoretical. The exam often describes a business need such as classifying support tickets, extracting text from documents, detecting objects in images, or generating draft content for users. Your task is typically to identify the best-fit workload or Azure service. This means memorization alone is not enough. You need conceptual fluency and scenario awareness.

Exam Tip: When studying any topic in AI-900, learn three things together: the definition, a business scenario, and the matching Azure service. That trio is how the exam frequently frames correct answers.

Candidates sometimes assume a fundamentals exam will focus mostly on terminology. In reality, Microsoft wants evidence that you can communicate intelligently about AI solutions in Azure. Think of the exam as validating that you can participate in project discussions, recognize solution patterns, and choose from high-level options without needing deep implementation skills.

Section 1.2: Official exam domains and how Describe AI workloads maps across the test

The AI-900 exam is organized around official skill domains, and your study plan should reflect those domains rather than random topic lists. Microsoft updates objective wording from time to time, so always review the current skills measured page before your final revision cycle. Even so, the core pattern remains stable: describe AI workloads and considerations, describe fundamental principles of machine learning on Azure, describe features of computer vision workloads on Azure, describe features of natural language processing workloads on Azure, and describe features of generative AI workloads on Azure.

The phrase “Describe AI workloads” is especially important because it maps across the entire exam, not just one small objective line. Microsoft uses it as a bridge concept. If you understand what kind of problem a scenario represents, you can often eliminate wrong answers quickly. For example, predicting numerical outcomes suggests machine learning; extracting printed text from an image suggests optical character recognition within computer vision; translating speech or text suggests natural language services; generating new content from prompts suggests generative AI.

A common trap is focusing only on service names and skipping workload language. The exam may ask indirectly. Instead of saying “Which service handles OCR?” it may describe employees scanning forms and needing text extraction. Instead of saying “Which service is for classification?” it may describe sorting emails into categories. Read for the problem type first, then map it to the Azure offering.

  • AI workloads and business scenarios: broad pattern recognition across solution types
  • Machine learning: training, inference, model evaluation, and responsible AI basics
  • Computer vision: image analysis, object detection, OCR, face-related capabilities as aligned with current objectives
  • Natural language processing: sentiment, key phrases, translation, speech, question answering, and text analysis
  • Generative AI: copilots, prompts, foundation models, and responsible use

Exam Tip: Build a one-page domain map. For each objective domain, list common verbs the exam uses such as classify, predict, detect, extract, translate, transcribe, summarize, and generate. Those verbs often reveal the correct workload faster than memorized definitions do.

If a question seems ambiguous, ask yourself which domain vocabulary is strongest in the scenario. The domain cues usually point to the intended answer even when multiple services sound familiar.
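The one-page domain map suggested above can be sketched as a simple lookup table. This is a study aid only: the verb groupings below are illustrative shorthand, not an official Microsoft mapping, and real exam scenarios use richer language than single keywords.

```python
# Illustrative study aid: map common scenario verbs to AI-900 workload domains.
# The groupings are study shorthand, not an official Microsoft mapping.
VERB_TO_DOMAIN = {
    "classify": "machine learning",
    "predict": "machine learning",
    "forecast": "machine learning",
    "detect": "computer vision",
    "extract": "computer vision (OCR)",
    "translate": "natural language processing",
    "transcribe": "natural language processing (speech)",
    "summarize": "generative AI",
    "generate": "generative AI",
}

def likely_domains(scenario: str) -> list[str]:
    """Return candidate domains for verbs found in a scenario description."""
    words = scenario.lower().split()
    return [domain for verb, domain in VERB_TO_DOMAIN.items() if verb in words]

print(likely_domains("Employees scan forms and need to extract printed text"))
# → ['computer vision (OCR)']
```

A real study sheet would handle word stems and multi-word cues, but even this crude version reinforces the verb-first reading habit the exam rewards.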

Section 1.3: Registration process, exam delivery options, identification, and scheduling policies

Administrative preparation is part of exam readiness. To take AI-900, you must use a Microsoft certification profile and complete scheduling through Microsoft’s exam delivery partner workflow. Make sure your legal name in your certification account exactly matches the identification you will present on exam day. Name mismatches are a preventable issue that can delay or invalidate a session. If your profile contains an old surname, nickname, missing middle name, or inconsistent character format, correct it well before your exam date.

You will generally choose between online proctored delivery and a physical test center, depending on local availability. Online testing offers convenience, but it also requires stronger environmental control. You may need a quiet room, cleared desk, functioning webcam and microphone, stable internet connection, and compliance with room scanning requirements. A testing center reduces technical setup risk but adds travel and timing constraints. Neither option is universally better; choose the one that minimizes uncertainty for you.

Scheduling should be strategic. Beginners often book too early and then rush through study materials. Others delay booking indefinitely and lose momentum. A practical approach is to choose a target date after building a realistic study plan, then use that date as an accountability anchor. Review rescheduling and cancellation windows carefully, because policies can change and late changes may carry penalties or forfeitures.

Identification rules are strict. Read the provider’s current policy for accepted IDs in your region. Also review check-in instructions in advance rather than on exam day.

Exam Tip: Do a full systems and room check at least one or two days before an online exam. Technical surprises create cognitive fatigue before the first question even appears.

Finally, avoid treating registration as a final step. It is part of exam preparation. When your account, ID, and delivery choice are settled early, you preserve mental energy for what matters most: mastering the objective domains.

Section 1.4: Scoring model, passing expectations, retake rules, and exam question formats

Microsoft certification exams typically use a scaled scoring model, and AI-900 results are commonly reported on a scale of 1 to 1,000, with 700 as the passing score. Candidates should understand what that does and does not mean. It does not mean you need 70 percent of all questions correct in a simple linear way. Scaled scoring accounts for exam form differences, so your best strategy is not to chase hypothetical raw score math. Instead, aim for broad, reliable understanding across all domains and enough cushion to handle unfamiliar wording.

The exam may contain different question styles. You should be ready for standard multiple-choice items, multiple-response items, drag-and-drop style ordering or matching, and scenario-based prompts. Some questions test direct recognition, while others test whether you can distinguish between two plausible services. Fundamentals exams often feel straightforward until answer options become subtly similar. That is where disciplined reading matters.

Know the retake policy before you test. If you do not pass on the first attempt, Microsoft generally enforces waiting periods before another attempt, with longer delays after repeated failures. That means a failed attempt costs more than money; it can also disrupt your study rhythm. The best use of practice is to reduce the chance of avoidable first-attempt failure.

Another common misunderstanding is assuming every question has equal emotional weight. If you encounter a difficult item, do not let it distort your pacing. One hard question does not signal overall failure.

Exam Tip: On fundamentals exams, the most common scoring mistake is not lack of knowledge but misreading qualifiers such as “best,” “most appropriate,” “should use,” or “wants to.” These words define the answer standard.

Expect questions that test practical fit rather than technical depth. For example, the exam is more likely to ask you to match a workload to a service than to derive an algorithm. Keep your preparation aligned with that level: understand what services do, when they are used, and how Microsoft describes them in business terms.

Section 1.5: Study strategy for beginners using domain weighting, notes, and revision cycles

A beginner-friendly AI-900 study strategy should be structured, cyclical, and based on the official domains. Start by dividing your study effort according to the current domain weighting from Microsoft. Heavier domains deserve proportionally more time, but do not ignore lighter domains; fundamentals exams are designed to sample broadly. Your goal is balanced competence with extra reinforcement in the highest-weighted areas.
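Dividing study effort by domain weighting is simple proportional arithmetic, sketched below. The weights used here are illustrative placeholders only; substitute the current percentages from Microsoft's published skills measured page before planning.

```python
# Split a study-hour budget proportionally across exam domains.
# NOTE: the weights below are illustrative placeholders, not the official
# AI-900 domain weighting; check Microsoft's current "skills measured" page.
EXAMPLE_WEIGHTS = {
    "AI workloads and considerations": 15,
    "Machine learning on Azure": 20,
    "Computer vision workloads": 15,
    "NLP workloads": 25,
    "Generative AI workloads": 25,
}

def allocate_hours(total_hours: float, weights: dict[str, float]) -> dict[str, float]:
    """Return hours per domain, proportional to each domain's weight."""
    total_weight = sum(weights.values())
    return {domain: round(total_hours * w / total_weight, 1)
            for domain, w in weights.items()}

for domain, hours in allocate_hours(30, EXAMPLE_WEIGHTS).items():
    print(f"{domain}: {hours} h")
```

With a 30-hour budget and these placeholder weights, the heavier domains get 7.5 hours each and the lighter ones 4.5 to 6, which matches the advice above: more time where the weighting is heavier, but no domain skipped.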

Use a three-pass approach. In pass one, build familiarity: learn the vocabulary of AI workloads, machine learning, computer vision, NLP, generative AI, and responsible AI. In pass two, connect concepts to Azure services and common business scenarios. In pass three, practice retrieval: explain each concept from memory, compare similar services, and correct weak areas through targeted review. This is much more effective than rereading notes passively.

Your notes should be compact and comparison-focused. Instead of writing long summaries, create tables such as workload versus example scenario versus Azure service. Keep a separate page for “easy to confuse” items. For example, compare text analytics functions, translation functions, speech functions, and generative AI functions. These comparison sheets become high-value revision tools in the final week.

Revision cycles matter. A simple plan is to study new material during the week and revisit it on a short delay: same day quick recap, 48-hour review, one-week review, then final consolidation. This spaced repetition reduces forgetting and helps service names stick.
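The spacing just described (same-day recap, 48-hour review, one-week review) is easy to turn into a concrete calendar. A minimal sketch:

```python
from datetime import date, timedelta

# Review offsets in days for material first studied on a given day:
# same-day recap, 48-hour review, one-week review. The final
# consolidation pass is scheduled separately, close to exam day.
REVIEW_OFFSETS_DAYS = [0, 2, 7]

def review_dates(study_day: date) -> list[date]:
    """Return the scheduled review dates for one study session."""
    return [study_day + timedelta(days=offset) for offset in REVIEW_OFFSETS_DAYS]

for d in review_dates(date(2025, 3, 3)):
    print(d.isoformat())
# → 2025-03-03, 2025-03-05, 2025-03-10
```

Running this for each study session gives you a lightweight spaced-repetition plan you can copy into any calendar app.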

  • Week 1: exam overview, workload categories, foundational terminology
  • Week 2: machine learning on Azure and responsible AI concepts
  • Week 3: computer vision and natural language processing
  • Week 4: generative AI, review, weak-area repair, and timed practice

Exam Tip: Do not wait until the end to practice scenario interpretation. From your first study session onward, ask, “What business problem is being solved here?” That habit directly improves exam performance.

Beginners often overfocus on memorizing every service detail. AI-900 rewards conceptual clarity more than exhaustive feature recall. Learn enough detail to distinguish options confidently, but keep your attention on the tested objective: describing workloads and matching them accurately.

Section 1.6: How to approach scenario-based questions, eliminate distractors, and manage time

Scenario-based questions are where many AI-900 candidates either gain a major advantage or lose easy points. The key is to decode the scenario in layers. First, identify the business goal. Second, identify the workload category. Third, match the category to the Azure service or concept that best fits. This sequence prevents you from jumping too quickly to a familiar service name that only partially matches the need.

Distractors are often built from answers that are technically related but not optimal. For example, a service that processes language may appear alongside one that specifically translates language. A machine learning concept may appear alongside a computer vision service because the scenario mentions images, even though the actual task is prediction, not image analysis. The exam is testing whether you can select the most appropriate option, not just a vaguely possible one.

Use elimination actively. Remove answers that belong to the wrong workload family, require unnecessary complexity, or solve only part of the stated problem. Pay close attention to whether the scenario asks for analysis, extraction, classification, generation, or prediction. Those verbs are strong signals.

Time management should be calm and deliberate. Do not spend excessive time trying to prove a difficult question mathematically; this is a fundamentals exam. If you narrow it to two choices, choose the best-supported option based on the scenario wording and move on. Preserve time for later items that may be more direct.

Exam Tip: Read the final sentence of a scenario carefully. It often states the actual decision point, while earlier sentences provide context or distractions.

On test day, maintain a steady rhythm. Answer clear questions efficiently, flag only when necessary, and avoid emotional overreaction to unfamiliar wording. The exam is designed to sample your understanding across domains, so success comes from consistency. If you can identify the workload, reject mismatched distractors, and manage your time with discipline, you will perform far better than candidates who rely on memorized keywords alone.

Chapter milestones
  • Understand the AI-900 exam format and objectives
  • Set up registration, scheduling, and candidate accounts
  • Build a beginner-friendly study strategy
  • Learn scoring, question styles, and test-day expectations

Chapter quiz

1. You are beginning preparation for the Microsoft AI-900 exam. Which study approach best aligns with how the exam objectives are typically assessed?

Correct answer: Study AI concepts, Azure AI services, and business scenarios together so you can map workloads to the correct solution. AI-900 is a fundamentals exam that commonly tests business-to-technology mapping, such as recognizing whether a scenario describes computer vision, natural language processing, or generative AI and then choosing the appropriate Azure capability. Memorizing service names alone is insufficient because exam questions often wrap definitions inside scenarios. Focusing only on coding machine learning models is also incorrect because AI-900 does not emphasize implementation-level development skills.

2. A candidate says, "AI-900 is a fundamentals exam, so I do not need to worry about registration details until the night before the test." Based on recommended exam preparation practices, what is the best response?

Correct answer: That is risky, because account setup, scheduling, identification requirements, and delivery method can create preventable exam-day problems. Chapter 1 emphasizes that operational readiness matters alongside academic study. Candidates should verify registration steps, scheduling windows, ID rules, and whether they will test online or at a center. Saying only technical knowledge matters is wrong because logistics can disrupt or delay the exam. Claiming late scheduling is preferred is also wrong because waiting can reduce available appointment choices and increase stress.

3. A company wants an AI solution that can automatically identify objects in product photos uploaded by customers. When answering an AI-900 exam question, what should you do first?

Correct answer: Identify the workload category as computer vision before selecting a specific Azure AI service. A key AI-900 strategy is to first recognize the workload described by the scenario and then map it to the matching Azure capability. Object identification in images is a classic computer vision scenario. Assuming it is primarily a machine learning training task is too broad and does not reflect the exam's workload-mapping style. Choosing generative AI is incorrect because the requirement is image analysis, not content generation.

4. Which statement most accurately describes the level and purpose of the Microsoft AI-900 exam?

Correct answer: It is an entry-level exam that validates recognition of core AI workloads, machine learning concepts, responsible AI, and related Azure AI services. AI-900 is designed as a fundamentals certification, but it still expects candidates to understand how Microsoft frames AI concepts and services in business scenarios. Calling it expert-level is wrong because the exam does not focus on advanced implementation from scratch. Calling it an administrator exam is also wrong because the focus is AI concepts and Azure AI solutions, not primarily infrastructure administration.

5. A learner is creating a beginner-friendly study plan for AI-900. Which plan is most likely to leave knowledge gaps that could hurt exam performance?

Correct answer: Study each topic as an isolated definition list without practicing how scenarios map to workloads and services. Chapter 1 warns that one of the biggest mistakes candidates make is studying AI concepts in isolation. AI-900 commonly assesses whether you can recognize patterns in scenarios and map them to the proper workload and Azure AI offering. Reviewing objective domains with scenario practice is a strong approach, not a weak one. Studying major topic areas while linking them to Azure services is also appropriate because it reflects the exam's practical framing.

Chapter 2: Describe AI Workloads

This chapter maps directly to one of the most testable areas of the Microsoft AI Fundamentals AI-900 exam: recognizing common AI workloads, matching them to business scenarios, and distinguishing between related but different concepts such as AI, machine learning, and generative AI. On the exam, Microsoft often presents short business descriptions and asks you to identify the workload category rather than implement a technical solution. That means your job is to read for intent: Is the system predicting a number, classifying an image, extracting text, translating speech, or generating new content? The AI-900 exam rewards candidates who can connect a scenario to the right category of AI capability and the most appropriate Azure service family.

As you study this chapter, focus on the practical language used in exam questions. Terms such as prediction, classification, object detection, OCR, sentiment analysis, translation, speech recognition, copilot, and content generation are clues. The exam usually does not expect deep coding knowledge here. Instead, it tests whether you understand what kind of workload is being described and why a business would use it. In other words, think like a consultant reading requirements. What outcome does the organization want, and which AI pattern best fits?

You should also remember that AI workloads are commonly grouped into several broad categories on Azure: machine learning, computer vision, natural language processing, conversational AI and speech, knowledge mining, and generative AI experiences. Some scenarios can seem to overlap. For example, a chatbot that answers questions from company documents may involve natural language processing, search, and generative AI. The best answer is usually the one that matches the dominant business goal stated in the scenario. This chapter will help you recognize those distinctions and avoid common traps.

Exam Tip: On AI-900, when two answer choices both sound plausible, look for the verb in the scenario. “Predict,” “forecast,” and “recommend” usually point toward machine learning. “Detect,” “analyze image,” or “read handwriting” point toward computer vision. “Translate,” “extract key phrases,” or “recognize speech” point toward natural language processing and speech. “Generate,” “summarize,” “draft,” or “answer in natural language” often signal generative AI.

The chapter sections that follow align to the exam objective of describing AI workloads and common business scenarios. You will learn to recognize core AI workloads and business value, differentiate AI, machine learning, and generative AI use cases, connect workloads to Azure service categories, and apply exam-style reasoning. Read actively and compare the workload types, because the AI-900 exam frequently tests contrast as much as definition.

Practice note for this chapter's objectives (recognize core AI workloads and business value; differentiate AI, machine learning, and generative AI use cases; connect AI workloads to Azure service categories; practice exam-style scenarios for Describe AI workloads): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 2.1: Describe AI workloads objective overview and key exam terminology
Section 2.2: Machine learning workloads, prediction scenarios, and decision support
Section 2.3: Computer vision workloads such as image classification, detection, and OCR
Section 2.4: Natural language processing workloads including sentiment, translation, and speech
Section 2.5: Generative AI workloads, copilots, content creation, and conversational experiences
Section 2.6: Responsible AI principles, business considerations, and exam-style practice set

Section 2.1: Describe AI workloads objective overview and key exam terminology

The AI-900 objective “Describe AI workloads” is foundational because it frames the rest of the exam. Before you can choose an Azure AI service, you must identify what type of problem the business is trying to solve. An AI workload is a category of tasks that uses AI techniques to achieve a result such as recognizing patterns, understanding language, interpreting images, making predictions, or generating content. The exam expects you to know the purpose of these workload categories and to recognize them in business scenarios.

At a high level, artificial intelligence is the broad field of creating systems that appear to act intelligently. Machine learning is a subset of AI in which models learn patterns from data to make predictions or decisions. Generative AI is a further specialization focused on producing new content such as text, images, code, or summaries based on prompts and learned patterns. One common exam trap is treating these terms as interchangeable. They are related, but not identical. If the scenario is about producing a draft email or summarizing a document, that is not just generic AI; it is specifically generative AI.

Other exam terms matter as well. Training means teaching a model from data. Inference means using the trained model to make predictions on new data. Classification assigns an item to a category, while regression predicts a numeric value. Clustering groups similar items without predefined labels. In computer vision, image classification labels the whole image, while object detection locates and identifies individual objects within an image. In language workloads, sentiment analysis identifies opinion or emotional tone, and named entity recognition extracts things such as people, places, dates, and organizations.

Exam Tip: The exam often tests terminology by embedding it inside a business description rather than asking for definitions directly. Learn to translate plain-language business needs into AI terms. “Find unhappy customers from reviews” means sentiment analysis. “Read text from scanned receipts” means OCR. “Forecast next month's sales” means regression.

Azure groups these capabilities into service categories. You do not need deep implementation detail for every service in this chapter, but you should know the service family that corresponds to the workload. The tested skill is classification of needs: identify the workload first, then connect it to an Azure AI category. That sequence is often the key to getting scenario questions right.

Section 2.2: Machine learning workloads, prediction scenarios, and decision support

Machine learning workloads are about learning from data to support decisions or automate predictions. On AI-900, these scenarios often involve historical business data and a future outcome. Common examples include predicting loan default risk, forecasting product demand, recommending products, identifying likely equipment failure, or classifying emails as spam. The exam expects you to recognize when the system is not simply following fixed rules but instead discovering patterns from data.

Several machine learning patterns are especially important. Classification predicts a category, such as whether a customer will churn or whether a transaction is fraudulent. Regression predicts a numeric value, such as house price, sales volume, or delivery time. Clustering groups records with similar characteristics, such as customer segments for marketing analysis. You may also see recommendation-style scenarios, where past user behavior is used to suggest products or content. These are all machine learning workloads, even though their outputs differ.

A common exam trap is confusing decision support with deterministic automation. If a scenario says an app uses a set of predefined conditions like “if temperature exceeds threshold, send alert,” that is not necessarily machine learning. But if it says the solution analyzes historical maintenance data to estimate failure likelihood, that is machine learning because it predicts based on learned patterns. Watch for clues like “historical data,” “train a model,” “predict probability,” and “forecast trends.”

Machine learning on Azure is commonly associated with Azure Machine Learning, where organizations can train, deploy, and manage models. For AI-900, you are not expected to build full pipelines, but you should know that Azure Machine Learning supports model training and deployment. The business value is improved decision-making, automation at scale, and the ability to find patterns too complex for manual analysis.

  • Use classification when the answer is a label or category.
  • Use regression when the answer is a number.
  • Use clustering when the goal is discovering natural groupings.
  • Use recommendations when the goal is suggesting likely relevant items.

Exam Tip: If the scenario asks what workload fits “predict whether,” think classification. If it asks “predict how much” or “forecast how many,” think regression. That distinction appears frequently and can eliminate wrong choices quickly.

Remember that the exam is not trying to trick you into advanced model selection; it is testing whether you can identify the machine learning workload from the business objective. Start with the output type, then choose the workload category.
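The output-type rule above can be made concrete with a deliberately tiny sketch in plain Python (not an Azure API; the data and the `nearest` helper are invented for illustration). The same nearest-neighbour idea produces a category for classification and a number for regression, which is exactly the distinction the exam tests:

```python
# Toy illustration: one pattern, two workload types. The data sets and
# the nearest() helper are invented for this example only.

def nearest(train, x):
    """Return the (feature, target) pair whose feature is closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))

# Classification: features are account ages in years, targets are labels.
churn_train = [(1, "churn"), (2, "churn"), (8, "stay"), (10, "stay")]
print(nearest(churn_train, 9)[1])   # label out -> "stay"

# Regression: features are house sizes, targets are prices (numbers).
price_train = [(50, 150_000), (80, 240_000), (120, 360_000)]
print(nearest(price_train, 85)[1])  # number out -> 240000
```

The takeaway for the exam is not the algorithm but the output: a label means classification, a number means regression.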

Section 2.3: Computer vision workloads such as image classification, detection, and OCR

Computer vision workloads enable systems to interpret visual information from images and video. On AI-900, this area is heavily scenario-driven. You may be asked to identify the right workload when a company wants to tag product photos, detect whether hard hats are being worn in a factory, read text from forms, or analyze facial attributes. The key is to distinguish the kind of visual understanding required.

Image classification assigns one or more labels to an entire image. For example, a retail company may classify images as “shoe,” “shirt,” or “bag.” Object detection goes further by locating and identifying multiple objects inside an image, such as detecting cars, pedestrians, or boxes in a warehouse scene. Optical character recognition (OCR) extracts printed or handwritten text from images or scanned documents. This is a common exam favorite because many business processes involve receipts, invoices, identity documents, or forms. If the problem is “read the text,” the answer is OCR, not image classification.

Another tested concept is image analysis for descriptive tagging, captioning, or identifying common visual elements. Azure AI Vision is the service family most often associated with image analysis, OCR, and object detection scenarios. The exam generally expects service-category awareness, not coding specifics.

A classic trap is confusing facial analysis with person identification. If a scenario asks whether a system can detect a face or identify facial landmarks, that is a vision analysis task. If the scenario implies verifying a specific individual’s identity, be careful and think about responsible AI and sensitive use cases. AI-900 may frame such items conceptually rather than operationally.

Exam Tip: Ask yourself whether the AI must understand the whole image, specific objects in positions, or text embedded in the image. Whole image equals classification or analysis. Specific items with locations equals object detection. Printed or handwritten words equals OCR.

From a business value standpoint, computer vision reduces manual inspection, speeds document processing, improves asset tracking, and supports quality control. On the exam, the best answer is usually the one most tightly matched to the stated output. “Locate defects on a product line image” suggests object detection. “Convert a scanned contract to editable text” suggests OCR. “Determine whether an uploaded image contains a dog or cat” suggests image classification.

Section 2.4: Natural language processing workloads including sentiment, translation, and speech

Natural language processing, or NLP, focuses on enabling computers to work with human language in text and speech. AI-900 commonly tests whether you can distinguish among text analytics, translation, question answering, and speech-related tasks. These scenarios are highly practical: analyzing customer reviews, extracting important phrases from support tickets, translating web content, converting speech to text, or synthesizing spoken responses.

Text analytics workloads include sentiment analysis, which identifies whether text expresses positive, negative, or neutral opinion; key phrase extraction, which pulls out important terms; and entity recognition, which identifies names, locations, organizations, dates, and other structured information in unstructured text. If a business wants to monitor social media for customer satisfaction, sentiment analysis is the most likely workload. If it wants to pull invoice dates or company names from text, entity recognition fits better.

Translation converts text from one language to another. This is different from summarization, which shortens content while preserving main meaning. Candidates sometimes confuse the two because both transform text. Speech recognition converts spoken language into text, while speech synthesis converts text into spoken audio. A voice-enabled assistant may use both: recognize a user request and speak back a response.

Azure AI Language and Azure AI Speech are the service categories you should associate with these workloads. Again, the exam emphasis is on matching need to category. If the prompt mentions call center recordings converted to text, think speech recognition. If it mentions multilingual support for customer chat, think translation. If it mentions analyzing review tone, think sentiment analysis.

  • Opinion or mood in text: sentiment analysis
  • Important terms from text: key phrase extraction
  • Names, dates, places, organizations: entity recognition
  • Language conversion: translation
  • Audio to text: speech recognition
  • Text to audio: speech synthesis

Exam Tip: Listen for the input and output formats. Text in, text label out often indicates text analytics. Audio in, text out is speech recognition. Text in, audio out is speech synthesis. Text in one language, text in another language is translation.

Many exam questions combine customer service with language tasks. Do not automatically choose “chatbot” just because users ask questions. First identify the core workload being emphasized: understanding sentiment, translating content, recognizing speech, or generating a conversational answer.
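To make the idea of sentiment analysis concrete, here is a deliberately simplistic keyword-matching sketch in plain Python. Real services such as Azure AI Language use trained language models rather than word lists; the word lists and `toy_sentiment` function here are invented purely to show the input and output shapes:

```python
# Toy sentiment check by keyword matching (illustration only; real
# sentiment analysis relies on trained models, not fixed word lists).
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "slow", "terrible", "unhappy"}

def toy_sentiment(text):
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(toy_sentiment("The delivery was slow and the support was terrible"))  # negative
print(toy_sentiment("I love this product, excellent quality"))              # positive
```

Notice the exam-relevant pattern: text goes in, a sentiment label comes out. That input/output shape, not the internal technique, is what identifies the workload.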

Section 2.5: Generative AI workloads, copilots, content creation, and conversational experiences

Generative AI is one of the most visible areas on the modern AI-900 exam. These workloads use powerful models, often called foundation models, to generate new content from prompts. Typical outputs include draft emails, code suggestions, summaries, reports, product descriptions, chat responses, and image generation. The business value is productivity, creativity support, conversational assistance, and faster access to information.

A copilot is a generative AI assistant embedded into an application or workflow to help a user complete tasks. For example, a sales copilot may summarize account notes, draft follow-up emails, and answer questions using CRM data. A document assistant may generate a first draft from prompts and source files. The exam may describe these without always using the word “copilot,” so watch for features such as natural language prompting, context-aware suggestions, and content creation.

Prompts are instructions or context given to the model to influence the response. Better prompts usually produce more relevant outputs. You do not need advanced prompt engineering for AI-900, but you should know that prompts guide model behavior and that the same model can support many use cases depending on the prompt and grounding data. Conversational experiences include chat-based systems that answer questions, summarize content, or generate personalized responses. These differ from traditional rule-based bots because generative AI creates flexible language rather than choosing only from prewritten scripts.

A common exam trap is confusing generative AI with classic NLP. If the task is simply classifying sentiment or translating text, that is not primarily generative AI. If the task is creating a new summary, drafting content, or answering open-ended questions in natural language, generative AI is the better match. Azure OpenAI Service is typically the Azure service category associated with foundation-model-based generative workloads.

Exam Tip: Look for verbs such as “generate,” “draft,” “summarize,” “rewrite,” “answer conversationally,” or “create image.” Those usually indicate generative AI. Verbs like “classify,” “detect language,” or “extract entities” usually indicate traditional AI workloads instead.

On the exam, choose the answer that reflects content creation or conversational generation when the system is producing novel text or responses. Generative AI is not about just finding information; it is about creating useful output based on a prompt and model context.

Section 2.6: Responsible AI principles, business considerations, and exam-style practice set

No discussion of AI workloads is complete without responsible AI. Microsoft expects AI-900 candidates to understand that AI solutions should be designed and used in ways that are fair, reliable, safe, private, secure, inclusive, transparent, and accountable. Even in the “Describe AI workloads” objective, you may see scenarios that ask you to identify business considerations or risks alongside the technical workload. For example, if a model influences hiring, lending, or access decisions, fairness and transparency are important concerns. If a workload processes voice, text, or customer records, privacy and security must be considered.

When evaluating a business scenario, think beyond “Can AI do this?” and ask “Should it be done this way, and what controls are needed?” Generative AI in particular raises issues such as inaccurate responses, harmful output, prompt misuse, copyright concerns, and the need for human review. Computer vision may raise consent and privacy concerns. NLP on customer communications may require careful handling of personal data. Responsible AI principles are not separate from business value; they protect trust, compliance, and long-term adoption.

For exam strategy, use a three-step method. First, identify the input type: tabular data, image, text, audio, or prompt-driven request. Second, identify the output type: prediction, label, extracted text, translated text, generated content, or spoken response. Third, check for responsibility clues such as fairness, transparency, or privacy. This approach helps you separate very similar answer choices.

Common traps include choosing a specific technology because it sounds advanced rather than because it fits the scenario, missing the difference between analysis and generation, and overlooking responsible AI concerns when the question shifts from technical capability to business suitability. The AI-900 exam often uses realistic wording, so stay anchored to the required outcome.

  • Prediction from historical data usually indicates machine learning.
  • Image interpretation usually indicates computer vision.
  • Text or speech understanding usually indicates NLP or speech services.
  • New content creation from prompts usually indicates generative AI.
  • Sensitive use cases should trigger responsible AI thinking.

Exam Tip: If a question asks what a company should consider when deploying AI, the best answer may focus on fairness, privacy, or accountability rather than a technical feature. Do not assume every AI question is asking for a service name.

By the end of this chapter, your goal is to recognize workload patterns quickly and accurately. That skill is essential for AI-900 success because many later objectives build on it. If you can classify the scenario, the correct answer becomes much easier to spot.

Chapter milestones
  • Recognize core AI workloads and business value
  • Differentiate AI, machine learning, and generative AI use cases
  • Connect AI workloads to Azure service categories
  • Practice exam-style scenarios for Describe AI workloads
Chapter quiz

1. A retail company wants to analyze images from store cameras to determine how many people enter the store each hour and whether shelves are empty. Which AI workload best fits this requirement?

Show answer
Correct answer: Computer vision
This scenario involves analyzing visual content from images or video, which is a computer vision workload. Counting people and detecting empty shelves are image analysis tasks. Natural language processing is used for text or language-based tasks such as sentiment analysis or translation, so it does not fit. Conversational AI focuses on bots and interactive dialogue systems, not image-based detection.

2. A bank wants to use historical customer data to predict whether a loan applicant is likely to default. Which type of AI solution is most appropriate?

Show answer
Correct answer: Machine learning
Predicting whether a customer will default is a classic machine learning scenario because the system uses patterns in historical data to make predictions or classifications. Generative AI is designed to create new content such as text or images, not primarily to predict outcomes from structured data. Optical character recognition (OCR) is used to extract text from images or scanned documents, which is unrelated to credit-risk prediction.

3. A company wants a solution that can draft marketing email content in natural language based on a short prompt entered by a user. Which AI capability is being described?

Show answer
Correct answer: Generative AI
Drafting new marketing email text from a prompt is a generative AI use case because the system creates original content in natural language. Speech recognition converts spoken audio to text, which is not the main requirement here. Knowledge mining focuses on extracting and indexing insights from large collections of documents so information can be discovered, not on generating new email copy.

4. A manufacturer wants to process thousands of scanned warranty forms and extract printed and handwritten text into a searchable system. Which Azure AI workload category best matches this scenario?

Show answer
Correct answer: Computer vision
Extracting printed and handwritten text from scanned forms is typically handled by OCR and document intelligence capabilities, which fall under the computer vision service category in AI-900 workload mapping. Machine learning is too broad and would not be the best answer when the scenario specifically describes text extraction from images. Conversational AI would apply to chatbots or voice assistants, which are not part of this requirement.

5. A company builds a chatbot that answers employee questions by searching policy documents and then responding in natural language with a summarized answer. On the AI-900 exam, which workload should you identify as the dominant business goal?

Show answer
Correct answer: Generative AI experience
The key verbs in the scenario are 'answers' and 'responding in natural language with a summarized answer,' which indicate a generative AI experience. Although the solution may also involve search or knowledge mining behind the scenes, the dominant business goal is natural-language answer generation. Anomaly detection is used to find unusual patterns in data, which is not described here. Image classification applies to labeling images, which is unrelated to employee policy questions.

Chapter 3: Fundamental Principles of ML on Azure

This chapter maps directly to one of the most heavily tested AI-900 objective areas: understanding the fundamental principles of machine learning and recognizing how Azure supports those principles. For the exam, you are not expected to build complex models or write code. Instead, you must identify what machine learning is, distinguish common machine learning approaches, understand the basic lifecycle of training and inference, and recognize when Azure Machine Learning, automated machine learning, or visual designer tools are appropriate.

A common AI-900 challenge is that questions often sound technical even when they test simple conceptual distinctions. For example, the exam may describe a business scenario involving predictions, grouping, or decision-making and ask you to identify the machine learning category or the most suitable Azure capability. Your task is to decode the language of the question. If a scenario predicts a known category such as pass or fail, approved or rejected, or spam or not spam, think classification. If it predicts a numeric value such as sales amount or temperature, think regression. If it groups similar items without predefined labels, think clustering. If it looks for unusual behavior, think anomaly detection.

This chapter also supports broader course outcomes by helping you describe AI workloads and common business scenarios tested on AI-900, explain beginner-level machine learning concepts on Azure, and practice the kind of reasoning needed for exam-style comparisons. As you read, focus on the vocabulary the exam uses repeatedly: features, labels, training, validation, model, inference, responsible AI, automated machine learning, and Azure Machine Learning.

Exam Tip: AI-900 often rewards recognition more than memorization. If you can identify the problem type, the expected output, and whether labeled data is available, you can eliminate many wrong answers quickly.

Another common trap is confusing Azure Machine Learning with prebuilt Azure AI services. Azure Machine Learning is typically used to build, train, manage, and deploy custom machine learning models. By contrast, Azure AI services provide prebuilt capabilities for vision, speech, language, and related workloads. When a question describes custom prediction from business data such as churn, pricing, or forecasting, Azure Machine Learning is usually the stronger match.

In the sections that follow, you will build a beginner-friendly but exam-focused understanding of machine learning concepts, supervised and unsupervised learning basics, Azure machine learning workflows, and responsible AI principles. You will also learn how to spot common exam traps and interpret scenario wording accurately so that your answer aligns with what the test is really asking.

Practice note for this chapter's objectives (understand machine learning concepts at a beginner level; explain supervised, unsupervised, and reinforcement learning basics; identify Azure machine learning capabilities and workflows; practice exam-style questions on ML principles and Azure services): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 3.1: Fundamental principles of ML on Azure objective overview

Section 3.1: Fundamental principles of ML on Azure objective overview

The AI-900 exam expects you to understand machine learning at a beginner level and connect that knowledge to Azure. At a high level, machine learning is a subset of AI in which systems learn patterns from data and use those patterns to make predictions, classifications, recommendations, or decisions. On the exam, this objective usually appears through short business scenarios rather than mathematical formulas.

Microsoft commonly frames this objective around the machine learning workflow: collect data, prepare data, train a model, validate the model, deploy the model, and use the model for inference. You do not need deep statistical expertise, but you do need to understand what happens at each stage. Training means using historical data to teach the model patterns. Validation means checking whether the model performs well on data it has not already memorized. Inference means using the trained model to generate predictions on new data.

Azure supports this lifecycle through Azure Machine Learning, which provides tools for data science workflows, model training, model management, and deployment. In exam terms, think of Azure Machine Learning as the platform for end-to-end machine learning operations. Questions may mention data scientists, model training, experiments, pipelines, endpoints, or model deployment. Those clues point toward Azure Machine Learning.

The exam also wants you to recognize machine learning as one AI workload among many. Not every intelligent system is a custom ML solution. If the scenario requires recognizing faces in images, translating text, or analyzing sentiment with prebuilt features, Azure AI services may be more appropriate than custom ML development. If the scenario focuses on using business data to predict an outcome specific to the organization, machine learning on Azure is more likely.

Exam Tip: Watch for wording such as predict, estimate, forecast, classify, segment, detect unusual behavior, or optimize decisions. These verbs usually indicate a machine learning workload rather than a simple rules-based application.

A common trap is overcomplicating the question. AI-900 typically tests whether you can match a plain-language business need to the right concept. Focus first on the desired outcome, then on the type of data available, and finally on whether the organization needs a custom model or a prebuilt AI service.

Section 3.2: Core ML concepts: features, labels, models, training, validation, and inference

This section covers foundational terminology that appears throughout the AI-900 exam. A feature is an input variable used by a model to make a prediction. For example, in a model predicting house prices, features might include square footage, number of bedrooms, and location. A label is the known outcome the model is trying to learn during supervised training. In the same example, the label would be the actual sale price.

A machine learning model is the mathematical representation of patterns learned from data. For AI-900, you do not need to know the internal algorithms in detail. What matters is the role of the model: it uses features to estimate or predict an output. During training, the model learns from historical data. During validation, its quality is evaluated using separate data so you can estimate how well it will perform on unseen examples. During inference, the trained model is applied to new data to produce a prediction or decision.

Questions may test whether you understand the difference between training and inference. Training happens before deployment and uses existing data to create the model. Inference happens after deployment and uses new input data to generate output. Many candidates miss this because both involve data flowing through a model, but only training changes the model.

  • Features = inputs
  • Labels = known target values in supervised learning
  • Training = learning from historical data
  • Validation = evaluating model performance on separate data
  • Inference = making predictions on new data
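The vocabulary above can be made concrete with a tiny, self-contained sketch. This is a conceptual study aid only, not an Azure API: a one-feature least-squares model where square footage is the feature, sale price is the label, `train` is the training step, and `infer` is inference on unseen input.

```python
# Conceptual sketch (not an Azure API): one feature (square footage)
# predicting a label (sale price) with a tiny least-squares model.

def train(features, labels):
    """Training: learn slope and intercept from historical labeled data."""
    n = len(features)
    mean_x = sum(features) / n
    mean_y = sum(labels) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(features, labels)) / \
            sum((x - mean_x) ** 2 for x in features)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def infer(model, new_feature):
    """Inference: apply the already-trained model to new input."""
    slope, intercept = model
    return slope * new_feature + intercept

# Historical data: square footage (features) and sale prices (labels)
sqft = [1000, 1500, 2000, 2500]
price = [200_000, 300_000, 400_000, 500_000]

model = train(sqft, price)      # training happens before deployment
estimate = infer(model, 1800)   # inference uses new data; the model is unchanged
# estimate == 360000.0
```

Note that only `train` produces or changes the model; `infer` just applies it. That is exactly the training-versus-inference distinction the exam probes.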

Exam Tip: If a question asks what is needed for supervised learning, look for labeled data. If labels are not available and the goal is to find patterns or groups, supervised learning is not the right answer.

Another exam trap is confusing validation with testing in a broad sense. AI-900 is not trying to assess your knowledge of every model evaluation nuance. It simply expects you to know that validation checks whether the trained model performs acceptably before broad use. Keep your answer aligned to that level.
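To see what "validation on separate data" means at the AI-900 level, here is a deliberately trivial sketch. The "model" is just a mean predictor and the numbers are invented for illustration; the point is only that quality is checked on a held-out record the model never saw.

```python
# Minimal validation sketch (illustrative data, trivial "model"):
# evaluate on a held-out record the model did not train on.
history = [10, 12, 11, 13, 40]              # last record held out
train_data, holdout = history[:4], history[4:]

prediction = sum(train_data) / len(train_data)   # "train" a mean predictor
error = abs(holdout[0] - prediction)             # validation error
# error == 28.5

# A large holdout error warns that the model may not generalize
# to unseen data, which is the whole purpose of validation.
```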

Finally, remember that data quality matters. A model trained on poor, biased, incomplete, or unrepresentative data will likely perform poorly. This idea connects to responsible AI later in the chapter and can also appear in scenario-based questions about model reliability and fairness.

Section 3.3: Types of machine learning: classification, regression, clustering, and anomaly detection

The exam frequently tests your ability to identify the correct machine learning type from a scenario. The safest way to answer is to focus on the output the organization wants. Classification predicts a category or class. Examples include whether a customer will churn, whether a transaction is fraudulent, or whether an email is spam. Even if there are more than two categories, it is still classification if the output is a label rather than a number.

Regression predicts a numeric value. Typical examples include forecasting revenue, estimating delivery time, predicting temperature, or calculating maintenance cost. A common exam trap is that forecasting may sound different from regression, but if the output is a number, regression is often the best answer in AI-900-level questions.

Clustering is an unsupervised learning method used to group similar data points without predefined labels. A retailer segmenting customers by purchasing behavior is a classic clustering scenario. The key clue is that the groups are discovered from the data rather than assigned in advance.

Anomaly detection identifies unusual patterns, events, or outliers. Examples include detecting suspicious network activity, abnormal sensor readings, or financial transactions that differ sharply from normal behavior. If the scenario emphasizes rare or unusual cases rather than broad categories, anomaly detection is likely the intended answer.

The exam may also reference reinforcement learning at a conceptual level. Reinforcement learning involves an agent learning through actions, rewards, and penalties to maximize long-term success. It is commonly associated with dynamic decision-making such as robotics, game playing, or route optimization. AI-900 usually tests recognition of the concept, not implementation detail.

Exam Tip: Use a simple four-part filter: category output means classification, numeric output means regression, unlabeled grouping means clustering, and unusual behavior means anomaly detection.

A common trap is mixing clustering with classification. If the categories already exist and the model learns to assign records to those categories, that is classification. If the system discovers natural groupings without labeled examples, that is clustering. This distinction shows up often because both involve grouping, but they rely on different data conditions.
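The filter and the clustering-versus-classification distinction can be written down as a small lookup, purely as a study aid (the categories and return strings are this course's shorthand, not any Azure API):

```python
# Study aid: map the desired output and label availability to the
# AI-900 answer. Inputs are simplified shorthand, not an Azure API.

def ml_problem_type(output_kind, has_labels=True):
    if output_kind == "category":
        # Predefined, labeled categories -> classification;
        # groups discovered from unlabeled data -> clustering.
        return "classification" if has_labels else "clustering"
    if output_kind == "number":
        return "regression"
    if output_kind == "unusual behavior":
        return "anomaly detection"
    return "re-read the scenario"

# Churn prediction with a labeled outcome column:
answer = ml_problem_type("category", has_labels=True)   # classification
# Customer segmentation with no predefined segments:
segments = ml_problem_type("category", has_labels=False)  # clustering
```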

Section 3.4: Azure Machine Learning concepts, automated machine learning, and designer tools

Azure Machine Learning is Microsoft’s cloud platform for building, training, deploying, and managing machine learning models. For AI-900, you should understand it as the primary Azure service for custom machine learning workflows. It supports experimentation, data preparation, model training, deployment endpoints, and operational management across the model lifecycle.

Automated machine learning, often called automated ML or AutoML, helps users train and select models by automating tasks such as algorithm selection, feature engineering in some workflows, and hyperparameter tuning. On the exam, automated ML is usually the right fit when the scenario describes a user who wants to create a predictive model efficiently without manually testing many algorithms. It is especially useful for common prediction problems such as classification, regression, and forecasting.

Designer tools in Azure Machine Learning provide a visual, drag-and-drop approach to creating machine learning pipelines. This is important for AI-900 because Microsoft often tests whether you can identify low-code or no-code options. If the question emphasizes a graphical interface, reusable pipeline steps, or a user who prefers visual workflow design over writing code, designer is a strong candidate.

Azure Machine Learning can also deploy trained models as endpoints for inference. This means applications can send new data to the deployed model and receive predictions. If the scenario mentions exposing a model for use by apps or business systems, think about deployment and inference endpoints.
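To make "applications send new data and receive predictions" concrete, here is a local stand-in for a scoring endpoint. The payload shape and the pretend model are assumptions for study purposes only; they are not the actual Azure Machine Learning request contract.

```python
import json

# Conceptual sketch of the request/response flow for a deployed
# scoring endpoint. Payload format and model are illustrative
# assumptions, not the real Azure Machine Learning contract.

def score(request_body: str) -> str:
    """Stand-in for a deployed model's scoring function."""
    payload = json.loads(request_body)
    # Pretend the trained model is: demand = 2 * promotions + 50
    predictions = [2 * row["promotions"] + 50 for row in payload["data"]]
    return json.dumps({"predictions": predictions})

# An application sends new records and receives predictions back:
request = json.dumps({"data": [{"promotions": 10}, {"promotions": 0}]})
response = json.loads(score(request))
# response["predictions"] == [70, 50]
```

The key exam idea is the direction of data flow: new input goes to the deployed model, predictions come back, and the model itself is not retrained by this exchange.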

Exam Tip: Distinguish between building a custom model and using a prebuilt AI feature. Azure Machine Learning is for custom model development and lifecycle management; Azure AI services are for prebuilt capabilities like vision, speech, and language.

A common trap is assuming automated ML removes the need for validation or responsible AI review. It speeds model selection and experimentation, but it does not eliminate the need to evaluate quality, fairness, and suitability. Another trap is confusing visual designer tools with dashboards or reporting tools. Designer creates ML workflows, not just data visualizations.

In short, if the exam scenario involves data scientists, model training, experimentation, low-code model creation, or deployment of a predictive model, Azure Machine Learning, automated ML, and designer should be top of mind.

Section 3.5: Responsible AI in machine learning on Azure: fairness, reliability, privacy, and transparency

Responsible AI is explicitly tested on AI-900, and machine learning questions may include ethical or governance concerns alongside technical choices. You should understand the core principles at a practical level. Fairness means AI systems should avoid unjust bias and treat people equitably. If a hiring or loan approval model performs better for one group than another because of biased training data, that is a fairness concern.

Reliability and safety mean models should perform consistently and as expected under normal conditions. A model that produces unstable results or fails in predictable edge cases can create business and legal risk. Privacy and security mean data used for training and inference must be handled appropriately, protected from unauthorized access, and used in line with regulations and policies.

Transparency means stakeholders should have understandable information about what the system does, how it is used, and in some cases why it produced a result. Accountability means humans remain responsible for oversight and governance of AI systems. While the section title emphasizes fairness, reliability, privacy, and transparency, do not forget that AI-900 often presents these principles together conceptually.

On Azure, responsible machine learning practices can involve careful dataset review, performance monitoring, documentation, access controls, and explainability-oriented tools. For the exam, however, conceptual understanding matters more than tool-specific depth. If a question asks how to reduce harmful bias, improving data representativeness and evaluating performance across groups are usually better answers than simply increasing model complexity.

Exam Tip: When the scenario mentions bias, discrimination, unequal outcomes, or underrepresented groups, think fairness first. When it mentions unpredictable failures, think reliability. When it mentions sensitive personal data, think privacy and security. When it asks for understandable explanations, think transparency.

A major trap is choosing a purely technical answer to an ethical problem. More data or a bigger model does not automatically solve fairness issues. AI-900 expects you to recognize that responsible AI includes governance, human oversight, and process decisions, not just algorithms.

Section 3.6: Exam-style scenarios comparing ML approaches, outputs, and Azure options

This final section focuses on how to think through the scenario wording you will see on the exam. AI-900 questions often compare machine learning approaches indirectly. Instead of asking for a definition, they describe a business need and expect you to identify the learning type, the expected output, or the right Azure service. The key is to isolate three things quickly: what the input data looks like, what output is required, and whether the organization needs a custom model or a prebuilt service.

If a company wants to predict whether a customer will renew a subscription, the output is a category such as yes or no, which signals classification. If it wants to estimate monthly sales, the output is numeric, which signals regression. If it wants to divide customers into segments without predefined segment labels, that signals clustering. If it wants to flag unusual credit card transactions, that suggests anomaly detection.

Then ask whether Azure Machine Learning is necessary. If the scenario is about custom predictions based on proprietary business data, Azure Machine Learning is usually appropriate. If the scenario instead asks for prebuilt language, speech, or image analysis capabilities, that points away from Azure Machine Learning and toward Azure AI services.

When comparing automated ML and designer, remember the intent. Automated ML is for streamlining model selection and training across common predictive tasks. Designer is for building workflows visually. Both are in Azure Machine Learning, but they are not interchangeable in every question.

Exam Tip: Eliminate answers in layers. First identify the problem type. Next identify whether labeled data exists. Then decide whether the solution is custom ML or a prebuilt AI capability. This process is often enough to narrow the options to the correct answer.
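The layered elimination in the tip can itself be sketched as a function, again purely as exam-practice shorthand with simplified inputs and outputs:

```python
# Study aid encoding the layered elimination: problem type first,
# then label availability, then custom ML vs prebuilt service.
# Inputs and return strings are simplified for exam practice only.

def narrow_answer(output_kind, has_labels, needs_custom_model):
    problem = {"category": "classification", "number": "regression"}.get(
        output_kind, "clustering or anomaly detection")
    learning = "supervised" if has_labels else "unsupervised"
    platform = ("Azure Machine Learning" if needs_custom_model
                else "Azure AI services (prebuilt)")
    return problem, learning, platform

# Forecasting monthly sales from proprietary labeled business data:
answer = narrow_answer("number", has_labels=True, needs_custom_model=True)
# ("regression", "supervised", "Azure Machine Learning")
```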

Common traps include being distracted by industry context, overvaluing code-related terms, or confusing output types. The exam does not care whether the scenario is healthcare, finance, retail, or manufacturing unless that context changes the AI need. Focus on what the system must do. If you stay anchored to outcome, data, and Azure service fit, you will answer these machine learning questions with much greater confidence.

Chapter milestones
  • Understand machine learning concepts at a beginner level
  • Explain supervised, unsupervised, and reinforcement learning basics
  • Identify Azure machine learning capabilities and workflows
  • Practice exam-style questions on ML principles and Azure services

Chapter quiz

1. A retail company wants to use historical customer data to predict whether a shopper is likely to cancel a subscription in the next 30 days. The dataset includes a column that indicates whether each past customer actually canceled. Which type of machine learning problem is this?

Correct answer: Classification
This is classification because the goal is to predict a known category or label, such as canceled or not canceled. In AI-900, supervised learning uses labeled data to predict known outcomes. Clustering is incorrect because it groups similar records without predefined labels. Anomaly detection is incorrect because the scenario is not primarily about finding unusual behavior; it is about predicting one of two known outcomes.

2. A company wants to analyze customer records and group customers into segments based on purchasing behavior. The company does not have predefined segment labels. Which machine learning approach should you identify?

Correct answer: Unsupervised learning
This is unsupervised learning because the data does not include predefined labels and the goal is to find structure or patterns, such as customer segments. Supervised learning is incorrect because it requires labeled outcomes for training. Reinforcement learning is incorrect because it is used when an agent learns through rewards and penalties based on actions, not when grouping similar records in static business data.

3. You need to build, train, and deploy a custom machine learning model in Azure using business data to predict future product demand. Which Azure service is the best match?

Correct answer: Azure Machine Learning
Azure Machine Learning is the correct choice because AI-900 expects you to recognize it as the primary Azure service for building, training, managing, and deploying custom machine learning models. Azure AI services is incorrect because it provides prebuilt capabilities for vision, speech, and language rather than custom predictive modeling from business data. Azure AI Document Intelligence is incorrect because it is specialized for extracting information from documents, not forecasting demand with custom ML.

4. A bank trains a model to estimate the likely dollar amount of a loan applicant's future monthly spend based on income, credit history, and account activity. What type of prediction is the model making?

Correct answer: Regression
This is regression because the model predicts a numeric value: a dollar amount. On the AI-900 exam, regression is used when the output is continuous or numerical. Classification is incorrect because it predicts a category, such as approved or rejected, rather than a number. Clustering is incorrect because clustering groups similar items without predicting a specific labeled or numeric outcome.

5. A team is preparing a machine learning solution in Azure. They use historical data to create a model, test its performance on held-out data, and then use the deployed model to generate predictions on new records. Which statement correctly describes training and inference?

Correct answer: Training is the process of fitting a model by using data, and inference is the process of using the trained model to make predictions on new data.
This is the correct distinction tested in AI-900: training is when the model learns patterns from data, and inference is when the trained model is used to generate predictions from new input. A common exam trap is an answer that reverses the two definitions. Answers that substitute fairness or responsible AI for these lifecycle steps are also incorrect: those principles are important considerations, but they do not replace the core lifecycle concepts of training and inference.

Chapter 4: Computer Vision Workloads on Azure

Computer vision is a core AI-900 exam domain because Microsoft expects candidates to recognize common visual AI workloads and map each one to the most appropriate Azure service. On the exam, you are usually not asked to build a full solution. Instead, you are asked to identify what the business needs, translate that need into a vision task such as image classification, object detection, optical character recognition, or document processing, and then choose the correct Azure AI offering. This chapter focuses on exactly that decision-making skill.

In practical business settings, computer vision systems help organizations extract meaning from images, video, and scanned documents. Examples include automatically describing photos, detecting products on shelves, reading text from receipts, analyzing live video streams for safety monitoring, and extracting structured fields from forms and invoices. The AI-900 exam tests whether you understand these scenarios at a foundational level, including where Azure AI Vision fits, where Document Intelligence fits, and where the boundaries of face-related capabilities and responsible AI matter.

As you study this chapter, keep one recurring exam principle in mind: the test often rewards precise matching. If a scenario says “read text from an image,” that points to OCR. If it says “extract fields from invoices and forms,” that points to document intelligence rather than generic image analysis. If it says “detect and locate multiple objects in an image,” that points to object detection rather than simple tagging or captioning. Small wording differences matter.

This chapter naturally integrates the key lessons you need for AI-900 readiness: identifying computer vision tasks and outcomes, matching vision workloads to Azure AI services, understanding image, video, and document intelligence use cases, and preparing for AI-900-style service-selection questions. Read each section with the exam objective in mind and pay close attention to common traps.

  • Know the difference between broad image understanding and specialized document extraction.
  • Recognize when a scenario is about labels, descriptions, locations, or text.
  • Understand that face-related features have responsible use boundaries and are tested conceptually.
  • Expect the exam to describe a business requirement first and name the Azure service second, if at all.

Exam Tip: On AI-900, the best answer is usually the service that most directly solves the stated requirement with the least complexity. Do not over-engineer the scenario in your head. Choose the foundational Azure AI service that matches the described workload.

Use the following sections as a guided map through the computer vision objective area. They are organized the way an exam coach would teach them: first the objective overview, then image tasks, then face-related boundaries, then OCR and document workloads, then video and spatial scenarios, and finally service-selection guidance that helps you avoid classic exam mistakes.

Practice note: the same discipline applies to every lesson in this chapter, whether you are identifying computer vision tasks and outcomes, matching vision workloads to Azure AI services, studying image, video, and document intelligence use cases, or practicing AI-900-style questions. Document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 4.1: Computer vision workloads on Azure objective overview

The AI-900 exam expects you to recognize the main categories of computer vision workloads and match them to Azure services at a high level. The tested objective is not deep implementation detail. Instead, Microsoft wants you to understand what computer vision systems do, what business outcomes they support, and which Azure service family is the best fit. That means identifying whether a requirement involves image analysis, face-related analysis, OCR, document processing, or video and spatial analysis.

Computer vision workloads generally involve extracting useful information from visual input. That input may be a still image, a scanned page, a PDF, or a video stream. The system then produces outcomes such as tags, captions, object locations, recognized text, form fields, or event detections. On AI-900, the first question you should silently ask is: what type of output is the organization trying to get from the visual data?

Azure includes multiple services relevant to these needs. Azure AI Vision is commonly associated with image analysis tasks such as tagging, captioning, and object detection, and also supports OCR-oriented capabilities in the broader vision space. Azure AI Document Intelligence is more specialized for extracting text, key-value pairs, tables, and structured information from forms and business documents. Video scenarios may involve analyzing visual streams, while spatial understanding scenarios focus on people movement and presence in physical spaces. The exam may also refer to face-related capabilities, but you should remember there are important responsible AI boundaries around identity-related uses.

Exam Tip: If the scenario centers on understanding a general photo, start thinking Azure AI Vision. If it centers on extracting fields from a business document such as an invoice, contract, or receipt, start thinking Azure AI Document Intelligence.

A common exam trap is confusing a workload with a custom machine learning project. AI-900 often emphasizes prebuilt Azure AI services. If the scenario is straightforward and aligns with standard visual tasks, the correct answer is usually a managed Azure AI service rather than training a custom model from scratch. Another trap is confusing image classification with object detection. Classification answers “what is in this image,” while object detection answers “what objects are present and where are they located.”

To score well, train yourself to convert business language into technical intent. “Describe a scene” suggests captioning. “Label contents” suggests tagging. “Find every bicycle in the image” suggests object detection. “Read handwriting from a form” suggests OCR or document intelligence depending on whether structure matters. This translation skill is one of the most valuable exam skills in the entire vision objective.
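That translation skill can be drilled with a simple phrase-to-task table. The phrase list below is illustrative and far from exhaustive; it just encodes the examples from this section:

```python
# Study aid: translating AI-900 scenario wording into vision tasks.
# The phrase list is illustrative, not exhaustive.
PHRASE_TO_TASK = {
    "describe a scene": "captioning",
    "label contents": "tagging",
    "find every bicycle in the image": "object detection",
    "read handwriting from a form": "OCR / document intelligence",
}

def vision_task(phrase):
    """Look up the vision task implied by a scenario phrase."""
    return PHRASE_TO_TASK.get(phrase.lower(), "review the requirement")

task = vision_task("Describe a scene")   # "captioning"
```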

Section 4.2: Image analysis concepts including tagging, captioning, and object detection

Image analysis is one of the most testable computer vision topics on AI-900 because it includes multiple similar-sounding capabilities that candidates often mix up. The exam commonly tests your ability to distinguish tagging, captioning, and object detection. Although all three involve analyzing images, they produce different outputs and serve different business goals.

Tagging assigns descriptive labels to an image. For example, a photo might receive tags such as “outdoor,” “car,” “person,” or “tree.” This is useful for content organization, search, indexing, and digital asset management. If a business wants to automatically categorize large image libraries so users can search them later, tagging is a strong fit. Captioning goes a step further by generating a natural-language description of the scene, such as “A person riding a bicycle on a city street.” Captioning is often useful for accessibility, user experience, and content summarization.

Object detection differs because it not only identifies objects but also locates them within the image, typically with bounding boxes. If a company wants to count products on store shelves, identify vehicles in traffic images, or mark where defects appear in a manufacturing photo, object detection is the key concept. On the exam, wording such as “locate,” “detect multiple items,” or “identify where objects appear” is a clue that object detection is being tested.

Exam Tip: If the output needs coordinates or locations, think object detection, not just tagging. Tags describe presence; detection describes presence plus position.
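The "presence versus presence plus position" contrast is easiest to remember as two output shapes. The field names below are assumptions chosen for study purposes, not the exact Azure AI Vision response schema:

```python
# Illustrative contrast between tagging output and object-detection
# output. Field names are study-purpose assumptions, not the exact
# Azure AI Vision response schema.

tagging_result = ["car", "person", "tree"]      # presence only

detection_result = [                            # presence plus position
    {"label": "car", "confidence": 0.92,
     "box": {"x": 40, "y": 60, "w": 120, "h": 80}},
    {"label": "person", "confidence": 0.88,
     "box": {"x": 200, "y": 30, "w": 50, "h": 110}},
]

# The bounding box is what distinguishes detection from tagging.
has_locations = all("box" in obj for obj in detection_result)
```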

Another distinction to keep straight is between general image analysis and OCR. If the input is a regular photo and the business wants to understand the visual scene, Azure AI Vision is likely the intended answer. If the image contains important printed or handwritten text that must be extracted, text recognition becomes central. The exam may include scenarios where both are possible, but the best answer is the one aligned with the primary requirement.

Common traps include choosing captioning when the business only needs search labels, or choosing tagging when the business needs a user-friendly sentence. Watch for the exact wording of the desired output. “Generate a sentence” is different from “assign categories.” Also, some candidates assume classification and object detection are the same. Classification often summarizes an image at a high level, while detection identifies individual items. AI-900 rewards precise reading here.

In business use cases, image analysis can support media management, retail monitoring, product catalog enrichment, social media moderation support, and accessibility solutions. When you see these scenarios on the exam, focus less on implementation and more on the required result. That is how you identify the correct Azure AI service and capability quickly.

Section 4.3: Face-related capabilities, identity considerations, and responsible use boundaries

Face-related AI is one of the most sensitive and easily misunderstood parts of the vision objective. For exam purposes, you should understand that systems can analyze human faces for certain attributes or detect their presence, but identity-related uses have significant responsible AI and policy considerations. Microsoft expects AI-900 candidates to know not just what technology can do, but also where caution and governance are required.

At a foundational level, face-related capabilities can include detecting that a face is present in an image and returning basic visual attributes associated with that detection. Historically, face services have also been discussed in relation to verification or recognition scenarios, but exam candidates must be careful here. The responsible use boundary matters. On AI-900, if a scenario suggests identifying a person for security, surveillance, or sensitive decision-making, you should recognize that this is an area with restrictions and ethical concerns rather than simply assuming a generic “face API” is the right answer.

Exam Tip: When face-related questions appear, look for whether the exam is testing capability awareness or responsible AI awareness. Microsoft often wants you to recognize that not every technically possible scenario is appropriate or generally available without constraints.

A common trap is confusing face detection with person identification. Detecting that a face exists in an image is not the same as determining who the person is. Another trap is ignoring privacy and fairness concerns. AI-900 is not purely a features exam; it also includes responsible AI principles. If a scenario implies high-impact use, identity matching, or sensitive monitoring, pause and consider whether the question is probing your understanding of ethical boundaries.

From a business perspective, face-related capabilities may appear in photo organization, user interaction, or presence detection scenarios. However, in exam language, broad statements like “identify customers by their faces as they enter the store” should make you cautious. Microsoft wants candidates to understand that AI solutions involving biometric identity require careful governance, legal review, and responsible use considerations.

The best exam strategy is to separate three ideas: detecting faces in an image, analyzing visual characteristics in a limited sense, and using biometric identity in a real-world decision flow. The first is straightforward computer vision. The third is where policy, fairness, privacy, and restricted use become central. AI-900 often rewards the candidate who reads beyond the technical words and recognizes the governance implications.

Section 4.4: Optical character recognition and document intelligence workloads on Azure

Optical character recognition, or OCR, is the task of reading text from images or scanned documents. This is a very common AI-900 topic because many organizations need to convert visual text into machine-readable data. The exam typically tests whether you can distinguish basic text extraction from deeper document understanding. That distinction is essential for choosing between general vision-based OCR needs and Azure AI Document Intelligence for structured document extraction.

If a scenario says a company wants to read street signs, scan a photo of a menu, capture text from a screenshot, or extract words from a simple image, OCR is the central idea. If the requirement expands to understanding document structure, key-value pairs, tables, receipts, invoices, IDs, or forms, then document intelligence is usually the better answer. In other words, OCR gets the text; document intelligence gets the business meaning and structure.
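The "text versus business meaning" distinction is easiest to see as two output shapes. Both outputs below are invented for illustration; they are not actual responses from any Azure service:

```python
# Study contrast: OCR returns raw text; document intelligence returns
# structured fields. Both outputs are illustrative, not actual Azure
# service responses.

ocr_output = "INVOICE 1042 Contoso Ltd Total: $318.50"   # just the characters

doc_intelligence_output = {                               # business structure
    "invoice_number": "1042",
    "vendor": "Contoso Ltd",
    "total": 318.50,
}

# Downstream workflows can consume named fields directly,
# which plain OCR text does not provide.
amount_due = doc_intelligence_output["total"]
```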

Azure AI Document Intelligence is especially important on the exam because it addresses real business automation scenarios. Organizations use it to process invoices, purchase orders, tax forms, receipts, insurance documents, and application forms. The service can extract fields such as invoice number, total amount, vendor name, dates, and line items. This is much more than generic image analysis. It is purpose-built document processing.

Exam Tip: If the question emphasizes forms, fields, tables, or business documents, do not stop at OCR. Think Azure AI Document Intelligence because the exam is usually testing structured extraction, not just reading characters.

A common trap is choosing Azure AI Vision simply because the source file is an image or PDF. The file format does not determine the answer; the required output does. If the company only needs the text itself, OCR may be enough. If it needs data mapped into usable fields for workflows or downstream systems, document intelligence is the stronger match. Another trap is assuming all document solutions require custom machine learning. AI-900 often expects you to know that prebuilt document models and managed document extraction services already exist.

Business use cases include digitizing paper archives, automating accounts payable, extracting data from claims forms, and enabling searchable document repositories. On the exam, look for verbs like “extract,” “parse,” “identify fields,” “process invoices,” or “read tables.” Those are your clues that the workload belongs in the document intelligence category rather than generic image analysis.

Section 4.5: Video analysis and spatial understanding scenarios for business applications

Not all computer vision workloads involve still images. The AI-900 exam may also test video analysis and spatial understanding scenarios, especially in business environments such as retail, manufacturing, security monitoring, and workplace analytics. The key is to recognize when the input is continuous visual data over time and when the business needs event detection, movement tracking, or understanding of how people use physical spaces.

Video analysis often involves processing frames from a camera feed to identify events or patterns. For example, a company might want to monitor a production line for anomalies, detect whether people are entering a restricted area, or analyze foot traffic in a store. The exam is less focused on deep architecture and more focused on understanding that video introduces time and sequence, not just isolated image interpretation. If a scenario mentions live streams, surveillance cameras, or ongoing monitoring, you should immediately think beyond basic image tagging.

Spatial understanding scenarios focus on how people and objects move through physical environments. In a retail store, a business may want to know how many people entered a section, how long they remained there, or whether occupancy thresholds were exceeded. In an office or venue, the goal might be to understand space utilization. The exam may describe these as people-counting, movement analysis, or presence detection scenarios.

Exam Tip: If the requirement depends on motion, sequence, occupancy, or events over time, it is probably not a simple image analysis question. Read carefully for clues that the workload involves video or spatial context.

A common trap is to answer with OCR or image analysis just because individual frames exist. While video consists of images, the business requirement may depend on time-based behavior. Another trap is overlooking privacy and responsible use issues, especially if the scenario involves monitoring people. AI-900 expects foundational awareness that visual AI in real spaces should be designed carefully and responsibly.

For exam readiness, connect these scenarios to business outcomes: safety monitoring, operational efficiency, occupancy insights, queue management, and facility optimization. Then ask what the system must understand: a single image, a stream of frames, or movement through a physical environment. That reasoning will help you eliminate wrong answers quickly and choose the Azure vision-related capability category that best fits the problem.

Section 4.6: Exam-style practice on selecting Azure AI Vision and related services

The final step in mastering this chapter is learning how the AI-900 exam frames service-selection decisions. Microsoft commonly writes questions as business requirements rather than direct feature checks. Your task is to convert the narrative into a workload category and then into the correct Azure service. For computer vision, the most common decision pattern is whether the need maps to Azure AI Vision, Azure AI Document Intelligence, a face-related capability with responsible use considerations, or a video/spatial analysis scenario.

Start by identifying the input type and the required output. If the input is a photo and the output is labels, descriptions, or object locations, Azure AI Vision is usually the correct direction. If the input is a scanned invoice and the output is fields such as total amount or invoice number, Azure AI Document Intelligence is the better answer. If the scenario centers on reading text from signs or screenshots, OCR-related vision capability is likely intended. If it focuses on people movement across camera feeds, think video or spatial understanding rather than static image analysis.

Exam Tip: Before looking at answer choices, classify the scenario in your own words: “This is tagging,” “This is OCR,” “This is document field extraction,” or “This is object detection.” Doing that first reduces confusion from distractors.
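The classify-first habit in the tip above can be sketched as a toy heuristic. The categories and trigger phrases below are illustrative study aids invented for this sketch, not an official Microsoft mapping, and a real exam scenario always needs a careful read rather than keyword matching.

```python
# Toy study aid: classify a vision scenario by signal phrases.
# Categories and phrases are illustrative assumptions, not official exam rules.
VISION_SIGNALS = {
    "document field extraction": ["invoice", "form", "receipt", "key-value", "table"],
    "ocr": ["read text", "street sign", "screenshot", "handwritten"],
    "object detection": ["locate", "bounding box", "each product", "where objects"],
    "image tagging or captioning": ["describe", "caption", "tag"],
}

def classify_vision_scenario(scenario: str) -> str:
    scenario = scenario.lower()
    for category, phrases in VISION_SIGNALS.items():
        if any(phrase in scenario for phrase in phrases):
            return category
    return "unclassified"

print(classify_vision_scenario("Extract the vendor name and total from each invoice"))
# document field extraction
```

Notice that the document category is checked before OCR, mirroring the guidance in this section: when both text reading and field extraction could apply, the structured-extraction interpretation usually wins.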

Common distractors on AI-900 include broader Azure terms, unrelated AI workloads, and custom model options that sound advanced but are not necessary. The exam often places two almost-correct answers side by side. To separate them, focus on the output granularity. For example, “find text in a form” differs from “extract named values from a form.” “Identify objects in an image” differs from “describe the full image in a sentence.” “Detect faces” differs from “identify a person by biometric identity.”

Another strong exam strategy is elimination. If the scenario is clearly about business documents, eliminate generic image labeling answers. If it is about scene description, eliminate document processing answers. If it involves sensitive face identification, evaluate whether responsible AI concerns are part of what the question is testing. AI-900 questions are often easier once you remove answers that solve the wrong category of problem.

As you review this chapter, aim to build fast pattern recognition. That is what improves score performance under time pressure. The exam is not asking whether you can code a computer vision solution. It is asking whether you can identify the workload, understand what the service is designed to do, and avoid common service-matching mistakes. That is the core of computer vision success on AI-900.

Chapter milestones
  • Identify key computer vision tasks and outcomes
  • Match vision workloads to Azure AI services
  • Understand image, video, and document intelligence use cases
  • Practice AI-900 style questions for computer vision workloads
Chapter quiz

1. A retail company wants to process photos of store shelves and identify each product location within an image so that out-of-stock items can be flagged automatically. Which computer vision task should the company use?

Correct answer: Object detection
Object detection is correct because the requirement is to identify and locate multiple items within an image. On the AI-900 exam, wording such as 'identify where objects are in the image' maps to detection rather than simple labeling. Image classification is incorrect because it assigns a label to the whole image, not coordinates for multiple products. OCR is incorrect because it is used to read text, not detect physical products on shelves.

2. A company needs to read printed and handwritten text from scanned receipts that customers upload from a mobile app. Which Azure capability best matches this requirement?

Correct answer: Optical character recognition (OCR)
OCR is correct because the business need is specifically to extract text from images. AI-900 commonly tests this exact distinction: 'read text from an image' points to OCR. Azure AI Vision image tagging is incorrect because tagging identifies general content such as objects or scenes, not the text itself. Face detection is incorrect because the scenario is unrelated to faces and would not help extract receipt text.

3. An accounts payable department wants to upload invoices and automatically extract fields such as vendor name, invoice number, and total amount into a business system. Which Azure AI service should you recommend?

Correct answer: Azure AI Document Intelligence
Azure AI Document Intelligence is correct because the requirement is structured field extraction from documents such as invoices and forms. This is a common AI-900 service-selection scenario. Azure AI Vision is incorrect because although it can analyze images and perform OCR, the exam expects Document Intelligence when the task is specialized document processing and field extraction. Azure AI Language is incorrect because it analyzes text content, sentiment, entities, and similar language workloads rather than extracting layout-aware fields from scanned forms.

4. A media company wants a solution that can generate a natural-language description of the main content of a photograph, such as 'A person riding a bicycle on a city street.' Which capability best fits this requirement?

Correct answer: Image captioning
Image captioning is correct because the requirement is to produce a descriptive sentence summarizing image content. On AI-900, this differs from detection, which focuses on identifying and locating objects. Object detection is incorrect because it would return identified objects and positions rather than a natural-language description. Document field extraction is incorrect because that applies to structured documents like forms and invoices, not general photographs.

5. You are reviewing possible Azure computer vision solutions for a customer. The customer asks for a face-related capability. According to AI-900 guidance, what should you keep in mind when selecting or discussing this type of solution?

Correct answer: Face-related capabilities have responsible AI boundaries and should be evaluated carefully for allowed use
Face-related capabilities have responsible AI boundaries and should be evaluated carefully for allowed use, which is the correct AI-900 conceptual takeaway. Microsoft expects candidates to understand that face scenarios are not just technical matching questions; they also involve responsible AI considerations. The option stating there are no special considerations is incorrect because it ignores those boundaries. The invoice extraction option is incorrect because invoice processing maps to Document Intelligence, not face-related features.

Chapter 5: NLP and Generative AI Workloads on Azure

This chapter maps directly to core AI-900 exam objectives around natural language processing and generative AI workloads on Azure. On the exam, Microsoft typically does not expect deep implementation detail, code syntax, or architecture design at an engineer level. Instead, you are tested on whether you can recognize a business scenario, identify the correct AI workload, and match that workload to the appropriate Azure AI service. That means your job as a test taker is to classify what the question is really asking: Is this a text analysis problem, a translation requirement, a speech scenario, a conversational interface, or a generative AI use case?

Natural language processing, or NLP, refers to AI systems that interpret, analyze, generate, or transform human language. In AI-900 questions, NLP often appears through customer feedback analysis, chatbots, document text extraction, translation across languages, call center transcription, and voice-enabled applications. The exam rewards precision. If a prompt asks to identify positive or negative opinions in reviews, that is sentiment analysis. If it asks to find company names, people, dates, or locations, that is entity recognition. If the scenario is converting speech to text, you should think speech recognition, not language understanding. The wrong answers are often plausible, so the distinction between adjacent services matters.

Generative AI is also a major modern exam domain. You should understand what a foundation model is, how prompts guide model output, what copilots do, and why responsible AI matters. Microsoft may test whether you can distinguish traditional NLP from generative AI. For example, extracting key phrases from reviews is not the same as asking a large language model to draft a summary. One is a task-specific analytical capability; the other is text generation using a foundation model.

Exam Tip: When you see words like create, draft, summarize, answer, generate, or propose, generative AI may be the intended answer. When you see classify, detect, identify, recognize, extract, or translate, think first about standard Azure AI language or speech services.
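The verb cue in the tip above can be written down as a tiny heuristic. The verb lists simply restate the tip; the return values are exam-level workload categories, not a definitive service mapping.

```python
# Toy heuristic restating the exam tip: generative verbs vs analytic verbs.
# Word lists and category labels are illustrative only.
GENERATIVE_VERBS = {"create", "draft", "summarize", "answer", "generate", "propose"}
ANALYTIC_VERBS = {"classify", "detect", "identify", "recognize", "extract", "translate"}

def suggest_workload(requirement: str) -> str:
    words = set(requirement.lower().split())
    if words & GENERATIVE_VERBS:
        return "generative AI"
    if words & ANALYTIC_VERBS:
        return "task-specific NLP or speech service"
    return "read the scenario again"

print(suggest_workload("draft a weekly summary for managers"))  # generative AI
```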

This chapter integrates the lesson goals you need for exam readiness: explaining NLP fundamentals and Azure service options, recognizing speech, language, and translation workloads, understanding prompts and copilots, and practicing the mental comparisons required for exam-style scenario analysis. As you read, focus on the patterns behind the services. AI-900 questions often describe outcomes rather than product names, so strong concept recognition is your best strategy.

Another recurring exam trap is confusing Azure AI services that sound similar. Azure AI Language supports multiple text-based NLP tasks. Azure AI Speech handles spoken language scenarios. Azure AI Translator focuses on language translation. Azure OpenAI supports generative AI models for text generation, summarization, reasoning-style responses, and chat experiences. A conversational solution might combine several of these, but exam questions usually have one dominant requirement. Your task is to identify the primary capability being tested.

Finally, keep responsible AI in view. AI-900 regularly emphasizes fairness, reliability, privacy, security, transparency, and accountability. In generative AI contexts, this expands to safe outputs, prompt filtering, grounding, content moderation, and human oversight.

Exam Tip: If two answers both seem technically possible, the exam often favors the one that reflects responsible and appropriate use of AI services for the stated business need.

Practice note: for each lesson goal in this chapter — explaining NLP fundamentals and service options, recognizing speech, language, and translation workloads on Azure, and understanding generative AI concepts, prompts, and copilots — document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 5.1: NLP workloads on Azure objective overview and common business scenarios

On AI-900, NLP questions typically begin with a business scenario rather than a technical label. A company wants to analyze product reviews, detect the language of support tickets, build a voice assistant, translate web content, or identify important terms in internal documents. Your exam skill is to translate that scenario into the right workload category. The major NLP-related categories you should know are text analytics, conversational language understanding, question answering, translation, and speech.

Azure provides these capabilities through Azure AI services, especially Azure AI Language, Azure AI Speech, and Azure AI Translator. Azure AI Language covers many text-focused scenarios, including sentiment analysis, key phrase extraction, entity recognition, summarization, and conversational language understanding. Azure AI Speech addresses speech-to-text, text-to-speech, speech translation, and speaker-related scenarios. Translator is for converting text or documents between languages. The exam usually tests recognition, not deployment steps.

Common business scenarios include customer feedback mining, routing support requests, extracting insights from social media posts, creating multilingual applications, transcribing meetings, and enabling voice interaction. A subtle exam trap is assuming every chatbot question is generative AI. Some chatbots use predefined intents and entities through language understanding, while others use foundation models through Azure OpenAI. Read carefully. If the goal is matching user utterances to a known intent such as book flight or check balance, that is a classic conversational language understanding scenario. If the goal is producing flexible, natural answers across many topics, that suggests generative AI.

Exam Tip: Start by asking: what is the input and what is the desired output? Text to labels usually means text analytics. Text to translated text means Translator. Speech to text means Speech. Prompt to generated answer means generative AI. This simple input-output mapping helps eliminate distractors quickly.
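The input-output mapping in this tip fits in a small lookup table. The pairs below paraphrase the tip itself; the parenthetical service names are the exam-level associations discussed in this chapter, not an exhaustive catalog.

```python
# The tip's input -> output mapping as a lookup table (illustrative only).
IO_TO_WORKLOAD = {
    ("text", "labels"): "text analytics (Azure AI Language)",
    ("text", "translated text"): "translation (Azure AI Translator)",
    ("speech", "text"): "speech recognition (Azure AI Speech)",
    ("text", "speech"): "speech synthesis (Azure AI Speech)",
    ("prompt", "generated answer"): "generative AI (Azure OpenAI)",
}

print(IO_TO_WORKLOAD[("speech", "text")])  # speech recognition (Azure AI Speech)
```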

The exam also expects you to understand that NLP can be combined with other workloads. For example, a scanned invoice might first require optical character recognition and then text analysis. However, when the question specifically emphasizes meaning, sentiment, intent, translation, or generated text, it is targeting the NLP and generative AI objectives rather than computer vision. Focus on the central task the business is trying to accomplish.

Section 5.2: Text analytics, sentiment analysis, key phrase extraction, and entity recognition

Text analytics is one of the most testable AI-900 topics because it includes several closely related capabilities that exam writers like to compare. Sentiment analysis determines whether text expresses positive, negative, mixed, or neutral opinion. Key phrase extraction identifies important terms or concepts in text. Entity recognition detects references such as people, organizations, locations, dates, times, and quantities. On the exam, these capabilities are usually associated with Azure AI Language.

To answer correctly, focus on what the organization wants to extract from the text. If a retailer wants to know whether reviews are favorable, the answer is sentiment analysis. If a legal team wants the main terms from long documents, key phrase extraction is likely the right fit. If a travel company needs to identify city names, departure dates, and customer names from messages, entity recognition is the best match. The exam often places all three as answer options because they all process text, but only one matches the requested outcome.

A common trap is confusing entity recognition with key phrase extraction. Entities are categorized items with semantic meaning, such as a person or place. Key phrases are important words or multiword terms that summarize the text but are not necessarily typed as categories. Another trap is assuming sentiment analysis summarizes text. It does not; it evaluates opinion polarity. Similarly, language detection identifies the language used in text, which is different from translation. Detecting that text is in French is not the same as converting it to English.

Exam Tip: Look for signal words. Opinion, attitude, favorable, unhappy, and satisfaction point to sentiment. Important topics, summary terms, or major concepts suggest key phrase extraction. Names, places, dates, brands, and addresses point to entity recognition.

AI-900 may also test that Azure AI Language can support document and conversation analysis scenarios without requiring you to know SDK details. The exam objective is service selection. If a scenario asks for extracting structured insight from unstructured text at scale, Azure AI Language is usually the core answer. Avoid overcomplicating it with machine learning training unless the question explicitly says the organization needs to build a custom model from labeled data. In most foundational questions, the exam is testing awareness of built-in AI capabilities rather than custom ML development.

Section 5.3: Translation, speech recognition, speech synthesis, and conversational language understanding

This objective area asks you to distinguish among language conversion, spoken language processing, and intent-based conversational systems. Translation means converting text from one language to another, commonly using Azure AI Translator. Speech recognition means converting spoken audio into text, using Azure AI Speech. Speech synthesis means converting text into spoken audio, also using Azure AI Speech. Conversational language understanding focuses on identifying user intent and relevant entities in a user utterance, typically within Azure AI Language.

These distinctions are highly testable because the services can appear in similar customer-facing applications. A multilingual call center application might use speech recognition to transcribe a caller, translation to convert the transcript, and speech synthesis to speak a response in another language. A virtual assistant might use speech recognition to capture the user request, conversational language understanding to detect the intent, and then perform an action. The exam may describe the whole application but ask which service handles one specific function.

One classic trap is confusing speech recognition with conversational understanding. If the system needs to turn audio into written words, that is speech recognition. If it needs to determine whether the user wants to reset a password or check an order status, that is intent recognition in a conversational language solution. Another trap is mixing translation and speech translation. If the question is clearly about text being converted between languages, think Translator. If it emphasizes spoken input and output across languages, Speech may be involved.

Exam Tip: Separate modality from meaning. Speech services handle the audio modality. Language understanding handles the meaning of words once captured. Translation changes language. Text-to-speech generates audio output. If you answer in that order, you will often identify the correct option quickly.

Expect AI-900 to stay at a conceptual level. You do not need to memorize API names. You do need to recognize where each service fits in business scenarios such as voice bots, multilingual websites, accessibility applications, meeting transcription, and automated call routing. Questions often reward candidates who read carefully enough to notice whether the requirement is hearing, understanding, speaking, or translating.

Section 5.4: Generative AI workloads on Azure objective overview and foundation model concepts

Generative AI workloads differ from traditional NLP because the system does not merely classify or extract information; it creates new content based on patterns learned from very large datasets. On AI-900, you should understand that generative AI can produce text, code, images, summaries, and conversational responses. In Azure-focused exam language, this is commonly associated with Azure OpenAI and related copilot-style solutions.

A foundation model is a large pre-trained model that can be adapted or prompted for many tasks. Rather than building separate models for each narrow language task, organizations can use one powerful model to summarize documents, draft emails, answer questions, generate product descriptions, or support chat experiences. This broad capability is what makes foundation models different from task-specific NLP services. However, broad capability does not mean unlimited reliability. The exam expects you to know that generative outputs can be inaccurate, biased, or unsafe if not properly governed.

Typical business scenarios include internal knowledge assistants, customer support copilots, document summarization, drafting marketing content, coding assistance, and natural language interfaces over enterprise data. A key exam distinction is whether the requirement is deterministic extraction versus flexible generation. If a question asks for extracting named entities from thousands of insurance claims, Azure AI Language is more appropriate. If it asks for generating a natural-language summary of each claim for an adjuster, generative AI may be the better fit.

Exam Tip: Foundation models are not the same as traditional machine learning models trained from scratch for one task. On AI-900, think of them as large reusable models that can be prompted and adapted for many generative scenarios. If the question emphasizes versatility across multiple tasks, that is a clue.

Another exam trap is assuming generative AI replaces all other AI services. It does not. Azure still offers specialized services for translation, speech, and text analytics because those are often more targeted, predictable, and efficient for specific tasks. The best answer on the exam is usually the most direct and appropriate service for the stated need, not simply the newest or most advanced technology.

Section 5.5: Prompts, copilots, Azure OpenAI concepts, and responsible generative AI practices

A prompt is the instruction or context you provide to a generative model to guide its output. On AI-900, you should know that prompt quality matters. Clear prompts usually produce more useful results. Prompts can specify tone, format, context, constraints, and the desired task. For example, a business might instruct a model to summarize support tickets in bullet points for managers. The exam does not require advanced prompt engineering methods, but it does expect you to understand that model behavior is influenced by the prompt.
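As a concrete illustration of the elements a prompt can specify, here is a hypothetical prompt assembled from a task, format, tone, constraint, and context. All wording is invented for this sketch; no particular model or product expects this exact layout.

```python
# Hypothetical prompt showing the elements named above: task, format,
# tone, constraints, and context. The wording is invented for illustration.
prompt = "\n".join([
    "Task: Summarize the support tickets below for managers.",
    "Format: 3 to 5 bullet points.",
    "Tone: Neutral and concise.",
    "Constraint: Do not include customer names.",
    "Context: <support tickets go here>",
])
print(prompt.splitlines()[0])  # Task: Summarize the support tickets below for managers.
```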

Copilots are AI assistants embedded into workflows to help users complete tasks more efficiently. They might draft content, answer questions, retrieve information, or suggest next steps. In Azure and Microsoft ecosystems, a copilot often combines a foundation model with business context, user input, and safeguards. The key concept for the exam is augmentation rather than replacement. Copilots assist people; they do not remove the need for review or accountability.

Azure OpenAI provides access to powerful generative models within Azure’s enterprise environment. Conceptually, AI-900 expects you to know that Azure OpenAI can support chat, summarization, content generation, and natural language interaction. You should also know that responsible use is essential. Generative systems can hallucinate, meaning they produce incorrect but plausible content. They can also reflect bias, expose sensitive information if poorly designed, or produce unsafe outputs. That is why responsible AI controls matter.

  • Use content filtering and moderation to reduce harmful outputs.
  • Include human review for high-impact decisions or public-facing content.
  • Protect privacy and avoid exposing sensitive data in prompts or outputs.
  • Ground responses in trusted data sources when accuracy matters.
  • Be transparent that users are interacting with AI-generated content.
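The checklist above can be pictured as a single pipeline. The sketch below is purely conceptual: every function is a stand-in, and a real solution would use managed content filters, retrieval over approved data sources, and proper review workflows rather than a hard-coded blocklist.

```python
# Conceptual sketch of the responsible-AI checklist as one pipeline.
# All names and the blocklist are placeholders, not a real implementation.
BLOCKED_TERMS = {"password", "social security"}  # placeholder filter list

def passes_filter(text: str) -> bool:
    # Stand-in for a managed content filtering / moderation service.
    return not any(term in text.lower() for term in BLOCKED_TERMS)

def answer_with_guardrails(prompt: str, generate, high_impact: bool = False) -> str:
    if not passes_filter(prompt):
        return "blocked: input filtering"
    draft = generate(prompt)  # stand-in for a grounded foundation-model call
    if not passes_filter(draft):
        return "blocked: output filtering"
    if high_impact:
        return "pending human review: " + draft  # human oversight step
    return "[AI-generated] " + draft  # transparency marker for users

print(answer_with_guardrails("Summarize the leave policy", lambda p: "Staff accrue 20 days."))
# [AI-generated] Staff accrue 20 days.
```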

Exam Tip: If a scenario asks how to improve reliability in a generative AI solution, look for answers involving human oversight, grounding, content filtering, and responsible AI practices rather than assuming the model is always correct.

A common trap is selecting generative AI for regulated or high-stakes decisions without safeguards. The exam often frames the correct answer around responsible design, not just capability. If one answer is technically impressive but unsafe, and another includes proper governance and oversight, the responsible option is usually the better exam choice.

Section 5.6: Exam-style scenarios comparing NLP services and generative AI solutions

This section is about how to think like the exam. AI-900 questions in this area often compare similar-sounding services and ask you to choose the best fit for a scenario. The fastest strategy is to identify the primary user outcome. If the requirement is to detect how customers feel, choose sentiment analysis. If it is to translate product descriptions, choose Translator. If it is to turn spoken meetings into text, choose Speech. If it is to generate a first draft of a policy summary, choose a generative AI solution such as Azure OpenAI.

Consider the difference between predefined understanding and open-ended generation. A support bot that needs to map user requests into known actions like cancel booking or update address is a conversational language understanding scenario. A knowledge assistant that answers broad employee questions using natural conversation is more aligned with generative AI, especially if it summarizes or synthesizes content. The exam may intentionally include chatbot language in both cases, so do not let the word bot determine your answer by itself.

Another common comparison is between text analytics and summarization by a generative model. If the business needs structured extraction at scale, such as entities, sentiment scores, or key phrases, standard NLP services are usually the intended answer. If the business wants a natural-language summary or draft, generative AI is more likely.

Exam Tip: Extraction is not generation. Classification is not conversation. Translation is not language detection. These distinctions resolve many exam questions.

When two answers seem plausible, ask which one is more specific and direct. Microsoft certification questions often reward the least overengineered option. For example, using a foundation model to identify a person name in text is possible, but entity recognition is the purpose-built capability. Likewise, using generative AI for translation is possible, but Translator is the direct Azure service match for the requirement.

Finally, remember the exam mindset: do not answer based on what could work in the real world after customization. Answer based on the Azure service or AI concept that most closely matches the business requirement stated in the prompt. That discipline is especially important in this chapter because NLP and generative AI solutions can overlap. The best candidates are the ones who can see the overlap, but still choose the most exam-appropriate answer.

Chapter milestones
  • Explain natural language processing fundamentals and service options
  • Recognize speech, language, and translation workloads on Azure
  • Understand generative AI concepts, prompts, and copilots
  • Practice exam-style questions on NLP and generative AI workloads
Chapter quiz

1. A retail company wants to analyze thousands of customer reviews to determine whether each review expresses a positive, neutral, or negative opinion. Which Azure AI capability should the company use?

Correct answer: Sentiment analysis in Azure AI Language
Sentiment analysis in Azure AI Language is the correct choice because the requirement is to classify opinion in text as positive, neutral, or negative. Speech recognition is incorrect because it converts spoken audio to text rather than analyzing written opinions. Azure OpenAI can generate text, but this scenario is asking for a standard NLP classification task, not a generative AI workload.

2. A support center needs to convert recorded phone conversations into written transcripts for later review. Which Azure AI service is the best match for this requirement?

Correct answer: Azure AI Speech
Azure AI Speech is the best match because the primary requirement is speech-to-text transcription. Azure AI Translator is used to translate text or speech between languages, not simply transcribe audio in the same language. Azure AI Language focuses on text-based analysis tasks such as sentiment, entity recognition, and key phrase extraction after text already exists.

3. A global organization wants its website chatbot to translate customer questions and responses between English, French, and Japanese in real time. Which Azure AI service should be used for the core translation requirement?

Correct answer: Azure AI Translator
Azure AI Translator is correct because the main requirement is language translation between multiple languages. Azure OpenAI is designed for generative AI tasks such as drafting, summarizing, and chat generation, but it is not the primary Azure service for translation-focused exam scenarios. Azure AI Language provides text analytics capabilities, but translation is a separate specialized workload.

4. A company wants to build a copilot that can draft email replies and summarize internal documents based on user prompts. Which Azure service should they primarily evaluate?

Correct answer: Azure OpenAI
Azure OpenAI is the correct answer because drafting replies and summarizing documents from prompts are generative AI tasks that rely on foundation models. Azure AI Speech would only be appropriate if the main need involved spoken input or output. Azure AI Language is used for analytical NLP tasks such as sentiment analysis or entity extraction, not for broad text generation and copilot-style responses.

5. A financial services firm is implementing a generative AI assistant for employees. The solution must reduce the risk of harmful responses and ensure outputs are based on approved company data when possible. Which approach best aligns with responsible AI guidance for this scenario?

Correct answer: Use grounding with approved data sources and apply content moderation with human oversight
Using grounding with approved data sources and applying content moderation with human oversight best reflects AI-900 responsible AI guidance. It helps improve relevance, reduce unsafe output, and supports accountability. Allowing unrestricted public data access is incorrect because it can increase privacy, security, and reliability risks. Relying only on longer prompts is also incorrect because prompts alone do not guarantee factual accuracy, safety, or compliance; responsible AI controls are still needed.

Chapter 6: Full Mock Exam and Final Review

This chapter brings the entire AI-900 course together into a final exam-prep workflow. By this point, you have studied the official objective domains: AI workloads and business scenarios, machine learning fundamentals on Azure, computer vision, natural language processing, and generative AI. The goal now is not to learn brand-new material, but to sharpen recognition, eliminate confusion between similar services, and build the confidence to answer quickly and accurately under exam conditions. This is where full mock exam practice becomes valuable. A good mock exam is not just a score report; it is a diagnostic tool that reveals whether you truly understand how Microsoft words beginner-level AI questions and whether you can map scenarios to the correct Azure AI capability.

The AI-900 exam is broad rather than deep. That means many candidates lose points not because the content is technically advanced, but because they misread small wording differences. The exam often tests whether you can distinguish between categories such as machine learning versus generative AI, computer vision versus OCR-specific tasks, or translation versus sentiment analysis. It also checks whether you understand what Azure AI services are designed to do at a high level. In this chapter, the mock exam portions help you simulate test conditions, while the review sections help you analyze why a particular answer is correct and why attractive distractors are wrong.

As you work through Mock Exam Part 1 and Mock Exam Part 2, focus on pattern recognition. Ask yourself what the question is really testing: a business scenario, a service name, a responsible AI principle, or a workload type. On AI-900, the best strategy is usually to identify the business need first, then match it to the Azure service or concept that most directly solves that need. If a scenario asks about predicting values from historical data, think machine learning. If it asks about extracting text from an image, think OCR within Azure AI Vision. If it asks about generating new content from prompts, think generative AI and Azure OpenAI Service concepts. Exam Tip: When two answer choices both seem plausible, prefer the one that directly addresses the stated requirement with the least extra complexity. AI-900 rewards clear service-to-scenario matching, not overengineering.

The final sections of this chapter also address weak spot analysis and exam-day readiness. Many learners finish content review but never stop to ask which domain still feels uncertain. That is a mistake. A strong final week should not be spent rereading everything equally. Instead, use your mock results to identify the exact domains where you still confuse terminology, and then revise those aggressively. The last lesson in this chapter gives you an exam-day checklist and a practical mindset plan so that knowledge is not lost to stress or poor pacing. By the end of this chapter, you should be prepared not only to recall facts, but to apply exam strategy like a certification candidate who understands how the AI-900 is designed.

Practice note for Mock Exam Part 1, Mock Exam Part 2, Weak Spot Analysis, and the Exam Day Checklist: in each lesson, document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 6.1: Full mixed-domain mock exam covering all official AI-900 objectives
Section 6.2: Detailed answer review for Describe AI workloads and ML on Azure
Section 6.3: Detailed answer review for computer vision and NLP workloads on Azure
Section 6.4: Detailed answer review for generative AI workloads on Azure
Section 6.5: Weak area remediation plan, last-week revision, and memory aids
Section 6.6: Final review, exam-day checklist, confidence tips, and next certification steps

Section 6.1: Full mixed-domain mock exam covering all official AI-900 objectives

Your full mock exam should feel like the real AI-900 experience: mixed topics, straightforward wording with occasional traps, and constant switching between concepts. That mixed structure matters because the actual exam does not group every question neatly by domain. One item may ask you to identify a suitable AI workload for a business process, while the next may shift to Azure Machine Learning, then to computer vision, then to responsible AI, and then to generative AI. Practicing in this format teaches you to reset your thinking quickly and identify the domain from clues in the scenario.

As you take Mock Exam Part 1 and Mock Exam Part 2, train yourself to classify each item before deciding on an answer. Look for signal words. Terms such as prediction, classification, training data, and inferencing usually indicate machine learning. Terms such as image analysis, facial detection, optical character recognition, and video insights point to computer vision. Terms such as key phrases, language detection, entity recognition, speech synthesis, and translation indicate NLP. Terms such as prompts, generated output, copilots, foundation models, and grounded responses suggest generative AI. If the question focuses on fairness, transparency, accountability, reliability, privacy, inclusiveness, or safety, it is testing responsible AI concepts rather than a specific service feature.
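The signal-word habit described above can be made concrete with a toy sketch. This is purely illustrative study code, not an Azure API: the domain names and word lists below are assumptions chosen to mirror the terms listed in this section.

```python
# Toy illustration (not an Azure service): classify a practice question into
# an AI-900 domain by counting the signal words described above.
SIGNAL_WORDS = {
    "machine learning": ["prediction", "classification", "training data", "inferencing"],
    "computer vision": ["image analysis", "facial detection", "optical character recognition", "video"],
    "nlp": ["key phrases", "language detection", "entity recognition", "translation", "speech"],
    "generative ai": ["prompt", "generated output", "copilot", "foundation model", "grounded"],
    "responsible ai": ["fairness", "transparency", "accountability", "inclusiveness", "privacy"],
}

def classify_question(text: str) -> str:
    """Return the domain whose signal words appear most often in the text."""
    text = text.lower()
    scores = {domain: sum(word in text for word in words)
              for domain, words in SIGNAL_WORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_question("Which service extracts key phrases and performs language detection?"))
# → nlp
```

If you can perform this classification step in your head within a few seconds of reading a question, you are ready for the mixed-domain pacing of the real exam.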

A common trap in mixed-domain practice is overthinking service names. AI-900 is an entry-level exam, so the test usually measures whether you know which Azure offering fits the use case at a foundational level. You do not need architect-level deployment knowledge. For example, if a scenario describes training a model from historical data and deploying it for predictions, the exam may simply be checking whether you understand the machine learning lifecycle on Azure. Exam Tip: If you notice yourself debating advanced implementation details, pause and ask whether the exam objective is actually much simpler than the answer choices make it seem.

After completing a mock exam, spend at least as much time reviewing as you did answering. Categorize errors into three buckets: knowledge gaps, misreads, and second-guessing. Knowledge gaps mean you did not know the concept. Misreads mean you knew the material but missed a key word like generate versus classify, detect versus analyze, or train versus infer. Second-guessing errors happen when you initially selected the right concept but switched due to uncertainty. This review process is critical because it tells you whether your next study session should focus on content, reading discipline, or confidence.

  • Mark each missed question by objective domain.
  • Write a one-line reason why the correct answer was correct.
  • Write a one-line reason why your chosen answer was wrong.
  • Review patterns instead of isolated mistakes.

The purpose of a full mixed-domain mock exam is not only to estimate your score, but to improve your exam recognition skills across all official AI-900 objectives. Treat every mock as a rehearsal for the real exam environment.

Section 6.2: Detailed answer review for Describe AI workloads and ML on Azure

This review area covers two foundational domains that often appear early in the exam: describing AI workloads and business scenarios, and explaining machine learning fundamentals on Azure. For AI workloads, the exam expects you to recognize common categories such as anomaly detection, forecasting, computer vision, natural language processing, conversational AI, and generative AI. The key is to match the business goal to the workload type. If a company wants to detect unusual credit card behavior, that is anomaly detection. If it wants to estimate future sales, that is forecasting. If it wants software to answer user questions in natural language, that may point to conversational AI or generative AI depending on whether it is retrieving or generating responses.

For machine learning, AI-900 tests conceptual understanding, not algorithm mathematics. You should be comfortable with the ideas of training, validation, inference, features, labels, and models. You should also recognize the major machine learning types: supervised learning for predictions based on labeled data, unsupervised learning for finding patterns in unlabeled data, and reinforcement learning for reward-driven decision behavior. The Azure-specific angle often involves knowing that Azure Machine Learning supports creating, training, managing, and deploying models. The exam may also test whether you understand that inference is the process of using a trained model to make predictions on new data.
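To make the training-versus-inference distinction tangible, here is a deliberately tiny sketch of the supervised learning lifecycle. It is not Azure Machine Learning code; the one-nearest-neighbor "model" and the study-hours data are invented for illustration.

```python
# Toy sketch of the supervised learning lifecycle: training stores labeled
# examples, inference predicts a label for new, unseen data.
def train(examples):
    """'Training' here simply memorizes (feature, label) pairs."""
    return list(examples)  # the 'model' is the stored training set

def infer(model, feature):
    """Inference: predict the label of the closest training example."""
    closest = min(model, key=lambda ex: abs(ex[0] - feature))
    return closest[1]

# Feature: hours of study; label: pass/fail (labeled data -> supervised learning)
model = train([(1, "fail"), (2, "fail"), (8, "pass"), (10, "pass")])
print(infer(model, 9))  # → pass
```

On the exam, "train" maps to learning from historical labeled data and "infer" maps to making predictions on new data with the trained model.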

One of the most common traps is confusing rule-based logic with machine learning. If the scenario describes explicit if-then rules created by a developer, that is not machine learning. Machine learning learns patterns from data. Another trap is mixing up classification and regression. Classification predicts categories, such as approved or denied, spam or not spam. Regression predicts numeric values, such as price, demand, or temperature. Exam Tip: If the output is a number on a continuous scale, think regression. If the output is one of several labels, think classification.
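The classification-versus-regression contrast can be summarized in two lines of toy code. The thresholds and coefficients below are hypothetical stand-ins for what a trained model would learn, not real model outputs.

```python
# Toy contrast: classification returns a category, regression returns a number.
def classify_loan(income: float) -> str:
    """Classification: the output is one of a fixed set of labels."""
    return "approved" if income >= 50_000 else "denied"

def predict_price(square_feet: float) -> float:
    """Regression: the output is a number on a continuous scale."""
    return 50.0 + 0.2 * square_feet  # hypothetical learned coefficients

print(classify_loan(62_000))  # → approved (a label)
print(predict_price(1_000))   # → 250.0 (a number)
```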

Responsible AI principles can also appear inside ML questions. Microsoft emphasizes fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. On the exam, these principles are often tested through simple scenarios. For example, a model that performs poorly for one demographic group raises fairness concerns. A system that cannot explain how predictions are made may raise transparency issues. Be careful not to memorize terms in isolation; understand the practical meaning behind each principle.

When reviewing your mock exam answers in this domain, ask yourself whether the question was testing workload recognition, ML lifecycle knowledge, or responsible AI. If you missed the item, identify which of those subskills was weak. That precision makes remediation faster and more effective.

Section 6.3: Detailed answer review for computer vision and NLP workloads on Azure

Computer vision and natural language processing questions are popular on AI-900 because they let Microsoft test whether you can map real-world tasks to well-known Azure AI capabilities. In computer vision, expect scenarios involving image classification, object detection, facial analysis concepts, OCR, image tagging, and video understanding at a basic level. The exam generally wants you to recognize that Azure AI Vision supports analyzing images and extracting insights, including reading text from images. If the scenario is specifically about converting printed or handwritten text in an image into machine-readable text, OCR is the clue you should catch immediately.

In NLP, the exam focuses on services and tasks such as sentiment analysis, key phrase extraction, entity recognition, language detection, translation, speech-to-text, text-to-speech, and question answering concepts. Again, keep the business requirement front and center. If the goal is to identify whether customer reviews are positive or negative, that is sentiment analysis. If the goal is to convert spoken words into written text, that is speech recognition. If the goal is to translate text between languages, choose translation rather than general text analytics.
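As a mental model for what sentiment analysis does, consider this keyword-counting toy. The real exam answer for such scenarios is sentiment analysis in Azure AI Language; the word lists here are invented purely to show the input-to-label shape of the task.

```python
# Toy keyword-based sentiment scorer (NOT Azure AI Language): maps review
# text to positive / neutral / negative, the same output shape the exam tests.
POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"broken", "slow", "terrible", "refund"}

def sentiment(review: str) -> str:
    words = set(review.lower().replace(".", "").replace(",", "").split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("Great product, I love it."))  # → positive
```

Notice that this is analysis of existing text, not generation of new text, which is exactly the distinction the next paragraph warns about.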

A major trap is confusing text analysis with text generation. Text analysis means extracting insight from existing text, such as sentiment, entities, or key phrases. Text generation means creating new content, which belongs more naturally in the generative AI domain. Another trap is treating all vision questions as the same. Reading text in an image is not the same as detecting objects in an image, and both are different from generating captions or identifying tags. Exam Tip: Look for the noun that tells you what must be produced: text, labels, objects, sentiment, translation, or speech. The required output usually reveals the correct Azure capability.

The exam may also use distractors that are related but incomplete. For example, a service that can analyze images generally does not replace a speech service, and a translation capability does not perform sentiment scoring. Eliminate options that solve only part of the problem or that belong to a different modality. Modality awareness is essential: image, text, audio, and generated content are distinct categories even when they are all under the umbrella of Azure AI.

During review, group your missed items by modality confusion. If you mixed up OCR and object detection, your issue is within vision. If you confused sentiment analysis and translation, your issue is within NLP. If you confused analysis with generation, your issue crosses domains and should be reviewed as a larger conceptual distinction.

Section 6.4: Detailed answer review for generative AI workloads on Azure

Generative AI is now a visible part of AI-900, and many candidates either overestimate or underestimate what they need to know. The exam stays at a fundamentals level. You should understand what generative AI does, why prompts matter, what foundation models are in simple terms, how copilots use AI to assist users, and why responsible use is especially important when systems generate text, images, code, or summaries. Azure-specific framing often includes Azure OpenAI Service concepts and the use of large language models to produce responses based on prompts.

On the exam, generative AI questions often test distinctions. A traditional machine learning model predicts labels or values from data, while a generative AI model creates new content. A chatbot based only on fixed decision trees is not the same as a copilot powered by a language model. Prompt engineering at this level means providing clear instructions and context so the model produces useful output. You do not need deep model architecture knowledge, but you do need to understand practical concepts such as prompts, completions, grounding, and the possibility of inaccurate or fabricated responses.

One of the most important traps in this domain is assuming generated content is always correct. AI-900 expects you to understand the risk of hallucinations and the need for human oversight, validation, and responsible deployment. Another trap is confusing responsible AI principles in general with generative-specific safety issues. Generative AI introduces concerns around harmful content, misinformation, privacy, and biased outputs. Exam Tip: If an answer choice includes language about reviewing outputs, filtering unsafe content, protecting user data, or adding human oversight, that is often aligned with responsible generative AI practice.
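The grounding idea can be sketched in a few lines. This is a conceptual toy, not Azure OpenAI code: the approved-facts dictionary and refusal message are invented to show why a grounded system can decline rather than fabricate an answer.

```python
# Toy sketch of 'grounding': answer only from an approved knowledge source and
# refuse otherwise, rather than letting a model invent (hallucinate) an answer.
APPROVED_FACTS = {
    "vacation policy": "Employees receive 20 days of paid vacation per year.",
    "expense limit": "Meals are reimbursable up to 50 USD per day.",
}

def grounded_answer(question: str) -> str:
    for topic, fact in APPROVED_FACTS.items():
        if topic in question.lower():
            return fact
    return "I can't answer that from approved company data."  # safe refusal

print(grounded_answer("What is the vacation policy?"))
```

On the exam, answer choices that mention grounding in approved data, filtering output, or human review usually reflect this same pattern.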

You should also understand that copilots are not a separate magical category of AI; they are applications that use AI models to help users complete tasks such as drafting, summarizing, searching, or answering questions. On the exam, if the scenario describes assistance embedded into a workflow, that points toward a copilot experience. If it emphasizes broad pretrained capabilities reused across many tasks, that points toward foundation models.

When reviewing mock exam answers here, ask whether you missed the question because of terminology or because you still mentally blend generative AI with NLP. Generative AI may involve language, but not every language task is generative. That distinction is one of the most exam-relevant ideas in this chapter.

Section 6.5: Weak area remediation plan, last-week revision, and memory aids

Your final week should be strategic, not random. Start by reviewing your results from Mock Exam Part 1 and Mock Exam Part 2. Rank each objective area as strong, moderate, or weak. Strong areas need light review to maintain speed and confidence. Moderate areas need targeted question practice and service comparison review. Weak areas need focused remediation using short study blocks, not passive rereading. For AI-900, the highest-value remediation usually comes from clarifying differences between similar concepts and services rather than trying to memorize every possible product detail.

Create a simple remediation grid. In one column, list the concept you missed. In the second, write the correct definition in your own words. In the third, write the common confusion pair. For example: regression versus classification, OCR versus object detection, sentiment analysis versus translation, machine learning versus generative AI, fairness versus transparency. This forces active recall and improves discrimination, which is exactly what the exam tests.
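If you prefer a digital version of the grid, a minimal sketch looks like this; every entry below is illustrative, and you would replace them with the concepts you actually missed.

```python
# Minimal sketch of the remediation grid described above, kept as data so it
# can drive active-recall drills (all entries are illustrative examples).
REMEDIATION_GRID = [
    # (missed concept, definition in your own words, common confusion pair)
    ("regression", "predicts a numeric value on a continuous scale", "classification"),
    ("OCR", "extracts machine-readable text from images", "object detection"),
    ("sentiment analysis", "scores the opinion expressed in text", "translation"),
]

def drill(grid):
    """Return (concept, confusion pair) prompts for self-testing."""
    return [(concept, confused_with) for concept, _definition, confused_with in grid]

for concept, confused_with in drill(REMEDIATION_GRID):
    print(f"Explain {concept} and how it differs from {confused_with}")
```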

Memory aids can help if they are tied to meaning. Think of machine learning as learn from data, then infer on new data. Think of computer vision as see and interpret images. Think of NLP as understand or transform human language. Think of generative AI as create new content from prompts. Think of responsible AI as use AI safely, fairly, and transparently. Exam Tip: The best memory aid is one that helps you eliminate wrong answers, not just recognize right ones.

  • Spend one day reviewing AI workloads and responsible AI principles.
  • Spend one day reviewing ML concepts and Azure Machine Learning basics.
  • Spend one day reviewing computer vision and OCR distinctions.
  • Spend one day reviewing NLP tasks including speech and translation.
  • Spend one day reviewing generative AI, copilots, prompts, and safety.
  • Use the final day for a light mixed review and rest.

Avoid the trap of cramming new details the night before the exam. Your goal in the last week is pattern clarity. If you can quickly map business scenarios to workload types and core Azure AI services, you are in the right place for AI-900 success.

Section 6.6: Final review, exam-day checklist, confidence tips, and next certification steps

In your final review, return to the official AI-900 outcomes and make sure you can explain each one in plain language. Can you describe common AI workloads and when businesses use them? Can you explain training versus inference in machine learning? Can you identify the right Azure tools for computer vision and NLP scenarios? Can you distinguish generative AI from traditional predictive systems? If you can answer those questions clearly, you are aligned with the exam at the right depth.

Your exam-day checklist should include both technical and mental preparation. Confirm the exam time, testing environment, identification requirements, and any online proctoring rules if applicable. Arrive early or log in early. Have a calm routine. Read each question slowly enough to catch the action word and desired output. Eliminate clearly wrong answers before comparing the remaining choices. If a question seems unfamiliar, classify the domain first, then look for the answer that best matches the scenario at a fundamentals level. Exam Tip: Do not let one difficult question damage your pacing. Mark it mentally, make your best choice, and move on.

Confidence on AI-900 comes from recognizing that this is a foundational exam. It is designed to validate broad understanding, not specialist engineering depth. Most mistakes come from rushing, mixing up similar services, or ignoring key wording in the scenario. Trust the basic concept you learned. If a business wants to analyze images, choose the vision-related option. If it wants to generate text from prompts, choose the generative AI option. If it wants to predict future numeric values from historical data, choose regression-based ML thinking.

After passing AI-900, consider your next certification step based on your career direction. If you want deeper Azure data and AI implementation skills, you may move toward role-based Azure certifications in data, AI engineering, or machine learning. AI-900 gives you the vocabulary and conceptual framework to do that. Even if this is your first certification, treat it as a foundation rather than a finish line. A pass proves you understand the language of modern AI on Azure and can connect business scenarios to the right fundamental solutions.

Finish this chapter with a final confidence statement: you do not need perfection, you need readiness. Use the mock exam results, review your weak spots, follow the checklist, and sit the exam with a clear, steady approach.

Chapter milestones
  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist
Chapter quiz

1. A company wants to build a solution that predicts next month's sales based on historical transaction data. Which type of AI workload should they use?

Correct answer: Machine learning
Machine learning is correct because predicting a numeric value from historical data is a classic forecasting scenario covered in the AI-900 machine learning domain. Computer vision is incorrect because it focuses on analyzing images and video. Conversational AI is incorrect because it is used for chatbot and speech-based interactions, not predictive modeling from tabular business data.

2. A retailer wants to extract printed text from photos of receipts submitted by customers. Which Azure AI capability best matches this requirement?

Correct answer: Optical character recognition (OCR) in Azure AI Vision
OCR in Azure AI Vision is correct because the requirement is to read and extract text from images, which is a core computer vision task in AI-900. Sentiment analysis is incorrect because it evaluates the emotional tone of text after the text is already available. Language detection is incorrect because it identifies the language of text, but it does not extract text from an image in the first place.

3. A business wants an application that creates draft marketing email content from a user's prompt. Which Azure AI concept best fits this scenario?

Correct answer: Generative AI using Azure OpenAI Service
Generative AI using Azure OpenAI Service is correct because the scenario requires creating new text content from prompts, which is a key generative AI use case in AI-900. Regression using Azure Machine Learning is incorrect because regression predicts numeric values rather than generating natural language. Face detection using Azure AI Vision is incorrect because it analyzes images for human faces and does not generate text.

4. During practice exams, a candidate repeatedly confuses translation questions with sentiment analysis questions. What is the best final-review strategy based on AI-900 exam preparation guidance?

Correct answer: Focus revision on the weak domain and practice distinguishing the service scenarios
Focusing revision on the weak domain and practicing service-to-scenario matching is correct because Chapter 6 emphasizes weak spot analysis and targeted review rather than treating all topics equally. Rereading every chapter equally is less effective because it does not prioritize the areas causing mistakes. Taking more random mock exams only is also not the best choice because without targeted correction, the same confusion between translation and sentiment analysis may continue.

5. You are taking the AI-900 exam and encounter a question where two Azure services seem plausible. According to recommended exam strategy, what should you do first?

Correct answer: Identify the business requirement and select the service that addresses it most directly with the least extra complexity
Identifying the business requirement first and choosing the most direct service is correct because AI-900 typically tests clear mapping between scenarios and Azure AI capabilities, not overengineering. Choosing the most advanced service is incorrect because beginner-level exam questions usually reward the simplest correct match. Selecting the broadest feature set is also incorrect because exam wording often includes enough detail to point to a specific capability rather than a more complex or general-purpose option.