AI Certification Exam Prep — Beginner
Everything you need to pass AI-900—explained simply, practiced thoroughly.
This beginner-friendly course is built for non-technical professionals who want to pass the Microsoft AI-900: Azure AI Fundamentals certification exam. You don’t need prior certifications or hands-on coding experience—just basic IT literacy and a willingness to practice with exam-style questions. The course is organized as a 6-chapter book so you always know what to study next, why it matters, and how it appears on the exam.
Every chapter maps directly to Microsoft’s published exam objectives. You’ll learn the “what” and “why” behind core Azure AI concepts, then reinforce them with scenario-based practice that mirrors the tone and structure of real AI-900 questions.
Chapter 1 gets you ready for success before you even begin content study: exam registration and scheduling, scoring expectations, question types, and a practical study strategy designed for beginners. You’ll also learn a repeatable approach for eliminating distractors and managing time.
Chapters 2–5 each focus on one or two exam domains. The goal is decision-making, not memorization: you’ll learn to identify the workload described in a scenario and select the most appropriate Azure approach. Each chapter ends with exam-style practice so you can confirm readiness before moving on.
Chapter 6 is a full mock exam experience plus final review. You’ll take two timed mock exam parts, review answer rationales, and run a “weak spot” analysis so the final days of study are targeted and efficient. You’ll also get an exam-day checklist that covers environment setup, pacing, and common traps.
AI-900 rewards clear understanding of fundamentals: knowing what a workload is, what a service family does, and how to reason from a business scenario to the right AI approach. This course is designed around that reality. You’ll build a mental map of AI workloads, learn the essential ML lifecycle concepts, and practice selecting among vision, NLP, and generative AI solutions in the same way the exam asks.
If you’re new to certification exams, start by setting your target date and following the chapter milestones in order. To begin learning on Edu AI, select Register free; if you’d like to compare learning paths first, you can also browse all courses.
By the end of this course, you’ll be able to explain each AI-900 domain in plain language, interpret scenario questions accurately, and walk into exam day with a clear plan.
Microsoft Certified Trainer (MCT)
Jordan Whitaker is a Microsoft Certified Trainer who has coached beginners to pass Microsoft fundamentals exams through practical, exam-aligned study plans. He specializes in translating Azure AI concepts into clear decision frameworks that match real AI-900 question patterns.
AI-900 is designed to validate that you can talk about AI clearly, choose the right type of AI workload for a business problem, and recognize which Azure services typically support that workload. This chapter orients you to the exam format, logistics, and the study workflow you’ll use across the course. As a non-technical professional, your advantage is that the exam often rewards good problem framing: what the goal is, what data is available, what “success” means, and what responsible AI risks must be managed.
Across this course’s six chapters, you will learn to recognize core AI scenarios (machine learning, computer vision, natural language processing, and generative AI), explain what they do in plain language, and connect them to common Azure service families. You’ll also learn how Microsoft writes questions: the stem often hides the real objective in a single keyword (for example, “extract text,” “classify sentiment,” “detect objects,” “predict a number,” or “generate an answer”).
Exam Tip: Treat AI-900 as a “matching” exam. Your job is frequently to match scenario → AI workload type → best-fit Azure service category → responsible AI considerations. You do not need to code, but you must be precise with terminology.
This chapter covers five practical tasks you should complete early: understand the exam format, decide online vs. test center delivery, set checkpoints for learning, learn how to read Microsoft-style questions, and build a week-by-week plan that fits your calendar.
Practice note for Understand the AI-900 exam format, timing, and scoring: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Register, schedule, and choose online vs test center delivery: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Set up your study workflow and learning checkpoints: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Learn how to read Microsoft-style exam questions: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Build your personalized week-by-week study plan: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
AI-900 measures foundational AI literacy in an Azure context. The exam is built for candidates who need to understand AI workloads and how they are delivered in Azure—without requiring programming or data science depth. That makes it a strong fit for business analysts, project managers, sales/marketing roles, product owners, compliance partners, and leaders who collaborate with technical teams.
What the exam tests is your ability to describe common AI scenarios and benefits (automation, insights, personalization), and to explain tradeoffs and constraints. Expect frequent emphasis on responsible AI: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These topics appear as “what should you do” or “what risk is most relevant” questions, especially in scenarios involving people (hiring, lending, healthcare, education) or sensitive data.
Common trap: over-rotating into implementation details. For example, many candidates think they must choose an algorithm (like logistic regression) rather than a workload category (classification) and its evaluation measures. The exam typically wants the concept, not the code. Another trap is confusing “AI” as a single tool—AI-900 separates machine learning, computer vision, NLP, and generative AI into distinct workload families.
Exam Tip: When you read a scenario, first classify it as: prediction of a number (regression), choosing a label (classification), grouping similar items (clustering), finding anomalies, extracting meaning from text (NLP), understanding images (vision), or generating new content (generative AI). Only after that should you think about which Azure service family supports it.
You’ll register for AI-900 through Microsoft’s certification portal, which routes you to the exam delivery provider. The two main delivery options are online proctoring (take the exam from home/office) and a test center. Your choice should be based on risk management, not convenience alone.
Online delivery is efficient, but it is strict: you typically need a private room, a clean desk, reliable internet, and a supported device. Background noise, extra monitors, phones, notes, or even frequent glances away from the screen can trigger warnings. Test centers reduce environmental risk but require travel and fixed schedules.
Accommodations are available if you need them (for example, extra time). Apply early—approval can take time, and you don’t want your study plan to collide with admin delays. For identification, plan on presenting a valid, government-issued ID that matches your registration name. Name mismatches are a classic “avoidable fail” because they can prevent you from starting the exam.
Exam Tip: If you choose online proctoring, run the system test well before exam day. Do it again the day before. Many candidates lose time and composure to last-minute browser permissions, webcam issues, corporate VPN policies, or locked-down work laptops.
Scheduling strategy: pick a date that forces commitment but still allows at least two full review cycles. Your goal is not to “finish content” but to complete content plus spaced review and practice under time constraints.
AI-900 is a timed, proctored exam with a mix of question formats. Microsoft-style questions often include single-answer multiple choice, multiple-response (“choose all that apply”), drag-and-drop matching, and scenario-based items. Some exams include case-study style sets where multiple questions refer to the same scenario. Regardless of format, you’re assessed on applied understanding: the correct option is the one that best satisfies the stated requirement and constraints.
Scoring is scaled rather than a simple percentage, and the passing score is published by Microsoft (commonly 700 on a 1–1000 scale). Don’t waste energy trying to reverse-engineer the scale. Focus on accuracy across domains and avoid preventable mistakes like misreading “most cost-effective,” “least administrative effort,” or “requires no training data.”
Retake policies exist, but your study plan should assume you pass on the first attempt. Retakes cost time and attention, and the biggest barrier is often motivation rather than capability.
Common traps in question types: (1) multiple-response items where one “extra” selection makes the whole answer wrong; (2) matching questions where two options sound similar (for example, OCR vs. image classification); and (3) scenarios that imply a constraint like “no labeled data,” which points away from supervised learning.
Exam Tip: Before looking at options, restate the question in your own words in one sentence: “They want to extract printed text from images,” or “They want to detect objects and their bounding boxes.” That sentence becomes your anchor when distractors appear.
Beginners often try to study by re-reading notes or watching videos repeatedly. That feels productive, but it produces weak recall under exam pressure. Your workflow in this course will use three evidence-based tactics: spacing (review over time), active recall (retrieve from memory), and practice loops (apply concepts to scenarios repeatedly).
Spacing: Instead of cramming, schedule short reviews across days. After each chapter, do a 10–15 minute “next-day recap” and a 10–15 minute “next-week recap.” This matters because AI-900 has overlapping concepts (for example, responsible AI appears in every domain; classification vs. regression shows up again in vision and NLP examples).
Active recall: Use a one-page “workload map” you build as you study. For each workload (ML, vision, NLP, generative AI), you should be able to say: input data type, output type, common use cases, and typical evaluation idea (accuracy/precision/recall for classification; error for regression; BLEU-like translation quality conceptually; groundedness and safety considerations for generative AI). You don’t need formulas, but you must explain what the metric indicates.
Practice loops: Every study session should include scenario identification. You are training the exam skill of quickly recognizing what the question is really asking. This course will provide checkpoints per chapter—use them as “gates.” If you can’t explain a concept in plain language in under 30 seconds, you’re not exam-ready for that objective yet.
Exam Tip: Don’t memorize service names in isolation. Memorize them as a pair: “problem signal” → “service family.” Example: “extract text” → OCR capability; “detect objects” → object detection; “understand sentiment” → text analytics; “generate responses” → Azure OpenAI. The exam rewards mapping, not trivia.
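One way to practice that pairing is to keep it as a small lookup you quiz yourself against. The sketch below is a hypothetical Python study aid—the signal phrases and family names simply restate the pairs above; it is not an Azure API.

```python
# A minimal self-quiz sketch of the "problem signal -> service family" pairs above.
# Entries are study aids for mapping practice, not an exhaustive or official list.
SIGNAL_TO_FAMILY = {
    "extract text from an image": "OCR (Azure AI Vision)",
    "detect objects and locations": "Object detection (Azure AI Vision)",
    "understand sentiment in text": "Text analytics (Azure AI Language)",
    "transcribe call center audio": "Speech-to-text (Azure AI Speech)",
    "generate a draft response": "Generative AI (Azure OpenAI)",
    "predict churn from labeled history": "Custom model (Azure Machine Learning)",
}

def quiz(signal: str) -> str:
    """Return the service family paired with a problem signal, or a reminder to review."""
    return SIGNAL_TO_FAMILY.get(signal, "Not mapped yet -- add it to your workload map")

if __name__ == "__main__":
    for s in SIGNAL_TO_FAMILY:
        print(f"{s:40s} -> {quiz(s)}")
```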
Microsoft publishes the skills measured for AI-900 as domains that align to the course outcomes you were given: describe AI workloads and responsible AI; explain machine learning fundamentals on Azure; identify computer vision workloads; identify NLP workloads; and describe generative AI workloads. Your fastest path to a pass is to ensure coverage across domains rather than over-mastering one area.
This 6-chapter course is organized to mirror how questions appear on the exam: Chapter 1 covers logistics and study strategy, Chapters 2–5 each map to one or two exam domains, and Chapter 6 delivers the mock exams and final review.
Common trap: studying by favorite topic. AI-900 punishes “lopsided readiness.” You might feel confident in generative AI buzzwords, but still miss basic ML evaluation or confuse OCR with image classification. Your checkpoints should therefore be domain-based: you only move on when you can do basic scenario-to-workload mapping in each domain.
Exam Tip: Track readiness with a simple grid: rows = domains, columns = “define,” “recognize scenario,” “choose service family,” “responsible AI risk.” If any cell is weak, your plan should revisit it before scheduling the exam.
On exam day, your goal is execution, not learning. Start with environment control: sleep, hydration, and a calm setup. If you’re online, remove anything that could be flagged (papers, extra devices). If you’re at a test center, arrive early to avoid adrenaline spikes.
Time management: move steadily. Many AI-900 items are short, but scenario questions can slow you down. Avoid spending too long trying to “prove” an answer. If you can narrow to two options and you’re stuck, mark it (if the interface allows) and continue—your brain often resolves uncertainty after seeing later questions.
Elimination techniques are essential because distractors are designed to sound plausible. Eliminate options that violate a constraint in the stem (for example, “no labeled data” eliminates supervised training; “extract text” eliminates image classification; “generate new text” eliminates text analytics). Then choose the option that most directly satisfies the requirement with the least added assumptions.
Common trap: picking a sophisticated solution when the question asks for a fundamental one. AI-900 often favors the simplest correct workload. Another trap is confusing related outputs: classification returns a label, detection returns locations (often bounding boxes), OCR returns text, and generative AI returns newly created content that must be checked for safety and grounding.
Exam Tip: Watch for Microsoft’s qualifiers: “best,” “most appropriate,” “first,” “minimize effort,” “ensure transparency,” “reduce bias.” These words tell you the decision criterion. Underline them mentally before reviewing answers.
Finally, keep a professional mindset: you’re demonstrating that you can collaborate with technical teams by speaking accurately about AI. That is exactly what AI-900 is designed to validate.
1. You plan to take the AI-900 exam. Which statement best describes what the exam is designed to validate?
2. A training manager asks how to approach AI-900 practice questions efficiently. Which strategy best aligns with how Microsoft-style exam questions are commonly written?
3. A non-technical stakeholder is building a week-by-week plan for AI-900 and wants a simple rule for success on exam day. Which guidance best fits the AI-900 'matching' nature described in the course?
4. You are deciding between taking AI-900 online or at a test center. Which action is most appropriate to complete early to reduce exam-day risk, regardless of delivery choice?
5. You are creating a study workflow for AI-900. Which approach best supports consistent progress across multiple chapters for a busy professional?
Domain 1 of AI-900 focuses on whether you can recognize the type of AI problem being described and match it to the right approach and Azure service family. As a non-technical professional, your edge on this exam is learning the “pattern language” of AI scenarios: what the business wants, what kind of data is available, and what outcome the solution should produce. The exam is not trying to turn you into a data scientist; it’s checking that you understand common AI workload types, the plain-language differences between AI, machine learning, and deep learning, and how Responsible AI considerations show up in real deployments.
This chapter follows the same mental workflow you should use on test day: (1) identify the workload type (prediction, perception, conversation, generation), (2) decide whether AI is actually appropriate versus rules/automation, (3) translate the prompt into core ML terms (features, labels, inference), (4) apply Responsible AI principles to the scenario, and (5) map the workload to the right Azure AI family. The final section gives a practice set and a remediation plan so you know exactly what to review.
Practice note for Recognize common AI workload types and when to use them: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Differentiate AI, machine learning, and deep learning in plain language: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Apply Responsible AI concepts to real business scenarios: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Answer exam-style questions for Domain 1: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Checkpoint quiz and remediation plan: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
On AI-900, “AI workload” means the broad category of problem you’re solving. Learn four buckets, because most exam scenarios fit one clearly.
Prediction is about forecasting or scoring: “Will this customer churn?”, “What is the likely delivery delay?”, “Is this transaction fraudulent?” These are typically machine learning tasks that take structured inputs (columns in a table) and output a number or category. You’ll see clues like “predict,” “estimate,” “classify,” “risk score,” or “recommend.”
Perception is about interpreting the world via senses—most commonly images, video, or audio. Examples: detecting objects in a warehouse photo, reading text from receipts (OCR), identifying a face for access control, or describing an image for accessibility. Look for clues like “image,” “camera,” “scan,” “extract text,” “detect,” “recognize,” or “analyze video.”
Conversation is about interacting through natural language—chatbots, question answering, intent detection, summarizing customer messages, or routing support tickets. Keywords include “chat,” “bot,” “intent,” “entity,” “language understanding,” “call center,” and “customer support.”
Generation (generative AI) creates new content: drafting emails, creating marketing copy, generating code, producing images from prompts, or synthesizing a report from multiple sources. In exam terms, this often maps to large language models (LLMs) and prompting basics. Clues include “draft,” “create,” “generate,” “rewrite,” “compose,” “copilot,” and “prompt.”
Exam Tip: If the scenario needs an output that never existed before (a new paragraph, new image, new code), that’s generation, not “conversation,” even if it’s delivered via chat. Conversation is the interface; generation is the capability.
A common trap is overthinking the math. AI-900 wants recognition, not algorithm selection. First, name the workload type; then map to services later.
Not every automation problem needs AI. A frequent exam skill is deciding when rules-based logic is sufficient versus when AI/ML adds value. Rules-based solutions work well when the problem is stable, the logic is explicit, and exceptions are rare—think “if order total > $5,000 then require approval.”
AI is a better fit when (1) the relationships are complex, (2) the environment changes, (3) there’s ambiguity, or (4) you can’t write rules that cover real-world variation. Image recognition is a classic example: you could write rules for pixel patterns, but it breaks immediately with lighting, angles, and backgrounds. Similarly, detecting fraud often involves subtle patterns across many variables that evolve over time.
Trade-offs are testable: AI introduces model management, potential bias, and the need for evaluation; rules introduce maintenance burdens as edge cases grow. Another exam trap is assuming “AI is always better.” The exam often rewards the simplest effective approach.
Exam Tip: If a question mentions “frequently changing patterns,” “too many rules,” “unstructured data,” or “human-like recognition,” that is a strong signal for AI. If it mentions “compliance requires predictable logic” or “a small set of conditions,” rules-based is often the safer answer.
When you’re stuck between two options, ask: “Do we have data that represents the problem?” ML needs examples; rules need definitions. The scenario will usually hint which is available.
AI-900 expects you to understand the vocabulary used to describe machine learning solutions, even if you never train a model yourself.
A dataset is the collection of examples used for learning or evaluation. In business terms, it’s often rows of historical records. Features are the inputs—columns like age, region, last purchase date, device type, or number of support tickets. A label is what you want to predict—like “churned: yes/no” or “fraud: yes/no.” If the dataset contains labels, it’s typically used for supervised learning.
A model is the learned representation produced by training. Training is the process of finding patterns in the dataset so the model can generalize to new cases. Inference is using the trained model to make a prediction on new data—this is the “in production” moment when the business gets value.
Evaluation is another key idea: you don’t just train and deploy; you measure performance (for example, accuracy) and monitor drift. Drift means the real world changed: customer behavior, fraud tactics, seasonality, or new product lines. This is why model lifecycle concepts show up in Domain 1—because “set and forget” is an exam trap.
Exam Tip: If the question describes “predicting a value for new records,” that is inference, not training. Training uses historical labeled data; inference uses current unlabeled inputs to produce a prediction.
Common confusion: “dataset” versus “model.” The dataset is the evidence; the model is the learned artifact. If the prompt asks what to deploy to an app, you deploy a model endpoint, not the dataset.
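If code helps you anchor the vocabulary, here is a minimal sketch using scikit-learn with a made-up churn table. The column names and values are illustrative; the point is which step is training and which is inference.

```python
# A minimal sketch of features, labels, training, and inference using scikit-learn.
# The churn dataset and column values are made up for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical records: features (inputs) plus a label (what we want to predict).
history = pd.DataFrame({
    "months_as_customer": [3, 24, 1, 36, 12, 2],
    "support_tickets":    [5, 0, 7, 1, 2, 6],
    "churned":            [1, 0, 1, 0, 0, 1],   # label: 1 = churned, 0 = stayed
})
X_train = history[["months_as_customer", "support_tickets"]]  # features
y_train = history["churned"]                                   # label

# Training: the algorithm learns patterns from labeled examples and produces a model.
model = LogisticRegression().fit(X_train, y_train)

# Inference: the trained model scores new, unlabeled records.
new_customers = pd.DataFrame({
    "months_as_customer": [6, 30],
    "support_tickets":    [4, 0],
})
print(model.predict(new_customers))        # predicted labels, e.g. [1 0]
print(model.predict_proba(new_customers))  # predicted probability per class
```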
Responsible AI is directly tested in AI-900, usually through scenario questions. You’re expected to recognize which principle is at risk and what a responsible organization should consider.
Fairness means similar people should be treated similarly. In lending, hiring, or healthcare, biased training data can create unequal outcomes across groups. Watch for scenarios mentioning protected attributes (age, gender, ethnicity) or unequal approval rates.
Reliability and safety focuses on consistent performance and avoiding harm, especially in high-stakes contexts (medical triage, autonomous systems, critical infrastructure). If a system must behave predictably under unusual conditions, reliability/safety is the key principle.
Privacy and security concerns personal data protection, data minimization, secure storage, and preventing leakage of sensitive information. Exam prompts may mention PII, customer records, or regulatory requirements. Also consider prompt injection and data exfiltration risks in generative AI deployments.
Inclusiveness ensures the solution works for people with different abilities, languages, accents, or access needs. If speech recognition fails for certain accents, or an app is unusable with screen readers, inclusiveness is the target principle.
Transparency is about making it clear that AI is involved and providing understandable explanations when appropriate. Users should know when they are interacting with a bot, and decision-makers may need reasons behind a score.
Accountability means humans remain responsible: governance, auditability, and clear escalation paths. The organization must define who approves deployment, who reviews incidents, and how models are updated.
Exam Tip: Many questions include multiple “good” actions. Choose the principle that is most directly impacted by the scenario’s risk. For example, “model underperforms for one demographic” is primarily fairness; “customers weren’t told it was AI” is transparency.
AI-900 does not require deep implementation steps, but it does test whether you can match workloads to the correct Azure AI service family. Think in families rather than individual APIs.
Azure AI Services (formerly Cognitive Services) align well with perception and language tasks where you want pretrained capabilities: image analysis, OCR, speech-to-text, translation, and text analytics. Use this family when you want to add AI features without training a custom model from scratch.
Azure AI Vision supports perception workloads: analyzing images, detecting objects, and extracting text (OCR). If the scenario says “read receipts,” “scan IDs,” or “extract text from images,” Vision/OCR is a strong match.
Azure AI Language supports conversation and NLP workloads: sentiment analysis, key phrase extraction, entity recognition, summarization, and some conversational understanding patterns. If the scenario is “analyze customer emails,” this is typically Language.
Azure AI Speech supports audio-based perception/conversation: speech recognition, speech synthesis, and translation in speech contexts. Keywords: “call center audio,” “transcribe,” “text-to-speech.”
Azure AI Foundry / Azure OpenAI supports generative AI workloads: LLM-based chat, content generation, summarization, and copilots, with prompting as a core skill. If the scenario emphasizes drafting, reasoning over text, or creating new content, this is the family to think about.
Azure Machine Learning fits prediction workloads and custom model lifecycle needs: training, evaluating, deploying, and managing ML models (including MLOps). If the scenario mentions “train a model using our historical data,” “track experiments,” or “deploy to an endpoint,” that points to Azure Machine Learning.
Exam Tip: Prebuilt AI Services are the default when the task is common and well-supported (OCR, translation, sentiment). Azure Machine Learning is the default when you need custom prediction using your own labeled dataset.
This section is your test-day method, without listing full questions in the chapter text. When you see an exam scenario, do three passes: workload type, data type, and risk.
Pass 1: Identify the workload. Ask: Is the system predicting an outcome (prediction), interpreting images/audio (perception), interacting in language (conversation), or creating new content (generation)? If the scenario uses “chat,” don’t stop there—determine whether the chat is retrieving facts (conversation/NLP) or drafting new outputs (generation).
Pass 2: Identify the data. Structured tables and labeled outcomes suggest ML prediction with features/labels. Images, video, and scanned documents suggest Vision/OCR. Free-form text suggests Language. Audio suggests Speech. Mixed enterprise content with “draft a response” suggests generative AI plus grounding (and therefore stronger privacy/security considerations).
Pass 3: Apply Responsible AI. Look for signals: protected classes (fairness), safety-critical decisions (reliability/safety), PII and regulated data (privacy/security), accessibility and multilingual users (inclusiveness), user disclosure/explanations (transparency), and governance/ownership (accountability).
Common exam traps to avoid: treating every chat scenario as generative AI (conversation is the interface; generation is creating new content), assuming AI is always better than a simple rules-based solution, and confusing the dataset (the evidence) with the model (the learned artifact you actually deploy).
Remediation plan (if you miss Domain 1 items): Reclassify each missed scenario into one of the four workload types, rewrite it using the terms “features,” “labels,” and “inference,” then map it to one Azure family. Finally, name the most relevant Responsible AI principle and the business risk it prevents. This loop builds the exact recognition skill the AI-900 exam rewards.
1. A retailer wants to predict next month’s demand for each product to reduce stockouts. The company has three years of historical sales data labeled with the quantity sold per month. Which AI workload type best fits this scenario?
2. A hospital wants to automatically route incoming patient emails to the correct department (Billing, Appointments, or Medical Records). They have thousands of past emails already tagged with the correct department. Which solution approach is most appropriate?
3. You are explaining AI concepts to a business stakeholder. Which statement best describes deep learning in plain language?
4. A bank deploys an AI model to help decide whether to approve personal loans. An internal audit finds that approval rates are significantly lower for a protected demographic group, even when income and credit history are similar. Which Responsible AI principle is most directly impacted?
5. A company wants to build a customer support chatbot that can answer questions about store hours, return policies, and order status. Users should be able to type questions in natural language and receive conversational responses. Which Azure AI service family best matches this requirement?
Domain 2 of AI-900 checks whether you can describe how machine learning (ML) works at a practical level—what problems ML solves, how models are trained and evaluated, and how Azure Machine Learning (Azure ML) organizes the end-to-end workflow. You are not expected to do heavy math, but you are expected to recognize common ML terms, choose the right problem type for a scenario, and identify the “Azure ML object” being referenced (workspace, compute, experiment, endpoint, and so on).
This chapter is written for non-technical professionals, but it’s still exam-first: you’ll learn how to spot what the question is really asking, which choices are distractors, and which keywords map to the official objectives. You’ll also see where candidates commonly mix up terms like “validation vs test,” “classification vs clustering,” or “endpoint vs compute.”
As you read, focus on “if the scenario says X, the answer is Y.” That pattern-matching skill is how you score quickly and confidently on Domain 2.
Practice note for Understand supervised, unsupervised, and reinforcement learning basics: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Explain training, validation, testing, and metrics without math overload: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Describe Azure Machine Learning core concepts and typical flow: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Answer exam-style questions for Domain 2: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Mini-review: common pitfalls and must-know definitions: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
The fastest points in Domain 2 come from correctly identifying the ML problem type from a short scenario. AI-900 regularly tests whether you can map business language (“predict,” “segment,” “detect unusual”) to the right category. Most questions won’t name the algorithm; they’ll describe an outcome.
Classification predicts a category (a label). Examples: “Will the customer churn: yes/no?”, “Is this email spam or not spam?”, “Which product category fits this description?” Even when there are many possible classes (A/B/C), it’s still classification because the output is one of known labels.
Regression predicts a number. Examples: forecasting revenue, predicting house price, estimating delivery time in minutes. If the output is continuous (or treated as numeric), it’s regression. A common exam trap is “score” language: if the scenario says “risk score from 0 to 1,” it is usually regression (a numeric prediction), even though it might later be thresholded into “high/low.”
Clustering groups items into similarity-based clusters when you don’t already have labels. Examples: “segment customers into groups based on behavior,” “group news articles by topic without predefined categories.”
Anomaly detection flags rare or unusual cases compared to normal patterns. Examples: fraud detection, unusual sensor readings, unexpected network traffic. Some questions will mention “outliers,” “deviations,” or “rare events.”
Exam Tip: Look for the output type: label (classification), number (regression), groups discovered (clustering), rare/unusual (anomaly detection). If the scenario includes “historical labeled data,” that strongly signals supervised learning (classification/regression) rather than clustering.
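To make the “no labels” case concrete, here is a minimal clustering sketch using scikit-learn. The customer-behavior numbers are invented; notice the data has no label column—the groups are discovered, not predicted.

```python
# A minimal clustering sketch: grouping items by similarity with no labels provided.
# The behavior numbers are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Each row is a customer: [purchases per month, average basket size].
# There is no label column -- the clusters are discovered from similarity alone.
behavior = np.array([
    [1, 20], [2, 25], [1, 22],      # light shoppers
    [8, 60], [9, 55], [10, 65],     # heavy shoppers
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(behavior)
print(kmeans.labels_)  # cluster assignment per customer, e.g. [0 0 0 1 1 1]
```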
Reinforcement learning (RL) sometimes appears in the objective list, but on AI-900 it’s typically conceptual: an agent learns by trial and error with rewards (e.g., robotics, game playing). If a question describes “actions,” “rewards,” and “environment,” that’s RL—otherwise it’s usually not the best fit.
AI-900 expects you to understand the model lifecycle as a sequence of activities, not as coding steps. Think: “How does a raw business dataset become a deployed model that makes predictions?” Azure ML provides tools across this lifecycle, but the exam is primarily checking that you know what each stage means and why it matters.
1) Data preparation includes collecting data, cleaning it (missing values, duplicates), and splitting it into appropriate subsets (you’ll cover validation vs test in the next sections). Data quality is a major theme: bad data usually beats good algorithms.
2) Feature engineering (concept) is turning raw inputs into useful signals for a model. You don’t need to know specific transformations, but you should recognize the idea: creating a “days since last purchase” field, extracting hour-of-day from a timestamp, or encoding text into numeric representations. On the exam, feature engineering is often mentioned as “transforming data into features used for training.”
3) Training is the process where the algorithm learns patterns from labeled (supervised) or unlabeled (unsupervised) data. Training produces a model (a learned function) that can score new data.
4) Evaluation means checking performance on data not used to fit the model. This is where you interpret metrics (accuracy, precision/recall, RMSE, etc.) at a high level.
5) Deployment is making the model available for use—commonly as an endpoint (a web service) that receives input and returns predictions. AI-900 tends to frame deployment as “operationalizing” the model.
Exam Tip: If a question asks what happens “after training,” the best answer usually includes evaluation before deployment. A common trap is choosing “deploy immediately” without validation/testing. Another trap is confusing “deployment” with “training”—training uses historical data to learn; deployment uses the trained model to score new data.
For non-technical pros, also keep the “why”: the lifecycle is about reducing risk (evaluation), improving reliability (data prep), and delivering business value (deployment) while supporting ongoing updates (retraining when data changes).
Overfitting and underfitting are classic AI-900 terms because they explain why a model performs well in one place (training) and poorly in another (real world). The exam will often present them as a mismatch between training performance and validation/test performance.
Underfitting happens when a model is too simple to learn the real pattern. It performs poorly on training data and also poorly on validation/test data. If the scenario says “low accuracy on both training and validation,” think underfitting. Typical remedies (conceptually) include using a more capable model or better features.
Overfitting happens when a model learns the training data too specifically (including noise). It performs very well on training data but worse on validation/test data. If the scenario says “excellent training results but poor results on new data,” think overfitting. Conceptual remedies include more training data, simplifying the model, regularization, or better evaluation practices.
AI-900 also checks that you understand dataset splits at a high level: the training set is used to fit the model, the validation set is used to tune settings and choose between candidate models, and the test set provides a final, unbiased estimate of how the chosen model will perform on new data.
Exam Tip: Many candidates swap validation and test. If the question mentions “final unbiased assessment,” that is the test set. If it mentions “tune hyperparameters” or “select the best model,” that is the validation set.
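If you like seeing the idea in code, the following scikit-learn sketch (with synthetic data) shows where each split is used and how overfitting shows up as a gap between training and validation accuracy.

```python
# A minimal sketch of the train / validation / test idea with scikit-learn.
# Data is synthetic; the point is where each split is used, not the model itself.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# First carve off a final test set, then split the rest into train and validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# Compare two candidate models on the validation set (model selection / tuning).
for depth in (None, 3):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    train_acc = accuracy_score(y_train, model.predict(X_train))
    val_acc = accuracy_score(y_val, model.predict(X_val))
    # A large gap between train_acc and val_acc is the classic overfitting signal.
    print(f"max_depth={depth}: train={train_acc:.2f}, validation={val_acc:.2f}")

# Only the chosen model sees the test set, once, for a final unbiased estimate.
final_model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, final_model.predict(X_test)))
```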
Metrics appear without deep math, but you must choose the right ones. For classification, you’ll see accuracy, precision, recall, and F1 score. Precision matters when false positives are costly (e.g., flagging legitimate transactions as fraud). Recall matters when false negatives are costly (e.g., missing actual fraud or missing a disease). For regression, you’ll see metrics like MAE or RMSE (both represent prediction error; lower is better). For clustering, the exam generally stays conceptual (quality of grouping) rather than focusing on formulas.
Common trap: picking “accuracy” as the best metric in an imbalanced dataset. If only 1% of cases are positive, a model can be 99% accurate by always predicting “no.” In such cases, precision/recall are typically more meaningful.
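A tiny numeric sketch makes this trap obvious. Assuming scikit-learn’s metric functions, a model that never flags fraud on a 1%-positive dataset scores 99% accuracy while catching nothing:

```python
# A minimal sketch of why accuracy misleads on imbalanced data.
# 1% of cases are positive; a model that always predicts "no" is 99% accurate
# but catches nothing -- precision and recall expose that.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [1] * 10 + [0] * 990        # 10 fraud cases out of 1,000 transactions
y_pred = [0] * 1000                  # a "model" that never flags fraud

print("accuracy: ", accuracy_score(y_true, y_pred))                    # 0.99
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
print("recall:   ", recall_score(y_true, y_pred, zero_division=0))     # 0.0
```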
Azure ML supports multiple ways to build models. AI-900 often tests which approach fits a scenario, especially when the user persona is described (developer vs analyst) or the need is described (speed vs control).
Automated ML (AutoML) helps you quickly train and compare models by automatically trying algorithms and parameter settings. It’s ideal when you have labeled data and want a strong baseline without deep ML expertise. AutoML is also exam-friendly language for “rapid prototyping” and “model selection automation.”
Designer (visual, drag-and-drop) is used to build ML pipelines with prebuilt components. It fits teams that want transparency and repeatability without writing much code. On exam questions, Designer aligns with phrases like “no-code/low-code,” “visual workflow,” or “build a pipeline using a graphical interface.”
Code-first (using SDKs, notebooks, Python) provides the most flexibility for custom logic, advanced models, or integration into engineering workflows (CI/CD, custom training loops). If the scenario emphasizes “full control,” “custom training,” or “integrate with software development practices,” code-first is usually the best answer.
Exam Tip: When two answers seem plausible, choose based on the constraint stated in the question stem. If it says “minimal coding,” Designer or AutoML will beat code-first. If it says “custom algorithm” or “fine-grained control,” code-first wins. If it says “find the best model quickly,” AutoML is the usual target.
Common trap: assuming AutoML is only for beginners. On the exam, AutoML is positioned as a productivity feature that can be used by many teams to speed experimentation, not as a “toy” approach. Another trap is confusing Designer with Power BI; Designer is an Azure ML capability for building ML workflows, not a reporting tool.
Domain 2 includes recognizing Azure ML’s core building blocks. Questions often describe a need (“where do I track runs?” “where do I deploy?”) and expect you to pick the correct object. Memorize these terms and what they do; it’s high-yield.
Workspace is the top-level container for Azure ML resources: it organizes models, experiments, compute, data connections, and endpoints. If a question says “central place to manage ML assets,” it’s the workspace.
Compute is the processing resource used to run training or inference. In exam language, compute is what you “attach” or “provision” to run jobs (CPU/GPU). A common trap is thinking compute is where the model is stored—compute is for running workloads, not for being the repository.
Datasets / data assets (naming can vary by exam wording) represent data references used for training and scoring. The key idea: Azure ML can register and manage your data so experiments are reproducible. If you see “versioning data” or “reusing data across runs,” think dataset/data asset.
Experiments are collections of training runs. A “run” captures what happened: code, parameters, metrics, and outputs. If a question asks how to “track and compare training iterations,” experiments are the answer.
Endpoints provide a deployed interface for consuming the model (often via HTTPS). If the scenario says “application calls the model to get predictions,” that’s an endpoint. Candidates often confuse endpoints with experiments; experiments are for building and tracking, endpoints are for serving.
Exam Tip: Watch the verb: “manage” (workspace), “run” (compute), “store/track data reference” (dataset/data asset), “compare runs” (experiment), “consume predictions” (endpoint). Verbs reveal the right noun.
Typical flow you should visualize: create a workspace → connect/register data → choose compute → run an experiment (training) → evaluate metrics → register a model → deploy to an endpoint → monitor and iterate. You won’t be asked to implement this, but you will be asked to identify the right component at each step.
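You will not write this code for AI-900, but a heavily simplified sketch can make the flow less abstract. This assumes the Azure ML Python SDK v2 (the azure-ai-ml package); the subscription, workspace, environment, compute, and script names are placeholders, not real resources.

```python
# A simplified sketch of the Azure ML flow above, assuming the v2 Python SDK
# (azure-ai-ml). All IDs, names, and the training script are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, command

# Workspace: connect to the management boundary that holds data, runs, models, endpoints.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Experiment run: submit a training job to a named compute target.
job = command(
    code="./src",                              # folder containing train.py (placeholder)
    command="python train.py",
    environment="<registered-environment>",    # placeholder environment reference
    compute="<compute-cluster-name>",          # placeholder compute target
    experiment_name="churn-prediction",
)
submitted = ml_client.jobs.create_or_update(job)
print(submitted.name)  # track this run, compare metrics, then register and deploy the model
```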
This chapter’s practice focus is about answer selection strategy—how to eliminate distractors quickly in Domain 2 without doing calculations. When you see an exam-style multiple-choice question, start by underlining the scenario’s “signal words” and matching them to a concept.
Step 1: Identify the problem type. Look for output clues: category (classification), numeric value (regression), group discovery (clustering), unusual/rare (anomaly detection). If the question adds “with labeled examples,” that reinforces supervised learning.
Step 2: Identify the lifecycle stage. If the scenario says “clean missing values” or “transform data,” that’s data preparation/feature engineering. If it says “compare models using metrics,” that’s evaluation (often via validation). If it says “make available to an app,” that’s deployment (endpoint).
Step 3: Match the Azure ML component. Track runs and metrics? Experiment. Need processing to train? Compute. Need a management boundary? Workspace. Need a deployed scoring interface? Endpoint. Need reusable data references? Dataset/data asset.
Exam Tip: Be suspicious of vague answers that sound “AI-ish” but don’t match the noun in the question. For example, if the question asks “where do you deploy,” answers like “experiment” or “training pipeline” are common distractors—deployment typically maps to “endpoint.”
Mini-review (must-know definitions and pitfalls): Overfitting = great training, weak new-data performance; underfitting = weak everywhere. Validation is for tuning/selection; test is for final unbiased checking. Accuracy can be misleading on imbalanced classes—precision/recall may be better. AutoML = automate model/parameter search; Designer = visual pipeline; code-first = maximum control.
Use these patterns to answer quickly: the exam rewards precise vocabulary and correct mapping far more than deep technical detail. If you can consistently map scenario → concept → Azure ML object, Domain 2 becomes one of the most score-efficient parts of AI-900.
1. A retail company has historical sales transactions labeled with whether each customer churned (Yes/No). They want to predict if a current customer will churn. Which machine learning approach should they use?
2. You train a model and it performs very well on training data but noticeably worse on new, unseen data. Which statement best describes what is happening and the appropriate evaluation dataset to confirm it?
3. A logistics company wants to automatically group delivery addresses into "similar route" clusters to help plan territories. They do not have predefined labels for the groups. Which type of machine learning is most appropriate?
4. You are using Azure Machine Learning. You want to provide a shared place to manage datasets, registered models, compute targets, and deployments for your team. Which Azure ML resource should you create?
5. A team has trained a model in Azure Machine Learning and wants to make it available to a mobile app for real-time predictions. Which Azure ML concept best matches this requirement?
Domain 3 of AI-900 tests whether you can recognize common computer vision scenarios and select the correct Azure capability or service—without needing to build a model from scratch. As a non-technical professional, your job on the exam is often “translation”: translate a business need (inspect parts on a conveyor, read text from receipts, auto-caption images) into a vision workload (classification, detection, OCR, document understanding) and then into an Azure service family.
This chapter follows the same decision path the exam expects. First, you’ll match business scenarios to vision workload types. Next, you’ll nail the exam-level meaning of image analysis, detection, and OCR outputs. Then you’ll map those needs to Azure services and constraints. Finally, you’ll complete service-selection drills and a practice set (without embedding quiz questions inside the chapter narrative).
Exam Tip: AI-900 rarely asks you to tune neural networks. It frequently asks you to pick the right capability (e.g., “object detection” vs “OCR”) and the right service category (e.g., Vision vs Document Intelligence) based on the output format needed (labels, bounding boxes, text, structured fields).
Practice note for Match business scenarios to the right computer vision capability: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Understand image analysis, detection, and OCR at an exam level: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Identify the Azure services used for vision solutions: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Answer exam-style questions for Domain 3: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Checkpoint: service-selection drills: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Start by identifying what the business is trying to produce as an output. AI-900 focuses on four core vision workload types: image classification, object detection, segmentation, and face-related capabilities. Each is defined by the “shape” of the answer.
Image classification assigns one or more labels to an entire image (or to a cropped region you provide). Think “Is this product damaged: yes/no?” or “Which category does this photo belong to?” The key exam clue is that classification does not require pinpointing where the item is—only what it is.
Object detection identifies objects and returns their locations, typically as bounding boxes. Look for words like “find,” “locate,” “count,” “where in the image,” “draw a box,” or “track items.” Counting items on shelves, locating safety helmets, or spotting vehicles in a frame are classic detection cues.
Segmentation goes beyond boxes to label pixels. The output is a per-pixel mask (often used for separating background/foreground or precise outlines). On AI-900, segmentation appears conceptually; the exam is more likely to test that you know it is “pixel-level” compared to detection’s “box-level.”
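If it helps to see the "shape" difference concretely, here is a tiny illustrative sketch in Python. The field names and values are made up for study purposes only; they are not real Azure API responses.

```python
# Hypothetical output shapes -- not actual Azure API responses -- used only
# to contrast the three vision workload types discussed above.

classification_result = {
    "labels": [{"name": "damaged", "confidence": 0.93}]          # what it is
}

detection_result = {
    "objects": [                                                  # what + where
        {"name": "helmet", "confidence": 0.88,
         "box": {"x": 120, "y": 40, "width": 64, "height": 64}},
    ]
}

segmentation_result = {
    "mask": [[0, 0, 1, 1], [0, 1, 1, 1]]                          # per-pixel labels
}

# Exam heuristic: label only -> classification; boxes -> detection; pixels -> segmentation.
```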
Face considerations: Face workloads include detection (is a face present), analysis (attributes), and verification/identification (matching). However, AI-900 emphasizes responsible AI and platform constraints. Azure's face capabilities can be limited by policy, regional availability, and responsible use requirements. The exam often checks that you know face recognition is sensitive and may be restricted, and that you should choose it only when the scenario explicitly requires face-based outcomes.
Exam Tip: When two options both “recognize objects,” choose detection if the scenario needs location or counting, and classification if it only needs a category label.
Common trap: Confusing detection with OCR. If the “object” is text on an image (signs, invoices, serial numbers), that’s OCR, not object detection—unless the scenario explicitly wants to detect “a sign” rather than read it.
Image analysis is the exam’s umbrella term for “understand what’s in the picture” without training a custom model. In Azure, this is typically delivered by prebuilt vision models that output descriptive metadata. The AI-900 skill is recognizing which output type matches the business requirement.
Tags are keywords describing objects, scenes, or concepts (for example: “outdoor,” “vehicle,” “person,” “water”). Tags are useful for search, filtering, and cataloging. If the scenario describes building an image library where users search by keywords, tags are the expected output.
Captions are short natural-language descriptions (for example: “A person riding a bicycle on a city street”). Captions are used for accessibility, summarization, or auto-describing content. When you see “generate a description” or “alt text,” think captioning.
Content understanding (exam-level) refers to extracting higher-level meaning: recognizing common objects, scenes, and sometimes brands/landmarks depending on the service features presented in the question. The key is that you’re not returning structured fields like an invoice total; you’re returning descriptive information about the image itself.
Exam Tip: If the scenario is about “describe images” or “label images for search,” that is image analysis. If it’s about “read printed/handwritten text,” it’s OCR. If it’s about “extract invoice number and total,” it’s document/form understanding.
Common trap: Over-selecting custom vision training for simple needs. AI-900 questions often include an answer like “Train a custom model in Azure Machine Learning.” That’s usually wrong when prebuilt image analysis already meets the requirement. Pick the simplest managed capability that produces the needed output.
Another exam angle is understanding confidence scores. Many vision outputs come with confidence values. You don’t need to calculate them, but you should know they represent how sure the model is, and that solutions often set thresholds (for example, accept tags above 0.8 confidence) to reduce false positives.
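To make the threshold idea concrete, here is a minimal sketch; the tag structure is illustrative and does not mirror a specific SDK type.

```python
# Minimal sketch: apply a confidence threshold to tag results to reduce
# false positives. The tag structure here is illustrative, not a specific SDK type.
tags = [
    {"name": "outdoor", "confidence": 0.97},
    {"name": "vehicle", "confidence": 0.81},
    {"name": "water", "confidence": 0.42},
]

THRESHOLD = 0.8  # accept only tags the model is reasonably sure about

accepted = [t["name"] for t in tags if t["confidence"] >= THRESHOLD]
print(accepted)  # ['outdoor', 'vehicle']
```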
OCR is about converting text in images into machine-readable text. The exam tests whether you can identify OCR scenarios quickly and understand the kinds of outputs OCR returns.
Common OCR business scenarios include digitizing printed documents, reading street signs for navigation, extracting text from screenshots, scanning handwritten notes, and capturing serial numbers or meter readings. The core clue is always the same: the goal is the text itself.
At an exam level, OCR outputs typically include: the recognized text, the structure of how text appears (lines/words), and positional information (bounding boxes/polygons for words or lines). This positional data matters when the scenario needs “where on the image the text appears” (for highlighting or redaction) rather than only the raw text string.
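As a concrete illustration of those outputs, here is a small hypothetical sketch in Python. The result shape, line text, and coordinates are invented for illustration and do not mirror a specific Azure SDK response.

```python
# Illustrative OCR result shape (hypothetical): recognized text is grouped
# into lines, each with positional information.
ocr_result = {
    "lines": [
        {"text": "INVOICE 10482", "bounding_box": [30, 20, 220, 20, 220, 48, 30, 48]},
        {"text": "Total: $148.00", "bounding_box": [30, 300, 210, 300, 210, 328, 30, 328]},
    ]
}

# If the deliverable is just a transcript, join the line text.
transcript = "\n".join(line["text"] for line in ocr_result["lines"])

# If the deliverable needs "where the text appears" (highlighting/redaction),
# keep the bounding boxes alongside the text.
positions = [(line["text"], line["bounding_box"]) for line in ocr_result["lines"]]
```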
Exam Tip: If the question mentions “handwritten” vs “printed,” still think OCR first. The service may support both, but the workload category remains OCR.
Common trap: Confusing OCR with translation. OCR extracts text; translation converts extracted text from one language to another. If the scenario needs both, the correct approach is typically OCR first, then a language/translation capability.
Common trap: Confusing OCR with document understanding. OCR gives you text and layout. Document understanding gives you named fields (like Total, Date, Vendor) as structured key-value outputs. If the scenario requires “populate a database field called InvoiceTotal,” OCR alone is usually insufficient unless you add downstream parsing or a document extraction service.
In your service-selection drills, practice spotting the minimal required output: if the deliverable is “a text transcript,” OCR is enough; if the deliverable is “a structured JSON with fields,” you likely need document intelligence.
Document and form understanding is the next step beyond OCR: it extracts structured information from documents. AI-900 frames this as using prebuilt or trained models to return fields, tables, and key-value pairs in a predictable schema.
Use this category when the business scenario mentions invoices, receipts, purchase orders, IDs, insurance claims, or “forms” with consistent fields. The output is not merely text; it’s a data structure suitable for automation. For example, the system might return vendor name, invoice date, subtotal, tax, total, and line items.
Conceptually, document understanding combines OCR (to read the text) plus layout analysis (to understand where the text sits on the page) plus field extraction (to map text to meaning). The exam doesn’t require you to implement this, but it does require you to know that the goal is structured extraction.
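The following sketch shows what "structured extraction" means in practice; the field names and values are hypothetical, not a real Document Intelligence response.

```python
# Illustrative structured output from a document-understanding step (hypothetical
# field names): the point is that the result is a schema, not a transcript.
invoice_fields = {
    "VendorName":   {"value": "Contoso Ltd.", "confidence": 0.96},
    "InvoiceDate":  {"value": "2024-03-01",   "confidence": 0.94},
    "InvoiceTotal": {"value": 148.00,          "confidence": 0.91},
    "Items": [
        {"Description": "Safety helmets", "Quantity": 4, "Amount": 148.00},
    ],
}

# Automation becomes straightforward because fields map directly to database columns.
record = {
    "vendor": invoice_fields["VendorName"]["value"],
    "total":  invoice_fields["InvoiceTotal"]["value"],
}
```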
Exam Tip: Watch for words like “extract,” “populate,” “automate data entry,” “key-value pairs,” “tables,” or “line items.” Those are strong signals for document intelligence rather than basic OCR.
Common trap: Selecting image analysis (tags/captions) for documents. A scanned invoice is technically an image, but the required output is structured business data, not descriptive tags like “paper” or “text.”
Another trap is thinking you must always train a custom model. Many scenarios can be solved with prebuilt models for common document types (receipts, invoices, business cards). AI-900 often rewards choosing prebuilt capabilities when the document type is standard and explicitly stated.
For AI-900, your service-selection goal is to map capability to the correct Azure AI service family. Expect wording like “Which Azure service should you use?” rather than detailed SDK questions.
Azure AI Vision (often referenced simply as “Vision”) covers broad image analysis tasks: tagging, captioning, prebuilt object detection-style capabilities, and OCR through the Read capability. When the scenario is “analyze images” or “generate captions/tags,” this is your default starting point.
Azure AI Document Intelligence is the go-to for structured document extraction (forms, invoices, receipts, IDs). Choose it when the output must be structured fields and tables.
Face-related services/capabilities may appear as part of Azure AI Vision offerings or as dedicated face capabilities depending on how the exam question is phrased. Treat face scenarios carefully: verify the scenario’s explicit need (verification vs detection), and remember responsible AI considerations, consent, and potential restrictions.
Exam Tip: Let the required output format drive the service choice. Unstructured description → Vision. Text transcription → OCR/Read. Structured business fields → Document Intelligence.
Common trap: Picking Azure Machine Learning because it “can do anything.” AI-900 generally expects you to select the managed Azure AI service when a prebuilt option exists, especially for standard workloads.
Constraints also matter at a high level: data sensitivity, responsible use (especially for face), and whether the scenario implies real-time processing or batch processing. AI-900 won’t test throughput numbers, but it will test that certain use cases require careful governance and appropriate service selection.
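If you like to rehearse the mapping mechanically, here is a small study-aid sketch. The function name and mapping are our own shorthand, not an official decision tool.

```python
# Study aid, not an official decision tool: map the required output format
# to the Azure service family the exam usually expects.
def suggest_vision_service(required_output: str) -> str:
    mapping = {
        "tags or captions":  "Azure AI Vision (image analysis)",
        "bounding boxes":    "Azure AI Vision (object detection)",
        "text transcript":   "Azure AI Vision (OCR / Read)",
        "structured fields": "Azure AI Document Intelligence",
    }
    return mapping.get(required_output, "Re-read the scenario: what output does it actually need?")

print(suggest_vision_service("structured fields"))  # Azure AI Document Intelligence
```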
This section is your checkpoint for Domain 3 readiness. You are not memorizing product names in isolation—you are rehearsing the exam’s pattern: scenario → workload type → expected output → service selection. Use the following drills as a method during practice tests and on exam day.
Service-selection drill (5-step): (1) identify what the business needs as an output; (2) classify the workload (classification, detection, segmentation, OCR, document understanding, or face); (3) confirm the “shape” of that output (label, bounding box, pixel mask, text transcript, or structured fields); (4) pick the simplest prebuilt Azure capability that produces it; (5) check constraints such as data sensitivity, responsible use, and real-time versus batch processing.
Exam Tip: Many incorrect answers are “nearly right” but miss one requirement. For example, OCR can read an invoice, but if the scenario demands line-item tables and totals in structured form, Document Intelligence is the better match.
Common trap checklist (use during review): confusing object detection with OCR when the “object” is text; choosing custom model training when a prebuilt capability already meets the requirement; confusing OCR with translation; confusing OCR (text and layout) with document understanding (structured fields); and selecting image analysis tags or captions when the scenario needs structured business data from a document.
Finally, when you encounter exam-style MCQs in your practice platform, force yourself to justify the correct option in one sentence using the output format. Example justification pattern: “Choose X because the scenario needs Y output (e.g., bounding boxes / key-value pairs / captions).” This keeps you aligned with what AI-900 actually tests and reduces second-guessing.
1. A retailer wants to automatically generate captions and tags (for example, "outdoor", "mountain", "person") for product photos uploaded to its website. The solution does not need custom model training. Which computer vision capability should you use?
2. A manufacturing company wants to identify defective items on a conveyor belt and highlight where the issue occurs in each image. The output must include the location of the item in the frame. Which workload best fits this requirement?
3. An insurance company needs to extract text from photos of claim forms and return it as machine-readable text. Which capability should you select?
4. A company needs to process scanned invoices and extract specific fields such as vendor name, invoice number, and total amount into structured key-value output. Which Azure service family is the best fit?
5. You are designing a solution that must return bounding boxes around each detected car and person in security camera images. Which Azure service and feature combination should you choose?
This chapter targets two heavily tested AI-900 domains: Natural Language Processing (NLP) workloads (Domain 4) and Generative AI workloads (Domain 5). The exam does not expect you to build models or write code, but it does expect you to recognize common language scenarios, select the most appropriate Azure capability, and articulate responsible AI considerations.
You’ll practice an “exam lens” throughout: identify the task (what outcome is needed), the input type (text, speech, documents, chat), and the service family (Azure AI Language, Azure AI Translator, Azure AI Bot Service, Azure OpenAI). You will also learn the common traps—especially confusing classic NLP (extract/label) with generative AI (create/synthesize).
Exam Tip: When a scenario asks to “extract,” “detect,” “classify,” or “analyze” existing text, think NLP analytics. When it asks to “draft,” “rewrite,” “summarize in new wording,” “answer questions,” or “generate,” think generative AI—and then look for mention of grounding, safety, or citations.
Practice note for this chapter's lessons (Identify NLP tasks and map them to Azure capabilities; Explain conversational AI at a fundamentals level (bots and language understanding); Understand generative AI concepts, prompt basics, and safety considerations; Answer exam-style questions for Domains 4 and 5; Capstone review: choosing the right AI approach for a scenario): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
NLP on AI-900 is about recognizing what you can do with text (and sometimes speech-to-text output). You’ll see five recurring tasks: entity recognition, sentiment analysis, key phrase extraction, text classification, and summarization. Each is distinct in the exam’s wording, so train yourself to map the verbs in a prompt to the task type.
Entity recognition identifies named items in text—people, organizations, locations, dates, product names, and sometimes personally identifiable information (PII). Exam scenarios commonly describe pulling out “customer names,” “order numbers,” “cities,” or “medical terms.” The trap is assuming “search” is entity recognition; search is retrieval, while entity recognition is structured extraction from unstructured text.
Sentiment analysis estimates opinions (positive/negative/neutral) and sometimes “mixed.” Watch for scenarios like “monitor brand perception” or “detect unhappy customers from reviews.” A common trap: sentiment is not the same as “topic.” A review can be negative without revealing the topic; topic/category is classification.
Key phrase extraction pulls the main talking points (e.g., “late delivery,” “refund policy,” “battery life”). On the exam, it often appears in dashboards or analytics pipelines where you want quick highlights without reading every message. It’s not a summary; it’s a set of phrases.
Text classification assigns a label to text. The exam may describe routing emails (“billing,” “technical support,” “returns”) or tagging documents by department. Classification can be single-label or multi-label in concept; for AI-900, focus on the idea of “categorize text into known classes.”
Summarization reduces content length while preserving meaning. AI-900 may contrast extractive summarization (selecting key sentences) versus abstractive summarization (rewriting). Exam Tip: If a prompt says “generate a new short version in different wording,” lean generative AI. If it says “condense the document,” it could be classic summarization; read for hints like “extract sentences” versus “rewrite.”
Responsible AI appears even in basics: entity detection may expose PII; sentiment models can misread sarcasm or culturally specific phrasing. The exam often tests that you can identify these limitations and recommend human review or appropriate safeguards.
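For readers who want to see what these analytics tasks look like in code, here is a minimal sketch assuming the azure-ai-textanalytics Python package; the endpoint and key are placeholders for your own Azure AI Language resource, and exact method names can vary by SDK version.

```python
# Minimal sketch assuming the azure-ai-textanalytics package
# (pip install azure-ai-textanalytics). Endpoint and key are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

reviews = ["The delivery was late and the box was damaged, but support was helpful."]

# Sentiment analysis: opinion, not topic.
sentiment = client.analyze_sentiment(reviews)[0]
print(sentiment.sentiment, sentiment.confidence_scores)

# Key phrase extraction: main talking points, not a summary.
phrases = client.extract_key_phrases(reviews)[0]
print(phrases.key_phrases)

# Entity recognition: structured items (people, places, dates, products).
entities = client.recognize_entities(reviews)[0]
print([(e.text, e.category) for e in entities.entities])
```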
AI-900 expects you to match NLP tasks to Azure service families rather than memorize every API name. The key umbrella is Azure AI Language, which covers text analytics-style capabilities and conversational language understanding. Another core capability is Azure AI Translator for translation use cases.
Use a simple matching strategy. If the scenario says: “analyze reviews,” “extract entities,” “detect PII,” “find key phrases,” or “detect language,” the correct direction is typically Azure AI Language (text analytics capabilities). If it says “translate customer emails into English” or “provide multilingual chat,” think Azure AI Translator. Translation is usually a dedicated service in exam questions because it is a clear workload with a clear tool.
For conversational language (understanding what a user means), the exam often signals “intents,” “utterances,” and “entities,” which aligns with conversational language understanding capabilities (within Azure AI Language). This differs from “chat completion” with a large language model, which is generative AI. The trap: both can power a chatbot, but the exam will guide you by the phrasing. Intent-based classification (intent + entity extraction) points to conversational language understanding; open-ended answers and drafting point to Azure OpenAI.
Exam Tip: When two answers look plausible (e.g., “Azure AI Language” vs “Azure OpenAI”), choose based on whether the output is labels/extractions (Language) or new text generation (OpenAI). If the scenario demands determinism and predefined categories, it is rarely generative AI.
Also note the “document” angle: AI-900 may reference analyzing text from documents, but if the task is still about text analytics (entities, sentiment), the correct service is still language analytics. Don’t confuse it with document processing/OCR (covered in vision domains) unless the scenario explicitly mentions extracting text from images or PDFs that require OCR.
Conversational AI on AI-900 is about understanding the moving parts of a chatbot system: the bot channel, the language understanding component, and the logic that selects actions. In exam scenarios, a bot often sits in Teams, a website, or a customer service portal and must interpret user messages and respond appropriately.
Intents represent what the user wants (e.g., “reset password,” “track order,” “store hours”). Utterances are example phrases users might say to express an intent (e.g., “I forgot my password,” “can’t log in”). Entities are key data extracted from the utterance (e.g., order number, city, product type). The exam frequently tests recognition of these terms and how they work together: utterances train intent recognition; entities are extracted values used to fulfill the request.
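Here is a purely conceptual sketch of how those three pieces relate; the intent names, utterances, and entities are illustrative and this is not an SDK call.

```python
# Conceptual sketch (not an SDK call): how intents, utterances, and entities
# relate in an intent-based bot. All names are illustrative.
language_model = {
    "intents": {
        "TrackOrder": {
            "utterances": ["Where is my order?", "Track order 10482", "Has my package shipped?"],
            "entities": ["OrderNumber"],
        },
        "ResetPassword": {
            "utterances": ["I forgot my password", "Can't log in"],
            "entities": [],
        },
    }
}

# At runtime: user utterance -> predicted intent + extracted entities
# -> business logic (e.g., look up OrderNumber) -> response.
prediction = {"intent": "TrackOrder", "entities": {"OrderNumber": "10482"}}
```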
Orchestration is the concept of coordinating tools and steps to produce a final outcome: calling language understanding first, then looking up an order in a database, then responding with the result. Even in a non-technical exam, you’re expected to understand that bots often integrate with business systems and that NLP alone is not the entire solution.
Two common traps: (1) assuming a chatbot is “just NLP.” In practice, a chatbot includes conversation flow, state, authentication, and backend calls. (2) confusing deterministic intent-based bots with generative bots. If the question highlights “predefined intents,” “route to support queues,” or “fill a form,” it is intent-based conversational AI. If it highlights “open-ended Q&A,” “draft a response,” or “summarize policies,” it may be a generative assistant.
Exam Tip: Look for words like “route,” “trigger,” “submit,” and “ticket” (intent-based orchestration) versus “compose,” “explain,” and “rewrite” (generative). Choose the architecture that best matches the business requirement for control and predictability.
Generative AI workloads create new content: text, summaries, code-like outputs, or structured drafts. AI-900 focuses on foundational terms and when to use large language models (LLMs) on Azure—most notably through the Azure OpenAI Service and patterns like copilots.
An LLM is trained on massive text corpora to predict the next token in a sequence. A token is a chunk of text (often part of a word). Tokens matter because they relate to prompt limits, response limits, and cost considerations. The exam may not ask you to calculate token counts, but it can test that longer prompts/outputs consume more tokens and can be constrained by context length.
Embeddings are vector representations of text (or other data) that capture semantic meaning—similar texts have vectors that are close in vector space. On AI-900, embeddings commonly appear in “semantic search,” “find similar documents,” and “retrieve relevant passages for Q&A.” This is an important conceptual divider: embeddings help with retrieval and similarity, while chat/completions help with generation. Many real solutions combine them.
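The “close in vector space” idea is easiest to see with a tiny worked example. The vectors below are made up for illustration; real embeddings come from an embeddings model and have far more dimensions.

```python
# Minimal sketch of the embeddings idea: similar texts map to nearby vectors.
# The vectors below are invented; real embeddings come from an embeddings model.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

refund_policy  = [0.82, 0.10, 0.05]   # "How do I request a refund?"
return_request = [0.79, 0.14, 0.07]   # "I want to send my order back"
weather_report = [0.05, 0.91, 0.30]   # "Will it rain tomorrow?"

print(cosine_similarity(refund_policy, return_request))  # high -> related
print(cosine_similarity(refund_policy, weather_report))  # low  -> unrelated
```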
Common Azure generative use cases that appear on the exam include: drafting emails, summarizing meetings, generating product descriptions, creating Q&A-style assistants, and building copilots that help employees interact with internal knowledge. The key exam skill is to justify why generative AI is appropriate: the output is not just a label, but a new natural-language artifact.
Exam Tip: When a scenario requires “accurate answers based on company documents,” generative AI alone is risky. Expect the correct answer to mention grounding (retrieval-augmented generation) and safety controls rather than “train a new model from scratch.” AI-900 generally prefers managed services and patterns over custom training.
Prompting is how you instruct a generative model. AI-900 tests prompting at a principles level: be clear about the task, provide context, define the desired format, and set constraints. Good prompts reduce ambiguity (which reduces unpredictable output) and improve consistency.
Grounding means anchoring the model’s response in trusted data (for example, retrieving relevant passages from a knowledge base and including them in the prompt). Grounding is critical because LLMs can produce plausible-sounding but incorrect content (“hallucinations”). Exam scenarios may describe an assistant answering policy questions; the correct approach often involves grounding the answer in official policy documents rather than relying on the model’s general knowledge.
Citations are a concept (even if implementation varies): the assistant should indicate where information came from, especially when answering based on documents. Citations improve transparency and auditability and help users validate answers. The exam may frame this as “include sources,” “link to documents,” or “show supporting passages.”
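To make grounding and citations tangible, here is a minimal sketch of assembling a grounded prompt. Retrieval and the model call are out of scope; the passages, source labels, and wording are illustrative assumptions.

```python
# Minimal sketch of the grounding idea: retrieve trusted passages, include them
# in the prompt, and instruct the model to cite them. Passages and labels are illustrative.
retrieved_passages = [
    ("HR-Policy-12, section 3", "Employees may carry over up to five unused leave days."),
    ("HR-Policy-12, section 4", "Carried-over days must be used by 31 March."),
]

question = "How many leave days can I carry over, and by when must I use them?"

context = "\n".join(f"[{source}] {text}" for source, text in retrieved_passages)

prompt = (
    "Answer the question using ONLY the passages below. "
    "Cite the source label for every claim. If the passages do not contain "
    "the answer, say you don't know.\n\n"
    f"Passages:\n{context}\n\nQuestion: {question}"
)
# `prompt` would then be sent to a chat model (for example, via Azure OpenAI).
```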
Safety filters and content moderation help prevent harmful, disallowed, or non-compliant output. AI-900’s responsible AI principles are fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. With generative AI, the most tested concerns are: (1) sensitive data leakage (PII in prompts or outputs), (2) harmful content generation, and (3) over-reliance without verification.
Exam Tip: If a question asks how to reduce harmful or inappropriate responses, look for answers referencing content filtering, system instructions/policies, and human-in-the-loop review. A common trap is choosing “increase model size” or “train longer,” which does not directly address safety or compliance.
When writing prompts, also watch for data boundaries: don’t paste secrets; use least-privilege access to enterprise data; log and monitor responsibly. Even non-technical professionals are expected to recognize that responsible use is a design requirement, not an afterthought.
This section prepares you for the style of AI-900 multiple-choice questions in Domains 4 and 5. On the exam, you’ll often see short business scenarios with a single key requirement hidden in a phrase like “extract,” “translate,” “categorize,” “draft,” or “answer using internal documents.” Your job is to select the workload and the Azure capability that best fits—without over-engineering.
Use a two-pass method. First pass: identify whether the scenario is analytics NLP (labels/extractions) or generative (new content). Second pass: choose the most direct Azure option: Language analytics for sentiment/entities/key phrases/classification, Translator for language conversion, conversational language understanding for intents/utterances/entities in bots, and Azure OpenAI for LLM-based generation and copilots.
Common exam trap: Picking generative AI for a scenario that only needs classification. For example, routing support tickets is usually solved with text classification + business rules, not an LLM. Another trap is ignoring responsible AI prompts: if the scenario mentions compliance, regulated data, or “must cite sources,” the best answer typically includes grounding, citations, and safety controls—not just “use an LLM.”
Capstone mindset: choosing the right approach means matching the simplest tool that meets requirements. If you can solve it with deterministic extraction/classification, that is often preferred. If you need flexible language generation, choose Azure OpenAI—but pair it with grounding and safety to meet enterprise expectations.
1. A support team has 50,000 historical email tickets. They want to automatically detect the sentiment (positive/negative/neutral) and extract key phrases from each email to identify common issues. Which Azure capability should you recommend?
2. A global company needs to translate user-generated product reviews from Spanish, French, and Japanese into English in near real time. They do not need sentiment or entity extraction—only translation. What should you use?
3. A company wants a customer-facing chat experience on its website that can answer common questions and escalate to an agent when needed. They want a bot and the ability to understand user intent at a basic level. Which combination best fits?
4. A legal team wants to generate a first-draft summary of long contract documents and include citations to the specific clauses used. They also want to reduce the risk of the model inventing details. What is the best approach on Azure?
5. You are reviewing an AI solution proposal. Which scenario is the strongest indicator that you should choose generative AI (Domain 5) rather than classic NLP analytics (Domain 4)?
This chapter is your transition from “learning the material” to “passing the exam.” AI-900 is designed for non-technical professionals, but it still rewards test discipline: reading carefully, mapping scenarios to the right AI workload, and selecting the correct Azure service family without overthinking implementation details. Your goal here is to simulate the experience, expose weak spots, and lock in the must-know definitions and service mapping that appear repeatedly on the exam.
Use the mock exam parts as a diagnostic tool—not a score contest. After each part, you will run a structured review (rationales matter more than your percent). Then you’ll do weak spot analysis and finish with an exam-day checklist plus a rapid review of definitions and service-to-workload mapping. This chapter is written to align with the exam objectives: AI workloads and responsible AI, machine learning basics, computer vision workloads, natural language processing workloads, and generative AI workloads on Azure.
Exam Tip: AI-900 often tests “recognition” rather than “construction.” You’re not asked to build models; you’re asked to correctly identify the workload type, benefits/limitations, responsible AI considerations, and the best-fitting Azure service category.
Practice note for this chapter's lessons (Mock Exam Part 1; Mock Exam Part 2; Weak Spot Analysis; Exam Day Checklist; Final rapid review: must-know definitions and service mapping): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Treat the mock as a rehearsal for exam conditions. Set a timer, remove distractions, and commit to one continuous attempt per part. Even though AI-900 is approachable, many candidates lose points due to pacing mistakes (spending too long on early items) and rationale mistakes (reviewing only what they got wrong rather than understanding why the right answer is right).
Use a simple pacing plan: one pass to answer everything you can confidently, a second pass for flagged items, and a final pass to verify you didn’t misread “best,” “most appropriate,” or “primarily.” On AI-900, many items are short but packed with qualifiers. Don’t “solve”; match the scenario to the concept. If you’re between two choices, look for the one that is a service family (e.g., Azure AI Vision, Azure AI Language, Azure OpenAI) versus a tool used for building (e.g., Azure Machine Learning) or data storage (often a distractor).
Rationale review is where learning happens. For each item, write a one-line reason: (1) what workload it is (vision/NLP/ML/generative), (2) which service family is designed for it, and (3) what clue in the prompt triggered that choice (e.g., “extract text from receipts” → OCR → Azure AI Vision). This turns review into pattern training, which is exactly what the exam tests.
Exam Tip: In your review notes, avoid copying definitions verbatim. Instead, capture “trigger phrases” that map to services: “sentiment/key phrases” → Azure AI Language; “detect objects in images” → Azure AI Vision; “train/evaluate model” → Azure Machine Learning; “chat/completion with prompting” → Azure OpenAI.
Mock Exam Part 1 emphasizes Domains 1–2: describing AI workloads and the fundamentals of machine learning. Expect scenario-based prompts that ask you to categorize a workload (prediction, classification, anomaly detection, recommendation) and to recognize responsible AI principles (fairness, reliability and safety, privacy and security, inclusiveness, transparency, accountability). The exam commonly checks whether you can distinguish “AI workload” from “automation” and whether you can identify when a human should stay in the loop.
For ML basics, focus on the model lifecycle at a conceptual level: data collection, training, validation/testing, deployment, and monitoring. You should be comfortable with the idea that training learns patterns from labeled (supervised) or unlabeled (unsupervised) data, and that evaluation uses metrics to estimate performance before deployment. A common trap is confusing classification (categorical outputs) with regression (numeric outputs), or thinking that higher accuracy always means a better model. AI-900 may hint at imbalanced classes, where accuracy can mislead.
Know the purpose of Azure Machine Learning: it’s the platform for building, training, and deploying ML models with experiment tracking and pipelines. Do not confuse it with prebuilt “Azure AI services,” which provide ready-to-use capabilities (vision, speech, language). If the prompt mentions “custom training,” “model management,” “pipelines,” or “MLOps,” you’re likely in Azure Machine Learning territory. If it mentions extracting insights from text or images without building a model, you’re likely in Azure AI services.
Exam Tip: When a question references “responsible AI,” scan answer options for governance actions: documenting data sources, monitoring drift, explaining decisions, limiting data retention, and testing for bias. Distractors often include purely technical tuning (like “increase epochs”), which is not a responsible AI control by itself.
Finally, watch for wording traps: “predict whether” often implies classification; “predict how many” often implies regression; “group customers” often implies clustering; “identify unusual transactions” often implies anomaly detection. Your job is to label the workload first, then map to the ML concept or Azure service.
Mock Exam Part 2 shifts to Domains 3–5: computer vision, NLP, and generative AI. Here, Azure service selection and correct workload identification are heavily tested. The exam likes “real business tasks” phrased in everyday language—your advantage is to translate that language into the right category: vision (images/video), NLP (text/language), or generative AI (create new content using LLMs).
For computer vision, be clear on the differences between image analysis (tags, captions, objects), OCR (extract text from images), and detection (find objects/regions). “Read text from a photo of a receipt” is OCR. “Count people and locate them in a frame” implies detection. “Describe what’s in an image” implies image analysis. In AI-900, these typically map to Azure AI Vision capabilities. A frequent distractor is to pick Azure Machine Learning when the prompt doesn’t ask for custom training.
For NLP, remember the common tasks: sentiment analysis, key phrase extraction, entity recognition, language detection, summarization, and translation. These align with Azure AI Language and Azure AI Translator. If the prompt says “build a conversational interface,” it may imply Azure AI Bot Service plus conversational language understanding; however, the exam usually expects you to recognize the language capability (intent/entity understanding) rather than design an entire bot architecture.
For generative AI, focus on what makes it distinct: it generates text, code, summaries, or embeddings based on prompts. Azure OpenAI Service is the key mapping for LLM-based chat/completions, content generation, and embeddings use cases (semantic search, RAG patterns conceptually). The exam tends to test prompting basics: provide clear instructions, include relevant context, specify format, and iterate. A common trap is thinking generative AI is always the right answer for “analyze text.” Many analysis tasks are better matched to Azure AI Language (extract entities, sentiment) rather than generating responses.
Exam Tip: If the output is “new content” (draft, rewrite, summarize in a new style, answer questions), lean generative AI (Azure OpenAI). If the output is “extracted insight” (entities, sentiment, language), lean Azure AI Language. If the input is images and you need text, lean OCR in Azure AI Vision.
Your score improves fastest when you diagnose why you missed an item. After each mock part, categorize every miss (and every lucky guess) into one of three buckets: Knowledge Gap, Wording/Qualifier Miss, or Distractor Trap. This turns review into targeted practice instead of rereading the whole course.
Knowledge Gap means you didn’t know a definition, principle, or service mapping. Fix it by creating a flashcard that includes a trigger phrase and the correct mapping. Example format: “Need OCR from images → Azure AI Vision OCR.” Also add one “not this” note to prevent future confusion (e.g., “not Azure Machine Learning unless custom model training is required”).
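If you track flashcards digitally, a tiny structure like the sketch below works; the entries follow the “trigger phrase → mapping (+ ‘not this’ note)” format described above, and the specific cards are only examples.

```python
# A simple flashcard structure for weak-spot review. Entries follow the
# "trigger phrase -> mapping (+ 'not this' note)" format described above.
flashcards = [
    {"trigger": "extract text from receipts",
     "answer": "OCR -> Azure AI Vision (Read)",
     "not_this": "Azure Machine Learning (no custom training required)"},
    {"trigger": "sentiment and key phrases from reviews",
     "answer": "Azure AI Language",
     "not_this": "Azure OpenAI (no new content is being generated)"},
    {"trigger": "extract vendor name and total from invoices",
     "answer": "Azure AI Document Intelligence",
     "not_this": "Image analysis tags/captions"},
]

for card in flashcards:
    print(f"{card['trigger']} -> {card['answer']}  (not: {card['not_this']})")
```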
Wording/Qualifier Miss means you knew the topic but missed a word like “best,” “most cost-effective,” “no-code,” “prebuilt,” or “responsible.” Fix it by underlining qualifiers during future attempts and restating the question in your own words before choosing. AI-900 commonly uses qualifiers to separate two plausible answers.
Distractor Trap means you were pulled toward a familiar term (e.g., “Machine Learning” sounds advanced) even when a prebuilt AI service was more appropriate. Fix it by learning the exam’s typical distractors: storage or compute services when the question is about AI capability; Azure Machine Learning offered when the scenario is prebuilt analysis; “accuracy” offered as the only evaluation metric regardless of context.
Exam Tip: When two options both sound right, pick the one that most directly matches the workload and requires the least customization. AI-900 is not testing architecture mastery; it’s testing correct categorization and service fit.
Use this final checklist as your rapid review. Your goal is immediate recall of definitions and the ability to map scenarios to services.
Service selection cheat sheet mindset: first classify the input type (text, image, conversational prompt, structured data). Next identify the output type (insight extraction vs new content vs prediction). Finally decide: prebuilt service (Azure AI services) versus custom model lifecycle (Azure Machine Learning). Most AI-900 items can be answered from those three steps.
Exam Tip: If the question emphasizes “no code,” “quickly add AI,” or “prebuilt,” it’s usually pointing to Azure AI services rather than Azure Machine Learning.
On exam day, reduce preventable errors. If you’re testing online, ensure a quiet room, stable internet, and that you’ve completed any system checks early. If you’re testing at a center, arrive early and plan for check-in time. Either way, your performance depends on reading accuracy as much as knowledge.
Time strategy: start with a calm first pass. Answer what you know, flag what you don’t, and keep moving. AI-900 questions are typically short; the danger is rereading and second-guessing. On your second pass, resolve flagged items by mapping: workload → key clue → service. If you still can’t decide, eliminate options that don’t match the input/output type (e.g., storage services for an AI capability question) and pick the “most direct, least custom” option.
Confidence tactics: use consistency checks. If you chose Azure OpenAI for a question that asked for “extract entities,” reconsider—entity extraction is usually Azure AI Language. If you chose Azure Machine Learning for “OCR from images,” reconsider—OCR is typically Azure AI Vision. These quick sanity checks catch the most common last-minute flips.
Exam Tip: Don’t change an answer just because it feels “too easy.” AI-900 is designed so that many correct answers are straightforward when you recognize the pattern. Change only when you can point to a specific overlooked word or a clearer service mapping.
Finally, treat the exam as a classification exercise: classify the scenario, then select the matching capability and service. If you follow the same process you used in the mock exam review, you’ll reduce anxiety and improve accuracy under time pressure.
1. A retail company wants to analyze customer support emails to automatically identify whether each message is a complaint, a refund request, or a product question. The team does not want to train or manage a custom model. Which Azure AI service is the best fit?
2. A hospital wants to detect whether medical staff are wearing face masks in images captured at entrances. Which AI workload and Azure service mapping is most appropriate?
3. A financial services company wants a chatbot that can answer questions grounded in internal policy documents and return source citations. The company wants to minimize hallucinations. What is the best approach on Azure?
4. Your team is reviewing responsible AI requirements for an AI system that recommends loan approvals. Which responsible AI concern is most directly addressed by evaluating whether approval rates differ significantly across demographic groups?
5. You are taking a full-length practice test and notice you frequently miss questions where the scenario describes extracting fields (like vendor name and total) from scanned invoices. Which workload/service mapping should you reinforce in your final rapid review?