AI-900 Practice Test Bootcamp

AI Certification Exam Prep — Beginner

Master AI-900 with targeted practice, review, and mock exams.

Beginner · AI-900 · Microsoft · Azure AI Fundamentals · Azure

Prepare for Microsoft AI-900 with a focused exam blueprint

The AI-900 exam by Microsoft validates your understanding of core artificial intelligence concepts and Azure AI services at a foundational level. This course blueprint is designed for beginners who want a clear, structured path to the Azure AI Fundamentals certification without needing prior certification experience. If you have basic IT literacy and want to build confidence through realistic practice and domain-based review, this bootcamp gives you a practical roadmap.

Rather than overwhelming you with unnecessary depth, the course is organized around the official AI-900 exam domains. You will start with exam fundamentals, then move through the tested objective areas in a way that mirrors how Microsoft expects candidates to reason through scenarios. The emphasis is on recognizing workloads, understanding service capabilities, and answering multiple-choice questions with confidence.

Built around the official AI-900 exam domains

This course covers the core domains named in the official objective list:

  • Describe Artificial Intelligence workloads and considerations
  • Describe fundamental principles of machine learning on Azure
  • Describe features of computer vision workloads on Azure
  • Describe features of Natural Language Processing (NLP) workloads on Azure
  • Describe features of generative AI workloads on Azure

Each major content chapter is aligned to one or more of these domains, helping you study in a targeted way. That means every chapter supports what you are most likely to see on the exam, from common AI scenarios and machine learning basics to Azure AI Vision, speech services, language services, and Azure OpenAI concepts.

How the 6-chapter structure helps you study smarter

Chapter 1 introduces the certification journey. You will review the AI-900 format, registration process, scoring expectations, question styles, and a realistic study plan for beginners. This helps reduce uncertainty before you begin your technical review.

Chapters 2 through 5 deliver the core preparation experience. These chapters focus on the official domains and include deep conceptual explanation combined with exam-style practice. The goal is not just memorization, but understanding when to choose a service, how to identify AI scenarios, and how to eliminate incorrect answer options. You will study the differences between AI workloads, machine learning task types, computer vision scenarios, language workloads, and generative AI use cases in Azure.

Chapter 6 serves as your final readiness checkpoint. It includes full mock exam coverage, weak-spot analysis, final review guidance, and practical exam day tips. This chapter is where your preparation comes together and where you sharpen timing, confidence, and answer selection strategy.

Why this course supports passing the exam

The AI-900 is beginner-friendly, but it still tests whether you can distinguish among similar concepts and match business requirements to the right Azure AI capabilities. Many candidates struggle not because the topics are too advanced, but because the wording can be tricky. This blueprint is built to address that challenge through repetition, domain alignment, and realistic practice structure.

  • Clear mapping to Microsoft AI-900 objectives
  • Beginner-friendly sequencing with no assumed certification background
  • Strong emphasis on exam-style multiple-choice preparation
  • Dedicated mock exam and final review chapter
  • Coverage of modern Azure AI topics including generative AI workloads

Whether your goal is to validate your AI fundamentals, prepare for more advanced Azure certifications, or improve your career prospects in cloud and AI, this course gives you a clean and practical path forward. You can register for free to begin building your study plan, or browse all courses to explore additional certification prep options.

Who this bootcamp is for

This course is ideal for students, career switchers, business professionals, and IT beginners who want to understand AI concepts in Azure and pass Microsoft AI-900. It is also a strong fit for learners who prefer practice-driven study instead of long theoretical lectures. If you want a structured blueprint with clear milestones, exam relevance, and final mock exam preparation, this course is designed for you.

What You Will Learn

  • Describe AI workloads and identify common AI scenarios tested on the AI-900 exam
  • Explain fundamental principles of machine learning on Azure, including regression, classification, clustering, and responsible AI
  • Describe computer vision workloads on Azure and select the right Azure AI services for image and video scenarios
  • Describe natural language processing workloads on Azure, including speech, translation, sentiment, and conversational AI
  • Describe generative AI workloads on Azure, including copilots, prompts, responsible use, and Azure OpenAI concepts
  • Apply exam strategy, eliminate distractors, and answer Microsoft-style AI-900 practice questions with confidence

Requirements

  • Basic IT literacy and general comfort using web applications
  • No prior certification experience is needed
  • No programming background is required
  • Interest in Microsoft Azure and AI fundamentals
  • Willingness to practice with multiple-choice exam questions and review explanations

Chapter 1: AI-900 Exam Foundations and Study Plan

  • Understand the AI-900 exam format and objective map
  • Set up registration, scheduling, and test-day logistics
  • Build a beginner-friendly study strategy and revision plan
  • Learn Microsoft-style question patterns and scoring basics

Chapter 2: Describe AI Workloads

  • Recognize core AI workloads and business scenarios
  • Differentiate AI, machine learning, and generative AI concepts
  • Match workloads to Azure AI solution patterns
  • Practice scenario-based questions for Describe AI workloads

Chapter 3: Fundamental Principles of ML on Azure

  • Understand machine learning concepts tested on AI-900
  • Identify regression, classification, clustering, and deep learning basics
  • Explore Azure Machine Learning and model lifecycle essentials
  • Practice question sets for Fundamental principles of ML on Azure

Chapter 4: Computer Vision Workloads on Azure

  • Understand image, video, and document AI scenarios
  • Choose the right Azure AI service for vision tasks
  • Review OCR, facial analysis, and image classification concepts
  • Practice exam-style questions for Computer vision workloads on Azure

Chapter 5: NLP and Generative AI Workloads on Azure

  • Master Azure NLP concepts including speech and language understanding
  • Compare translation, sentiment, entity extraction, and conversational AI
  • Understand generative AI workloads, copilots, prompts, and Azure OpenAI
  • Practice integrated question sets for NLP workloads on Azure and Generative AI workloads on Azure

Chapter 6: Full Mock Exam and Final Review

  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist

Daniel Mercer

Microsoft Certified Trainer and Azure AI Engineer Associate

Daniel Mercer is a Microsoft Certified Trainer with extensive experience preparing learners for Azure certification exams. He specializes in Microsoft AI and cloud fundamentals, translating official exam objectives into beginner-friendly study plans and realistic practice questions.

Chapter focus: AI-900 Exam Foundations and Study Plan

This chapter is written as a guided learning page, not a checklist. The goal is to help you build a mental model for AI-900 Exam Foundations and Study Plan so you can explain the ideas, apply them in practice, and make good trade-off decisions when requirements change. Instead of memorizing isolated terms, you will connect concepts, workflow, and outcomes in one coherent progression.

We begin by clarifying what problem this chapter solves in a real project context, then map the sequence of tasks you would follow from first attempt to reliable result. You will learn which assumptions are usually safe, which assumptions frequently fail, and how to verify your decisions with simple checks before you invest time in optimization.

As you move through the lessons, treat each one as a building block in a larger system. The chapter is intentionally structured so each topic answers a practical question: what to do, why it matters, how to apply it, and how to detect when something is going wrong. This keeps learning grounded in execution rather than theory alone.

  • Understand the AI-900 exam format and objective map
  • Set up registration, scheduling, and test-day logistics
  • Build a beginner-friendly study strategy and revision plan
  • Learn Microsoft-style question patterns and scoring basics

For each topic, you will learn its purpose, how it is used in practice, and which mistakes to avoid as you apply it.

Deep dive: for each of the four objectives above (exam format and objective map; registration, scheduling, and test-day logistics; study strategy and revision plan; question patterns and scoring basics), focus on the decision points that matter most in real work. Define the expected input and output, run the workflow on a small example, compare the result to a baseline, and write down what changed. If performance improves, identify the reason; if it does not, identify whether data quality, setup choices, or evaluation criteria are limiting progress.

By the end of this chapter, you should be able to explain the key ideas clearly, execute the workflow without guesswork, and justify your decisions with evidence. You should also be ready to carry these methods into the next chapter, where complexity increases and stronger judgement becomes essential.

Before moving on, summarize the chapter in your own words, list one mistake you would now avoid, and note one improvement you would make in a second iteration. This reflection step turns passive reading into active mastery and helps you retain the chapter as a practical skill, not temporary information.

Sections in this chapter
Section 1.1: Practical Focus
Section 1.2: Practical Focus
Section 1.3: Practical Focus
Section 1.4: Practical Focus
Section 1.5: Practical Focus
Section 1.6: Practical Focus

Each section deepens your understanding of AI-900 Exam Foundations and Study Plan with practical explanation, decisions, and implementation guidance you can apply immediately.

Across all six sections, focus on workflow: define the goal, run a small experiment, inspect output quality, and adjust based on evidence. This turns concepts into repeatable execution skill.

Chapter milestones
  • Understand the AI-900 exam format and objective map
  • Set up registration, scheduling, and test-day logistics
  • Build a beginner-friendly study strategy and revision plan
  • Learn Microsoft-style question patterns and scoring basics
Chapter quiz

1. You are preparing for the AI-900 exam and want to use your study time efficiently. Which action should you take FIRST to align your preparation with the skills measured on the exam?

Correct answer: Review the official exam skills outline and map your study plan to each objective area
The correct answer is to review the official exam skills outline and build a study plan around the measured objectives. Microsoft exams are based on published skill domains, so using the objective map helps prioritize what is actually assessed. Memorizing portal steps without checking coverage is inefficient because AI-900 is a fundamentals exam that emphasizes concepts and scenarios more than detailed procedural recall. Focusing only on practice questions is also incorrect because real Microsoft exams test understanding of the objective domains, not simple question memorization.

2. A candidate plans to take AI-900 next week but has not yet confirmed exam registration details. On exam day, the candidate wants to avoid preventable delays or missed access. What is the BEST preparation step?

Correct answer: Verify scheduling details, identification requirements, and any test delivery instructions before exam day
The best step is to verify scheduling, ID requirements, and delivery instructions in advance. This aligns with standard certification exam readiness practices and reduces the risk of administrative issues blocking the attempt. Waiting until exam day is risky because identification or environment problems can prevent check-in. Assuming the platform will allow extra time is also wrong because certification exams generally follow strict scheduling and check-in rules, and delays may lead to forfeiture rather than accommodation.

3. A beginner has two weeks to prepare for AI-900. The learner understands basic cloud concepts but struggles to retain AI terminology after long study sessions. Which study approach is MOST appropriate?

Correct answer: Use a structured plan with short study blocks, objective-based review, and regular practice checks
A structured plan with shorter sessions, mapped objectives, and frequent review is the most appropriate beginner-friendly strategy. It supports retention and helps identify weak areas early. Reading everything once and delaying practice until the end is less effective because it provides little feedback during preparation. Skipping foundational topics is incorrect because AI-900 is a fundamentals exam, so understanding core concepts, terminology, and use cases is essential; advanced implementation detail alone does not match the exam's purpose.

4. A learner notices that many Microsoft-style certification questions include extra scenario details and multiple plausible answers. Which strategy is MOST effective for choosing the best answer?

Correct answer: Identify the key requirement in the scenario and eliminate options that do not fully satisfy it
The correct strategy is to identify the key requirement and eliminate options that do not fully meet it. Microsoft exam questions often include distractors that are partially correct but do not satisfy the specific business or technical need described. Choosing the most technical-sounding answer is wrong because the best answer is determined by fit to requirements, not complexity. Ignoring the scenario is also incorrect because certification questions are designed around context, constraints, and intended outcomes.

5. A student asks how scoring works on the AI-900 exam and whether every question should be treated casually because some may not count. Which response is BEST?

Correct answer: Some exams may include unscored items, so the student should still answer every question carefully and manage time across the entire exam
The best response is that some Microsoft exams may include unscored items, but candidates should still answer every question carefully and manage time across the full exam. Candidates are not told which items are scored, so all questions should be treated seriously. Saying only long scenario questions count is false; scoring is not determined by question length in that way. Assuming unfamiliar questions are unscored is also incorrect and dangerous because difficult or unfamiliar items can still be scored and may represent important objective areas.

Chapter 2: Describe AI Workloads

This chapter maps directly to one of the most important AI-900 objective areas: recognizing AI workloads and matching them to common business scenarios. On the exam, Microsoft often describes a business need in plain language and expects you to identify the workload category before choosing a service. That means your first job is not memorizing every Azure product name. Your first job is recognizing the pattern: is the scenario about predicting a number, classifying text, understanding images, extracting fields from forms, translating speech, answering questions from enterprise content, or generating new content from prompts?

In AI-900, the wording is usually broader than in role-based exams. You are expected to understand what a workload does, what type of problem it solves, and which Azure AI solution pattern best fits. A common trap is choosing a technology because a keyword looks familiar instead of identifying the underlying task. For example, “recommend the best category for incoming support tickets” points to classification, while “group customers with similar purchasing behavior” points to clustering. Likewise, “extract invoice totals from scanned receipts” is not generic OCR alone; it is a document intelligence scenario because the goal is structured field extraction from documents.

This chapter integrates the key lessons you must master: recognizing core AI workloads and business scenarios, differentiating AI, machine learning, and generative AI, matching workloads to Azure AI solution patterns, and building test-taking discipline for scenario-based questions. As an exam candidate, train yourself to read from business requirement to workload type, then from workload type to likely Azure service family. Microsoft tests conceptual alignment more than implementation detail at this level.

Exam Tip: When a question feels vague, ask yourself, “What is the system expected to produce?” Predictions suggest machine learning. Descriptions or detections from images suggest computer vision. Extracted meaning from language suggests NLP. New content generated from prompts suggests generative AI. This simple filter eliminates many distractors.

Another high-value habit is distinguishing traditional AI workloads from generative AI. AI is the broad umbrella. Machine learning is one approach within AI for learning patterns from data. Generative AI is a subset focused on creating new content such as text, code, or images based on prompts and learned patterns. The exam may present all three terms together and test whether you understand the relationship. If a system predicts churn, that is machine learning. If a system drafts a customer email from a prompt, that is generative AI. Both are AI, but they solve different types of problems.

You should also connect workload language to Azure solution patterns without overcomplicating the answer. For AI-900, Microsoft expects you to recognize services such as Azure AI Vision, Azure AI Language, Azure AI Speech, Azure AI Document Intelligence, Azure AI Search, Azure Machine Learning, and Azure OpenAI. Questions may not require deployment steps, model architecture, or code. Instead, they test whether you can select the right family of services based on scenario clues.

  • Prediction of values or categories usually maps to machine learning.
  • Detection, analysis, OCR, and image understanding usually map to computer vision.
  • Speech-to-text, text analytics, translation, and question answering map to natural language workloads.
  • Form and invoice extraction map to document intelligence.
  • Searching across large collections with enrichment maps to knowledge mining and Azure AI Search.
  • Content generation, copilots, and prompt-based assistants map to generative AI and Azure OpenAI concepts.
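As a self-study aid, the verb-to-workload filter above can be sketched as a tiny lookup. This is a hypothetical drill tool for practice sessions, not an Azure API; the `WORKLOAD_HINTS` table and `likely_workload` helper are our own invention:

```python
# Hypothetical study helper, not an Azure service: map the main verb in a
# scenario description to the likely AI-900 workload category.
WORKLOAD_HINTS = {
    "predict": "machine learning",
    "forecast": "machine learning",
    "detect": "computer vision",
    "caption": "computer vision",
    "translate": "natural language processing",
    "transcribe": "natural language processing",
    "extract fields": "document intelligence",
    "search": "knowledge mining",
    "generate": "generative AI",
    "draft": "generative AI",
}

def likely_workload(scenario: str) -> str:
    """Return the first workload whose keyword appears in the scenario."""
    text = scenario.lower()
    for keyword, workload in WORKLOAD_HINTS.items():
        if keyword in text:
            return workload
    return "unclear: re-read the question stem for the main verb"

print(likely_workload("Forecast next quarter's stock demand"))    # machine learning
print(likely_workload("Draft a reply email from a short prompt")) # generative AI
```

A real exam question needs more judgement than keyword matching, of course, but drilling the verb-to-workload association this way builds the reflex the Exam Tip below describes.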

Exam Tip: Do not confuse a user interface feature with an AI workload. A chatbot is not automatically generative AI. If it follows predefined intents and responses, it is a conversational AI or question answering solution. It becomes generative AI when it creates original responses from prompts and foundation models.

As you study the sections that follow, focus on the exam objective behind each topic: identify the workload, recognize the common scenario, eliminate near-miss answers, and justify why one Azure solution pattern is more appropriate than another. That is the mindset that earns points on AI-900.

Practice note for Recognize core AI workloads and business scenarios: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 2.1: Describe features of common AI workloads

The AI-900 exam expects you to recognize the major AI workload categories and the kinds of outputs they produce. The most common workloads are machine learning, computer vision, natural language processing, conversational AI, document intelligence, knowledge mining, and generative AI. Each category has a distinct purpose. Machine learning identifies patterns in data to predict outcomes. Computer vision interprets images and video. NLP derives meaning from text or speech. Document intelligence extracts structured data from forms and business documents. Knowledge mining makes large stores of content searchable and usable. Generative AI creates new content based on prompts.

A good exam strategy is to classify workloads by business outcome. If the business wants to forecast sales, detect fraud, or assign risk scores, you are in a prediction workload. If it wants to identify objects in photos, read printed text from images, or analyze a video stream, you are in a vision workload. If it wants to detect sentiment, extract key phrases, translate language, or convert speech to text, you are in an NLP or speech workload. If it wants to answer questions from a body of company content, think of question answering, knowledge mining, or search-based AI patterns.

Microsoft also tests whether you understand that one scenario can involve multiple workloads. A retail app might use vision to scan shelf images, language to summarize customer feedback, and machine learning to forecast stock demand. On the exam, however, the correct answer usually aligns with the primary requirement in the question stem. Read carefully for the main verb: classify, detect, extract, translate, summarize, generate, or predict.

  • Prediction and scoring: machine learning workload
  • Image tagging, OCR, face or object analysis: computer vision workload
  • Sentiment, translation, entity extraction: NLP workload
  • Form field extraction: document intelligence workload
  • Enterprise document indexing and retrieval: knowledge mining workload
  • Prompt-based content creation: generative AI workload

Exam Tip: If two choices seem plausible, choose the one that best matches the output format. For example, extracting invoice number, vendor name, and total amount is more specific than simply reading text from an image, so document intelligence is a better fit than generic OCR alone.

A common trap is selecting “machine learning” for every intelligent system. Machine learning is important, but many Azure AI services expose prebuilt AI capabilities without requiring you to train a custom model. At AI-900 level, you must know when a prebuilt AI service fits the requirement more directly than building a model from scratch.

Section 2.2: Machine learning versus AI workloads in business use cases

This section addresses a classic AI-900 distinction: AI is the broad concept of systems that perform tasks requiring human-like intelligence, while machine learning is a subset of AI that learns patterns from data. On the exam, Microsoft may ask you to separate general AI workloads from machine learning-specific use cases. The safest way to do this is by asking whether the system is learning from historical examples to make predictions or decisions.

Machine learning business scenarios typically include regression, classification, and clustering. Regression predicts a numeric value, such as house price, demand volume, or delivery time. Classification predicts a label, such as approve or deny, churn or retain, spam or not spam. Clustering groups similar items without predefined labels, such as segmenting customers by behavior. While these deeper terms are explored more fully elsewhere in the course, you should already connect them to the “Describe AI workloads” objective because exam questions often disguise them as business problems rather than technical categories.
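Although AI-900 requires no coding, the three task types are easier to remember with a toy contrast. The data, rule, and threshold below are made up purely for illustration:

```python
# Illustrative only: AI-900 does not test code. These toy examples contrast
# the three ML task types using invented data.

# Regression: predict a NUMBER (e.g., price from size).
sizes = [50, 80, 120]      # square metres
prices = [100, 160, 240]   # thousands; in this toy data, price = 2 * size
slope = sum(p / s for s, p in zip(sizes, prices)) / len(sizes)
predicted_price = slope * 100   # predicted price for a 100 m^2 home

# Classification: predict a LABEL (e.g., spam or not spam).
def classify(message: str) -> str:
    return "spam" if "win a prize" in message.lower() else "not spam"

label = classify("Win a prize now!")

# Clustering: GROUP similar items with no predefined labels.
spend = [5, 7, 6, 95, 100, 98]        # customer monthly spend
low = [x for x in spend if x < 50]    # one behavioural segment
high = [x for x in spend if x >= 50]  # another segment
```

The exam will hide these categories inside business wording, so practice translating "forecast a value" to regression, "assign a label" to classification, and "find natural groups" to clustering.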

By contrast, not every AI workload is machine learning from your perspective as a consumer of Azure services. A company using Azure AI Language for sentiment analysis is using AI, but not necessarily building a machine learning model itself. A team using Azure AI Vision to extract text from images is also using AI. The AI-900 exam likes to test whether you can avoid overengineering. If a prebuilt service matches the requirement, it is usually the best answer.

Exam Tip: Look for clues that indicate custom predictive modeling, such as “use historical data to predict future outcomes,” “train a model,” or “score new records.” Those phrases strongly suggest machine learning. If the question instead asks for text translation, OCR, or speech transcription, a specialized Azure AI service is more likely correct.

Generative AI adds another distinction. It is still AI, but unlike traditional predictive models, it is optimized to create original output such as summaries, drafts, code, or conversational responses. A common trap is confusing classification with generation. If the task is to label a support message as billing, technical, or shipping, that is classification. If the task is to draft a response to the message, that is generative AI.

In business scenarios, always map the requirement to the simplest correct workload category first. Then think about Azure. Prediction often points toward Azure Machine Learning. Vision, language, speech, and document extraction often point toward Azure AI services. Prompt-driven content creation points toward Azure OpenAI concepts.

Section 2.3: Computer vision, NLP, document intelligence, and knowledge mining scenarios

AI-900 frequently tests your ability to match scenario descriptions to the correct Azure AI solution pattern. Four areas often appear together because they sound similar at first glance: computer vision, natural language processing, document intelligence, and knowledge mining. The exam challenge is separating them by purpose.

Computer vision scenarios involve understanding images or video. Typical tasks include image classification, object detection, high-level facial analysis, OCR, image captioning, and video analysis. If the input is primarily visual and the system must detect or describe what is present, think vision. Example business cases include inspecting products on a manufacturing line, reading street signs, detecting unsafe conditions from camera feeds, or tagging images in a media library.

NLP scenarios involve deriving meaning from language in text or speech. Typical tasks include sentiment analysis, key phrase extraction, named entity recognition, language detection, translation, summarization, speech-to-text, text-to-speech, and conversational interactions. If the core problem is understanding or producing language, think NLP or speech. Customer feedback analysis, multilingual help desks, meeting transcription, and voice-enabled apps are standard examples.

Document intelligence sits between vision and language but is best treated as its own scenario type for the exam. Its purpose is not just reading text from documents; it is extracting structured fields, tables, and layout information from forms such as invoices, receipts, IDs, tax forms, and contracts. When a question emphasizes business forms and data capture into systems, document intelligence is usually the strongest answer.

Knowledge mining is about unlocking value from large volumes of content by enriching, indexing, and searching it. Azure AI Search often appears here. If a company wants employees to search across manuals, PDFs, emails, and scanned files to find relevant information quickly, knowledge mining is the pattern. It can incorporate AI enrichment such as OCR, entity extraction, and key phrase extraction to improve search quality.

  • Analyze photos or video feeds: computer vision
  • Translate text or transcribe audio: NLP or speech
  • Extract fields from invoices or receipts: document intelligence
  • Search and retrieve insights from enterprise content: knowledge mining

Exam Tip: The phrase “extract data from forms” is your clue for document intelligence. The phrase “make documents searchable” points more strongly to knowledge mining and Azure AI Search. These are common distractor pairs.

Another trap is choosing computer vision when the real need is search and retrieval over a document repository. OCR may be part of the pipeline, but if the business outcome is enterprise search, the broader pattern is knowledge mining.

Section 2.4: Generative AI fundamentals and common productivity use cases

Generative AI is now a visible part of the AI-900 blueprint, and Microsoft expects you to understand foundational concepts without needing deep model architecture knowledge. Generative AI uses large models trained on broad data patterns to create new content in response to prompts. The output might be text, code, summaries, explanations, images, or conversational responses. On the exam, the easiest way to identify generative AI is to look for verbs such as draft, compose, summarize, rewrite, generate, or chat.

Common productivity use cases include drafting emails, summarizing meetings, generating marketing copy, producing code suggestions, creating knowledge assistants, and building copilots that help users complete tasks in natural language. The term “copilot” generally refers to an AI assistant embedded in an application or workflow to improve productivity. It is not just a chatbot; it is a task-oriented assistant that uses context, user intent, and often enterprise data to provide useful output.

Prompting is another exam topic. A prompt is the instruction or context given to a generative model. Better prompts generally produce better results. AI-900 does not require advanced prompt engineering, but you should know that prompts can guide tone, format, constraints, and desired outcome. For example, asking for a summary in bullet points for an executive audience is more specific than asking for a generic summary.
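The contrast between a generic and a specific prompt can be shown side by side. This is illustrative only; the helper function name is invented, and the exact wording is just one reasonable way to encode audience, format, and length constraints:

```python
# Illustrative only: two prompts for the same summarization task. The
# second constrains audience, format, and length, which generally guides
# a generative model toward more useful output. The helper name is invented.

generic_prompt = "Summarize this report."

specific_prompt = (
    "Summarize this report in 3 bullet points for an executive audience. "
    "Focus on risks and recommended actions, and keep each bullet under 20 words."
)

def build_summary_prompt(audience, bullets):
    """Hypothetical helper that assembles a constrained summarization prompt."""
    return (
        f"Summarize the following text in {bullets} bullet points "
        f"for a {audience} audience."
    )

print(build_summary_prompt("executive", 3))
```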

Azure OpenAI appears at a conceptual level in this exam domain. You should understand that it provides access to powerful generative AI models within Azure, with enterprise-oriented governance and security considerations. Expect broad questions about when Azure OpenAI is appropriate, especially for copilots, summarization, and content generation scenarios.

Exam Tip: Do not assume every conversational interface requires generative AI. If the scenario is built around predefined intents, fixed answers, or narrow FAQ retrieval, a traditional conversational AI solution may fit. If the requirement is to generate original responses, summarize documents, or create drafts, generative AI is the better match.

Common traps include confusing search with generation and classification with generation. If the system retrieves an existing paragraph from a policy document, that is not generation by itself. If it creates a plain-language summary of the policy, that is generative AI. If it labels text as positive or negative, that is sentiment analysis, not generation.

Section 2.5: Responsible AI principles across Azure AI workloads

Responsible AI is not a side topic on AI-900; it is a tested concept that applies across all workloads. Microsoft commonly frames responsible AI around principles such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. You do not need to memorize a philosophical essay, but you do need to recognize what each principle looks like in practice and how it affects workload choices.

Fairness means AI systems should avoid unjust bias and treat people equitably. Reliability and safety mean systems should perform consistently and minimize harm. Privacy and security relate to protecting data and controlling access. Inclusiveness means designing for a broad range of users and needs. Transparency means users should understand the limits and behavior of AI systems. Accountability means humans remain responsible for oversight and governance.

These principles appear in different forms across workloads. In machine learning, fairness matters when models affect hiring, lending, insurance, or admissions. In computer vision, safety and privacy matter when processing images of people or sensitive locations. In NLP and generative AI, harmful content, hallucinations, and misuse become important concerns. In document intelligence and knowledge mining, privacy and access control matter because extracted information may be sensitive.

Exam Tip: If a question asks how to reduce risk in a generative AI solution, think about content filtering, human review, grounding responses in trusted data, and making limitations clear to users. If the question is about predictive decision-making, think fairness and accountability.

A common trap is assuming responsible AI means only compliance or security. Those are part of it, but the exam may focus just as much on transparency or fairness. Another trap is believing responsible AI applies only to custom models. It also applies when using prebuilt Azure AI services because organizations are still accountable for how AI is deployed and monitored.

For exam success, tie each principle to a practical scenario. If the scenario mentions biased outcomes across groups, that is fairness. If it mentions protecting customer records, that is privacy and security. If users need to understand why AI should not be solely trusted, that is transparency and accountability. That mapping makes distractors easier to eliminate.

Section 2.6: Exam-style MCQ drills for Describe AI workloads

This objective area is heavily scenario-based, so your exam approach matters as much as your content knowledge. Microsoft-style multiple-choice questions often include one obviously wrong option and two plausible distractors. Your job is to identify the primary workload, not every technology that could be part of the solution. Read the scenario for the business goal, the input type, and the expected output. Those three clues usually narrow the answer quickly.

Start by identifying the input. Is it tabular historical data, text, audio, image, video, scanned forms, or open-ended prompts? Next identify the output. Is the system predicting a value, assigning a label, extracting fields, detecting sentiment, transcribing speech, answering from indexed content, or generating new text? Finally, identify whether the requirement is narrow and structured or open-ended and creative. Narrow structured tasks often point to specialized Azure AI services. Open-ended content creation points to generative AI and Azure OpenAI concepts.

Use elimination aggressively. If the requirement is to group similar customers with no predefined labels, eliminate classification immediately. If the requirement is to pull invoice totals and dates into a system of record, eliminate generic translation or sentiment choices. If the requirement is to summarize meeting notes, eliminate regression and clustering. This sounds basic, but under exam pressure candidates often skip this discipline and get trapped by familiar product names.

  • Ask: what is the business trying to accomplish?
  • Identify the data type: numbers, text, speech, image, video, documents, prompts
  • Identify the output type: prediction, label, extraction, translation, search, generation
  • Map to the simplest correct workload before selecting an Azure service family

Exam Tip: Beware of answers that are technically possible but not the best fit for AI-900. Microsoft generally wants the most direct, managed, and purpose-built Azure AI solution pattern, especially when the scenario is straightforward.

One final coaching point: do not overread the stem. If the question describes a simple need like sentiment analysis or OCR, it is probably testing recognition of that workload, not your ability to design a full enterprise architecture. Choose the answer that aligns cleanly with the stated need, then move on with confidence.

Chapter milestones
  • Recognize core AI workloads and business scenarios
  • Differentiate AI, machine learning, and generative AI concepts
  • Match workloads to Azure AI solution patterns
  • Practice scenario-based questions for Describe AI workloads
Chapter quiz

1. A company wants to predict whether a customer will cancel a subscription next month based on past purchase history, support activity, and account age. Which AI workload best fits this requirement?

Correct answer: Machine learning for prediction
This scenario is a prediction problem based on historical data, which maps to machine learning. The goal is to forecast an outcome such as churn. Computer vision is incorrect because no image or video data is being analyzed. Generative AI is incorrect because the system is not being asked to create new content such as text or images; it is being asked to predict a business outcome.

2. A retail company scans supplier invoices and needs to automatically extract fields such as invoice number, vendor name, and total amount into a structured system. Which Azure AI solution pattern is the best match?

Correct answer: Azure AI Document Intelligence for structured field extraction
The requirement is not just to read text from an image, but to extract structured fields from business documents. That is a document intelligence scenario. Azure AI Vision is a distractor because although OCR can detect text, the business need is field extraction from forms and invoices, which aligns more closely with Azure AI Document Intelligence. Azure OpenAI is incorrect because generating text from prompts does not address form parsing and field extraction.

3. A customer support team wants a solution that can listen to spoken calls, convert the speech to text, and then analyze the text for customer sentiment. Which workload category is most appropriate?

Correct answer: Natural language processing with speech capabilities
This requirement combines speech-to-text and text analysis, both of which are language-related AI capabilities. Therefore, the best fit is a natural language workload with speech services. Computer vision is incorrect because the input is spoken audio rather than images or video. Anomaly detection is also incorrect because the goal is not to identify unusual patterns in numeric sequences, but to transcribe and analyze language.

4. A company wants employees to ask questions in natural language and receive answers grounded in internal policy documents spread across thousands of files. Which Azure AI solution pattern best matches this scenario?

Correct answer: Knowledge mining with Azure AI Search
The scenario describes searching and extracting useful answers from a large body of enterprise content, which is a knowledge mining pattern commonly associated with Azure AI Search. Image classification is incorrect because there is no requirement to analyze pictures. Regression is incorrect because the company does not need to predict a numeric value; it needs to retrieve and surface relevant knowledge from documents.

5. A business wants a system that can draft marketing email content from a short prompt entered by a user. Which statement best describes this solution?

Correct answer: It is generative AI because it creates new content from prompts
Drafting email content from a prompt is a classic generative AI scenario because the system produces new text rather than only classifying or predicting. Computer vision is incorrect because no image understanding is involved. The statement that it is only machine learning and not AI is also incorrect because generative AI is a subset of AI. On the AI-900 exam, Microsoft expects candidates to understand that AI is the broad category, while machine learning and generative AI are specific approaches within it.

Chapter 3: Fundamental Principles of ML on Azure

This chapter maps directly to one of the most heavily tested AI-900 domains: the fundamental principles of machine learning and how those principles appear in Azure. On the exam, Microsoft is not expecting you to build advanced models from scratch. Instead, the test measures whether you can recognize the type of machine learning problem being described, distinguish supervised from unsupervised approaches, understand basic model lifecycle terminology, and choose the appropriate Azure service or concept for a scenario.

The most important exam habit in this chapter is learning to classify the question before you classify the data. Many AI-900 items describe a business scenario in plain language, then ask which kind of machine learning should be used. If you rush to match keywords without identifying the core problem, distractors become very tempting. For example, if a question asks you to predict a numeric value, the answer is almost certainly regression, even if the business context mentions categories, users, or customer segments. If the goal is to assign one of several categories, that points to classification. If the goal is to discover natural groupings without predefined labels, that is clustering.

This chapter also helps you connect conceptual ML knowledge to Azure Machine Learning. The AI-900 exam stays at a fundamentals level, so you should know that Azure Machine Learning supports model training, automated machine learning, data preparation, deployment, endpoint management, and responsible AI practices. You are not expected to memorize deep technical implementation steps, but you should know what the platform is for and how it supports the model lifecycle from experimentation through deployment and monitoring.

Another recurring exam objective is understanding vocabulary. Terms like feature, label, training data, validation data, testing, overfitting, model evaluation, and inferencing often appear in answer choices. Microsoft frequently places two nearly correct options side by side, where the deciding factor is whether the model is learning from labeled examples or simply finding patterns. Careful reading matters. The exam often rewards precise conceptual understanding more than memorization.

Responsible AI is also part of the tested foundation. In AI-900, this usually appears as broad principles rather than policy details. You should recognize that AI solutions should be fair, reliable, safe, private, secure, inclusive, transparent, and accountable. When tied to Azure, these concepts support trustworthy model development and deployment rather than replacing core ML methods.

Exam Tip: When a question asks what Azure service supports the end-to-end machine learning lifecycle, think Azure Machine Learning. When it asks which type of model predicts a number, think regression. When it asks which approach assigns one of several categories, think classification. When it asks how to find similar groups in unlabeled data, think clustering.

As you move through this chapter, focus on the decision patterns the exam tests: what kind of problem is being solved, what type of data is available, how success is measured, and where Azure fits in the workflow. If you can consistently map scenario to ML concept, you will eliminate a large number of distractors and answer Microsoft-style practice questions with much greater confidence.

Practice note: the same discipline applies to every lesson in this chapter, from the core machine learning concepts tested on AI-900 through regression, classification, clustering, and deep learning basics to Azure Machine Learning and the model lifecycle. Document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This improves reliability and makes your learning transferable to future projects.

Section 3.1: Describe fundamental principles of machine learning

Machine learning is a branch of AI in which systems learn patterns from data rather than being programmed with explicit rules for every possible outcome. On the AI-900 exam, this idea is often tested through simple business scenarios: predicting prices, identifying whether a message is spam, grouping customers by behavior, or recognizing trends in historical data. Your goal is to determine whether the scenario describes learning from examples, assigning categories, predicting values, or discovering patterns.

A foundational distinction is between supervised and unsupervised learning. Supervised learning uses labeled data, meaning the training examples include the correct outcome. If you train a model on customer data where each record already shows whether the customer churned, you are using supervision. Unsupervised learning does not include known outcomes and instead tries to identify structure in the data, such as natural clusters. AI-900 questions may avoid these exact labels and instead describe whether the data contains known results. That is your clue.
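That clue can be expressed as a tiny check. This is a mnemonic, not a real ML API; the field names are invented:

```python
# Toy mnemonic, not a real ML API: the presence of known outcomes
# ("labels") in the training data is the clue that separates supervised
# from unsupervised learning. Field names here are invented.

def learning_type(dataset, label_field):
    has_labels = all(label_field in row for row in dataset)
    return "supervised" if has_labels else "unsupervised"

churn_history = [{"age": 30, "churned": True}, {"age": 45, "churned": False}]
raw_behavior = [{"age": 30}, {"age": 45}]

print(learning_type(churn_history, "churned"))  # supervised
print(learning_type(raw_behavior, "churned"))   # unsupervised
```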

Another important principle is that machine learning is data-driven. Models depend on the quality, relevance, and representativeness of the data used to train them. Even a strong algorithm cannot overcome poor data. Exam questions may hint at this by describing inaccurate predictions caused by biased, incomplete, or noisy data. If the scenario emphasizes improving data quality or using more representative training sets, that usually points to better model performance rather than a different AI service.

Machine learning is also probabilistic, not magical. A model produces predictions based on patterns it learned, and those predictions are not guaranteed to be perfect. This matters because the exam may contrast deterministic business logic with predictive models. If the task can be solved with simple fixed rules, it may not require machine learning at all. If the task involves uncertain outcomes or pattern recognition from many examples, ML is more appropriate.

Exam Tip: If a question asks what makes machine learning different from traditional programming, the best answer usually involves learning patterns from data to make predictions or decisions, rather than following only explicitly coded rules.

Common traps include confusing machine learning with broader AI services such as computer vision or natural language processing. Those are AI workloads, but many of them are powered by ML behind the scenes. On the exam, stay focused on the problem type being tested. If the item is about learning from data, evaluation, and prediction, it belongs in the machine learning domain. If it is about analyzing images, speech, or text directly with a managed Azure AI service, that is likely a different exam objective.

Section 3.2: Regression, classification, and clustering explained for beginners

These three model types are among the most testable concepts in AI-900, and Microsoft often checks whether you can distinguish them quickly from plain-English scenarios. The easiest way to separate them is to ask one question: what kind of output is the model supposed to produce?

Regression predicts a numeric value. Examples include forecasting house prices, estimating delivery times, predicting sales revenue, or calculating future energy usage. If the answer is a number on a continuous scale, regression is the correct concept. A common exam trap is seeing categories like low, medium, and high in the scenario description and jumping to classification, even when the actual requested output is a number such as exact sales volume.

Classification predicts a category or class label. Examples include whether a loan application should be approved, whether an email is spam, whether a machine is likely to fail, or which type of customer a record belongs to when labels are already defined. Classification can be binary, such as yes or no, or multiclass, such as red, blue, or green product categories. If the model chooses from known labels, think classification.

Clustering groups similar items together without predefined labels. This is unsupervised learning. A company might cluster customers by purchasing behavior or group documents by similarity when the categories are not known in advance. On the exam, the keyword is often discovery: discover segments, identify natural groups, or organize data by similarity. That points to clustering rather than classification.

  • Regression = predict a number
  • Classification = predict a category
  • Clustering = find groups in unlabeled data
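The three output types can be made concrete with toy functions. These are hand-coded rules for illustration only; real models learn these behaviors from data, and real clustering iterates rather than making a single pass:

```python
from statistics import mean

# Toy stdlib-only illustrations of the three output types. All names and
# numbers are invented; real models would be trained, not hand-coded.

def predict_price(sqft, history):
    """Regression: the output is a NUMBER on a continuous scale."""
    price_per_sqft = mean(price / size for size, price in history)
    return price_per_sqft * sqft

def classify_email(text):
    """Classification: the output is one of a set of PREDEFINED LABELS."""
    return "spam" if "free money" in text.lower() else "not spam"

def cluster_two_groups(values, seed_a, seed_b):
    """Clustering (one nearest-centroid pass): GROUP unlabeled values by similarity."""
    group_a, group_b = [], []
    for v in values:
        (group_a if abs(v - seed_a) <= abs(v - seed_b) else group_b).append(v)
    return group_a, group_b

print(predict_price(1000, [(500, 100_000), (2000, 400_000)]))  # 200000.0
print(classify_email("Claim your free money now"))             # spam
print(cluster_two_groups([1, 2, 9, 10], seed_a=1, seed_b=10))  # ([1, 2], [9, 10])
```

Notice that only the clustering function receives no labels at all; it discovers groups from the values alone, which is exactly the exam clue for unsupervised learning.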

Exam Tip: If labels already exist in the training data, the exam usually points to supervised learning such as regression or classification. If no labels exist and the goal is to find patterns, clustering is the likely answer.

Watch for distractors involving anomaly detection or forecasting language. At the AI-900 level, Microsoft usually keeps the focus on the main family of the problem. If a question asks for grouping similar records, clustering beats classification even if the groups later receive business-friendly names. If a question asks for exact values, regression beats classification even if the values are later bucketed into ranges by humans.

The exam is not trying to test your knowledge of specific algorithms in depth. You do not need a deep mathematical understanding here. You need strong scenario recognition. Train yourself to translate business language into machine learning language: amount means regression, category means classification, similarity means clustering.

Section 3.3: Training data, features, labels, evaluation, and overfitting basics

Once you identify the model type, the next exam skill is understanding the basic ingredients of model training. Training data is the dataset used to teach the model. Features are the input variables used to make a prediction, such as age, income, location, or device type. Labels are the known outcomes in supervised learning, such as the actual house price or whether a transaction was fraudulent. AI-900 often tests whether you can tell inputs from outputs. If an answer choice mixes up features and labels, it is usually wrong.
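The feature-versus-label split can be shown in a few lines. The field names below are hypothetical, but the pattern of separating inputs from known outcomes is exactly what the exam tests:

```python
# Illustrative only: separating features (model inputs) from the label
# (the known outcome) in a supervised dataset. Field names are hypothetical.
records = [
    {"age": 34, "income": 55_000, "churned": False},
    {"age": 52, "income": 72_000, "churned": True},
]

features = [(r["age"], r["income"]) for r in records]  # inputs the model learns from
labels = [r["churned"] for r in records]               # outcomes it learns to predict

print(features)  # [(34, 55000), (52, 72000)]
print(labels)    # [False, True]
```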

Evaluation matters because models must be measured, not assumed to be accurate. A model is trained on historical data, validated or tested on separate data, and then judged based on performance. The exam may refer broadly to accuracy or model performance rather than asking for advanced metrics. What matters is that you understand the purpose of evaluation: to estimate how well the model generalizes to new data.

Overfitting is one of the most common conceptual traps. A model that overfits learns the training data too closely, including noise and quirks, and then performs poorly on unseen data. A student-friendly way to think about it is memorization instead of learning. If a model performs extremely well during training but badly in real-world use, overfitting is a strong possibility. The opposite issue, underfitting, means the model is too simple to capture important patterns.
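The memorization analogy can be demonstrated with a deliberately extreme "model". A lookup table scores perfectly on data it has seen and fails on anything new, which mirrors the high-training-accuracy, low-test-accuracy pattern of overfitting (this is a caricature, not how real models fail, but the contrast is the same):

```python
import random

# Toy demonstration of overfitting-as-memorization, stdlib only.
# A lookup-table "model" is perfect on seen data and useless on new data.

random.seed(0)
data = [(x, x % 2) for x in range(100)]  # feature x, label = parity of x
random.shuffle(data)
train, test = data[:80], data[80:]       # simple hold-out split

memorizer = {x: y for x, y in train}     # "trains" by memorizing every example

def accuracy(model, examples):
    correct = sum(1 for x, y in examples if model.get(x, -1) == y)
    return correct / len(examples)

print(accuracy(memorizer, train))  # 1.0 -- perfect on data it has seen
print(accuracy(memorizer, test))   # 0.0 -- it never saw these x values
```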

Exam Tip: If a question describes excellent training performance but poor test performance, select overfitting, not successful optimization. Microsoft likes this contrast.

Data splitting is another useful concept. Training data teaches the model, while validation or test data helps evaluate whether the model works on new examples. You do not need to memorize exact percentages for AI-900, but you should understand why a separate dataset is necessary. Without it, you cannot fairly measure generalization.
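A hold-out split can be sketched in a few lines. The function name and 80/20 ratio are illustrative choices, not exam requirements; the point is that the test portion is set aside before training:

```python
import random

# Minimal hold-out split sketch, stdlib only. Ratio and names are
# illustrative; AI-900 does not require specific percentages.

def train_test_split(records, test_fraction=0.2, seed=42):
    """Set aside a portion of the data so generalization can be measured fairly."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(list(range(10)))
print(len(train), len(test))  # 8 2
```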

Common exam traps include treating labels as features, assuming more data always guarantees better results, or confusing evaluation with deployment. Evaluation happens before you trust the model in production. Deployment is when the trained model is made available for inferencing, often through an endpoint. If the scenario asks whether the model is accurate enough, think evaluation. If it asks how applications consume predictions, think deployment or inferencing.

Finally, remember that good training data should be relevant and representative. If the data reflects bias or excludes certain populations, the resulting model may perform unfairly. That creates a bridge to responsible AI concepts, which Azure also emphasizes.

Section 3.4: Deep learning concepts and neural network intuition

AI-900 does not expect you to become a neural network engineer, but you should understand what deep learning is and why it matters. Deep learning is a subset of machine learning that uses neural networks with multiple layers to learn complex patterns from large amounts of data. It is especially useful for tasks involving images, speech, and natural language because those domains contain rich, high-dimensional patterns that are difficult to capture with simple rules.

A neural network is inspired loosely by the structure of the brain, though the analogy is simplified for exam purposes. It consists of layers of interconnected nodes. The input layer receives features, hidden layers transform and combine those features, and the output layer produces a prediction. The more layers involved, the deeper the network. This depth helps the model learn increasingly abstract patterns, such as edges in images, then shapes, then objects.
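The input-to-hidden-to-output flow can be sketched as a single forward pass. The weights below are arbitrary rather than trained, and real networks use optimized libraries, but the data flow is the concept the exam expects you to recognize:

```python
import math

# Minimal feedforward sketch: one hidden layer, pure Python.
# Weights are arbitrary (not trained); the point is the data flow
# input layer -> hidden layer -> output layer described above.

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def forward(features, hidden_weights, output_weights):
    hidden = [relu(sum(w * f for w, f in zip(ws, features))) for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

score = forward(
    [1.0, 2.0],
    hidden_weights=[[0.5, -0.2], [0.3, 0.8]],
    output_weights=[1.0, -1.0],
)
print(round(score, 3))  # a value between 0 and 1, usable as a prediction score
```

Training is the process of adjusting those weights from data; depth comes from stacking more hidden layers, each building on the patterns found by the one before it.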

One way the exam may test deep learning is by asking when it is appropriate. If the scenario involves image recognition, object detection, speech transcription, or advanced language understanding, deep learning is often the best conceptual fit. If the scenario is a straightforward tabular business prediction, deep learning may be unnecessary and not the best answer at the fundamentals level.

Exam Tip: On AI-900, deep learning is usually associated with complex pattern recognition tasks, especially in computer vision and NLP. Do not over-select it for every ML scenario.

Another trap is confusing neural networks with the broader Azure services that use them. Azure AI services may rely on deep learning internally, but the exam might ask whether you need to build a custom model or use a prebuilt managed service. If the task is common and well-supported, a prebuilt service may be more appropriate than training a custom deep learning model.

You should also know that deep learning generally benefits from substantial data and compute resources. That is one reason cloud platforms like Azure are important. Azure provides infrastructure and managed services that help teams train and deploy advanced models without manually assembling every component. At the AI-900 level, the key takeaway is not the mathematics of backpropagation, but the practical insight that deep learning is powerful for complex, pattern-heavy tasks and is a major foundation of modern AI workloads.

Section 3.5: Azure Machine Learning capabilities and responsible ML on Azure

Azure Machine Learning is Microsoft’s cloud platform for building, training, managing, and deploying machine learning models. For AI-900, think of it as the end-to-end environment for the ML lifecycle. It supports data preparation, experimentation, automated machine learning, model training, tracking, deployment, and monitoring. If an exam question asks which Azure service helps data scientists and developers manage machine learning workflows from creation to production, Azure Machine Learning is the core answer.

Automated machine learning, often called automated ML or AutoML, is another tested concept. It helps users train and tune models by automating parts of the model selection and optimization process. The exam may present this as a faster way to identify a suitable model for tabular data problems such as regression or classification. This does not eliminate the need for judgment, but it simplifies experimentation.

Deployment is also important. After a model is trained, it can be deployed to an endpoint so applications can send data and receive predictions. The exam may describe this as consuming the model in an app or making real-time predictions. Separate this from training. Training creates the model; deployment exposes it for inferencing.

Responsible AI is a core Azure theme. Microsoft emphasizes principles such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. On AI-900, these principles are not fringe topics; they are part of the fundamentals. If a question asks how to reduce harmful outcomes, improve trust, or ensure ethical use of AI, responsible AI concepts are likely being tested.

Exam Tip: If the scenario is about the ML platform itself, choose Azure Machine Learning. If the scenario is about a broad ethical principle such as avoiding bias or explaining model behavior, think responsible AI rather than a model type.

Common traps include choosing Azure AI services when the question is really about custom model lifecycle management. Azure AI services provide prebuilt AI capabilities, while Azure Machine Learning is for building and operationalizing your own ML solutions. Another trap is treating responsible AI as only a compliance issue. On the exam, it is broader: it is about building systems that are trustworthy, understandable, and appropriate for real users.

In short, Azure Machine Learning gives you the workspace and tools for ML development, while responsible AI provides the principles that should guide how those models are designed, evaluated, and deployed.

Section 3.6: Exam-style MCQ drills for Fundamental principles of ML on Azure

This section is about strategy rather than listing practice questions. In AI-900 multiple-choice items, Microsoft often presents short scenarios with familiar business wording and several plausible answers. Your success depends on turning each scenario into a machine learning pattern. Before looking at options, decide what the problem type is: numeric prediction, category prediction, finding groups, model management on Azure, or ethical use.

The best elimination method is to remove answers that solve a different AI workload. For example, if the question is about predicting a value from historical records, remove clustering immediately because clustering does not predict labeled outcomes. If the question is about discovering segments in unlabeled customer behavior, remove classification because no known classes exist yet. If the question is about managing training and deployment, remove service choices that are only for prebuilt vision or language tasks.

Pay attention to wording such as predict, classify, group, label, train, deploy, evaluate, and monitor. These are signal words. Predict an amount suggests regression. Classify into predefined outcomes suggests classification. Group by similarity suggests clustering. Train and deploy custom models on Azure suggests Azure Machine Learning. Fairness, transparency, and accountability suggest responsible AI.

Exam Tip: On many AI-900 questions, one word is enough to unlock the answer. Number, category, or group usually tells you the model family. Custom model lifecycle usually tells you the Azure service.

Also beware of overthinking. AI-900 is a fundamentals exam. If two answers differ in technical sophistication, the simpler fundamentals-aligned one is often correct. Microsoft is usually not testing obscure algorithm choices here. It is testing whether you understand the basic relationship between business needs and AI concepts.

When reviewing practice drills, ask yourself four things after each item: What was the data? Were labels present? What output was needed? Was the question asking about modeling, deployment, or ethics? If you can answer those consistently, your score improves quickly. This chapter’s lessons on machine learning concepts, core model types, data and evaluation basics, deep learning intuition, Azure Machine Learning capabilities, and exam elimination strategy together form a reliable framework for this objective area.

Chapter milestones
  • Understand machine learning concepts tested on AI-900
  • Identify regression, classification, clustering, and deep learning basics
  • Explore Azure Machine Learning and model lifecycle essentials
  • Practice question sets for Fundamental principles of ML on Azure
Chapter quiz

1. A retail company wants to predict the total dollar amount a customer will spend next month based on previous purchases, account age, and website activity. Which type of machine learning should the company use?

Show answer
Correct answer: Regression
Regression is correct because the goal is to predict a numeric value: the total dollar amount. Classification would be used if the model needed to assign customers to predefined categories such as high, medium, or low spenders. Clustering would be used to discover natural groupings in unlabeled data, not to predict a specific numeric outcome.

2. A bank wants to build a model that determines whether a loan application should be approved or denied based on labeled historical application data. Which machine learning approach should be used?

Show answer
Correct answer: Classification
Classification is correct because the model must assign each loan application to one of two predefined categories: approved or denied. Clustering is incorrect because it finds patterns or groups in unlabeled data rather than predicting known labels. Regression is incorrect because it predicts continuous numeric values, not discrete categories.

3. A marketing team has customer data but no labels. They want to identify groups of customers with similar purchasing behavior so they can design targeted campaigns. Which technique best fits this requirement?

Show answer
Correct answer: Clustering
Clustering is correct because the data is unlabeled and the goal is to find natural groupings of similar customers. Classification is incorrect because it requires predefined labels to learn from. Regression is also incorrect because it is used to predict a numeric value rather than discover segments within the data.

4. A data science team needs an Azure service to prepare data, train models, use automated machine learning, deploy models as endpoints, and monitor the model lifecycle. Which Azure service should they use?

Show answer
Correct answer: Azure Machine Learning
Azure Machine Learning is correct because AI-900 expects you to recognize it as the Azure service that supports the end-to-end machine learning lifecycle, including experimentation, training, deployment, and monitoring. Azure AI Language is a specialized service for language workloads such as sentiment analysis or entity recognition, not general ML lifecycle management. Azure AI Vision is a specialized vision service and does not provide the full model lifecycle platform described in the scenario.

5. You are reviewing a practice exam question that asks which statement best describes supervised learning. Which statement should you choose?

Show answer
Correct answer: It uses labeled data to train a model to predict known outcomes.
The correct statement is that supervised learning uses labeled data to train a model to predict known outcomes. This aligns with AI-900 exam domain knowledge for regression and classification scenarios. The option about grouping data without predefined labels describes unsupervised learning, specifically clustering. The statement that supervised learning is only used for deep learning models in Azure is incorrect because supervised learning is a broad concept that applies to many model types, not just deep learning.

Chapter 4: Computer Vision Workloads on Azure

This chapter maps directly to one of the core AI-900 exam domains: describing computer vision workloads and choosing the correct Azure AI service for image, video, and document scenarios. On the exam, Microsoft rarely rewards deep implementation detail. Instead, it tests whether you can recognize a business scenario, match it to the right Azure service, and avoid distractors that sound technically plausible but solve a different problem. Your goal in this chapter is to build that recognition skill.

Computer vision on Azure includes several related but distinct workloads. Some tasks involve understanding the contents of an image, such as identifying objects, generating captions, tagging visual features, or reading printed and handwritten text. Other tasks involve extracting structured data from forms and receipts. Still others involve face-related capabilities, content moderation, or building custom image models for a narrow business domain. The exam expects you to understand these categories at a conceptual level and know where Azure AI Vision, Azure AI Document Intelligence, Azure AI Face, and custom vision-style solutions fit.

A common exam pattern is to describe a scenario using everyday business language rather than product names. For example, a question may mention scanning invoices, reading street signs from photos, checking manufacturing images for unusual defects, or extracting fields from tax forms. You must translate the scenario into a workload category first, then identify the best matching Azure service. If you skip that first step, distractors become much harder to eliminate.

Exam Tip: Start by asking what the system must return: a caption, tags, detected text, structured fields, identity-related face data, or a custom classification decision. The output usually reveals the service more clearly than the input.

Another area the exam emphasizes is service boundaries. Azure services overlap enough to confuse beginners, but the AI-900 exam rewards precision. OCR is not the same as document field extraction. Image analysis is not the same as custom model training. Face detection is not the same as broad identity recognition. A frequent trap is choosing a general-purpose service when the scenario clearly needs a domain-specific one. For instance, if a business needs key-value pairs from forms, a document extraction service is a better fit than a general image analysis service that can merely read text.

This chapter also ties computer vision to responsible AI. Microsoft expects foundational awareness of ethical and policy boundaries, especially around face-related workloads. The exam may test whether you recognize that not every technically possible use case is supported, appropriate, or available without restrictions. Questions in this area are often less about syntax and more about safe, compliant service selection.

As you work through the sections, focus on these exam outcomes: understanding image, video, and document AI scenarios; choosing the correct Azure AI service for vision tasks; reviewing OCR, facial analysis, and image classification concepts; and strengthening your exam technique for Microsoft-style questions. By the end of the chapter, you should be able to identify the signal words that point to the correct answer and avoid the most common traps in computer vision questions.

Practice note for this chapter's objectives (understand image, video, and document AI scenarios; choose the right Azure AI service for vision tasks; and review OCR, facial analysis, and image classification concepts): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 4.1: Describe common computer vision workloads on Azure

The AI-900 exam expects you to recognize the major categories of computer vision workloads before you worry about product names. The most common workloads are image analysis, OCR, face-related analysis, video understanding, and document information extraction. The exam often describes these in business terms, so train yourself to convert the wording into the underlying workload.

Image analysis means deriving meaning from pictures. This can include generating captions, identifying objects, tagging visible features, detecting brands, or describing image content in natural language. OCR, by contrast, focuses specifically on reading text from images, screenshots, scanned files, or photos of printed and handwritten content. Document extraction goes one step further by turning text and layout into structured outputs such as dates, totals, invoice numbers, and table values.

Video scenarios are usually tested at a high level. Think of video as a sequence of images plus time-based context. If the question asks about identifying scenes, extracting frames, detecting actions, or producing searchable insights from video content, the exam is testing your awareness that video understanding is part of the broader vision workload family. However, on AI-900, you are usually not expected to design a full media processing pipeline.

Another common workload is image classification in a business-specific context. For example, classifying plant diseases, sorting products by packaging type, or identifying damaged versus undamaged items often points to a custom model rather than a general-purpose image analysis API. The exam may contrast broad built-in capabilities with scenarios requiring organization-specific labels.

Exam Tip: Watch for wording like “extract text”, “identify fields”, “classify custom categories”, or “analyze faces”. Those verbs narrow the answer quickly.

  • If the need is to understand general image content, think image analysis.
  • If the need is to read characters, think OCR.
  • If the need is to pull structured values from forms, think document intelligence.
  • If the need is to work with facial attributes or detection, think face-related capabilities and their restrictions.
  • If the need is to classify specialized images using business-defined labels, think custom vision-style modeling.

A classic trap is to assume that any image-based scenario should use one vision service. The exam deliberately separates general image tasks from document-specific and custom-model scenarios. Your best strategy is to identify the expected output format and whether the scenario is general-purpose or domain-specific.
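
The comparisons in this section can be condensed into a single decision sketch. The function name and return strings below are our own shorthand for study purposes, not official service identifiers.

```python
def pick_vision_workload(output: str, domain_specific: bool = False) -> str:
    """Map an AI-900 vision scenario to a workload family (study aid).

    output: what the system must return ("text", "structured fields",
    "face data", or something descriptive like "caption" or "tags").
    domain_specific: True when the labels are unique to one business,
    which is the strongest clue for a custom-trained model.
    """
    if domain_specific:
        return "custom vision-style classification"
    if output == "structured fields":
        return "document intelligence"
    if output == "text":
        return "OCR"
    if output == "face data":
        return "face-related capabilities (mind responsible AI limits)"
    # Captions, tags, objects, and scene descriptions are general image analysis.
    return "image analysis"
```

Note how the same scanned-invoice input routes to document intelligence when the required output is "structured fields" but to OCR when the output is plain "text". That output-first reasoning is exactly the distinction the exam tests.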

Section 4.2: Azure AI Vision features for image analysis and OCR

Azure AI Vision is the core service category to remember for general image understanding and OCR-related scenarios. On the AI-900 exam, you are expected to know that it can analyze images, generate descriptions, detect visual elements, and read text from visual inputs. The exam does not typically require API details, but it absolutely expects you to distinguish these capabilities from other Azure AI services.

For image analysis, think about tasks such as producing a caption for an image, identifying common objects, tagging image content, or recognizing visual categories. If a retailer wants to analyze user-uploaded product photos or a media company wants searchable descriptions of image assets, these are strong Azure AI Vision patterns. The service is designed for prebuilt understanding of common visual concepts rather than custom categories unique to one company.

OCR is another high-value AI-900 topic. Azure AI Vision can extract printed and handwritten text from images. The exam may mention reading street signs, scanning menus, capturing text from a mobile camera, or processing screenshots. These all point toward OCR. The key distinction is that OCR returns text content, while image analysis returns descriptive or semantic understanding of the image itself.

Exam Tip: If the question only needs the words found in the image, OCR is the clue. If it needs meaning about the scene or objects, image analysis is the clue.

A common trap is confusing OCR with document intelligence. OCR reads text, but document intelligence is better when the task requires extracting labeled fields or table structures from business documents. Another trap is choosing a custom model when the scenario simply needs standard image descriptions or built-in OCR. AI-900 often rewards the simplest service that fully meets the stated requirement.

The exam may also test your service-selection discipline using phrases like “minimal training effort” or “prebuilt capability”. Those phrases usually point away from custom training and toward Azure AI Vision or another prebuilt service. If no custom labels, specialized product categories, or organization-specific classes are mentioned, built-in vision features are often the best answer.

When eliminating distractors, ask whether the requirement is unstructured image understanding, raw text extraction, or structured form parsing. That one comparison will help you choose among Azure AI Vision, document intelligence, and custom-model approaches.

Section 4.3: Face-related capabilities, moderation, and responsible use boundaries

Face-related scenarios are memorable on the AI-900 exam because Microsoft uses them to test both technical awareness and responsible AI understanding. At a high level, Azure offers face-related capabilities such as detecting that a face is present and analyzing selected facial characteristics. However, this is an area where students often overgeneralize, so be careful. Not every face-related scenario should be assumed to be supported in the same way, and the exam may intentionally probe service limitations or policy boundaries.

The first concept to know is the difference between detecting a face and identifying a person. Detection answers whether a face exists in an image and where it appears. More advanced face scenarios may involve comparing faces or supporting identity-oriented workflows, but these uses are sensitive and may be limited by responsible AI policies. The AI-900 exam tends to focus on awareness rather than implementation. Expect scenario wording that asks you to recognize that face technologies require careful governance and are not interchangeable with generic image analysis.

Content moderation is another related topic because image applications often need safety screening. If a platform accepts user-uploaded images or videos, moderation may be necessary to detect inappropriate content or enforce platform rules. The exam may pair moderation with computer vision to test whether you understand that analyzing images is not only about extracting value, but also about reducing harm and risk.

Exam Tip: When a question includes identity, surveillance, sensitive attributes, or compliance concerns, do not rush to the most powerful-sounding technical option. Look for the answer that reflects responsible and approved use.

Common traps include assuming face analysis is just another unrestricted tagging feature, or ignoring the ethics dimension entirely. Microsoft certification questions often reward awareness that some capabilities are intentionally limited, reviewed, or governed. If one answer acknowledges service boundaries or responsible use and another answer treats face analysis casually, the responsible option is often better.

For exam strategy, separate these ideas clearly: general image analysis describes scenes and objects; face-related capabilities focus specifically on faces; moderation screens for harmful or inappropriate content; and responsible AI governs whether and how such systems should be used. That structure helps you handle scenario-based questions without guessing.

Section 4.4: Document intelligence and information extraction scenarios

One of the highest-yield distinctions on the AI-900 exam is the difference between reading document text and extracting structured information from documents. Azure AI Document Intelligence is the service family to associate with forms, invoices, receipts, IDs, tax documents, and other business documents where layout and field relationships matter. If the scenario mentions key-value pairs, totals, line items, tables, or named fields, think document intelligence rather than basic OCR.

For example, a company may want to process incoming invoices and automatically capture invoice number, vendor name, due date, and total amount. OCR alone might read the words on the page, but it does not inherently understand which value belongs to which field. Document intelligence is designed to interpret structure and layout so that applications can consume organized outputs. This is exactly the kind of service-selection distinction the AI-900 exam likes to test.

The exam may also mention receipts, purchase orders, forms, or scanned PDFs. These are all clues that information extraction is more important than just text recognition. Another clue is downstream automation. If the extracted data needs to be inserted into a business system with labeled fields, document intelligence is usually the stronger choice.

Exam Tip: OCR answers, “What text is here?” Document intelligence answers, “What data fields does this document contain?”

A common trap is choosing Azure AI Vision because the input is an image or scanned file. That reasoning is incomplete. The exam cares more about the desired output than the file type. If the expected output is structured business data, document intelligence is the better fit. Another trap is choosing machine learning broadly when a prebuilt document model would meet the requirement more directly and with less effort.

As an exam coach, I recommend using a simple mental test: if a person could manually draw boxes around fields like total, date, or customer name, the scenario is probably document intelligence. If the task is merely to make the text searchable or readable, OCR may be enough. That distinction helps you eliminate distractors quickly.

Section 4.5: Custom vision-style concepts, anomaly detection links, and service selection

Not every vision problem is solved by a prebuilt service. The AI-900 exam also expects you to recognize when a scenario requires custom image classification or object detection based on organization-specific labels. Historically, learners often describe this as a custom vision-style scenario: training a model to recognize categories that are meaningful to one business but not part of general image understanding. Examples include classifying defective versus acceptable products, identifying a company’s own product lines, or distinguishing disease types in crop images.

The key exam concept is service selection based on specificity. If the categories are common and broad, a built-in image analysis capability may be enough. If the categories are narrow, proprietary, or unique to the business, a custom-trained approach is a better fit. The exam may frame this as needing to train on labeled images provided by the customer.

There is also a useful connection to anomaly detection thinking. In quality-control scenarios, students sometimes confuse custom image classification with anomaly detection. If the requirement is to determine whether an item belongs to known labeled classes such as damaged, undamaged, or misaligned, that sounds like classification. If the requirement is to identify unusual patterns without a broad library of labels, anomaly detection concepts may be more relevant. AI-900 does not usually demand deep model design here, but it does test whether you can tell the difference in intent.

Exam Tip: The phrase “use our own labeled images” is one of the strongest clues that a custom model is expected.

Common distractors include picking OCR simply because the images contain packaging text, or choosing document intelligence because the source is a scanned image. Stay focused on the business outcome. Are you reading text, extracting document fields, understanding general scenes, or assigning custom classes? Those are different workloads.

When in doubt, compare these three choices: prebuilt image analysis for common objects and captions, document intelligence for structured business documents, and custom vision-style modeling for business-specific labels. Most AI-900 computer vision questions can be solved by making that comparison carefully.

Section 4.6: Exam-style MCQ drills for Computer vision workloads on Azure

This final section is about test-taking skill, not memorizing isolated facts. Microsoft-style AI-900 questions in the computer vision domain usually present a short scenario, then ask for the most appropriate service or capability. Your task is to detect the one requirement that matters most. Students often miss questions not because they lack knowledge, but because they focus on a secondary detail and ignore the primary output.

Begin every multiple-choice vision question with three quick checks. First, identify the input type: image, video, scanned document, or business form. Second, identify the expected output: text, structured fields, caption/tags, face-related analysis, or custom categories. Third, look for constraint words such as prebuilt, custom-trained, minimal development, responsible use, or organization-specific. Those words often eliminate two or three options immediately.

Exam Tip: On AI-900, the best answer is usually the Azure service that solves the stated problem most directly with the least unnecessary complexity.

Here are common patterns to rehearse mentally during practice:

  • Photos or screenshots where the main need is reading text point to OCR.
  • Invoices, receipts, and forms with named fields point to document intelligence.
  • General image understanding with captions or tags points to Azure AI Vision image analysis.
  • Scenarios involving faces require careful distinction and awareness of responsible AI boundaries.
  • Business-specific image categories point to a custom-trained vision approach.

Now for the biggest traps. Trap one: choosing the broadest service instead of the most precise one. Trap two: confusing text extraction with field extraction. Trap three: ignoring ethical or policy restrictions in face-related scenarios. Trap four: selecting a custom model when a prebuilt service clearly satisfies the requirement. Trap five: overthinking implementation details that AI-900 does not ask about.

As you continue practice testing, review every missed vision question by labeling it with one of five buckets: image analysis, OCR, document extraction, face/moderation, or custom vision. If you can consistently sort scenarios into those buckets, your score in this exam objective area will rise quickly. Confidence on AI-900 comes from pattern recognition, and computer vision questions reward exactly that skill.

Chapter milestones
  • Understand image, video, and document AI scenarios
  • Choose the right Azure AI service for vision tasks
  • Review OCR, facial analysis, and image classification concepts
  • Practice exam-style questions for Computer vision workloads on Azure
Chapter quiz

1. A retail company wants to process scanned receipts and extract fields such as merchant name, transaction date, and total amount into a structured format. Which Azure AI service should you choose?

Show answer
Correct answer: Azure AI Document Intelligence
Azure AI Document Intelligence is the best choice because the requirement is to extract structured fields from receipts, not just read text. This matches the AI-900 exam domain that distinguishes document field extraction from general OCR. Azure AI Vision can perform OCR and image analysis, but it is not the best fit for returning receipt fields in a structured schema. Azure AI Face is unrelated because the scenario does not involve facial detection or analysis.

2. A city planning team needs an application that reads printed street signs from photos submitted by field workers. The solution does not need form processing or custom model training. Which Azure AI service is the most appropriate?

Show answer
Correct answer: Azure AI Vision
Azure AI Vision is correct because the main requirement is OCR from images of street signs. On the AI-900 exam, reading text from images maps to a vision workload. Azure AI Document Intelligence is more appropriate when extracting structured fields from documents such as forms, invoices, or receipts. A custom image classification model would classify image categories, but it would not be the best choice for reading text content from signs.

3. A manufacturer wants to train a model to classify product images as either acceptable or defective based on examples from its own assembly line. Which approach best fits this requirement?

Show answer
Correct answer: Use a custom vision-style image classification solution
A custom vision-style image classification solution is correct because the business needs a custom model trained on its own domain-specific images. The AI-900 exam commonly tests the difference between general image analysis and custom classification. Azure AI Face is specifically for face-related analysis and is not intended for product defect classification. Azure AI Document Intelligence is designed for extracting data from documents, not classifying manufacturing images.

4. A company wants to build a photo app that identifies whether a face is present in an image and returns facial landmarks for image framing. Which Azure AI service should be used?

Show answer
Correct answer: Azure AI Face
Azure AI Face is the correct choice because the requirement is face detection and facial analysis. In the AI-900 exam domain, face-related workloads should be mapped to the specialized face service rather than a general image service. Azure AI Vision analyzes images broadly and can describe or tag content, but it is not the primary service for face-specific analysis scenarios. Azure AI Document Intelligence is for document extraction and is unrelated to facial landmarks.

5. You need to recommend a service for a solution that analyzes product photos and returns captions, tags, and general visual features. The solution does not need receipt field extraction, face-specific analysis, or custom training. Which service should you recommend?

Show answer
Correct answer: Azure AI Vision
Azure AI Vision is correct because the scenario requires general image analysis such as captions, tags, and visual feature detection. This aligns directly with AI-900 coverage of computer vision workloads. Azure AI Face would be too narrow because it focuses on face-related analysis rather than broad scene understanding. Azure AI Document Intelligence would be incorrect because it is intended for extracting structured information from documents, forms, and receipts rather than analyzing general product photos.

Chapter 5: NLP and Generative AI Workloads on Azure

This chapter maps directly to a major AI-900 exam objective: describing natural language processing workloads on Azure and recognizing generative AI scenarios, including copilots, prompts, and Azure OpenAI. On the exam, Microsoft rarely asks you to build a solution step by step. Instead, the test measures whether you can identify the correct Azure AI service for a business requirement, distinguish similar-sounding options, and avoid classic distractors. That means you must think in terms of workloads first and products second.

Natural language processing, or NLP, focuses on deriving meaning from text or speech. In AI-900 language, this includes tasks such as sentiment analysis, extracting key phrases, recognizing entities, classifying text, translating content, generating answers from knowledge sources, converting speech to text, converting text to speech, and enabling conversational experiences. A common exam pattern is to describe a user need such as “analyze customer reviews,” “transcribe call recordings,” or “translate spoken conversations,” then ask you to choose the best Azure service category.

Another tested area is generative AI. You need to understand what generative AI workloads do, what copilots are, why prompts matter, and how Azure OpenAI supports enterprise use cases. The AI-900 exam does not expect deep prompt engineering or model training expertise, but it does expect conceptual clarity. You should know the difference between traditional NLP analysis and generative AI content creation. For example, extracting entities from a support ticket is not the same as generating a summary of that ticket, even though both are language-related tasks.

Exam Tip: If the scenario is about analyzing existing text for meaning, think Azure AI Language capabilities. If the scenario is about creating new text, summarizing, drafting, or conversational generation, think generative AI and Azure OpenAI concepts. If the scenario centers on spoken audio, think speech services first.

This chapter integrates the exam objectives around Azure NLP concepts, speech and language understanding, translation, sentiment, entity extraction, conversational AI, and generative AI workloads. It also helps you compare related services and recognize traps. One of the most common mistakes candidates make is choosing a service because the name sounds familiar rather than because the workload matches. AI-900 rewards precise workload-to-service alignment.

As you read, focus on three exam habits. First, isolate the actual task the business wants to perform. Second, identify the input type: text, audio, documents, or prompts. Third, eliminate answers that belong to other AI domains such as computer vision or machine learning model training. This strategy is especially useful when the exam includes plausible but incorrect Azure options.

  • NLP workload recognition: sentiment, translation, entity extraction, question answering, and conversational AI
  • Speech workload recognition: speech-to-text, text-to-speech, and speech translation
  • Generative AI recognition: copilots, prompts, content generation, summarization, and Azure OpenAI concepts
  • Exam strategy: identify trigger words, remove distractors, and match requirement to the most direct Azure AI service
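
The recognition habits above can be sketched as a small routing function. The name `pick_language_workload` and its return strings are illustrative shorthand for study purposes, not product identifiers.

```python
def pick_language_workload(input_type: str, task: str) -> str:
    """Illustrative AI-900 router: input_type is "text" or "audio";
    task is "analyze", "generate", or "translate"."""
    if input_type == "audio":
        # Spoken input or output points to speech services first.
        return "speech (speech-to-text, text-to-speech, speech translation)"
    if task == "generate":
        # Creating new text, summaries, or drafts is generative AI.
        return "generative AI (Azure OpenAI, copilots, prompts)"
    if task == "translate":
        return "translation"
    # Deriving meaning from existing text: sentiment, entities, key phrases.
    return "Azure AI Language analysis"
```

The ordering matters: audio is checked first because spoken input shifts the scenario toward speech services before any text-level distinction applies, which mirrors the exam tip earlier in this chapter.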

By the end of this chapter, you should be able to classify common AI-900 language scenarios quickly and confidently. That is exactly what the certification exam expects: not implementation detail, but sound service selection and conceptual understanding.

Practice note for this chapter's objectives (master Azure NLP concepts including speech and language understanding; compare translation, sentiment, entity extraction, and conversational AI; and understand generative AI workloads, copilots, prompts, and Azure OpenAI): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 5.1: Describe natural language processing workloads on Azure
Section 5.2: Language services for sentiment, key phrases, entities, and question answering
Section 5.3: Speech workloads on Azure including speech to text and translation
Section 5.4: Conversational AI, bots, and language understanding scenarios
Section 5.5: Describe generative AI workloads on Azure including Azure OpenAI and copilots
Section 5.6: Exam-style MCQ drills for NLP workloads on Azure and Generative AI workloads on Azure

Section 5.1: Describe natural language processing workloads on Azure

Natural language processing workloads on Azure involve systems that can read, interpret, classify, translate, summarize, or interact using human language. For AI-900, the exam expects you to recognize common NLP scenarios rather than memorize APIs. Typical workloads include sentiment analysis of reviews, extraction of names and places from documents, automatic language detection, conversational question answering, and speech-based interaction.

Azure groups many text-based capabilities under Azure AI Language. This is often the correct family of services when the input is written text and the task is to detect meaning. If the scenario asks you to identify whether customer feedback is positive or negative, extract key phrases from support tickets, or find organizations and locations in a contract, that points toward language analysis workloads. If the scenario asks you to translate content from one language to another, translation is the key workload. If the scenario is about spoken input or output, that shifts toward speech workloads rather than pure text analytics.

A major exam skill is separating NLP from adjacent domains. For example, analyzing handwritten text from an image is not primarily an NLP question; that starts as a vision or document intelligence problem because text must first be extracted. Likewise, predicting sales from historical data is machine learning, not NLP, even if the data includes text fields. Always identify the main requirement.

Exam Tip: Words like analyze, detect, extract, classify, identify sentiment, and find key phrases usually indicate traditional NLP. Words like generate, draft, summarize, rewrite, or answer creatively often indicate generative AI instead.

Common exam traps include answer choices that mention machine learning model training, custom vision, or bot frameworks when the prompt only asks for text analysis. Unless the scenario explicitly needs custom model development, the AI-900 exam usually favors a managed Azure AI service. Microsoft wants you to choose the simplest service that directly satisfies the requirement.

Another trap is confusing language understanding with conversational delivery. A chatbot is the interface, but the intelligence behind understanding user language may come from language services. On the test, do not choose a bot option merely because users are chatting. Ask what the system must actually do with the language.

Section 5.2: Language services for sentiment, key phrases, entities, and question answering

This section aligns closely with exam scenarios that ask you to compare text analytics tasks. Azure AI Language supports several important capabilities tested in AI-900: sentiment analysis, key phrase extraction, entity recognition, and question answering. The exam often presents a business use case and expects you to match it to the right capability.

Sentiment analysis determines whether text expresses positive, negative, neutral, or mixed opinion. A classic exam example is analyzing product reviews or social media comments. If a question mentions customer mood, satisfaction, or opinion mining, sentiment analysis is the best fit. Key phrase extraction identifies important terms or topics in text, such as pulling “billing error” or “delivery delay” from a support message. Entity recognition identifies specific items such as people, organizations, dates, currency values, and locations. If the scenario requires identifying company names, cities, or account numbers in text, think entities.
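The cleanest way to keep these three capabilities apart is to compare their output shapes. The toy sketch below contrasts them on one piece of text; the word lists and the capitalized-word "entity" pattern are illustrative stand-ins invented for this example, not how Azure AI Language actually works.

```python
import re

# Toy contrast of three output shapes: a sentiment label, a list of key
# phrases, and a list of named entities. All rules here are placeholders.

NEGATIVE_WORDS = {"delay", "error", "angry", "broken"}
TOPIC_PHRASES = ["billing error", "delivery delay"]

def analyze(text: str) -> dict:
    lowered = text.lower()
    return {
        "sentiment": "negative" if any(w in lowered for w in NEGATIVE_WORDS) else "positive",
        "key_phrases": [p for p in TOPIC_PHRASES if p in lowered],
        "entities": re.findall(r"\b[A-Z][a-z]+\b", text),  # crude named-item proxy
    }

print(analyze("Contoso reported a delivery delay in Seattle"))
# prints: {'sentiment': 'negative', 'key_phrases': ['delivery delay'],
#          'entities': ['Contoso', 'Seattle']}
```

On the exam, asking "is the expected output a polarity label, a list of topics, or typed named items?" resolves most of these comparison questions.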

Question answering is different. Instead of simply extracting information, it returns answers from a knowledge source, such as an FAQ or documentation set. If a company wants users to ask natural-language questions like “What is your refund policy?” and receive responses based on existing content, question answering is the concept being tested. This is not the same as open-ended generative AI, even though both may answer questions.
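The grounding idea can be made concrete with a toy retrieval sketch: the answer is looked up from an existing FAQ source rather than generated. The FAQ entries and the word-overlap scoring below are invented for illustration only.

```python
# Toy sketch of knowledge-grounded question answering: responses come
# from a curated FAQ, never from generated text. Entries are made up.

FAQ = {
    "what is your refund policy": "Refunds are issued within 14 days of purchase.",
    "how do i reset my password": "Use the reset link on the sign-in page.",
}

def answer(question: str) -> str:
    words = set(question.lower().strip("?!. ").split())
    best_response, best_overlap = "No grounded answer found.", 0
    for known_question, response in FAQ.items():
        overlap = len(words & set(known_question.split()))
        if overlap > best_overlap:
            best_response, best_overlap = response, overlap
    return best_response

print(answer("What is your refund policy?"))
# prints: Refunds are issued within 14 days of purchase.
```

Note that the system can only return what the knowledge source contains, which is exactly the property that separates question answering from open-ended generative AI.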

Exam Tip: When the answer choices include both entity extraction and key phrase extraction, focus on whether the requirement is to identify named, typed items like people and places, or just important topics. Named items suggest entities; broad themes suggest key phrases.

Common traps include confusing sentiment with intent. Sentiment measures opinion tone, while intent is about what the user wants to do. “I am angry about my bill” contains negative sentiment; “I want to pay my bill” expresses an intent. Another trap is assuming question answering means generating new responses from scratch. In many exam questions, question answering is grounded in an existing knowledge base or content source.

To eliminate distractors, ask yourself what output is expected: polarity score, extracted terms, labeled entities, or direct answers. The clearer you are about the desired output, the faster you can pick the correct Azure language capability.

Section 5.3: Speech workloads on Azure including speech to text and translation

Speech workloads are a frequent AI-900 topic because they are easy to describe in business scenarios. Azure speech capabilities support converting spoken audio into text, converting text into spoken audio, recognizing speakers in some contexts, and translating speech. The exam usually focuses on speech-to-text, text-to-speech, and speech translation.

Speech-to-text is used when organizations want meeting transcripts, call center transcription, voice note conversion, or searchable records from audio. If the input is spoken language and the output is text, this is the right concept. Text-to-speech is the reverse: turning written content into natural-sounding audio. This appears in scenarios such as accessibility, voice assistants, and spoken notifications.

Speech translation is especially testable because it combines two tasks. The system accepts spoken language in one language and produces translated output in another. If a scenario mentions real-time multilingual meetings or translating a speaker during a presentation, speech translation is a strong match. If the input is text rather than audio, then ordinary text translation is more appropriate.

Exam Tip: Look carefully at the input and output modes. Audio in and text out suggests speech-to-text. Text in and audio out suggests text-to-speech. Audio in one language and translated result out suggests speech translation. The exam often hides the answer in these input-output clues.
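The input-output reading in the tip above can be written down as a small decision function. This is a study sketch of the exam heuristic, not Azure logic; the parameter names are invented.

```python
# Minimal sketch of the input/output clue reading: classify the speech
# workload from the modalities the scenario describes.

def speech_workload(input_mode: str, output_mode: str, translated: bool = False) -> str:
    if input_mode == "audio" and translated:
        return "speech translation"
    if input_mode == "audio" and output_mode == "text":
        return "speech-to-text"
    if input_mode == "text" and output_mode == "audio":
        return "text-to-speech"
    if input_mode == "text" and translated:
        return "text translation"
    return "not a speech workload"

print(speech_workload("audio", "text"))                    # speech-to-text
print(speech_workload("text", "audio"))                    # text-to-speech
print(speech_workload("audio", "audio", translated=True))  # speech translation
```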

A common trap is choosing translation services when the real challenge is first understanding spoken audio. Another is selecting conversational AI simply because users are speaking. Speech is about modality; conversation is about interaction flow and intent. A voice-enabled bot may use both, but the exam question usually emphasizes one primary requirement.

When eliminating wrong answers, remove any service tied to images, object detection, or predictive machine learning if the scenario is clearly audio-based. Also watch for wording like “real-time captions,” “voice commands,” “spoken prompts,” and “multilingual speech,” all of which point toward Azure speech workloads rather than general language analytics.

Section 5.4: Conversational AI, bots, and language understanding scenarios

Conversational AI combines interface and intelligence. On AI-900, you should understand that a bot is the conversation channel or application experience, while language understanding helps interpret what the user means. Many candidates miss questions in this area because they focus only on the word “chatbot” and ignore the underlying requirement.

A conversational AI solution may answer FAQs, route requests, collect information, or automate support interactions. If the requirement is to create a customer-facing system that can converse using text or voice, a bot concept is involved. If the requirement is to detect what the user is asking for, classify their request, or extract details from their message, language understanding is involved. In practice, these often work together.

For exam purposes, you do not need architectural depth. You do need to know how to distinguish scenarios. If the company wants a virtual agent that responds to users across channels, bot technology is relevant. If the company wants to identify the intent behind messages such as “book a flight” or “check my order,” the exam is testing language understanding. If the company wants answers from an FAQ, question answering may be the better fit than broad intent modeling.
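Intent recognition can be pictured as mapping an utterance to what the user wants to do. The toy sketch below uses invented intent names and phrase lists; a real language understanding model generalizes far beyond exact phrase matching.

```python
# Toy sketch of intent recognition: map an utterance to a named intent.
# Intent names and phrase lists are invented for this example.

INTENTS = {
    "BookFlight": ["book a flight", "reserve a flight"],
    "CheckOrder": ["check my order", "order status"],
    "PayBill": ["pay my bill"],
}

def detect_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "None"

print(detect_intent("I want to pay my bill"))     # PayBill
print(detect_intent("I am angry about my bill"))  # None
```

Notice that "I am angry about my bill" carries negative sentiment but no actionable intent, which is precisely the sentiment-versus-intent distinction the exam likes to test.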

Exam Tip: Ask yourself whether the problem is “How do users interact with the system?” or “How does the system interpret the user’s words?” The first points to bots and conversational interfaces. The second points to language understanding capabilities.

Common traps include selecting sentiment analysis for any customer-service scenario, even when the actual task is intent recognition or FAQ response. Another trap is choosing generative AI for every chatbot situation. Generative AI can power conversational experiences, but the exam may instead be asking about classic bot workflows or knowledge-based answers.

Use elimination strategically. If the scenario describes multi-turn interaction, self-service support, or conversation across channels, think conversational AI. If it emphasizes extracting the user’s goal or filling slots such as dates or locations, think language understanding. If it emphasizes answering known policy questions from curated content, think question answering.

Section 5.5: Describe generative AI workloads on Azure including Azure OpenAI and copilots

Generative AI is now a prominent AI-900 exam area. You need to understand what generative AI does, how copilots use it, and why Azure OpenAI matters. Generative AI workloads create new content based on prompts and patterns learned from large data sets. Common examples include drafting emails, summarizing documents, generating code suggestions, rewriting text, extracting insights into a natural-language summary, and answering open-ended questions.

Azure OpenAI provides access to powerful language models in the Azure ecosystem, with enterprise governance, security, and responsible AI considerations. On the exam, you are not expected to know deployment mechanics in depth. You are expected to recognize that Azure OpenAI supports generative use cases such as content generation, summarization, chat experiences, and natural-language interaction over enterprise data when combined with appropriate solutions.

A copilot is an AI assistant embedded into an application or workflow that helps users perform tasks more efficiently. The key idea is assistance, not full autonomy. Copilots can suggest text, summarize information, answer questions, and automate parts of business processes while keeping the human in the loop. If the exam describes an assistant that helps employees write, search, or analyze within a familiar app, that is a copilot-style workload.

Prompts are the instructions given to a generative model. Better prompts typically yield more relevant outputs. AI-900 treats prompts conceptually: they guide model behavior. You should also understand responsible use issues such as hallucinations, harmful content, data grounding, and the need for human review.
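A prompt is just structured instruction text, so it can be sketched as a template function. The wording below is an assumption chosen for illustration, and no model is called; note how the final instruction line nudges the model toward admitting uncertainty, a small mitigation against hallucination.

```python
# Minimal sketch of how a prompt guides generative behavior. The template
# wording is an illustrative assumption; no model is invoked here.

def build_summary_prompt(case_text: str, audience: str, max_sentences: int) -> str:
    return (
        f"Summarize the following support case for a {audience} "
        f"in at most {max_sentences} sentences. "
        "If any detail is uncertain, say so instead of inventing it.\n\n"
        f"Case text:\n{case_text}"
    )

prompt = build_summary_prompt(
    "Customer reports login failures since Monday after a password reset.",
    "support manager", 2,
)
print(prompt)
```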

Exam Tip: If the requirement is to generate or transform content in natural language, Azure OpenAI is often the conceptual match. If the requirement is to classify, score, or extract facts from text, traditional language services are usually the better answer.

Common traps include assuming generative AI is always more appropriate than simpler NLP tools. If a company only needs sentiment scores, using a generative model would not be the most direct or exam-friendly answer. Another trap is overlooking responsible AI. Microsoft frequently expects you to recognize the need for content filtering, transparency, and human oversight when generative systems are used in business settings.

Section 5.6: Exam-style MCQ drills for NLP workloads on Azure and Generative AI workloads on Azure

This section is about test-taking strategy rather than memorization. AI-900 multiple-choice questions on NLP and generative AI often include one clearly correct answer, one partially correct but too broad answer, one answer from a different AI domain, and one outdated or irrelevant option. Your job is to identify the core workload fast and eliminate anything that does not directly match the requirement.

Start with trigger words. Reviews, opinions, and customer satisfaction usually point to sentiment analysis. Important topics point to key phrase extraction. Names, places, dates, and currencies point to entity recognition. FAQ responses point to question answering. Audio transcripts point to speech-to-text. Multilingual spoken conversation points to speech translation. Drafting, summarizing, rewriting, and open-ended chat point to generative AI and Azure OpenAI concepts.

Next, classify the input and output. This is one of the best ways to avoid traps. If the input is audio, speech services deserve immediate attention. If the input is text and the output is a score or label, traditional NLP is likely correct. If the output is newly generated prose, summary text, or a conversational draft, generative AI is likely the better fit.

Exam Tip: Eliminate any answer that solves a different modality first. For example, if no image or video is involved, remove vision services. If no prediction from structured historical data is required, remove machine learning training options. This can cut the answer set in half quickly.

Watch for broad wording. Some distractors sound impressive but are less precise than a managed Azure AI capability. AI-900 generally rewards the simplest correct service. Also be careful with “chatbot” language. A chatbot may use question answering, language understanding, speech services, or generative AI depending on the actual requirement. Do not stop at the interface label.

Finally, think like Microsoft. The exam wants to know whether you can recommend an appropriate Azure service category responsibly. That means choosing solutions that align with user needs while recognizing responsible AI concerns, especially in generative workloads. If an answer mentions human review, content safety, or grounded responses in a generative scenario, it may be a stronger choice than one focused only on raw capability.

Chapter milestones
  • Master Azure NLP concepts including speech and language understanding
  • Compare translation, sentiment, entity extraction, and conversational AI
  • Understand generative AI workloads, copilots, prompts, and Azure OpenAI
  • Practice integrated question sets for NLP workloads on Azure and Generative AI workloads on Azure
Chapter quiz

1. A company wants to analyze thousands of customer product reviews to determine whether each review expresses a positive, negative, or neutral opinion. Which Azure AI capability should they use?

Correct answer: Sentiment analysis in Azure AI Language
Sentiment analysis in Azure AI Language is the correct choice because the requirement is to evaluate the opinion expressed in text. Speech-to-text is used to convert spoken audio into written text, so it does not directly determine sentiment. Computer Vision image classification is unrelated because the scenario involves written reviews rather than images. On the AI-900 exam, identifying the input type and the business task is key to selecting the correct service.

2. A support center needs to convert recorded phone calls into written transcripts so agents can search and review conversations later. Which Azure AI service should they choose?

Correct answer: Azure AI Speech for speech-to-text
Azure AI Speech for speech-to-text is correct because the source data is spoken audio and the goal is transcription. Azure AI Language entity extraction analyzes text to identify items such as names, dates, or locations, but it does not transcribe audio. Azure OpenAI can generate or summarize text, but it is not the primary service for converting speech recordings into text. AI-900 questions often distinguish speech workloads from text analysis and generative workloads.

3. A global organization wants users to speak in English during meetings and have the spoken content translated into Spanish in near real time. Which workload best matches this requirement?

Correct answer: Speech translation
Speech translation is the best match because the scenario involves spoken input and translated spoken or text output in another language. Key phrase extraction identifies important phrases from existing text, but it does not handle live spoken translation. Conversational language understanding is used to detect user intent and entities in utterances for apps and bots, not to translate speech between languages. On AI-900, trigger words such as speak, meeting, and translated usually indicate a speech service scenario.

4. A company wants to build an internal copilot that can draft email replies and summarize long support cases based on user prompts. Which Azure service is the most appropriate choice?

Correct answer: Azure OpenAI Service
Azure OpenAI Service is correct because the workload is generative AI: drafting replies and summarizing content from prompts. Azure AI Language is primarily used for analyzing existing text, such as sentiment, entity extraction, or key phrase detection, rather than generating new content as the main task. Azure AI Vision is focused on image and visual analysis, which is unrelated here. The AI-900 exam expects you to distinguish analysis of language from generation of language.

5. A business wants to extract names of people, organizations, and locations from insurance claim text submitted by customers. Which Azure AI capability should they use?

Correct answer: Named entity recognition in Azure AI Language
Named entity recognition in Azure AI Language is correct because the goal is to identify and categorize entities such as people, organizations, and locations from text. Text-to-speech performs the opposite type of speech task by converting written text into audio, so it does not analyze claim documents. Prompt-based content generation in Azure OpenAI creates new content rather than extracting structured information from existing text. In AI-900, entity extraction is a classic NLP analysis scenario and should be matched to Azure AI Language.

Chapter 6: Full Mock Exam and Final Review

This chapter is the bridge between knowing AI-900 content and performing well under exam conditions. Up to this point, the course has covered the core tested domains: AI workloads and common scenarios, machine learning fundamentals on Azure, computer vision workloads, natural language processing workloads, and generative AI concepts including responsible AI and Azure OpenAI basics. Now the focus shifts from learning topics in isolation to combining them the way Microsoft does on the actual exam. That means mixed-domain thinking, careful reading, elimination of distractors, and disciplined review of weak spots.

The AI-900 exam is not primarily a math test, a coding test, or an architecture-deep-dive exam. It tests whether you can recognize the right AI concept, identify the correct Azure AI service for a business need, distinguish similar-sounding options, and apply foundational responsible AI principles. In a full mock exam, the challenge is less about any one topic and more about context switching. You may move from classification to face detection, then to sentiment analysis, then to copilots, then to responsible AI governance. This chapter helps you rehearse that exact experience.

The two mock exam parts in this chapter should be treated as timed simulation blocks. Do not casually skim them. Instead, use them to practice pacing, confidence control, and answer selection habits. Your goal is to become faster at recognizing keywords that map to exam objectives. For example:

  • Predict a numerical value: regression.
  • Assign labels like approved or denied: classification.
  • Group unlabeled records into similar sets: clustering.
  • Analyze images for objects, tags, OCR, or captions: computer vision services.
  • Speech-to-text, translation, sentiment, key phrases, or question answering: language and speech services.
  • Prompt design, copilots, or responsible content generation: generative AI.

Exam Tip: The AI-900 exam often rewards clear concept recognition more than deep implementation detail. When two answers seem close, ask which one best fits the business task described, not which one sounds more technically impressive.

As you work through this chapter, pay attention to your error patterns. Did you miss questions because you confused service names? Because you overlooked words like classify, detect, generate, summarize, or translate? Because you changed correct answers after second-guessing? Weak spot analysis is not just about domains; it is about decision habits. Strong candidates do not simply mark right or wrong. They identify why an answer was missed and what clue should have led to the correct choice.

  • Use mixed practice to strengthen domain switching.
  • Review wrong answers by concept, not only by score.
  • Watch for Microsoft-style distractors that are plausible but not the best fit.
  • Prioritize service-purpose matching over memorizing every product detail.
  • Finish with a practical exam day checklist and final review plan.

This final chapter is therefore a performance chapter. It combines Mock Exam Part 1, Mock Exam Part 2, Weak Spot Analysis, and the Exam Day Checklist into one structured finish. If you engage with it actively, you will leave not just more knowledgeable, but more exam-ready.

Practice note for Mock Exam Part 1, Mock Exam Part 2, and Weak Spot Analysis: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 6.1: Full-length mixed-domain mock exam set one
Section 6.2: Full-length mixed-domain mock exam set two
Section 6.3: Answer review strategy and explanation-based remediation
Section 6.4: Final domain-by-domain checklist for AI-900
Section 6.5: Test-taking tactics, flagging strategy, and confidence control

Your first full-length mixed-domain set should simulate the mental rhythm of the real AI-900 exam. The purpose is not to test one chapter at a time, but to force quick recognition across all objectives. In this set, expect a deliberate blend of AI workloads, machine learning, computer vision, NLP, and generative AI. The exam often checks whether you can separate foundational concepts that sound similar. For instance, a model that predicts sales amounts is regression, while a model that predicts whether a customer will churn is classification. A system that groups customers by shared behavior without predefined labels is clustering. In mixed practice, these distinctions must become automatic.
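The regression/classification/clustering reflex described above hinges on one question: what kind of label, if any, does the scenario provide? The helper below is a toy decision aid for drilling that distinction, not a real model-selection tool.

```python
# Toy decision helper for the regression / classification / clustering
# distinction, keyed on the kind of label the scenario provides.

def ml_task(has_labels: bool, label_type: str = "") -> str:
    if not has_labels:
        return "clustering"      # group similar records, no predefined labels
    if label_type == "numeric":
        return "regression"      # predict an amount, e.g. monthly sales
    if label_type == "category":
        return "classification"  # predict a class, e.g. churn vs. no churn
    return "unclear: re-read the scenario"

print(ml_task(True, "numeric"))   # regression
print(ml_task(True, "category"))  # classification
print(ml_task(False))             # clustering
```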

When reviewing this first mock set, focus on why the correct answer matches the scenario wording. Microsoft-style questions typically include one best answer, one answer from the wrong AI domain, one answer that is too advanced or unrelated, and one answer that sounds generally technical but does not solve the specific requirement. A common trap is choosing a real Azure service that is capable in a broad sense but not the intended service for the task described. For example, if the scenario is about extracting text from images, the correct direction is OCR-related computer vision capability, not a general machine learning platform.

Exam Tip: Read the final sentence of the scenario first, then go back and scan for constraints such as minimize effort, no-code, analyze images, classify text, build a chatbot, or generate content responsibly. Those phrases often reveal the correct answer path faster than the background details.

Use this set to measure pacing. If you are spending too long comparing two answers, ask yourself whether the exam is testing a concept category or a specific Azure offering. AI-900 is usually checking broad understanding first. Another trap in mock exam set one is overthinking responsible AI questions. The exam usually expects recognition of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability at a basic level. Do not invent edge cases that are not stated. Anchor your answer to the principle most directly violated or supported by the scenario.

After completing this set, create a short performance log. Record the domains you answered confidently, the domains where you guessed, and the exact phrases that confused you. That record becomes the foundation for targeted remediation in later sections.

Section 6.2: Full-length mixed-domain mock exam set two

The second full-length mock set should feel harder even if the topics are familiar. That is because the objective now is endurance and consistency. Many learners do well on content they have just reviewed, but accuracy drops when similar concepts appear repeatedly in different forms. This set should include more service-selection judgment and more contrast between traditional AI workloads and generative AI workloads. On AI-900, you must know not only what machine learning is, but also when an Azure AI service is the better answer than building a custom model.

One major exam objective tested here is choosing the correct service family. The exam wants you to see the boundary lines clearly:

  • Analyzing forms, extracting printed or handwritten text, or understanding document structure: document intelligence capabilities.
  • Image tagging, object detection, facial analysis at the foundational level, or OCR from images: computer vision capabilities.
  • Speech synthesis, transcription, translation, or language detection: speech and language services.
  • Creating natural-language responses, summarizing content, or building copilots with large language models: generative AI and Azure OpenAI-related concepts.

Exam Tip: Watch for questions that mix the words “chatbot,” “question answering,” and “copilot.” These are not always interchangeable. A classic conversational bot, a knowledge-base answer system, and a generative AI assistant can overlap in function but are not the same exam concept.

Set two is also where candidates often fall into the trap of selecting a custom machine learning workflow when the scenario only calls for a prebuilt AI capability. AI-900 tends to value practical service selection. If the task is standard and common, the expected answer is often a managed Azure AI service rather than designing and training a model from scratch. Another common trap is confusing predictive analytics with content generation. A model that forecasts demand is not generative AI. A model that drafts text or summarizes a document is.

After this second set, compare your performance with set one. Did your machine learning errors decrease while your service-selection errors increased? Did you lose points in generative AI because of vague understanding of prompts, grounding, or responsible use? This comparison matters more than the raw score because it tells you whether your issue is knowledge, fatigue, or imprecise reading under pressure.

Section 6.3: Answer review strategy and explanation-based remediation

Weak spot analysis is most effective when you stop treating missed items as isolated mistakes. Instead, classify each miss into one of a few exam-relevant categories: concept confusion, service confusion, careless reading, distractor attraction, or overthinking. This is the heart of explanation-based remediation. The goal is not to memorize the right answer from one question; it is to become able to recognize the same pattern in a different form on test day.

Start with concept confusion. If you repeatedly mix up regression, classification, and clustering, create a three-line comparison and rehearse it until it becomes reflexive. If you confuse NLP with speech or computer vision with document intelligence, identify the input type and expected output. Next, review service confusion. Ask what business task each Azure AI service is designed to solve at a high level. AI-900 does not require deep deployment steps, but it does require correct matching of need to service. Then address careless reading. Many incorrect answers come from ignoring words such as “best,” “most appropriate,” “prebuilt,” or “without training a custom model.”

Exam Tip: For every wrong answer, write a one-sentence explanation that begins with “The clue was…” This forces you to identify the exact text feature that should have guided your choice.

Explanation-based remediation also helps with responsible AI questions. If you selected fairness when the scenario was really about transparency, identify the signal. Was the issue bias in outcomes, or was it the inability to explain how a result was produced? If you chose privacy when the issue was reliability and safety, ask whether the concern was data exposure or system failure and harm. These distinctions appear frequently because the principles are related but not identical.

Finally, review your correct answers too. If you guessed correctly, do not count that as mastery. Mark it as unstable knowledge. Stable knowledge means you can explain why three options are wrong, not just why one is right. That level of review is what turns mock exams into score improvement rather than score reporting.

Section 6.4: Final domain-by-domain checklist for AI-900

Your final checklist should map directly to the exam objectives. Begin with AI workloads and common scenarios. Be able to distinguish computer vision, NLP, speech, anomaly detection, conversational AI, generative AI, and machine learning scenarios from short business descriptions. Next, confirm machine learning fundamentals. You should confidently identify regression, classification, clustering, training versus inference, features versus labels, and the basic purpose of model evaluation. You do not need advanced formulas, but you do need conceptual precision.

For computer vision, verify that you can recognize image classification, object detection, OCR, facial analysis concepts at a foundational level, and document extraction scenarios. For NLP, ensure you can identify sentiment analysis, key phrase extraction, entity recognition, language detection, translation, speech-to-text, text-to-speech, and conversational solutions. For generative AI, make sure you understand prompts, completions, copilots, grounding concepts at a basic level, content generation use cases, and responsible AI concerns such as harmful output, hallucinations, and governance controls. Also be comfortable with Azure OpenAI as a Microsoft Azure offering for generative AI capabilities.
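To make the input/output shape of a task like sentiment analysis concrete, here is a toy keyword-based scorer. This is emphatically not Azure AI Language (a real solution would call the prebuilt sentiment API rather than match keywords), and the word lists are invented for the example; the point is simply that sentiment analysis takes text in and returns a label such as positive, neutral, or negative.

```python
# Toy sentiment scorer -- NOT Azure AI Language, just the concept shape.
# Word lists are invented; a prebuilt cloud service replaces all of this.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "angry"}

def toy_sentiment(message: str) -> str:
    words = {w.strip(".,!?").lower() for w in message.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

On the exam, recognizing that the input is written text and the output is an opinion label is exactly what separates sentiment analysis from image-based capabilities like object detection or face detection.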

Exam Tip: If a domain feels weak, do not try to relearn everything. Build a one-page “trigger phrase” sheet. Match phrases like predict a number, assign a category, group similar items, analyze an image, extract text, detect sentiment, translate speech, generate content, and improve prompt quality to the correct concept or service family.
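A trigger-phrase sheet like the one in the tip above can even be kept as a small lookup table you quiz yourself against. This is a personal study aid, not anything from Microsoft; the mappings simply mirror the phrase list in the tip.

```python
# Hypothetical self-quiz table mapping exam "trigger phrases" to concepts.
TRIGGERS = {
    "predict a number": "regression",
    "assign a category": "classification",
    "group similar items": "clustering",
    "analyze an image": "computer vision",
    "extract text from an image": "OCR",
    "detect sentiment": "sentiment analysis",
    "translate speech": "speech translation",
    "generate content": "generative AI",
}

def quiz(phrase: str) -> str:
    # Unknown phrases go straight onto the weak spot log.
    return TRIGGERS.get(phrase.lower(), "unknown - add to weak spot log")
```

Drilling these pairings until they are reflexive is exactly what converts scenario wording into quick points on exam day.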

Include a responsible AI check in every domain. Microsoft frequently integrates responsible use across topics rather than isolating it. Ask yourself whether you can identify fairness, inclusiveness, privacy and security, accountability, transparency, and reliability and safety in practical scenarios. This is a common scoring opportunity because the questions are often straightforward if you know the vocabulary.

Finally, verify exam readiness at the service-recognition level. You do not need expert architecture knowledge, but you should not be surprised by the names of major Azure AI services or by scenario wording that points to them. If your checklist reveals uncertainty, focus your final review on distinctions, not volume.

Section 6.5: Test-taking tactics, flagging strategy, and confidence control

Strong AI-900 performance depends as much on execution as on knowledge. Start with a pacing rule. Move steadily through the exam, answering direct recognition questions quickly and saving deeper comparison questions for later review if needed. Do not let one uncertain item consume the time you need for several easier points. A practical flagging strategy is to answer the best option you currently see, flag the item, and continue. This prevents blank answers and protects your pace.

Confidence control matters because AI-900 includes many familiar-sounding answer choices. Candidates often lose points by changing correct answers after second-guessing themselves. If your first answer came from a clear keyword match and objective-based reasoning, trust it unless you later find a specific clue you missed. Changing answers because an option “sounds more Azure-like” is not a strong reason. Changing because you noticed the scenario explicitly asked for a prebuilt service rather than custom training is a valid reason.

Exam Tip: Use elimination aggressively. Remove any option from the wrong AI domain first. Then remove answers that are too broad, too advanced, or not aligned to the business requirement. Going from four options to two greatly improves accuracy.

Another key tactic is reading for intent. Microsoft questions often contain extra context that is realistic but nonessential. Train yourself to spot the task, the data type, the desired outcome, and any implementation constraint. This is especially important in generative AI questions, where scenarios may include references to copilots, prompts, summarization, or responsible filters. The tested skill is usually selecting the right concept, not admiring the complexity of the scenario.

Manage stress by expecting a few uncertain questions. No candidate feels perfect on every item. Your job is not to achieve certainty on all questions; it is to make the highest-probability choice using domain recognition, elimination, and calm pacing. That mindset keeps one difficult item from affecting the next five.

Section 6.6: Final review plan, exam day readiness, and next certification steps

Your final review plan should be light, targeted, and confidence-building. In the last study session before the exam, avoid cramming large new topics. Instead, review your weak spot log, your one-page domain checklist, and the explanations for any misses you classified as concept confusion or service confusion. Revisit trigger phrases and responsible AI principles. The goal is to sharpen recall, not create cognitive overload.

For exam day readiness, verify practical details early: testing appointment, identification, check-in instructions, internet and room requirements for online delivery if applicable, and time needed to settle in. Then do a short mental warm-up by recalling high-yield distinctions: regression versus classification versus clustering; prebuilt service versus custom model; computer vision versus document intelligence; sentiment versus translation versus speech; chatbot versus generative copilot; fairness versus transparency versus privacy. These are the distinctions that often convert hesitation into quick points.

Exam Tip: On exam morning, review only concise notes. If you open a large resource and start discovering unfamiliar details, you may increase anxiety without improving your score.

As you finish the exam, use any remaining time to revisit flagged questions methodically. Do not reopen every item. Focus only on those where you had a clear uncertainty. Re-read the scenario and ask what objective is being tested. Usually the best answer becomes clearer when you stop reading for detail and start reading for category and intent.

After passing AI-900, the next step depends on your role. If you are moving toward Azure implementation, continue into role-based Azure AI or data certifications. If you are a business stakeholder or beginner, use this certification as proof that you understand foundational AI workloads, Azure AI service selection, and responsible AI basics. Either way, this chapter marks the transition from practice mode to certification readiness. Trust the structure you have built: mixed mock exams, weak spot analysis, exam strategy, and a calm final review process.

Chapter milestones
  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist
Chapter quiz

1. You are taking a timed AI-900 practice exam. A question asks for the type of machine learning used to predict next month's sales revenue based on historical sales data. Which type of machine learning should you select?

Correct answer: Regression
Regression is correct because the goal is to predict a numerical value, which is a core machine learning concept tested in AI-900. Classification would be used to predict a category or label such as approved or denied, not a continuous number. Clustering is used to group unlabeled data by similarity and does not predict a known numeric target.

2. A company wants to build a solution that reads printed text from photos of receipts and extracts the text for downstream processing. Which Azure AI capability best fits this requirement?

Correct answer: Optical character recognition (OCR) in Azure AI Vision
OCR in Azure AI Vision is correct because the business requirement is to detect and extract text from images. Sentiment analysis is for identifying opinion or emotional tone in language and would not read text from an image. Anomaly detection is used to identify unusual patterns in data and is unrelated to extracting printed characters from receipt photos.

3. During a weak spot review, you notice that you often confuse AI services with similar-sounding features. On the real exam, which strategy is most likely to improve answer accuracy?

Correct answer: Match the business task in the scenario to the primary purpose of the service
Matching the business task to the primary purpose of the service is correct because AI-900 emphasizes selecting the best-fit Azure AI service for a stated scenario. Choosing the most advanced-sounding option is a common trap and often leads to distractors that are plausible but not the best fit. Memorizing product names alone is insufficient because the exam tests service-purpose matching and concept recognition in context.

4. A support center wants to analyze customer chat messages to determine whether each message expresses a positive, neutral, or negative opinion. Which Azure AI capability should you recommend?

Correct answer: Sentiment analysis
Sentiment analysis is correct because the requirement is to classify text according to opinion or emotional tone. Object detection is used to identify and locate objects in images, not analyze text. Face detection is also an image-based capability and would not determine whether a written chat message is positive or negative.

5. A candidate reviews a mock exam and finds they missed several questions after changing their original answers despite having identified the correct concept initially. According to good final-review practice for AI-900, what should the candidate do next?

Correct answer: Analyze the pattern behind the mistakes, including second-guessing and missed keywords
Analyzing the pattern behind mistakes is correct because effective weak spot analysis includes identifying whether errors came from concept confusion, overlooked keywords, or poor decision habits such as second-guessing. Focusing only on score is wrong because it does not reveal why answers were missed. Retaking the exam without reviewing incorrect items is also ineffective because it skips the targeted reflection needed to improve exam performance.