Microsoft AI Fundamentals AI-900 Exam Prep

AI Certification Exam Prep — Beginner

Pass AI-900 with beginner-friendly Microsoft exam prep

Prepare for Microsoft AI-900 with Confidence

Microsoft Azure AI Fundamentals, exam code AI-900, is designed for learners who want to understand core artificial intelligence concepts and Azure AI services without needing a deep technical background. This course blueprint is built specifically for non-technical professionals who want a clear, structured path to certification. Whether you work in business, operations, sales, project management, education, or administration, the course helps you understand what Microsoft expects on the exam and how to answer questions in the AI-900 style.

The course follows the official Microsoft exam domains and organizes them into a practical six-chapter learning path. Chapter 1 introduces the certification itself, including exam registration, scheduling, scoring expectations, question formats, and a realistic beginner study strategy. Chapters 2 through 5 cover the official knowledge areas in depth, while Chapter 6 brings everything together through a full mock exam, targeted review, and final test-day preparation.

Official AI-900 Exam Domains Covered

This course maps directly to the major AI-900 domains listed by Microsoft. Learners will build understanding across the following areas:

  • Describe AI workloads and considerations
  • Describe fundamental principles of machine learning on Azure
  • Describe features of computer vision workloads on Azure
  • Describe features of natural language processing workloads on Azure
  • Describe features of generative AI workloads on Azure

Each domain is presented in beginner-friendly language first, then reinforced with exam-style scenario practice so learners can connect definitions, services, and business use cases.

How the 6-Chapter Structure Helps You Pass

Chapter 1 helps you start correctly. Many first-time candidates are unsure how Microsoft exams work, what the passing experience feels like, or how to organize study time. This opening chapter removes that uncertainty and gives you a study framework that fits busy schedules.

Chapter 2 focuses on describing AI workloads, a foundational area that helps learners distinguish common AI solution patterns such as prediction, classification, anomaly detection, computer vision, natural language processing, and generative AI. It also introduces responsible AI principles, which appear frequently in Microsoft fundamentals exams.

Chapter 3 explains the fundamental principles of machine learning on Azure. You will learn the differences between supervised and unsupervised learning, how regression and classification work at a conceptual level, and what Azure Machine Learning does in the Microsoft ecosystem. The goal is not coding mastery, but exam-ready understanding.

Chapter 4 covers computer vision workloads on Azure, including image analysis, OCR, visual detection scenarios, and when Azure AI Vision or custom solutions may apply. Chapter 5 combines NLP workloads on Azure with generative AI workloads on Azure, helping learners connect language analysis, speech services, translation, conversational AI, prompt-based systems, and Azure OpenAI Service concepts.

Chapter 6 is designed as a capstone. It includes a full mock exam experience, weak-spot analysis, service comparison review, and exam-day tactics so learners can move from content familiarity to performance readiness.

Built for Beginners and Non-Technical Professionals

This AI-900 blueprint assumes basic IT literacy but no prior certification experience and no programming background. Concepts are introduced in plain language, then linked to Microsoft terminology so learners can recognize the wording they will likely see on the real exam. That makes the course especially useful for first-time certification candidates who need clarity, structure, and confidence.

  • Clear mapping to official Microsoft AI-900 objectives
  • Scenario-based learning instead of overly technical explanations
  • Exam-style practice woven into every content chapter
  • A dedicated mock exam and final review chapter
  • Study methods tailored to busy adult learners

Why This Course Works on Edu AI

On Edu AI, this course is positioned as a focused exam-prep experience rather than a generic AI overview. That means every chapter is aligned with the certification goal: helping you recognize concepts, compare Azure AI services, and answer AI-900 questions with less guesswork. If you are ready to begin, register for free and start building your certification plan. You can also browse all courses to explore related Azure and AI learning paths.

By the end of this course, learners will have a complete roadmap for the Microsoft AI-900 exam, stronger recall of all official domains, and a practical final review process that supports first-attempt success.

What You Will Learn

  • Describe AI workloads and common AI solution scenarios, aligned to the AI-900 exam domain "Describe AI workloads"
  • Explain the fundamental principles of machine learning on Azure, including supervised, unsupervised, and responsible AI concepts
  • Identify computer vision workloads on Azure and match Azure AI services to image analysis, OCR, face, and video scenarios
  • Identify natural language processing workloads on Azure, including text analytics, speech, translation, and conversational AI
  • Describe generative AI workloads on Azure, including copilots, prompts, foundation models, and responsible generative AI considerations
  • Apply Microsoft AI-900 exam strategy, question analysis, elimination techniques, and final review methods to improve pass readiness

Requirements

  • Basic IT literacy and comfort using web applications
  • No prior certification experience is needed
  • No programming background is required
  • Interest in Microsoft Azure AI concepts and certification preparation
  • Willingness to practice with exam-style questions and review weak areas

Chapter 1: AI-900 Exam Foundations and Study Strategy

  • Understand the AI-900 exam format and objectives
  • Set up registration, scheduling, and account readiness
  • Build a beginner-friendly study plan
  • Learn scoring, question styles, and exam success habits

Chapter 2: Describe AI Workloads

  • Recognize core AI workload categories
  • Match business problems to AI solution types
  • Understand responsible AI fundamentals
  • Practice AI-900 style workload identification questions

Chapter 3: Fundamental Principles of ML on Azure

  • Understand machine learning basics in plain language
  • Differentiate supervised, unsupervised, and reinforcement learning
  • Identify Azure machine learning capabilities and workflows
  • Practice AI-900 style ML questions with scenario matching

Chapter 4: Computer Vision Workloads on Azure

  • Identify image and video AI scenarios
  • Map vision use cases to Azure AI services
  • Understand OCR, face, and custom vision options
  • Practice AI-900 style computer vision questions

Chapter 5: NLP and Generative AI Workloads on Azure

  • Understand language AI and speech workloads
  • Match NLP tasks to Azure AI services
  • Explain generative AI concepts, prompts, and copilots
  • Practice AI-900 style NLP and generative AI questions

Chapter 6: Full Mock Exam and Final Review

  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist

Daniel Mercer

Microsoft Certified Trainer and Azure AI Specialist

Daniel Mercer is a Microsoft Certified Trainer who has coached beginners and business professionals through Azure certification paths. He specializes in translating Microsoft AI concepts into exam-ready language, with a strong focus on AI-900 objectives and practical test strategy.

Chapter 1: AI-900 Exam Foundations and Study Strategy

The Microsoft Azure AI Fundamentals AI-900 certification is designed to validate broad, entry-level understanding of artificial intelligence concepts and the Microsoft Azure services that support them. This is not a deep developer exam, and that distinction matters immediately for how you study. The exam expects you to recognize common AI workloads, identify the best-fit Azure AI service for a scenario, understand core machine learning ideas at a conceptual level, and demonstrate awareness of responsible AI and generative AI considerations. In other words, AI-900 tests whether you can speak the language of Azure AI clearly enough to support business conversations, project planning, solution mapping, and early design decisions.

Many learners make the mistake of treating AI-900 as either too technical or too easy. Both assumptions are risky. It is beginner-friendly, but the exam is precise. Microsoft often tests whether you can distinguish between similar services, such as image analysis versus OCR, speech translation versus text translation, or classical machine learning concepts versus generative AI scenarios. A passing candidate is not just familiar with AI buzzwords; they can connect a workload to the correct Azure capability and avoid distractors that sound plausible but do not fully match the requirement.

This chapter gives you the foundation for the rest of the course. You will learn how the exam is structured, how the official objectives map to your study plan, how to register and prepare your exam logistics, how scoring and question styles work, and how to build study habits that improve retention and pass readiness. Because AI-900 spans several broad domains, your success depends less on memorizing isolated facts and more on recognizing patterns. When a scenario mentions extracting printed text from images, you should immediately think OCR. When the scenario involves classifying customer feedback sentiment, that points toward natural language processing. When it describes generating content from prompts, that belongs to generative AI workloads. The exam rewards this kind of fast, organized pattern recognition.

Exam Tip: Throughout your preparation, ask two questions for every topic: “What workload is this?” and “Which Azure service best fits it?” That habit mirrors the logic used in many AI-900 questions.

You should also approach this certification as a strategic stepping stone. For business analysts, project managers, sales specialists, and students, AI-900 proves credible foundational knowledge. For future Azure administrators, data scientists, or AI engineers, it builds vocabulary that supports more advanced Azure study later. The best preparation combines conceptual learning, service recognition, and practical exam discipline. By the end of this chapter, you should know not only what the exam covers, but how to prepare in a way that is efficient, realistic, and aligned to how Microsoft tests.

  • Understand the AI-900 exam format and objective areas before diving into service details.
  • Set up your Microsoft and Pearson VUE readiness early so logistics do not disrupt your study timeline.
  • Use a beginner-friendly study plan focused on Azure AI workloads, not deep coding skills.
  • Learn the scoring mindset, common question styles, and elimination techniques that improve accuracy.

Think of Chapter 1 as your exam roadmap. The chapters that follow will teach you the actual AI content domains in depth. This chapter shows you how to turn that content into a passing performance on exam day.

Practice note for each Chapter 1 milestone (understanding the exam format, setting up registration and scheduling, and building your study plan): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 1.1: Overview of the Microsoft Azure AI Fundamentals AI-900 certification
Section 1.2: Official exam domains and how they are weighted, from Describe AI workloads to Generative AI workloads on Azure
Section 1.3: Registration process, Pearson VUE options, identification, and scheduling tips
Section 1.4: Exam format, scoring model, passing mindset, and question types
Section 1.5: Study strategy for non-technical professionals and note-taking methods
Section 1.6: How to use practice questions, review loops, and final-week revision plans

Section 1.1: Overview of the Microsoft Azure AI Fundamentals AI-900 certification

AI-900 is Microsoft’s foundational certification for learners who need broad awareness of artificial intelligence and Azure AI services. It is aimed at beginners, but “beginner” should not be confused with “casual.” The exam is built to confirm that you understand AI workloads, can recognize common solution scenarios, and can identify the Azure tools used to address them. That means the exam is as much about classification and service matching as it is about definitions.

From an exam-objective perspective, AI-900 covers five major areas you will see repeatedly in your preparation: describing AI workloads and considerations, explaining the fundamental principles of machine learning on Azure, identifying computer vision workloads, identifying natural language processing workloads, and describing generative AI workloads on Azure. These map directly to the course outcomes and to the way Microsoft frames entry-level AI understanding. The exam does not require you to build production models, write complex code, or tune architectures. Instead, it tests whether you know what AI can do, when to use it, and which Azure offering fits a given business need.

A common trap is assuming that broad familiarity with AI terms is enough. In reality, AI-900 often separates candidates who know general AI vocabulary from those who know Microsoft-specific service positioning. For example, knowing what object detection is helps, but you also need to know which Azure AI capability supports image analysis scenarios. Knowing what chatbots are is useful, but you must also recognize conversational AI in the Azure context.

Exam Tip: Build a “workload to service” map as you study. If you can quickly connect vision, language, speech, machine learning, and generative AI scenarios to the correct Azure offering, you will answer many exam questions more confidently.
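Such a map can even live in a tiny script you extend as you study. The sketch below is a hypothetical study aid, not exam content: the service names are commonly cited Azure offerings, but you should verify each pairing against Microsoft's current documentation before relying on it.

```python
# Hypothetical study aid: a minimal workload-to-service map for AI-900 review.
# Pairings are a starting sketch; verify against Microsoft documentation.
WORKLOAD_TO_SERVICE = {
    "computer vision": "Azure AI Vision",
    "natural language processing": "Azure AI Language",
    "speech": "Azure AI Speech",
    "machine learning": "Azure Machine Learning",
    "generative AI": "Azure OpenAI Service",
}

def lookup(workload):
    """Return the best-fit service for a workload category, if mapped."""
    return WORKLOAD_TO_SERVICE.get(workload, "unmapped -- review this area")

print(lookup("speech"))            # Azure AI Speech
print(lookup("knowledge mining"))  # unmapped -- review this area
```

The "unmapped" fallback is deliberate: any scenario clue that does not resolve to a service is a signal that the corresponding domain needs another review pass.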

This certification is also valuable beyond the exam itself. For non-technical professionals, it creates confidence in AI discussions. For technical learners, it establishes the conceptual base needed before moving to role-based Azure certifications. Treat AI-900 as a structured introduction to how Microsoft organizes AI solutions on Azure, because that is exactly what the exam measures.

Section 1.2: Official exam domains and how they are weighted, from Describe AI workloads to Generative AI workloads on Azure

One of the smartest study habits for any certification is to begin with the official skills outline. AI-900 is objective-driven, and Microsoft publishes domain areas with approximate weightings. While percentages can change over time, the important exam strategy is to study proportionally. If a domain has higher weighting, it deserves more time, more review cycles, and more scenario practice. If a domain is smaller, you still need coverage, but not at the expense of the heavily tested areas.

At a high level, the AI-900 domains move from broad to specific. You begin with describing AI workloads and considerations, which includes understanding what AI is used for and recognizing common scenarios. Then you study the fundamentals of machine learning on Azure, including supervised learning, unsupervised learning, regression, classification, clustering, and responsible AI principles. From there, the exam branches into solution categories: computer vision workloads, natural language processing workloads, and generative AI workloads on Azure. This sequencing matters because later domains assume you can already recognize the type of workload being described.

When learners underperform on AI-900, it is often because they study topic lists as isolated facts rather than as categories. Microsoft does not only ask “what is OCR” in a direct way. It may present a business requirement and expect you to infer that the task is text extraction from images, which is a vision workload, and then identify the matching Azure service. The same applies to NLP and generative AI scenarios. The exam therefore tests both recognition and discrimination.

Exam Tip: Organize your notes by domain, but also create a second layer by scenario type. For example: “read text from images,” “analyze sentiment,” “detect objects,” “translate speech,” “generate content from prompts.” This helps you answer applied questions faster.

Pay special attention to machine learning fundamentals and responsible AI. Candidates sometimes overfocus on trendy generative AI terms and neglect traditional ML concepts such as classification versus regression or supervised versus unsupervised learning. That is a mistake. The exam expects foundational literacy across the full outline, not just familiarity with current AI headlines. Your preparation should reflect the official domains, their approximate weighting, and the scenario-based style in which those domains are tested.

Section 1.3: Registration process, Pearson VUE options, identification, and scheduling tips

Exam success starts before you answer the first question. Administrative mistakes create avoidable stress, and stress reduces performance. For AI-900, you typically register through Microsoft’s certification portal and complete scheduling through Pearson VUE. You may have options for online proctored testing or a physical test center, depending on availability in your region. Choose based on your environment, reliability, and personal focus style rather than convenience alone.

If you test online, your room setup matters. You will need a quiet, private space, a dependable internet connection, and a computer that passes the system check. Online delivery is convenient, but candidates often underestimate the strictness of the environment requirements. Background noise, desk clutter, unauthorized materials, or unstable connectivity can create interruptions. A test center may feel less flexible, but for many learners it reduces technical uncertainty.

Make sure your legal identification matches your registration details exactly or closely enough to satisfy exam policies. Do not leave this until the day before the exam. Also verify your Microsoft account access in advance, especially if you have multiple personal and work accounts. Confusion over sign-in credentials is more common than many candidates expect.

Exam Tip: Schedule your exam date early, even if it is several weeks away. A booked date creates urgency and helps you build a structured study plan backward from exam day.

Choose your time slot strategically. If you focus best in the morning, avoid a late-night appointment. If your home environment is noisy at certain hours, do not schedule during those windows. Build in margin for check-in procedures, technical validation, and pre-exam nerves. Practical readiness is part of exam readiness. A strong candidate does not just know AI concepts; they also remove preventable obstacles that could affect concentration or confidence on test day.

Section 1.4: Exam format, scoring model, passing mindset, and question types

AI-900 is typically presented as a short fundamentals exam, but it still requires disciplined execution. You should expect a variety of question styles rather than a single repetitive format. These may include multiple-choice questions, multiple-response items, scenario-based prompts, and other structured formats Microsoft commonly uses in fundamentals exams. The exact item mix can vary, so your goal is not to predict the exam perfectly but to be comfortable reading carefully and identifying what the question is truly asking.

The passing score is commonly reported on a scaled score model, and many candidates misunderstand what that means. A scaled score is not simply a raw percentage. Different question sets may vary, and Microsoft uses scoring methods designed to maintain consistent exam standards. For your study strategy, the key takeaway is this: do not chase a specific raw percent. Instead, aim for consistent competence across all domains, with extra strength in the highest-weighted areas.

Mindset matters more than many learners realize. Fundamentals exams often include answer choices that are partially correct, broadly related, or technically possible but not best aligned with the requirement. Your job is to choose the most appropriate answer in the Azure context. That means watching for keywords such as classify, predict, detect, extract, analyze, translate, summarize, or generate. Those verbs often reveal the intended workload type.

Exam Tip: If two answers both seem plausible, ask which one most directly satisfies the exact business need described. The best AI-900 answer is usually the most specific fit, not the most general technology reference.

A common trap is rushing because the exam is labeled “fundamentals.” Candidates skim, miss qualifiers like image versus text, speech versus language, or training versus inferencing, and lose easy points. Read slowly enough to catch distinctions, but not so slowly that you overanalyze simple items. A passing mindset combines calm reading, workload recognition, and disciplined elimination of distractors that sound modern or impressive but do not match the requirement precisely.

Section 1.5: Study strategy for non-technical professionals and note-taking methods

AI-900 is especially approachable for non-technical professionals, but success comes from structured study, not passive exposure. If you do not come from a coding or data science background, your advantage is often that you naturally think in business scenarios. Use that strength. Instead of starting with abstract definitions alone, begin by asking what organizations want AI systems to do: classify images, analyze customer sentiment, translate speech, extract text, forecast trends, or generate content. Then connect each scenario to the underlying AI concept and Azure service.

A practical beginner-friendly study plan should include short, regular sessions across multiple weeks. For example, one session might focus on machine learning concepts, another on computer vision, another on NLP, and another on generative AI and responsible AI. Revisit topics instead of studying them once. Spaced repetition is powerful for service recognition, which is central to this exam.

Your notes should be compact, comparative, and scenario-based. Do not just write long paragraphs copied from documentation. Instead, create tables or bullet lists with columns such as workload, business use case, key clue words, Azure service, and common confusion points. For machine learning, compare classification, regression, and clustering side by side. For language services, compare sentiment analysis, translation, speech recognition, and conversational AI. For generative AI, note the roles of prompts, copilots, and foundation models.

Exam Tip: Add a “not this” column to your notes. Example: OCR is for reading text from images, not for detecting sentiment in text. This kind of contrast helps eliminate wrong answers quickly.
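If you keep notes digitally, the "not this" column fits naturally into a simple structured list. This is purely an illustrative note format, using the OCR and sentiment examples from the tip above; the second entry's contrast is an assumption added for symmetry.

```python
# Hypothetical "not this" note format, mirroring the tip above.
# Each entry pairs a capability with a common confusion to rule out.
NOTES = [
    {"capability": "OCR",
     "is_for": "reading text from images",
     "not_this": "detecting sentiment in text"},
    {"capability": "sentiment analysis",
     "is_for": "scoring opinion or tone in text",
     "not_this": "extracting printed text from scanned documents"},
]

for note in NOTES:
    print(f"{note['capability']}: {note['is_for']} (NOT: {note['not_this']})")
```

Reviewing the "not this" side of each entry is a fast elimination drill: most AI-900 distractors are exactly these adjacent-but-wrong capabilities.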

Another effective method is the one-page domain summary. After studying a domain, close the materials and write what you remember from memory. Then check gaps. This active recall process shows what you truly know, not what merely looks familiar. Non-technical learners often outperform more technical candidates when they use disciplined note-taking and scenario mapping because AI-900 rewards understanding and differentiation more than implementation detail.

Section 1.6: How to use practice questions, review loops, and final-week revision plans

Practice questions are useful, but only if you use them correctly. Their purpose is not to memorize answer keys or hope that the same wording appears on the exam. Their real value is diagnostic. They show whether you can recognize the tested concept, distinguish between similar services, and remain accurate under exam-like pressure. After each practice session, spend more time reviewing explanations than counting scores. Ask yourself why the correct answer fits better than the distractors.

A strong review loop has three steps. First, attempt practice questions honestly without immediate help. Second, categorize every miss by reason: concept gap, service confusion, careless reading, or overthinking. Third, revisit source material and update notes based on that specific weakness. This turns practice into targeted improvement rather than random repetition. If you keep missing NLP versus speech distinctions, that becomes a focused review topic. If you confuse supervised and unsupervised learning, revise those comparisons until the difference feels automatic.

The final week before the exam should be about consolidation, not cramming. Review your domain summaries, service comparison notes, and common trap list. Revisit high-yield distinctions: classification versus regression, OCR versus image analysis, text translation versus speech translation, chatbot versus text analytics, traditional ML versus generative AI. Keep study sessions shorter and sharper so you maintain confidence and avoid burnout.

Exam Tip: In the final 48 hours, prioritize clarity over volume. Reviewing core patterns calmly is more effective than trying to absorb large amounts of new material at the last minute.

Also prepare your exam-day routine. Confirm your appointment, identification, login details, and testing environment. Sleep matters. So does emotional control. A candidate who enters the exam rested, organized, and practiced in elimination techniques often performs better than one who studied more content but arrives distracted. Your goal in the final stage is simple: convert knowledge into dependable exam execution. That is how you turn preparation into a passing result.

Chapter milestones
  • Understand the AI-900 exam format and objectives
  • Set up registration, scheduling, and account readiness
  • Build a beginner-friendly study plan
  • Learn scoring, question styles, and exam success habits
Chapter quiz

1. A candidate is beginning preparation for the Microsoft AI-900 exam. Which study approach is MOST aligned with the actual purpose and difficulty of the exam?

Correct answer: Focus on recognizing AI workloads and matching them to the correct Azure AI services at a conceptual level
AI-900 is an entry-level fundamentals exam that emphasizes conceptual understanding, common AI workloads, responsible AI awareness, and identifying the best-fit Azure service for a scenario. Option A matches this objective. Option B is too developer-focused for AI-900, which does not require deep coding ability. Option C is also too detailed and operational; the exam is not primarily about memorizing portal configuration steps or SDK syntax.

2. A learner wants to avoid exam-day issues when taking AI-900 through Pearson VUE. Which action should be completed EARLY in the study process?

Correct answer: Set up Microsoft and Pearson VUE account readiness, including scheduling and identity details, before the exam date approaches
Chapter 1 emphasizes handling registration, scheduling, and account readiness early so logistics do not disrupt the study timeline. Option B is correct because it reduces avoidable exam-day problems. Option A is risky because late verification can expose account or identification issues too close to the appointment. Option C is incorrect because logistics are part of exam readiness and can affect planning, deadlines, and confidence.

3. A student asks how to build an effective beginner-friendly AI-900 study plan. Which recommendation best reflects the chapter guidance?

Correct answer: Organize study around workload patterns such as vision, NLP, and generative AI, then connect each workload to the appropriate Azure service
The chapter stresses pattern recognition: identify the workload, then identify the Azure service that best fits. Option A follows that strategy and mirrors how AI-900 questions are commonly framed. Option B goes too deep for a fundamentals exam; advanced math is not the starting point for AI-900. Option C is wrong because scenario recognition is central to the exam, especially when distinguishing between similar services and use cases.

4. A company wants employees to improve performance on AI-900 questions that describe business scenarios and ask for the best Azure AI solution. Which exam habit is MOST useful?

Correct answer: For each scenario, ask: 'What workload is this?' and 'Which Azure service best fits it?'
Option A is correct because the chapter explicitly recommends this two-question habit to mirror AI-900 exam logic. It helps candidates map scenarios to the right workload and service. Option B is incorrect because AI-900 often tests best-fit selection, not the most advanced or broadest service. Option C is also wrong because distractors are common and often include plausible but not fully correct Azure AI services.

5. A candidate is reviewing AI-900 scoring and question styles. Which statement best reflects an effective exam success mindset?

Correct answer: Success depends on precise recognition of similar concepts and using elimination when options sound plausible
Option B is correct because AI-900 is beginner-friendly but precise. Candidates must distinguish between similar concepts and services, and elimination is an important strategy when distractors sound reasonable. Option A is wrong because buzzword familiarity alone is not enough; the exam tests accurate service and workload mapping. Option C is also incorrect because the exam focuses more on conceptual understanding and scenario analysis than obscure implementation details.

Chapter 2: Describe AI Workloads

This chapter maps directly to the AI-900 exam objective area focused on describing AI workloads and recognizing common AI solution scenarios. On the exam, Microsoft does not expect you to build models or write code. Instead, you must identify which type of AI workload fits a business need, distinguish similar-sounding concepts, and avoid being distracted by extra wording in scenario-based questions. This chapter helps you recognize core AI workload categories, match business problems to AI solution types, understand responsible AI fundamentals, and strengthen your readiness for AI-900 style workload identification items.

A common exam pattern is to present a short business scenario and ask which AI capability is being described. For example, a company may want to identify defects in images, extract text from scanned forms, detect unusual payment behavior, recommend likely outcomes, or enable a virtual assistant to answer users in natural language. Your job is not to overthink implementation details. Your job is to classify the workload correctly. That means learning the language of the exam: prediction, classification, regression, anomaly detection, computer vision, natural language processing, document intelligence, knowledge mining, and generative AI.

Another tested skill is knowing the difference between broad umbrella terms. AI is the overall field of creating systems that exhibit intelligent behavior. Machine learning is a subset of AI in which models learn patterns from data. Data science is a broader discipline focused on collecting, preparing, analyzing, and interpreting data, often using statistics and machine learning techniques. Microsoft often tests this distinction indirectly by describing a scenario and asking what technology category best applies. If the system learns from historical data to make future judgments, think machine learning. If the prompt is asking about the overall business capability, think AI. If the focus is exploration, transformation, and analysis of data, think data science.

This chapter also connects to later AI-900 domains. When you identify a workload correctly, you are better prepared to match it to Azure AI services in later chapters. For example, image analysis aligns with computer vision workloads, OCR aligns with document and vision scenarios, text sentiment aligns with NLP, and copilots or summarization align with generative AI. Even if the question does not mention Azure by name, understanding the workload category first is your fastest path to the right answer.

Exam Tip: In AI-900, start by identifying the business outcome before thinking about tools. Ask yourself: Is the scenario about seeing, reading, speaking, predicting, detecting unusual behavior, searching knowledge, or generating new content? That first classification usually eliminates most incorrect answers.

One of the biggest traps in this domain is confusing related terms. Classification predicts categories, while regression predicts numeric values. Anomaly detection looks for unusual patterns rather than assigning a normal category. Conversational AI focuses on interaction through language, often with chatbots or voice assistants. Knowledge mining is not the same as document OCR; it is about extracting, enriching, indexing, and searching information across large content collections. Generative AI differs from classic prediction because it creates new content such as text, code, or images based on prompts and learned patterns.

Responsible AI is also part of workload selection. On the exam, you may be asked which principle is most relevant when a model disadvantages one group, fails under changing conditions, exposes personal information, excludes users with disabilities, provides no explanation, or lacks clear ownership. These map to fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. In AI-900, Microsoft wants you to recognize that selecting an AI solution is not only a technical decision but also an ethical and governance decision.

  • Recognize broad workload families before narrowing to service names.
  • Read scenario verbs carefully: classify, predict, detect, extract, analyze, converse, summarize, generate.
  • Separate traditional machine learning workloads from generative AI workloads.
  • Use responsible AI principles to evaluate whether a solution is appropriate and trustworthy.
  • Watch for exam traps that swap similar terms such as OCR versus translation, classification versus anomaly detection, and chatbot versus question answering.

As you study this chapter, think like the exam. Microsoft frequently rewards pattern recognition: if you can match common business needs to the correct AI workload quickly, you gain time for harder questions elsewhere on the test. The sections that follow break this domain into the exact concepts most likely to appear and explain how to identify the correct answer with confidence.

Sections in this chapter
Section 2.1: Describe AI workloads and the difference between AI, machine learning, and data science
Section 2.2: Common AI workloads including prediction, classification, anomaly detection, and conversational AI
Section 2.3: Computer vision, natural language processing, document intelligence, and knowledge mining scenarios
Section 2.4: Generative AI use cases for content creation, summarization, and assistance
Section 2.5: Responsible AI principles including fairness, reliability, privacy, inclusiveness, transparency, and accountability
Section 2.6: Exam-style practice for the domain Describe AI workloads

Section 2.1: Describe AI workloads and the difference between AI, machine learning, and data science

For AI-900, you need a clean mental model of three terms that are often used interchangeably in casual conversation but are distinct on the exam: artificial intelligence, machine learning, and data science. Artificial intelligence is the broadest concept. It refers to software systems that perform tasks associated with human intelligence, such as understanding language, recognizing images, making predictions, and interacting conversationally. Machine learning is a subset of AI in which systems learn patterns from data rather than being programmed only with fixed rules. Data science is a discipline focused on collecting, cleaning, exploring, modeling, and communicating data insights. It may use machine learning, but it also includes statistics, visualization, and analytics.

When the exam asks about workloads, it is testing whether you can identify what kind of intelligent task is being performed. Common AI workload families include machine learning, computer vision, natural language processing, conversational AI, knowledge mining, and generative AI. If a system uses past data to forecast future outcomes, that is likely a machine learning workload. If it reads text from an image or detects objects in a photo, that is computer vision. If it extracts key phrases or detects sentiment, that is natural language processing.

A frequent trap is choosing data science when the scenario clearly describes a deployed intelligent application. Data science often happens before deployment and supports analysis and model development. AI workloads describe what the application does in operation. Another trap is assuming that any data-based system is machine learning. A rules-based chatbot is still AI-oriented, but not every scenario requires machine learning.

Exam Tip: If the question emphasizes discovering insights from datasets, preparing data, or statistical analysis, think data science. If it emphasizes learning from examples to make decisions or predictions, think machine learning. If it emphasizes the overall intelligent behavior or solution category, think AI.

You should also know that machine learning itself includes different approaches. Supervised learning uses labeled data to predict known outcomes. Unsupervised learning finds patterns without labeled answers. While deeper machine learning details are covered elsewhere, at this stage the exam wants you to understand where ML fits inside AI and how that differs from the wider analytical practice of data science. This distinction helps you eliminate plausible but incorrect options in definition-based and scenario-based questions.
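To make the contrast concrete, here is a toy sketch in plain Python (no ML library; the data values and labels are invented for illustration). Supervised learning starts from labeled examples, while unsupervised learning must discover groupings on its own:

```python
# Supervised learning: labeled examples -> predict a known label.
labeled = [(1.0, "small"), (1.2, "small"), (8.0, "large"), (8.5, "large")]

def predict_label(x):
    # 1-nearest-neighbor: reuse the label of the closest labeled example.
    closest = min(labeled, key=lambda pair: abs(pair[0] - x))
    return closest[1]

# Unsupervised learning: no labels -> discover structure in the data.
unlabeled = [1.1, 0.9, 8.2, 7.9, 1.3]

def group(values):
    # Naive two-way split around the overall mean (a stand-in for clustering).
    center = sum(values) / len(values)
    return {
        "below": [v for v in values if v < center],
        "above": [v for v in values if v >= center],
    }

print(predict_label(1.1))  # predicts "small"
print(group(unlabeled))    # two groups, found without any labels
```

The supervised function needed the answers up front; the unsupervised one produced groupings with no answers provided, which is exactly the distinction the exam tests.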

Section 2.2: Common AI workloads including prediction, classification, anomaly detection, and conversational AI

This section targets one of the most heavily tested AI-900 skills: matching a business problem to the correct AI solution type. Prediction is a broad term, but on the exam it usually points to machine learning that estimates an outcome from historical data. Within prediction, you must distinguish between classification and regression. Classification predicts a category or label, such as whether an email is spam, whether a transaction is fraudulent, or which product type an image contains. Regression predicts a numeric value, such as house price, delivery time, or future sales amount.
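The label-versus-number distinction can be shown with two tiny stand-in functions (the rules and numbers here are invented for illustration; real systems would learn them from data). Notice that only the output type differs:

```python
def classify_email(contains_link, exclamation_count):
    # Classification: the output is a category label.
    return "spam" if contains_link and exclamation_count > 3 else "not spam"

def estimate_price(square_meters, rate_per_meter=2500):
    # Regression: the output is a numeric value.
    return square_meters * rate_per_meter

print(classify_email(True, 5))  # "spam" (a category)
print(estimate_price(80))       # 200000 (a number)
```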

Anomaly detection is different from both classification and regression. It focuses on identifying rare or unusual patterns that do not fit expected behavior. Typical business scenarios include detecting suspicious login activity, unusual manufacturing sensor readings, or abnormal financial transactions. The exam may try to lure you toward classification when the wording mentions fraud, defects, or errors. The clue for anomaly detection is that the goal is to spot outliers or abnormal events rather than assign one of several standard categories.
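A minimal illustration of the outlier idea, using z-scores over hypothetical transaction amounts (real anomaly detection uses far more robust methods, so treat this purely as a sketch of the concept):

```python
from statistics import mean, stdev

amounts = [20.0, 25.0, 22.0, 19.0, 24.0, 21.0, 480.0]

def find_anomalies(values, threshold=2.0):
    # Flag values more than `threshold` standard deviations from the mean.
    mu = mean(values)
    sigma = stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

print(find_anomalies(amounts))  # the 480.0 payment stands out
```

No category is assigned here; the goal is simply to surface the value that does not fit normal behavior, which is the clue that separates anomaly detection from classification.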

Conversational AI refers to systems that interact with users through natural language, often through chat or speech. Typical examples include virtual agents, support bots, voice assistants, and systems that answer user questions interactively. On AI-900, conversational AI is often described through user experience rather than technical detail. If users are asking questions, receiving responses, and having back-and-forth interactions, conversational AI is the likely answer.

Exam Tip: Look for the output type. If the answer is a label, think classification. If the answer is a number, think regression. If the goal is finding rare unusual behavior, think anomaly detection. If the system talks with the user, think conversational AI.

Another common trap is confusing recommendation-style scenarios with classification. Recommending a likely product or predicting whether a customer will churn is still a predictive machine learning scenario, but the exam may emphasize business language instead of model terminology. Focus on what the system is producing. Also be careful not to confuse conversational AI with text analytics. Text analytics extracts information from text; conversational AI manages interaction with a user. The exam expects practical recognition, not algorithm memorization.

To prepare well, practice rewriting business scenarios in plain language. “Identify unauthorized card activity” becomes anomaly detection. “Determine whether a loan should be approved” becomes classification. “Estimate next month's revenue” becomes regression. “Provide customer support through a virtual assistant” becomes conversational AI. That translation skill is exactly what the domain tests.

Section 2.3: Computer vision, natural language processing, document intelligence, and knowledge mining scenarios

AI-900 frequently presents realistic scenarios and expects you to choose the right workload family among computer vision, natural language processing, document intelligence, and knowledge mining. Computer vision deals with extracting meaning from images and video. Typical tasks include image classification, object detection, face-related analysis, caption generation, OCR from images, and video understanding. If the system needs to “see” or interpret visual content, start with computer vision.

Natural language processing, or NLP, deals with understanding and generating human language in text or speech. Examples include sentiment analysis, key phrase extraction, entity recognition, language detection, translation, speech-to-text, text-to-speech, and intent recognition. On the exam, the best clue is that the input or output is human language rather than pixels. If the scenario is about analyzing reviews, transcribing audio, translating conversations, or recognizing customer intent, NLP is the best fit.

Document intelligence sits at the intersection of vision and language. It focuses on extracting structured information from forms, receipts, invoices, contracts, and scanned documents. A trap here is assuming this is only OCR. OCR extracts raw text, but document intelligence goes further by identifying fields, tables, key-value pairs, and layout. If the scenario involves processing business documents and turning them into usable structured data, document intelligence is the stronger answer.

Knowledge mining is about unlocking value from large collections of documents and content. It involves ingesting data, enriching it with AI skills such as OCR or entity extraction, indexing it, and enabling search and discovery. This is not just reading one document; it is building searchable knowledge from many sources. Scenarios often include enterprise search, finding expertise, surfacing insights from archives, or making documents searchable.

Exam Tip: Ask what the user wants at the end. Understand an image? Computer vision. Understand human language? NLP. Extract fields from forms? Document intelligence. Search and discover insights across many documents? Knowledge mining.

Common traps include confusing OCR with translation, face analysis with identity verification, and document extraction with enterprise search. Also note that AI-900 expects high-level recognition, not deep service configuration. Your advantage comes from focusing on the business problem statement. If the scenario says “scan receipts and capture merchant name and total,” that is document intelligence. If it says “make millions of scanned reports searchable,” that is knowledge mining. If it says “identify objects in warehouse camera footage,” that is computer vision. If it says “detect sentiment in social posts,” that is NLP.

Section 2.4: Generative AI use cases for content creation, summarization, and assistance

Generative AI is now a major part of AI-900. Unlike traditional AI systems that mainly classify, predict, or detect, generative AI creates new content based on patterns learned from large datasets. This content can include text, code, images, and other media. The exam typically tests your ability to identify generative AI scenarios such as drafting emails, generating product descriptions, creating meeting summaries, answering questions with natural responses, assisting with coding, and powering copilots.

A copilot is a generative AI assistant embedded into a workflow to help users complete tasks more efficiently. It does not merely retrieve data; it can synthesize, draft, summarize, explain, and propose next steps. Summarization is one of the most common exam examples. If a scenario asks for reducing long documents, chats, or transcripts into shorter key points, that strongly indicates a generative AI workload. Content creation is another clue, especially when the system produces original text or images from prompts.

You should also understand prompts and foundation models at a conceptual level. A prompt is the instruction or context given to the model. A foundation model is a large pre-trained model that can be adapted or prompted for many tasks. The exam is not asking you to train one. It is asking whether you recognize that generative AI often relies on broad pre-trained models and user prompting rather than task-specific hand-coded logic.

Exam Tip: If the system creates a new response, draft, summary, or image, think generative AI. If it only labels, scores, or extracts existing information, think traditional AI workload instead.

A common trap is confusing generative AI with search or analytics. Search finds existing content. Summarization creates a condensed new version. Text analytics extracts facts such as sentiment or key phrases. Generative AI can produce fluent explanations or rewritten content. Another trap is assuming all chat experiences are generative AI. Some chatbots are rules-based or retrieval-based. The best clue is whether the system generates natural, context-aware content instead of following a fixed decision tree.

Responsible use matters here as well. Generative models can produce inaccurate or harmful output, so AI-900 may test awareness of grounding, monitoring, safety controls, and human review. In exam scenarios, choose generative AI when the business need is creative or assistive content generation, especially for copilots, summarization, drafting, and natural question answering.

Section 2.5: Responsible AI principles including fairness, reliability, privacy, inclusiveness, transparency, and accountability

Responsible AI is not a side topic on AI-900. It is built into the exam domain and often appears in direct principle-matching questions. Microsoft emphasizes six core principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. You do not need legal detail, but you do need to recognize what each principle means in practice and how it appears in a scenario.

Fairness means AI systems should not produce unjustified bias or disadvantage for certain groups. If a hiring model rejects qualified applicants from one demographic more often than others, fairness is the issue. Reliability and safety mean the system should perform consistently and handle failures appropriately, especially in changing or high-risk environments. Privacy and security focus on protecting personal data and preventing unauthorized access or misuse. Inclusiveness means designing AI that works for people with diverse abilities, languages, and backgrounds. Transparency means users and stakeholders should understand when AI is being used and have meaningful insight into how decisions are made. Accountability means humans and organizations remain responsible for AI outcomes.

Exam Tip: Match the problem wording to the principle. Bias points to fairness. Inconsistent or unsafe performance points to reliability and safety. Exposure of personal information points to privacy and security. Lack of accessibility points to inclusiveness. Unclear decision logic points to transparency. Lack of ownership or governance points to accountability.

Common traps involve overlap. For example, a model that works poorly for wheelchair users is not just a quality issue; it is inclusiveness. A model that gives no explanation for loan denials may feel unfair, but if the emphasis is on explainability, transparency is the better match. If the question highlights who is answerable for the system, choose accountability.

On the exam, responsible AI may also be tested through generative AI scenarios. For instance, if a content generator can produce harmful or incorrect output, you should think about reliability and safety, transparency, and accountability. If a system uses customer conversations to train a model without proper controls, privacy and security becomes central. Microsoft wants candidates to understand that successful AI solutions are not judged only by accuracy, but also by trustworthiness and governance.

As an exam strategy, memorize the six principles with a short scenario cue for each. Doing this turns abstract vocabulary into practical recognition, which is exactly how AI-900 frames the questions.

Section 2.6: Exam-style practice for the domain Describe AI workloads

Success in this domain comes from disciplined scenario analysis. AI-900 questions often include extra words designed to distract you from the actual workload being tested. Your first task is to isolate the business goal in one sentence. Is the company trying to predict a value, assign a category, detect abnormal behavior, extract information from images or documents, understand language, enable conversation, search large content collections, or generate new content? Once you classify the goal, answer selection becomes much easier.

A strong elimination method is to sort answer options into families. Remove options that belong to the wrong modality first. If the problem is about images, remove NLP choices. If it is about speech translation, remove computer vision choices. If it is about generating a summary, remove classic predictive ML choices. Then compare the remaining options using the exact output required. Category, number, anomaly, extracted field, search result, or generated response each point to different workloads.

Exam Tip: Pay attention to verbs. “Classify,” “predict,” “estimate,” “detect unusual,” “extract,” “translate,” “transcribe,” “search,” “summarize,” and “generate” are high-value clues that map directly to tested workload categories.

Another exam strategy is to watch for scope. A single invoice being processed suggests document intelligence. A whole repository of scanned contracts being indexed for enterprise search suggests knowledge mining. A support bot following scripted paths may be conversational AI without being generative AI. A copilot that drafts responses and summarizes meetings is much more likely a generative AI scenario.

Do not let Azure product names, if they appear, distract you from the core objective of this chapter. Microsoft often rewards candidates who understand the workload before the service. This is especially useful when two answer choices are both real Azure technologies but only one matches the scenario type. The exam is not asking which service sounds familiar. It is asking which workload best solves the stated problem.

For final review, create your own quick decision guide: seeing equals vision, language equals NLP, forms equals document intelligence, searchable collections equals knowledge mining, prediction from data equals machine learning, unusual patterns equals anomaly detection, interactive assistants equals conversational AI, and new content from prompts equals generative AI. If you can make those matches rapidly and consistently, you will be well prepared for this AI-900 domain.
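If it helps your review, that quick decision guide can be drilled as a simple lookup (a hypothetical flash-card aid, not an exam tool; the clue phrases mirror the chapter text):

```python
decision_guide = {
    "seeing": "computer vision",
    "language": "natural language processing",
    "forms": "document intelligence",
    "searchable collections": "knowledge mining",
    "prediction from data": "machine learning",
    "unusual patterns": "anomaly detection",
    "interactive assistants": "conversational AI",
    "new content from prompts": "generative AI",
}

def workload_for(clue):
    # Map a scenario clue to its workload family; unknown clues need a re-read.
    return decision_guide.get(clue, "re-read the scenario")

print(workload_for("forms"))             # document intelligence
print(workload_for("unusual patterns"))  # anomaly detection
```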

Chapter milestones
  • Recognize core AI workload categories
  • Match business problems to AI solution types
  • Understand responsible AI fundamentals
  • Practice AI-900 style workload identification questions
Chapter quiz

1. A retailer wants to analyze photos from store shelves to identify damaged product packaging automatically. Which AI workload should they use?

Show answer
Correct answer: Computer vision
Computer vision is correct because the scenario involves interpreting image data to detect defects. Natural language processing is used for working with text or speech, not photos of products. Regression is a machine learning technique for predicting numeric values, such as price or demand, and does not fit image-based defect detection.

2. A bank wants to flag credit card transactions that differ significantly from a customer's normal spending behavior. Which AI solution type best fits this requirement?

Show answer
Correct answer: Anomaly detection
Anomaly detection is correct because the goal is to identify unusual patterns that stand out from expected behavior. Classification assigns items to predefined categories, which is different from looking for rare or suspicious outliers. Conversational AI is used for chatbot or voice-based interactions and is unrelated to transaction pattern analysis.

3. A company wants to build a solution that reads scanned invoices, extracts vendor names and totals, and stores the results in a database. Which AI workload is most appropriate?

Show answer
Correct answer: Document intelligence
Document intelligence is correct because the scenario focuses on extracting structured information from scanned forms and invoices, including OCR and field extraction. Knowledge mining is broader and focuses on extracting, enriching, indexing, and searching across large collections of content rather than primarily reading forms. Generative AI creates new content such as text or images and is not the best match for extracting invoice fields.

4. You need to match a business problem to a machine learning approach. Which scenario is an example of regression?

Show answer
Correct answer: Estimating the monthly sales amount for next quarter
Regression is correct because it predicts a numeric value, in this case monthly sales amount. Predicting spam or not spam is classification because the output is a category. Identifying unusual sensor readings is anomaly detection because it looks for abnormal patterns rather than predicting a numeric target.

5. A hiring model consistently recommends fewer qualified applicants from one demographic group than from others with similar qualifications. Which responsible AI principle is most directly affected?

Show answer
Correct answer: Fairness
Fairness is correct because the scenario describes unequal treatment or disadvantage across demographic groups. Transparency relates to understanding and explaining how a model makes decisions, which may also matter, but it is not the primary issue described. Accountability concerns who is responsible for the system and its outcomes, which is important in governance but does not most directly describe biased recommendations.

Chapter focus: Fundamental Principles of ML on Azure

This chapter is written as a guided learning page, not a checklist. The goal is to help you build a mental model for Fundamental Principles of ML on Azure so you can explain the ideas, implement them in code, and make good trade-off decisions when requirements change. Instead of memorizing isolated terms, you will connect concepts, workflow, and outcomes in one coherent progression.

We begin by clarifying what problem this chapter solves in a real project context, then map the sequence of tasks you would follow from first attempt to reliable result. You will learn which assumptions are usually safe, which assumptions frequently fail, and how to verify your decisions with simple checks before you invest time in optimization.

As you move through the lessons, treat each one as a building block in a larger system. The chapter is intentionally structured so each topic answers a practical question: what to do, why it matters, how to apply it, and how to detect when something is going wrong. This keeps learning grounded in execution rather than theory alone.

  • Understand machine learning basics in plain language — learn the purpose of this topic, how it is used in practice, and which mistakes to avoid as you apply it.
  • Differentiate supervised, unsupervised, and reinforcement learning — learn the purpose of this topic, how it is used in practice, and which mistakes to avoid as you apply it.
  • Identify Azure machine learning capabilities and workflows — learn the purpose of this topic, how it is used in practice, and which mistakes to avoid as you apply it.
  • Practice AI-900 style ML questions with scenario matching — learn the purpose of this topic, how it is used in practice, and which mistakes to avoid as you apply it.

Deep dive: Understand machine learning basics in plain language. A machine learning model is a function learned from example data that maps inputs, called features, to outputs, called labels or predicted values. Define the expected input and output, run the workflow on a small example, compare the result to a baseline, and write down what changed. If performance improves, identify the reason; if it does not, check whether data quality, setup choices, or evaluation criteria are limiting progress.

Deep dive: Differentiate supervised, unsupervised, and reinforcement learning. Supervised learning trains on labeled examples to predict known outcomes, which covers classification and regression. Unsupervised learning finds structure in unlabeled data, most commonly through clustering. Reinforcement learning improves its actions through trial and error guided by reward feedback. On the exam, the presence or absence of labels, and whether the system learns from ongoing feedback, are the deciding clues.

Deep dive: Identify Azure machine learning capabilities and workflows. Azure Machine Learning supports the full model lifecycle: data preparation, training, evaluation, deployment, and monitoring. It offers automated machine learning and a drag-and-drop designer for low-code scenarios alongside code-first workflows. AI-900 tests recognition of these capabilities and where they fit in the workflow, not configuration detail.

Deep dive: Practice AI-900 style ML questions with scenario matching. Read each scenario for the required output: a category points to classification, a number to regression, a grouping to clustering, and reward-driven sequential decisions to reinforcement learning. Eliminate options from the wrong learning type first, then note why each remaining distractor fails; that habit builds the precision the exam rewards.
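The baseline check recommended above can be sketched in a few lines of plain Python (the dataset and the stand-in "model" rule are invented for illustration; a real project would train models in a framework such as Azure Machine Learning):

```python
from collections import Counter

# (feature, label) pairs: does a customer buy (1) or not (0)?
data = [(10, 0), (12, 0), (30, 1), (35, 1), (11, 0), (33, 1), (14, 0)]
labels = [y for _, y in data]

def baseline_predict(_x):
    # Baseline: always predict the most common label in the training data.
    return Counter(labels).most_common(1)[0][0]

def model_predict(x, threshold=20):
    # A hand-picked rule standing in for a trained model.
    return 1 if x > threshold else 0

baseline_acc = sum(baseline_predict(x) == y for x, y in data) / len(data)
model_acc = sum(model_predict(x) == y for x, y in data) / len(data)

print(f"baseline accuracy: {baseline_acc:.2f}")
print(f"model accuracy:    {model_acc:.2f}")
# Only invest in tuning if the model clearly beats the baseline.
```

If the model cannot beat "always predict the majority label", more data or hyperparameter tuning is premature; that is exactly the reasoning the chapter quiz rewards.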

By the end of this chapter, you should be able to explain the key ideas clearly, execute the workflow without guesswork, and justify your decisions with evidence. You should also be ready to carry these methods into the next chapter, where complexity increases and stronger judgment becomes essential.

Before moving on, summarize the chapter in your own words, list one mistake you would now avoid, and note one improvement you would make in a second iteration. This reflection step turns passive reading into active mastery and helps you retain the chapter as a practical skill, not temporary information.

Sections in this chapter
Section 3.1: Practical Focus

Practical Focus. This section deepens your understanding of Fundamental Principles of ML on Azure with practical explanation, decisions, and implementation guidance you can apply immediately.

Focus on workflow: define the goal, run a small experiment, inspect output quality, and adjust based on evidence. This turns concepts into repeatable execution skill.

Section 3.2: Practical Focus

Practical Focus. This section deepens your understanding of Fundamental Principles of ML on Azure with practical explanation, decisions, and implementation guidance you can apply immediately.

Focus on workflow: define the goal, run a small experiment, inspect output quality, and adjust based on evidence. This turns concepts into repeatable execution skill.

Section 3.3: Practical Focus

Practical Focus. This section deepens your understanding of Fundamental Principles of ML on Azure with practical explanation, decisions, and implementation guidance you can apply immediately.

Focus on workflow: define the goal, run a small experiment, inspect output quality, and adjust based on evidence. This turns concepts into repeatable execution skill.

Section 3.4: Practical Focus

Practical Focus. This section deepens your understanding of Fundamental Principles of ML on Azure with practical explanation, decisions, and implementation guidance you can apply immediately.

Focus on workflow: define the goal, run a small experiment, inspect output quality, and adjust based on evidence. This turns concepts into repeatable execution skill.

Section 3.5: Practical Focus

Practical Focus. This section deepens your understanding of Fundamental Principles of ML on Azure with practical explanation, decisions, and implementation guidance you can apply immediately.

Focus on workflow: define the goal, run a small experiment, inspect output quality, and adjust based on evidence. This turns concepts into repeatable execution skill.

Section 3.6: Practical Focus

Practical Focus. This section deepens your understanding of Fundamental Principles of ML on Azure with practical explanation, decisions, and implementation guidance you can apply immediately.

Focus on workflow: define the goal, run a small experiment, inspect output quality, and adjust based on evidence. This turns concepts into repeatable execution skill.

Chapter milestones
  • Understand machine learning basics in plain language
  • Differentiate supervised, unsupervised, and reinforcement learning
  • Identify Azure machine learning capabilities and workflows
  • Practice AI-900 style ML questions with scenario matching
Chapter quiz

1. A retail company wants to predict next month's sales revenue for each store based on historical sales, promotions, and seasonal trends. Which type of machine learning should the company use?

Show answer
Correct answer: Supervised learning
Supervised learning is correct because the goal is to predict a known numeric value from labeled historical data, which is a regression task. Unsupervised learning is incorrect because it is used to find patterns or groupings in unlabeled data, not to predict a known target value. Reinforcement learning is incorrect because it is designed for sequential decision-making based on rewards, not forecasting from historical labeled examples.

2. A company has customer transaction data but no labels. It wants to group customers with similar buying behavior for targeted marketing. Which approach should the company choose?

Show answer
Correct answer: Clustering
Clustering is correct because it is an unsupervised learning technique used to group similar data points when no labels are available. Classification is incorrect because it requires predefined categories in labeled training data. Regression is incorrect because it predicts continuous numeric values rather than identifying natural groupings in data.

3. A manufacturer wants to build, train, and deploy a machine learning model on Azure with minimal coding effort. Which Azure service should they use?

Show answer
Correct answer: Azure Machine Learning
Azure Machine Learning is correct because it provides tools for creating, training, managing, and deploying machine learning models, including automated machine learning and designer-based workflows. Azure AI Language is incorrect because it is focused on natural language scenarios such as sentiment analysis and entity recognition, not general ML lifecycle management. Azure AI Vision is incorrect because it is designed for image-related AI workloads rather than end-to-end machine learning model development.

4. You are reviewing an ML project. The team trained a model, but before optimizing hyperparameters they want to confirm whether the model is actually better than a simple starting point. What should they do first?

Show answer
Correct answer: Compare the model against a baseline
Comparing the model against a baseline is correct because a baseline helps determine whether the model adds value before investing time in optimization. Increasing the training data size immediately is incorrect because more data may help later, but it does not first verify whether the current approach is effective. Deploying the model to production is incorrect because the team should validate model usefulness and performance before deployment.

5. A software company is building a system that learns how to choose the best discount offer by trying different actions and receiving feedback based on customer purchases. Which type of machine learning does this scenario describe?

Show answer
Correct answer: Reinforcement learning
Reinforcement learning is correct because the system improves by taking actions and learning from reward-based feedback over time. Unsupervised learning is incorrect because it does not use reward signals or action-based learning; it focuses on discovering patterns in unlabeled data. Supervised learning is incorrect because it learns from labeled examples with known outcomes rather than interacting with an environment and receiving rewards.

Chapter 4: Computer Vision Workloads on Azure

This chapter focuses on one of the most testable AI-900 areas: computer vision workloads on Azure. On the exam, Microsoft expects you to recognize common vision scenarios, identify which Azure AI service fits the business need, and distinguish between prebuilt capabilities and custom-trained solutions. You are not being tested as a developer who must write code. Instead, you are being tested as a fundamentals candidate who can match a requirement such as image captioning, OCR, face analysis, or video understanding to the correct Azure offering.

Computer vision workloads involve extracting meaning from visual inputs such as photos, scanned forms, camera streams, and video files. In real organizations, this can mean analyzing retail shelf images, reading text from receipts, flagging unsafe visual content, detecting objects in manufacturing, or describing what appears in an image for accessibility scenarios. The AI-900 exam usually frames these capabilities as business problems first, then asks you to choose the most appropriate Azure AI service. That means your success depends on recognizing keywords and avoiding service confusion.

As you work through this chapter, keep the lesson goals in mind: identify image and video AI scenarios, map vision use cases to Azure AI services, understand OCR, face, and custom vision options, and build confidence with AI-900 style computer vision reasoning. The exam often rewards careful reading. Small wording differences such as classify versus detect, read printed text versus analyze image content, or prebuilt versus custom can completely change the right answer.

Exam Tip: For AI-900, always start by identifying the workload category before thinking about product names. Ask yourself: Is the problem about general image understanding, reading text, recognizing faces, analyzing video, or building a custom model from labeled images? Once you classify the scenario correctly, the answer choices become much easier to eliminate.

A second exam pattern is service overlap. Azure AI Vision can perform several common image analysis tasks, while OCR-related tasks may point to reading text capabilities or document-focused extraction. Face-related scenarios are separate from general image tagging. Custom image classification differs from using prebuilt image analysis. Many incorrect answers on the exam sound plausible because they belong to the same broad domain. Your job is to choose the best fit, not just a possible fit.

  • Use Azure AI Vision for common image analysis scenarios such as tagging, captioning, object detection, OCR, and some video-related visual analysis workflows.
  • Think carefully about whether the business wants general-purpose insights from images or a domain-specific model trained on company data.
  • Remember that OCR is about text extraction from images, screenshots, scans, and photographed documents.
  • Treat face scenarios separately from generic object or scene recognition.
  • Watch for responsible AI and safety wording, especially around facial analysis and harmful image content.

This chapter is mapped directly to the AI-900 objective of identifying computer vision workloads on Azure and matching Azure AI services to image analysis, OCR, face, and video scenarios. Each section explains what the exam is really testing, where candidates commonly make mistakes, and how to identify the best answer from similar options. Mastering this chapter will not only improve your score in the vision domain, but also strengthen your overall exam strategy because the same pattern-matching approach appears across language and generative AI topics too.

Practice note: for each lesson goal in this chapter (identifying image and video AI scenarios, mapping vision use cases to Azure AI services, and understanding OCR, face, and custom vision options), document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 4.1: Computer vision workloads on Azure and common business scenarios
Section 4.2: Image analysis capabilities such as tagging, captioning, detection, and classification
Section 4.3: Optical character recognition, document image extraction, and reading text from images
Section 4.4: Face-related capabilities, content moderation, and visual safety considerations
Section 4.5: Azure AI Vision, custom vision concepts, and when to use prebuilt versus custom models
Section 4.6: Exam-style practice for the domain Computer vision workloads on Azure

Section 4.1: Computer vision workloads on Azure and common business scenarios

Computer vision is the branch of AI that enables systems to interpret images and video. For AI-900, you should be able to recognize common workload categories and connect them to business outcomes. Typical scenarios include analyzing photographs, reading signs or labels, monitoring video feeds, identifying visual defects, and extracting useful information from scanned documents. The exam often gives you a short scenario and asks which Azure AI capability best supports it.

Common business scenarios include retail image analysis for product placement, manufacturing quality inspection, healthcare document imaging, security camera monitoring, digitization of paper forms, and accessibility applications such as image captions. Video scenarios may involve analyzing recorded footage or camera streams to detect events, summarize content, or identify frames that contain relevant visual features. Even when the scenario mentions video, the test may still be focused on visual analysis concepts rather than deep media engineering.

What the exam is really testing here is your ability to separate workload intent. If a company wants to know what is in an image, think image analysis. If it wants to read a street sign or invoice, think OCR. If it wants to identify attributes related to a human face, think face-related capabilities. If it wants a model trained on its own labeled product images, think custom vision rather than a general prebuilt model.

Exam Tip: Keywords such as photos, camera, scanned documents, screenshots, labels, shelves, forms, and surveillance often signal a vision workload. Do not get distracted by words like dashboard or app. Focus on what the AI must do with the visual input.

A common trap is choosing a machine learning service just because the scenario sounds advanced. AI-900 frequently expects the simpler managed AI service answer. If the requirement is standard and common, such as tagging or OCR, the best answer is usually an Azure AI prebuilt service rather than building a custom model from scratch.

Section 4.2: Image analysis capabilities such as tagging, captioning, detection, and classification

Image analysis is a major exam topic because it includes several related but distinct tasks. You need to know the difference between tagging, captioning, detection, and classification. Tagging assigns descriptive labels to image content, such as car, outdoor, person, or dog. Captioning generates a natural-language description, such as a person riding a bicycle on a city street. Detection identifies and locates objects within an image, often with bounding boxes. Classification assigns the image, or sometimes a detected object, to a category.

On AI-900, these terms are often used precisely. If the scenario asks for labels that describe visual content, that points to tagging. If it asks for a sentence-like summary for accessibility or content description, that points to captioning. If it asks to locate where objects appear in an image, that points to object detection. If it asks to decide whether an image belongs to one category or another, that is classification.

Azure AI Vision supports many prebuilt image analysis capabilities. This is often the best answer when the scenario involves common, broadly applicable tasks and the organization does not need a model trained on highly specific internal image classes. For example, identifying whether an image contains outdoor scenes, people, vehicles, or common objects aligns well with prebuilt analysis.

Exam Tip: Detection and classification are easy to confuse. Classification answers what the image is. Detection answers what objects are present and where they are located. If the question mentions coordinates, regions, or locating multiple items, choose detection-related capabilities.

A common exam trap is over-reading the scenario and assuming custom training is required. If the images involve general everyday objects and no mention is made of company-specific categories, prebuilt image analysis is usually enough. Another trap is mixing tagging with OCR. Tags describe image content; OCR extracts text. If the desired output is words that already appear inside the image, the question is not about tagging.
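
One way to keep the four image-analysis terms straight is to look at the shape of each result. The sketch below models the four outputs with plain Python data structures; the field names are illustrative and are not the actual Azure AI Vision response schema.

```python
from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str    # what the object is
    box: tuple    # (x, y, width, height): where it is in the image

# Classification: one category for the whole image ("what is this image?").
classification = "street_scene"

# Tagging: several descriptive labels, with no locations attached.
tags = ["person", "bicycle", "outdoor", "road"]

# Captioning: a natural-language sentence describing the image.
caption = "a person riding a bicycle on a city street"

# Detection: labels *plus* bounding boxes ("what is present, and where?").
detections = [
    DetectedObject("person", (40, 20, 60, 120)),
    DetectedObject("bicycle", (35, 90, 80, 70)),
]

# Exam clue: coordinates or regions in the requirement imply detection.
print(len(detections), detections[0].label)
```

Notice that only detection carries location data; that structural difference is the wording clue the exam tests.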

Section 4.3: Optical character recognition, document image extraction, and reading text from images

Optical character recognition, or OCR, is the process of detecting and extracting printed or handwritten text from images. This includes receipts, scanned forms, photographed signs, screenshots, invoices, business cards, and PDFs that contain visual text. AI-900 regularly tests OCR because it is a very common business scenario and easy to distinguish when you know the clues.

When a question asks how to read text from an image, extract words from a scan, digitize paper content, or pull text from a street sign photo, OCR is the correct concept. Azure AI Vision includes OCR-related capabilities for reading text from images. In document-heavy business cases, the focus may also be on extracting structured information from document images, but the core exam idea remains the same: the service is interpreting text embedded in a visual source.

The exam may present OCR alongside distractors such as image tagging, translation, or speech recognition. The fastest way to eliminate wrong answers is to ask what the input and output are. If the input is an image and the desired output is text, OCR is the answer. If the input is speech and the output is text, that would be speech recognition, not OCR. If the output is labels about scene contents rather than the exact printed words, that would be image analysis, not OCR.

Exam Tip: Watch for verbs such as read, extract, digitize, capture text, scan, parse printed text, or identify words in an image. These almost always indicate OCR-related functionality.

A common trap is confusing OCR with language understanding. OCR gets the text out of the image first. Any later analysis of that text would be a separate language task. On the exam, choose the service that solves the stated requirement, not a downstream processing step the question never asked for.
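
The input-and-output elimination test from this section can be written down literally as a study aid. The mapping below restates the examples in the text and is not an official taxonomy.

```python
# Study aid: identify the workload from the input and desired output modality.
# The (input, output) pairs follow the elimination test described above.
WORKLOAD_BY_MODALITY = {
    ("image", "text"): "OCR",
    ("image", "labels"): "image analysis",
    ("speech", "text"): "speech to text",
    ("text", "speech"): "text to speech",
}

def identify_workload(input_modality, output_modality):
    return WORKLOAD_BY_MODALITY.get(
        (input_modality, output_modality), "re-read the scenario"
    )

# A photographed invoice in, the printed words out: OCR, not image tagging.
print(identify_workload("image", "text"))    # OCR
print(identify_workload("speech", "text"))   # speech to text
```

If the pair you identify is not in the table, that is usually a sign you have misread the scenario, not that the table needs a new row.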

Section 4.4: Face-related capabilities, content moderation, and visual safety considerations

Face-related computer vision scenarios appear on the exam because they represent a distinct capability area with important responsible AI considerations. In fundamentals terms, face-related capabilities can include detecting a human face in an image and analyzing certain visual facial attributes. The exam may frame this in scenarios such as organizing photos, checking whether a face appears in an image, or supporting limited face-based image analysis workflows.

However, AI-900 also expects awareness that facial AI involves privacy, fairness, transparency, and potential misuse concerns. Microsoft exam questions may test not only what face analysis can do, but also the fact that these technologies require careful governance. You do not need deep policy expertise, but you should understand that responsible AI matters more here than in many other vision tasks.

Content moderation and visual safety are related but separate ideas. Some workloads need to detect potentially unsafe or inappropriate visual content. In these cases, the goal is not to identify the scene for business analytics, but to screen or classify images according to safety or policy concerns. The exam may use wording such as harmful content, inappropriate images, or moderation of uploaded media. That should steer you toward content safety concepts rather than standard tagging or captioning.

Exam Tip: If the scenario emphasizes safety, harmful content, policy enforcement, or protecting users from inappropriate media, do not choose general image analysis. Choose the service or capability focused on content moderation or safety.

A common trap is assuming face-related features are just another form of object detection. The exam treats faces as their own category. Another trap is ignoring ethics language. If an answer choice mentions responsible use, fairness, or sensitivity around facial data, it may be the clue Microsoft wants you to notice.

Section 4.5: Azure AI Vision, custom vision concepts, and when to use prebuilt versus custom models

One of the most important exam decisions is choosing between a prebuilt Azure AI Vision capability and a custom vision approach. Azure AI Vision is designed for common, ready-to-use image analysis tasks such as tagging, captioning, OCR, and object detection. It is the best fit when the problem is general and the organization wants to add visual intelligence quickly without training its own model.

Custom vision concepts become relevant when the business needs to recognize specialized image categories or detect objects that are unique to its environment. Examples include classifying proprietary product defects, distinguishing among internal equipment types, or identifying brand-specific packaging that a prebuilt model may not understand reliably. In these cases, the organization provides labeled images to train a model for its own scenario.

The exam usually tests this distinction through phrases like company-specific, proprietary, specialized, or trained with labeled images. Those clues point toward a custom model. In contrast, phrases like identify objects in photos, generate captions, or read text from images typically point to a prebuilt service.

Exam Tip: Ask two questions: First, is the task common enough for a Microsoft-managed model? Second, does the business need to train on its own labeled image set? If yes to the second question, the exam is usually pointing to a custom vision solution.

A common trap is assuming that custom always means better. On AI-900, the right answer is the one that fits the requirement with the least unnecessary complexity. If a prebuilt service meets the business need, it is usually preferred. Another trap is forgetting that custom classification and custom detection are different ideas. Classification assigns categories; detection locates objects as well.

Section 4.6: Exam-style practice for the domain Computer vision workloads on Azure

To score well in this domain, practice thinking like the exam writer. Microsoft often builds answer choices from related services in the same family. Your task is to identify the exact workload described, then eliminate options that solve adjacent but different problems. Start every vision question by identifying the input type, desired output, and whether the capability should be prebuilt or custom.

Use a simple decision process. If the input is an image and the output is descriptive labels or captions, think image analysis. If the output is text from the image, think OCR. If the task is about faces specifically, think face-related capabilities and remember responsible AI concerns. If the task is to classify or detect highly specific business objects using labeled company data, think custom vision. If the prompt emphasizes harmful or inappropriate imagery, think content safety rather than general image understanding.
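
That decision process can be sketched as a small function. The rule order and wording follow this section; treat it as a memorization aid, not a product selector.

```python
# Memorization aid for the vision decision process described above.
# Each rule mirrors one sentence of the decision process, in order.
def pick_vision_capability(scenario: dict) -> str:
    if scenario.get("harmful_content"):
        return "content safety"             # screening, not scene analytics
    if scenario.get("faces"):
        return "face-related capabilities"  # plus responsible AI concerns
    if scenario.get("custom_labeled_data"):
        return "custom vision"              # company-specific classes
    if scenario.get("output") == "text":
        return "OCR"                        # read the words in the image
    return "image analysis"                 # tags, captions, object detection

badge_check = {"faces": True}
invoice_scan = {"output": "text"}
defect_model = {"custom_labeled_data": True}
print(pick_vision_capability(badge_check))   # face-related capabilities
print(pick_vision_capability(invoice_scan))  # OCR
print(pick_vision_capability(defect_model))  # custom vision
```

The ordering matters: safety and face clues override generic analysis, which matches how the exam expects you to prioritize wording cues.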

Exam Tip: Eliminate answers by spotting mismatches in modality. Speech services do not read photographed text. Language services do not detect objects in pictures. General image tagging does not extract invoice numbers from a scan. This elimination strategy is one of the fastest ways to improve accuracy.

Another useful strategy is to watch for scope words. Words like any image, common objects, standard analysis, and describe this photo usually signal prebuilt AI Vision capabilities. Words like custom labels, train with our images, proprietary classes, and domain-specific detection indicate custom models. When a question includes both OCR and image analysis in the options, focus on whether the requirement is to understand the scene or to read the exact text shown in the image.

Finally, be careful not to add assumptions. AI-900 questions are typically narrower than real projects. If the scenario only asks to extract text, do not choose a broader end-to-end analytics stack. If it asks for object location, do not settle for classification. Precision wins in this domain.

Chapter milestones
  • Identify image and video AI scenarios
  • Map vision use cases to Azure AI services
  • Understand OCR, face, and custom vision options
  • Practice AI-900 style computer vision questions
Chapter quiz

1. A retailer wants to analyze photos of store shelves to generate tags such as "beverage," "bottle," and "indoor," and to produce a short caption describing each image. The company does not want to train a custom model. Which Azure AI service should it use?

Show answer
Correct answer: Azure AI Vision
Azure AI Vision is the best choice for prebuilt image analysis tasks such as tagging, captioning, and object detection. Azure AI Face is designed for face-specific analysis rather than general scene understanding. Azure Machine Learning could be used to build a custom solution, but the scenario explicitly states that the company does not want to train a custom model, so it is not the best fit for an AI-900 style question.

2. A finance team needs to extract printed and handwritten text from scanned receipts and photographed invoices. Which capability should you identify for this requirement?

Show answer
Correct answer: Optical character recognition (OCR)
OCR is used to read and extract text from images, scans, screenshots, and photographed documents, which matches the receipt and invoice scenario. Face analysis is unrelated because the requirement is not about identifying or analyzing people. Image classification assigns labels to an image as a whole, but it does not focus on extracting the actual text content from documents.

3. A company wants to build a model that can distinguish between its own proprietary product packaging designs using labeled images collected from its factories. The packaging types are unique to the company and are not part of a standard prebuilt model. What should the company use?

Show answer
Correct answer: A custom vision model trained on labeled images
A custom vision model trained on labeled images is the correct choice because the scenario requires recognizing company-specific packaging that a prebuilt service is unlikely to understand accurately. Azure AI Face is only for face-related workloads and does not apply to product packaging. Prebuilt image captioning may provide generic descriptions, but it is not designed to reliably classify proprietary packaging types unique to the business.

4. A security team needs to verify that uploaded employee badge photos contain a human face and perform face-related analysis as part of an access control workflow. Which Azure AI service is the best match?

Show answer
Correct answer: Azure AI Face
Azure AI Face is the correct service because the requirement is specifically about detecting and analyzing faces. Azure AI Vision is used for broader image analysis tasks such as tagging, captioning, and OCR, but AI-900 expects candidates to treat face scenarios separately from generic image understanding. Azure AI Language is for text workloads, so it is not relevant to photo-based face analysis.

5. You are reviewing requirements for an AI solution. Which scenario is the best example of a computer vision workload on Azure?

Show answer
Correct answer: Detecting objects and reading warning labels from images captured on a manufacturing floor
Detecting objects and reading warning labels from images is a computer vision workload because it involves image analysis and OCR. Analyzing customer reviews for sentiment is a natural language processing task, not a vision task. Translating spoken audio is a speech workload, so while it is an AI scenario, it does not belong to the computer vision domain tested in this chapter.

Chapter 5: NLP and Generative AI Workloads on Azure

This chapter maps directly to a high-value portion of the AI-900 exam: recognizing natural language processing workloads, speech workloads, conversational AI scenarios, and the emerging generative AI capabilities available on Azure. On the exam, Microsoft often tests whether you can match a business requirement to the correct Azure AI service. That means success depends less on deep coding knowledge and more on accurate scenario recognition. If a question describes extracting opinions from customer reviews, identifying people and places in documents, converting spoken audio to text, translating speech, summarizing content, or building a copilot-style assistant, you must quickly identify the workload category and then connect it to the right Azure offering.

Start with a simple mental model. Traditional NLP workloads analyze or transform language that already exists. Examples include sentiment analysis, key phrase extraction, entity recognition, question answering, translation, and speech transcription. Generative AI workloads, by contrast, create new content based on prompts and context. These can power copilots, chat assistants, draft generation, summarization, and interactive reasoning experiences. The exam expects you to understand both categories and to distinguish between them. A common trap is confusing classic Azure AI Language capabilities with Azure OpenAI generative capabilities. If the task is extracting structured meaning from text, think Azure AI Language. If the task is producing original text or chat responses, think generative AI and Azure OpenAI Service.

Another major exam theme is service matching. Microsoft may describe a scenario without naming the product. You should recognize that Azure AI Language supports text analytics features such as sentiment analysis, key phrase extraction, named entity recognition, and question answering. Azure AI Speech supports speech to text, text to speech, speech translation, and speaker-related capabilities. Conversational AI scenarios may involve bots, orchestration, and language understanding patterns. Generative AI workloads often point toward foundation models, prompt-based experiences, grounding with enterprise data, and responsible AI controls.

Exam Tip: Read the noun and the verb in the scenario. The noun tells you the data type, such as text, speech, documents, or chat. The verb tells you the workload, such as classify, extract, recognize, translate, generate, summarize, or converse. Matching those two clues is often enough to eliminate wrong answers.
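
The noun-plus-verb heuristic can be drilled with a tiny lookup. The pairings below restate examples from this chapter and are a study aid, not an exhaustive service map.

```python
# Study aid: (data noun, task verb) -> likely AI-900 workload category.
# Pairings restate examples from this chapter; not an exhaustive map.
NOUN_VERB_MAP = {
    ("text", "classify"): "Azure AI Language (sentiment / classification)",
    ("text", "extract"): "Azure AI Language (key phrases / entities)",
    ("documents", "answer"): "Azure AI Language (question answering)",
    ("speech", "transcribe"): "Azure AI Speech (speech to text)",
    ("speech", "translate"): "Azure AI Speech (speech translation)",
    ("chat", "generate"): "Generative AI (Azure OpenAI Service)",
}

def match_workload(noun, verb):
    return NOUN_VERB_MAP.get((noun, verb), "identify the noun and verb again")

print(match_workload("speech", "translate"))
print(match_workload("chat", "generate"))
```

Note how the same verb can map differently depending on the noun: translating speech and translating text are distinct workloads, which is exactly the distractor pattern described above.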

This chapter also strengthens exam strategy. AI-900 questions may include plausible distractors because many Azure AI services sound related. For example, translation can appear in both text and speech contexts. A speech translation scenario is not the same as translating written text. Likewise, question answering based on a knowledge base is not the same as an open-ended generative chatbot. Your goal is not to memorize every feature list, but to build clean distinctions. In the sections that follow, you will review the exact concepts most likely to appear on the exam, learn how to identify common traps, and practice thinking in the service-selection style used by AI-900.

As you study, focus on business language. The exam often frames topics as business outcomes: analyze call center conversations, detect customer sentiment, create subtitles, extract important terms from legal documents, build a virtual agent, summarize long reports, or create a copilot grounded in company data. Azure AI is tested as a toolbox for those outcomes. If you can consistently map the scenario to the right tool, you will be prepared for this domain and ready to connect NLP and generative AI concepts to the broader AI-900 blueprint.

Practice note: for each lesson goal in this chapter (understanding language AI and speech workloads, and matching NLP tasks to Azure AI services), document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 5.1: NLP workloads on Azure including sentiment analysis, key phrase extraction, entity recognition, and question answering

Section 5.1: NLP workloads on Azure including sentiment analysis, key phrase extraction, entity recognition, and question answering

Natural language processing on the AI-900 exam usually begins with Azure AI Language. This service supports several common text analysis tasks, and the exam frequently checks whether you can distinguish them. Sentiment analysis identifies whether text is positive, negative, neutral, or mixed. This is useful for product reviews, support tickets, and social media monitoring. Key phrase extraction identifies the most important terms or phrases in text, such as product names, issues, or business topics. Named entity recognition identifies specific categories of information in text, such as people, organizations, locations, dates, and quantities. Question answering enables systems to return answers from a curated knowledge source, such as FAQs, manuals, or support documents.

A common exam trap is confusing key phrase extraction with entity recognition. Key phrases are important concepts, but they are not necessarily predefined entity types. For example, “slow battery charging” might be a key phrase, but it is not a person, place, or date. Another trap is confusing sentiment analysis with opinion mining. Sentiment gives an overall sentiment judgment, while opinion mining goes deeper into sentiments attached to aspects. On AI-900, if the scenario simply asks whether customer comments are positive or negative, sentiment analysis is the clearest match.

Question answering is also tested through scenario wording. If the problem states that users ask natural language questions and the system answers from a known set of documents or FAQs, think question answering rather than generative AI. The distinction matters because question answering retrieves from a knowledge source and returns likely answers, while generative AI produces more flexible responses. The exam may describe help desk articles, policy documents, or internal FAQs. That language should point you toward Azure AI Language capabilities rather than a large language model.

  • Sentiment analysis: determine emotional tone in text.
  • Key phrase extraction: pull out important terms from documents.
  • Entity recognition: identify people, places, dates, brands, and similar structured items.
  • Question answering: respond to user questions using a known knowledge base.
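
To see how the four tasks differ, imagine running each on the same review sentence. The outputs below are hand-written examples for study purposes, not real service responses.

```python
# One input, four different classic NLP outputs.
# All outputs are hand-written illustrations, not real service responses.
review = "The Contoso X200 ships fast, but the battery life is disappointing."

example_outputs = {
    "sentiment analysis": "mixed",                            # overall tone
    "key phrase extraction": ["battery life", "ships fast"],  # important terms
    "entity recognition": [("Contoso", "Organization"),       # predefined types
                           ("X200", "Product")],
    "question answering": "answers come from a curated FAQ, not this review",
}

# Key phrases are important concepts; entities belong to predefined categories.
print(example_outputs["sentiment analysis"])
```

"battery life" appears as a key phrase but not as an entity, which captures the trap described earlier: key phrases are important concepts, not necessarily typed entities.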

Exam Tip: If the scenario focuses on extracting meaning from existing text, not creating new content, it is usually a classic NLP workload. That is your clue to prefer Azure AI Language over Azure OpenAI.

What the exam tests here is your ability to map task language to service language. Words like detect, analyze, extract, identify, or answer from documents usually indicate Azure AI Language. When eliminating choices, remove services centered on images, custom machine learning, or speech unless the input is explicitly audio or multimodal. AI-900 rewards fast and precise workload recognition more than technical implementation detail.

Section 5.2: Speech workloads on Azure including speech to text, text to speech, translation, and speech intelligence

Speech workloads are another core AI-900 domain, and they map primarily to Azure AI Speech. The exam often presents these capabilities in practical business scenarios. Speech to text converts spoken audio into written text. This is useful for transcription, captions, meeting notes, and call analytics. Text to speech converts written text into natural-sounding audio and is common in accessibility tools, virtual assistants, and automated phone systems. Speech translation goes a step further by translating spoken language from one language into another. Speech intelligence includes capabilities related to analyzing spoken interactions, recognizing speakers, and enabling richer speech-driven experiences.

A major exam trap is forgetting to identify the input format. If the scenario begins with a recording, a phone call, a meeting, or a live microphone stream, you are in speech territory. Some candidates incorrectly choose Azure AI Language because they see the word “text” in the output requirement. Remember that speech to text starts with audio, so Azure AI Speech is the better match. Likewise, if the scenario is about reading text aloud with a realistic synthetic voice, text to speech is the correct workload, not translation or bot functionality.

Translation can be especially tricky. If the requirement is to translate written text, that aligns with language translation capabilities. If the requirement is to translate spoken words in real time during a conversation, that points to speech translation. On the exam, pay close attention to whether the source is typed text or spoken audio. This single detail often separates two plausible answers.
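The single detail that separates those two plausible answers can be written down as a tiny decision helper. The function and its argument names are invented for study purposes, not an Azure API:

```python
def pick_speech_capability(source: str, target: str, translated: bool = False) -> str:
    """Map input/output modality ("audio" or "text") to the workload
    named in this section. Illustrative only, not an Azure SDK call."""
    if source == "audio" and translated:
        return "speech translation"
    if source == "audio" and target == "text":
        return "speech to text"
    if source == "text" and target == "audio":
        return "text to speech"
    if source == "text" and translated:
        return "text translation (Azure AI Language / Translator)"
    return "not a speech workload - consider Azure AI Language"
```

Note how a spoken source with `translated=True` resolves to speech translation regardless of output form, which is exactly the trap the exam sets.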

Exam Tip: Anchor on the phrase “spoken language.” Whenever the problem involves listening, dictation, narration, subtitles, voice responses, or multilingual speech conversations, think Azure AI Speech first.

What the exam is really measuring is whether you can distinguish speech workloads from general NLP workloads. Azure AI Speech is about understanding and generating audio-based language interactions. Azure AI Language is about analyzing textual content. If you train yourself to ask, “Is the input or output voice/audio?” you will eliminate many distractors quickly. This is one of the easiest ways to score reliable points in the NLP domain.

Section 5.3: Conversational AI, language understanding basics, and bot scenarios on Azure

Conversational AI combines language services, orchestration, and user interaction patterns to create systems that can engage in dialogue. On AI-900, the exam does not usually require implementation detail, but it does expect you to understand what a bot does and how language understanding supports user intent. A bot can interact with users through text or voice to answer questions, automate tasks, route requests, or support self-service experiences. Typical bot scenarios include customer support chat, appointment scheduling, internal help desk assistance, and guided task completion.

Language understanding basics revolve around interpreting user input so the system can identify intent and relevant details. Intent refers to what the user wants to do, such as booking a flight, resetting a password, or checking an order status. These relevant details, often called entities, may include dates, product names, locations, or account identifiers. Even if the exam does not emphasize older service names or implementation specifics, it still expects you to understand the pattern: the user says something in natural language, the system detects meaning, and the bot responds or triggers an action.

A common trap is to assume every conversational interface is generative AI. Not all bots are powered by large language models. Many are rules-based, knowledge-base driven, or intent-driven. If the scenario emphasizes structured responses, specific workflows, FAQs, or limited domain conversations, it may describe traditional conversational AI rather than a generative copilot. By contrast, if the scenario emphasizes flexible draft generation, open-ended chat, summarization, or creative response generation, that points more toward generative AI.

Exam Tip: On AI-900, “bot,” “virtual agent,” and “conversational AI” do not automatically mean Azure OpenAI. Read whether the system is answering known questions, following workflows, or generating novel responses.

When identifying correct answers, focus on the business role of the solution. If the goal is user interaction through dialogue, a bot or conversational AI approach fits. If the goal is extracting entities from uploaded documents, that is NLP analytics, not conversational AI. The exam tests whether you can keep these solution patterns separate. Think of conversational AI as the user-facing layer that may use language understanding, question answering, speech, or generative AI underneath depending on the scenario.

Section 5.4: Generative AI workloads on Azure including copilots, chat experiences, content generation, and summarization

Generative AI is now a prominent AI-900 topic, and the exam expects you to understand what these workloads do at a conceptual level. Generative AI uses foundation models to create new content based on prompts. On Azure, this includes chat experiences, copilots, summarization, drafting assistance, rewriting, classification support, and content generation for business workflows. A copilot is generally an AI assistant embedded in an application or process to help users complete tasks more effectively. It may answer questions, summarize information, draft responses, generate reports, or guide decisions.

Chat experiences are a common use case because they offer a conversational interface for interacting with a large language model. However, the exam is unlikely to focus on development detail. Instead, it will test whether you can identify when a business need is generative. For example, if users want a system that can draft email responses, summarize meeting notes, create product descriptions, or answer questions in natural language using broad reasoning, those are generative AI patterns. Summarization is especially important because it turns long content into concise overviews, executive summaries, or action-item lists.

A common exam trap is assuming that any question-answering system is generative AI. If the scenario is restricted to curated FAQs or support content, question answering may be the better fit. Generative AI becomes the better answer when flexibility, drafting, open-ended dialogue, or language creation is central to the requirement. Another trap is confusing predictive ML with generative AI. Predicting a numeric value or classifying records is machine learning, not generative text creation.

  • Copilots assist users inside apps and workflows.
  • Chat experiences support conversational interaction with AI.
  • Content generation creates drafts, messages, descriptions, and other text.
  • Summarization condenses long documents or conversations into key points.

Exam Tip: If the user asks the system to create, draft, rewrite, or summarize, generative AI is likely the correct workload. If the system is only labeling or extracting from existing content, it is likely classic NLP instead.

The exam tests whether you can recognize the business outcomes of generative AI without overcomplicating the technology. Keep your focus on practical use cases and on the distinction between analyzing existing language and generating new language. That distinction appears repeatedly in AI-900 scenario questions.

Section 5.5: Azure OpenAI Service, prompts, grounding concepts, and responsible generative AI practices

Azure OpenAI Service is the Azure offering most closely associated with large language models and generative AI experiences. For AI-900, you are not expected to engineer production-scale solutions, but you should understand the concepts of prompts, model outputs, grounding, and responsible use. A prompt is the instruction or input provided to the model. Good prompts help define the task, expected tone, desired format, and relevant context. Prompt quality affects output quality, which is why prompt design is frequently discussed in generative AI fundamentals.
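The prompt elements named here (task, tone, format, context) can be assembled mechanically. A minimal sketch, with a template of my own invention rather than any Microsoft-prescribed format:

```python
def build_prompt(task: str, tone: str, fmt: str, context: str) -> str:
    """Assemble a prompt from the elements named in this section.
    The labeled template is an illustrative convention, not an API."""
    return (
        f"Task: {task}\n"
        f"Tone: {tone}\n"
        f"Output format: {fmt}\n"
        f"Context:\n{context}"
    )
```

For example, `build_prompt("Summarize the meeting notes", "professional", "bullet list", notes)` yields a prompt that states the task, tone, and format explicitly, which is the behavior the exam means when it says prompt quality affects output quality.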

Grounding means providing relevant context or trusted data so the model can produce more accurate, useful, and domain-specific responses. In exam terms, grounding helps reduce unsupported or generic answers by anchoring the model to enterprise content, approved sources, or retrieved documents. This is an important distinction because a model without grounding may answer fluently but not necessarily accurately for the organization’s specific needs. If the question describes connecting a generative system to company documents, product data, or approved knowledge, grounding is a key concept.
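Conceptually, grounding often means supplying retrieved trusted content alongside the question so the model answers from it. A minimal sketch of that pattern, assuming retrieval from a search index or document store has already happened (the instruction wording is mine, not official guidance):

```python
def ground_prompt(question: str, retrieved_docs: list[str]) -> str:
    """Sketch of grounding: prepend trusted sources so the model
    answers from enterprise content rather than general knowledge."""
    sources = "\n---\n".join(retrieved_docs)
    return (
        "Answer using ONLY the sources below. "
        "If the answer is not in the sources, say you do not know.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
```

The explicit "answer only from the sources" instruction is what reduces unsupported or generic answers, which is the exam-level meaning of grounding.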

Responsible generative AI is highly testable. Microsoft expects candidates to understand risks such as harmful content, bias, privacy concerns, hallucinations, and misuse. The exam may ask at a high level how to make generative AI safer and more trustworthy. The correct ideas usually include content filtering, human oversight, grounding, monitoring, transparency, and aligning outputs with policy and intended use. Responsible AI is not a separate afterthought; it is part of the solution design.

Exam Tip: If an answer choice includes improving relevance with trusted data, reducing harmful outputs, or applying human review and content filters, it is often aligned with Microsoft’s responsible generative AI guidance.

A common trap is treating prompts as guarantees. Prompts guide the model, but they do not ensure factual correctness. That is why grounding and validation matter. Another trap is assuming Azure OpenAI replaces all other AI services. It does not. Classic Azure AI services are still often the better answer for deterministic tasks such as OCR, sentiment analysis, or speech transcription. AI-900 rewards balanced judgment: know when Azure OpenAI is appropriate, and know when a specialized Azure AI service is more direct, lower risk, and better aligned to the requirement.

Section 5.6: Exam-style practice for the domains NLP workloads on Azure and Generative AI workloads on Azure

To prepare for AI-900 questions in this domain, practice reading scenarios through a service-selection lens. The exam often gives short descriptions with one or two decisive clues. Your job is to identify the data type, the task, and the expected output. If the source is text and the task is to detect sentiment, extract entities, identify key phrases, or answer from an FAQ, you are usually in Azure AI Language territory. If the source or destination is audio, think Azure AI Speech. If the solution is a user-facing assistant for structured conversations, think conversational AI and bot scenarios. If the requirement is to generate, summarize, rewrite, or chat more flexibly, think generative AI and Azure OpenAI Service concepts.

Use elimination aggressively. Remove any answer that does not match the modality first. For example, if no image or video is involved, eliminate vision services immediately. Then remove answers that do not match the action. “Extract” and “classify” suggest traditional AI analysis. “Draft,” “compose,” and “summarize” suggest generative AI. “Speak,” “transcribe,” and “subtitle” suggest speech workloads. This narrowing method is especially effective because AI-900 distractors are often adjacent technologies rather than completely unrelated products.
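The two-pass elimination above (modality first, then action) can be sketched as a filter. The option structure and keyword sets are invented for illustration, not drawn from the real exam or any Azure API:

```python
# Sketch of two-pass elimination: drop answers whose modality doesn't
# match the scenario, then drop answers whose action doesn't match.
def eliminate(options, scenario_modality, scenario_action):
    survivors = [o for o in options if o["modality"] == scenario_modality]
    return [o for o in survivors if scenario_action in o["actions"]]

options = [
    {"name": "Azure AI Vision", "modality": "image", "actions": {"detect", "classify"}},
    {"name": "Azure AI Language", "modality": "text", "actions": {"extract", "classify"}},
    {"name": "Azure AI Speech", "modality": "audio", "actions": {"transcribe", "speak"}},
]
```

For a text scenario whose task is to extract, the vision and speech distractors are removed on the first pass, leaving only the language service, which mirrors how adjacent-technology distractors collapse once modality is checked.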

Another strong test strategy is to watch for overpowered answers. If a simple sentiment analysis task lists both a specialized language service and a generative AI platform among the choices, the simpler specialized service is usually the better answer. Microsoft frequently tests whether you can choose the most appropriate service, not the most advanced-sounding one. The same applies to question answering versus open-ended generative chat: if a curated FAQ solves the problem, do not overreach.

Exam Tip: The exam likes business phrasing. Translate each requirement into a workload category before you look at answer choices: text analytics, speech, conversational AI, or generative AI. That habit prevents confusion.

In your final review, make sure you can do four things quickly: identify classic NLP tasks, distinguish text from speech scenarios, separate traditional bots from generative copilots, and explain why grounding and responsible AI matter in Azure OpenAI solutions. If you can perform those distinctions consistently, you will be well prepared for the NLP and generative AI objectives in AI-900.

Chapter milestones
  • Understand language AI and speech workloads
  • Match NLP tasks to Azure AI services
  • Explain generative AI concepts, prompts, and copilots
  • Practice AI-900 style NLP and generative AI questions
Chapter quiz

1. A company wants to analyze thousands of customer product reviews to determine whether the opinions expressed are positive, negative, or neutral. Which Azure AI service capability should they use?

Correct answer: Azure AI Language sentiment analysis
Azure AI Language sentiment analysis is the correct choice because the requirement is to classify the opinion expressed in existing text. Azure AI Speech speech-to-text is used to convert spoken audio into text, not to analyze written reviews. Azure OpenAI text generation creates new content from prompts, but the scenario is about extracting meaning from existing text, which is a classic NLP workload tested in AI-900.

2. A support center needs a solution that converts live phone conversations into written transcripts in real time. Which Azure service should be selected?

Correct answer: Azure AI Speech speech-to-text
Azure AI Speech speech-to-text is correct because the workload involves recognizing spoken audio and converting it into text. Azure AI Language named entity recognition can extract people, places, and organizations from text, but it does not transcribe audio. Azure OpenAI chat completions can generate conversational responses, but it is not the primary service for real-time speech transcription.

3. A business wants to build a copilot-style assistant that can generate draft email responses and summarize user-provided content based on prompts. Which Azure offering is the best fit?

Correct answer: Azure OpenAI Service
Azure OpenAI Service is the best fit because the scenario describes generative AI tasks: creating draft responses and summarizing content from prompts. Azure AI Language question answering is designed for retrieving answers from a knowledge base or provided content, not for broad generative drafting. Azure AI Speech text-to-speech converts text into spoken audio, which does not meet the requirement to generate written responses.

4. A legal team wants to process contracts and automatically identify people, organizations, and locations mentioned in the text. Which capability should they use?

Correct answer: Azure AI Language named entity recognition
Azure AI Language named entity recognition is correct because it extracts structured entities such as people, organizations, and locations from text. Azure AI Speech translation is for translating spoken language, which is unrelated to entity extraction from documents. Azure OpenAI image generation creates images from prompts and is not designed for structured text analytics workloads.

5. A global company wants meeting participants to speak in one language and have their words translated into another language during the conversation. Which Azure AI capability best matches this requirement?

Correct answer: Azure AI Speech speech translation
Azure AI Speech speech translation is correct because the scenario involves spoken language being translated during a conversation. Azure AI Language key phrase extraction identifies important terms in written text, but it does not handle live spoken translation. Azure OpenAI summarization can condense text, but the requirement is translation of speech, which is a different workload category commonly distinguished on the AI-900 exam.

Chapter 6: Full Mock Exam and Final Review

This chapter is the capstone of your AI-900 preparation. By this stage, you should already recognize the core exam domains: AI workloads and common solution scenarios, machine learning fundamentals on Azure, computer vision workloads, natural language processing workloads, and generative AI workloads with responsible AI considerations. The purpose of this chapter is not to introduce brand-new content, but to help you convert knowledge into exam performance. Microsoft AI-900 is a fundamentals exam, yet many candidates lose points not because the concepts are too difficult, but because they misread scenario wording, confuse similar Azure AI services, or fail to map a business requirement to the correct AI capability.

The chapter is organized around a realistic final preparation flow. First, you complete a full mixed mock exam in two parts to simulate context switching across all official domains. Then you review your answers using a structured analysis method instead of simply checking whether you were right or wrong. After that, you diagnose weak spots in the areas that most often reduce scores: AI workloads, machine learning terminology, computer vision scenarios, language workloads, and generative AI service mapping. Finally, you finish with a practical review sheet and an exam day checklist so that your last hours of preparation are focused and efficient.

What does the real exam test for? It primarily tests recognition, distinction, and mapping. You are expected to identify the type of AI workload described, recognize which Azure AI service best fits a scenario, distinguish supervised from unsupervised learning, understand responsible AI principles, and identify use cases for image analysis, OCR, language understanding, speech, translation, conversational AI, and generative AI. The exam often presents short business-oriented descriptions. Your task is to spot the key signal words: classify, predict, cluster, detect objects, extract text, analyze sentiment, translate speech, build a bot, generate content, or ground a copilot on enterprise data.

Exam Tip: AI-900 rarely rewards deep implementation detail. It rewards accurate matching of requirement to capability. If two answers seem technically possible, choose the one that most directly and simply satisfies the stated need.

A strong final review strategy also means understanding common traps. Candidates often confuse OCR with image classification, sentiment analysis with key phrase extraction, conversational AI with generative AI, and Azure Machine Learning with prebuilt Azure AI services. Another frequent trap is assuming that any intelligent feature requires custom model training. In AI-900, many scenarios are solved with prebuilt services rather than custom data science workflows. If the requirement is standard and common, Microsoft often expects you to choose an Azure AI service rather than a full machine learning pipeline.

The lessons in this chapter mirror how top scorers prepare. Mock Exam Part 1 and Part 2 help you practice endurance and domain switching. Weak Spot Analysis helps you categorize mistakes by concept rather than by question number. The Exam Day Checklist turns knowledge into action: pacing, reading discipline, elimination technique, confidence control, and final answer review. Treat this chapter like a rehearsal. Your goal is not only to know the content, but to recognize patterns quickly, eliminate distractors confidently, and enter the exam with a calm, systematic plan.

  • Use the mock exam to identify patterns of confusion, not just missed answers.
  • Review every incorrect and guessed response to determine why the right answer is right and why the others are wrong.
  • Focus remediation on service mapping, terminology, and scenario interpretation.
  • Memorize high-yield distinctions, especially across Azure AI services.
  • Practice exam discipline: read the requirement, identify the workload, then choose the simplest fit.

As you work through the final sections of this chapter, keep the course outcomes in mind. You must be able to describe AI workloads, explain machine learning fundamentals on Azure, identify computer vision and NLP scenarios, recognize generative AI use cases and responsible AI considerations, and apply exam strategy under pressure. That combination of technical recognition and disciplined test-taking is what produces pass readiness.

Practice note for Mock Exam Part 1: before you start, set a target score, time the full session, and log every question you guessed on. Afterward, capture what you missed, why you missed it, and what you will review next. This discipline turns each mock into a diagnostic rather than a one-off score.

Section 6.1: Full-length mixed mock exam covering all official AI-900 domains

Your final mock exam should feel like the real AI-900 experience: mixed topics, frequent shifts in context, and answer choices that test whether you can distinguish similar concepts. In a proper mixed mock, do not group all machine learning questions together or all computer vision questions together. The real exam expects you to move quickly from an AI workload scenario to a responsible AI principle, then to an NLP service, then to a generative AI use case. That context switching is part of the challenge.

When taking Mock Exam Part 1 and Mock Exam Part 2, simulate exam conditions. Sit in one session if possible, limit distractions, and avoid checking notes between blocks. Mark uncertain items mentally or on scratch paper and continue. Do not let one confusing scenario consume your time. Your objective is to build recognition speed across all official domains: AI workloads and common scenarios, machine learning on Azure, computer vision, natural language processing, and generative AI.

As you work through a full mixed mock, use a three-step decision process. First, identify the workload category. Is the scenario about prediction, clustering, visual recognition, text understanding, speech, or generated content? Second, identify whether the scenario calls for a prebuilt Azure AI service or a broader machine learning platform such as Azure Machine Learning. Third, eliminate distractors by checking for mismatch between requirement and service capability.

Exam Tip: If a scenario describes a standard task such as OCR, sentiment analysis, translation, or object detection, expect a prebuilt Azure AI service to be the correct fit. Reserve Azure Machine Learning for custom model training and broader ML lifecycle scenarios.

Common traps in mixed mocks include choosing a technically advanced answer when a simpler managed service is sufficient, and confusing adjacent services. For example, image-related wording does not always mean image classification; it may indicate OCR, face detection, or object detection. Language-related wording does not always mean chatbots; it may indicate key phrase extraction, sentiment analysis, translation, or speech-to-text. Generative AI wording may mention copilots, prompts, or grounded responses; these clues point toward Azure OpenAI-related scenarios rather than traditional conversational bots alone.

After each mock part, do not just record a score. Tag each missed item by domain and by reason: concept gap, service confusion, wording trap, or rushed reading. This creates the foundation for Weak Spot Analysis later in the chapter. The mock exam is not only a test of memory; it is a diagnostic map of where your exam instincts are strongest and where they need refinement.

Section 6.2: Answer review methodology and how to decode Microsoft exam wording

The value of a mock exam comes from the review process. Many candidates waste practice opportunities by checking the score and moving on. Effective answer review means examining every incorrect answer, every guessed answer, and even every correct answer that felt uncertain. Your goal is to understand the wording patterns Microsoft uses and train yourself to detect what the exam is really asking.

Start with a four-part review method. First, restate the scenario requirement in your own words. Second, identify the tested objective: AI workload recognition, machine learning type, Azure service mapping, responsible AI concept, or generative AI scenario. Third, explain why the correct answer fits precisely. Fourth, explain why each distractor is wrong. This final step is where real learning happens, because many AI-900 distractors are not absurd; they are plausible but incomplete, too broad, or aimed at a different workload.
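The four-part method can be kept as a structured note while you review, so every missed question is recorded the same way. A minimal sketch; the field names are mine, not Microsoft's:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewEntry:
    """One reviewed question, following the four-part method above."""
    requirement: str                 # 1. scenario restated in your own words
    objective: str                   # 2. tested objective (workload, ML type, service mapping...)
    why_correct: str                 # 3. why the right answer fits precisely
    why_others_wrong: dict = field(default_factory=dict)  # 4. distractor -> reason it fails
```

Filling the fourth field for every distractor is where the learning happens, because it forces you to articulate why a plausible option is incomplete, too broad, or aimed at a different workload.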

Microsoft wording often includes subtle qualifiers such as classify, predict, detect, extract, analyze, generate, summarize, translate, or converse. These verbs matter. “Classify” and “predict” often suggest supervised learning. “Cluster” suggests unsupervised learning. “Extract text” suggests OCR. “Analyze sentiment” suggests text analytics. “Generate content” or “assist users with natural-language prompts” suggests generative AI. “Build a conversational interface” may refer to bots, but if the emphasis is on content generation or grounding answers in organizational data, generative AI is the stronger cue.

Exam Tip: Pay attention to scope words such as custom, prebuilt, real-time, responsible, and enterprise data. These often separate two otherwise similar answer choices.

Another common wording pattern is the business scenario that avoids naming the technology directly. You may be told that a company wants to detect defects in product images, convert scanned forms into searchable text, analyze customer feedback, or create a user assistant that drafts responses. The test is assessing your ability to infer the correct AI category and Azure service from the business need, not from explicit terminology.

Be especially careful with answer choices that are true statements but do not answer the exact question. This is a classic exam trap. For example, a service may support AI generally, yet not be the best tool for the described workload. Always ask: which option most directly satisfies the requirement with the least unnecessary complexity? That is usually the Microsoft exam logic.

Section 6.3: Weak area remediation across Describe AI workloads and ML on Azure

If your mock exam shows weakness in the early AI-900 domains, focus first on vocabulary precision. The exam expects you to describe common AI workloads clearly: machine learning, computer vision, natural language processing, conversational AI, anomaly detection, forecasting, and generative AI. Many low scores begin with broad familiarity but weak distinction. You may recognize that a scenario uses AI, yet miss whether it is prediction, classification, regression, clustering, or anomaly detection.

For machine learning on Azure, review the essential objective: understanding supervised learning, unsupervised learning, and responsible AI principles. Supervised learning uses labeled data to predict known outcomes, such as classifying emails or forecasting sales. Unsupervised learning finds structure in unlabeled data, such as grouping customers by behavior. The exam usually does not require mathematical detail; it tests whether you can map a scenario to the correct learning type. If the scenario includes historical examples with known answers, think supervised. If it involves discovering natural groupings or patterns, think unsupervised.

Azure Machine Learning belongs in your final review as the platform for building, training, managing, and deploying ML models. A common trap is selecting Azure Machine Learning for tasks already handled by prebuilt Azure AI services. Use Azure Machine Learning when the scenario stresses custom model development, experimentation, model management, or the ML lifecycle. Do not overuse it for standard AI capabilities available as managed services.

Responsible AI also appears regularly. Know the core principles at a recognition level: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. The exam may describe a concern such as biased outcomes, lack of explainability, or safeguarding personal data. Your job is to identify which principle is being emphasized. This is often straightforward if you slow down and read the concern carefully.
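The concern-to-principle recognition step can be drilled with a simple mapping. The clue phrases on the left are study shorthand of my own, not official Microsoft wording; the six principles on the right are the ones listed above:

```python
# Recognition-level mapping from exam concern wording to the responsible
# AI principle it usually signals. Clue phrases are study shorthand.
PRINCIPLE_CLUES = {
    "biased outcomes": "fairness",
    "dependable, safe behavior": "reliability and safety",
    "protecting personal data": "privacy and security",
    "supporting diverse users": "inclusiveness",
    "understandable decisions": "transparency",
    "human oversight and responsibility": "accountability",
}
```

If a scenario concern does not land cleanly on one of these six, slow down and reread it; the exam usually emphasizes exactly one principle per question.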

Exam Tip: If you miss a responsible AI question, ask whether the issue is about the model behaving fairly, operating safely, protecting data, being understandable, including diverse users, or ensuring human responsibility. One of those six usually fits cleanly.

To remediate weak spots, create a one-page chart with four columns: workload, definition, common business clue words, and likely Azure solution. Repetition of these patterns will improve both speed and accuracy.
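That four-column chart can even live in code so you can quiz yourself from it. The rows below are example entries to extend yourself, not an exhaustive or official mapping:

```python
# One-page remediation chart sketch:
# (workload, definition, business clue words, likely Azure solution)
CHART = [
    ("computer vision", "interpret images or video",
     ["detect objects", "read text in images"], "Azure AI Vision"),
    ("NLP", "analyze written language",
     ["sentiment", "key phrases", "entities"], "Azure AI Language"),
    ("speech", "audio input or output",
     ["transcribe", "narrate", "subtitles"], "Azure AI Speech"),
    ("generative AI", "create new content from prompts",
     ["draft", "summarize", "rewrite"], "Azure OpenAI Service"),
]
```

Reviewing the chart column by column (cover the last column and recall it from the clue words) is the repetition that builds both speed and accuracy.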

Section 6.4: Weak area remediation across Computer vision, NLP, and Generative AI workloads on Azure

This is the highest-yield remediation area because AI-900 candidates often confuse similar services across vision, language, and generative AI. Begin with computer vision. Separate these ideas clearly: image analysis describes visual content; object detection identifies and locates objects; OCR extracts printed or handwritten text from images; face-related capabilities analyze facial attributes or detect faces; and video scenarios may involve extracting visual insights over time. If the requirement is to make scanned text searchable, the key clue is OCR, not image classification. If the need is to detect products, vehicles, or defects within an image, think object detection.

For NLP, keep service mapping practical. Text Analytics-style scenarios involve sentiment analysis, key phrase extraction, language detection, and entity recognition. Translation scenarios involve converting text or speech between languages. Speech scenarios involve speech-to-text, text-to-speech, and speech translation. Conversational AI scenarios involve creating a bot or interactive assistant. The main exam challenge is not knowing that these services exist, but distinguishing which one directly solves the stated problem.

Generative AI introduces another layer of confusion because candidates may blur it with traditional NLP or bot frameworks. Generative AI workloads focus on producing content, summarizing information, answering questions from prompts, assisting users through copilots, and using foundation models. Azure OpenAI-related scenarios often involve prompt engineering, grounded responses, and responsible generation. If the scenario is about drafting text, summarizing documents, generating code, or creating a natural-language assistant that composes answers, generative AI is likely the target domain.

Exam Tip: A chatbot does not automatically mean generative AI, and generative AI does not automatically mean a standard bot service. Read whether the system is primarily routing dialog flows or generating new content based on prompts and model reasoning.

Responsible generative AI also matters. Watch for concerns about harmful content, hallucinations, data grounding, transparency, and safe user interaction. Microsoft may test whether you recognize that prompt design, content filtering, and grounding model responses in trusted enterprise data improve reliability and safety. If your mock results are weak here, rebuild your notes around scenario words rather than product names. The exam asks for practical recognition more often than memorized definitions.

Section 6.5: Final review sheet of key Azure AI services, terms, and scenario mappings

Your final review sheet should compress the entire course into quick scenario-to-service mappings. This is not a full set of notes; it is a high-speed recall tool for the last review session before the exam. Organize it by workload and phrase each item as a scenario signal paired with the most likely Azure answer.

For AI workloads and ML, remember: prediction or classification from labeled data suggests supervised learning; grouping unlabeled data suggests unsupervised learning; custom model development suggests Azure Machine Learning. For responsible AI, map fairness to avoiding biased outcomes, transparency to understandable decisions, privacy and security to protecting data, reliability and safety to dependable behavior, inclusiveness to supporting diverse users, and accountability to human oversight.
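The supervised-versus-unsupervised distinction above can be made concrete with a short, purely illustrative sketch. This is not Azure code; the data, function names, and the 1-nearest-neighbour stand-in for a trained model are all invented for clarity.

```python
# Illustrative sketch only (not an Azure API). Data and names are invented.

def supervised_predict(labeled_data, new_point):
    """Supervised learning: we start from labeled examples (value, label).
    A 1-nearest-neighbour lookup stands in for a trained model here."""
    nearest = min(labeled_data, key=lambda pair: abs(pair[0] - new_point))
    return nearest[1]

def unsupervised_group(values, gap=20):
    """Unsupervised learning (clustering): no labels at all; we simply
    group values that sit close together."""
    ordered = sorted(values)
    clusters, current = [], [ordered[0]]
    for v in ordered[1:]:
        if v - current[-1] <= gap:
            current.append(v)
        else:
            clusters.append(current)
            current = [v]
    clusters.append(current)
    return clusters

# Supervised: purchase amounts labeled "churned"/"retained" predict a new customer.
history = [(20, "churned"), (25, "churned"), (90, "retained"), (110, "retained")]
print(supervised_predict(history, 100))           # retained

# Unsupervised: the same amounts, unlabeled, fall into natural segments.
print(unsupervised_group([20, 25, 90, 110, 95]))  # [[20, 25], [90, 95, 110]]
```

The exam clue words line up with the code: "known labels" means the `history`-style input of supervised learning, while "group similar customers without labels" means the clustering path.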

  • Image content understanding: Azure AI Vision-type image analysis scenario
  • Read text from images or forms: OCR scenario
  • Detect and locate items in images: object detection scenario
  • Analyze customer opinion in text: sentiment analysis scenario
  • Extract important terms from text: key phrase extraction scenario
  • Convert spoken words to text or text to speech: speech scenario
  • Translate between languages: translation scenario
  • Build a conversational assistant with dialog: bot scenario
  • Generate summaries, drafts, or prompt-based content: generative AI scenario
  • Create copilots using foundation models and prompts: Azure OpenAI-style generative AI scenario

Also memorize likely trap pairs. OCR versus image analysis. Translation versus sentiment analysis. Speech recognition versus conversational AI. Bot framework concepts versus generative AI copilots. Azure Machine Learning versus prebuilt Azure AI services. These are classic decision points on AI-900.
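The scenario-to-service mappings above can even be drilled as a tiny flashcard helper. This is a hypothetical study aid of my own, not an Azure tool; the clue-word sets and function name are assumptions chosen to mirror the review sheet.

```python
# Hypothetical flashcard helper (a study-aid sketch, not an Azure tool):
# scores scenario wording against clue words and returns the likely answer.

CLUES = {
    "OCR": {"read", "printed", "text", "image", "form", "label"},
    "Object detection": {"detect", "locate", "objects", "bounding"},
    "Sentiment analysis": {"positive", "negative", "opinion", "review"},
    "Key phrase extraction": {"important", "terms", "key", "phrases"},
    "Speech": {"speech", "spoken", "voice", "transcribe"},
    "Translation": {"translate", "languages"},
    "Generative AI": {"generate", "draft", "summarize", "prompt", "copilot"},
}

def best_match(scenario: str) -> str:
    """Return the service whose clue words overlap the scenario the most."""
    words = set(scenario.lower().split())
    return max(CLUES, key=lambda service: len(words & CLUES[service]))

print(best_match("extract product prices printed on shelf labels"))
# OCR
print(best_match("decide whether each customer review is positive or negative"))
# Sentiment analysis
```

Rewriting the clue sets from memory is itself a useful final-review exercise: any service whose clue words you cannot list is a weak spot.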

Exam Tip: In the last review window, stop trying to learn obscure details. Focus on distinctions that help you answer scenario questions quickly and confidently.

If possible, rewrite your review sheet from memory once. Any item you cannot recall smoothly is a final weak spot. Fix that before exam day.

Section 6.6: Exam day strategy, time management, confidence control, and last-minute checklist

Exam day performance depends on discipline more than intensity. The AI-900 exam is designed to test recognition and judgment under time pressure, not deep implementation. Your strategy should therefore be simple and repeatable. Read each item carefully, identify the domain, eliminate mismatched answers, and select the option that most directly addresses the requirement. Avoid overthinking. Many wrong answers come from adding complexity that the question never requested.

Time management starts with pacing. Move steadily and avoid spending too long on any one item. If a question feels ambiguous, eliminate the obvious wrong answers, choose the best remaining option, and move on. Confidence matters because hesitation can create avoidable second-guessing. Trust well-learned distinctions: OCR for text in images, sentiment for opinion, supervised for labeled outcomes, Azure Machine Learning for custom models, generative AI for prompt-based content creation.

Control test anxiety by using a reset routine. If you feel stuck, pause for one breath, restate the requirement in plain language, and ask what capability is actually needed. This often cuts through distracting wording. Also remember that not every answer choice is equally precise. Microsoft commonly includes broad cloud or AI statements that sound true but are not the best fit.

Exam Tip: Do not change answers casually at the end. Change an answer only if you can identify a specific misread keyword or a clear service mismatch.

Your last-minute checklist should include practical items: confirm exam logistics, arrive early or test your online setup, bring permitted identification, and avoid heavy last-minute studying that increases stress. In the final hour, review only your one-page sheet of key mappings, responsible AI principles, and common trap pairs. Mentally rehearse your approach: identify workload, identify clue words, eliminate distractors, choose the simplest correct fit.

The final goal is calm execution. You do not need perfection to pass AI-900. You need solid domain recognition, clean service mapping, and steady exam discipline. If you have completed the mock exams, performed weak spot analysis, and reviewed the final checklist in this chapter, you are approaching the exam the way successful candidates do.

Chapter milestones
  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist
Chapter quiz

1. A retail company wants to process photos from store shelves to extract product names and prices printed on labels. The solution must use a prebuilt Azure AI capability with minimal custom training. Which capability should you choose?

Correct answer: Optical character recognition (OCR)
OCR is correct because the requirement is to extract printed text from images. Image classification is used to assign an image to a category, such as identifying whether an image contains a product type, but it does not read text. Object detection can identify and locate objects in an image, such as products on shelves, but it does not directly extract the words and prices printed on labels. AI-900 commonly tests the distinction between reading text in images and analyzing image content.

2. You review a mock exam question that asks for the best Azure solution to analyze customer reviews and determine whether each review is positive, negative, or neutral. Which Azure AI capability best fits this requirement?

Correct answer: Sentiment analysis
Sentiment analysis is correct because the task is to classify opinion text as positive, negative, or neutral. Key phrase extraction identifies important terms or phrases in text, but it does not assign sentiment. Azure Machine Learning could be used to build a custom model, but AI-900 emphasizes choosing the simplest and most direct fit. For a standard text analytics requirement like this, a prebuilt Azure AI language service is the expected answer.

3. A manager is taking a final practice test and sees this requirement: 'Group customers into segments based on similar purchasing behavior without using known labels.' Which machine learning concept does this describe?

Correct answer: Clustering
Clustering is correct because the goal is to group similar items without preexisting labeled outcomes, which is an unsupervised learning task. Supervised learning requires labeled training data, so it does not match the scenario. Regression is a type of supervised learning used to predict numeric values, such as future sales amounts, rather than discover natural groupings. AI-900 often tests recognition of keywords such as group, segment, similar, and without labels.

4. A company wants to build a solution that answers employee questions by generating responses grounded in internal company documents. During final review, you must choose the option that most directly matches this requirement. Which should you select?

Correct answer: A generative AI solution grounded on enterprise data
A generative AI solution grounded on enterprise data is correct because the requirement is to generate answers based on internal documents, which aligns with retrieval-grounded copilots and generative AI workloads. A computer vision model is unrelated because the scenario is about answering questions from documents, not analyzing images. An unsupervised clustering model groups similar records but does not generate grounded natural-language answers. AI-900 increasingly tests recognition of generative AI scenarios and the idea of grounding responses on organizational data.

5. On exam day, a candidate encounters a question where two Azure services seem technically possible. Based on AI-900 exam strategy, what is the best approach?

Correct answer: Choose the service that most directly and simply satisfies the stated requirement
Choosing the service that most directly and simply satisfies the requirement is correct because AI-900 focuses on accurate service mapping rather than deep implementation complexity. The exam often expects prebuilt Azure AI services for common scenarios. Choosing the most customized option is a trap because more complex solutions are not automatically better. Choosing Azure Machine Learning whenever AI is mentioned is also incorrect because many standard tasks, such as OCR, sentiment analysis, speech, or translation, are better matched to prebuilt services.