Microsoft AI Fundamentals AI-900 Exam Prep

AI Certification Exam Prep — Beginner

Pass AI-900 with beginner-friendly Microsoft exam prep

Beginner · ai-900 · microsoft · azure ai fundamentals · ai certification

Prepare for the Microsoft AI-900 Exam with Confidence

Microsoft Azure AI Fundamentals, also known as AI-900, is one of the best entry points for learners who want to understand artificial intelligence concepts without needing a programming background. This course is designed specifically for non-technical professionals, career changers, students, and business users who want a clear path to certification. If you are looking for a structured, beginner-friendly way to prepare for Microsoft's AI-900 exam, this course blueprint gives you a focused study journey built around the official exam objectives.

The course follows the core AI-900 domains published by Microsoft: Describe AI workloads; Fundamental principles of ML on Azure; Computer vision workloads on Azure; NLP workloads on Azure; and Generative AI workloads on Azure. Instead of overwhelming learners with technical implementation details, the course emphasizes concept clarity, Azure service recognition, scenario-based thinking, and exam-style practice. This makes it ideal for learners who need to pass the exam and also understand how Azure AI services fit into real business situations.

How the 6-Chapter Structure Supports Exam Success

Chapter 1 introduces the certification journey. Learners begin by understanding what the AI-900 exam is, how registration works, what kinds of questions appear on the test, how scoring works at a high level, and how to build a practical study plan. This chapter is especially useful for first-time certification candidates because it removes uncertainty and helps them prepare with confidence.

Chapters 2 through 5 map directly to the official exam domains. Each chapter is organized around clear milestones and internal sections that target what Microsoft expects candidates to recognize and explain. The content is designed to help learners identify common AI workload categories, understand foundational machine learning concepts on Azure, compare computer vision and natural language processing solutions, and explain modern generative AI scenarios using Azure services. Each domain chapter also includes exam-style question practice so learners can get used to the wording, pace, and distractors commonly found in certification exams.

  • Chapter 2 covers Describe AI workloads, including scenario recognition and responsible AI principles.
  • Chapter 3 covers Fundamental principles of ML on Azure, including regression, classification, clustering, model evaluation, and responsible machine learning.
  • Chapter 4 combines Computer vision workloads on Azure and NLP workloads on Azure for efficient, scenario-based learning.
  • Chapter 5 focuses on Generative AI workloads on Azure, including copilots, prompts, foundation models, and responsible use.
  • Chapter 6 provides a full mock exam, final domain review, and exam-day checklist.

Why This Course Works for Non-Technical Professionals

Many AI certification resources assume technical experience. This course does not. It is written for beginners with basic IT literacy who want practical explanations rather than code-heavy labs. The structure helps learners move from broad understanding to precise exam readiness. Topics are framed in business language where possible, while still using the exact domain names and service categories that matter for AI-900 success.

Because Microsoft exams often test your ability to choose the right AI approach for a given scenario, the course places special attention on service matching, use-case interpretation, and concept comparison. Learners are not just memorizing definitions; they are building the judgment needed to answer exam questions accurately. That includes understanding when to use computer vision versus language services, how machine learning categories differ, and what generative AI can and cannot do responsibly.

Start Your AI-900 Preparation Today

Whether you are preparing for your first Microsoft certification or refreshing your knowledge of Azure AI concepts, this course provides a complete and approachable roadmap. It helps you study the right topics, practice in the right format, and review with the exam in mind. The result is a more efficient preparation experience and a better chance of passing AI-900 on your first attempt.

Ready to begin? Register for free to start your certification journey, or browse all courses to explore more exam-prep options on Edu AI.

What You Will Learn

  • Describe AI workloads and common AI solution scenarios tested on the AI-900 exam
  • Explain fundamental principles of machine learning on Azure, including supervised, unsupervised, and responsible AI concepts
  • Identify computer vision workloads on Azure and match them to the appropriate Azure AI services
  • Describe natural language processing workloads on Azure, including language understanding, translation, and speech capabilities
  • Explain generative AI workloads on Azure, including copilots, prompts, foundation models, and responsible use
  • Apply exam strategy, question analysis, and mock-test review methods to improve AI-900 exam readiness

Requirements

  • Basic IT literacy and comfort using a web browser and cloud-based tools
  • No prior certification experience is needed
  • No programming background is required
  • Interest in Microsoft Azure and AI concepts for business or career growth

Chapter 1: AI-900 Exam Foundations and Study Strategy

  • Understand the AI-900 exam format and objectives
  • Plan registration, scheduling, and test delivery options
  • Build a beginner-friendly study strategy
  • Use practice questions and review methods effectively

Chapter 2: Describe AI Workloads

  • Recognize common AI workload categories
  • Match business scenarios to AI solution types
  • Understand responsible AI at a foundational level
  • Practice exam-style scenario questions for AI workloads

Chapter 3: Fundamental Principles of ML on Azure

  • Understand machine learning fundamentals without coding
  • Differentiate supervised and unsupervised learning
  • Recognize Azure machine learning concepts and services
  • Practice exam-style ML and responsible AI questions

Chapter 4: Computer Vision and NLP Workloads on Azure

  • Understand computer vision workloads on Azure
  • Understand natural language processing workloads on Azure
  • Compare Azure AI services for vision and language scenarios
  • Practice mixed exam-style questions across both domains

Chapter 5: Generative AI Workloads on Azure

  • Understand generative AI workloads on Azure
  • Learn prompts, copilots, and foundation model concepts
  • Recognize responsible generative AI practices
  • Practice exam-style generative AI questions and reviews

Chapter 6: Full Mock Exam and Final Review

  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist

Daniel Mercer

Microsoft Certified Trainer and Azure AI Fundamentals Specialist

Daniel Mercer is a Microsoft Certified Trainer who specializes in Azure AI, cloud fundamentals, and certification readiness. He has helped beginner learners prepare for Microsoft exams through structured domain-based study plans, practical explanations, and exam-style practice.

Chapter 1: AI-900 Exam Foundations and Study Strategy

The Microsoft Azure AI Fundamentals AI-900 exam is designed as an entry-level certification for learners who want to prove they understand core artificial intelligence concepts and the Azure services that support them. This is not a deep engineering exam, but it is also not a vocabulary-only test. Microsoft expects candidates to recognize AI workloads, identify the right Azure AI service for a scenario, understand basic machine learning ideas, and apply responsible AI principles in straightforward business contexts. In other words, the exam tests whether you can connect concepts to use cases, not whether you can build production systems.

That distinction matters because many candidates either over-prepare in the wrong direction or underestimate the exam. A common trap is spending too much time on code, SDK syntax, or advanced mathematics. AI-900 is not primarily about writing Python notebooks or tuning deep neural networks. Instead, it focuses on foundational reasoning: when computer vision is the right fit, what natural language processing can accomplish, how generative AI differs from traditional AI workloads, and why responsible AI appears across every domain. If you understand those patterns, you can answer many questions even when the wording changes.

This chapter gives you the framework for the rest of the course. You will learn how the exam is organized, how to interpret Microsoft objective language, what logistics to plan before test day, how scoring and question styles influence your pacing, and how to study efficiently as a beginner. Because AI-900 spans multiple AI domains, your preparation should be broad, structured, and scenario-driven. You should aim to recognize terms such as computer vision, natural language processing, conversational AI, generative AI, supervised learning, unsupervised learning, and responsible AI, then map those to Azure AI services and realistic business needs.

Exam Tip: Start your preparation by asking two questions for every topic: “What business problem does this solve?” and “Which Azure service or AI workload fits best?” That mindset mirrors how many AI-900 questions are framed.

Another important point is that this chapter is not just about passing the exam. A strong study strategy will also help you retain the foundations that appear in later Azure, AI, and data certifications. If you build the right habits now—careful reading, objective-based review, and error analysis from practice—you will improve both your exam score and your long-term understanding. Think of Chapter 1 as the operating manual for the entire course: it shows you what the exam values, how to avoid common traps, and how to study with purpose from your first day to your final review.

In the sections that follow, we will move from orientation to execution. First, you will see where AI-900 fits in the Microsoft certification landscape. Next, you will break down the official exam domains and decode how Microsoft writes objectives. Then you will cover registration and delivery logistics so nothing interferes with test day. After that, you will review scoring, question styles, and timing strategy. Finally, you will build a beginner-friendly study plan and learn how to use practice questions, notes, and revision methods effectively. That combination of exam knowledge and disciplined preparation is the foundation of AI-900 success.

Practice note: for each milestone in this chapter (understanding the exam format and objectives, planning registration and delivery options, and building a study strategy), document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 1.1: Introduction to Microsoft Azure AI Fundamentals and AI-900
Section 1.2: Official exam domains and how Microsoft frames objective language
Section 1.3: Registration process, exam delivery options, ID rules, and retake policy
Section 1.4: Scoring model, passing expectations, question types, and time management
Section 1.5: Beginner study plan aligned to Describe AI workloads and Azure AI domains
Section 1.6: How to use exam-style practice, note-taking, and final-week revision

Section 1.1: Introduction to Microsoft Azure AI Fundamentals and AI-900

AI-900 is Microsoft’s foundational certification for artificial intelligence concepts and Azure AI services. It is intended for students, business stakeholders, technical newcomers, and career changers who need a broad understanding of AI on Microsoft Azure. Because it sits at the fundamentals level, the exam does not assume deep prior experience in software development, data science, or cloud architecture. However, it does expect you to interpret practical scenarios and identify which AI approach best fits a stated business goal.

The exam aligns closely with real-world AI workloads. You should expect content around common solution scenarios such as image classification, object detection, text analysis, translation, speech recognition, language understanding, and generative AI assistants. Microsoft wants candidates to see AI as a set of workload categories rather than a vague buzzword. That is why AI-900 repeatedly returns to the question of matching a problem to the correct kind of AI capability and the correct Azure service family.

At a high level, AI-900 covers five major idea groups that recur throughout the exam: AI workloads and considerations, machine learning principles on Azure, computer vision workloads, natural language processing workloads, and generative AI concepts. Responsible AI is not isolated in a single corner of the exam; it is woven throughout. If a scenario introduces fairness, privacy, transparency, reliability, or accountability concerns, you should immediately recognize that Microsoft is testing responsible AI awareness along with technical understanding.

A frequent beginner mistake is treating the exam as a memorization exercise about product names. Product recognition helps, but the stronger approach is concept first, service second. For example, understand what computer vision does before memorizing which Azure service provides image analysis features. Understand what supervised learning means before trying to recall specific Azure machine learning tooling. This makes it easier to answer unfamiliar question wording.

  • Know the difference between traditional AI workloads and generative AI workloads.
  • Recognize that AI-900 emphasizes scenario matching more than implementation detail.
  • Expect responsible AI to appear as a cross-cutting theme.
  • Study both concepts and Azure service alignment together.

Exam Tip: If you are unsure between two answer choices, prefer the one that matches the stated workload most directly. AI-900 often rewards clean scenario-to-service mapping rather than broad or overly technical answers.

Keep the course outcomes in view from the beginning: your goal is not just to define AI terms, but to describe AI workloads and common AI solution scenarios tested on the exam. As you progress, keep translating every topic into a simple business statement such as “This service helps analyze text sentiment” or “This workload generates content from prompts.” That language mirrors exam logic.

Section 1.2: Official exam domains and how Microsoft frames objective language

Microsoft publishes a skills outline for AI-900, and your study plan should begin there. The outline tells you what the exam measures, but its wording matters. Microsoft often uses verbs such as describe, identify, recognize, select, and match. These verbs signal the depth of knowledge expected. On AI-900, “describe” usually means you should explain the idea in plain language and distinguish it from related concepts. “Identify” often means you must choose the correct service or workload when given a scenario. “Recognize” means spotting the right concept from clues. These are not accidental wording choices; they guide how questions are written.

The major domains typically include describing AI workloads and considerations, describing fundamental machine learning principles on Azure, describing computer vision workloads on Azure, describing natural language processing workloads on Azure, and describing generative AI workloads on Azure. Although the percentages may change over time, the exam consistently emphasizes broad coverage rather than deep specialization. That means weak spots in one area can hurt you even if you are strong in another.

When Microsoft says “describe AI workloads,” the exam may test whether you understand categories such as anomaly detection, forecasting, classification, regression, image analysis, optical character recognition, text analytics, and question answering. When it says “describe machine learning principles,” it often focuses on supervised versus unsupervised learning, training versus inference, and basic model evaluation concepts at a high level. For Azure-specific domains, objective language usually expects you to associate workloads with Azure AI services rather than design entire architectures.

Common traps appear when candidates read only the noun and ignore the verb. For example, if the objective says “describe,” do not over-invest in low-level implementation details. If the objective says “identify the appropriate service,” then you must spend time comparing similar Azure offerings and understanding what makes each one the best fit. The exam rewards precision in service selection, especially when distractors are plausible.

Exam Tip: Rewrite each official objective into your own plain-English checklist. If you can explain the topic simply, recognize it in a scenario, and eliminate close-but-wrong answer options, you are studying at the right depth for AI-900.

Another useful strategy is to sort objectives into three buckets: concept definitions, scenario recognition, and service matching. Concept definitions include terms like supervised learning or responsible AI. Scenario recognition involves identifying what type of workload a business needs. Service matching means mapping that workload to the appropriate Azure capability. This chapter’s study strategy will keep returning to those three buckets because they mirror how many AI-900 items are constructed.

Section 1.3: Registration process, exam delivery options, ID rules, and retake policy

Administrative mistakes can derail a well-prepared candidate, so exam logistics deserve attention early. Registration for AI-900 is typically handled through Microsoft’s certification platform, where you select the exam, choose a language if available, and schedule through an authorized delivery provider. Before booking, verify the current exam details, pricing, and any promotions, student discounts, or employer-sponsored vouchers that may apply. Fundamentals exams are sometimes included in training initiatives, so it is worth checking official Microsoft learning events and partner programs.

You will usually choose between testing at a physical test center or using an online proctored delivery option. Test centers provide a controlled environment and often reduce technical uncertainty. Online proctoring offers convenience, but it requires strict compliance with room, desk, camera, microphone, and identity verification rules. Candidates who choose online delivery should run system checks well in advance and again shortly before test day. A common trap is assuming a work laptop or secured corporate device will function properly; security software or policy restrictions can interfere with the exam application.

ID rules are critical. The name on your exam registration must match the name on your accepted identification. Even a minor mismatch can cause check-in problems. Review the accepted ID types for your region and confirm expiration dates ahead of time. If you have recently changed your name or your Microsoft certification profile contains an inconsistency, fix it before exam day rather than hoping the issue will be ignored.

Retake policy also matters for planning. If you do not pass, Microsoft generally imposes waiting periods before another attempt, and repeated retakes may have longer delays. Because policies can change, always verify the current rule directly from official sources. The key lesson is that a failed attempt is not just a score problem; it can delay your certification timeline.

  • Book your exam early enough to create a real study deadline.
  • Choose online delivery only if your environment is quiet and technically reliable.
  • Check your ID, name match, and regional requirements in advance.
  • Understand retake timing so you can avoid unnecessary delays.

Exam Tip: Schedule the exam for a date that forces momentum but still leaves room for one full practice-and-review cycle. Booking “someday” often leads to inconsistent study and weak retention.

Good exam readiness includes operational readiness. Treat registration, device checks, and identity verification as part of your study plan. On a fundamentals exam, you want all your focus available for reading and reasoning, not for solving preventable administrative problems.

Section 1.4: Scoring model, passing expectations, question types, and time management

AI-900 uses a scaled scoring model, and Microsoft sets the passing score at 700 on a scale of 1 to 1,000. A scaled score does not mean you must answer exactly 70 percent of questions correctly, because exam forms can vary and item weighting may differ. The practical takeaway is that you need consistent competence across the tested domains rather than perfection. Fundamentals candidates sometimes panic if they encounter a few unfamiliar items, but the exam is designed to measure overall readiness, not flawless recall.

Question types may include traditional multiple choice, multiple select, matching, drag-and-drop style arrangements, and short case-style scenario items. Microsoft may also use question sets where one scenario leads to more than one prompt. The exact presentation can vary, but the core skill remains the same: read carefully, identify the workload, and choose the answer that best matches both the concept and the Azure service context.

Time management on AI-900 is usually manageable for prepared candidates, but careless reading causes more failures than time pressure. Many wrong answers result from ignoring one qualifying word such as best, most appropriate, classify, generate, detect, or analyze. If a question asks for the most suitable Azure AI service, broad answers are often distractors. If a question focuses on machine learning type, service names may be irrelevant. Let the wording tell you which layer of knowledge is being tested.

One trap is overthinking. Because AI-900 is a fundamentals exam, the correct answer is often the direct one, not the architecturally elaborate one. Another trap is treating every item as equally difficult and spending too long on one confusing scenario. A better approach is to answer confidently where you can, mark uncertain items if the platform allows review, and return with fresh attention later.

Exam Tip: Eliminate answers that solve a different problem than the one asked. On AI-900, distractors are often valid Azure technologies, just not the right fit for the specific workload in the question.

For pacing, aim to move steadily through the exam with time left for review. During review, focus on flagged questions where you can make a reasoned improvement, not on changing answers randomly. Your strongest scoring advantage comes from careful first-pass reading and disciplined elimination of wrong choices.

Section 1.5: Beginner study plan aligned to Describe AI workloads and Azure AI domains

A beginner-friendly AI-900 plan should be objective-based, not resource-based. In other words, do not begin by collecting endless videos, articles, and practice tests. Begin with the exam domains, then assign study blocks to each outcome. A strong first pass is to divide your preparation into six tracks: AI workloads and common solution scenarios, machine learning fundamentals on Azure, computer vision workloads, natural language processing workloads, generative AI workloads, and exam strategy plus review. This matches the course outcomes and keeps your preparation tied to what the exam actually measures.

Week by week, start with concepts before services. First learn what a workload is and how to identify it from business language. Then study the Azure service family that addresses it. For machine learning, focus on high-level distinctions such as supervised learning versus unsupervised learning, training data, features, labels, and responsible AI principles. For computer vision, learn how to distinguish image classification, object detection, facial analysis concepts where applicable, and optical character recognition. For natural language processing, focus on sentiment analysis, key phrase extraction, entity recognition, translation, speech, and language understanding. For generative AI, understand prompts, copilots, foundation models, content generation, and responsible use.

A very effective beginner method is the three-column note system. In column one, write the workload or concept. In column two, write the business problem it solves. In column three, write the Azure AI service or category most closely associated with it. This turns isolated facts into exam-ready relationships. When you later see a scenario on the test, you will be retrieving patterns rather than disconnected definitions.
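
For illustration, a few rows of such notes (workload, business problem, closest Azure service) might look like the following; the service names reflect current Azure branding, so always verify them against the official skills outline:

  • Sentiment analysis | "Which customer reviews are negative?" | Azure AI Language
  • Optical character recognition | "Read totals from scanned receipts" | Azure AI Vision
  • Content generation from prompts | "Draft a first-pass support reply" | Azure OpenAI Service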

  • Study concepts first, then map to Azure services.
  • Use scenario language: “A company wants to…” as your practice mindset.
  • Review responsible AI in every domain, not only in machine learning notes.
  • Revisit weak areas in short cycles instead of cramming once.

Exam Tip: If you are new to AI, avoid trying to master advanced model-building theory. AI-900 rewards broad clarity across workloads and Azure AI domains more than deep mathematical detail.

Consistency beats intensity. A daily 30 to 60 minute plan over multiple weeks is often better than a single long weekend of cramming. Your aim is to become fluent in the exam’s language: describe the workload, identify the service, explain the principle, and spot responsible AI implications. That fluency is what turns beginner knowledge into passing performance.

Section 1.6: How to use exam-style practice, note-taking, and final-week revision

Practice questions are useful only when paired with review discipline. Many candidates make the mistake of measuring progress by the number of questions completed instead of the number of mistakes understood. For AI-900, your review process should focus on why an answer is correct, why the distractors are wrong, what keyword in the scenario signaled the right workload, and whether the question was testing concept recognition or Azure service matching. That is how practice becomes exam readiness.

Keep a structured error log. For each missed or guessed item, note the domain, the concept tested, the clue you missed, and the confusion pattern involved. Examples of confusion patterns include mixing up computer vision with OCR-specific tasks, confusing supervised and unsupervised learning, or choosing a service that is broadly related but not the best match. Over time, your error log reveals exactly where your exam risk sits. This is far more valuable than repeatedly taking new practice sets without reflection.
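
For example, a single illustrative log entry might read:

    Domain: NLP workloads on Azure
    Concept tested: key phrase extraction versus entity recognition
    Clue missed: the scenario asked for named people and places, not topics
    Confusion pattern: near-neighbor NLP features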

Your note-taking should support fast review. Use concise, comparison-based notes rather than long summaries. For instance, contrast similar concepts side by side: classification versus regression, language analysis versus speech, computer vision versus document text extraction, traditional AI outputs versus generative AI outputs. Comparison notes help because exam distractors often rely on near-neighbor confusion.

In the final week, shift from learning new material to reinforcing recognition. Review official objectives, revisit your error log, and complete timed practice in moderate blocks to strengthen pacing. Do not cram obscure details at the expense of the main domains. The highest-value revision is usually around service selection, workload identification, and responsible AI principles.

Exam Tip: In the final 48 hours, prioritize confidence and clarity over volume. Review your summary sheets, key comparisons, and common traps rather than attempting to consume large amounts of brand-new content.

On the day before the exam, confirm your logistics, reduce distractions, and avoid exhausting study sessions. If you are testing online, check your room and device setup. If you are going to a test center, verify travel time and required identification. Final performance on AI-900 often depends less on last-minute studying than on calm execution, careful reading, and trust in the patterns you have practiced. If you can identify the workload, link it to the right Azure AI domain, and avoid obvious distractors, you will be approaching the exam exactly as Microsoft intends.

Chapter milestones
  • Understand the AI-900 exam format and objectives
  • Plan registration, scheduling, and test delivery options
  • Build a beginner-friendly study strategy
  • Use practice questions and review methods effectively
Chapter quiz

1. You are beginning preparation for the Microsoft AI-900 exam. Which study approach best aligns with the exam's entry-level objectives?

Correct answer: Focus on recognizing AI workloads, matching them to Azure AI services, and understanding responsible AI concepts in business scenarios
AI-900 measures foundational understanding of AI concepts, common workloads, Azure AI services, and responsible AI in practical scenarios, so an objective-driven, scenario-based study approach fits best. A code-heavy engineering approach misses the mark because AI-900 is not a deep engineering or coding exam, and pure definition memorization falls short because the exam expects candidates to connect concepts to use cases.

2. A candidate is reviewing the published AI-900 skills outline and sees an objective about identifying appropriate AI workloads for business scenarios. How should the candidate interpret this objective most effectively?

Correct answer: As a signal to practice mapping business problems to AI categories and the most suitable Azure service
Microsoft objectives for AI-900 often emphasize recognition and selection: understanding what problem a workload solves and which Azure service fits, which is exactly the reasoning style this interpretation reflects. Focusing on pricing detail misses the core purpose of the objective, and preparing for production implementation overshoots it, because AI-900 does not expect advanced engineering skills.

3. A learner wants to avoid preventable problems on exam day. Which action is most appropriate when planning registration, scheduling, and test delivery for AI-900?

Correct answer: Choose a delivery option, confirm technical or test-center requirements, and schedule a time that supports focused performance
Chapter 1 emphasizes that exam success includes planning logistics such as registration, scheduling, delivery format, and readiness for test day, and confirming requirements early reduces avoidable disruptions. Leaving requirement checks until the last minute creates preventable issues, and relying on knowledge alone ignores how administrative and delivery problems can interfere with performance.

4. A beginner has completed one pass through the AI-900 content and wants to use practice questions effectively. Which method is most likely to improve exam readiness?

Correct answer: Review each missed question, identify the concept behind the error, and return to the related exam objective for targeted study
The most effective use of practice questions is error analysis tied to the exam objectives: it supports long-term understanding and helps candidates recognize patterns in AI workloads, services, and responsible AI concepts. Tracking the score alone does not reveal knowledge gaps, and memorizing answer positions does not build transferable understanding, so it fails when question wording changes.

5. A company wants its employees to prepare for AI-900 with limited study time. Which plan best reflects a beginner-friendly study strategy for this exam?

Correct answer: Study broadly across AI topics, organize review by official objectives, and practice identifying which business problem each AI workload solves
AI-900 covers multiple foundational AI domains, so preparation should be broad, structured, and aligned to the official objectives; this plan mirrors the exam's scenario-based nature and encourages mapping business needs to workloads and services. Drilling advanced mathematics misses the exam's focus, and specializing in a single AI domain is too narrow for an exam that tests breadth.

Chapter 2: Describe AI Workloads

This chapter targets one of the most visible AI-900 exam domains: recognizing AI workload categories and matching them to realistic business scenarios. On the exam, Microsoft rarely asks you to build a model or write code. Instead, it tests whether you can identify what kind of AI problem an organization is trying to solve, distinguish one workload from another, and avoid confusing AI capabilities with standard reporting or rule-based automation. If you can read a short scenario and quickly classify it as prediction, anomaly detection, computer vision, natural language processing, or generative AI, you will gain a strong advantage.

The AI-900 objectives expect you to understand AI at a foundational level. That means knowing the purpose of common workloads, recognizing the language used in questions, and understanding responsible AI concepts that guide how solutions should be designed and used. In this chapter, you will learn to recognize common AI workload categories, match business scenarios to AI solution types, understand responsible AI at a foundational level, and practice exam-style thinking for workload questions.

A key challenge for many candidates is that the exam often presents similar-sounding answer choices. For example, a question may describe detecting unusual credit card activity and offer prediction, anomaly detection, and classification as options. Another may describe extracting text from scanned forms and offer natural language processing, computer vision, and generative AI. You must focus on the primary task being performed, not just broad AI buzzwords.

Exam Tip: Ask yourself, “What is the system actually doing with the data?” If it is identifying unusual patterns, think anomaly detection. If it is understanding images, think computer vision. If it is processing human language, think NLP. If it is creating new content, think generative AI.

This chapter also helps with a common exam trap: assuming every intelligent-looking system is AI. Dashboards, fixed business rules, workflow triggers, and standard database queries may be useful, but they are not necessarily AI workloads. Microsoft wants candidates to understand where AI adds adaptive, inferential, perceptive, or generative capability beyond traditional automation and analytics. Keep that distinction in mind as you move through the sections.

  • Know the major workload families tested on AI-900.
  • Identify signal words in business scenarios.
  • Separate AI solutions from simple automation.
  • Apply responsible AI ideas in non-technical contexts.
  • Use elimination strategies when answer choices overlap.

By the end of this chapter, you should be able to read an exam scenario and confidently decide which AI workload is the best match, while also recognizing why the other choices are distractors. That is exactly the type of practical exam readiness this objective demands.

Practice note: for each milestone in this chapter (recognizing common AI workload categories, matching business scenarios to solution types, understanding responsible AI at a foundational level, and practicing exam-style scenario questions), document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 2.1: Describe AI workloads as defined in the AI-900 objectives
Section 2.2: Common AI workloads: prediction, anomaly detection, computer vision, NLP, and generative AI
Section 2.3: AI workloads versus traditional automation and analytics
Section 2.4: Responsible AI principles for non-technical professionals
Section 2.5: Identifying the best AI workload from business use cases
Section 2.6: Exam-style practice and distractor analysis for Describe AI workloads

Section 2.1: Describe AI workloads as defined in the AI-900 objectives

In AI-900, an AI workload is a category of problem that artificial intelligence techniques are designed to solve. The exam does not expect deep mathematical knowledge, but it does expect clear conceptual recognition. Microsoft commonly frames workloads around what the system can perceive, predict, understand, or generate. That is why the objective focuses on categories rather than implementation details.

At a high level, AI workloads include machine learning and prediction tasks, anomaly detection, computer vision, natural language processing, conversational AI, and generative AI. Although later chapters may go deeper into Azure services, this chapter is about workload recognition first. When you see an exam scenario, begin by identifying the type of input and the desired outcome. Is the input numerical or historical data, and the goal to forecast or classify? That suggests a predictive machine learning workload. Is the input an image or video stream? That points toward computer vision. Is the system expected to interpret text, translate speech, or determine sentiment? That is natural language processing.

Another important AI-900 skill is understanding that workload categories are broad. A single solution can involve more than one workload. For example, a retail application might use computer vision to detect products on shelves, anomaly detection to identify unusual purchasing behavior, and generative AI to create draft marketing copy. However, on the exam, the correct answer is typically the best primary match for the scenario described.

Exam Tip: If a question uses phrases like “recognize objects in images,” “extract text from receipts,” or “analyze video frames,” do not overthink it. Those are computer vision clues. If it mentions “understand customer messages,” “translate,” “transcribe speech,” or “detect key phrases,” that is NLP. If it says “generate,” “summarize,” “draft,” or “create responses,” think generative AI.

A common trap is confusing the umbrella term AI with a specific workload. The exam may ask what workload a business should use, not whether the solution uses AI in general. To score well, use precise language in your own thinking. AI is the broad domain; workloads are the practical categories inside it. Candidates who stay at the broadest level often miss scenario-based questions because multiple answers may technically involve AI, but only one is the correct workload category.

Section 2.2: Common AI workloads: prediction, anomaly detection, computer vision, NLP, and generative AI

This section covers the workload types that appear repeatedly in AI-900 questions. Prediction refers to using historical data to estimate future outcomes or assign labels. Examples include forecasting sales, predicting customer churn, estimating delivery times, or classifying loan applications into approval categories. The exam may describe this as machine learning without naming the algorithm. Your task is to recognize that the system is learning patterns from data to make decisions or forecasts.

Anomaly detection is a specialized workload focused on finding unusual events, rare patterns, or deviations from normal behavior. Fraud detection, equipment failure alerts, unusual login activity, and abnormal sensor readings are classic examples. This is often confused with general prediction, but anomaly detection specifically emphasizes “unusual,” “outlier,” or “unexpected” behavior.

Exam Tip: When those words appear, anomaly detection should move to the top of your shortlist.

Computer vision involves interpreting visual input such as images or video. Common tasks include image classification, object detection, face analysis, optical character recognition, and scene understanding. On AI-900, candidates often miss questions when scanned documents are involved. If the system is extracting text from an image, that is still primarily a computer vision workload, even though the extracted result may later be processed as language.

Natural language processing, or NLP, focuses on human language in text or speech. Typical tasks include sentiment analysis, key phrase extraction, entity recognition, translation, language detection, speech-to-text, text-to-speech, and intent recognition in conversational systems. If a scenario centers on understanding or generating meaning from human language input, NLP is the likely answer.

Generative AI is increasingly important on the exam. This workload uses foundation models to create new content such as text, images, code, or summaries based on prompts. Business examples include copilots, document drafting, summarization, knowledge-grounded question answering, and content generation. The exam may test whether you understand that generative AI creates original output rather than simply classifying or extracting existing information.

Common distractors arise because these workloads can work together. For instance, a voice assistant may use speech recognition, language understanding, and generative response creation. But if the question focuses on converting spoken words into text, the best answer is speech as an NLP capability, not generative AI. Always answer based on the core task explicitly described.

Section 2.3: AI workloads versus traditional automation and analytics

A frequent AI-900 objective is distinguishing genuine AI workloads from systems that are automated but not intelligent in the exam sense. Traditional automation follows fixed rules. For example, “if inventory falls below 20, send an alert” is useful automation, but it is not AI. Likewise, a report that summarizes last month’s sales from a database is analytics, not AI, unless machine learning is being used to infer, predict, or detect hidden patterns.

The exam may present a business scenario that sounds sophisticated but is actually rules-based. Suppose a company routes support tickets to a queue based on manually defined keywords. That is basic automation. If instead the system learns from past tickets and predicts the best queue assignment from patterns in language, that is AI. The difference is adaptability and inference rather than just scripted logic.

Analytics generally answers questions such as what happened, how much, or how often. AI often extends beyond this by answering what is likely to happen, what is unusual, what an image contains, what text means, or what new content should be generated. A dashboard showing average machine temperature is analytics. A model flagging likely machine failure before it happens is AI. A workflow that emails every customer on a schedule is automation. A system that personalizes the message based on predicted customer intent or generated text is AI-enabled.
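
To make the contrast concrete, here is a minimal Python sketch; scikit-learn is assumed purely for illustration, since AI-900 itself never asks you to write code. The first function is fixed-rule automation, while the model beneath it learns what “normal” looks like from historical data and flags deviations:

    # Fixed-rule automation: useful, but not AI in the exam's sense.
    def inventory_alert(stock_level, threshold=20):
        return stock_level < threshold

    # A learned anomaly detector: it infers "normal" from historical readings
    # and flags outliers, rather than applying a hand-written threshold.
    from sklearn.ensemble import IsolationForest

    history = [[100.2], [99.8], [101.3], [100.6], [98.9], [100.4]]  # past sensor readings
    model = IsolationForest(random_state=0).fit(history)
    print(model.predict([[100.1], [140.0]]))  # 1 = normal, -1 = anomaly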

Exam Tip: Watch for wording that indicates learning from data. Phrases like “trained on historical data,” “detect patterns,” “classify,” “forecast,” “identify anomalies,” or “understand natural language” point to AI. Phrases like “based on predefined thresholds,” “fixed rules,” “scheduled report,” or “manual logic” usually indicate non-AI approaches.

A common trap is assuming any chatbot is AI. Some bots simply follow decision trees with prewritten responses. Those are conversational systems, but not necessarily intelligent in a machine learning or generative sense. On the exam, if a bot identifies intent from language or creates context-aware replies, AI is involved. If it only follows a hard-coded script, the best answer may be automation rather than an advanced AI workload.

Section 2.4: Responsible AI principles for non-technical professionals

AI-900 includes foundational responsible AI concepts because Microsoft expects even non-technical professionals to recognize that AI solutions must be designed and used ethically. You are not expected to implement governance frameworks, but you should understand the core principles and how they affect business decisions. Common principles include fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Fairness means AI systems should avoid unjust bias and should not systematically disadvantage individuals or groups. On the exam, this might appear in a hiring, lending, insurance, or admissions scenario. Reliability and safety mean systems should perform consistently and should be tested to reduce harmful or unstable behavior. Privacy and security focus on protecting personal data and ensuring proper access controls. Inclusiveness means designing for people with different backgrounds, abilities, and needs. Transparency means users should understand the system’s purpose, limitations, and, at a high level, how outputs are produced. Accountability means humans remain responsible for oversight and decisions.

For non-technical roles, the exam often tests whether you can identify a principle from a short example. If an organization needs to explain why an AI output was produced, think transparency. If a system must avoid disadvantaging one demographic group, think fairness. If sensitive customer records are involved, privacy and security are central. If a human should review AI-generated content before publication, accountability is relevant.

Exam Tip: Responsible AI questions often include two plausible principles. Focus on the primary risk being described. A biased hiring model concerns fairness first, even though transparency and accountability also matter. A system exposing confidential medical records is primarily a privacy and security issue.

Another exam trap is treating responsible AI as optional after deployment. Microsoft’s framework views it as part of the entire lifecycle: design, data selection, testing, deployment, monitoring, and human oversight. If an answer choice suggests “deploy first and review ethics later,” it is almost certainly wrong.

Section 2.5: Identifying the best AI workload from business use cases

The most valuable exam skill in this chapter is translating business language into workload categories. Business stakeholders rarely say, “We need anomaly detection.” They say, “We want to catch suspicious activity in near real time.” Your job is to map that need to the correct AI workload. Start by identifying the input type, the desired output, and whether the system must learn, perceive, understand, or generate.

If a business wants to forecast revenue, estimate demand, score risk, or predict customer churn, choose prediction or machine learning. If it wants to spot fraud, network intrusions, or unusual equipment behavior, choose anomaly detection. If it needs to inspect photos, read scanned forms, count people in a store, or identify damaged products from images, choose computer vision. If it wants to analyze reviews, detect sentiment, translate text, summarize conversations, or transcribe audio, choose NLP. If it wants a copilot, draft content, generate answers, or create custom text from prompts, choose generative AI.

One practical strategy is to underline action verbs in your mind. “Predict,” “forecast,” “classify,” “detect unusual,” “recognize,” “extract from image,” “translate,” “transcribe,” “summarize,” and “generate” are all clues. Then check whether the answer choices align with the clue words.

Exam Tip: When two options seem correct, prefer the more specific workload. For example, anomaly detection is usually a better answer than generic machine learning if the problem is explicitly about unusual events.
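
The sketch below turns that habit into a small Python study aid. It is hypothetical (the phrase list and the suggest_workload helper are invented for illustration), but the phrase-to-workload pairs come directly from the clue words above:

    # Hypothetical study aid: map scenario signal words to likely workloads.
    SIGNAL_WORDS = {
        "forecast": "prediction (machine learning)",
        "classify": "prediction (machine learning)",
        "detect unusual": "anomaly detection",
        "extract from image": "computer vision",
        "translate": "natural language processing",
        "transcribe": "natural language processing (speech)",
        "summarize": "generative AI",
        "generate": "generative AI",
    }

    def suggest_workload(scenario):
        """Return every workload whose signal phrase appears in the scenario text."""
        text = scenario.lower()
        return [workload for phrase, workload in SIGNAL_WORDS.items() if phrase in text]

    print(suggest_workload("Forecast demand and detect unusual logins"))
    # ['prediction (machine learning)', 'anomaly detection']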

Also pay attention to whether the scenario asks for understanding existing data or creating new output. Extracting information from a document is not generative AI; drafting a new response from source material is. Similarly, displaying key performance indicators is not prediction; forecasting future values is. The exam rewards precision, not broad familiarity.

A final trap is selecting a service-driven answer when the objective is workload identification. If the question asks what kind of AI solution fits the scenario, answer with the workload category, not the Azure product name unless the item specifically asks for a service.

Section 2.6: Exam-style practice and distractor analysis for Describe AI workloads

Success on AI-900 depends not only on knowing the content, but also on analyzing how Microsoft writes distractors. In workload questions, distractors are usually attractive because they are adjacent concepts. For example, a scenario about finding suspicious credit card transactions may tempt you toward prediction, because fraud models do predict likelihood. But if the wording emphasizes unusual activity or deviations from normal patterns, anomaly detection is the stronger answer. Likewise, extracting text from an invoice may tempt you toward NLP because text is involved, but the first task is reading text from an image, which is computer vision.

Use a three-step review method when practicing. First, identify the primary task in one short phrase, such as “find unusual behavior,” “understand text,” “analyze images,” or “generate new content.” Second, eliminate answers that solve a different type of problem. Third, justify why the correct answer is better than the runner-up. This last step is where real learning happens and is especially useful during mock-test review.

When reviewing missed questions, avoid saying only, “I chose the wrong one.” Instead ask: Did I miss the input type? Did I focus on a secondary feature rather than the main task? Did I confuse AI with automation? Did I overlook responsible AI language? This process strengthens pattern recognition across many scenarios.

Exam Tip: Microsoft often writes scenario questions with unnecessary business detail. Ignore the industry story unless it affects the task. A hospital, bank, retailer, and manufacturer can all need anomaly detection. The industry is often decoration; the workload is the scoring target.

Finally, do not memorize isolated keywords without understanding. Good distractor analysis depends on meaning. “Chatbot” does not automatically mean generative AI. “Text” does not automatically mean NLP. “Prediction” does not automatically mean anomaly detection. Read for purpose, map purpose to workload, and then confirm that the answer choice precisely matches that purpose. That disciplined approach is what turns content knowledge into exam performance.

Chapter milestones
  • Recognize common AI workload categories
  • Match business scenarios to AI solution types
  • Understand responsible AI at a foundational level
  • Practice exam-style scenario questions for AI workloads
Chapter quiz

1. A retail bank wants to identify credit card transactions that differ significantly from a customer's typical spending behavior so that potentially fraudulent activity can be reviewed. Which AI workload should the bank use?

Correct answer: Anomaly detection
Anomaly detection is correct because the goal is to identify unusual patterns that deviate from expected behavior, which is a classic AI-900 workload category. Computer vision is incorrect because the scenario does not involve analyzing images or video. Generative AI is incorrect because the system is not creating new content such as text or images; it is detecting outliers in transaction data.

2. A company receives thousands of scanned paper invoices each week and wants to extract printed text, invoice numbers, and totals automatically. Which AI workload is the best match for this requirement?

Correct answer: Computer vision
Computer vision is correct because the primary task is analyzing scanned document images and extracting information from them, which aligns with image-based document processing. Natural language processing is a distractor because although text is involved, the system must first interpret visual content from scanned forms. Prediction is incorrect because the objective is not to forecast a future value or outcome.

3. A customer support team wants a solution that can draft original email responses to common customer questions based on a short prompt entered by an agent. Which AI solution type should they choose?

Correct answer: Generative AI
Generative AI is correct because the system is being asked to create new text content from a prompt. Anomaly detection is incorrect because there is no requirement to find unusual patterns or rare events. Classification is incorrect because the task is not limited to assigning content to predefined categories; it is generating original responses.

4. A shipping company uses a fixed rule that sends an alert whenever a package has not moved for 48 hours. A manager says this alert is an AI solution. How should you evaluate this statement for the AI-900 exam?

Correct answer: It is not necessarily an AI workload because it is based on a fixed rule rather than adaptive inference
This is correct because AI-900 emphasizes distinguishing AI from standard automation: a fixed threshold-based rule can be useful, but it is not AI unless the system is learning patterns or making inferences from data. Calling the alert AI just because it is automated is wrong, since automation alone does not make a solution AI, and nothing in the scenario indicates analysis of images or video.

5. A hiring team is evaluating an AI system that ranks job applicants. They discover the model consistently gives lower scores to candidates from certain groups, even when qualifications are similar. Which responsible AI principle is most directly being violated?

Correct answer: Fairness
Fairness is correct because the system appears to produce unequal outcomes for similar candidates based on group membership, which is a core responsible AI concern in AI-900. Availability is incorrect because the issue is not whether the system is online or accessible. Scalability is incorrect because the problem is not related to handling more users or data; it is about biased decision-making.

Chapter 3: Fundamental Principles of ML on Azure

This chapter maps directly to one of the most tested AI-900 exam domains: understanding core machine learning principles on Azure without needing to write code. Microsoft expects you to recognize what machine learning is, when it should be used, how to distinguish supervised and unsupervised approaches, and which Azure services support common machine learning scenarios. For exam purposes, the emphasis is not on mathematics or implementation details. Instead, the exam tests whether you can identify the right machine learning approach from a business scenario and connect that approach to Azure capabilities.

A strong exam candidate can explain machine learning in plain language: systems learn patterns from data and use those patterns to make predictions, classifications, or groupings. On AI-900, machine learning is usually presented as a practical decision tool. You might see a scenario about predicting house prices, determining whether a loan application is risky, grouping customers with similar behaviors, or selecting a service for building and training models on Azure. Your job is to read carefully and identify the workload type before getting distracted by technical-sounding options.

The chapter begins with machine learning fundamentals without coding, because that is exactly the level AI-900 targets. You should be comfortable with data, features, labels, training, and evaluation as concepts, not as programming tasks. The exam often checks whether you know that supervised learning uses labeled data and unsupervised learning finds patterns in unlabeled data. It also expects you to recognize the purpose of Azure Machine Learning and to understand that responsible AI principles apply to machine learning solutions just as much as to generative AI and language workloads.

Another exam objective covered here is the ability to differentiate machine learning problem types. Regression, classification, and clustering appear repeatedly in AI-900 study materials because they are foundational categories. The exam may describe a business outcome in everyday words rather than use technical labels. For example, predicting a number points to regression, predicting a category points to classification, and grouping similar items without predefined labels points to clustering. The trap is that the wording can be subtle, so your best strategy is to ask: is the answer expected to be a numeric value, a category, or a similarity-based grouping?

Azure service recognition also matters. AI-900 expects you to know that Azure Machine Learning is the primary Azure platform service for building, training, managing, and deploying machine learning models. You should also understand the role of automated machine learning, which helps test algorithms and optimize model creation without deep coding knowledge. The exam does not require you to perform data science workflows, but it does expect you to know what Azure tools are designed to do.

Exam Tip: When a question asks for a service to train, manage, and deploy machine learning models, Azure Machine Learning is usually the target answer. Do not confuse it with Azure AI services that provide prebuilt capabilities for vision, language, or speech tasks.

Responsible AI is increasingly important in AI-900. Microsoft wants candidates to understand fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability at a foundational level. In machine learning scenarios, those principles often appear through concerns about biased predictions, poor model behavior, lack of explainability, or misuse of sensitive data. The exam is less about governance frameworks and more about recognizing why these issues matter and how Azure-based ML solutions should be designed responsibly.

This chapter closes with practical exam coaching. You will learn how to identify keywords, avoid common traps, and choose between plausible answers based on workload type and service purpose. Treat each section as both a conceptual lesson and an exam decoding guide. That is the right mindset for AI-900 success.

Practice note for the milestone “Understand machine learning fundamentals without coding”: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 3.1: Fundamental principles of machine learning on Azure
Section 3.2: Regression, classification, and clustering in plain language
Section 3.3: Training, validation, testing, and model evaluation basics
Section 3.4: Core Azure ML concepts, data, models, and automated machine learning
Section 3.5: Responsible machine learning on Azure: fairness, reliability, privacy, and transparency
Section 3.6: Exam-style practice for machine learning principles and Azure service selection

Section 3.1: Fundamental principles of machine learning on Azure

Machine learning is a branch of AI in which systems learn patterns from data rather than being explicitly programmed with every rule. For the AI-900 exam, you should think of machine learning as a way to create predictions or discover patterns based on examples. A model is trained using data, and then that model is used to make decisions or forecasts about new data. You do not need to know programming syntax, but you do need to understand the workflow at a high level.

At the center of machine learning are data and patterns. Data contains examples, and those examples may include features and, in some cases, labels. Features are the input values the model uses, such as age, income, location, or temperature. Labels are the known answers in training data, such as approved or denied, fraudulent or legitimate, or a numeric value like sales amount. If labels are present, the learning process is typically supervised. If labels are absent and the system is trying to find structure on its own, the approach is unsupervised.
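
You will never be asked to write code on the AI-900 exam, but a concrete picture can help. Here is a minimal, illustrative Python sketch (plain Python, invented values, no Azure required) showing labeled versus unlabeled data:

    # Supervised learning: every training example carries a known label.
    labeled_examples = [
        {"age": 42, "income": 55000, "label": "approved"},
        {"age": 23, "income": 18000, "label": "denied"},
    ]

    # Unsupervised learning: the same kind of features, but no label.
    # The algorithm must discover structure, such as groupings, on its own.
    unlabeled_examples = [
        {"age": 42, "income": 55000},
        {"age": 23, "income": 18000},
    ]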

On Azure, the main service associated with building and managing machine learning solutions is Azure Machine Learning. This service supports the full machine learning lifecycle, including preparing data, training models, tracking experiments, evaluating results, and deploying models. AI-900 usually focuses on service recognition rather than operations. If a question describes a need to build custom predictive models from data, Azure Machine Learning is the correct conceptual match.

Exam Tip: If the scenario is about creating your own model from your own dataset, think Azure Machine Learning. If the scenario is about using a ready-made API for vision, speech, or language, think Azure AI services instead.

A common exam trap is confusing machine learning with traditional rule-based programming. If a scenario says the organization wants the system to improve from examples or identify patterns from historical data, that points to machine learning. Another trap is assuming all AI on Azure uses the same service. Microsoft separates custom model-building platforms from prebuilt cognitive capabilities, so read the wording carefully.

  • Machine learning learns from data.
  • Models use features to make predictions or identify structure.
  • Supervised learning uses labeled data.
  • Unsupervised learning uses unlabeled data.
  • Azure Machine Learning is the key Azure platform for ML lifecycle tasks.

For exam readiness, your goal is to identify the business need first, then connect it to the machine learning concept second, and only then choose the Azure service. That sequence helps reduce confusion when answer choices are intentionally similar.

Section 3.2: Regression, classification, and clustering in plain language

The AI-900 exam frequently tests whether you can distinguish the major machine learning problem types in plain business language. The three most important are regression, classification, and clustering. Microsoft often avoids heavy technical wording and instead describes a real-world objective. Your task is to translate that objective into the correct machine learning category.

Regression is used when the output is a number. If a company wants to predict sales next month, estimate delivery time, forecast energy usage, or determine a product price, the answer is regression. The model is not choosing from categories; it is producing a continuous numeric value. A classic exam trap is offering classification as an option simply because the scenario sounds predictive. Remember: if the prediction is a number, it is regression.

Classification is used when the output is a category or label. Examples include spam versus not spam, approved versus denied, churn versus no churn, or defect versus no defect. Binary classification has two categories, while multiclass classification has more than two. On the exam, classification often appears in business scenarios related to decision-making, eligibility, sentiment, or risk levels.

Clustering is different because there are no predefined labels. The goal is to group similar data points together based on patterns in the data. Common examples include customer segmentation, grouping documents by similarity, or identifying naturally occurring patterns in purchasing behavior. Clustering is an unsupervised learning method. The exam may try to mislead you with wording like “classify customers into groups,” but if the groups are discovered from unlabeled data, clustering is the correct answer.

Exam Tip: Ask one simple question: is the expected answer a number, a known category, or a discovered grouping? Number means regression, known category means classification, discovered grouping means clustering.

Another trap is mixing classification with anomaly detection or grouping concepts. If a model is trained with known labels for fraud and non-fraud, that is classification. If the system identifies unusual behavior without predefined labels, the scenario may be leaning toward unsupervised pattern discovery. On AI-900, focus on the clearest interpretation supported by the wording.

  • Predict house price: regression.
  • Decide if an email is spam: classification.
  • Group shoppers by behavior: clustering.
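
If it helps to see the three categories side by side, here is a short, optional Python sketch using scikit-learn (not required for AI-900; the tiny datasets are invented for illustration):

    from sklearn.linear_model import LinearRegression, LogisticRegression
    from sklearn.cluster import KMeans

    # Regression: the expected answer is a number (here, a house price).
    X_homes = [[1200], [1500], [2000]]            # feature: square footage
    y_price = [200000, 260000, 330000]            # label: a numeric value
    print(LinearRegression().fit(X_homes, y_price).predict([[1800]]))

    # Classification: the expected answer is a known category (spam = 1).
    X_mail = [[0], [1], [8], [10]]                # feature: suspicious links
    y_spam = [0, 0, 1, 1]                         # label: a known category
    print(LogisticRegression().fit(X_mail, y_spam).predict([[7]]))

    # Clustering: no labels at all; the algorithm discovers the groups.
    X_shoppers = [[1, 100], [2, 120], [30, 900], [28, 880]]  # visits, spend
    print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_shoppers))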

The exam tests your ability to understand these categories quickly. The strongest strategy is to ignore extra story details and identify the output type first. Once you know the output, the correct machine learning approach usually becomes obvious.

Section 3.3: Training, validation, testing, and model evaluation basics

AI-900 expects you to understand the basic lifecycle of model development, even if you never build a model yourself. Training is the process of feeding data to an algorithm so it can learn patterns. In supervised learning, training data includes labels. The model learns relationships between features and the known outcomes. Once trained, the model can be used to make predictions on new data.

Validation and testing are used to judge how well a model performs. While exact workflows can vary, the exam-level idea is simple: you should not evaluate a model only on the same data used to train it. Doing so can make the model seem better than it really is. Instead, separate data is used to validate choices and test final performance. This helps determine whether the model generalizes well to new examples.

Evaluation metrics help compare models. AI-900 does not require deep statistical knowledge, but you should know that metrics exist to assess quality. For regression, the concern is how close predictions are to actual numeric values. For classification, the concern is how often the model predicts categories correctly and how errors are distributed. You may also encounter the idea of overfitting, where a model performs well on training data but poorly on new data because it learned the training examples too specifically.
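
The train-versus-test idea is easy to see in a short, optional scikit-learn sketch (not exam-required). The gap between the two accuracy numbers is the overfitting signal described above:

    from sklearn.datasets import load_iris
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    # An unconstrained decision tree can effectively memorize training data.
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    # Training accuracy is typically perfect; test accuracy is the honest
    # measure of how well the model generalizes to unseen examples.
    print("Train:", accuracy_score(y_train, model.predict(X_train)))
    print("Test: ", accuracy_score(y_test, model.predict(X_test)))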

Exam Tip: If a question describes a model doing very well on training data but badly on unseen data, think overfitting. The exam likes this concept because it tests whether you understand why test data matters.

A common trap is treating validation, testing, and training as interchangeable. They are related, but not identical. Training teaches the model. Validation helps assess and refine. Testing checks final performance on unseen data. At AI-900 level, a broad distinction is enough. Another trap is assuming a highly accurate model is automatically acceptable. Responsible AI concerns still apply if the model is biased, unreliable, or opaque.

When reading exam questions, look for language such as historical data, unseen data, evaluate model performance, improve generalization, or compare algorithms. Those keywords signal that the question is about the machine learning process rather than the business scenario alone. Knowing these terms helps you avoid selecting an answer that sounds practical but ignores basic model development principles.

Section 3.4: Core Azure ML concepts, data, models, and automated machine learning

Azure Machine Learning is Microsoft’s cloud service for creating, training, evaluating, deploying, and managing machine learning models. For AI-900, you should recognize it as the central Azure offering for custom machine learning solutions. The exam does not expect you to configure workspaces or write scripts, but it does expect you to understand the service at a functional level.

Data is the foundation of every ML solution. Organizations bring data into the machine learning process so models can learn patterns. After training, the result is a model, which is the learned representation used to make predictions. On Azure, model development is often part of a larger lifecycle that includes experiment tracking, deployment, monitoring, and retraining. At exam level, simply understanding that data becomes a trained model, and that Azure Machine Learning supports this lifecycle, is enough.

Automated machine learning, often called automated ML or AutoML, is especially important for AI-900. It helps users by automatically trying different algorithms and settings to identify a good model for a given dataset. This is valuable when you want to accelerate model development or reduce the need for deep algorithm expertise. The exam may describe a requirement to streamline model selection or optimize model creation with minimal manual effort. That wording strongly suggests automated machine learning.
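
Conceptually, automated ML resembles the loop below, with far more sophistication around features, settings, and scale. This optional Python sketch uses scikit-learn only to illustrate the idea of trying several algorithms and keeping the best validated one; it is not how the Azure service is invoked:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    candidates = {
        "logistic regression": LogisticRegression(max_iter=1000),
        "decision tree": DecisionTreeClassifier(random_state=0),
        "k-nearest neighbors": KNeighborsClassifier(),
    }

    # Score each algorithm on validation folds and keep the best performer.
    scores = {name: cross_val_score(m, X, y, cv=5).mean()
              for name, m in candidates.items()}
    print(scores)
    print("Selected:", max(scores, key=scores.get))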

Exam Tip: If the requirement is to compare multiple model approaches automatically and select the best-performing option, automated machine learning is the likely answer.

A common exam trap is confusing Azure Machine Learning with Azure AI services. Azure AI services provide prebuilt intelligence for tasks such as vision, speech, and language. Azure Machine Learning is used when you want to build or train custom models using your own data. Another trap is assuming automated ML means no human involvement at all. It automates many technical steps, but it does not remove the need for data quality checks, evaluation, and responsible oversight.

  • Azure Machine Learning supports custom ML workflows.
  • Data is used to train models.
  • Models are deployed to generate predictions.
  • Automated ML helps automate algorithm and configuration selection.

On the exam, service selection questions are often easier if you focus on whether the solution is custom or prebuilt. If custom modeling is required, Azure Machine Learning is usually the correct destination.

Section 3.5: Responsible machine learning on Azure: fairness, reliability, privacy, and transparency

Responsible AI is a tested topic in AI-900, and machine learning is one of the clearest places where these principles matter. Microsoft emphasizes that AI systems should be fair, reliable and safe, private and secure, inclusive, transparent, and accountable. For exam preparation, focus on understanding what these principles mean in practical terms rather than trying to memorize a policy document.

Fairness means a model should not produce systematically biased outcomes for different groups. In a hiring or lending scenario, for example, an unfair model might disadvantage certain populations because of biased training data or problematic feature selection. Reliability and safety mean the model should perform consistently and not create harmful outcomes when used in real-world conditions. Privacy and security mean sensitive data must be protected and handled appropriately. Transparency means users and stakeholders should be able to understand the model’s purpose, limitations, and, where possible, why it produced a result.
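
A fairness concern like this can be surfaced with a very simple check. The sketch below uses invented data and plain pandas; it illustrates the idea and is not an official Azure tool:

    import pandas as pd

    # Each row records a person's group and whether the model's
    # prediction for that person was correct (1) or incorrect (0).
    results = pd.DataFrame({
        "group":   ["A", "A", "A", "B", "B", "B"],
        "correct": [1,   1,   1,   1,   0,   0],
    })

    # A large gap between groups is a fairness red flag to investigate.
    print(results.groupby("group")["correct"].mean())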

For AI-900, these ideas are usually assessed through scenario recognition. You may see a question about a model making uneven decisions across groups, exposing personal data, or being difficult to explain. You are expected to identify which responsible AI principle is most relevant. The exam often rewards practical understanding over exact wording.

Exam Tip: If the issue is unequal treatment across demographic groups, think fairness. If the issue is protecting personal or confidential information, think privacy and security. If the issue is explaining how or why a model reached an outcome, think transparency.

A common trap is choosing reliability when the real concern is fairness, simply because the model is producing poor outcomes. Ask why the outcomes are poor. If the problem is inconsistency or unsafe operation, reliability is the right lens. If the problem is biased impact across groups, fairness is the better answer. Another trap is overlooking accountability, which refers to human responsibility for AI systems even when decisions are automated.

On Azure, responsible machine learning is not a separate afterthought; it is part of sound solution design. The exam wants you to appreciate that good machine learning is not just accurate. It must also be trustworthy, explainable where appropriate, and aligned with ethical and organizational expectations.

Section 3.6: Exam-style practice for machine learning principles and Azure service selection

Success on AI-900 comes from pattern recognition as much as content knowledge. Machine learning questions often include extra business context that can distract you from the tested objective. The best exam strategy is to identify the core requirement first. Is the organization predicting a number, assigning a category, discovering groups, building a custom model, or using a prebuilt AI service? Once you isolate that requirement, most answer choices become easier to eliminate.

For machine learning principle questions, pay close attention to phrases such as historical data, labeled data, unlabeled data, model performance, unseen data, or automated model selection. These terms often reveal the correct concept directly. If the scenario is about customer segmentation without predefined categories, the answer is clustering. If the organization wants a service to train and deploy a custom predictive model, Azure Machine Learning is likely the right service. If the issue is bias across groups, responsible AI fairness is being tested.

Another smart tactic is to classify the answer choices by category before choosing one. Some options will describe a machine learning method, others an Azure service, and others a responsible AI principle. The exam sometimes mixes these together to see whether you can separate what the question is asking from what the options happen to mention.

Exam Tip: Never choose an Azure AI service just because it sounds intelligent. Match the answer to the exact need: custom model building points to Azure Machine Learning, while prebuilt APIs point to Azure AI services.

Common traps include confusing classification with clustering, confusing Azure Machine Learning with Azure AI services, and treating responsible AI principles as vague general ethics rather than practical design concerns. Review mistakes by asking what keyword you missed. Did the scenario mention labels? Did it ask for a numeric forecast? Did it require grouping based on similarity? Did it mention prebuilt versus custom capabilities?

As you prepare, practice summarizing each question in one sentence before looking at answer choices. That habit forces you to identify the real objective being tested. On AI-900, that often makes the correct answer stand out quickly and prevents overthinking.

Chapter milestones
  • Understand machine learning fundamentals without coding
  • Differentiate supervised and unsupervised learning
  • Recognize Azure machine learning concepts and services
  • Practice exam-style ML and responsible AI questions
Chapter quiz

1. A retail company wants to use historical sales data to predict next month's revenue for each store. Which type of machine learning should the company use?

Show answer
Correct answer: Regression
Regression is correct because the goal is to predict a numeric value, which is a core AI-900 machine learning concept. Classification would be used to predict a category such as high, medium, or low sales risk, not an exact revenue amount. Clustering is used to group similar data points when no labels are provided, so it does not fit a scenario where the required outcome is a number.

2. A bank is building a model to determine whether a loan application should be approved or denied based on past applications that are already labeled with outcomes. Which learning approach does this describe?

Show answer
Correct answer: Supervised learning
Supervised learning is correct because the model is trained with labeled historical data, in this case approved or denied outcomes. Unsupervised learning is incorrect because it is used when data does not include known labels and the goal is to discover patterns or groupings. Reinforcement learning is incorrect because it focuses on learning through rewards and penalties over time, which is not the scenario described in AI-900 foundational ML questions.

3. A marketing team wants to group customers into segments based on purchasing behavior, but the data has no predefined labels. Which machine learning technique is most appropriate?

Show answer
Correct answer: Clustering
Clustering is correct because the task is to group similar customers using unlabeled data. Classification is incorrect because it requires known categories for training, which the scenario explicitly says are not available. Regression is incorrect because it predicts numeric values rather than similarity-based groupings. This aligns with the AI-900 objective of distinguishing supervised and unsupervised learning problem types.

4. A company wants to build, train, manage, and deploy custom machine learning models in Azure. Which Azure service should you recommend?

Show answer
Correct answer: Azure Machine Learning
Azure Machine Learning is correct because it is the primary Azure platform for creating, training, managing, and deploying machine learning models, which is explicitly emphasized in the AI-900 exam domain. Azure AI services is incorrect because it provides prebuilt AI capabilities such as vision, speech, and language APIs rather than being the main service for end-to-end custom ML lifecycle management. Azure Bot Service is incorrect because it is designed for conversational bot solutions, not for training and managing machine learning models.

5. A healthcare organization discovers that its machine learning model gives less accurate predictions for patients in one demographic group than for others. Which responsible AI principle is the primary concern?

Show answer
Correct answer: Fairness
Fairness is correct because the scenario describes uneven model performance across demographic groups, which is a classic responsible AI concern in AI-900. Transparency is incorrect because it focuses on making model behavior understandable and explainable, not primarily on unequal outcomes. Inclusiveness is incorrect because it relates to designing systems that can be used effectively by people with diverse needs, whereas the issue here is bias or disparate performance in predictions.

Chapter 4: Computer Vision and NLP Workloads on Azure

This chapter focuses on two of the most heavily tested AI workload domains on the AI-900 exam: computer vision and natural language processing. Microsoft expects candidates to recognize common business scenarios, identify the correct Azure AI service for those scenarios, and distinguish between services that seem similar at first glance. The exam is not primarily about coding. Instead, it measures whether you can map a requirement such as extracting printed text from scanned forms, analyzing image content, translating speech, or detecting sentiment in customer feedback to the right Azure offering.

As you study this chapter, keep the exam objective in mind: you are being tested on foundational understanding of AI workloads on Azure, not deep implementation details. That means questions often describe a business need in plain language and ask which service best satisfies it. A common trap is choosing a familiar service name rather than the one that exactly matches the requirement. For example, analyzing objects in a photo is different from extracting fields from an invoice, and both are different from transcribing spoken words in a call center recording.

In the computer vision portion of the exam, you should be able to recognize image analysis, optical character recognition, face-related capabilities, and video-oriented scenarios. In the NLP portion, you should understand text analytics tasks such as sentiment analysis, key phrase extraction, entity recognition, question answering, translation, and speech services. You should also be prepared to compare Azure AI Vision, Azure AI Document Intelligence, Azure AI Language, and Azure AI Speech based on business requirements.

Exam Tip: On AI-900, service-selection questions are often solved by finding the noun in the requirement. If the noun is image, photo, object, OCR, receipt, invoice, text, sentiment, entity, speech, translation, or question answering, that usually points you toward the correct family of Azure AI services.

Another exam pattern is mixing capabilities from more than one service into a single scenario. A chatbot that speaks multiple languages may require both a language-oriented capability and a speech capability. A document-processing solution may require OCR plus structured field extraction. The best strategy is to break the scenario into workload types, then identify the Azure service best aligned to each one.

This chapter integrates the required lessons by first reviewing core computer vision workloads on Azure, then moving to natural language processing workloads, then comparing services side by side, and finally ending with exam-focused review techniques for mixed-domain questions. Read closely for distinctions, because that is where AI-900 questions are usually won or lost.

Practice note for this chapter's milestones (computer vision workloads, NLP workloads, service comparison, and mixed exam-style practice): for each milestone, document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 4.1: Computer vision workloads on Azure: image analysis, OCR, face, and video scenarios
Section 4.2: Azure AI Vision and Document Intelligence service use cases
Section 4.3: NLP workloads on Azure: sentiment analysis, key phrases, entity extraction, and question answering
Section 4.4: Azure AI Language and Speech service scenarios for translation and conversational AI
Section 4.5: Choosing between vision and NLP services based on business requirements
Section 4.6: Exam-style practice for computer vision workloads on Azure and NLP workloads on Azure

Section 4.1: Computer vision workloads on Azure: image analysis, OCR, face, and video scenarios

Computer vision workloads involve enabling systems to interpret visual input such as photos, scanned images, and video. On the AI-900 exam, Microsoft commonly tests whether you can identify the type of visual task being described. The first major category is image analysis. This includes describing image content, identifying objects, tagging visual features, detecting brands or landmarks, and generating captions or insights from a picture. If the requirement is to understand what appears in an image, think of Azure AI Vision capabilities.

The second common category is OCR, or optical character recognition. At its most basic, OCR reads printed or handwritten text from images or scanned pages; on its own, it does not interpret the structure or business meaning of a full document. Exam questions may describe extracting printed or handwritten text from photographs, forms, posters, or receipts. When the requirement is simply to read text from visual content, OCR is the key phrase to recognize.
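
The exam never requires code, but for context, reading text from an image with Azure AI Vision looks roughly like the Python sketch below (based on the azure-ai-vision-imageanalysis package; the endpoint, key, and image URL are placeholders):

    from azure.ai.vision.imageanalysis import ImageAnalysisClient
    from azure.ai.vision.imageanalysis.models import VisualFeatures
    from azure.core.credentials import AzureKeyCredential

    client = ImageAnalysisClient(
        endpoint="https://<your-resource>.cognitiveservices.azure.com/",
        credential=AzureKeyCredential("<your-key>"),
    )

    # Ask the service to read (OCR) any text visible in the image.
    result = client.analyze_from_url(
        image_url="https://example.com/street-sign.jpg",  # placeholder
        visual_features=[VisualFeatures.READ],
    )

    if result.read is not None:
        for block in result.read.blocks:
            for line in block.lines:
                print(line.text)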

Face-related scenarios appear in foundational questions as well. These might involve detecting the presence of a face, identifying facial landmarks, or comparing one face with another. However, exam candidates should be careful: not every face scenario is framed as an approved or unrestricted use case. Microsoft also emphasizes responsible AI and limited access for some facial capabilities. If a question focuses on general awareness of what a face service can do, you should know the capability area; if the question touches on ethics, privacy, or responsible use, pause and think before assuming broad deployment is appropriate.

Video scenarios extend image analysis across time. Examples include analyzing frames from video streams, indexing video content, detecting actions or objects over time, and extracting text that appears within video. The exam may not require deep product-specific implementation knowledge, but you should recognize that video analysis often combines repeated image analysis and metadata extraction.

  • Image analysis: identify objects, captions, tags, scenes, and visible content in photos.
  • OCR: extract printed or handwritten text from images and scanned material.
  • Face scenarios: detect or compare faces, with awareness of responsible AI constraints.
  • Video scenarios: analyze content across sequences of frames rather than one still image.

Exam Tip: A frequent trap is confusing OCR with document understanding. OCR reads text. Document understanding goes further by identifying structure and fields such as invoice numbers, totals, and dates. If the exam wording stresses structured business documents, OCR alone is probably not the complete answer.

To answer correctly, identify whether the business problem is about seeing objects, reading text, recognizing faces, or analyzing motion and content over time. Those distinctions are exactly what AI-900 tests.

Section 4.2: Azure AI Vision and Document Intelligence service use cases

One of the most important exam skills in this chapter is distinguishing Azure AI Vision from Azure AI Document Intelligence. These services are related, but they are not interchangeable. Azure AI Vision is best matched to scenarios where the system needs to interpret visual content in images, such as identifying objects, generating image descriptions, analyzing scenes, or reading text through OCR. It is ideal when the input is a general image and the goal is to understand what is visible.

Azure AI Document Intelligence is best matched to structured or semi-structured documents, especially business documents. Think invoices, receipts, tax forms, ID documents, purchase orders, and custom business forms. This service goes beyond simply reading text. It can identify key-value pairs, tables, document structure, and specific fields that matter to business workflows. If an exam question describes automating data entry from forms, extracting totals from invoices, or pulling fields from documents at scale, Document Intelligence is usually the better choice.

Here is the distinction exam candidates must remember: Vision answers, “What is in this image?” Document Intelligence answers, “What structured information can I extract from this document?” That difference appears repeatedly in AI-900-style wording. A photo of a street scene is a Vision scenario. A scanned invoice with supplier name, due date, and total balance is a Document Intelligence scenario.

Another exam pattern is to include OCR in both answer choices and scenario language. Because both services can involve reading text, candidates sometimes choose only based on the phrase “extract text.” But the broader purpose matters. If the user wants plain text from an image, Vision OCR fits. If the user wants labeled document fields, tables, or business form extraction, Document Intelligence is the stronger match.
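
For contrast, extracting structured fields from an invoice looks roughly like the sketch below (based on the azure-ai-formrecognizer Python package and its prebuilt invoice model; the endpoint, key, and file name are placeholders):

    from azure.ai.formrecognizer import DocumentAnalysisClient
    from azure.core.credentials import AzureKeyCredential

    client = DocumentAnalysisClient(
        endpoint="https://<your-resource>.cognitiveservices.azure.com/",
        credential=AzureKeyCredential("<your-key>"),
    )

    # The prebuilt invoice model returns labeled fields, not just raw text.
    with open("invoice.pdf", "rb") as f:
        poller = client.begin_analyze_document("prebuilt-invoice", document=f)
    result = poller.result()

    for doc in result.documents:
        vendor = doc.fields.get("VendorName")
        total = doc.fields.get("InvoiceTotal")
        print(vendor.value if vendor else None,
              total.value if total else None)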

  • Use Azure AI Vision for image captions, tags, object identification, and OCR from general visual content.
  • Use Azure AI Document Intelligence for receipts, invoices, forms, and structured document extraction.
  • Watch for clues like “fields,” “forms,” “tables,” “business documents,” and “data entry automation.”

Exam Tip: If the requirement includes words such as invoice total, receipt line items, form fields, key-value pairs, or document layout, choose Document Intelligence over a generic vision service.

A common trap is thinking that because a document is an image file, Vision must be the answer. On the exam, input format matters less than the business outcome. A scanned invoice may technically be an image, but if the real goal is to extract structured accounting data, Document Intelligence is what the exam wants you to recognize.

Section 4.3: NLP workloads on Azure: sentiment analysis, key phrases, entity extraction, and question answering

Natural language processing workloads deal with understanding and extracting value from human language. On AI-900, you are expected to recognize core text analytics tasks and associate them with Azure AI Language. The most common tested tasks are sentiment analysis, key phrase extraction, entity recognition, and question answering.

Sentiment analysis determines whether text expresses a positive, negative, neutral, or mixed opinion. A classic exam scenario is analyzing product reviews, support feedback, or social media comments to determine customer attitude. If the requirement is to measure opinion or tone, sentiment analysis is the correct concept. Candidates sometimes confuse sentiment with classification. While both involve analyzing text, sentiment is specifically about emotional polarity or attitude.

Key phrase extraction identifies the most important words or short phrases in a text passage. This is useful for summarizing themes in support tickets, survey comments, or article content. If a scenario asks to identify the main topics mentioned across large volumes of text without building a custom machine learning model, key phrase extraction is a likely fit.

Entity extraction, often called named entity recognition, identifies people, places, organizations, dates, currencies, and other meaningful items in text. Some scenarios also involve categorizing entities into domain-relevant types. On the exam, clues such as “find company names,” “detect locations,” “extract dates,” or “identify monetary values” point toward entity recognition.

Question answering is another foundational NLP workload. In this pattern, users ask natural language questions and the system returns answers from a knowledge source. The exam may describe FAQ-style bots, support portals, or knowledge bases where users ask plain-language questions and expect concise answers. This is not the same as open-ended generative AI in every case. For AI-900 foundational questions, think of question answering as retrieving relevant responses from curated content.

  • Sentiment analysis: determine opinion or emotional tone.
  • Key phrase extraction: pull out major themes or important concepts.
  • Entity extraction: identify named items such as people, companies, locations, dates, and values.
  • Question answering: return answers from a knowledge base or curated information source.
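
These tasks map directly onto the Azure AI Language SDK. Here is a rough Python sketch (using the azure-ai-textanalytics package; the endpoint and key are placeholders, and question answering is exposed through a separate client in the same service family):

    from azure.ai.textanalytics import TextAnalyticsClient
    from azure.core.credentials import AzureKeyCredential

    client = TextAnalyticsClient(
        endpoint="https://<your-resource>.cognitiveservices.azure.com/",
        credential=AzureKeyCredential("<your-key>"),
    )

    reviews = ["The delivery from Contoso was late, but the Seattle support team was great."]

    # Sentiment: what do customers feel?
    print(client.analyze_sentiment(reviews)[0].sentiment)

    # Key phrases: what topics do they mention?
    print(client.extract_key_phrases(reviews)[0].key_phrases)

    # Entities: what specific things (organizations, places) appear?
    for entity in client.recognize_entities(reviews)[0].entities:
        print(entity.text, entity.category)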

Exam Tip: If the requirement asks what customers feel, think sentiment. If it asks what topics they mention, think key phrases. If it asks what specific things appear in the text, think entities.

A common exam trap is assuming that every chatbot scenario requires full conversational AI or speech. If the prompt only describes answering common questions from stored documentation, question answering in Azure AI Language may be all that is needed. Always separate text understanding from speech and from broad generative capabilities.

Section 4.4: Azure AI Language and Speech service scenarios for translation and conversational AI

Azure AI Language and Azure AI Speech both support language-centered solutions, but they solve different kinds of business problems. Azure AI Language focuses mainly on analyzing and understanding text. As discussed in the previous section, that includes sentiment analysis, key phrase extraction, entity recognition, summarization-related patterns, and question answering. If the input is written text and the goal is to interpret meaning, Language is the first service to consider.

Azure AI Speech addresses spoken interactions. It supports speech-to-text, text-to-speech, speech translation, and speech-enabled conversational scenarios. If a company wants to transcribe meetings, convert recorded calls into text, generate spoken audio from text, or translate spoken language in real time, Speech is the stronger fit. This distinction is heavily testable because many candidates focus on the word language and overlook whether the data is spoken or written.
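
A minimal speech-to-text sketch (based on the azure-cognitiveservices-speech Python package; the key and region are placeholders) shows the shape of a Speech solution:

    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(
        subscription="<your-key>",   # placeholder
        region="<your-region>",      # placeholder
    )

    # Transcribe a single utterance from the default microphone.
    recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
    result = recognizer.recognize_once()

    if result.reason == speechsdk.ResultReason.RecognizedSpeech:
        print("Transcript:", result.text)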

Translation scenarios can involve either text or speech. If the requirement is to translate written content from one language to another, the exam may point toward language translation capabilities. If the requirement is to translate what a person says during a live conversation, that is more clearly a Speech scenario. Pay attention to whether the scenario begins with a document, a chat message, a web page, an audio stream, or a live spoken interaction.

Conversational AI on the exam can mean several things. A text-based support assistant that answers common questions from documents may align with Language question answering. A voice bot that listens to callers, interprets spoken requests, and replies out loud involves Speech. Some realistic solutions combine both. For example, a multilingual voice assistant might use speech recognition, translation, language understanding, and speech synthesis together.

  • Azure AI Language: written text analysis and understanding.
  • Azure AI Speech: spoken input and spoken output.
  • Translation: distinguish text translation from speech translation.
  • Conversational AI: determine whether the bot is text-based, voice-based, or both.

Exam Tip: The fastest way to answer service-selection questions here is to ask, “Is the source content text or audio?” Text usually points to Language. Audio usually points to Speech.

A common trap is choosing Speech for any chatbot because chat feels conversational. But if the interaction is typed rather than spoken, Speech may not be required. Likewise, if a scenario mentions transcripts, recorded calls, subtitles, or spoken commands, Speech should move to the top of your answer list.

Section 4.5: Choosing between vision and NLP services based on business requirements

AI-900 does not simply test whether you know service names. It tests whether you can match services to real business requirements. That means you must look beyond technical buzzwords and identify the actual data type, outcome, and workflow being described. In vision scenarios, the input is usually an image, scanned document, or video. In NLP scenarios, the input is usually written or spoken language. That is the first split. After that, narrow the answer based on the desired result.

If the business wants to know what appears in photos uploaded by users, choose a vision-oriented service. If it wants to read and organize fields from invoices, choose Document Intelligence. If it wants to classify customer opinions in reviews, choose Language. If it wants to turn a phone conversation into text, choose Speech. If it wants a system to answer frequently asked questions from a knowledge source, use question answering capabilities in Language. If it wants to create spoken responses, Speech becomes important again.

The exam often presents answer choices that are all plausible in a general sense. Your job is to identify the most precise fit. For example, OCR can read text from a receipt image, but if the requirement is to extract merchant name, transaction date, and total cost into specific fields, Document Intelligence is more exact. Likewise, a text analytics service can process written transcripts, but if the organization first needs those transcripts created from audio, Speech is necessary.

Another useful strategy is to separate multimodal scenarios into components. A customer support solution might capture a spoken complaint, transcribe it, analyze its sentiment, and store the result. That is not one workload; it is a combination of Speech plus Language. A document workflow might scan forms, extract fields, and route them for approval. That points to Document Intelligence more than a generic image analysis service.

  • Start with the input type: image, document, text, or speech.
  • Then identify the business goal: describe, detect, extract, answer, translate, or transcribe.
  • Choose the service that best matches both the input and the outcome.

Exam Tip: The exam loves “best service” phrasing. When multiple services could do part of the job, pick the one that most directly satisfies the stated requirement with the least ambiguity.

Common traps include confusing image OCR with document extraction, text analysis with speech processing, and chatbot scenarios with question answering scenarios. The strongest candidates do not memorize only definitions; they practice matching requirements to services quickly and accurately.

Section 4.6: Exam-style practice for computer vision workloads on Azure and NLP workloads on Azure

To prepare effectively for mixed-domain AI-900 questions, focus on pattern recognition rather than rote memorization. Microsoft exam items often present short business cases and ask which Azure AI service should be used. Your task is to identify signal words, eliminate near-miss options, and select the service whose capabilities align most precisely with the requirement.

For computer vision, watch for phrases like identify objects in photos, analyze image content, detect visual features, read text from signs, process scanned receipts, or extract invoice fields. Those clues help you decide between Azure AI Vision and Azure AI Document Intelligence. For NLP, listen for terms such as customer opinion, key topics, named items, answer common questions, translate text, transcribe audio, or convert text to speech. These clues help you distinguish Azure AI Language from Azure AI Speech and related language capabilities.

A productive review method is to create a two-column comparison list for similar services. In one column, write the service. In the other, write the typical business outcomes it supports. For example, Vision: image understanding and OCR. Document Intelligence: document field extraction and layout analysis. Language: sentiment, entities, key phrases, and question answering. Speech: transcription, synthesis, and spoken translation. Repeatedly testing yourself on those pairings builds the exact recall AI-900 demands.

Another strong exam habit is reading the last line of a scenario first. If it asks for sentiment, entity recognition, OCR, speech transcription, or invoice field extraction, you immediately know what capability to search for in the rest of the paragraph. This reduces confusion when Microsoft includes extra details to distract you.

  • Underline the input type: image, document, text, or audio.
  • Circle the desired outcome: describe, read, extract, analyze sentiment, answer, translate, or speak.
  • Eliminate options that match only part of the requirement.
  • Select the service that most directly fits the complete scenario.

Exam Tip: If two answer choices both seem possible, ask which one is broader and which one is purpose-built. AI-900 usually prefers the purpose-built service for the specific business problem described.

Finally, do not overcomplicate foundational questions. AI-900 is an entry-level certification. The exam expects you to recognize core Azure AI workloads and common solution scenarios, not architect every implementation detail. If you can reliably classify whether a problem is about images, documents, text, or speech, and then map that problem to the most appropriate Azure AI service, you will be well prepared for this chapter’s exam objectives.

Chapter milestones
  • Understand computer vision workloads on Azure
  • Understand natural language processing workloads on Azure
  • Compare Azure AI services for vision and language scenarios
  • Practice mixed exam-style questions across both domains
Chapter quiz

1. A company wants to build a solution that identifies objects in photos uploaded by users and generates descriptive tags such as "outdoor," "vehicle," and "person." Which Azure AI service should they use?

Show answer
Correct answer: Azure AI Vision
Azure AI Vision is the correct choice because AI-900 expects you to associate image analysis, object detection, and image tagging with vision workloads. Azure AI Document Intelligence is designed for extracting text and structured fields from forms, invoices, and receipts rather than general photo analysis. Azure AI Language is used for natural language processing tasks such as sentiment analysis, entity recognition, and question answering, not interpreting visual content in images.

2. A finance department needs to process scanned invoices and automatically extract fields such as vendor name, invoice total, and due date. Which Azure AI service best fits this requirement?

Show answer
Correct answer: Azure AI Document Intelligence
Azure AI Document Intelligence is the best answer because this scenario requires OCR plus structured field extraction from business documents, which is a key exam distinction in AI-900. Azure AI Vision can perform image-related analysis and OCR scenarios, but it is not the best match when the requirement specifically involves extracting recognized fields from documents like invoices. Azure AI Speech is for speech-to-text, text-to-speech, and speech translation, so it does not fit a scanned document processing scenario.

3. A customer support team wants to analyze thousands of product reviews to determine whether each review expresses a positive, neutral, or negative opinion. Which Azure AI service should they use?

Show answer
Correct answer: Azure AI Language
Azure AI Language is correct because sentiment analysis is a core natural language processing capability covered in the AI-900 exam objectives. Azure AI Speech would be appropriate if the input were spoken audio that needed transcription or speech translation, but the requirement here is text opinion analysis. Azure AI Vision is for image and visual workloads, so it would not be used to classify sentiment in written reviews.

4. A company is creating a multilingual voice assistant. Users will speak into the app, and the solution must convert speech to text and translate the spoken content into another language. Which Azure AI service should be selected first for this requirement?

Show answer
Correct answer: Azure AI Speech
Azure AI Speech is the best initial service selection because the primary requirement involves spoken audio, including speech recognition and speech translation. On AI-900, speech-related nouns in the scenario usually point directly to Azure AI Speech. Azure AI Language handles text-based NLP tasks such as sentiment, entity recognition, and question answering, but it is not the core service for processing spoken input. Azure AI Document Intelligence is focused on forms and document extraction and is unrelated to voice assistant scenarios.

5. You need to recommend Azure AI services for a solution that scans customer-submitted claim forms, extracts printed text and key fields, and then analyzes any written comments for sentiment. Which combination of services should you recommend?

Show answer
Correct answer: Azure AI Document Intelligence and Azure AI Language
Azure AI Document Intelligence and Azure AI Language is the correct combination because the scenario contains two workload types: document processing and text analytics. Document Intelligence is used to extract printed text and structured fields from forms, while Language is used to analyze sentiment in written comments. Azure AI Vision and Azure AI Speech is incorrect because Speech is unrelated to written comments, and Vision alone is not the best choice for structured form-field extraction. Azure AI Language and Azure AI Speech is also incorrect because neither service is designed to extract fields from scanned claim forms.

Chapter 5: Generative AI Workloads on Azure

This chapter focuses on one of the most visible AI-900 exam domains: generative AI workloads on Azure. Microsoft expects candidates to recognize what generative AI does, how it differs from traditional predictive AI, and which Azure services support common business scenarios such as chat assistants, content generation, summarization, and knowledge-grounded copilots. On the exam, you are not being tested as a deep developer. Instead, you are being tested on service recognition, workload matching, basic terminology, and responsible use concepts.

Generative AI creates new content such as text, code, summaries, answers, images, or conversational responses. That makes it different from classic machine learning workloads that mostly classify, predict, detect, or recommend based on patterns in historical data. In AI-900 terms, this distinction matters because the exam may present a scenario and ask whether it is best solved by a generative AI capability, a traditional machine learning model, or another Azure AI service such as vision, speech, or language. You should be able to identify terms such as foundation model, large language model, prompt, completion, chat-based interface, grounding, and content filtering.

The chapter also connects directly to exam readiness. Questions in this area often sound simple but include wording traps. For example, a prompt that asks for “the most appropriate Azure service for building a conversational copilot” points toward Azure OpenAI Service rather than a general machine learning platform. A scenario that asks for “predicting whether a customer will churn” is not generative AI; it is predictive machine learning. The exam rewards careful reading and matching the requirement to the service.

Exam Tip: When you see verbs such as generate, summarize, rewrite, draft, answer in natural language, or converse, think generative AI. When you see classify, forecast, detect anomalies, or predict a numeric outcome, think traditional machine learning.

As you work through this chapter, focus on four outcomes that regularly appear on the AI-900 exam: understanding generative AI workloads on Azure, learning prompt and copilot concepts, recognizing responsible generative AI practices, and reviewing exam-style ways to analyze scenario wording. If you master those four areas, you will be well positioned for the generative AI questions on test day.

  • Know what generative AI workloads are designed to do.
  • Recognize the role of foundation models and large language models.
  • Understand how Azure OpenAI supports prompts, completions, and chat solutions.
  • Identify grounding and retrieval-augmented generation patterns at a conceptual level.
  • Remember responsible AI themes: safety, limitations, and human oversight.
  • Use elimination strategies to avoid common exam traps.

This chapter is written like an exam coaching session. Each section explains what the concept means, what the test is likely to ask, how to identify the correct answer, and which misunderstandings commonly cause candidates to miss points. Treat the material as both a content review and a strategy guide.

Practice note for this chapter's milestones (generative AI workloads, prompt and copilot concepts, responsible generative AI practices, and exam-style question review): for each milestone, document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 5.1: Generative AI workloads on Azure and how they differ from predictive AI

Section 5.1: Generative AI workloads on Azure and how they differ from predictive AI

Generative AI workloads are designed to produce new output. That output may be a paragraph, an answer to a question, a summary, a draft email, a code suggestion, or a conversational response. On Azure, these workloads are commonly associated with Azure OpenAI Service and copilot-style solutions that interact with users through prompts. By contrast, predictive AI uses models trained to estimate outcomes or assign labels. Examples include predicting sales, classifying images, detecting spam, or forecasting equipment failure.

This distinction is a favorite exam objective because Microsoft wants you to classify the workload before choosing the service. If a question describes a solution that writes product descriptions from bullet points, summarizes support tickets, or answers employee questions in natural language, it is pointing to a generative AI workload. If the scenario focuses on whether a transaction is fraudulent or whether a loan should be approved, that is predictive AI. The trap is that both use AI models, but they solve different problem types.

Another difference is interaction style. Generative AI is often prompt-driven and iterative. Users can refine instructions and ask follow-up questions. Predictive AI is usually structured around input features and a model output such as a class label or score. The AI-900 exam does not expect deep algorithm knowledge here, but it does expect you to recognize that generated content is probabilistic and context-sensitive, while predictive systems are optimized for scoring or classification tasks.

Exam Tip: If the scenario emphasizes creating human-like text or dialogue, eliminate answers centered on classification, regression, or anomaly detection. If the scenario emphasizes making a business prediction from historical data, generative AI is probably the wrong fit.

Azure-related wording also matters. A broad “build and manage machine learning models” description can suggest Azure Machine Learning. A “generate text and build conversational copilots” description points more directly to Azure OpenAI Service. Read the business requirement first, then match it to the workload category. That habit prevents one of the most common traps in this chapter: picking a familiar Azure brand instead of the best service for the actual task.

Section 5.2: Foundation models, large language models, and copilots in business scenarios

A foundation model is a large pretrained model that can be adapted or prompted for many tasks. Large language models, or LLMs, are foundation models specialized for language tasks such as drafting text, summarizing content, answering questions, and carrying on conversations. On the AI-900 exam, you do not need architectural detail. You do need to know that foundation models provide broad capabilities across multiple scenarios and can power applications without training a narrow model from scratch.

Copilots are applications that use generative AI to assist users within a task or workflow. In business scenarios, a copilot might help customer service agents summarize cases, help employees search internal knowledge, help sellers draft outreach, or help analysts interpret documents. The key exam idea is that a copilot is not just a model. It is an assistant experience built around a model, prompts, data, and user interaction. This distinction matters because exam questions may describe the user-facing solution rather than the underlying technology.

Business scenarios often include terms like productivity, knowledge assistance, content drafting, and natural language interaction. Those are strong clues that a copilot powered by an LLM is appropriate. If the question instead focuses on extracting sentiment, identifying entities, or translating text, another Azure AI language capability may be a better fit. The exam may place multiple plausible options together, so your job is to identify whether the need is generation and assistance, or analysis and extraction.

Exam Tip: Remember that copilots enhance human work rather than replace all decision making. If a scenario mentions assisting users in context, recommending next steps, or drafting responses, “copilot” is a strong conceptual match.

A common trap is assuming any chatbot equals a copilot. Some chatbots use predefined rules or retrieval only. A copilot generally implies more capable generative assistance. Another trap is overthinking the term foundation model. For AI-900, think of it as a versatile pretrained base that supports many downstream tasks. Keep your answer selection practical: what kind of user problem is being solved, and does it require generated language or general prediction?

Section 5.3: Azure OpenAI concepts, prompts, completions, and chat-based solutions

Azure OpenAI Service provides access to powerful generative AI models in Azure for tasks such as text generation, summarization, transformation, and conversational interaction. For AI-900, the important point is conceptual fit: this service is used when an organization wants to build applications that generate or reason over natural language content. It is not the same thing as general-purpose model training platforms or traditional NLP features that only analyze text.

A prompt is the instruction or context given to a model. It tells the model what to do, what style to use, what role to assume, or what information to consider. A completion is the model’s generated output in response to the prompt. In chat-based solutions, users and assistants exchange messages over multiple turns, allowing the model to respond in a more conversational format. On the exam, these terms may appear directly or be implied through scenario wording such as “ask questions in natural language” or “generate a draft response from user input.”
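
To make these terms concrete, here is a minimal sketch of a chat-based call using the openai Python package against a hypothetical Azure OpenAI deployment. The endpoint, key, API version, and deployment name ("chat-demo") are placeholders, not values from this course, and the exam will not ask you to write this code.

```python
# Minimal prompt/completion sketch -- all credentials and names are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

# The prompt: instructions and context, expressed as chat messages.
messages = [
    {"role": "system", "content": "You are a concise assistant. Answer in two sentences."},
    {"role": "user", "content": "Explain the difference between a prompt and a completion."},
]

response = client.chat.completions.create(model="chat-demo", messages=messages)

# The completion: the model's generated output for this prompt.
print(response.choices[0].message.content)
```

Appending further user and assistant messages to the same list is what makes the interaction multi-turn and conversational.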

The quality of a generative solution often depends on prompt design. Clear instructions, relevant context, and constraints improve results. While AI-900 will not demand advanced prompt engineering, you should know that prompts influence output quality and behavior. The exam may also check that you know a prompt can ask a model to summarize, classify, rewrite, translate, or answer, even though the underlying workload remains generative. The presence of a prompt-driven interface is a clue.

Exam Tip: If a scenario emphasizes iterative user interaction, natural-language instructions, and generated responses, Azure OpenAI is typically the best match. Do not confuse it with Azure AI Language features that mainly analyze existing text rather than generate new content.

Common traps include treating chat as the same as speech, or assuming all language tasks require the same service. If the requirement is spoken audio, speech services may be involved. If the requirement is generated text or a chat assistant, Azure OpenAI is the key concept. The exam rewards recognizing the service by the business capability rather than memorizing technical setup details.

Section 5.4: Retrieval-augmented generation, grounding, and common generative AI patterns

One challenge with generative AI is that models may produce answers that sound confident even when they are incomplete or incorrect. To reduce this risk, many business solutions use retrieval-augmented generation, often shortened to RAG. In this pattern, the application retrieves relevant information from trusted data sources and supplies that information to the model as context before the model generates a response. This process is also described as grounding the model in enterprise data.

For AI-900, you do not need to implement RAG, but you should understand why it exists. A general-purpose model knows many things from pretraining, but it may not know an organization’s current policies, private documents, pricing tables, or internal procedures. Grounding helps the generated answer reflect authoritative, up-to-date business data. If an exam scenario mentions a copilot that answers questions using company documents or internal knowledge bases, grounding or retrieval is the concept being tested.
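
As a conceptual illustration only, the sketch below shows the retrieve-then-generate flow. The search_documents function is a hypothetical stub standing in for any retrieval system (for example, an index over company manuals), and the generation call follows the chat pattern shown earlier; all credentials and names are placeholders.

```python
# Conceptual RAG sketch: retrieve trusted context, then generate from it.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

def search_documents(question: str, top: int = 3) -> list[str]:
    # Hypothetical retrieval stub: a real solution would query a search index
    # over enterprise documents and return the most relevant passages.
    manuals = [
        "Refund requests are processed within 14 days of an approved return.",
        "Employees accrue 1.5 vacation days per month of service.",
        "Severity-1 support tickets must be acknowledged within one hour.",
    ]
    return manuals[:top]

def answer_with_grounding(question: str) -> str:
    context = "\n".join(search_documents(question))   # 1. retrieve
    messages = [                                      # 2. augment the prompt
        {"role": "system", "content": "Answer only from the provided context. "
                                      "If the context is insufficient, say so."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
    response = client.chat.completions.create(model="chat-demo", messages=messages)
    return response.choices[0].message.content       # 3. generate

print(answer_with_grounding("How fast are refunds processed?"))
```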

Common generative AI patterns include question answering over documents, summarization of large text collections, drafting content with business context, and conversational assistants connected to trusted knowledge. The exam may test recognition rather than terminology. For example, “use company manuals to answer employee questions” is a retrieval-grounded solution pattern even if the phrase RAG is not used in the question.

Exam Tip: When the scenario requires accurate answers based on organizational content, look for wording that suggests grounding the model with retrieved data. A plain foundation model alone may be too generic for that requirement.

A frequent trap is assuming retraining the model is always necessary. In many AI-900-style scenarios, the better answer is not “train a custom model from scratch,” but “provide relevant context from enterprise data.” Another trap is believing grounding guarantees correctness. It improves relevance and trustworthiness, but human review and safety controls are still important. On the exam, choose the answer that best aligns generated output with trusted information, not the answer that sounds most technically complex.

Section 5.5: Responsible generative AI, content safety, limitations, and human oversight

Responsible generative AI is a core exam theme. Microsoft wants candidates to understand that generative systems can produce harmful, biased, unsafe, or inaccurate content if they are not designed and monitored carefully. In Azure-based solutions, organizations should consider content safety, acceptable use, transparency, privacy, and mechanisms for review and correction. For AI-900, you are being tested on awareness and principles rather than governance implementation details.

Content safety refers to controls that help detect or filter harmful outputs and, in some cases, problematic inputs. Limitations include hallucinations, outdated knowledge, lack of domain specificity, sensitivity to prompt wording, and the possibility of generating plausible but wrong answers. These limitations are especially important in high-impact scenarios such as healthcare, finance, legal guidance, or safety-critical operations. The exam may describe a generated response that must be reviewed before use. That is a clue pointing to human oversight.

Human oversight means keeping people involved where necessary to validate outputs, handle exceptions, and make final decisions. A copilot can draft, summarize, and suggest, but a person may still need to approve the result. This is both a practical design principle and a common exam answer pattern. Microsoft often frames responsible AI as augmenting people with guardrails rather than automating every decision end to end.
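
The control flow below sketches that principle in plain Python. The three helper functions are hypothetical stand-ins for a generative model call, a content-safety filter, and a review step; the point is where the human sits in the loop, not any specific API.

```python
# Human-oversight sketch: the copilot drafts, a person decides.
def generate_draft(case_notes: str) -> str:
    # Hypothetical stand-in for a generative model call.
    return f"Dear customer, regarding your case ({case_notes}), here is our proposed resolution..."

def passes_content_safety(text: str) -> bool:
    # Hypothetical stand-in for a content-safety filter.
    return "forbidden" not in text.lower()

def request_human_approval(draft: str) -> bool:
    # Hypothetical review step: a person approves, edits, or rejects.
    return input(f"Approve this draft?\n{draft}\n[y/N]: ").strip().lower() == "y"

def review_workflow(case_notes: str) -> str:
    draft = generate_draft(case_notes)
    if not passes_content_safety(draft):
        return "BLOCKED: draft failed content safety checks."
    if request_human_approval(draft):
        return draft
    return "REJECTED: returned to the agent for manual handling."

# Example: print(review_workflow("order #1234 arrived damaged"))
```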

Exam Tip: If two choices both seem functional, prefer the one that includes safeguards, monitoring, review, or human validation. AI-900 often rewards the most responsible approach, not merely the most automated one.

Common traps include thinking that a powerful model is automatically reliable, or that grounding eliminates all risk. It does not. Another trap is treating responsible AI as a separate afterthought instead of part of solution design. On exam questions about generated content for customers or employees, always consider whether the answer choice includes safety filtering, oversight, and awareness of limitations. That is often the differentiator between a merely plausible answer and the best exam answer.

Section 5.6: Exam-style practice for generative AI workloads on Azure

To prepare for generative AI questions on the AI-900 exam, use a structured review method. First, identify the business outcome in the scenario. Is the system supposed to generate new content, analyze existing content, make a prediction, or search trusted knowledge? Second, highlight keywords that point to Azure services or concepts: prompt, summarize, draft, copilot, chat, internal documents, responsible AI, filtering, or human approval. Third, eliminate answers that solve a different AI workload, even if they sound advanced or familiar.

A strong exam strategy is to separate “what the user wants” from “what the technology does.” If the user wants generated answers in natural language, generative AI is likely involved. If the user wants answers based on company files, think grounding or retrieval. If the user wants assistance inside a workflow, think copilot. If the question warns about harmful or inaccurate outputs, think responsible AI, content safety, and oversight. This mental checklist helps you move quickly without guessing.
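
As a purely illustrative study aid, that checklist can even be written down as a lookup. The clue lists below are shorthand for this course's review method, not an official Microsoft taxonomy.

```python
# Study-aid sketch: map scenario wording to the checklist topics above.
CLUES = {
    "generative AI": ["generate", "draft", "summarize", "answer", "chat"],
    "grounding / retrieval": ["company files", "internal documents", "knowledge base"],
    "copilot": ["copilot", "assist", "next steps"],
    "responsible AI": ["harmful", "inaccurate", "review", "filter"],
    "predictive ML": ["predict", "classify", "detect", "forecast"],
}

def likely_topics(scenario: str) -> list[str]:
    """Return checklist topics whose clue words appear in the scenario text."""
    text = scenario.lower()
    return [topic for topic, words in CLUES.items() if any(w in text for w in words)]

print(likely_topics("A copilot that drafts replies from internal documents, reviewed before sending."))
# -> ['generative AI', 'grounding / retrieval', 'copilot', 'responsible AI']
```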

During review, pay attention to Microsoft’s wording patterns. “Most appropriate” means there may be several possible tools, but only one best fit. “Conversational assistant” suggests chat-based generative AI. “Private organizational data” suggests grounding. “Review before sending” suggests human oversight. “Predict future values” or “classify records” indicates predictive machine learning rather than generative AI. The exam often tests precision, not complexity.

Exam Tip: In scenario questions, underline the verbs. Generate, draft, summarize, answer, and converse usually signal generative AI. Predict, classify, detect, and forecast usually signal traditional machine learning.

Finally, use mock-test review the right way. Do not only check whether you were right or wrong. Ask why the correct answer matched the workload better than the distractors. Build a one-line rule from each mistake, such as “chat with generated responses points to Azure OpenAI” or “company-document answers suggest grounding.” Those rules become fast recall tools on exam day and are especially effective for the generative AI portion of AI-900.

Chapter milestones
  • Understand generative AI workloads on Azure
  • Learn prompts, copilots, and foundation model concepts
  • Recognize responsible generative AI practices
  • Practice exam-style generative AI questions and reviews
Chapter quiz

1. A company wants to build a chat-based assistant that answers employee questions by generating natural language responses from a large language model. Which Azure service is the most appropriate choice?

Correct answer: Azure OpenAI Service
Azure OpenAI Service is the best match because the scenario requires a chat-based generative AI solution that produces natural language responses using a large language model. Azure Machine Learning can be used to build and manage many machine learning models, but it is not the most direct exam answer for a conversational copilot scenario. Azure AI Vision is used for image analysis and related vision workloads, not for text-based conversational generation.

2. Which task is the best example of a generative AI workload rather than a traditional predictive machine learning workload?

Correct answer: Generating a summary of a long support case in natural language
Generating a summary is a generative AI task because the model creates new natural language content from source material. Predicting customer churn and detecting fraud are classic predictive machine learning scenarios that identify patterns or outcomes from historical data rather than generate new content. On the AI-900 exam, verbs such as summarize, draft, rewrite, and answer usually indicate generative AI.

3. A business wants its copilot to answer questions by using both a foundation model and current information from the company's internal documents. Which concept best describes this design?

Correct answer: Grounding with retrieval-augmented generation
Grounding with retrieval-augmented generation is the correct concept because it supplements the model with relevant enterprise data so responses are based on trusted sources. Image classification is a computer vision workload and does not apply to question answering over documents. Anomaly detection identifies unusual patterns in data and is unrelated to knowledge-grounded copilots.

4. You are reviewing a proposed generative AI solution that drafts customer email responses automatically. Which practice best aligns with responsible generative AI guidance for AI-900?

Correct answer: Use human oversight and content filtering for generated responses
Using human oversight and content filtering reflects responsible generative AI practices emphasized in AI-900, including safety, limitations, and review of generated output. Automatically sending all responses without review increases the risk of harmful, incorrect, or inappropriate content. Removing safety controls is the opposite of responsible AI and would increase risk rather than improve trustworthiness.

5. A candidate reads the following requirement: 'Build a solution that predicts the numeric sales total for next quarter.' Which conclusion is most appropriate?

Correct answer: This is primarily a predictive machine learning workload rather than generative AI
Predicting a numeric sales total is a forecasting problem, which falls under traditional predictive machine learning. It is not generative AI in the AI-900 sense, because the exam distinguishes generated natural language or media content from predictive tasks such as classification, forecasting, and anomaly detection. Speech AI would only apply if the requirement involved speech recognition or synthesis, which it does not.

Chapter 6: Full Mock Exam and Final Review

This chapter brings the entire Microsoft AI Fundamentals AI-900 course together into a final exam-prep system. By this point, you should already recognize the core tested areas: AI workloads and common solution scenarios, machine learning principles on Azure, computer vision workloads, natural language processing workloads, and generative AI concepts including responsible use. The purpose of this final chapter is not to teach brand-new theory. Instead, it is to help you perform under exam conditions, review weak areas intelligently, and avoid the traps that cause candidates to miss otherwise manageable questions.

The AI-900 exam is a fundamentals exam, which means Microsoft is not testing deep implementation or coding ability. It is testing whether you can identify the right AI workload, match a business need to an Azure AI capability, distinguish between similar services, and apply basic responsible AI reasoning. The most common scoring problem is not lack of intelligence or effort. It is misreading the scenario, overlooking a service limitation, or choosing an answer that sounds technically impressive but does not fit the requirement being tested.

In the first half of your final review, represented here as Mock Exam Part 1 and Mock Exam Part 2, your goal is to simulate realistic exam pacing. That means practicing domain switching: one moment you may need to identify a supervised learning scenario, and the next you must recognize whether a computer vision use case belongs to image classification, object detection, OCR, or face-related analysis. A strong mock review should therefore train both knowledge recall and answer selection discipline.

The next step is Weak Spot Analysis. This is where serious score improvement happens. Do not just mark answers as right or wrong. Group every miss into categories: misunderstood the workload, confused Azure services, ignored a keyword in the requirement, or guessed because two answers seemed plausible. This approach maps directly to exam objectives and helps you focus your final review time where it matters most.

Finally, the Exam Day Checklist turns preparation into execution. On exam day, fundamentals candidates often lose points because of speed anxiety, second-guessing, or failure to eliminate distractors. You should walk into the exam knowing how to read each prompt, how to identify the tested concept, and how to separate broad AI ideas from Azure-specific service names.

  • Use mock exams to practice recognition, not memorization.
  • Review by exam domain so that weak areas become visible.
  • Focus on service-purpose matching, not implementation details.
  • Watch for common traps involving similar-sounding Azure AI services.
  • Treat responsible AI as a tested principle, not an optional extra.

Exam Tip: On AI-900, the best answer is often the one that most directly satisfies the stated business need with the simplest correct Azure AI capability. Avoid selecting answers just because they sound more advanced.

As you work through this chapter, think like an exam coach and a candidate at the same time. Ask yourself what objective is being measured, what wording signals the correct topic area, and what distractors are likely designed to catch someone who studied only at a surface level. That mindset is what converts review into readiness.

Practice note for this chapter's milestones (Mock Exam Part 1, Mock Exam Part 2, Weak Spot Analysis, and Exam Day Checklist): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
  • Section 6.1: Full mock exam blueprint aligned to all official AI-900 domains
  • Section 6.2: Review strategy for Describe AI workloads and machine learning questions
  • Section 6.3: Review strategy for computer vision workloads on Azure questions
  • Section 6.4: Review strategy for NLP workloads on Azure questions
  • Section 6.5: Review strategy for generative AI workloads on Azure questions
  • Section 6.6: Final exam-day tactics, confidence building, and last-minute review

Section 6.1: Full mock exam blueprint aligned to all official AI-900 domains

Your full mock exam should reflect the way AI-900 blends concept recognition with Azure service awareness across all official domains. A good blueprint covers the complete objective set instead of overemphasizing one favorite topic. In practical terms, this means the mock should include balanced review of AI workloads and common solution scenarios, machine learning fundamentals on Azure, computer vision workloads, natural language processing workloads, and generative AI workloads. The exam does not require deep technical configuration, but it does require accurate identification of what each service or workload is designed to do.

Mock Exam Part 1 should emphasize foundational recognition. This is where you test whether you can quickly classify a business scenario into machine learning, computer vision, NLP, or generative AI. Many candidates lose easy points here because they rush past key requirement words such as classify, detect, predict, translate, summarize, extract text, or generate content. Mock Exam Part 2 should increase difficulty by mixing closely related answer choices, especially where Azure AI services appear similar on the surface. This simulates the real exam experience, where distractors are often credible but not precise enough.

Build your mock review around domain questions such as identifying supervised versus unsupervised learning scenarios, matching image analysis tasks to the correct Azure AI service, recognizing speech and translation workloads, and distinguishing classic AI from generative AI. Include responsible AI concepts throughout rather than treating them as a separate isolated topic. Microsoft often tests ethical and practical AI judgment in the context of real scenarios.

  • Domain 1: Describe AI workloads and common AI solution scenarios.
  • Domain 2: Describe fundamental principles of machine learning on Azure.
  • Domain 3: Describe features of computer vision workloads on Azure.
  • Domain 4: Describe features of NLP workloads on Azure.
  • Domain 5: Describe features of generative AI workloads on Azure.

Exam Tip: When reviewing a mock exam, always ask, "What exact exam objective was this item measuring?" If you cannot answer that, your review is too shallow. The point of the mock is not just scoring yourself. It is identifying whether your mistakes come from weak concepts, poor reading discipline, or confusion between Azure offerings.

A final blueprint rule: mimic test conditions. Use timed blocks, avoid checking notes, and practice staying calm when topics switch rapidly. AI-900 rewards broad, organized understanding. A realistic mock exam helps you prove to yourself that your knowledge holds together across the entire domain map.

Section 6.2: Review strategy for Describe AI workloads and machine learning questions

This review area combines two high-value exam themes: recognizing general AI solution scenarios and understanding machine learning fundamentals on Azure. The exam expects you to distinguish among common workloads such as anomaly detection, forecasting, classification, clustering, recommendation, and conversational AI. It also expects you to know the difference between supervised learning, unsupervised learning, and responsible AI principles. You are not being tested as a data scientist; you are being tested on whether you can identify the right type of learning for the stated need.

A practical review strategy is to sort machine learning questions by verb. If a scenario asks to predict a known labeled outcome, think supervised learning. If it asks to discover groupings without predefined labels, think unsupervised learning. If it involves numerical value prediction, consider regression. If it involves assigning categories, think classification. When reviewing misses from your mock exam, note whether you confused the learning type or simply failed to map the business language to ML terminology.
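
If seeing the distinction in code helps, here is a tiny scikit-learn sketch of the two learning types. It is purely illustrative; AI-900 does not require writing code.

```python
# Supervised vs. unsupervised in miniature (illustrative only).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# Supervised learning: labeled outcomes (y) exist, and the model predicts them.
clf = LogisticRegression().fit(X, y)
print("predicted labels:", clf.predict(X[:3]))

# Unsupervised learning: no labels; the algorithm discovers groupings itself.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("discovered clusters:", km.labels_[:3])
```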

Azure-specific awareness matters too. You should recognize Azure Machine Learning as the broader platform for building, training, and deploying models. At the fundamentals level, what matters most is understanding purpose, not deployment detail. Also review responsible AI concepts such as fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These are frequently tested through scenario wording rather than direct definition alone.

Common traps include choosing an answer based on a familiar buzzword instead of the requirement. For example, candidates sometimes select AI solutions that sound advanced even when a simpler analytics or classification approach fits better. Another trap is confusing predictive ML with rule-based automation. Read carefully for clues that indicate learned patterns from data versus predefined logic.

Exam Tip: If a question mentions historical labeled data and predicting future outcomes, supervised learning is often the anchor concept. If it emphasizes finding patterns or groups without labels, unsupervised learning is the safer direction.

For final review, make a one-page comparison chart of classification, regression, clustering, anomaly detection, and recommendation scenarios. If you can identify each from plain business language without relying on jargon, you are in strong shape for this domain.

Section 6.3: Review strategy for computer vision workloads on Azure questions

Computer vision questions on AI-900 usually test whether you can match an image-based requirement to the correct Azure AI capability. The exam may describe analyzing an image, detecting objects, extracting printed or handwritten text, recognizing content in a video stream, or using facial attributes in a carefully framed scenario. Your task is not to remember implementation syntax. Your task is to identify what kind of vision problem is being described and choose the Azure service or feature that best aligns with it.

Start your review by separating the major workload types. Image classification is about assigning a label to an image. Object detection is about locating and identifying one or more objects within an image. Optical character recognition is about extracting text. Image analysis can include tags, captions, and general description. If a scenario involves reading receipts, documents, or forms, you should think about document and text extraction capabilities rather than generic image classification.
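
To anchor those categories, here is a hedged sketch using the azure-ai-vision-imageanalysis package as its 1.0 API is commonly documented; the endpoint, key, and image URL are placeholders, and the exam never requires this code.

```python
# Image analysis vs. OCR in one request -- all names and credentials are placeholders.
from azure.ai.vision.imageanalysis import ImageAnalysisClient
from azure.ai.vision.imageanalysis.models import VisualFeatures
from azure.core.credentials import AzureKeyCredential

client = ImageAnalysisClient(
    endpoint="https://YOUR-RESOURCE.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("YOUR-KEY"),                     # placeholder
)

result = client.analyze_from_url(
    image_url="https://example.com/receipt.jpg",                   # placeholder
    visual_features=[VisualFeatures.CAPTION, VisualFeatures.READ],
)

# Image analysis: a general description of the picture.
if result.caption:
    print("caption:", result.caption.text)

# OCR (the READ feature): the extracted text, line by line.
if result.read:
    for block in result.read.blocks:
        for line in block.lines:
            print("text:", line.text)
```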

One of the biggest exam traps is confusing broad image analysis with specialized vision tasks. A candidate may see the word image and choose a general vision answer, even when the real requirement is text extraction or object location. Another common trap is overthinking face-related scenarios. At the fundamentals level, focus on what the service is intended to do and stay alert to responsible AI concerns and service policy boundaries.

When reviewing your mock results, rewrite each missed item in your own words as a plain requirement: "identify objects," "read text," "describe image contents," or "analyze video frames." This helps you build instant recognition. Azure exam questions often become much easier once the business wording is translated into the underlying computer vision task.

  • Use image analysis when the goal is general visual understanding.
  • Use OCR-related capabilities when the goal is extracting text.
  • Use object detection when location and identification both matter.
  • Use document-focused extraction when the input is structured forms or documents.

Exam Tip: Look for the noun in the requirement. If the question is really about text, the answer is rarely the most generic image service. If the question is about locating items inside an image, classification alone is not enough.

Strong candidates in this domain are not necessarily those who know every product name from memory. They are the ones who can separate similar visual tasks and refuse to be distracted by broad but imprecise answer choices.

Section 6.4: Review strategy for NLP workloads on Azure questions

Natural language processing is another domain where AI-900 often tests precision. Many candidates broadly understand that NLP works with text and speech, but the exam expects you to distinguish among several specific workloads. You should be ready to identify language detection, sentiment analysis, key phrase extraction, entity recognition, translation, question answering, speech-to-text, text-to-speech, and conversational language understanding at a fundamentals level. The wording of the requirement usually points directly to the correct category if you read carefully.

A strong review process starts by splitting NLP into text analysis, language understanding, translation, and speech. Text analysis covers things like sentiment and entity extraction. Translation is about converting text between languages. Speech services cover transcription and spoken output. Language understanding and question answering focus on interpreting user intent or retrieving useful responses from conversational content. During Weak Spot Analysis, note whether your misses happen because you confuse text services with speech services, or because you fail to distinguish one text-analysis task from another.
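
A short sketch with the azure-ai-textanalytics package shows how each text-analysis task maps to a separate call. The endpoint and key are placeholders; treat this as an illustration of which capability does what, not a setup guide.

```python
# One document, four text-analysis tasks -- credentials are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://YOUR-RESOURCE.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("YOUR-KEY"),                     # placeholder
)

docs = ["The new dashboard is fantastic, but logging in from Paris is slow."]

print(client.detect_language(docs)[0].primary_language.name)          # language detection
print(client.analyze_sentiment(docs)[0].sentiment)                    # sentiment analysis
print(client.extract_key_phrases(docs)[0].key_phrases)                # key phrase extraction
print([e.text for e in client.recognize_entities(docs)[0].entities])  # entity recognition
```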

Common exam traps include choosing sentiment analysis when the task is actually entity extraction, or selecting translation when the scenario really involves understanding intent in the same language. Another trap is assuming any chatbot requirement means generative AI. AI-900 still expects you to recognize more traditional NLP capabilities and service patterns. Not every conversational scenario requires large language models.

To improve accuracy, underline action words during review: detect language, extract phrases, identify entities, translate content, transcribe speech, synthesize speech, or understand intent. These verbs usually reveal the tested service category faster than the surrounding business story. This is especially useful in mixed-topic mock exams, where the scenario may contain extra details that are not relevant to the actual objective.

Exam Tip: If the requirement is about converting spoken audio into text, focus on speech recognition. If the requirement is about deciding whether text expresses positive or negative emotion, that is sentiment analysis, not intent recognition.

Before exam day, create a compact comparison sheet for text analytics, language understanding, translation, and speech. If you can explain the difference between these in one sentence each, you will handle most AI-900 NLP questions with confidence.

Section 6.5: Review strategy for generative AI workloads on Azure questions

Generative AI is a newer and highly visible part of the AI-900 exam, but candidates should approach it with the same fundamentals mindset used elsewhere in the blueprint. Microsoft is likely to test whether you understand what generative AI does, how prompts influence outputs, what foundation models are, how copilots support user tasks, and why responsible use matters. The exam is not asking you to be a prompt engineer at an expert level. It is asking whether you can describe the workload accurately and recognize safe, appropriate use cases.

Begin your review by separating generative AI from traditional predictive AI. Traditional models classify, predict, detect, or analyze. Generative AI creates new content such as text, summaries, answers, code suggestions, or images based on learned patterns from training data and prompt input. Foundation models are large pre-trained models that can be adapted or prompted for multiple tasks. Copilots are applications or interfaces that use generative AI to assist users with tasks, often grounded in specific context or enterprise data.

Prompting is frequently misunderstood. On the exam, think of prompts as instructions and context that guide the model toward useful output. Better prompts usually mean more relevant, constrained, and aligned results. However, the exam also expects you to recognize limitations: generative models can produce inaccurate or inappropriate content, so human review and responsible AI controls remain important.

Common traps include treating generative AI as automatically correct, confusing a copilot interface with the underlying model, or assuming that all AI-generated output is suitable for direct publication without validation. Another trap is choosing an answer that emphasizes creativity when the question is really about productivity assistance, summarization, or content drafting.

  • Foundation models provide broad pretrained capabilities.
  • Prompts shape model behavior and output quality.
  • Copilots help users complete tasks with AI assistance.
  • Responsible AI is essential because generated content may be biased, unsafe, or inaccurate.

Exam Tip: When a scenario mentions drafting, summarizing, answering, or generating content from instructions, generative AI should be high on your shortlist. Then check whether the answer choice also respects safety, grounding, and human oversight.

In your final review, practice explaining generative AI in plain language. If you can clearly contrast it with machine learning classification or NLP intent detection, you will avoid one of the most common modern AI-900 confusion points.

Section 6.6: Final exam-day tactics, confidence building, and last-minute review

Your final preparation should now shift from study mode to execution mode. The exam-day goal is to convert what you know into points without giving them away to nerves, rushed reading, or avoidable second-guessing. Start with a simple exam day checklist: arrive early or log in early, verify your testing environment, know your identification requirements, and avoid trying to learn new material at the last minute. Your last-minute review should be limited to high-yield comparisons such as supervised versus unsupervised learning, OCR versus image analysis, sentiment versus entity recognition, and traditional AI versus generative AI.

Confidence building comes from process, not emotion. Remind yourself that AI-900 is a fundamentals exam. You do not need to know every feature detail of every Azure service. You need to recognize scenarios and eliminate wrong answers. Read each prompt carefully, identify the workload first, then match it to the Azure capability. If an answer sounds powerful but does not directly meet the requirement, discard it. If two answers seem close, look for the one with the narrowest and most accurate fit.

During the exam, use a three-step method. First, identify the domain: machine learning, vision, NLP, or generative AI. Second, identify the exact task: classify, detect, extract, translate, transcribe, summarize, and so on. Third, compare answer choices only after you know what task is being tested. This prevents distractors from steering your thinking. If you are unsure, eliminate obviously wrong options and make the best objective-based choice rather than guessing randomly.

Weak Spot Analysis still matters right up to the end. In the final 24 hours, review only the categories where your mock exam performance was inconsistent. Do not waste energy rereading sections you already dominate. A focused final review produces better retention and lower anxiety.

Exam Tip: If you feel stuck, slow down and restate the business need in one short sentence. Most AI-900 questions become clearer when translated into a simple requirement like "predict a category," "read text from an image," or "generate a summary."

Finish your preparation with a calm, compact review sheet and a clear mindset: identify the workload, match the service, respect responsible AI principles, and trust your preparation. That is how you turn a full course of study into an exam pass.

Chapter milestones
  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist
Chapter quiz

1. A learner preparing for the AI-900 exam notices during a mock test review that they frequently confuse OCR, image classification, and object detection questions. What is the MOST effective next step to improve exam readiness?

Correct answer: Group missed questions by workload type and review the differences between similar Azure AI capabilities
The best answer is to group missed questions by workload type and review the differences between similar Azure AI capabilities. AI-900 measures the ability to match requirements to the correct AI workload or service, so weak spot analysis should focus on why confusion occurred. Memorizing sample questions is not the best strategy because the exam tests recognition and scenario matching rather than repeated wording. Focusing only on a high-scoring domain is also incorrect because unresolved weak areas, especially around similar-sounding services, commonly lead to avoidable mistakes.

2. A retail company wants to analyze photos from store shelves and identify each product in an image along with its location in the photo. Which capability should you select?

Correct answer: Object detection
Object detection is correct because the requirement includes identifying items and locating them within the image. Image classification would label the overall image or assign categories, but it would not provide the position of each product. OCR is used to extract printed or handwritten text from images and does not meet the requirement to detect and locate products.

3. You are taking the AI-900 exam and see a question describing a business need in plain language. What is the BEST strategy to choose the correct answer?

Correct answer: Identify the business requirement first, then match it to the simplest Azure AI capability that directly satisfies the need
The correct approach on AI-900 is to identify the stated requirement and match it to the simplest correct Azure AI capability. This aligns with the exam's focus on service-purpose matching rather than implementation complexity. Choosing the most advanced-sounding service is a common trap because a more complex service may not be the best fit. Ignoring Azure-specific names is also wrong because the exam explicitly tests knowledge of Azure AI workloads and services.

4. A student reviewing practice results marks every question only as correct or incorrect and then rereads the entire course from the beginning. Based on effective final review methods for AI-900, why is this approach suboptimal?

Correct answer: Because it does not identify whether mistakes came from misunderstood workloads, service confusion, or missed keywords
This approach is suboptimal because it does not analyze the source of mistakes. Effective weak spot analysis groups misses into categories such as misunderstood workload, confused Azure services, ignored requirement wording, or poor elimination of distractors. AI-900 is a fundamentals exam and does not primarily test coding ability, so the first option is incorrect. The third option is also wrong because even passing-level mock results can hide repeatable weaknesses that may reduce performance on the real exam.

5. A company plans to deploy a generative AI solution that creates customer email drafts. During final review for AI-900, which additional principle should a candidate expect to be tested alongside the technical capability?

Correct answer: Responsible AI considerations such as reviewing outputs for harmful or inappropriate content
Responsible AI considerations are correct because AI-900 includes generative AI concepts together with responsible use principles. Candidates are expected to recognize that generated outputs should be monitored for quality, safety, and appropriateness. Detailed model tuning and GPU optimization are beyond the fundamentals scope of AI-900. Advanced container orchestration practices are also not the focus of this exam, which emphasizes identifying use cases, services, and responsible AI principles rather than deep implementation details.