
Microsoft AI Fundamentals for Non-Technical Pros AI-900

AI Certification Exam Prep — Beginner

Clear, beginner-friendly AI-900 prep for confident exam success

Beginner · AI-900 · Microsoft · Azure AI Fundamentals · AI certification

Prepare for Microsoft AI-900 with confidence

Microsoft Azure AI Fundamentals, also known as AI-900, is one of the most approachable entry points into artificial intelligence certification. It is designed for learners who want to understand core AI concepts and Azure AI services without needing a software engineering background. This course blueprint is built specifically for non-technical professionals who want a structured, beginner-friendly path to exam readiness while staying aligned to the official Microsoft exam domains.

The course covers Microsoft's AI-900 exam through a clear six-chapter journey. Instead of overwhelming you with technical depth that is not required for the exam, the course emphasizes business-friendly explanations, service recognition, scenario matching, and exam-style reasoning. If you are new to certification study, this structure helps you know what to learn, how to study, and how to answer questions under timed conditions.

How the course maps to the official exam domains

The blueprint is organized to reflect the exam objectives listed by Microsoft:

  • Describe AI workloads and considerations
  • Describe fundamental principles of machine learning on Azure
  • Describe features of computer vision workloads on Azure
  • Describe features of Natural Language Processing (NLP) workloads on Azure
  • Describe features of generative AI workloads on Azure

Chapter 1 introduces the certification itself, including exam format, registration options, scoring expectations, and practical study strategy for first-time candidates. This is especially useful if you have never taken a Microsoft certification exam before. Chapters 2 through 5 then focus on the official domains in a logical order, with each chapter including milestone goals and dedicated exam-style practice. Chapter 6 brings everything together with a full mock exam, weak-area review process, and final exam-day checklist.

What makes this course effective for beginners

Many learners preparing for AI-900 are business analysts, project coordinators, sales professionals, administrators, managers, or career changers who understand technology at a general level but do not write code. This course is intentionally designed for that audience. Concepts such as machine learning, computer vision, natural language processing, and generative AI are explained using plain language and practical business examples before connecting them to Azure services.

You will not just memorize definitions. You will learn how Microsoft frames questions, how to identify key wording in answer choices, and how to distinguish similar services based on common use cases. The course outline also builds progressive confidence, starting with AI basics and exam strategy before moving into the more service-oriented domains.

Course structure and study experience

Across six chapters and twenty-four lesson milestones, you will move from foundational orientation to targeted domain mastery and final exam simulation. Each chapter contains six internal sections so the learning path feels predictable and easy to follow. Practice is integrated throughout the blueprint because certification success depends on both understanding and recognition.

  • Chapter 1: exam overview, registration, scoring, and study planning
  • Chapter 2: Describe AI workloads
  • Chapter 3: Fundamental principles of ML on Azure
  • Chapter 4: Computer vision workloads on Azure and NLP workloads on Azure
  • Chapter 5: Generative AI workloads on Azure and mixed-domain review
  • Chapter 6: full mock exam, weak spot analysis, and final review

This approach helps you build a strong mental map of the exam rather than treating the topics as isolated facts. It is especially helpful for learners who prefer structured preparation over random video watching or last-minute cramming.

Why this blueprint helps you pass

Passing AI-900 requires more than general AI awareness. You need to recognize official objective language, understand the differences between major Azure AI solution categories, and stay calm when faced with scenario-based questions. This course blueprint is designed around those exact needs. It combines conceptual clarity, objective-by-objective coverage, and repeated exposure to exam-style thinking.

Whether your goal is to validate your knowledge, begin a Microsoft certification journey, or strengthen your professional credibility around AI topics, this course provides a practical launch point. To begin your prep, register for free. You can also browse all courses to continue your certification path after AI-900.

What You Will Learn

  • Describe AI workloads and common AI solutions in business scenarios for the AI-900 exam
  • Explain fundamental principles of machine learning on Azure, including model types and responsible AI concepts
  • Identify computer vision workloads on Azure and choose the right Azure AI services for image and video tasks
  • Describe natural language processing workloads on Azure, including text analytics, translation, and conversational AI
  • Understand generative AI workloads on Azure, including copilots, prompt concepts, and Azure OpenAI use cases
  • Apply exam strategies, interpret Microsoft-style questions, and complete a full AI-900 mock exam with confidence

Requirements

  • Basic IT literacy and comfort using web applications
  • No prior certification experience required
  • No programming background needed
  • Interest in Microsoft Azure and business uses of AI
  • Willingness to practice with exam-style questions and review key terms

Chapter 1: AI-900 Exam Foundations and Study Strategy

  • Understand the AI-900 exam structure and objectives
  • Set up registration, scheduling, and test delivery expectations
  • Learn scoring, question styles, and passing strategy
  • Build a beginner-friendly study plan and revision routine

Chapter 2: Describe AI Workloads and Core AI Concepts

  • Recognize major AI workloads tested on AI-900
  • Differentiate AI, machine learning, deep learning, and generative AI
  • Connect business problems to Azure AI solution types
  • Practice Microsoft-style questions on Describe AI workloads

Chapter 3: Fundamental Principles of Machine Learning on Azure

  • Understand supervised, unsupervised, and reinforcement learning
  • Compare classification, regression, and clustering scenarios
  • Explain Azure machine learning concepts and responsible AI
  • Practice exam-style questions on Fundamental principles of ML on Azure

Chapter 4: Computer Vision Workloads and NLP Workloads on Azure

  • Identify common computer vision workloads on Azure
  • Explain OCR, image analysis, face-related concepts, and video use cases
  • Describe core NLP workloads and language service scenarios
  • Practice exam-style questions on Computer vision and NLP workloads on Azure

Chapter 5: Generative AI Workloads on Azure and Cross-Domain Review

  • Understand generative AI workloads on Azure for the AI-900 exam
  • Explain prompts, copilots, foundation models, and Azure OpenAI concepts
  • Review domain overlaps and service selection across the exam
  • Practice exam-style questions on Generative AI workloads on Azure

Chapter 6: Full Mock Exam and Final Review

  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist

Daniel Mercer

Microsoft Certified Trainer and Azure AI Specialist

Daniel Mercer is a Microsoft Certified Trainer who specializes in Azure AI and cloud fundamentals instruction for beginner learners. He has guided certification candidates through Microsoft exam objectives with a practical, exam-focused teaching style built around clear explanations and realistic practice questions.

Chapter 1: AI-900 Exam Foundations and Study Strategy

The Microsoft Azure AI Fundamentals certification, also known as AI-900, is designed for learners who want to understand core artificial intelligence concepts and Microsoft Azure AI services without needing a deep technical background. This makes it especially suitable for business analysts, sales professionals, project managers, functional consultants, students, and decision-makers who need to speak confidently about AI workloads, Azure services, and business use cases. In exam-prep terms, this is an important distinction: the test does not expect you to build advanced machine learning pipelines or write production code, but it does expect you to recognize what each AI workload does, when an Azure service is appropriate, and how Microsoft frames responsible AI.

This chapter gives you the foundation for the entire course by helping you understand what the AI-900 exam is really testing, how the exam experience works, how scoring should shape your strategy, and how to build a study routine that fits non-technical learners. Many candidates make the mistake of rushing straight into tools and service names. Strong exam preparation starts earlier: first learn the blueprint, then learn the language of the exam, and only then begin memorizing details. That approach improves confidence and reduces the feeling that Azure terminology is overwhelming.

Across this course, you will prepare to describe AI workloads and common AI solutions in business scenarios, explain fundamental machine learning principles on Azure, identify computer vision workloads and services, describe natural language processing solutions, and understand generative AI use cases such as copilots, prompts, and Azure OpenAI. This first chapter connects those outcomes to the exam domains so you can study with purpose. Instead of seeing isolated topics, you will see how Microsoft organizes them into testable objectives.

The AI-900 exam typically rewards conceptual clarity more than memorization depth. For example, you may need to distinguish between machine learning and generative AI, between image classification and object detection, or between text analytics and conversational AI. You may also need to choose the best Azure service for a business scenario. That means your study strategy should focus on matching keywords in a question to the right workload and service category. Exam Tip: On AI-900, many wrong answers look plausible because they belong to the same broad family of AI solutions. The key is to identify the exact task being described, not just the general area.

As you work through this chapter, pay attention to three themes that will appear throughout the book. First, Microsoft likes scenario-based thinking, so always ask what business problem is being solved. Second, AI-900 is a fundamentals exam, so questions often test recognition and differentiation rather than implementation detail. Third, the exam includes responsible AI concepts, which means ethical and trustworthy AI principles are not optional background knowledge; they are part of the scored objective set.

By the end of this chapter, you should know how the exam is structured, what the different question experiences may feel like, how to schedule the test, how each official domain maps to the course lessons ahead, and how to prepare effectively as a beginner. You should also begin building an exam mindset: read carefully, look for service-purpose matches, avoid overthinking, and answer from Microsoft’s recommended approach rather than from personal preference or outside-platform experience.

Practice note for this chapter's milestones (understanding the exam structure and objectives, setting up registration and delivery expectations, and learning scoring, question styles, and passing strategy): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 1.1: Overview of the Microsoft Azure AI Fundamentals certification
Section 1.2: AI-900 exam format, timing, question types, and scoring model
Section 1.3: Registration process, test center vs online proctored options
Section 1.4: Official exam domains and how they map to this course
Section 1.5: Study strategy for non-technical professionals and first-time test takers
Section 1.6: Common exam traps, glossary building, and revision checklist

Section 1.1: Overview of the Microsoft Azure AI Fundamentals certification

AI-900 is Microsoft’s entry-level certification for artificial intelligence concepts on Azure. It is intentionally broad rather than deep. The exam is meant to validate that you understand what AI can do in business settings and which Azure offerings align to common workloads such as machine learning, computer vision, natural language processing, and generative AI. For non-technical professionals, this is a major advantage because the exam is less about engineering and more about informed decision-making, service recognition, and conceptual understanding.

From an exam coaching perspective, think of AI-900 as a “map the need to the solution” exam. You are likely to encounter business-oriented scenarios where a company wants to classify images, extract text, analyze customer sentiment, build a chatbot, forecast outcomes, or generate content. Your job is to identify the AI workload involved and select the Azure service or concept that best fits. This is why memorizing lists without understanding categories usually fails. You need a mental framework for what type of problem belongs to what type of AI capability.

The certification also establishes a vocabulary base. Terms like classification, regression, anomaly detection, object detection, optical character recognition, language understanding, prompt, copilot, and responsible AI are all exam-relevant. You do not need to become a data scientist, but you do need to know enough to tell these terms apart. Exam Tip: If two answer choices sound similar, ask yourself which one describes the outcome the business wants. AI-900 often tests the result of the workload, not the internal mechanics.

Another key point is that this exam reflects Microsoft’s Azure ecosystem. Candidates who have seen AI tools elsewhere sometimes choose answers based on generic industry knowledge. The exam, however, asks what Azure offers and how Microsoft names services and capabilities. Therefore, one of your first goals should be to become comfortable with Microsoft terminology and service groupings. This course will help you build that exam-specific familiarity.

Section 1.2: AI-900 exam format, timing, question types, and scoring model

The AI-900 exam commonly includes a mix of multiple-choice, multiple-select, drag-and-drop, matching, and scenario-based items. Microsoft exam experiences can vary slightly, so candidates should expect the format to evolve over time. The safest preparation strategy is to focus on understanding, not pattern memorization. If you truly know what a service does and what problem it solves, you can handle different question styles with much less stress.

Timing matters because fundamentals candidates sometimes spend too long on early questions. You may see a range of item types, and some can take longer to read because of business scenarios or service descriptions. Build the habit of extracting the key requirement quickly: is the question asking you to identify a workload, choose a service, apply a responsible AI principle, or recognize a machine learning model type? Once you identify the task, the answer becomes more manageable.

Microsoft exams are generally scored on a scale of 1 to 1,000, with 700 as the passing score. That score is not a simple percentage conversion, so avoid trying to calculate your result question by question during the test. Instead, focus on maximizing correct answers across all domains. Exam Tip: Do not panic if a few questions feel unfamiliar. Scaled scoring means your best strategy is to remain steady, answer what you know confidently, and avoid losing time through anxiety.

Question wording can be a trap. Fundamentals exams often include answer options that are technically related but not the best fit. For example, a question may describe extracting meaning from text, but one distractor might involve translation while another involves sentiment analysis. Both are natural language capabilities, but only one matches the actual requirement. Read for precision. Look for words such as classify, detect, extract, predict, generate, translate, summarize, or converse. These verbs often point directly to the correct Azure AI category.

Finally, remember that AI-900 is not intended to test memorized implementation steps. You are more likely to be asked what service should be used than to be asked how to code it. That is good news for first-time test takers, but it also creates a trap: some candidates overcomplicate straightforward questions. If the scenario is simple, the intended answer often is too.

Section 1.3: Registration process, test center vs online proctored options

Registering for AI-900 is part of your preparation strategy, not just an administrative step. When you schedule the exam, you create a deadline that helps structure your study plan. Most candidates register through the Microsoft certification portal and select an available delivery option. You will typically choose between taking the exam at a physical test center or through an online proctored session. Each option has advantages, and the right choice depends on your environment, comfort level, and ability to control distractions.

A test center can be the better option if you prefer a quiet, standardized setting and do not want to worry about internet stability, webcam checks, room scans, or interruptions at home. For many first-time test takers, this reduces stress because the environment is designed for exams. Online proctoring, on the other hand, offers convenience and scheduling flexibility. It can work very well if you have a reliable internet connection, a private room, and confidence that you can meet all check-in requirements.

Be practical about the delivery method. If you choose online proctoring, test your equipment early and review the rules carefully. Unexpected technical issues can disrupt concentration before the exam even starts. Exam Tip: Treat the online environment as part of your exam readiness. A weak setup can hurt performance just as much as weak content knowledge.

You should also plan identification, arrival or check-in timing, and any rescheduling policies in advance. Last-minute confusion creates avoidable anxiety. If you are balancing work and family responsibilities, choose a date that allows for final revision rather than forcing you into a rushed attempt. Strong candidates often schedule the exam for a time of day when they are mentally sharp. This sounds simple, but it matters. AI-900 is not deeply technical, yet it still rewards careful reading and clear thinking, and both are harder when you are tired or distracted.

Section 1.4: Official exam domains and how they map to this course

The AI-900 exam is organized around major knowledge areas that align closely with the learning outcomes of this course. Understanding this mapping is one of the most effective ways to study efficiently. Instead of seeing the syllabus as a long list of disconnected topics, you should group material into the exam’s core domains: AI workloads and considerations, fundamental machine learning concepts on Azure, computer vision workloads, natural language processing workloads, and generative AI workloads on Azure.

This course is built to mirror that structure. Early lessons help you describe AI workloads in business scenarios, which supports the domain focused on common AI solutions and foundational concepts. The machine learning outcome in the course maps to the exam’s expectation that you understand basic model types, training ideas, and responsible AI principles. When you later study computer vision, natural language processing, and generative AI, you will be working directly in the exam’s service-and-use-case decision space.

For exam purposes, pay special attention to where concepts overlap. A business scenario might involve multiple technologies, but the question usually targets one primary requirement. For example, a customer support scenario might mention text, conversation, and summarization, yet the tested concept may be conversational AI or generative AI depending on the wording. Exam Tip: When a scenario sounds broad, identify the main action the system must perform. Microsoft typically writes questions so that one requirement is central and one service best addresses it.

Another important domain connection is responsible AI. Many candidates mistakenly treat it as a side topic, but it is part of the exam’s conceptual core. You should know that Microsoft expects AI systems to be fair, reliable and safe, private and secure, inclusive, transparent, and accountable. These principles can appear directly or indirectly in questions that ask about trustworthy deployment, minimizing harm, or improving interpretability. As you progress through the course, always connect technical capability with responsible use, because that is how the exam framework views modern AI.

Section 1.5: Study strategy for non-technical professionals and first-time test takers

If you are new to Azure or new to certification exams, the best study strategy is layered learning. Start with the big categories of AI workloads, then learn the Azure services associated with each category, and finally practice distinguishing similar-sounding capabilities. Non-technical learners often succeed when they avoid trying to master everything at once. Your goal is not to become an engineer in a week. Your goal is to become fluent in the exam’s language and logic.

A practical weekly routine works better than occasional marathon sessions. Begin by reading one topic area at a time, then create short notes in plain business language. For example, define what problem a service solves, what kind of input it uses, and what kind of output it produces. This keeps you grounded in business outcomes rather than technical jargon. Then review those notes with flashcards, summary sheets, or spaced repetition. Repeat frequently. Fundamentals knowledge becomes much easier when you revisit the same terms in small cycles.

Use a three-step revision method. First, learn the concept. Second, compare it with nearby concepts that might confuse you. Third, explain it aloud as if speaking to a coworker. If you can explain the difference between image classification and object detection or between sentiment analysis and translation in simple words, you are moving toward exam readiness. Exam Tip: The ability to explain a concept simply is often a stronger predictor of passing than the ability to repeat a technical definition.

For first-time test takers, confidence comes from familiarity with question style. Practice reading scenario wording slowly enough to catch details but quickly enough to manage time. Also, schedule review sessions specifically for weak domains rather than rereading your strongest topics. Many candidates waste time polishing areas they already know. A smarter strategy is balanced coverage across all official objectives because AI-900 rewards breadth.

Finally, keep your expectations realistic. You do not need perfection. You need consistent understanding across the domains and the ability to choose the best answer from Microsoft’s perspective. That mindset is especially helpful for business professionals who may have practical experience but limited exposure to Azure naming conventions.

Section 1.6: Common exam traps, glossary building, and revision checklist

One of the most common AI-900 traps is selecting an answer that belongs to the right broad category but the wrong specific task. For instance, candidates may see a text-related requirement and immediately think any language service must be acceptable. The exam is more precise than that. It expects you to know whether the task is detecting key phrases, translating language, recognizing named entities, answering questions in a conversational format, or generating new content. Similar traps appear in vision and machine learning topics as well.

A second trap is overvaluing outside experience. If you have used AI products from other vendors, be careful not to substitute generic industry terms for Microsoft’s service names and concepts. Answer based on Azure. A third trap is ignoring responsible AI because it seems less technical. In reality, responsible AI questions are often straightforward points if you have learned the principles well. Skipping them is an unnecessary risk.

To strengthen retention, build a personal glossary as you study. Include the term, a one-line definition, a business example, and any confusing look-alikes. This is especially powerful for non-technical learners because it converts abstract terminology into decision-ready knowledge. Your glossary should include workload names, service names, model types, and responsible AI principles. Exam Tip: Add “how this appears in a question” notes to your glossary. For example, write that “predict a number” often signals regression, while “sort into categories” often signals classification.
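The glossary habit above can be kept as simple structured notes. The sketch below is a minimal, illustrative Python example using only the standard library; the entries, wording, and "exam signal" phrases are study-aid assumptions, not official Microsoft definitions.

```python
# A minimal sketch of a personal AI-900 glossary kept as structured notes.
# Entries are illustrative study aids, not an official Microsoft list.
glossary = {
    "regression": {
        "definition": "Predicts a numeric value from input features",
        "business_example": "Forecasting next month's sales revenue",
        "look_alikes": ["classification"],
        "exam_signal": "Phrases like 'predict a number' or 'estimate a value'",
    },
    "classification": {
        "definition": "Assigns an input to one of a fixed set of categories",
        "business_example": "Flagging an email as spam or not spam",
        "look_alikes": ["regression", "clustering"],
        "exam_signal": "Phrases like 'sort into categories' or 'is it X or Y'",
    },
}

def revision_card(term: str) -> str:
    """Format one glossary entry as a quick revision card."""
    entry = glossary[term]
    return (f"{term.upper()}\n"
            f"  What: {entry['definition']}\n"
            f"  Example: {entry['business_example']}\n"
            f"  Watch out for: {', '.join(entry['look_alikes'])}\n"
            f"  Exam signal: {entry['exam_signal']}")

print(revision_card("regression"))
```

The point of the structure is the "look_alikes" and "exam_signal" fields: they force you to record confusable terms and question wording alongside each definition, which is exactly what the glossary advice above recommends.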

  • Know the exam domains and the weight of each broad topic area.
  • Be able to distinguish AI workloads by business outcome.
  • Recognize core Azure AI services and their primary use cases.
  • Understand basic machine learning model types and responsible AI principles.
  • Review testing logistics, identification requirements, and delivery rules.
  • Practice reading carefully to identify keywords and eliminate distractors.
  • Create a final glossary sheet for rapid revision before exam day.

As a final revision habit, use a checklist in the last week before the exam. Confirm that you can explain each domain in plain language, identify common distractors, and stay calm under timed conditions. This chapter gives you the exam foundation; the rest of the course will now build the specific knowledge you need to pass with confidence.

Chapter milestones
  • Understand the AI-900 exam structure and objectives
  • Set up registration, scheduling, and test delivery expectations
  • Learn scoring, question styles, and passing strategy
  • Build a beginner-friendly study plan and revision routine
Chapter quiz

1. A sales manager with no development background wants to earn the AI-900 certification. Which expectation best aligns with the purpose of the exam?

Correct answer: The exam focuses on recognizing AI workloads, Azure AI services, and business use cases at a foundational level
AI-900 is a fundamentals exam that measures conceptual understanding of AI workloads, Azure AI services, and responsible AI principles. It is designed for non-technical and technical learners alike. Option A is incorrect because AI-900 does not expect advanced implementation or production deployment skills. Option C is incorrect because the certification is specifically suitable for business users, students, and decision-makers, not only technical specialists.

2. A learner begins studying by memorizing Azure service names before reviewing the exam objectives. Based on recommended AI-900 preparation strategy, what should the learner do first?

Correct answer: Start by understanding the exam blueprint and objectives, then connect later study topics to those domains
A strong AI-900 study approach starts with the exam structure and objective domains so candidates know what Microsoft is actually testing. This helps organize later study and reduces confusion. Option B is incorrect because practice questions are useful but should not replace understanding the official objectives. Option C is incorrect because AI-900 rewards matching services to workloads and scenarios more than memorizing isolated product details.

3. A candidate sees a scenario-based AI-900 question with several Azure services listed, and two answers seem related to language solutions. Which test-taking strategy is most appropriate?

Correct answer: Identify the exact task in the scenario and match keywords to the most appropriate workload or service
AI-900 commonly includes plausible distractors from the same general AI family, so success depends on identifying the exact business task described and selecting the best-matching Azure workload or service. Option A is incorrect because even on a fundamentals exam, candidates must differentiate between related tasks such as text analytics versus conversational AI. Option C is incorrect because answers should align with Microsoft's recommended framing and Azure service purposes, not outside-platform habits.

4. A project coordinator asks what kinds of knowledge are scored on the AI-900 exam. Which statement is most accurate?

Correct answer: Responsible AI principles are part of the official objectives alongside AI workloads and Azure AI services
Microsoft includes responsible AI as part of the scored AI-900 objective set, so candidates should study ethical and trustworthy AI principles along with service categories and use cases. Option A is incorrect because the chapter emphasizes that responsible AI is not optional background knowledge. Option B is incorrect because AI-900 is not primarily an implementation exam and does not center on coding or advanced deployment procedures.

5. A beginner is planning the final week before the AI-900 exam. Which revision plan best fits the exam's style and passing strategy?

Correct answer: Review domain objectives, practice distinguishing similar AI workloads, and focus on reading scenario wording carefully
AI-900 rewards conceptual clarity, recognition of differences between similar workloads, and careful reading of scenario-based questions. A revision plan centered on domain review and service-purpose matching is most effective. Option B is incorrect because advanced tuning and SDK syntax are beyond the expected level for this fundamentals exam. Option C is incorrect because AI-900 questions often include plausible distractors, so careful reading and deliberate matching are important for scoring well.

Chapter 2: Describe AI Workloads and Core AI Concepts

This chapter maps directly to a core AI-900 exam objective: describing AI workloads and recognizing the kinds of business problems AI can solve. For non-technical candidates, this domain is often one of the most approachable, but it also includes some of the most common exam traps. Microsoft frequently presents short business scenarios and expects you to identify the workload category first, then connect it to the most suitable Azure AI solution at a high level. That means your success depends less on coding knowledge and more on pattern recognition.

At this stage of the course, your goal is to recognize major AI workloads tested on AI-900, differentiate broad concepts such as AI, machine learning, deep learning, and generative AI, and connect business problems to solution types you are likely to see in Microsoft-style questions. The exam is not asking you to build models, tune algorithms, or write prompts in production. Instead, it tests whether you understand what a given workload is intended to do and whether you can identify the right category of Azure capability for the scenario.

Think of AI as the umbrella term. Under that umbrella, machine learning is a method that learns patterns from data to make predictions or decisions. Deep learning is a subset of machine learning that uses layered neural networks and is especially common in image, audio, and language tasks. Generative AI is a newer class of AI systems that creates new content such as text, images, code, or summaries based on patterns learned from very large datasets. The exam often checks whether you can separate traditional predictive use cases from generative ones. Predicting customer churn from historical data is not the same as generating a product description from a prompt.

A strong exam approach is to read scenario questions and ask three things. First, what is the business trying to accomplish: predict, classify, detect, extract, converse, generate, rank, or recommend? Second, what type of data is involved: tabular business data, images, video, text, speech, or documents? Third, is the scenario asking for analysis of existing content or creation of new content? These three filters quickly eliminate distractors.

Exam Tip: The AI-900 exam usually rewards category-level understanding. If a scenario says a company wants to identify defects in product images, focus on the workload category of computer vision before worrying about exact product names. If it says a bot should answer employee questions in natural language, think conversational AI and natural language processing before choosing a service.

Another common trap is confusing technologies that seem related. For example, document intelligence is not just generic text analytics; it focuses on extracting structured information from forms, invoices, receipts, and similar documents. Recommendations are not the same as ranking. Ranking orders options by relevance, while recommendations suggest items a user is likely to prefer based on behavior or patterns. Anomaly detection is not a prediction of a future number; it identifies unusual patterns or outliers in current or historical data.

As you work through this chapter, remember the exam objective language. Microsoft wants you to describe AI workloads and common AI solutions in business scenarios. That means you should practice translating plain-language business needs into AI categories. When the wording is vague, choose the answer that best matches the stated business outcome rather than the most advanced-sounding technology. On AI-900, simple and direct is often correct.

This chapter also prepares you for later chapters on machine learning, computer vision, natural language processing, and generative AI by building the classification skills that those sections assume. If you can identify the workload cleanly, you will answer service-selection questions much faster and with more confidence.

Practice note: as you work toward recognizing the major AI workloads tested on AI-900, keep a short study log. For each practice scenario, record the workload you chose, the clue words that pointed you there, and whether you were right. Reviewing that log exposes recurring mistakes and makes your reasoning transferable to new questions.

Sections in this chapter
Section 2.1: Official domain overview - Describe AI workloads
Section 2.2: Common AI workloads: prediction, anomaly detection, ranking, and recommendations
Section 2.3: Conversational AI, computer vision, natural language processing, and document intelligence
Section 2.4: Generative AI basics and how it differs from predictive AI
Section 2.5: Matching business scenarios to Azure AI services at a high level
Section 2.6: Exam-style practice set and answer logic for AI workloads

Section 2.1: Official domain overview - Describe AI workloads

In the official AI-900 skills outline, describing AI workloads is one of the foundational objectives because it establishes the vocabulary used throughout the rest of the exam. Microsoft expects you to understand what kinds of problems AI can address and to distinguish between broad workload areas such as machine learning, computer vision, natural language processing, conversational AI, document intelligence, and generative AI. This is not a coding objective. It is an interpretation objective.

When a question begins with a business need, the first exam skill being tested is often your ability to classify the workload correctly. For example, if a company wants to forecast sales, that points toward a predictive machine learning workload. If it wants to extract invoice totals and vendor names from scanned documents, that indicates document intelligence. If it wants to create a chatbot for support requests, that is conversational AI. If it wants to generate marketing copy from a prompt, that is generative AI.

A useful mental model is to divide AI workloads into two broad groups. The first group analyzes existing data to classify, predict, detect, extract, or understand. The second group generates new content such as text, images, or summaries. The AI-900 exam uses both groups, and your job is to avoid mixing them up. Traditional machine learning usually predicts or classifies based on patterns in historical data. Generative AI creates new outputs based on prompts and learned patterns.

Exam Tip: Pay close attention to action verbs in the scenario. Words like predict, classify, detect, identify, extract, rank, and recommend usually point to traditional AI or machine learning workloads. Words like generate, draft, summarize, rewrite, answer from prompts, and create often point to generative AI.

The exam also expects you to differentiate AI, machine learning, deep learning, and generative AI. AI is the broadest term. Machine learning is a subset focused on learning from data. Deep learning is a subset of machine learning that is especially powerful for complex pattern recognition in images, speech, and language. Generative AI is often built with advanced deep learning architectures and focuses on content creation. A common trap is assuming deep learning and generative AI are interchangeable. They are related, but not the same. Deep learning can power image classification without generating anything new.

Finally, remember that AI-900 is a fundamentals exam. You do not need detailed mathematical knowledge. You do need strong concept matching. If you can identify what the business wants, what kind of data is involved, and whether the task is analysis or generation, you will perform well on this domain.

Section 2.2: Common AI workloads: prediction, anomaly detection, ranking, and recommendations

Several classic AI workloads appear repeatedly in AI-900 scenarios because they map cleanly to common business problems. Four especially important ones are prediction, anomaly detection, ranking, and recommendations. These sound similar at first glance, which is why they are often used to create distractors in exam questions.

Prediction usually means using historical data to estimate a future value or outcome. Examples include forecasting sales, estimating delivery delays, predicting customer churn, or determining whether a loan application is likely to default. In business settings, prediction often supports planning, budgeting, risk management, and operations. On the exam, prediction is usually the right category when the organization wants to use past patterns to estimate something not yet known.

Anomaly detection focuses on identifying unusual behavior, outliers, or events that differ from expected patterns. Typical examples include fraudulent transactions, unusual equipment sensor readings, suspicious account activity, or sudden changes in website traffic. The key clue is that the business wants to spot something abnormal rather than forecast a future business metric. If the scenario emphasizes irregular, suspicious, rare, or out-of-pattern activity, anomaly detection is likely the correct interpretation.
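If a small code sketch helps, the core idea of anomaly detection, flagging values that sit far outside the normal pattern, can be illustrated with a simple statistical rule. The function name, threshold, and data below are invented for demonstration; this is not how Azure's anomaly detection services work internally.

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Daily transaction amounts; one value is far outside the normal pattern.
amounts = [102, 98, 101, 99, 100, 103, 97, 100, 500]
print(flag_anomalies(amounts))  # [500] - the unusual transaction is flagged
```

Notice that the function does not forecast anything; it only asks which existing values look abnormal. That is exactly the distinction the exam expects between anomaly detection and prediction.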

Ranking is the process of ordering items by relevance, priority, or likely usefulness. Search results are a classic example. A system may rank articles, products, or support tickets so that the most relevant ones appear first. Recommendations, by contrast, suggest items a user may want based on preferences, behavior, or similarity to other users. Streaming platforms recommending movies and online stores suggesting products are standard recommendation examples.

Exam Tip: If the scenario says “show the most relevant result first,” think ranking. If it says “suggest products the customer may also like,” think recommendations. Both involve ordering or choosing items, but ranking optimizes relevance within a known set, while recommendations personalize choices for a user.

A common exam trap is reading too quickly and treating all these workloads as generic machine learning. While they do all fit under machine learning, the exam often asks for the specific workload category rather than the general umbrella. Another trap is confusing anomaly detection with classification. Classification assigns items to known categories such as approved or denied, spam or not spam. Anomaly detection looks for rare or unusual cases that may not fit normal patterns.

For non-technical candidates, the best strategy is to anchor each workload to a business verb: prediction forecasts, anomaly detection flags unusual behavior, ranking orders by relevance, and recommendations suggest likely preferences. If you memorize those four pairings, many scenario questions become much easier to decode.
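You can even turn those verb pairings into a tiny self-quiz tool. The keywords and category names below are invented study-aid shorthand, not a real classifier or any Azure service; it simply echoes the mnemonic above.

```python
# Mnemonic mapping of scenario clue words to workload categories (study aid only)
WORKLOAD_KEYWORDS = {
    "machine learning (prediction)": ["predict", "forecast", "estimate"],
    "anomaly detection": ["unusual", "fraud", "outlier"],
    "ranking": ["relevance", "order results"],
    "recommendations": ["suggest", "may also like"],
}

def classify_scenario(text):
    """Return the first workload whose clue words appear in the scenario text."""
    text = text.lower()
    for workload, keywords in WORKLOAD_KEYWORDS.items():
        if any(k in text for k in keywords):
            return workload
    return "unclassified"

print(classify_scenario("Detect fraudulent card activity"))   # anomaly detection
print(classify_scenario("Forecast next quarter sales"))       # machine learning (prediction)
```

Real exam questions bury the clue words in longer scenarios, but the decoding habit is the same: find the business verb first.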

Section 2.3: Conversational AI, computer vision, natural language processing, and document intelligence

This section covers some of the most recognizable AI workloads on the AI-900 exam because they align to familiar business use cases. Conversational AI enables systems such as chatbots and virtual agents to interact with users through natural language. Computer vision helps systems interpret images and video. Natural language processing, or NLP, helps systems understand, analyze, and work with text and speech. Document intelligence extracts structured information from forms and documents.

Conversational AI appears in scenarios where an organization wants automated customer support, internal help desks, virtual assistants, or question-answering experiences. The key signal is interaction. The system is expected to respond to a user, often in a dialogue format. On the exam, conversational AI may overlap with NLP because bots need language understanding, but the workload emphasis is the conversation experience itself.

Computer vision focuses on images and video. Common tasks include image classification, object detection, facial analysis concepts, optical character recognition, and image tagging. If a scenario involves camera feeds, product photos, scanned labels, or medical images, computer vision is likely relevant. The exam often tests whether you can identify image-based understanding as distinct from text-based understanding.

NLP focuses on language tasks such as sentiment analysis, key phrase extraction, entity recognition, translation, summarization, and speech-related interactions. If the scenario involves emails, reviews, transcripts, social media posts, or multilingual content, NLP should be high on your list. The exam may also blend NLP with conversational AI, especially when bots must understand user intents or provide language-based responses.

Document intelligence deserves special attention because candidates often confuse it with generic OCR or text analytics. Document intelligence is used when organizations need to pull fields, values, tables, and structure from business documents such as receipts, invoices, tax forms, ID cards, and contracts. The task is not just reading text, but turning document content into usable structured data.

Exam Tip: If the scenario emphasizes forms, receipts, invoices, or extracting named fields from scanned business documents, choose document intelligence over generic NLP. If it emphasizes understanding free-form customer feedback, choose NLP. If it emphasizes visual content like photos or video frames, choose computer vision.

Common traps include selecting conversational AI just because a user is involved, even when the real workload is text analysis, or choosing NLP when the core challenge is extracting layout-aware document fields. Always focus on the primary business outcome. Is the company trying to converse, see, understand language, or extract document structure? That is the distinction the exam wants you to make.

Section 2.4: Generative AI basics and how it differs from predictive AI

Generative AI is now a major part of the AI-900 learning path, and Microsoft expects candidates to understand it at a conceptual level. Generative AI systems create new content, such as text, summaries, code, images, or chat responses, based on prompts and learned patterns from large datasets. In Azure scenarios, you may see examples related to copilots, content drafting, summarization, question answering, and natural language generation.

The most important distinction is between generative AI and predictive AI. Predictive AI uses historical data to estimate outcomes, assign categories, or detect patterns. It answers questions like “Will this customer churn?” or “Is this transaction unusual?” Generative AI answers questions like “Can you draft an email?” or “Summarize these notes into action items.” One predicts or classifies; the other creates.

Prompting is central to generative AI. A prompt is the instruction or context given to the model to guide its output. On the exam, you do not need prompt engineering depth, but you should know that prompts influence output quality, tone, structure, and relevance. You should also understand that copilots are generative AI assistants embedded in applications or workflows to help users perform tasks more efficiently.

Microsoft also expects awareness of responsible AI concerns in generative scenarios. Because models generate original-looking content, they can produce inaccurate, biased, or inappropriate responses. This is why grounding, content filtering, human oversight, and transparency matter. Even on a fundamentals exam, responsible AI is not optional background material. It is a tested concept.

Exam Tip: If a question describes creating new text from user instructions, summarizing long documents into concise output, drafting responses, or powering a copilot experience, think generative AI. If it describes making a forecast or assigning a label based on historical records, think predictive machine learning.

A frequent trap is choosing generative AI just because the scenario includes text. Not all text workloads are generative. Sentiment analysis, translation, and entity recognition are NLP analysis tasks, not generative tasks. Another trap is assuming generative AI is always the best solution. On the exam, choose the technology that directly matches the requirement, not the one that sounds newest or most advanced.

At a high level, Azure OpenAI is commonly associated with generative AI use cases on Azure. For AI-900, you mainly need to recognize use cases and understand the difference between generating content and analyzing existing data.

Section 2.5: Matching business scenarios to Azure AI services at a high level

The AI-900 exam often moves from workload identification to service matching. You are not expected to know every configuration detail, but you should be able to connect common scenarios to the correct high-level Azure AI service family. The key is to avoid overcomplicating the choice.

Match scenarios to Azure service families at a high level:
  • Building, training, and managing machine learning models with your own data: Azure Machine Learning, the broad platform to know.
  • Vision tasks such as analyzing images, reading text from images, or detecting visual features: Azure AI Vision.
  • Extracting fields and structure from forms and business documents: Azure AI Document Intelligence.
  • Text analysis, translation, summarization, or language understanding: Azure AI Language.
  • Bots or question-answering interactions: conversational AI, often built on Azure AI language-related capabilities.
  • Creating content with prompts, copilots, or large language model experiences: Azure OpenAI.

This is where business language matters. A retailer wanting “product suggestions based on shopping behavior” points toward a recommendation-style machine learning solution, not computer vision. A logistics company wanting “to read shipping labels from images” points toward vision. An accounts-payable team wanting “to pull invoice numbers and totals from scanned PDFs” points toward document intelligence. A global support center wanting “to translate customer messages and detect sentiment” points toward language services. An internal productivity initiative wanting “an assistant to draft summaries and answer questions from prompts” points toward Azure OpenAI.

Exam Tip: First identify the workload, then choose the service. Candidates often reverse this process and get distracted by similar product names. Workload-first thinking is more reliable under time pressure.

A common trap is choosing Azure Machine Learning for every data-related scenario. While it is a core machine learning platform, many AI-900 questions target prebuilt AI services rather than custom model development. Another trap is mixing up Azure AI Vision with Azure AI Document Intelligence. Vision handles image and visual analysis broadly; Document Intelligence specializes in extracting structured content from documents.

At this exam level, service matching is about fit, not architecture design. Ask yourself: does the scenario involve images, text, documents, predictions, conversations, or generated content? Once you answer that, the Azure service family usually becomes clear.

Section 2.6: Exam-style practice set and answer logic for AI workloads

Although this chapter does not include full quiz items in the text, you should still practice the reasoning pattern Microsoft uses in exam questions. Most AI workload questions are scenario-based and test whether you can identify the primary need from a few keywords. The wrong answers are usually plausible because they relate to AI broadly, but only one aligns best with the business objective described.

When reviewing practice material, start by underlining the business outcome. Is the organization trying to predict an outcome, detect unusual events, rank information, recommend products, understand language, analyze images, extract document fields, support a conversation, or generate new content? Then identify the data type involved. This step prevents common mistakes, especially when multiple answer choices seem partially correct.

For example, if a scenario mentions customer reviews, tone, and satisfaction, the answer logic points toward NLP and sentiment analysis. If it mentions scanned invoices and extracting totals, the logic points toward document intelligence. If it mentions a chat interface answering questions, conversational AI is central. If it mentions drafting content from instructions, generative AI is the best fit. If it mentions estimating future sales, predictive machine learning is the likely answer.

Exam Tip: On Microsoft-style questions, the best answer is usually the one that most directly solves the stated requirement with the least unnecessary complexity. Do not choose a custom machine learning solution if a prebuilt AI service matches exactly.

Another smart strategy is to watch for distractor wording. Terms like “image,” “text,” “document,” “conversation,” and “generate” are often the clue words that separate similar options. Also pay attention to whether the task is analyzing existing content or producing new content. That single distinction eliminates many wrong answers in this chapter’s domain.

  • Prediction: future outcome or estimate from historical data.
  • Anomaly detection: unusual pattern, fraud, fault, or outlier.
  • Ranking: order results by relevance.
  • Recommendations: suggest likely preferred items.
  • Computer vision: understand images or video.
  • NLP: understand or process text and language.
  • Document intelligence: extract structured data from forms and documents.
  • Conversational AI: interactive bot or assistant experience.
  • Generative AI: create new text, summaries, images, or responses from prompts.

Use that checklist during final review. If you can quickly classify scenarios using these patterns, you will be well prepared for this part of the AI-900 exam and ready to build on the more detailed Azure service topics that follow.

Chapter milestones
  • Recognize major AI workloads tested on AI-900
  • Differentiate AI, machine learning, deep learning, and generative AI
  • Connect business problems to Azure AI solution types
  • Practice Microsoft-style questions on Describe AI workloads
Chapter quiz

1. A retail company wants to analyze historical sales data to predict which customers are most likely to stop buying within the next 30 days. Which type of AI workload does this scenario describe?

Show answer
Correct answer: Machine learning for predictive analysis
This scenario is a classic predictive machine learning use case because the goal is to learn patterns from historical data and predict future customer churn. Generative AI is incorrect because the company is not asking the system to create new text, images, or other content. Computer vision is also incorrect because there is no image or video data involved. On AI-900, predicting an outcome from past business data is typically identified as a machine learning workload.

2. A manufacturer wants a solution that reviews photos of products on an assembly line and identifies damaged items before shipment. Which AI workload should you choose first?

Show answer
Correct answer: Computer vision
Computer vision is correct because the business problem involves analyzing images to detect defects. Natural language processing is used for understanding or analyzing text and speech, not product photos. Conversational AI is used for chatbots and natural language interactions, which does not match this inspection scenario. Microsoft-style AI-900 questions often expect you to identify the workload category before thinking about specific Azure services.

3. A company wants an application that can create first-draft product descriptions from short prompts entered by marketing staff. Which statement best describes this requirement?

Show answer
Correct answer: It is a generative AI scenario because the system creates new content
Generative AI is correct because the system is being asked to produce new text based on prompts. The first option is incorrect because image recognition is a computer vision task, and the scenario is about generating marketing text, not analyzing images. The anomaly detection option is also incorrect because anomaly detection identifies unusual patterns or outliers in data; it does not generate draft content. AI-900 commonly tests the distinction between predictive workloads and content-generation workloads.

4. An insurance company wants to extract policy numbers, customer names, and total amounts from scanned claim forms. Which AI solution type best fits this business need?

Show answer
Correct answer: Document intelligence
Document intelligence is correct because the goal is to extract structured fields from forms and scanned documents. Text translation is incorrect because the scenario does not involve converting content from one language to another. A recommendation engine is also incorrect because the company is not trying to suggest products or choices based on user behavior. A common AI-900 exam trap is confusing document extraction with general text analytics; forms processing is more specifically aligned to document intelligence.

5. A human resources department wants an internal bot that can answer employee questions such as vacation policy, benefits, and office hours by understanding natural language queries. Which workload category is the best match?

Show answer
Correct answer: Conversational AI
Conversational AI is correct because the requirement is for a bot that interacts with users in natural language and provides answers. Anomaly detection is incorrect because it identifies unusual patterns in data rather than handling user conversations. Regression is also incorrect because regression predicts numeric values, such as sales or cost, and does not support chatbot-style question answering. On AI-900, chatbot and question-answer scenarios typically map to conversational AI and natural language processing.

Chapter 3: Fundamental Principles of Machine Learning on Azure

This chapter maps directly to the AI-900 exam objective that expects you to explain the fundamental principles of machine learning on Azure in clear, business-friendly language. For this exam, Microsoft is not testing whether you can code models or tune algorithms by hand. Instead, you need to recognize common machine learning workloads, distinguish model types, understand how data is used in training, and connect those ideas to Azure services and responsible AI principles. Many candidates overcomplicate this section because they assume machine learning means mathematics first. On AI-900, the test is more focused on deciding what kind of machine learning problem a scenario describes and choosing the best conceptual approach.

You should begin with the three broad machine learning styles: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning uses labeled data, meaning the historical examples already include the answer the model is trying to predict. If a business wants to predict whether a customer will cancel a subscription, and past records show which customers actually canceled, that is a supervised learning scenario. Unsupervised learning uses unlabeled data and looks for patterns or groupings on its own. If a retailer wants to discover natural customer segments based on purchasing behavior without predefined categories, that points to clustering and unsupervised learning. Reinforcement learning is different from both because the model learns through rewards and penalties as it interacts with an environment. Although reinforcement learning appears less often in business examples on AI-900, you should still be able to recognize it in robotics, game play, route optimization, or dynamic decision systems.

One of the highest-yield exam skills is telling the difference between classification, regression, and clustering. Classification predicts a category or class, such as approve or deny, fraud or not fraud, churn or retain. Regression predicts a numeric value, such as sales amount next month, delivery time, or house price. Clustering groups similar items together when no label is provided in advance, such as segmenting shoppers into behavioral groups. The exam often hides these simple distinctions inside business wording. If the outcome is a named group, it is usually classification. If the outcome is a number, it is usually regression. If the goal is to discover groups without predefined labels, it is clustering.
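If you learn by seeing ideas in code, the clustering idea can be sketched in a few lines of Python. This is a deliberately tiny illustration with invented data and a simplified two-cluster routine on a single feature; it is not how Azure performs clustering, but it shows groups emerging with no labels provided in advance.

```python
from statistics import mean

def kmeans_1d(values, iters=10):
    """Tiny two-cluster k-means on one feature (illustrative only)."""
    c1, c2 = min(values), max(values)  # start centroids at the extremes
    for _ in range(iters):
        # Assign each value to its nearest centroid
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        # Move each centroid to the mean of its group
        c1, c2 = mean(g1), mean(g2)
    return sorted(g1), sorted(g2)

# Monthly purchase counts: two natural customer segments emerge, unlabeled
print(kmeans_1d([1, 2, 2, 3, 18, 20, 22]))  # ([1, 2, 2, 3], [18, 20, 22])
```

No one told the algorithm which customers belong together; it discovered the segments from the data alone. That is the hallmark of clustering and of unsupervised learning generally.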

Exam Tip: Read the last sentence of a scenario carefully. Microsoft often signals the answer in the business goal. Words like predict, estimate, approve, categorize, group, detect patterns, and optimize are clues to the machine learning approach being tested.

You also need a practical understanding of features, labels, training, validation, and inference. Features are the input variables used by a model, such as age, income, location, or purchase history. A label is the known outcome in supervised learning, such as whether a loan defaulted. Training is the process of using historical data to teach the model patterns. Validation is used to check how well the model performs on data beyond the training examples. Inference happens after training, when the model is used to make predictions on new data. A common exam trap is confusing training with inference. Training builds the model; inference uses the model.
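The training-versus-inference distinction can be made concrete with a tiny sketch. The numbers and the hand-rolled line fit below are invented purely for illustration; real model training involves far more, but the two phases are the same: training learns a pattern from historical features and labels, and inference applies that pattern to new input.

```python
def fit_line(xs, ys):
    """Least-squares line fit: the 'training' step for a one-feature model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Training data: features (monthly ad spend) and labels (observed sales)
spend = [10, 20, 30, 40, 50]
sales = [25, 45, 65, 85, 105]

slope, intercept = fit_line(spend, sales)  # training: learn the pattern

def predict(x):
    """Inference: apply the learned pattern to new, unseen input."""
    return slope * x + intercept

print(predict(60))  # 125.0 - an estimate for a spend value not in training data
```

Training happened once, on historical data; inference can then run on any number of new inputs. Confusing the two is exactly the exam trap the paragraph above warns about.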

Model quality is another exam target. You should know overfitting and underfitting at a conceptual level. An overfit model memorizes training data too closely and performs poorly on new data. An underfit model is too simple and fails even on the training data. The exam may describe a model with excellent training performance but poor real-world predictions; that indicates overfitting. It may also describe a model that misses obvious patterns; that suggests underfitting. You are not expected to calculate advanced metrics, but you should know that evaluation checks how useful the model is and whether it generalizes beyond the training dataset.
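Overfitting can be caricatured in code. The lookup table below is a deliberately extreme "model" that memorizes its training data perfectly but has no answer for unseen input, while a simple general rule still produces an estimate. The data and the rule are invented for illustration only.

```python
x_train = [1, 2, 3, 4]
y_train = [2, 4, 6, 8]  # the underlying pattern is y = 2x

# Extreme "overfit" model: a lookup table that memorizes training data exactly
memorized = dict(zip(x_train, y_train))

# Simpler model: learn one general rule (the average ratio y / x)
ratio = sum(y / x for x, y in zip(x_train, y_train)) / len(x_train)

print(memorized.get(5))  # None - pure memorization fails on unseen data
print(ratio * 5)         # 10.0 - the general rule still generalizes
```

A real overfit model does not fail this visibly, but the symptom is the same: excellent performance on training examples, poor performance on anything new.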

Azure Machine Learning appears on the exam at a platform-awareness level. You should understand that Azure Machine Learning is a cloud-based service for building, training, deploying, and managing machine learning solutions. It supports data preparation, automated machine learning, model training, deployment, and monitoring. AI-900 usually tests whether you can recognize when Azure Machine Learning is the right service for custom predictive models rather than prebuilt AI capabilities. If the scenario requires creating a tailored model from an organization's own business data, Azure Machine Learning is often the correct conceptual choice.

Responsible AI is part of this chapter and is highly testable. Microsoft expects you to know the major principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. These are not abstract extras; they are core exam content. If a question asks how an organization should reduce bias, explain model decisions, or protect sensitive customer information, it is testing responsible AI thinking. You should be able to match concerns such as unequal treatment, lack of explainability, or misuse of personal data to the appropriate principle.

Exam Tip: On AI-900, when two answers both sound technically possible, choose the one that best aligns with business goals, ethical use, and the most appropriate Azure service category. The exam rewards correct conceptual fit more than technical detail.

As you work through this chapter, focus on pattern recognition rather than memorizing isolated terms. Ask yourself: Is the data labeled? Is the output a category, number, or hidden grouping? Is the task custom model creation or a prebuilt AI service? Is the issue model accuracy, generalization, or ethical use? Those are exactly the distinctions Microsoft-style questions are designed to test. By the end of this chapter, you should be able to interpret machine learning scenarios with confidence, avoid common wording traps, and explain the Azure-focused fundamentals needed for the AI-900 exam.

Sections in this chapter
Section 3.1: Official domain overview - Fundamental principles of ML on Azure
Section 3.2: Core ML concepts: features, labels, training, validation, and inference

Section 3.1: Official domain overview - Fundamental principles of ML on Azure

This exam domain focuses on whether you can identify what machine learning is, when it should be used, and how Azure supports machine learning solutions at a high level. For AI-900, Microsoft expects conceptual understanding, not implementation expertise. That means you should be comfortable reading a short business scenario and determining whether the problem involves supervised learning, unsupervised learning, or reinforcement learning. You should also be able to tell whether the task is prediction, grouping, or decision optimization.

Supervised learning uses historical data that includes both inputs and correct outputs. The model learns the relationship between the two so it can make future predictions. This is common in business because many organizations have historical records with outcomes attached, such as approved claims, completed sales, or customer churn outcomes. Unsupervised learning uses data without known outcomes and is used to detect patterns, segments, or unusual behavior. Reinforcement learning learns through interaction and feedback, usually in a changing environment where actions produce rewards or penalties over time.

The Azure connection is also important. In this chapter’s scope, Azure Machine Learning is the key platform to know. It is designed for building and operationalizing machine learning models in the cloud. The exam may contrast Azure Machine Learning with Azure AI services. A useful way to think about this is simple: if you need a custom model trained with your own data, Azure Machine Learning is likely relevant. If you need a prebuilt capability like vision, speech, or language analysis without creating a model from scratch, Azure AI services are often the better fit.

Exam Tip: If the question asks for a service to create a custom prediction model using an organization’s historical data, do not choose a prebuilt AI service just because it sounds intelligent. Look for Azure Machine Learning.

Common traps include confusing automation with machine learning type. For example, a scenario about automatically sorting incoming requests into categories is still classification if the categories are known in advance. Another trap is assuming all AI is machine learning. Some Azure AI solutions expose prebuilt APIs and do not require custom model training. Read carefully to see whether the organization wants to use an existing capability or train something unique to its data.

The exam tests your ability to classify business needs correctly. You are not being asked to defend an algorithm choice. Stay at the right level: problem type, data type, business objective, and Azure service category.

Section 3.2: Core ML concepts: features, labels, training, validation, and inference


These terms appear repeatedly in Microsoft learning content and are frequent foundations behind exam questions. Features are the input values the model uses to make a prediction. In a loan scenario, features might include applicant income, credit score, debt level, and employment status. Labels are the known outputs in supervised learning, such as whether a past loan was repaid or defaulted. If you remember only one distinction, remember this: features go in, labels are the correct answers used to teach the model.

Training is the stage in which the model learns patterns from data. In supervised learning, the model looks at features and labels together. In unsupervised learning, it looks for structure in the features without labels. Validation is the process of checking how well the model performs on data beyond the examples it learned from. Even if the exam does not go deep into data splits, you should understand that validation helps determine whether the model generalizes well. Inference is what happens after a model is trained and deployed. New data is supplied, and the model returns a prediction or classification.

A common exam trap is reversing training and inference. If a scenario says a company is using an already-trained model to score incoming transactions in real time, that is inference, not training. Another trap is confusing labels with categories discovered by clustering. Clustering does not start with labels; it creates groupings based on similarity.

  • Features: input variables used by the model
  • Labels: known outcomes in supervised learning
  • Training: teaching the model from historical data
  • Validation: checking how well the model performs on separate data
  • Inference: using the trained model to make predictions on new data
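
The vocabulary above can be seen in miniature with a toy model. The sketch below is purely illustrative pure Python, not an Azure API: a one-nearest-neighbor classifier trained on made-up loan data, with each exam term labeled where it occurs.

```python
# Illustrative only: features, labels, training, validation, and inference
# in a tiny one-nearest-neighbor classifier (invented loan data).

def train(features, labels):
    """Training: the model learns from features paired with known labels."""
    return list(zip(features, labels))  # this "model" simply memorizes the pairs

def predict(model, new_point):
    """Inference: return the label of the closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda pair: distance(pair[0], new_point))
    return nearest[1]

# Features (inputs) and labels (known outcomes) from historical loans.
features = [(30_000, 620), (85_000, 710), (40_000, 650), (95_000, 760)]
labels   = ["default", "repaid", "default", "repaid"]

model = train(features, labels)

# Validation: check performance on examples held out from training.
holdout = [((88_000, 700), "repaid"), ((35_000, 600), "default")]
correct = sum(predict(model, f) == y for f, y in holdout)
print(f"validation accuracy: {correct}/{len(holdout)}")

# Inference: score a brand-new applicant the model has never seen.
print(predict(model, (90_000, 730)))
```

The point is not the algorithm; it is that training consumes historical features and labels, validation uses separate data, and inference scores new records.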

Exam Tip: When you see phrases like historical records with known outcomes, think supervised learning with labels. When you see phrases like score a new customer, detect a new fraud attempt, or predict next month’s sales, think inference on new data.

The AI-900 exam often rewards precise vocabulary. If an answer option mentions labels in a scenario that clearly involves no predefined outcomes, it is likely wrong. If an answer describes grouping similar customers without a target variable, that points away from supervised learning. The best strategy is to identify what the organization already knows from its data before you decide how the model works.

Section 3.3: Classification, regression, and clustering with business-friendly examples


This is one of the most testable parts of the chapter because Microsoft frequently frames machine learning through business scenarios. Classification predicts one of several categories. Examples include whether an email is spam, whether a patient is high risk, whether a support ticket should be routed to billing or technical support, or whether a transaction is fraudulent. The key clue is that the output is a discrete label or class.

Regression predicts a numeric value. A company may want to forecast revenue, estimate delivery time, predict product demand, or calculate the likely cost of a claim. If the answer is a number on a continuous scale, the scenario is usually regression. Candidates sometimes miss this because they focus on the word predict and assume classification. Prediction alone does not tell you the model type; the output format does.

Clustering is used when the business wants to group similar records without preassigned labels. Examples include segmenting customers by behavior, organizing stores by sales patterns, or identifying natural usage groups among app users. The organization does not begin by telling the model what each group should be called. Instead, the model finds patterns in the data structure itself.

Exam Tip: Ask one quick question: Is the result a category, a number, or a discovered group? Category equals classification, number equals regression, discovered group equals clustering.

There are also wording traps. Fraud detection can mean different things depending on how the scenario is written. If past fraud labels exist and the model is predicting fraud versus not fraud, that is classification. If the question emphasizes unusual patterns with no labeled outcomes, it may be closer to anomaly detection or unsupervised methods. Likewise, customer segmentation almost always signals clustering unless the segments already exist as labeled classes.

On AI-900, you do not need to identify every specialized algorithm. You need to map the business ask to the correct machine learning pattern. Keep your focus on business intent: categorize, estimate, or group. That is the level at which the exam usually tests this content.
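
The category/number/group test in this section can be written down as a one-line lookup. The function below is our own illustrative sketch of the exam shortcut, not anything from Azure:

```python
def ml_pattern(desired_output: str) -> str:
    """Map the kind of answer a business wants to the ML pattern AI-900 expects."""
    mapping = {
        "category": "classification",      # spam / not spam, fraud / not fraud
        "number": "regression",            # revenue forecast, delivery time
        "discovered group": "clustering",  # customer segments with no labels
    }
    return mapping.get(desired_output, "re-read the scenario")

print(ml_pattern("category"))          # classification
print(ml_pattern("number"))            # regression
print(ml_pattern("discovered group"))  # clustering
```
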

Section 3.4: Model quality concepts: overfitting, underfitting, and evaluation basics


Machine learning is not only about producing a model; it is also about determining whether that model is useful. The AI-900 exam expects you to understand quality concepts at a high level. Overfitting happens when a model learns the training data too specifically, including noise and accidental patterns, so it performs poorly on new data. Underfitting happens when a model is too simple to capture the true pattern, so it performs poorly even on the data it was trained on.

Think about overfitting as memorization and underfitting as oversimplification. If an exam question says a model performs extremely well during training but badly after deployment, overfitting is the best conceptual explanation. If a model fails to identify obvious relationships and produces weak results everywhere, underfitting is more likely. These descriptions are more important than any formula because AI-900 focuses on recognition, not advanced metrics.

Evaluation is the broader process of measuring model performance. While the exam may mention ideas like accuracy or error, you typically do not need to calculate anything. Instead, you should know why evaluation matters: it helps confirm whether the model generalizes and whether it is suitable for business use. Validation data or test data is used so the model is assessed on examples it did not simply memorize.

Exam Tip: Strong training performance alone is not proof of a good model. If the model fails on new data, the real issue is generalization, which often signals overfitting.

A common trap is assuming more complexity always means better results. In exam language, a more complex model can increase overfitting risk. Another trap is confusing poor performance with bad data only. Data quality matters, but if the scenario emphasizes mismatch between training success and real-world failure, overfitting is the likely target concept. If the scenario emphasizes the model being too simplistic or missing patterns from the start, think underfitting.
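
To make the memorization-versus-oversimplification contrast concrete, here is an illustrative pure-Python sketch with invented numbers, not a real workload: two extreme "models" for predicting delivery time from distance, compared on training data and held-out data.

```python
# Illustrative only: why strong training performance alone proves nothing.

train_data = {10: 1.0, 20: 2.1, 30: 2.9, 40: 4.2}  # distance (km) -> hours
valid_data = {15: 1.6, 35: 3.5}                     # held-out examples

def overfit_model(distance):
    """Memorizes training data exactly; has no answer for new inputs."""
    return train_data.get(distance)                 # None for unseen distances

def underfit_model(distance):
    """Always predicts the overall average, ignoring distance entirely."""
    return sum(train_data.values()) / len(train_data)

def error(model, data):
    """Mean absolute error; infinite if the model cannot answer at all."""
    preds = [model(d) for d in data]
    if None in preds:
        return float("inf")                         # fails to generalize
    return sum(abs(p - y) for p, y in zip(preds, data.values())) / len(data)

print("overfit  train/valid error:", error(overfit_model, train_data),
      error(overfit_model, valid_data))             # perfect, then fails
print("underfit train/valid error:", error(underfit_model, train_data),
      error(underfit_model, valid_data))            # mediocre everywhere
```

The overfit model is flawless on training data and useless afterward; the underfit model is mediocre everywhere. That is exactly the pattern the exam describes in words.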

The exam may also connect quality to business risk. A poorly evaluated model can lead to incorrect approvals, inaccurate forecasts, or unfair outcomes. That is why model evaluation is not just technical housekeeping; it is part of trustworthy AI operations.

Section 3.5: Responsible AI principles and Azure Machine Learning at a conceptual level


Responsible AI is a core part of Microsoft’s AI story and is absolutely exam relevant. You should know the six principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Fairness means AI systems should treat people equitably and avoid harmful bias. Reliability and safety mean the system should perform consistently and avoid causing harm. Privacy and security mean personal data must be protected and handled appropriately. Inclusiveness means systems should work for people with diverse needs and abilities. Transparency means stakeholders should understand how AI is used and, at an appropriate level, how decisions are made. Accountability means humans remain responsible for oversight and governance.

Exam questions often describe a business concern and ask which principle is most relevant. If a model disadvantages one demographic group, that points to fairness. If users cannot understand why a system produced a recommendation, that points to transparency. If customer records must be protected from unauthorized access, privacy and security is the right match. These are often easier than they appear if you map the wording to the principle directly.

Azure Machine Learning fits here because responsible AI is not separate from the machine learning lifecycle. Azure Machine Learning provides a platform to build, train, deploy, and manage models, and organizations are expected to apply responsible AI thinking throughout that lifecycle. At the AI-900 level, you should know Azure Machine Learning as the service for custom model development and operational management in Azure.

Exam Tip: If a question asks about creating and deploying a custom machine learning model in Azure, choose Azure Machine Learning. If it asks about a responsible AI concern, choose the principle that best matches the specific risk described.

Common traps include mixing transparency with accountability. Transparency is about explainability and openness; accountability is about responsibility and governance. Another trap is treating inclusiveness as the same as fairness. Inclusiveness focuses on designing AI that can be used effectively by people with a broad range of needs, while fairness focuses on equitable outcomes and avoiding discriminatory impact.

For the exam, connect responsible AI to practical business outcomes: trustworthy predictions, explainable decisions, protected data, and human oversight. Microsoft wants candidates to recognize that machine learning success is not just accuracy. It also includes ethical and responsible deployment.

Section 3.6: Exam-style practice set and answer logic for ML on Azure


Although this section does not present actual quiz items, it will help you think the way the exam expects. Microsoft-style questions in this domain usually test classification of scenarios rather than technical depth. The best approach is to break each prompt into three parts: what data is available, what output is needed, and whether the solution should be prebuilt or custom. This simple framework eliminates many distractors.

Start by looking for evidence of labels. If historical examples include known outcomes, supervised learning is likely. Then inspect the desired output. If the business wants a category, the model type is probably classification. If it wants a numerical forecast, think regression. If it wants to discover naturally similar groups, think clustering and unsupervised learning. If the scenario involves an agent improving decisions based on rewards over time, that indicates reinforcement learning.

Next, decide whether Azure Machine Learning is relevant. If the organization wants to train a unique model using its own data, Azure Machine Learning is the exam-friendly answer. If the task is a standard AI capability already available as a managed service, a prebuilt Azure AI service could be more appropriate. This distinction appears often in distractor options.

Exam Tip: Eliminate answers that solve a different problem type. A regression tool cannot be the best answer if the scenario requires category labels. A clustering approach is not correct if the business already has known classes.

Also watch for responsible AI wording embedded in machine learning questions. A prompt about reducing biased outcomes, explaining predictions, or protecting sensitive information is testing fairness, transparency, or privacy and security, even if the question begins with model training language. Microsoft likes to blend technical and ethical clues together.

Final strategy for this domain:

  • Identify whether the data has labels.
  • Identify whether the output is a category, number, or grouping.
  • Distinguish training from inference.
  • Recognize overfitting and underfitting from performance descriptions.
  • Choose Azure Machine Learning for custom ML lifecycle scenarios.
  • Map ethical concerns to the correct responsible AI principle.
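
The checklist above can be sketched as a small triage function. This is our own illustration of the exam reasoning, not a real decision API:

```python
def triage(has_labels: bool, output: str, needs_custom_model: bool):
    """Apply the domain checklist: data first, then output, then service."""
    if output == "discovered group":
        learning = "unsupervised learning (clustering)"
    elif has_labels and output == "category":
        learning = "supervised learning (classification)"
    elif has_labels and output == "number":
        learning = "supervised learning (regression)"
    else:
        learning = "re-read the scenario for anomaly or reinforcement clues"
    service = ("Azure Machine Learning" if needs_custom_model
               else "a prebuilt Azure AI service")
    return learning, service

# Churn prediction from labeled history, trained on the company's own data:
print(triage(True, "category", True))
# Customer segmentation with no predefined segments:
print(triage(False, "discovered group", True))
```
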

If you use this logic consistently, most AI-900 machine learning questions become manageable. The exam is less about deep technical construction and more about selecting the correct conceptual answer with confidence.

Chapter milestones
  • Understand supervised, unsupervised, and reinforcement learning
  • Compare classification, regression, and clustering scenarios
  • Explain Azure Machine Learning concepts and responsible AI
  • Practice exam-style questions on Fundamental principles of ML on Azure
Chapter quiz

1. A company wants to build a model that predicts whether a customer will cancel a subscription next month based on historical records that include age, usage patterns, support history, and whether the customer actually canceled. Which type of machine learning workload does this describe?

Show answer
Correct answer: Classification using supervised learning
This is classification using supervised learning because the historical data includes a known label: whether the customer canceled. The outcome is a category such as cancel or not cancel, not a numeric value. Regression is incorrect because regression predicts a number, such as monthly spend or delivery time. Clustering is incorrect because clustering is used to discover groups when no predefined label exists.

2. A retailer wants to analyze customer purchase behavior to discover natural groups of shoppers for targeted marketing. The company does not already know the segment names. Which approach should it use?

Show answer
Correct answer: Clustering
Clustering is correct because the goal is to find natural groupings in unlabeled data. This is a classic unsupervised learning scenario. Classification is incorrect because classification requires predefined categories or labels to predict. Regression is incorrect because regression predicts a continuous numeric value rather than grouping similar records.

3. You are reviewing an Azure machine learning project. During one phase, the team uses historical data to build the model. In a later phase, the finished model is used to generate predictions for new customer records. What is the later phase called?

Show answer
Correct answer: Inference
Inference is correct because it refers to using a trained model to make predictions on new data. Training is incorrect because training is the phase where the model learns patterns from historical data. Validation is incorrect because validation checks model performance on data outside the training set, but it is not the same as production prediction on new records.

4. A delivery company creates a machine learning model that performs extremely well on training data but produces poor predictions when tested on new routes. Which issue does this most likely indicate?

Show answer
Correct answer: Overfitting
Overfitting is correct because the model appears to have learned the training data too closely and does not generalize well to new data. Underfitting is incorrect because an underfit model usually performs poorly even on the training data because it is too simple to capture important patterns. Responsible AI compliance is incorrect because the scenario is describing model quality and generalization, not fairness, transparency, or accountability concerns.

5. A business wants an AI system to improve warehouse robot movements by receiving rewards for efficient paths and penalties for collisions. Which machine learning approach best matches this scenario?

Show answer
Correct answer: Reinforcement learning
Reinforcement learning is correct because the system learns by interacting with an environment and receiving rewards or penalties based on its actions. Unsupervised learning is incorrect because that approach is used to discover patterns in unlabeled data, not to learn through feedback from actions. Supervised classification is incorrect because there is no labeled dataset of correct answers being used to predict categories; instead, the robot improves behavior through trial and feedback.

Chapter 4: Computer Vision Workloads and NLP Workloads on Azure

This chapter covers two of the most heavily tested AI-900 topic areas for non-technical candidates: computer vision workloads and natural language processing workloads on Azure. Microsoft expects you to recognize common business scenarios, match those scenarios to the correct Azure AI services, and avoid confusing similar-sounding capabilities. The exam is less about coding and more about choosing the right service for a stated requirement. That means your success depends on understanding what each workload does, what data it works with, and where the common answer traps appear.

For computer vision, the AI-900 exam focuses on image and video understanding tasks such as image analysis, optical character recognition, object detection, facial analysis concepts, and document extraction. Questions often describe a business outcome first, such as reading invoice fields, identifying products in images, monitoring a camera feed, or extracting printed text from scanned forms. Your job is to map that scenario to the correct Azure service category. In many cases, the key to the answer is not the most advanced-sounding service, but the one designed for that exact workload.

For NLP, you need to identify text and speech scenarios: sentiment analysis, key phrase extraction, named entity recognition, language detection, translation, speech-to-text, text-to-speech, and conversational experiences. The exam often tests your ability to distinguish analysis of text from generation of text, and language understanding from translation. It also expects you to recognize that Azure AI Language brings together several text-based capabilities under one family of services.

Exam Tip: In AI-900, always read the business requirement carefully before looking at the answer choices. Microsoft frequently includes plausible distractors from neighboring workloads. If the requirement is to read text from an image, think OCR. If it is to extract structured fields from a form, think document intelligence. If it is to determine whether a customer review is positive or negative, think sentiment analysis. If it is to convert live speech into written text, think speech services.

This chapter integrates the core exam objectives for identifying computer vision workloads on Azure, explaining OCR, image analysis, face-related concepts, and video use cases, describing NLP workloads and language service scenarios, and building exam confidence through answer logic. Treat this chapter as a decision guide: when you see a scenario on test day, you should be able to classify the workload first, then select the best Azure AI service second.

  • Computer vision workload recognition: image analysis, OCR, object detection, face-related analysis, and video understanding.
  • NLP workload recognition: text analytics, translation, speech, and conversational AI.
  • Service selection strategy: choose the service that directly matches the input type and business goal.
  • Exam trap awareness: avoid mixing custom model training, prebuilt analysis, OCR, and generative AI concepts.

As you work through the sections, focus on the exam pattern behind the content. Microsoft often tests whether you can identify the simplest correct service, not whether you know every product detail. A common trap is overcomplicating the solution. If a requirement can be satisfied with a prebuilt Azure AI service, that is often the right answer in AI-900. Another trap is confusing what a service analyzes with what it creates. Computer vision analyzes visual content; OCR converts visual text into machine-readable text; NLP analyzes and transforms language; generative AI produces new content based on prompts. Keep those boundaries clear.

By the end of this chapter, you should be comfortable recognizing the main Azure options for image, document, text, translation, and speech workloads, and you should know how to eliminate wrong answers even when more than one choice seems related to AI. That elimination skill is crucial on the exam because Microsoft-style questions frequently present multiple technologies from the same family.

Practice note for identifying common computer vision workloads on Azure: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 4.1: Official domain overview - Computer vision workloads on Azure

Computer vision workloads use AI to interpret images and video. On the AI-900 exam, this domain is tested at the scenario level. You are not expected to build models, but you are expected to understand what kinds of business problems computer vision solves and which Azure services align to those problems. Typical scenarios include detecting objects in photos, reading text from receipts or signs, analyzing image content for tags and descriptions, and processing video streams from cameras.

The exam objective usually starts with the business need. For example, a retailer may want to identify products in shelf images, a bank may want to scan forms, or a security team may want to process video footage. You should first determine the input type: still image, scanned document, or video. Then determine the intended output: labels, objects, text, extracted fields, or a visual summary. This input-output mindset is one of the fastest ways to choose correctly.

Azure supports computer vision workloads through services in the Azure AI family. In exam questions, the most common concepts include image analysis, OCR, facial analysis concepts, and document intelligence. Image analysis is used to extract visual information from images. OCR is used when the main goal is to read text that appears inside an image. Document extraction goes a step further by identifying fields and structured values from forms such as invoices, IDs, or receipts.

Exam Tip: If the scenario emphasizes forms, invoices, receipts, or structured business documents, think beyond plain OCR. OCR reads text, but document intelligence extracts meaning and fields from the layout. That distinction appears often in exam answer choices.

Another tested idea is that video workloads are often extensions of image analysis applied across frames over time. The exam may describe a video monitoring or media indexing scenario, but the tested skill is still recognizing visual AI usage rather than memorizing deep implementation detail. Watch for verbs such as detect, analyze, extract, identify, and track. These verbs signal what kind of computer vision output is required.

Common traps in this domain include selecting Azure Machine Learning when a prebuilt Azure AI Vision capability is enough, or choosing an NLP service simply because the output includes text. If the source content is an image or video, the workload begins as computer vision even if the result is text. Always anchor your answer to the nature of the input and the business objective.

Section 4.2: Image classification, object detection, OCR, and document data extraction


This section covers some of the easiest terms to mix up on the AI-900 exam. Image classification assigns a label to an entire image, such as identifying whether an image contains a dog, a car, or food. Object detection goes further by identifying specific objects within the image and locating them. In exam language, if the scenario says the company wants to know what is in the image overall, that points toward classification or image analysis. If it says the company must locate multiple items inside the image, that points toward object detection.

OCR, or optical character recognition, is used when text is embedded in visual content such as photos, scanned pages, receipts, street signs, or screenshots. The key phrase to remember is that OCR converts text in images into machine-readable text. This is very likely to appear in practical business scenarios because many organizations digitize printed or photographed information. If a question asks how to read serial numbers from product images or extract words from a scan, OCR is the central concept.

Document data extraction is related but more specialized. Instead of only reading raw text, the service identifies structure and returns meaningful fields such as invoice number, total amount, due date, customer name, or line items. On the exam, this is the right direction whenever the scenario mentions forms, receipts, tax documents, or document processing automation. The trap is choosing OCR alone when the requirement clearly asks for structured values rather than just text.

Exam Tip: Use this shortcut: image classification tells you what the image is about, object detection tells you where things are, OCR tells you what text appears, and document extraction tells you what business fields the document contains.

Another frequent misunderstanding involves custom versus prebuilt solutions. AI-900 is foundational, so Microsoft often favors a managed, prebuilt Azure AI service in the correct answer unless the scenario explicitly says the organization needs to train a unique model on its own labeled data. When the question says invoices, receipts, ID cards, or known document types, a prebuilt document-focused capability is usually the intended answer.

To answer these questions accurately, isolate the primary output required. If the desired output is labels, think image analysis. If it is bounding boxes around items, think object detection. If it is readable text from an image, think OCR. If it is specific fields from business documents, think document intelligence. This output-first logic is highly reliable on AI-900.
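
The output-first logic can be mimicked with a simple keyword heuristic. The function below is our own toy illustration; the keywords are invented, and no real Azure service routes requests this way:

```python
def vision_workload(requirement: str) -> str:
    """Toy heuristic: map a stated requirement to the vision concept tested."""
    req = requirement.lower()
    if any(w in req for w in ("invoice", "receipt", "form", "fields")):
        return "document intelligence"           # structured business fields
    if any(w in req for w in ("read text", "serial number", "scan")):
        return "OCR"                             # text embedded in an image
    if any(w in req for w in ("locate", "count", "where")):
        return "object detection"                # items and their positions
    return "image classification / analysis"     # overall labels for the image

print(vision_workload("extract total amount fields from scanned invoices"))
print(vision_workload("locate and count products on a shelf photo"))
```

Notice the order of the checks mirrors the exam logic: structured document extraction is more specific than OCR, which is more specific than general image analysis.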

Section 4.3: Azure AI Vision, Face-related capabilities, and responsible use considerations


Azure AI Vision is the service family most commonly associated with image analysis tasks on the exam. It supports capabilities such as analyzing images, describing visual content, detecting objects, and reading text. In AI-900, you are expected to recognize that Azure AI Vision can help organizations process image content at scale without building a custom model from scratch. If a scenario involves understanding visual content in photographs, screenshots, or camera images, Azure AI Vision is often the service family to consider first.

Face-related capabilities are tested more carefully because Microsoft also expects awareness of responsible AI. At the foundational level, candidates should know that AI can detect and analyze certain facial attributes or compare faces in limited scenarios, but that face-related technologies require thoughtful governance, privacy protections, and careful consideration of fairness and risk. The exam may frame this as a responsible AI issue rather than a purely technical one.

You should be especially cautious with broad claims about face recognition. AI-900 questions often reward the candidate who understands limitations and responsible use. Microsoft wants you to appreciate that not every facial analysis scenario should be implemented simply because the technology exists. Privacy, consent, security, potential bias, and regulatory requirements are part of the decision process.

Exam Tip: When face-related answers appear, look for the option that balances capability with responsible use. If a question references sensitive use cases, governance or ethical considerations may be just as important as the technical feature.

Video use cases may also appear in this section because video is essentially a stream of images processed over time. Typical examples include surveillance review, media indexing, and event detection. On the exam, you do not need advanced architecture knowledge; you need to recognize that visual AI can be applied to video frames to extract insights such as objects, actions, scenes, or text overlays.

A common trap is assuming Azure AI Vision is only for static photos. In Microsoft-style scenarios, the same visual understanding concepts can support broader image and video workflows. Another trap is ignoring responsible AI principles when a question includes face analysis. If the answers include one technically possible but ethically careless option and one more controlled, policy-aware option, the responsible choice is often the better exam answer.

Section 4.4: Official domain overview - NLP workloads on Azure


Natural language processing, or NLP, focuses on working with human language in text and speech. On AI-900, this means understanding what business tasks can be solved by analyzing written language, translating it, converting speech to text, generating speech from text, or enabling conversational experiences. As with computer vision, Microsoft tests recognition of scenarios more than implementation detail. Your job is to connect a language-based business requirement to the correct Azure AI capability.

Common NLP scenarios include analyzing customer reviews, extracting important terms from support cases, identifying names of people or organizations in documents, detecting the language of input text, translating content between languages, converting meeting audio into transcripts, and building a bot that can interact with users. These scenarios often appear in industries such as retail, customer service, healthcare, education, and internal enterprise productivity.

Azure AI Language is central to many text-based tasks. Exam questions may mention text analysis without requiring you to know every subfeature by name, but you should recognize the major categories: sentiment analysis, key phrase extraction, entity recognition, and language detection. Translation scenarios align with Azure AI Translator. Speech-to-text and text-to-speech scenarios align with Azure AI Speech. Conversational bot scenarios may involve Azure AI services that support language understanding and interaction patterns.

Exam Tip: Separate text analytics from translation and from speech. These are related language workloads, but they solve different problems. A service that analyzes sentiment does not translate. A translation service does not perform speech synthesis unless paired with speech capabilities.

The exam also tests whether you can distinguish NLP from generative AI. If the scenario is about classification, extraction, or conversion of language, think NLP. If the scenario is about producing new content based on prompts, that belongs to generative AI, which is covered later in the course. Do not let answer choices that mention large language models distract you from a basic NLP task.

One final exam pattern to watch for is input modality. If the input is written text, start with Language or Translator. If the input is audio, start with Speech. If the scenario is a conversational application, identify whether the main need is understanding text, speaking back to the user, or orchestrating a bot experience. Those distinctions help you eliminate tempting but incomplete answer choices.

Section 4.5: Sentiment analysis, key phrase extraction, entity recognition, translation, and speech

Sentiment analysis determines the emotional tone or opinion in text, such as positive, negative, or neutral. This is commonly used for customer feedback, product reviews, employee surveys, and social media monitoring. On the exam, when the scenario asks whether comments are favorable or unfavorable, sentiment analysis is the right concept. The trap is choosing key phrase extraction simply because the text is being analyzed. Key phrases summarize important terms; sentiment measures attitude.

Key phrase extraction identifies the most important words or phrases in a body of text. For example, from a support ticket, it may return terms such as billing error, delivery delay, or password reset. This is useful when the organization wants to summarize themes or index large text collections. If the requirement is to identify the main topics rather than emotional tone, key phrase extraction is a stronger match than sentiment analysis.

Entity recognition identifies real-world items in text, such as people, places, organizations, dates, phone numbers, or currency values. This is especially important in documents, contracts, support cases, or healthcare narratives where extracting named items creates business value. On AI-900, the wording may say identify company names, locations, or dates from text. That language strongly signals entity recognition.

Translation converts text from one language to another. The exam may also describe multilingual websites, translated customer support messages, or internal documents needing cross-language access. If the goal is preserving meaning across languages, Translator is the likely answer. Do not confuse translation with language detection; language detection identifies what language the text is in, while translation changes it.

Speech services cover speech-to-text, text-to-speech, and speech translation scenarios. Speech-to-text is used for transcriptions, captions, voice notes, and call analytics. Text-to-speech is used when applications need to speak written content aloud, such as virtual assistants or accessibility tools. If the question mentions microphones, audio files, spoken commands, or synthesized voice output, Speech should be top of mind.

Exam Tip: Memorize these distinctions: sentiment equals opinion, key phrases equal topics, entities equal named items, translation equals language conversion, and speech equals audio input or output. These pairings help you answer quickly under time pressure.
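These pairings can be made concrete with a small, purely illustrative Python sketch. This is a keyword-based study toy, not the Azure AI Language SDK; the word lists and sample review are invented for illustration, and the real service uses trained models rather than lookups:

```python
# Toy illustrations of three AI-900 text capabilities.
# These are keyword heuristics for study purposes only -- the real
# Azure AI Language service uses trained models, not word lists.

POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"bad", "slow", "broken", "refund"}

def toy_sentiment(text: str) -> str:
    """Sentiment = opinion: positive, negative, or neutral."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def toy_key_phrases(text: str) -> list[str]:
    """Key phrases = topics: keep longer, non-trivial words."""
    stopwords = {"the", "was", "and", "my", "a", "is"}
    return [w.strip(".,").lower() for w in text.split()
            if w.lower() not in stopwords and len(w) > 4]

def toy_entities(text: str) -> list[str]:
    """Entities = named items: capitalized tokens mid-sentence."""
    tokens = text.split()
    return [t.strip(".,") for i, t in enumerate(tokens)
            if i > 0 and t[0].isupper()]

review = "The delivery was slow and Contoso never issued my refund"
print(toy_sentiment(review))    # negative (opinion)
print(toy_key_phrases(review))  # topics such as 'delivery' and 'refund'
print(toy_entities(review))     # ['Contoso'] (named item)
```

Notice that the same input text yields three different kinds of answers. That is exactly the exam distinction: the capability you choose depends on the output the scenario asks for, not on the fact that text is being analyzed.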

A recurring exam trap is selecting a chatbot-related answer when the real need is only one language feature. If the scenario simply requires translating text or extracting sentiment, the solution does not need a bot. Likewise, if the need is speech transcription, it does not require text analytics unless the question adds a second requirement such as analyzing the transcript afterward.

Section 4.6: Exam-style practice set and answer logic for vision and NLP domains

Although this chapter does not include full quiz items in the text, you should finish with a clear method for solving Microsoft-style questions on vision and NLP. The exam often presents a short business scenario and asks you to choose the most appropriate service or workload. The strongest candidates do not rush to match on keywords alone; they identify the input, the required output, and whether the task is prebuilt analysis, structured extraction, language transformation, or speech processing.

Start with the input. If the source is an image, scanned page, camera feed, or video, you are in the vision domain. If the source is written text or audio speech, you are in the NLP domain. Next, determine the output:
  • Labels and descriptions suggest image analysis.
  • Text pulled from an image suggests OCR.
  • Structured fields from forms suggest document extraction.
  • Positive or negative opinion suggests sentiment analysis.
  • Topics suggest key phrase extraction.
  • Names, dates, or places suggest entity recognition.
  • Language conversion suggests translation.
  • Audio conversion suggests speech services.
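The input-then-output method can be sketched as a small decision helper. This is a study aid with invented keyword rules, not an official Microsoft taxonomy and not an Azure SDK:

```python
# Study aid: map a scenario description to a likely AI-900 workload
# by checking input modality first, then the required output.
# All keyword lists are invented for illustration.

VISION_INPUTS = ("image", "photo", "scanned", "camera", "video")
AUDIO_INPUTS = ("audio", "call recording", "spoken", "microphone")

def classify_scenario(scenario: str) -> str:
    s = scenario.lower()
    # Step 1: input modality decides the domain.
    if any(k in s for k in VISION_INPUTS):
        # Step 2: within vision, the output decides the capability.
        if "text" in s or "invoice" in s:
            return "OCR / document extraction"
        return "image analysis"
    if any(k in s for k in AUDIO_INPUTS):
        return "speech-to-text"
    # Step 2: within language, the output decides the capability.
    if "translate" in s:
        return "translation"
    if "positive" in s or "negative" in s:
        return "sentiment analysis"
    return "other NLP"

print(classify_scenario("Extract text from a scanned invoice"))
print(classify_scenario("Transcribe a call recording"))
print(classify_scenario("Decide if reviews are positive or negative"))
```

The ordering matters: modality is checked before output, mirroring the elimination strategy described above. Real exam questions are wordier, but the two-step reasoning is the same.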

Exam Tip: Eliminate answers that solve a broader problem than the question asks. AI-900 frequently rewards the simplest service that meets the exact stated need. If the scenario only needs OCR, do not choose a more complex custom training workflow unless the question explicitly requires it.

Watch for double-requirement scenarios. A question might describe extracting text from an image and then analyzing its sentiment. In that case, more than one AI capability is involved. Microsoft sometimes tests whether you can recognize a workflow rather than a single service. Read for sequential phrasing such as "read, then analyze" or "transcribe, then translate." Those patterns signal multi-step reasoning.

Another good test strategy is to compare answer choices by modality. If one option is clearly about images, another about text, and another about speech, the scenario usually contains enough clues to narrow it quickly. Terms such as scanned invoice, customer review, multilingual website, call recording, face analysis, and object location are all high-value exam clues.

Finally, remember that responsible AI can influence the correct answer, especially in face-related scenarios. A technically capable solution is not always the best exam answer if the scenario raises concerns about privacy, fairness, or sensitive identification. For AI-900, passing this domain is about disciplined categorization, service matching, and trap avoidance. If you can identify the workload type in the first few seconds of a question, you will answer far more confidently and accurately.

Chapter milestones
  • Identify common computer vision workloads on Azure
  • Explain OCR, image analysis, face-related concepts, and video use cases
  • Describe core NLP workloads and language service scenarios
  • Practice exam-style questions on Computer vision and NLP workloads on Azure
Chapter quiz

1. A retail company wants to process scanned invoices and automatically extract fields such as invoice number, vendor name, and total amount into a business system. Which Azure AI service should they choose?

Correct answer: Azure AI Document Intelligence
Azure AI Document Intelligence is the best choice because the requirement is not just to read text, but to extract structured fields from forms and invoices. OCR is a related capability, but Azure AI Vision OCR focuses primarily on reading text from images rather than identifying document structure and labeled fields. Azure AI Language is incorrect because it analyzes text content after text is available; it does not extract fields from scanned documents.

2. A business wants to determine whether customer review comments are positive, negative, or neutral. Which Azure AI capability best matches this requirement?

Correct answer: Sentiment analysis in Azure AI Language
Sentiment analysis in Azure AI Language is designed to evaluate opinion-based text and classify sentiment. OCR is incorrect because it is used to read text from images or scanned documents, not to analyze whether text expresses a positive or negative opinion. Face detection is also incorrect because it analyzes visual facial content, not written reviews.

3. A museum is building a mobile app that lets visitors point their phone camera at an exhibit sign and immediately convert the printed words into digital text. Which Azure AI service should the company use?

Correct answer: Azure AI Vision OCR
Azure AI Vision OCR is the correct choice because the requirement is to read printed text from an image captured by a camera. Azure AI Speech is wrong because it works with spoken audio, such as speech-to-text and text-to-speech, not text in images. Azure AI Translator is also wrong because translation changes text from one language to another; it does not first detect and extract the text from the image.

4. A call center wants to convert live customer phone conversations into written transcripts for later review. Which Azure AI service should they use?

Correct answer: Azure AI Speech speech-to-text
Azure AI Speech speech-to-text is the correct answer because the input is live spoken audio and the desired output is written text. Azure AI Translator would be used if the business needed to translate between languages, not simply transcribe speech. Azure AI Language key phrase extraction is also incorrect because it analyzes text that already exists and identifies important phrases; it does not convert audio into text.

5. A security team wants to analyze images from facility cameras to identify general objects such as vehicles, packages, and people appearing in each frame. Which Azure AI workload is the best match?

Correct answer: Image analysis in Azure AI Vision
Image analysis in Azure AI Vision is the best fit because the scenario is about understanding visual content in images and identifying objects that appear in camera frames. Sentiment analysis is wrong because it applies to opinion in text, not visual scenes. Text-to-speech is also wrong because it generates audio from text and has no role in recognizing objects in images.

Chapter 5: Generative AI Workloads on Azure and Cross-Domain Review

This chapter prepares you for one of the most current and testable areas of the AI-900 exam: generative AI workloads on Azure. Microsoft expects candidates to recognize what generative AI is, when an organization would use it, and how Azure services support common business scenarios such as content generation, summarization, question answering, and copilots. Because this course is designed for non-technical professionals, the exam does not require coding knowledge, but it does require clear service-selection judgment. You should be able to read a business scenario and identify whether the need is generative AI, natural language processing, computer vision, or traditional machine learning.

The AI-900 exam often rewards candidates who focus on workload recognition rather than implementation detail. In other words, you are not being tested as a developer building a model from scratch. Instead, you are being tested on whether you can identify the correct Azure approach for a stated goal. For generative AI, that usually means understanding prompts, foundation models, copilots, and Azure OpenAI at a conceptual level. The exam also expects awareness of responsible AI principles, especially because generative systems can create convincing but incorrect, biased, or unsafe output.

This chapter connects generative AI to the rest of the exam domains. That cross-domain review matters because Microsoft-style questions frequently mix categories. A scenario may mention text, documents, images, predictions, or a chatbot, and the trap is assuming every text-related scenario is generative AI. Some are classic NLP. Some are search. Some are translation. Some are machine learning classification problems. The strongest exam strategy is to ask: what is the system actually being asked to do?

As you read, focus on the distinctions among foundation models, large language models, prompts, copilots, and Azure OpenAI concepts. Also pay attention to the boundaries between content generation and content analysis. Those boundaries are where many test takers lose points.

  • Generative AI creates new content such as text, summaries, answers, or code-like responses.
  • Traditional NLP often analyzes existing text, such as sentiment, key phrases, language detection, or named entity recognition.
  • Traditional machine learning predicts labels or numeric outcomes from data.
  • Computer vision analyzes images or video, although some newer multimodal systems can bridge domains.

Exam Tip: When a question describes drafting emails, summarizing reports, answering in natural conversational language, or creating a business copilot, think generative AI first. When it describes extracting sentiment, identifying entities, or translating text, think Azure AI Language capabilities rather than generative AI by default.

Another common exam trap is confusing the idea of a chatbot with the underlying capability. Not every bot is generative AI. A bot that follows fixed rules, a FAQ knowledge base, or scripted intents is different from a copilot that generates flexible responses from prompts and context. The exam may test whether you can tell the difference.

Finally, remember that AI-900 is a fundamentals exam. Microsoft wants you to show conceptual literacy: what these tools do, when they fit, and how to think responsibly about their use in business settings. If you can map the scenario to the right Azure service family and explain the basic reason, you are well aligned with the exam objective for this chapter.

Practice note for this chapter's milestones (understanding generative AI workloads on Azure, explaining prompts, copilots, foundation models, and Azure OpenAI concepts, and reviewing domain overlaps and service selection): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 5.1: Official domain overview - Generative AI workloads on Azure

Generative AI workloads focus on creating original content based on user input, instructions, or context. On the AI-900 exam, Microsoft typically frames this in business language: drafting marketing copy, summarizing meeting notes, answering questions over company documents, generating product descriptions, or assisting employees through a copilot experience. Your task is to identify that these are generative AI scenarios rather than only analytics scenarios.

In Azure, the conceptual center of this objective is Azure OpenAI and the broader idea of using powerful pretrained models for content generation. The exam may refer to large language models, chat-based solutions, prompt-driven workflows, or copilots. All of these relate to generative AI workloads. You are not expected to know deep architecture details, but you should understand that these systems take a prompt or conversation context and generate a response based on patterns learned from massive training data.

The exam objective also expects you to recognize common workload categories. These include text generation, summarization, question answering, conversational assistance, and content transformation. A question may describe helping customer support agents draft responses or helping analysts summarize lengthy documents. Both point to generative AI because the system is producing useful new language rather than merely tagging or extracting existing text.

Exam Tip: Look for verbs such as generate, draft, summarize, rewrite, answer conversationally, and assist. These often signal a generative AI workload. Verbs such as classify, detect, extract, identify, or score often indicate a different AI service family.

A classic exam trap is overthinking the implementation. AI-900 does not require you to choose model parameters, training techniques, or deployment code. Instead, it tests whether you can connect the workload to Azure’s generative AI offerings and explain the fit. Another trap is assuming every advanced text workload belongs to generative AI. If the requirement is simply to detect sentiment, extract key phrases, or determine language, that is generally Azure AI Language rather than a generative workload.

Think of this domain as the exam’s “create and converse” category. If the system must produce helpful natural language output tailored to the user’s request, generative AI is likely the correct answer.

Section 5.2: Foundation models, large language models, and prompt engineering basics

A foundation model is a broad pretrained model that can be adapted to many tasks. A large language model, or LLM, is a type of foundation model specialized for understanding and generating human-like language. For exam purposes, the key idea is transferability: these models are trained once at large scale and then used across multiple business scenarios without requiring every organization to train from scratch.

Prompt engineering is the practice of designing the input so the model produces more useful output. The AI-900 exam treats prompts conceptually. You should know that a prompt can include an instruction, context, examples, formatting expectations, and sometimes safety or role guidance. A stronger prompt usually leads to more targeted output. For example, telling the model to summarize a document for executives in bullet points is more precise than simply saying summarize this.
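The prompt components listed above, an instruction, context, and formatting expectations, are ultimately just assembled text. A minimal sketch, with field names of my own choosing rather than any Microsoft template, shows how a vague prompt differs from a precise one:

```python
# Assemble a structured prompt from the components AI-900 describes
# conceptually: instruction, audience/role guidance, formatting
# expectations, and context. The layout here is illustrative only.

def build_prompt(instruction: str, context: str = "",
                 audience: str = "", output_format: str = "") -> str:
    parts = [instruction]
    if audience:
        parts.append(f"Audience: {audience}")
    if output_format:
        parts.append(f"Format: {output_format}")
    if context:
        parts.append(f"Context:\n{context}")
    return "\n".join(parts)

vague = build_prompt("Summarize this document.")
precise = build_prompt(
    "Summarize this document.",
    audience="executives",
    output_format="three bullet points",
    context="Q3 sales grew 12 percent while support tickets doubled.",
)
print(vague)
print("---")
print(precise)  # the richer prompt steers tone, length, and layout
```

The model itself is unchanged in both cases; only the input differs. That is the exam-level point: prompt engineering shapes inference, not training.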

Microsoft may test your understanding through scenario language. If users want more accurate, structured, or role-specific responses, improving the prompt is often the first conceptually correct action. You do not need to memorize specialized prompting frameworks, but you should know why clarity matters. Prompts shape tone, length, format, and task alignment.

Exam Tip: If a question asks how to improve the relevance of generated output without retraining a model, the best conceptual answer is often to refine the prompt or provide clearer context.

Another important distinction is that LLMs generate probable next-token sequences based on patterns in data. That means their outputs can sound fluent even when they are wrong. This leads to hallucinations, a core responsible AI concern. The exam may not use heavy technical wording, but it may describe a model producing confident yet inaccurate content. You should recognize this as a limitation of generative AI systems.
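The "probable next token" idea can be made concrete with a toy bigram model. This is vastly simpler than a real LLM, and the tiny corpus is made up, but it shows the core loop of predict, append, repeat, and why output built purely from pattern statistics can sound fluent without any notion of truth:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word in a tiny corpus,
# record which word most often follows it, then generate text by
# repeatedly emitting the most probable follower. Real LLMs use
# neural networks over vast data, but the generation loop is
# conceptually similar -- which is why fluent output can be wrong.

corpus = ("the model generates text . "
          "the model generates answers . "
          "the answers sound confident .").split()

followers = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    followers[a][b] += 1

def generate(start: str, steps: int) -> str:
    out = [start]
    for _ in range(steps):
        candidates = followers[out[-1]].most_common(1)
        if not candidates:
            break  # no known follower: stop generating
        out.append(candidates[0][0])
    return " ".join(out)

print(generate("the", 4))
```

Nothing in this loop checks whether the emitted sentence is true; it only continues whatever sequence is statistically likely. That limitation, scaled up, is the hallucination risk the exam expects you to recognize.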

Common traps include confusing prompts with training data, or assuming prompt engineering is the same as model retraining. It is not. Prompting influences how an existing model responds during use. Training changes the model itself. On AI-900, keep the distinction simple: prompts guide inference; training builds model behavior over time.

Also remember that foundation models are broader than one narrow task. If the question emphasizes flexibility across writing, summarization, question answering, and conversation, that is a clue you are dealing with a foundation model or LLM-based approach.

Section 5.3: Copilots, chat experiences, content generation, and summarization use cases

A copilot is an AI assistant embedded into a business process or application to help users complete tasks faster. For AI-900, think of copilots as practical generative AI experiences rather than just technical components. They can answer questions, draft responses, summarize information, generate first drafts, or guide users through workflows. The exam may present a scenario where employees need help working with documents, email, customer cases, or internal knowledge. If the assistant is interactive and generates useful language output, a copilot pattern is likely being described.

Chat experiences are a common form of generative AI because they let users interact through natural conversational prompts. The system may maintain conversational context, respond in plain language, and adapt to follow-up questions. This differs from rigid menu-driven bots or scripted decision trees. Microsoft often expects you to spot this difference in scenario-based questions.

Content generation use cases include marketing text, product descriptions, proposal drafts, knowledge article drafting, and response suggestions for service teams. Summarization use cases include condensing meeting notes, reports, long emails, support histories, or research documents. These are highly testable because they map neatly to common business value statements.

Exam Tip: If the requirement is to save time by producing first drafts or concise summaries from large volumes of text, generative AI is usually the best conceptual answer. If the requirement is to route tickets based on labels or detect sentiment, that is probably not generative AI.

A common exam trap is choosing a traditional chatbot solution when the scenario clearly needs flexible language generation. Another trap is selecting translation services for summarization or question answering simply because text is involved. Always identify the core task: create new content, summarize content, or translate existing content unchanged in meaning.

To answer correctly, ask yourself whether the system needs to understand user intent in a broad conversational way and generate responses dynamically. If yes, generative AI and copilots should be high on your list. Microsoft is testing whether you understand not only what these systems are, but why businesses adopt them: productivity, scalability, user support, and faster access to information.

Section 5.4: Azure OpenAI concepts, responsible AI, and safety-focused deployment thinking

Azure OpenAI gives organizations access to advanced generative models within Azure’s enterprise environment. On the AI-900 exam, you should understand this service at a high level: it supports generative AI use cases such as text generation, summarization, and conversational assistance while fitting into Azure governance and enterprise workflows. You do not need coding details, but you should know it is the Azure path for using powerful generative models in business solutions.

Responsible AI is especially important here. Generative models can produce harmful, biased, misleading, or inaccurate output. They can also generate content that sounds authoritative when it is false. Microsoft expects candidates to recognize these risks and think about mitigation. This includes human oversight, clear usage boundaries, content filtering, prompt design, access controls, and monitoring outputs for quality and safety.

The AI-900 exam often tests principles rather than policies. If a scenario raises concerns about inappropriate responses, unsafe generated content, or the need for trustworthy deployment, the right answer usually includes responsible AI controls and safety-focused design. Think in terms of fairness, reliability, privacy, inclusiveness, transparency, and accountability. These principles show up across the exam and are especially relevant to generative AI.

Exam Tip: If a question asks how to reduce the risk of harmful or inaccurate output from a generative AI solution, the best answer will usually involve responsible AI practices and human review rather than assuming the model will always be correct.

A common trap is assuming Azure OpenAI guarantees perfect truthfulness. It does not. Another trap is believing responsible AI is only about legal compliance. On the exam, it is broader: designing and using AI systems in a safe, fair, explainable, and accountable way. You may also see scenarios that imply sensitive business data. In those cases, think about privacy, controlled access, and deployment governance alongside model capability.

For AI-900, your goal is not to become a risk specialist. Your goal is to show that you understand generative AI must be deployed thoughtfully. Microsoft wants certified candidates to recognize value and risk together.

Section 5.5: Comparing generative AI with computer vision, NLP, and traditional ML scenarios

This section is where many AI-900 candidates gain or lose points. Microsoft frequently writes questions that sound similar on the surface but belong to different AI domains. The winning strategy is to identify the workload type before choosing a service. Generative AI creates content. Traditional NLP analyzes language. Computer vision analyzes visual inputs. Traditional machine learning predicts outcomes from data patterns.

Suppose a company wants to summarize customer emails into short case notes. That is generative AI because the system creates a condensed version. If the company wants to detect whether each email is positive or negative, that is sentiment analysis, which fits Azure AI Language. If the company wants to predict whether a customer will churn based on transaction history, that is traditional machine learning. If it wants to detect objects in warehouse images, that is computer vision.

Mixed-domain scenarios can create traps. A document workflow might involve optical character recognition, text extraction, summarization, and classification. These are not all the same thing. OCR is about reading text from images or files. Summarization is generative AI. Classification may be a language or machine learning task depending on context. The exam is testing whether you can decompose the business requirement into the correct service types.

  • Use generative AI when the output should be newly composed text or a flexible conversational answer.
  • Use NLP services when the goal is to analyze existing language, such as sentiment, entities, key phrases, or translation.
  • Use computer vision when the input is primarily images or video and the goal is analysis of visual content.
  • Use machine learning when the task is prediction, forecasting, classification, clustering, or anomaly detection from structured or historical data.
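The four bullets above reduce to one question: is the system creating, analyzing, seeing, or predicting? As a study helper, that question can be expressed as a tiny keyword lookup. The verb lists are invented for illustration and are not an official Microsoft taxonomy:

```python
# Study helper for the AI-900 cross-domain question:
# is the system creating, analyzing, seeing, or predicting?
# Verb lists are illustrative only; real exam scenarios are wordier.

DOMAIN_VERBS = {
    "generative AI": ("generate", "draft", "summarize", "rewrite"),
    "computer vision": ("detect objects", "recognize faces", "analyze images"),
    "machine learning": ("predict", "forecast", "cluster"),
    "NLP": ("classify text", "extract", "translate", "detect sentiment"),
}

def pick_domain(requirement: str) -> str:
    r = requirement.lower()
    for domain, verbs in DOMAIN_VERBS.items():
        if any(v in r for v in verbs):
            return domain
    return "unclear -- reread the scenario"

print(pick_domain("Summarize customer emails into case notes"))
print(pick_domain("Predict which customers will churn"))
print(pick_domain("Detect objects in warehouse images"))
```

Generative verbs are checked first, mirroring the chapter's examples: summarizing emails is generative AI even though the input is text, while churn prediction is machine learning even though the input is customer data.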

Exam Tip: The phrase “best service” usually means the most direct managed Azure service for the workload, not the most powerful tool in general. Do not choose generative AI for a simpler task if a dedicated service matches exactly.

If you stay disciplined about the underlying task, mixed-domain questions become much easier. Always ask: Is the system creating, analyzing, seeing, or predicting?

Section 5.6: Exam-style practice set and answer logic for generative AI and mixed-domain questions

When you face exam-style questions in this domain, avoid rushing to the first familiar keyword. Microsoft often includes distractors that sound plausible because they are real AI services, just not the best fit. Your job is to identify the exact business need and eliminate answers that solve a related but different problem.

Start with the output type. If the scenario needs a first draft, a summary, a conversational response, or a natural-language assistant, generative AI is likely correct. If the scenario needs extraction, detection, or labeling, consider whether a dedicated NLP, vision, or ML service is a better match. This simple rule eliminates many wrong answers quickly.

Next, watch for wording that signals user interaction. Terms such as assistant, copilot, conversational, ask questions, and follow-up requests usually point toward chat-based generative AI. By contrast, terms such as classify, train on historical data, predict sales, detect anomalies, or identify objects point away from generative AI and toward other domains.

Exam Tip: On AI-900, the exam often tests “which service should you choose?” rather than “how would you build it?” Read the business scenario carefully and choose the service category that solves the requirement most directly.

Another effective strategy is to look for hidden constraints. If the scenario emphasizes responsible use, risk reduction, or harmful content concerns, Azure OpenAI with responsible AI thinking becomes more likely in generative scenarios. If the scenario emphasizes multilingual translation, speech, or visual recognition, a different Azure AI service family may be more appropriate.

Common mistakes include choosing machine learning because the question mentions data, choosing NLP because the question mentions text, or choosing computer vision because a document is scanned. Remember that many real solutions combine services. The exam, however, usually asks for the primary capability needed to satisfy the stated goal. Choose the answer that best aligns with the core task.

As a final review mindset, connect this chapter to the full course outcomes. You should now be able to describe generative AI workloads on Azure, explain prompts and foundation models, identify when copilots fit a scenario, compare generative AI with other AI domains, and apply Microsoft-style answer logic with confidence. That cross-domain judgment is exactly what the AI-900 exam is designed to measure.

Chapter milestones
  • Understand generative AI workloads on Azure for the AI-900 exam
  • Explain prompts, copilots, foundation models, and Azure OpenAI concepts
  • Review domain overlaps and service selection across the exam
  • Practice exam-style questions on Generative AI workloads on Azure
Chapter quiz

1. A company wants to provide employees with a tool that can draft email replies, summarize meeting notes, and answer follow-up questions in natural language based on user prompts. Which Azure AI approach best fits this requirement?

Correct answer: Use Azure OpenAI Service to build a generative AI copilot experience
Azure OpenAI Service is the best fit because the scenario requires generating new content, summarizing information, and responding conversationally to prompts, which are core generative AI workloads on the AI-900 exam. Azure AI Language is designed primarily for analyzing existing text, such as sentiment, key phrases, and entities, not for drafting flexible natural-language responses. Azure Machine Learning can support custom model development, but a regression model predicts numeric values and does not match a business copilot or content-generation scenario.

2. A retail organization needs to analyze thousands of customer reviews to determine whether each review is positive, negative, or neutral. The solution should classify existing text rather than create new text. Which service family should you select?

Show answer
Correct answer: Azure AI Language for sentiment analysis
Azure AI Language is correct because sentiment analysis is a classic natural language processing task focused on analyzing existing text. Azure OpenAI Service is used when the goal is to generate or summarize content, not when the primary need is structured text analysis. Azure AI Vision is incorrect because the scenario involves text reviews rather than images. This reflects a common AI-900 distinction between content generation and content analysis.

3. A manager asks what a prompt is in the context of generative AI on Azure. Which statement is most accurate?

Show answer
Correct answer: A prompt is the user instruction or input given to a model to guide the response it generates
A prompt is the input or instruction provided to a generative AI model to influence the output, which is a core AI-900 concept for Azure OpenAI and copilots. The training dataset for building a model is not called a prompt; that describes model training, not inference. A sentiment label is an output of text analysis, not an instruction to a generative model. The exam often checks whether candidates understand prompts conceptually rather than technically.

4. A business wants to create an internal assistant that can answer employee questions using company documents and generate helpful responses in a conversational style. The assistant should not be limited to a fixed decision tree. What is the best description of this solution?

Show answer
Correct answer: A generative AI copilot that uses prompts and context from organizational content
A generative AI copilot is correct because the assistant must answer questions conversationally and generate flexible responses using prompts and company context. A traditional rule-based bot is wrong because the scenario specifically says the solution should not be limited to fixed decision trees or scripted intents. Computer vision may help extract text from images in some cases, but it does not describe the main requirement here, which is question answering and response generation. This is a common AI-900 distinction between bots and copilots.

5. A team is reviewing several proposed AI solutions for Azure. Which scenario is the clearest example of a generative AI workload rather than traditional NLP, computer vision, or predictive machine learning?

Show answer
Correct answer: Generate a first draft of a product description from a short list of features
Generating a product description from features is a generative AI workload because the system is creating new text content. Predicting whether a customer will cancel is a traditional machine learning prediction scenario, typically classification, not generative AI. Detecting faces and objects in images is a computer vision workload. AI-900 frequently tests the ability to identify the actual workload goal before selecting an Azure service.

Chapter 6: Full Mock Exam and Final Review

This chapter is the bridge between studying and performing. Up to this point, you have reviewed the major AI-900 domains: AI workloads and common solution scenarios, machine learning principles on Azure, computer vision, natural language processing, and generative AI workloads on Azure. Now the objective changes. Instead of learning topics in isolation, you must demonstrate that you can recognize Microsoft-style wording, separate similar Azure services, and choose the answer that best matches the business requirement rather than the most technically impressive option.

The AI-900 exam is designed for broad conceptual understanding, not deep engineering implementation. That means many questions test whether you can identify the right category of AI workload, distinguish between service capabilities, and apply responsible AI principles in business scenarios. A common trap is overthinking the question and assuming advanced design details are required. In most cases, the exam expects you to map a scenario to the most appropriate Azure AI capability using straightforward logic.

This chapter integrates the final lessons of the course: Mock Exam Part 1, Mock Exam Part 2, Weak Spot Analysis, and Exam Day Checklist. The mock exam sections are meant to simulate the mental pattern of the real test. As you work through them, focus less on memorizing isolated facts and more on developing elimination skills. Ask yourself: What exact workload is being described? Which Azure service is purpose-built for that task? Which answer choice sounds plausible but solves a different problem?

You should also treat this chapter as a final review guide. High-scoring candidates usually do three things well. First, they recognize keywords quickly, such as classification, prediction, anomaly detection, object detection, sentiment analysis, translation, question answering, prompt, copilot, and grounding. Second, they avoid distractors built from adjacent services. Third, they manage time calmly and do not let one uncertain question affect the next five. This is especially important on a fundamentals exam, where confidence and pattern recognition matter almost as much as recall.

Exam Tip: On AI-900, when two answers both sound technically possible, choose the one that most directly matches the stated business need with the least complexity. Microsoft fundamentals exams often reward the simplest accurate mapping.

As you read the sections that follow, imagine yourself in the final hour before the exam. You are not trying to relearn everything. You are tightening your decision-making. Use the mock-focused sections to rehearse pacing, use the weak-spot review to repair common gaps, and use the checklist to enter the exam with a clear process. That combination is how candidates turn familiarity into a pass.

Practice note for Mock Exam Part 1, Mock Exam Part 2, Weak Spot Analysis, and the Exam Day Checklist: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 6.1: Full-length AI-900 mock exam instructions and pacing plan
Section 6.2: Mock exam question set covering Describe AI workloads
Section 6.3: Mock exam question set covering ML on Azure, computer vision, and NLP
Section 6.4: Mock exam question set covering Generative AI workloads on Azure
Section 6.5: Final review of high-yield terms, service comparisons, and common distractors
Section 6.6: Exam day readiness checklist, confidence tactics, and next certification steps

Section 6.1: Full-length AI-900 mock exam instructions and pacing plan

Your first goal in a full mock exam is to simulate exam conditions honestly. Sit down without notes, set a timer, and answer in one uninterrupted block if possible. Because AI-900 is a fundamentals exam, many candidates assume pacing will be easy. However, the real challenge is not reading speed but decision quality. Questions often use familiar words in slightly different ways, and that can slow you down if you have not practiced identifying the tested objective quickly.

A practical pacing plan is to divide the exam into three passes. On pass one, answer every question that feels clear within roughly a minute. On pass two, revisit the uncertain questions and eliminate distractors. On pass three, review any flagged items and check for wording such as best, most appropriate, or responsible use. This process keeps one difficult scenario from consuming too much time early in the exam.

Map each question to an objective area. If a scenario is about predicting a numeric value such as future sales, think regression. If it is about deciding between categories such as approved or denied, think classification. If it is about identifying unusual behavior, think anomaly detection. If it describes extracting meaning from text, consider NLP services such as sentiment analysis, key phrase extraction, or language detection. If it asks about images or video, move toward computer vision services. If it describes generating new content from prompts, summarize that as generative AI and then narrow to copilots, Azure OpenAI, or prompt engineering concepts.

Exam Tip: Build a habit of translating business language into exam language. For example, “find suspicious transactions” usually means anomaly detection, not generic prediction; “tag products in photos” suggests image classification or object detection depending on whether location in the image matters.
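The translation habit described above can be rehearsed with a simple study aid. The sketch below is purely illustrative: a toy lookup from scenario cues to workload categories, with cue phrases chosen for this example rather than taken from real exam wording. No Azure service is called.

```python
# Illustrative study aid only: maps example scenario cues to the AI-900
# workload category they usually signal. The cues are simplified paraphrases.
WORKLOAD_CUES = {
    "predict a numeric value": "regression",
    "decide between categories": "classification",
    "identify unusual behavior": "anomaly detection",
    "extract meaning from text": "NLP",
    "analyze images or video": "computer vision",
    "generate new content from prompts": "generative AI",
}

def identify_workload(scenario_cue: str) -> str:
    """Return the workload category for a known cue, or 'unknown'."""
    return WORKLOAD_CUES.get(scenario_cue.lower().strip(), "unknown")

print(identify_workload("Predict a numeric value"))  # regression
```

Building and quizzing yourself with a table like this reinforces the pass-one habit of naming the workload before looking at any answer choice.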

During your mock, do not just score yourself. Record why you missed items. Did you confuse similar services? Did you misread a requirement? Did you choose a technically correct answer that was not the best answer? That error analysis is more valuable than the raw percentage because it reveals exactly what the exam is testing: correct service matching, concept recognition, and careful interpretation of scope.

Section 6.2: Mock exam question set covering Describe AI workloads

This domain is often underestimated because it sounds introductory, but it sets the foundation for many questions across the exam. The test expects you to identify common AI workloads and match them to realistic business scenarios. That includes machine learning, computer vision, natural language processing, conversational AI, anomaly detection, and generative AI. In a mock exam set for this domain, focus on how the scenario is framed. Microsoft often describes the business outcome first and leaves you to infer the workload type.

For example, if a company wants software to read invoices, extract values, and speed up document processing, the tested concept is not generic AI in the abstract. It is document intelligence, optical character recognition, and structured information extraction. If a retailer wants a system to answer customer questions interactively, the workload points to conversational AI or question answering. If a manufacturer wants to detect unusual sensor behavior, that maps to anomaly detection. The exam tests whether you can identify the primary AI workload before choosing any tool.

Common distractors in this area involve overlapping language. Predicting a category is classification, while predicting a continuous number is regression. Detecting unusual cases is not the same as assigning labels. Similarly, a chatbot that responds to user questions is different from a text analytics system that simply extracts sentiment or key phrases from existing text.

  • Classification: assigns items to categories such as spam or not spam.
  • Regression: predicts numeric values such as price or demand.
  • Clustering: groups similar items when categories are not predefined.
  • Anomaly detection: finds unusual patterns or outliers.
  • Computer vision: interprets images or video.
  • NLP: processes and understands human language.
  • Conversational AI: interacts through chat or voice.

Exam Tip: If the question asks what kind of AI workload is being described, answer at the workload level before thinking about specific services. Many wrong answers are valid services for a different workload.

When reviewing your mock responses here, ask whether you selected answers because they sounded modern or because they matched the stated requirement exactly. Fundamentals questions reward clear business-to-capability mapping. The candidate who keeps that discipline avoids many early mistakes.

Section 6.3: Mock exam question set covering ML on Azure, computer vision, and NLP

This section covers one of the highest-yield clusters on the exam because it combines core concepts with Azure service recognition. For machine learning, expect the exam to test basic model types, training concepts, and responsible AI ideas rather than coding details. You should know supervised learning versus unsupervised learning, and recognize that classification and regression are supervised while clustering is unsupervised. You should also understand that training uses data to learn patterns, while inferencing is the use of a trained model to make predictions on new data.
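The training-versus-inferencing distinction above can be made concrete with a deliberately tiny example. This is a toy one-feature classifier invented for illustration, not a real Azure Machine Learning workflow: "training" learns a decision threshold from labeled examples, and "inferencing" applies that threshold to new data.

```python
# Toy illustration of supervised learning: labels are provided at training
# time, and the trained "model" (a threshold) is then used for inference.
def train(examples):
    """'Training': learn a decision threshold from labeled (value, label) pairs."""
    approved = [value for value, label in examples if label == "approved"]
    denied = [value for value, label in examples if label == "denied"]
    return (min(approved) + max(denied)) / 2  # the learned pattern

def predict(threshold, value):
    """'Inferencing': apply the trained model to new, unseen data."""
    return "approved" if value >= threshold else "denied"

# Supervised learning: the category labels are supplied in the training data.
model = train([(30, "denied"), (40, "denied"), (70, "approved"), (80, "approved")])
print(predict(model, 75))  # approved
```

For the exam you never need to write code like this; the point is only that training consumes labeled historical data, while inference scores new records with the result.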

On Azure, the exam commonly expects awareness of Azure Machine Learning as the platform for building, training, and managing models. Do not confuse broad ML platform capabilities with prebuilt Azure AI services. A frequent trap is selecting Azure Machine Learning for a scenario that only needs a prebuilt vision or language feature. If the need is custom model development, Azure Machine Learning becomes the stronger choice. If the need is ready-made image tagging, OCR, or sentiment analysis, the prebuilt Azure AI service is usually the better fit.

In computer vision, separate image classification from object detection. Classification determines what is in an image overall; object detection identifies and locates objects within the image. OCR extracts printed or handwritten text. Face-related scenarios may sound simple, but be careful: the exam may focus on detecting facial attributes or recognizing presence rather than identity-based use cases, especially in the context of responsible AI boundaries.

In NLP, know the difference between sentiment analysis, key phrase extraction, named entity recognition, language detection, translation, speech capabilities, and conversational language understanding. The exam likes to describe a business process in plain language, such as “identify whether customer feedback is positive or negative,” which clearly maps to sentiment analysis. Another common pattern is to mention converting speech to text or text to speech. Do not mix those with translation or chat generation.

Exam Tip: Read nouns and verbs carefully. “Detect,” “classify,” “extract,” “translate,” and “transcribe” signal different services and tasks. Microsoft questions often hinge on one precise verb.

Responsible AI can also appear here. Be ready to recognize fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. If a question asks which principle applies when users should understand how an AI system reaches outcomes, that points to transparency. If it concerns reducing bias across groups, think fairness. These are easy marks if you keep the principles distinct.

Section 6.4: Mock exam question set covering Generative AI workloads on Azure

Generative AI is now a major exam objective, and it is one area where candidates sometimes mix marketing language with tested concepts. The AI-900 exam expects a practical understanding of what generative AI does, how prompts guide outputs, and where Azure OpenAI fits in Azure’s AI ecosystem. Generative AI creates new content such as text, code, or images based on patterns learned from training data. On the exam, the most likely focus is text-oriented use cases: drafting, summarizing, extracting, transforming, and assisting users through copilots.

A copilot is an AI-powered assistant embedded into workflows to help users complete tasks more efficiently. Questions may describe drafting email replies, summarizing long documents, producing meeting summaries, or helping employees search internal knowledge. In those cases, the exam is testing whether you recognize the generative AI workload and understand that copilots are productivity-oriented user experiences built on large language model capabilities.

You should also understand prompt basics. A prompt is the instruction given to the model. Strong prompts are clear, specific, and contextual. While AI-900 is not a prompt engineering exam, it may test whether better instructions improve relevance and reduce ambiguity. Another key idea is grounding, where model responses are constrained or informed by trusted business data. This helps reduce vague or unsupported outputs and is a common business use case on Azure.
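Grounding, as described above, can be pictured as prompt assembly: the instruction sent to the model carries trusted business content alongside the user's question. The sketch below is conceptual only; the function name, policy text, and wording are invented for illustration, and no model is actually called.

```python
# Conceptual sketch of grounding: the prompt constrains the model to
# trusted business data. Hypothetical example; no Azure OpenAI call is made.
def build_grounded_prompt(question: str, trusted_docs: list) -> str:
    """Assemble a prompt whose answer must come from the supplied context."""
    context = "\n".join(f"- {doc}" for doc in trusted_docs)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you do not know.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "How many vacation days do new employees get?",
    ["Policy HR-12: new employees receive 20 vacation days per year."],
)
print(prompt)
```

The exam-relevant idea is simply that the trusted document travels with the question, which is why grounded responses are less likely to be vague or unsupported.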

Azure OpenAI is the Azure service associated with access to advanced generative AI models in an enterprise environment. The exam may contrast it with traditional NLP or search solutions. If the requirement is to generate, summarize, or transform content dynamically based on prompts, Azure OpenAI is a likely fit. If the requirement is to run classic sentiment analysis or entity extraction, Azure AI Language is usually more appropriate.

Exam Tip: Do not assume generative AI is always the answer when a question mentions text. If the task is analysis of existing text, such as sentiment or key phrases, that is usually a language analytics workload, not content generation.

Common distractors include confusing a chatbot with a copilot, or assuming all conversational scenarios require generative AI. Some conversational solutions are rule-based or based on question answering over known content. The correct answer depends on whether the system must generate novel responses, summarize content, or simply route and answer predictable questions.

Section 6.5: Final review of high-yield terms, service comparisons, and common distractors

Your final review should focus on distinctions, because distinctions are where points are won. Start with high-yield terms that repeatedly appear in AI-900 style questions: classification, regression, clustering, anomaly detection, OCR, object detection, sentiment analysis, named entity recognition, translation, speech-to-text, text-to-speech, prompt, grounding, copilot, responsible AI, fairness, transparency, and accountability. If you can define each in one sentence and map each to a basic business scenario, you are in strong shape.

Now compare services and concepts that are often confused. Azure Machine Learning is for building and managing machine learning solutions; Azure AI services provide prebuilt intelligence for common tasks. Computer vision tasks differ from language tasks even when both involve extraction. OCR pulls text from images, while key phrase extraction pulls important phrases from text that is already available digitally. Translation converts between languages; speech transcription converts spoken words into text in the same language unless translation is explicitly requested.

Generative AI versus traditional NLP is another important comparison. Traditional NLP usually analyzes or labels existing text. Generative AI creates or reformulates content. If the requirement is “determine whether reviews are positive,” that is sentiment analysis. If it is “create a concise summary of the reviews,” that is generative AI. If it is “extract product names and company names,” that is named entity recognition.

  • Best fit beats broad capability.
  • Prebuilt service beats custom development when the question asks for the simplest implementation.
  • Analyze existing content versus generate new content is a key decision point.
  • Category prediction versus numeric prediction is a standard ML distinction.
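The decision points above can be rehearsed as a short checklist. The helper below is a simplified study aid invented for this review, not official exam logic: it asks the two highest-yield questions in order and returns the usual answer direction.

```python
# Simplified study aid mirroring the decision points above; not exam logic.
def choose_approach(generates_new_content: bool, needs_custom_model: bool) -> str:
    """Walk the two key decision points: generate vs. analyze, custom vs. prebuilt."""
    if generates_new_content:
        return "generative AI (e.g., Azure OpenAI)"
    if needs_custom_model:
        return "Azure Machine Learning (custom model)"
    return "prebuilt Azure AI service"

# Drafting a summary is content generation, so generative AI comes first.
print(choose_approach(generates_new_content=True, needs_custom_model=False))
```

Running a handful of practice scenarios through questions like these, in this order, is usually enough to eliminate the "technically valid but mismatched" distractor.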

Exam Tip: Beware of answer choices that are true statements but do not answer the question asked. Microsoft often includes one choice that is technically valid in general yet mismatched to the scenario.

Finally, review your weak spots from the mock exam. Group missed items by pattern rather than by chapter. For example: “I confuse object detection and image classification,” or “I choose ML platform answers when a prebuilt service would do.” This targeted review is far more effective than rereading every chapter.

Section 6.6: Exam day readiness checklist, confidence tactics, and next certification steps

On exam day, your objective is not to feel perfect. It is to execute a repeatable process calmly. Begin with readiness basics: confirm your exam appointment, identification requirements, testing environment, internet stability if online, and any software checks required by the proctoring platform. Remove preventable stress before the exam starts. A surprising number of candidates lose focus because of logistics, not content.

Use a short confidence routine before beginning. Remind yourself that AI-900 is a fundamentals exam, and the test rewards recognition of common workloads and Azure service purposes. You do not need to design complex architectures. You need to read carefully, identify the requirement, and select the best match. When anxiety rises, return to that simple framework.

A practical exam day checklist includes reviewing high-yield comparisons, sleeping adequately, arriving early, and planning your pacing. During the test, read the last sentence of the question first if you tend to get lost in scenario text. Then read the full question and underline the core task mentally: classify, detect, extract, summarize, predict, translate, or generate. This keeps you aligned with what the item is really measuring.

Exam Tip: If you are stuck between two answers, ask which one most directly satisfies the stated business need with the least assumption. Fundamentals exams rarely require you to infer hidden requirements.

After the exam, think ahead. If you pass, you have established a strong baseline in Azure AI concepts and can consider role-based or more technical certifications depending on your goals. If your role is business-oriented, this credential validates practical AI literacy. If you plan to continue, courses related to Azure AI Engineer or data and machine learning pathways may be the next step. If you do not pass on the first attempt, use the score report and your mock-exam notes to target weak domains. Many successful candidates improve simply by sharpening service comparisons and question interpretation.

The final message of this chapter is simple: confidence comes from pattern recognition. You have already studied the content. Use the mock exam process to recognize how the exam asks about that content, trust the fundamentals, and choose the answer that matches the requirement most precisely.

Chapter milestones
  • Mock Exam Part 1
  • Mock Exam Part 2
  • Weak Spot Analysis
  • Exam Day Checklist
Chapter quiz

1. A company wants to build a solution that reads customer reviews and determines whether each review expresses a positive, negative, or neutral opinion. Which Azure AI capability should you identify as the best match?

Show answer
Correct answer: Sentiment analysis in Azure AI Language
Sentiment analysis in Azure AI Language is the correct choice because the requirement is to evaluate opinion in text. Object detection is used to locate and classify items in images, so it does not apply to customer review text. Anomaly detection is for identifying unusual patterns in data such as metrics or sensor readings, not for determining emotional tone in written language.

2. You see the following exam question: 'A retailer wants to identify which product category a new sales record belongs to based on historical labeled examples.' Which AI concept should you choose?

Show answer
Correct answer: Supervised classification
Supervised classification is correct because the scenario describes using historical labeled examples to assign a category to new data. Unsupervised learning is used when labels are not provided and the goal is often clustering or pattern discovery. OCR is a computer vision capability for extracting text from images, which is unrelated to categorizing sales records from labeled data.

3. A business user asks for an AI solution that can answer employee questions by using the company's internal policy documents as source material. On AI-900, which concept most directly matches this requirement?

Show answer
Correct answer: Grounded generative AI that uses enterprise data
Grounded generative AI is the best answer because the requirement is to answer questions using the company's own documents rather than generating responses without reference material. Facial recognition addresses identity verification, which is a different workload. Speech synthesis converts text to spoken audio and does not provide question answering over internal knowledge sources.

4. During a mock exam, you notice two answer choices both seem technically possible. Based on Microsoft fundamentals exam strategy emphasized in final review, what should you do?

Show answer
Correct answer: Choose the option that most directly meets the stated business requirement with the least complexity
The correct strategy is to choose the answer that most directly matches the business need with the least complexity. AI-900 focuses on straightforward mapping of scenarios to appropriate Azure AI capabilities, not advanced engineering design. Choosing the most complex architecture is a common trap. Skipping every ambiguous question is poor exam technique and does not reflect the expected decision-making approach.

5. A team is reviewing missed practice questions and finds that they often confuse translation, sentiment analysis, and key phrase extraction. Which final-review action is most appropriate?

Show answer
Correct answer: Perform weak spot analysis and compare similar Azure AI Language capabilities by keyword and use case
Weak spot analysis is correct because the issue is confusion between closely related capabilities, which is exactly the type of gap final review should address. Fundamentals exams do test similar services with carefully worded distractors, so ignoring the pattern would be a mistake. Memorizing service names alone is insufficient because AI-900 questions are scenario-based and require mapping business wording to the correct capability.