AI Certification Exam Prep — Beginner
200+ AZ-900 questions with clear rationales to help you pass fast
This course is a focused exam-prep blueprint for the Microsoft AZ-900: Azure Fundamentals certification. If you’re new to certifications (or new to Azure), you’ll get a structured path that mirrors the official exam domains: Describe cloud concepts, Describe Azure architecture and services, and Describe Azure management and governance. The goal is simple: help you build the right mental models, practice with realistic questions, and learn why each answer is correct so you can perform confidently on exam day.
Instead of overwhelming you with product deep-dives, this blueprint emphasizes the AZ-900 skills Microsoft actually tests: definitions, comparisons (for example IaaS vs PaaS vs SaaS), and scenario-based choices (for example which service best fits a requirement). Each practice set is organized to reinforce one objective area, making it easier to spot patterns and close gaps fast.
Chapter 1 orients you to the AZ-900 exam: registration, scheduling, scoring expectations, common question formats, and a study strategy that fits busy schedules. You’ll also learn how to use the test bank effectively (timed vs tutor mode) and how to keep an error log that improves results quickly.
Chapters 2–5 map directly to Microsoft’s official domains. You’ll start with Describe cloud concepts (cloud models, shared responsibility, benefits), then move into Describe Azure architecture and services (regions, subscriptions, networking, compute, storage, identity), and finish with Describe Azure management and governance (cost management, monitoring, policy, compliance). Each chapter includes dedicated practice milestones so you can verify understanding immediately.
Chapter 6 is your full mock exam and final review. You’ll take a mixed-domain, exam-style practice test, analyze weak areas by objective, and follow a final checklist that covers exam-day readiness for both test center and online proctoring.
To begin, create your free Edu AI account and follow the chapter sequence from orientation to targeted practice to the final mock exam. Register free or browse all courses to compare learning paths.
By the end, you’ll be able to explain key cloud and Azure fundamentals in plain language, choose the right Azure services for basic scenarios, and approach AZ-900 questions with a repeatable method—exactly what Microsoft expects from an Azure Fundamentals candidate.
Microsoft Certified Trainer (MCT)
Jordan Whitaker is a Microsoft Certified Trainer who helps beginners pass Microsoft certification exams through clear explanations and exam-first practice. He has coached hundreds of learners on Azure Fundamentals with a focus on mapping every question back to official objectives.
AZ-900 (Microsoft Azure Fundamentals) is often described as “foundational,” but the exam is not casual. It tests whether you can recognize core cloud ideas, navigate Azure’s major building blocks, and interpret governance and cost controls at a conceptual level. This chapter orients you to the exam’s format, how to schedule it, what your score means, and how to build a focused 14-day plan tied to the actual objective domains. You’ll also learn how to use this practice test bank effectively—when to go timed, when to go tutor mode, and how to convert mistakes into predictable points on exam day.
The most common trap for first-time candidates is studying Azure like a product tour: memorizing service names without understanding what problem each service solves. The exam rewards “fit-for-purpose” thinking: given a scenario, can you choose the right model (IaaS/PaaS/SaaS), the right resilience concept (regions/availability zones), the right identity principle (authentication vs authorization), or the right governance tool (Policy vs RBAC vs Blueprints/initiatives)? This chapter sets the strategy so the rest of the course becomes targeted practice—not random repetition.
Practice note for Understand the AZ-900 exam format and question styles: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Registering with Microsoft and scheduling with Pearson VUE: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Scoring, passing expectations, and what the score report means: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Build your 14-day study plan using domain mapping: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for How to use this test bank (timed vs tutor mode): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
AZ-900 validates baseline literacy in cloud concepts and Azure’s core services. It is designed for candidates who need a broad understanding of cloud and Azure without being required to deploy production workloads. Typical audiences include business stakeholders, students, new IT staff, project managers, sales/partner roles, and technical professionals transitioning into cloud. The exam is intentionally breadth-first: you must recognize what each service category does and why you’d choose it, more than you must know “how to click through” the portal.
On the test, “fundamentals” does not mean “definition-only.” Expect questions that compare options and ask you to select the best match for a requirement (cost, scalability, security, compliance, or management). Many wrong answers are “almost true.” For example, candidates confuse what Microsoft manages versus what you manage under the shared responsibility model, or they mix up governance tools (Azure Policy) with access control tools (RBAC). A strong AZ-900 candidate reads the scenario, identifies the objective being tested, and eliminates distractors that belong to a different domain.
Exam Tip: When you feel two answers look correct, ask “Which objective is this question targeting?” If it’s governance, the best answer is often a governance control (Policy/Locks/Cost Management) rather than a security feature or a compute service.
In this course, you’ll use practice questions to build pattern recognition: mapping common requirements (availability, latency, identity, compliance, cost predictability) to the correct Azure concept or service family. That mapping skill is the fastest path to consistent scores.
Scheduling the AZ-900 is part of your study strategy because it creates a fixed deadline. You typically register with your Microsoft account, then schedule the exam through Pearson VUE (online proctored or in a test center, depending on availability). Plan early: popular time slots fill quickly, and last-minute rescheduling can disrupt your revision cycle.
Decide between online and test-center delivery based on your environment and risk tolerance. Online proctoring requires a compliant room setup, stable internet, a reliable webcam, and adherence to strict rules (desk cleared, no interruptions). Test centers reduce home-environment risk but introduce travel and check-in constraints. Either way, aim to schedule your exam at a time of day when you reliably perform well on timed practice (many candidates score higher mid-morning than late evening).
If you need accommodations, request them well ahead of time—do not assume you can add them the week of your exam. Build accommodation lead time into your 14-day plan so your test date remains realistic.
Retake policies can vary over time, so confirm the current rules on Microsoft Learn/Microsoft Certification pages before committing to a timeline. Your plan should assume you want to pass on the first attempt, but it should also include a “contingency week” after the exam date for continued light review and rapid retake readiness if needed.
Exam Tip: Schedule the exam first, then build your plan backward. Candidates who “study until ready” often drift into unfocused reading; candidates with a date convert learning into measurable practice milestones.
Microsoft exams commonly report scores on a 1–1000 scale with a published passing threshold (often 700), but what matters is not the raw number—it’s how consistently you can answer across objective areas. Your score report typically shows performance by skill domain, which is crucial for targeted remediation. A frequent misconception is that a high score in one domain “covers” weak performance in another. In practice, persistent weaknesses in a single domain will keep resurfacing in new question variations.
Expect multiple-choice and multiple-response items, plus scenario-based formats where the question stem is longer and requires selecting the best option from similar services. Read instructions carefully: “Select one” vs “Select all that apply” changes your approach. A common trap is over-selecting options because they are true statements in isolation, even if they do not satisfy the requirement in the scenario.
Time management is foundational. You must balance careful reading with forward momentum. Build a habit: (1) identify the domain and key requirement words (cost-effective, least administrative effort, high availability, compliant), (2) eliminate options that are the wrong category (governance vs identity vs compute), then (3) pick the best remaining match. If you get stuck, mark and move—your goal is to secure easy points first and return later with fresh eyes.
Exam Tip: Watch for “least effort,” “most cost-effective,” and “best” language. These words signal that more than one option could work, but only one aligns with the optimization constraint.
In this test bank, use timed sessions to simulate pressure and tutor mode to dissect why distractors were tempting. The combination builds both speed and accuracy.
Your study plan must mirror the exam’s objective map. AZ-900 clusters around three themes that match this course’s outcomes: (1) cloud concepts, (2) Azure architecture and services, and (3) Azure management and governance. Treat each theme as a “bucket,” then practice identifying which bucket a question belongs to before you even look at the answer choices.
Describe cloud concepts includes cloud models (public/private/hybrid), consumption-based pricing, scalability/elasticity, and shared responsibility. The exam frequently tests whether you can distinguish IaaS vs PaaS vs SaaS and recognize the benefits of cloud computing (reduced CapEx, global reach, rapid provisioning) in scenario form. Trap: candidates memorize definitions but miss the implication—e.g., PaaS reduces management overhead compared to IaaS, but you still control application logic and data.
Describe Azure architecture and services focuses on the building blocks: regions, region pairs, availability zones, subscriptions, resource groups, and core service families (compute, networking, storage, identity). Expect conceptual mapping: when should you use a VM vs a managed platform? What is the purpose of VNets, VPN gateways, or storage redundancy options? Trap: mixing scope boundaries—regions vs availability zones vs data centers—when a question asks about resiliency.
Describe Azure management and governance covers cost management, monitoring, security posture, and governance controls like RBAC, Azure Policy, resource locks, tags, and compliance offerings. Trap: confusing “who can do what” (RBAC) with “what should be deployed” (Policy). Another trap is assuming security tools automatically enforce governance; often, governance is about standardizing and preventing drift.
Exam Tip: Create a one-page “domain map” and label every missed practice question with the domain and subtopic. Your goal is to reduce misses caused by misclassification, not just lack of knowledge.
Practice tests are not just measurement—they are training. The highest ROI habit is an error log: for every missed (or guessed) question, record (1) the objective domain, (2) the concept you misunderstood, (3) the keyword in the stem that should have guided you, and (4) the rule of thumb you will use next time. This turns random errors into a shrinking set of known traps.
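As a lightweight sketch of the error log described above (the field names and entries are our own illustration, not part of any official tool), each miss becomes a structured record you can aggregate by domain to decide where to focus next:

```python
from collections import Counter

# Illustrative error-log entries following the four fields described above:
# domain, misunderstood concept, guiding keyword, and a rule of thumb.
error_log = [
    {"domain": "governance", "concept": "Policy vs RBAC",
     "keyword": "enforce standards", "rule": "enforce -> Policy; permissions -> RBAC"},
    {"domain": "cloud concepts", "concept": "elasticity vs scalability",
     "keyword": "scale back down", "rule": "auto add AND remove -> elasticity"},
    {"domain": "governance", "concept": "locks vs tags",
     "keyword": "prevent deletion", "rule": "prevent deletion -> resource lock"},
]

# Aggregate misses by domain to pick the next study day's focus.
misses_by_domain = Counter(entry["domain"] for entry in error_log)
print(misses_by_domain.most_common())
```

The point of the structure is the aggregation step: once every miss carries a domain label, your weakest objective area falls out of a one-line count instead of a gut feeling.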
Use spaced repetition to revisit the same concepts over multiple days rather than cramming. A practical 14-day plan looks like this: Days 1–2 orientation + baseline diagnostic; Days 3–10 rotate domain-focused practice (cloud concepts one day, architecture/services the next, governance the next) with short review sessions; Days 11–13 mixed sets under time; Day 14 light review + rest. The key is domain mapping: you are not “doing questions,” you are systematically closing objective gaps.
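The 14-day rotation above can be written down mechanically. This sketch (the labels and helper name are assumptions for illustration, not an official schedule) shows how Days 3–10 cycle through the three domains:

```python
# Hypothetical generator for the 14-day plan described above.
DOMAINS = ["cloud concepts", "architecture & services", "management & governance"]

def build_plan():
    plan = {}
    for day in range(1, 15):
        if day <= 2:
            plan[day] = "orientation + baseline diagnostic"
        elif day <= 10:
            # Days 3-10: rotate one domain per day, repeating the cycle
            plan[day] = f"focused practice: {DOMAINS[(day - 3) % 3]}"
        elif day <= 13:
            plan[day] = "mixed timed sets"
        else:
            plan[day] = "light review + rest"
    return plan

for day, task in build_plan().items():
    print(f"Day {day:2d}: {task}")
```

Because the rotation repeats, each domain gets at least two dedicated days before the mixed timed sets begin, which is the spaced-repetition effect the plan is after.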
Use this test bank in two modes. Tutor mode is for learning: take your time, read explanations, and update your error log immediately. Timed mode is for performance: simulate exam pacing, practice skipping/returning, and test your ability to identify the domain quickly. Candidates who only do tutor mode often feel confident but underperform when rushed; candidates who only do timed mode often repeat the same misconception without fixing it.
Exam Tip: Treat “confident wrong” as the most valuable data point. It indicates a flawed mental model (e.g., Policy vs RBAC) that will cause repeated misses until corrected.
AZ-900 can be passed without extensive hands-on Azure use if you learn to reason from first principles. The exam rarely requires portal navigation knowledge; it rewards conceptual correctness. When you see a scenario, translate it into requirements and constraints. Ask: Is this about cost optimization, access control, compliance enforcement, resiliency, connectivity, or service model choice? Then choose the service or concept that directly addresses that requirement with the least extra assumptions.
Build “if requirement → then concept” rules. Example patterns: if the question is about enforcing standards across deployments, think Azure Policy; if it’s about permissions to resources, think RBAC; if it’s about organizing resources for lifecycle management, think resource groups; if it’s about geographic resiliency, think regions/availability zones/region pairs; if it’s about reducing management of OS and runtime, think PaaS over IaaS. These rules let you answer accurately even if you have never deployed the service yourself.
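The "if requirement → then concept" rules above are literally a lookup table. A minimal sketch (the trigger keywords and helper name are our own illustration, not exhaustive or official) makes the pattern explicit:

```python
# Ordered trigger-keyword -> concept rules mirroring the examples above.
RULES = [
    ("enforce", "Azure Policy"),
    ("permission", "RBAC"),
    ("lifecycle", "resource groups"),
    ("resilien", "regions / availability zones / region pairs"),
    ("manage the os", "IaaS (or PaaS if avoiding OS management)"),
]

def suggest_concept(stem: str) -> str:
    """Return the concept for the first trigger found in the question stem."""
    stem_lower = stem.lower()
    for trigger, concept in RULES:
        if trigger in stem_lower:
            return concept
    return "no rule matched"

print(suggest_concept("You must enforce naming standards across deployments"))
print(suggest_concept("Grant a user permission to manage a storage account"))
```

A naive substring match is enough for a study aid; the value is in forcing yourself to articulate each rule as a single trigger-to-concept pair you can recall under time pressure.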
Also practice eliminating distractors by category. If the stem is governance-focused, compute answers are usually wrong. If it’s identity-focused, storage redundancy options are usually irrelevant. This “category filter” is a lab-free substitute for experience: you’re using the exam blueprint as your compass.
Exam Tip: Don’t chase unfamiliar service names. Anchor on the concept being tested (identity, governance, resiliency, pricing model). Microsoft often includes plausible-sounding options to test whether you can stay aligned with the requirement.
By combining scenario reasoning with objective mapping and deliberate practice, you can convert this test bank into a structured pathway to passing—without needing a full lab environment.
1. You are starting AZ-900 preparation and want your study plan to be aligned to what Microsoft actually measures on the exam. Which approach should you use to build a focused 14-day plan?
2. You want to take AZ-900 online with a proctor. Which combination correctly describes the typical registration and scheduling flow?
3. After completing the AZ-900 exam, you receive a score report. Which statement best reflects how to interpret the score?
4. You are using a practice test bank to prepare. Your exam is in 10 days and you consistently miss questions due to running out of time. Which practice mode should you prioritize and why?
5. A candidate studies by memorizing Azure service names but struggles on scenario questions such as choosing between IaaS, PaaS, and SaaS. Which study adjustment best targets what AZ-900 questions typically assess?
This chapter targets the AZ-900 “Describe cloud concepts” objective set: cloud models and service types, shared responsibility, and the core benefits of cloud computing. On the exam, most misses come from confusing terminology (elasticity vs scalability, high availability vs disaster recovery), or from picking an answer that is technically true but not the “best fit” for the scenario. Your job is to match the keyword in the question to the concept Microsoft is testing.
As you read, keep a mental checklist: (1) pricing model (CapEx vs OpEx, consumption-based), (2) what you manage vs what the provider manages (IaaS/PaaS/SaaS responsibility), (3) where it runs (public/private/hybrid/multi-cloud), and (4) how it stays up (availability, fault tolerance, DR). Then, in Practice Set A you’ll apply these patterns repeatedly.
The sections below follow the exact conceptual buckets you’ll see in AZ-900 question stems and answer choices. Use them as a decoding guide: when a stem says “avoid upfront costs,” think OpEx; when it says “no server management,” think PaaS/serverless; when it says “keep data on-prem but use cloud analytics,” think hybrid.
Practice note for Master cloud models and service types (IaaS/PaaS/SaaS): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Explain cloud benefits and economics (CapEx vs OpEx): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Apply scalability, elasticity, and reliability concepts to scenarios: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Practice Set A: Cloud Concepts (50 questions): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Cloud computing is the delivery of computing services (compute, storage, networking, databases, analytics, AI, and more) over the internet with on-demand access and pay-as-you-go economics. AZ-900 expects you to recognize cloud as a shift from building and owning infrastructure to consuming services. The exam commonly frames this as a financial decision: do you want predictable capital purchases or flexible operational spending?
Economies of scale is the idea that large providers can buy hardware, power, and connectivity in bulk and run highly optimized datacenters, lowering the per-unit cost. In exam scenarios, economies of scale is the “why” behind cloud cost efficiency, but it does not automatically mean “cheaper” for every workload. Your answer should align to the question’s stated goal: cost optimization, speed, or reduced management.
Consumption-based pricing means you pay for what you use (e.g., per hour/second of compute, per GB stored, per request). This is strongly tied to OpEx. If a stem includes “seasonal traffic,” “temporary project,” “spiky demand,” or “pilot,” consumption-based pricing is usually central to the correct choice.
Exam Tip: If an answer mentions “avoid upfront costs,” “pay only for what you use,” or “stop paying when you shut it down,” that’s consumption-based pricing/OpEx. Don’t confuse this with “reserved capacity” concepts (discounts for commitment) unless the question explicitly mentions committing to 1–3 years.
Common trap: assuming “cloud = always cheaper.” The exam’s safer pattern: cloud reduces upfront cost and increases cost flexibility, but total cost depends on usage, architecture, and governance.
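A back-of-envelope comparison makes the "not always cheaper" point concrete. All prices below are made-up assumptions for illustration, not real Azure rates:

```python
# Assumed, illustrative prices (NOT real Azure rates).
HOURLY_RATE = 0.10          # pay-as-you-go cost per hour of compute
FIXED_MONTHLY = 60.00       # always-on cost for equivalent capacity

hours_used_per_month = 200  # e.g. a pilot running only during business hours

consumption_cost = hours_used_per_month * HOURLY_RATE
print(f"consumption: ${consumption_cost:.2f} vs fixed: ${FIXED_MONTHLY:.2f}")

# Break-even utilization: above this, the always-on option is cheaper.
break_even_hours = FIXED_MONTHLY / HOURLY_RATE
print(f"break-even at {break_even_hours:.0f} hours/month")
```

At 200 hours of use, pay-as-you-go wins; at steady 24/7 utilization (roughly 730 hours a month) it would lose, which is exactly why the exam's safer framing is cost flexibility, not automatic savings.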
AZ-900 heavily tests your ability to map a requirement to the right service model by asking: “Who manages what?” The three core service models form a spectrum from most customer control (IaaS) to least (SaaS). In many questions, two options could work technically; the best answer is the one that meets requirements with the least management overhead.
IaaS (Infrastructure as a Service) provides virtualized compute, storage, and networking. You manage the OS, patching, runtime, and your applications. Choose IaaS when you need maximum control, lift-and-shift migration, custom OS configurations, or legacy apps that don’t fit managed platforms.
PaaS (Platform as a Service) provides a managed platform (OS, runtime, scaling constructs) so you focus on deploying code and data. Choose PaaS when the scenario stresses reduced admin effort, faster development, built-in scaling, or managed database/app hosting.
SaaS (Software as a Service) is a complete application delivered to end users (for example, email/CRM/collaboration tools). You manage configuration and users, not infrastructure or application code. Choose SaaS when the requirement is business capability, not custom development.
Exam Tip: When the stem says “the team wants to focus on application development and not manage servers,” eliminate IaaS first. When it says “must control the OS” or “requires specific OS-level components,” eliminate PaaS/SaaS first.
Common trap: thinking PaaS means “no responsibility.” You still own your data, identities, access control choices, and application logic/configuration—even if the provider patches the underlying platform.
Deployment models describe where the cloud resources run and who has access. AZ-900 questions often embed regulatory constraints, data residency, latency, or existing datacenter investments, then ask you to select the correct model. Your approach: identify the constraint first, then choose the model that satisfies it with minimal complexity.
Public cloud (e.g., Azure) means resources are owned/operated by the provider and delivered over the internet to many customers with logical isolation. Public cloud is typically the default answer when the question emphasizes speed, global reach, and avoiding datacenter management.
Private cloud is a cloud environment dedicated to a single organization (on-premises or hosted). It can meet strict control requirements, but it usually reduces the elasticity and economies-of-scale advantages. On AZ-900, private cloud tends to appear when the stem insists on isolated infrastructure due to policy or specialized hardware needs.
Hybrid cloud combines on-premises (or private) resources with public cloud services, enabling scenarios like keeping sensitive data on-prem while using cloud compute/analytics, or bursting into the cloud during peak demand. If the stem says “some resources must remain on-prem” or “integrate with existing datacenter,” hybrid is usually the best match.
Multi-cloud means using multiple public cloud providers. This is commonly driven by vendor strategy, regional availability, or specific best-of-breed services. The exam tests that multi-cloud is not the same as hybrid: hybrid mixes on-prem and cloud; multi-cloud mixes cloud providers.
Exam Tip: If you see “on-prem + cloud,” think hybrid. If you see “Azure + another cloud provider,” think multi-cloud. If the question only says “cloud,” the default is usually public cloud.
Common trap: assuming multi-cloud automatically improves availability. It can, but it adds complexity and requires careful design; don’t choose it unless the stem hints at avoiding vendor lock-in or needing services/regions from different providers.
Reliability vocabulary is a frequent AZ-900 discriminator. The exam expects you to select the term that precisely matches the outcome described. Start by translating the stem into “prevent downtime” vs “recover from downtime,” then decide which concept fits.
High availability (HA) focuses on keeping services running by minimizing downtime through redundancy and quick failover within a region or across zones. HA is about meeting uptime goals during expected component failures.
Fault tolerance is stronger: the system continues operating even when components fail, often through additional redundancy and design patterns that avoid single points of failure. In exam language, fault tolerance implies the workload is built to survive failures with minimal or no interruption.
Disaster recovery (DR) is about restoring service after a major outage (region-wide failure, significant incident). DR is commonly tied to backups, replication to a secondary site/region, and defined objectives: RTO (Recovery Time Objective, how quickly service must be restored) and RPO (Recovery Point Objective, how much data loss, measured in time, is acceptable).
Global reach is the cloud’s ability to deploy services near users worldwide, improving latency and supporting business continuity. On the exam, when you see “users in multiple countries,” “serve customers globally,” or “deploy near customers,” global reach is the benefit being tested.
Exam Tip: If the stem mentions “restore after outage,” “secondary region,” “backup,” or “RTO/RPO,” pick disaster recovery. If it mentions “minimize downtime” or “redundant components,” pick high availability (or fault tolerance if the wording implies continuous operation through failure).
Common trap: mixing HA with DR. HA reduces the chance of downtime; DR assumes downtime can happen and focuses on recovery. Many answers sound similar—anchor on the stem’s verbs: “prevent” vs “recover.”
This section is a high-yield AZ-900 area because the terms are easy to confuse and frequently appear as near-synonyms in distractor answers. The exam often provides a scenario (holiday traffic, rapid growth, new app launch) and asks which cloud characteristic is being described.
Scalability is the ability to increase resources to meet demand. It can be vertical (scale up: bigger VM) or horizontal (scale out: more instances). Scalability doesn’t necessarily imply you scale back down.
Elasticity is the ability to automatically add and remove resources as demand changes. Elasticity is the best match for “spiky” or “unpredictable” workloads and is tightly connected to consumption-based cost savings.
Agility is the speed of provisioning and experimenting—launching environments quickly, iterating faster, and shortening time-to-market. When the stem emphasizes “rapidly deploy,” “test quickly,” or “fast innovation,” agility is the concept.
Serverless (a common exam term) means you don’t manage server infrastructure; you run code in response to events and typically pay per execution. It is not “no servers exist,” but “no server management by you.” Serverless is often positioned as the best choice when the stem mentions event-driven processing, short-lived tasks, or unpredictable demand.
Exam Tip: If the key detail is “automatically scales to match demand,” that’s elasticity. If the key detail is “can handle more load by adding resources,” that’s scalability. If the key detail is “deploy quickly,” that’s agility. If the key detail is “run code without managing servers,” that’s serverless.
Common trap: selecting scalability when the question explicitly includes “and scale back down” or “pay only during execution.” Those phrases are pointing you to elasticity or serverless.
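The keyword-to-characteristic decoding in the tip above can be drilled as a simple lookup. This sketch (the phrase list and function name are illustrative only) checks the most specific phrases first so "scale back down" wins over plain "adding resources":

```python
# Ordered stem-phrase -> cloud characteristic pairs from the tip above.
KEYWORDS = [
    ("automatically scales", "elasticity"),
    ("scale back down", "elasticity"),
    ("pay only during execution", "serverless"),
    ("without managing servers", "serverless"),
    ("adding resources", "scalability"),
    ("deploy quickly", "agility"),
]

def classify(stem: str) -> str:
    """Return the characteristic signaled by the first matching phrase."""
    stem_lower = stem.lower()
    for phrase, concept in KEYWORDS:
        if phrase in stem_lower:
            return concept
    return "unclassified"

print(classify("The app automatically scales to match demand"))
print(classify("Run code without managing servers"))
```

Ordering the list from most specific to most generic mirrors how you should read stems: the qualifying phrase ("and scale back down") overrides the generic one ("handle more load").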
The shared responsibility model is one of the most tested AZ-900 ideas because it explains nearly every security-and-compliance “who is responsible?” question. The provider is responsible for security of the cloud (physical datacenters, physical network, and foundational platform). The customer is responsible for security in the cloud (data, identities, access, configurations), with the exact split depending on IaaS/PaaS/SaaS.
In IaaS, you typically manage the OS, patches, network controls you configure, and application security. In PaaS, the provider manages more (OS/runtime), while you still manage data, identities, and application-level security. In SaaS, the provider manages nearly everything except tenant configuration, user access, and your data governance choices.
Security boundaries on AZ-900 are often tested through “what is isolated?” thinking. In public cloud, customers share underlying infrastructure but remain logically isolated by strong tenant boundaries (identity, authorization, virtualization, and networking segmentation). The exam does not require deep technical proofs—just that you understand “shared infrastructure, isolated workloads.”
Trust model basics means understanding that cloud security is a partnership: the provider offers controls and compliance attestations, but you must implement least privilege, strong authentication, data classification, and secure configuration. If a stem implies a breach due to misconfiguration (open storage, overly permissive access), that is typically the customer side of responsibility.
Exam Tip: If the question asks about physical security, datacenter access, or hardware disposal, the provider is responsible. If it asks about user permissions, data classification, or configuring network rules, the customer is responsible—even in PaaS/SaaS.
Common trap: assuming the provider handles “security” entirely in SaaS. Providers secure the service, but you still must secure identities (e.g., strong passwords/MFA), manage users, and protect the data you put into the service.
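One way to drill the IaaS/PaaS/SaaS split is to treat it as a lookup table. The table below is a simplified memorization aid, not an authoritative Microsoft responsibility matrix — real splits have more rows and nuance:

```python
# Simplified study table for the shared responsibility model.
# A memorization aid only — the real split has more detail per service.

RESPONSIBILITY = {
    "physical_datacenter":   {"IaaS": "provider", "PaaS": "provider", "SaaS": "provider"},
    "os_patching":           {"IaaS": "customer", "PaaS": "provider", "SaaS": "provider"},
    "application":           {"IaaS": "customer", "PaaS": "customer", "SaaS": "provider"},
    "identities_and_access": {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
    "data":                  {"IaaS": "customer", "PaaS": "customer", "SaaS": "customer"},
}

def who_is_responsible(task, model):
    return RESPONSIBILITY[task][model]

# The bottom two rows never change: identities and data stay with the
# customer even in SaaS — exactly the trap described above.
print(who_is_responsible("data", "SaaS"))         # customer
print(who_is_responsible("os_patching", "IaaS"))  # customer
```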
1. A company has an on-premises datacenter and must keep customer data stored locally due to regulatory requirements. They want to use Azure to run advanced analytics on that data with minimal changes to their current environment. Which cloud model best fits this requirement?
2. A startup wants to deploy a web application without managing the underlying operating system, patching, or server hardware. They only want to focus on application code and deployment. Which cloud service type should they choose?
3. A company is budgeting for a migration to Azure. They want to avoid large upfront hardware purchases and instead pay only for what they consume each month. Which cost model does this describe?
4. An e-commerce site experiences predictable traffic increases every weekend. The company plans to add additional compute resources before the weekend starts and remove them afterward to reduce costs. Which cloud concept is being demonstrated?
5. A team deploys virtual machines in Azure using IaaS. They ask who is responsible for applying security patches to the guest operating system on those virtual machines. Under the shared responsibility model, who is responsible?
This chapter aligns to the AZ-900 objective area “Describe Azure architecture and services.” Expect questions that test recognition and selection: which Azure construct provides geographic resiliency, which scope applies policy, and which networking service fits a scenario. AZ-900 rarely demands deep configuration steps; instead, it checks whether you can map a requirement (latency, resiliency, isolation, governance boundary, connectivity type) to the right Azure building block.
You’ll work through Azure geography (regions, region pairs, availability zones), resource organization (management groups, subscriptions, resource groups), and core networking (VNets, VPN, ExpressRoute, DNS). The chapter ends with guided practice framing: how to choose the right component in mini-case scenarios—the same mental skill used in the “Scenario Drill” on the exam.
Exam Tip: When an item looks “too detailed” (port numbers, routing protocols, step-by-step wizards), it’s usually beyond AZ-900. Focus on purpose, scope, and when-to-use comparisons.
Practice note (applies to every milestone in this chapter — Azure geography; core resources; core networking; Practice Set B; and the Scenario Drill): document your objective, define a measurable success check, and run a small experiment before scaling up. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
AZ-900 expects you to speak Azure’s “location language.” A region is a set of datacenters deployed within a latency-defined perimeter. When you deploy a resource, you typically choose a region (for example, East US). A geography is a discrete market, often aligned to data residency and compliance boundaries (for example, US, Europe, Asia Pacific). Geographies contain one or more regions.
Region pairs are a key resiliency concept. Each Azure region is paired with another region in the same geography (for example, West Europe is paired with North Europe). The pairing is used by Microsoft to prioritize recovery in large-scale outages and to reduce the risk of simultaneous impact from regional events. On exams, region pairs often appear when the question mentions “disaster recovery across regions,” “planned maintenance,” or “service updates.”
Sovereignty also shows up: certain cloud environments are designed for regulatory or national requirements (for example, Azure Government, Azure China). The exam typically tests recognition that these environments exist and that they address compliance/sovereignty needs rather than performance or cost optimization.
Exam Tip: If the scenario says “must keep data within a country/market boundary,” think geography/sovereign cloud. If it says “survive a region-wide outage,” think region pair (or multi-region design).
AZ-900 tests availability at a concept level: can you distinguish availability zones from availability sets, and can you pick the right one given a requirement? Availability zones are physically separate locations within a single Azure region, each with independent power, cooling, and networking. Designing across zones helps protect against datacenter-level failures while keeping low latency inside the region.
Availability sets are a legacy-but-still-relevant concept for VM workloads: they spread VMs across fault domains (hardware/rack separation) and update domains (staggered maintenance). Availability sets do not provide the same physical separation guarantees as availability zones, but they improve resilience against localized hardware issues and planned maintenance within a datacenter environment.
In exam wording, look for cues. If you see “separate datacenters within the same region” or “zone-redundant,” that points to availability zones. If you see “spread VMs across fault domains/update domains,” that points to availability sets. If the question emphasizes application-level resiliency, you might also see load balancing concepts nearby, but AZ-900 mainly wants you to recognize the availability construct, not design a full architecture.
Exam Tip: “Within one region” + “datacenter failure protection” = availability zones. “VMs” + “fault/update domains” = availability set.
Resource organization is a frequent AZ-900 topic because it ties directly to governance, billing, and access control. Think in scopes from largest to smallest: management groups → subscriptions → resource groups → resources. A management group is a container for subscriptions, used to apply governance (like Azure Policy) across multiple subscriptions. This is common in enterprises with multiple business units or environments.
A subscription is primarily a billing and access boundary. Many exam items test this exact point: you might separate dev/test/prod into different subscriptions to isolate costs or administrative boundaries. A resource group is a logical container that holds related resources for a solution (for example, a web app, database, and storage account for the same application). Resource groups are also a lifecycle boundary: deleting a resource group deletes contained resources, which is often tested through “what happens if…” questions.
Resources are the actual services (VMs, VNets, storage accounts). AZ-900 expects you to know that role-based access control (RBAC) and policies can apply at different scopes—management group, subscription, resource group, or resource—often inheriting downwards.
Exam Tip: If the question mentions “apply a policy to all subscriptions,” choose management groups. If it mentions “separate billing,” choose subscriptions. If it mentions “delete everything for this app,” choose the resource group boundary.
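The "inheriting downwards" behavior is the key mental model, and it can be sketched as a tree walk. Everything below is hypothetical (the scope names and assignments are invented for illustration), and this models the concept only — it is not the Azure Resource Manager API:

```python
# Conceptual sketch of Azure scope inheritance: an assignment made at a
# higher scope applies to everything beneath it. All names are hypothetical.

SCOPES = {                       # child -> parent
    "mg-contoso": None,          # management group (root of this tree)
    "sub-prod":   "mg-contoso",  # subscription
    "rg-webapp":  "sub-prod",    # resource group
    "vm-web01":   "rg-webapp",   # resource
}

def effective_assignments(scope, assignments):
    """Collect assignments made at this scope and at every ancestor scope."""
    result = []
    while scope is not None:
        result.extend(assignments.get(scope, []))
        scope = SCOPES[scope]    # walk up toward the management group
    return result

assignments = {
    "mg-contoso": ["policy: require-tags"],      # governs ALL subscriptions
    "rg-webapp":  ["rbac: Reader for alice"],    # scoped to one app's resources
}
print(effective_assignments("vm-web01", assignments))
```

The VM picks up both the resource-group RBAC assignment and the management-group policy, which is why "apply a policy to all subscriptions" points at the management-group scope.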
Networking questions in AZ-900 are about recognizing foundational components. An Azure Virtual Network (VNet) is your private network in Azure, where you define IP address ranges and connect resources. VNets are segmented into subnets, which help isolate tiers (web, app, data) and apply controls such as network security groups (NSGs)—AZ-900 may reference NSGs conceptually, even if configuration isn’t tested deeply.
VNet peering connects two VNets (within the same region or across regions) enabling private traffic between them over Microsoft’s backbone. The exam often frames peering as “connect two VNets without using the public internet.” Remember: peering is VNet-to-VNet connectivity, not on-premises connectivity.
Private endpoints are a frequent modern pattern. They allow access to certain PaaS services (like storage or databases) via a private IP in your VNet, reducing exposure to the public internet. When a scenario emphasizes “keep traffic off the public internet” and “access Azure PaaS privately,” private endpoints are a strong clue.
Exam Tip: Read for the direction of connectivity: VNet↔VNet suggests peering; VNet↔PaaS privately suggests private endpoint; on-prem↔Azure suggests VPN/ExpressRoute.
AZ-900 connectivity items usually compare VPN Gateway and ExpressRoute. A VPN Gateway enables encrypted connectivity over the public internet between an on-premises network and an Azure VNet (site-to-site VPN). It can also support point-to-site scenarios for individual client devices. On the exam, VPN is the cost-effective, fast-to-start option when internet-based connectivity is acceptable.
ExpressRoute provides private connectivity between your on-premises network and Microsoft’s network through a connectivity provider. Because it does not traverse the public internet, it can offer more consistent latency and meet strict compliance or reliability requirements. If a scenario mentions “dedicated private connection,” “avoid public internet,” or “high bandwidth/enterprise connectivity,” ExpressRoute is the likely match.
When deciding, anchor on three exam-tested signals: (1) path (internet vs private), (2) requirements (compliance/reliability), and (3) operational model (provider involvement and higher cost for ExpressRoute).
Exam Tip: If the stem says “over the public internet,” “IPsec,” or “quickly set up,” think VPN Gateway. If it says “private peering/dedicated line/provider,” think ExpressRoute.
Basic name resolution and edge services appear in AZ-900 as “what service does what.” Azure DNS is a hosting service for DNS domains, providing name resolution using Microsoft’s global DNS infrastructure. If the question is about managing DNS records (A, CNAME, etc.) for a domain, Azure DNS is the direct fit. Keep in mind that DNS does not “host” the website content; it maps names to endpoints.
Azure CDN caches content at edge locations to improve performance for users globally, especially for static assets (images, scripts, downloads). When the scenario stresses “reduce latency for global users” and “cache static content,” CDN is a strong match.
Azure Front Door is an edge entry point for web applications, providing global routing and acceleration for HTTP/HTTPS traffic and supporting features like load balancing across regions. In exam terms: CDN is caching content; Front Door is routing and accelerating web traffic at the edge (often for multi-region apps). If the requirement mentions “global application entry point,” “route users to the closest/healthy backend,” or “improve availability across regions,” Front Door is typically the better answer than DNS alone.
Exam Tip: Look for verbs: “resolve” (DNS), “cache” (CDN), “route/accelerate global web traffic” (Front Door). This is exactly the skill tested in scenario-style items where you must choose the right architecture component.
1. A company deploys a mission-critical workload in a single Azure region and wants protection from a datacenter failure within that region. Which Azure feature provides this capability?
2. Your organization has multiple Azure subscriptions for different departments. You need to apply an Azure Policy across all subscriptions while allowing each department to manage resources independently. Which scope should you use?
3. A company needs a dedicated, private connection from its on-premises datacenter to Azure that does not use the public internet and provides consistent performance. Which Azure service should you recommend?
4. You are designing an Azure solution where multiple virtual machines must communicate privately with each other, and inbound traffic from the internet should not be allowed by default. Which Azure service provides the private network boundary for these resources?
5. A company hosts a public website in Azure and wants to map the name www.contoso.com to the public IP address of the web app. Which Azure service should be used to host the DNS records?
This chapter maps directly to the AZ-900 objective area “Describe Azure architecture and services,” with extra emphasis on the compute, storage, and identity services you must recognize by name and by best-fit scenario. On the exam, questions are rarely about deep configuration; they are about choosing the right service given constraints like “no server management,” “burst traffic,” “needs shared file storage,” or “requires least-privilege access.”
You’ll see many “compare” items: virtual machines vs containers vs serverless; Blob vs File vs Queue; Entra ID vs RBAC; and when to mention redundancy (LRS/ZRS/GRS). Train yourself to identify keywords that signal the intended category: “lift-and-shift OS control” points to VMs, “portable image” points to containers, “event-driven” points to Functions/Logic Apps, and “identity for users” points to Microsoft Entra ID.
Exam Tip: When an answer choice is a management tool (e.g., Azure Policy) but the scenario is about running code (compute) or storing data (storage), eliminate it quickly. AZ-900 often mixes governance terms into architecture questions as distractors.
We’ll also weave in fundamentals-level database and analytics recognition. Expect the exam to test whether you know “relational” vs “NoSQL,” “globally distributed,” and “managed platform service” wording more than query syntax or tuning. Finally, identity questions commonly revolve around “who authenticates” (Entra ID), “who is allowed” (RBAC), and “how strong is the sign-in” (MFA), framed under a Zero Trust mindset.
Practice note (applies to every milestone in this chapter — compute options; storage types; databases and analytics; identity and access; and Practice Set C): document your objective, define a measurable success check, and run a small experiment before scaling up. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Azure Virtual Machines (VMs) are Infrastructure as a Service (IaaS): you choose the OS image, size (vCPU/RAM), disks, and networking. In exam terms, VMs are the “most control” option—ideal for lift-and-shift workloads, custom software needing OS-level access, or scenarios explicitly mentioning “install agents,” “run legacy app,” or “need admin access to the operating system.”
VM Scale Sets (VMSS) add a key idea: automated scaling for many identical VM instances. When the scenario says “scale out/in based on demand” but still implies VM-level control, VMSS is the fit. The exam may contrast VMSS with “autoscale in App Service” or with container scaling; anchor on the clue “identical VMs” plus “orchestrated scaling.”
Azure App Service is Platform as a Service (PaaS) for hosting web apps, REST APIs, and back ends. You deploy code (or a container image in some plans) without managing the OS. If the prompt emphasizes “no server management,” “built-in scaling,” “managed platform,” or “rapid deployment for web app,” App Service is a common correct answer.
Exam Tip: If the question explicitly requires OS patching control or custom drivers, App Service is usually wrong. If it emphasizes “just deploy code” or “managed runtime,” VMs are usually wrong.
Common trap: confusing “availability” with “scaling.” Availability (keeping it running) can be addressed via redundancy/architecting across instances; scaling is about adding/removing capacity under load. VMSS is primarily a scaling construct, whereas availability can be achieved across zones/regions with multiple instances.
Containers package an application and its dependencies into a portable image. AZ-900 tests recognition: containers are lighter than VMs, start faster, and are ideal for microservices and consistent deployment across environments. You typically don’t manage a full OS per workload; instead you manage images and container settings.
Azure Container Instances (ACI) is the “run a container without managing servers” option. If the scenario mentions “simple container workload,” “burst,” “run a job,” “no cluster,” or “quickly spin up containers,” ACI is usually correct. Think of ACI as serverless-like for containers: minimal infrastructure concerns and fast start, but not a full orchestration platform.
Azure Kubernetes Service (AKS) is managed Kubernetes for container orchestration. If you see phrases like “orchestrate,” “microservices at scale,” “service discovery,” “rolling updates,” “self-healing,” or “need a cluster,” AKS is the expected answer. You still manage parts of the cluster (like node pools conceptually), but Azure helps manage the control plane.
Exam Tip: The fastest elimination technique: if the question includes “orchestration” or “Kubernetes,” pick AKS; if it says “single container” or “no orchestration,” pick ACI. Don’t overthink “containers = AKS” automatically.
Common trap: selecting App Service when the scenario is explicitly container-centric. App Service can host containerized web apps, but exam prompts that emphasize “Kubernetes” capabilities (pods, orchestration, scaling across many services) are pointing to AKS, not App Service.
Serverless in AZ-900 is about event-driven execution and reduced operational overhead. The exam expects you to think in triggers: “when an event happens, run something.” Cost is often usage-based (pay per execution/resources consumed), which fits unpredictable or spiky workloads.
Azure Functions is “serverless code.” If the prompt says “run code when a file is uploaded,” “process queue messages,” “scheduled job,” or “event-driven API endpoint,” Functions is a strong match. The key is that you deploy function code, choose triggers/bindings, and Azure handles the infrastructure scaling.
Logic Apps is “serverless workflow/integration.” It’s commonly the right choice when the scenario is about connecting services with a low-code approach: “automate business process,” “integrate SaaS connectors,” “approval workflow,” or “move data between systems.” While Functions and Logic Apps can overlap, Logic Apps leans toward orchestrating steps and connectors rather than writing custom code.
Exam Tip: Look for wording: “workflow,” “connector,” “business process,” “approval,” or “send an email when…” usually signals Logic Apps. “Write code,” “custom processing,” “developer function,” or “event-driven compute” usually signals Functions.
Common trap: confusing “serverless” with “no servers exist.” Servers still exist; you just don’t manage them. On the exam, that distinction matters when comparing to VMs, where you are responsible for OS-level tasks.
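The "run something when an event happens" trigger model can be mimicked locally. This is a minimal sketch of the mental model only — the real Azure Functions programming model uses its own decorators, triggers, and bindings, and the event names here are invented:

```python
# Minimal event-driven sketch of the serverless mental model: your code
# runs only when a trigger fires, and the platform (not you) invokes it.
# Not the Azure Functions SDK — a local stand-in for the pattern.

handlers = {}

def on(event_type):
    """Register a function to run whenever an event of this type fires."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

@on("blob_uploaded")                      # trigger: a file lands in storage
def make_thumbnail(event):
    return f"thumbnail created for {event['name']}"

def fire(event_type, event):
    # In Azure this dispatch (and the scaling behind it) is the platform's job.
    return [handler(event) for handler in handlers.get(event_type, [])]

print(fire("blob_uploaded", {"name": "cat.png"}))
```

Note that between events no handler code is running at all, which is why billing can be per execution.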
Most Azure storage services are provisioned through a storage account. AZ-900 expects you to recognize that a storage account is a container for services like blobs, files, queues, and tables (and that managed disks are separate but related in the storage ecosystem). If a question asks “where do you configure redundancy for Blob storage?” the answer typically involves the storage account settings.
Redundancy options are frequent test items: Locally Redundant Storage (LRS) keeps copies within a single datacenter; Zone-Redundant Storage (ZRS) replicates across availability zones in a region; Geo-Redundant Storage (GRS) replicates to a secondary region (the paired-region concept); and Read-Access Geo-Redundant Storage (RA-GRS) additionally allows read access to that secondary endpoint. The exam often frames this as “protect against datacenter failure” (ZRS) versus “protect against regional outage” (GRS/RA-GRS).
Exam Tip: If the scenario explicitly requires “read access in the secondary region,” choose RA-GRS, not GRS. If it says “within the region but across datacenters,” choose ZRS.
Access patterns: understand that storage design relates to how data is accessed—frequently accessed data aligns with hot tiers, infrequently accessed with cool, and rarely accessed long-term retention aligns with archive (details next section). Performance and cost trade-offs are core. The exam also likes “object storage vs file shares” clues: object-based access (HTTP/REST) points to Blob, SMB/NFS-like shared access points to File storage.
Common trap: choosing a redundancy option based on “highest sounding.” Always map to the requirement boundary: same datacenter, same region, or cross-region. Overbuying redundancy may be plausible in real life, but the exam expects the “best match” to stated needs.
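The "map the failure boundary, not the highest-sounding option" rule can be expressed as a tiny decision helper. This is a memorization aid for the exam logic above, not an Azure API:

```python
# Decision helper: match the failure boundary named in the question stem
# to a redundancy option. A study aid for the exam logic, not an Azure API.

def pick_redundancy(boundary, need_secondary_reads=False):
    if boundary == "datacenter":   # survive a single-facility failure
        return "ZRS"               # zones within the same region
    if boundary == "region":       # survive a region-wide outage
        return "RA-GRS" if need_secondary_reads else "GRS"
    return "LRS"                   # cheapest: copies in one datacenter

print(pick_redundancy("datacenter"))                          # ZRS
print(pick_redundancy("region"))                              # GRS
print(pick_redundancy("region", need_secondary_reads=True))   # RA-GRS
```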
AZ-900 storage questions are typically “choose the right type.” Azure Blob Storage is object storage for unstructured data: images, video, backups, logs, and static website assets. If the prompt says “store files accessible via HTTP,” “unstructured,” or “massive scale,” Blob is a safe pick. Azure File Storage provides managed file shares, commonly accessed via SMB, making it fit for “lift and shift file server” or “shared drive for multiple VMs.”
Azure Queue Storage is for simple messaging to decouple application components—look for “buffer,” “asynchronous processing,” “message backlog,” or “decouple.” Table Storage is a NoSQL key-value store for semi-structured data (note: it’s not relational). In fundamentals questions, Table is often positioned as a low-cost NoSQL option within a storage account.
Managed disks are persistent block storage for Azure VMs. If the scenario references “VM OS disk,” “data disk,” “attach a disk,” or “persistent storage for a VM,” choose managed disks—not Blob/File/Queue. The exam may include a distractor implying “store VM files in Blob” (possible for some scenarios, but persistent VM disks are managed disks in typical AZ-900 wording).
Exam Tip: Use the access pattern to decide: object via REST = Blob; shared filesystem = File; decoupled messages = Queue; key/attribute NoSQL = Table; VM block storage = managed disks.
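The Queue Storage clue words ("decouple," "buffer," "asynchronous") describe a producer/consumer pattern worth seeing once. The sketch below uses Python's standard-library queue as a stand-in for Azure Queue Storage, purely to show the shape of the pattern:

```python
# The "decouple with a queue" pattern behind Azure Queue Storage: the
# front end and the worker never call each other directly, so each can
# scale or fail independently. Python's stdlib Queue stands in for the
# Azure service here — this is the pattern, not the Azure SDK.

from queue import Queue

orders = Queue()

def web_frontend(order_id):
    orders.put(order_id)            # enqueue and return to the user at once

def background_worker():
    processed = []
    while not orders.empty():       # drain the backlog asynchronously
        processed.append(f"processed order {orders.get()}")
    return processed

for order_id in (101, 102, 103):    # a burst of traffic buffers safely
    web_frontend(order_id)
print(background_worker())
```

If the worker is briefly offline, messages simply accumulate — the "message backlog" wording the exam uses.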
Hot/cool/archive tiers apply to Blob storage. Hot tier is for frequent access (higher storage cost, lower access cost). Cool tier is for infrequent access (lower storage cost, higher access cost, often minimum retention considerations). Archive tier is for rarely accessed data with the lowest storage cost but requires rehydration time to access.
Common trap: assuming “Archive” means “instant access but cheap.” Archive is cheap because retrieval is slower and requires rehydration. If the scenario needs immediate access, archive is likely wrong even if data is rarely used.
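The hot/cool/archive trade-off is easiest to internalize as arithmetic. The prices below are hypothetical placeholders chosen only to show the shape of the trade-off (cheap storage vs. costly access) — they are not real Azure rates, so always check the pricing calculator:

```python
# Back-of-envelope Blob tier comparison. The rates are HYPOTHETICAL
# placeholders to illustrate the trade-off shape, not real Azure prices.

TIERS = {  # (storage $/GB-month, access $/GB read) — illustrative only
    "hot":     (0.020, 0.000),
    "cool":    (0.010, 0.010),
    "archive": (0.002, 0.020),  # plus rehydration delay, not modeled here
}

def monthly_cost(tier, stored_gb, read_gb):
    storage_rate, access_rate = TIERS[tier]
    return stored_gb * storage_rate + read_gb * access_rate

# 1 TB kept, only 5 GB read per month: the colder the tier, the cheaper.
for tier in TIERS:
    print(tier, round(monthly_cost(tier, stored_gb=1000, read_gb=5), 2))
```

Rerun it with a large `read_gb` and the ordering flips, which is exactly why frequently accessed data belongs in hot despite the higher storage rate.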
At a fundamentals level, database questions are classification problems. Azure SQL Database is a managed relational database (PaaS). If you see “relational,” “SQL,” “tables with relationships,” or “managed database without managing OS,” Azure SQL is a top candidate. Azure Cosmos DB is a globally distributed NoSQL database designed for low-latency and elastic scaling. If the prompt says “NoSQL,” “global distribution,” “multi-region,” or “low latency worldwide,” Cosmos DB is typically the intended choice.
Analytics may appear indirectly (e.g., “analyze large volumes of data”). In AZ-900, you’re usually not required to pick a specific analytics engine unless named, but you should recognize that operational databases (Azure SQL/Cosmos DB) are different from analytics workloads (data warehousing/big data). When the question is simply “store transactional data,” pick the operational database service over analytics tools.
Identity and access are heavily tested. Microsoft Entra ID (formerly Azure Active Directory) is the cloud identity provider: it authenticates users, groups, and applications. If the scenario says “sign in,” “SSO,” “user accounts,” or “authenticate,” think Entra ID.
Role-Based Access Control (RBAC) is authorization: it controls what an authenticated identity can do on Azure resources. The scope keywords matter: management group, subscription, resource group, and resource. Least privilege is the principle: assign the minimal role at the smallest necessary scope to meet requirements.
Exam Tip: Memorize the split: Entra ID = authentication (who you are); RBAC = authorization (what you can do). Many exam distractors swap these roles.
Multi-Factor Authentication (MFA) strengthens sign-in by requiring an additional factor beyond password. If the scenario mentions “reduce risk of compromised passwords,” “additional verification,” or “secure sign-in,” MFA is the fit.
Zero Trust is a security model summarized as “never trust, always verify,” assuming breach, and applying least privilege. On AZ-900, Zero Trust is tested conceptually: enforce strong identity, verify explicitly, and limit access via RBAC and conditional controls (MFA is frequently mentioned as part of this story).
Common trap: choosing RBAC to “authenticate users,” or choosing Entra ID to “assign permissions on a resource.” Use the verb in the prompt—authenticate vs authorize—to separate the services cleanly.
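To make the authorize side concrete, a least-privilege role assignment can be sketched with Azure CLI. This is illustrative only: the user, subscription ID, resource group, and storage account names are hypothetical, and Entra ID has already authenticated the user before RBAC decides what they may do.

```shell
# Illustrative least-privilege RBAC assignment (hypothetical names and IDs).
# Authentication (who you are) happened in Entra ID; this command only authorizes.
az role assignment create \
  --assignee "ana@contoso.com" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/rg-data/providers/Microsoft.Storage/storageAccounts/stappdata"
```

Note the scope: the role is granted at a single storage account, the smallest scope that satisfies the requirement, rather than at the resource group or subscription.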
1. A company needs to run a Windows Server application that requires full control of the operating system and the ability to install custom drivers. The workload will run 24/7. Which Azure compute service should you choose?
2. You need storage that allows multiple Azure virtual machines to mount the same shared folder using SMB. The data should be managed as a file share rather than objects. Which storage option should you use?
3. An application processes images only when new files are uploaded to a storage container. You want to avoid managing servers and pay primarily when the code runs. Which compute option best meets the requirement?
4. You must assign least-privilege permissions so that a specific user can read only one storage account in a subscription. Which Azure feature should you use?
5. A development team needs a managed database service for relational data with SQL querying, without managing the underlying database servers. Which Azure service best fits?
AZ-900 expects you to recognize not just what Azure services exist, but how you operate them day-to-day, keep them secure, and control spend. In practice questions, management and governance topics often appear as “which tool would you use?” or “which feature prevents X?” scenarios. The exam is testing that you can pick the correct control plane capability (deploy, monitor, govern, or optimize cost) without confusing it with a data plane feature (what the workload itself does).
This chapter connects the most-tested management tools (Portal, CLI, PowerShell), deployment approaches (ARM/Bicep and Infrastructure as Code), operational monitoring basics (Azure Monitor, alerts, Service Health), cost controls (pricing, TCO, budgets, tags), governance guardrails (Policy, management groups, RBAC, locks), and trust/compliance resources (Defender for Cloud, Trust Center). You should be able to map each requirement in a question to the right service category: “operate” typically points to Monitor/Service Health; “secure” points to RBAC/Policy/Defender for Cloud; “control costs” points to calculators, Cost Management, budgets, and tags.
Exam Tip: When a question asks “who can do what,” think RBAC. When it asks “what is allowed to be deployed,” think Azure Policy. When it asks “prevent deletion,” think resource locks. When it asks “estimate before buying,” think Pricing Calculator or TCO Calculator.
Practice note for Navigate management tools: Portal, CLI, PowerShell, and ARM/Bicep concepts: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Control and optimize costs with pricing, budgets, and TCO concepts: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Secure and govern with Policy, locks, and role-based access: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Understand compliance, privacy, and trust resources: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Practice Set D: Management & Governance (50 questions): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
AZ-900 commonly tests your ability to choose the right management interface. The Azure Portal is the web-based GUI for creating and managing resources. It’s ideal for discovery, quick configuration, and visual navigation of resource relationships. However, exam scenarios often imply repeatability or automation—when you see “script,” “automate,” or “repeat across many resources,” you should lean toward Azure CLI or Azure PowerShell rather than the Portal.
Cloud Shell is an in-browser shell hosted by Microsoft that gives you authenticated access to Azure CLI and PowerShell without local installation. It’s frequently the best answer when the question emphasizes “from any computer,” “no setup,” or “run commands quickly.” Cloud Shell persists files using an Azure Storage share, which explains why you can keep scripts and history across sessions.
Azure CLI is a cross-platform command-line tool (Bash-friendly) using the az command. Azure PowerShell uses PowerShell cmdlets (typically Az modules) and is a natural fit on Windows and in automation scripts that already use PowerShell patterns. The Azure mobile app focuses on monitoring and basic management on the go (view health, restart a VM, check alerts), not full-scale deployment design.
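As a quick illustration of the two command styles, here is the same task, listing the virtual machines in a resource group, in each tool. The resource group name is hypothetical.

```shell
# Azure CLI (Bash-friendly): az <group> <verb> pattern, hypothetical resource group.
az vm list --resource-group rg-demo --output table

# Azure PowerShell equivalent (Az module): Verb-Noun cmdlet pattern.
# Get-AzVM -ResourceGroupName rg-demo
```

Either command works locally or in Cloud Shell; the exam cares that you match the tool to the admin's background, not that you memorize syntax.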
Exam Tip: If the scenario says “Linux admin” or “Bash,” prefer Azure CLI; if it says “Windows admin” or “PowerShell,” prefer Azure PowerShell. If it says “browser-based, no local tools,” prefer Cloud Shell. If it says “quickly view and manage resources visually,” prefer the Azure Portal.
Common trap: confusing Cloud Shell with Azure CLI/PowerShell as separate “products.” Cloud Shell is a hosted environment that can run either. Another trap is assuming the mobile app is for full provisioning—it’s primarily for operational visibility and a limited set of actions.
Resource deployment in Azure is ultimately executed through Azure Resource Manager (ARM), which is the management layer that receives requests and applies them consistently across subscriptions and resource groups. On AZ-900, you are not expected to author complex templates, but you must recognize what ARM templates are used for: declarative deployment of infrastructure (JSON templates) that can be versioned, reused, and deployed consistently.
Bicep is a domain-specific language (DSL) that simplifies authoring templates; it compiles down to ARM template JSON. If a question says “simpler syntax than JSON” or “modern IaC for ARM,” Bicep is often the correct choice. Infrastructure as Code (IaC) benefits are key exam points: repeatability, consistency, reduction of manual errors, standardization, faster provisioning, and integration with CI/CD pipelines.
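To make the "simpler syntax" point tangible, here is a minimal, illustrative Bicep file that declares a storage account; the name prefix and SKU are assumptions, and the equivalent ARM JSON would be noticeably more verbose. Note that it declares desired state rather than listing steps.

```bicep
// Minimal illustrative Bicep (hypothetical names): declarative, repeatable, versionable.
param location string = resourceGroup().location

resource stg 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stdemo${uniqueString(resourceGroup().id)}'
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}
```

Deploying this file twice yields the same result (idempotence), which is exactly the IaC property the exam keywords point at.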
Expect questions that contrast “click-ops” (manual portal configuration) with declarative templates. The exam typically favors IaC when the prompt includes “deploy the same environment multiple times,” “ensure consistency across regions,” or “track changes.”
Exam Tip: Look for declarative keywords: “define desired state,” “idempotent deployments,” “version control,” and “repeatable.” Those signal ARM/Bicep/IaC rather than scripts that imperatively run a sequence of commands.
Common trap: mixing up ARM (the platform) with ARM templates (the JSON format) and Bicep (the authoring language). Another trap is assuming templates only deploy compute—ARM templates can deploy virtually any Azure resource type and can include parameters, variables, and outputs to support multiple environments (dev/test/prod).
Operational awareness is a core governance skill: you can’t secure or control costs if you can’t observe what’s happening. Azure Monitor is the umbrella service for collecting, analyzing, and acting on telemetry from resources. In exam questions, Azure Monitor is the “parent” answer when the prompt is broad (metrics, logs, alerts, dashboards). If the prompt is about troubleshooting with queryable logs, that points to Log Analytics (a workspace where logs are stored and queried, typically using KQL).
Alerts are action-oriented. You configure an alert rule based on a signal (metric threshold, log query result, activity log event) and then choose an action group (email/SMS/push/webhook, etc.). AZ-900 often tests conceptual mapping: “notify when CPU exceeds X” implies a metric alert; “notify when someone deletes a resource” implies an activity log alert; “notify when a specific error appears repeatedly” implies a log alert based on Log Analytics queries.
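A metric alert of the "notify when CPU exceeds X" kind can be sketched with Azure CLI as follows. The subscription ID, resource group, VM, and action group names are all hypothetical.

```shell
# Illustrative metric alert (hypothetical names): signal = metric, threshold = 80% CPU.
# The action group decides how the notification is delivered (email, SMS, webhook, ...).
az monitor metrics-alert create \
  --name "HighCpuAlert" \
  --resource-group rg-demo \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/rg-demo/providers/Microsoft.Compute/virtualMachines/vm-web01" \
  --condition "avg Percentage CPU > 80" \
  --action "/subscriptions/<subscription-id>/resourceGroups/rg-demo/providers/microsoft.insights/actionGroups/ag-ops"
```

The three parts, signal, rule, and action group, mirror the conceptual mapping the exam tests.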
Service Health is different: it informs you about Azure platform incidents, planned maintenance, and advisories that may impact your resources. If the question is about “Azure had an outage in a region” or “planned maintenance affecting services,” Service Health is the most accurate answer, not Azure Monitor. You use Service Health to understand whether the issue is in Microsoft’s cloud versus your own configuration.
Exam Tip: “Is Azure broken?” → Service Health. “Is my workload performing?” → Azure Monitor. “What happened and why?” with deep logs → Log Analytics.
Common trap: assuming Log Analytics replaces Monitor. Log Analytics is a component used by Azure Monitor for log data. Another trap is confusing Azure Advisor (recommendations) with monitoring; Advisor provides best-practice suggestions, while Monitor/Log Analytics provide telemetry and alerting.
Cost questions on AZ-900 often start with “estimate,” “forecast,” “reduce,” or “allocate.” The Pricing Calculator estimates expected Azure costs for planned deployments (choose services, regions, tiers, and usage assumptions). The TCO (Total Cost of Ownership) Calculator compares on-premises costs to Azure costs, emphasizing migration justification and savings (hardware refresh, datacenter, power, cooling, labor). If the scenario is “should we move to cloud and what’s the cost comparison,” TCO is the better match than Pricing Calculator.
Cost Management + Billing is used after deployment to track actual spending, analyze trends, and set budgets. Budgets allow threshold-based notifications (and can integrate with automation), helping prevent surprise charges. Tags are metadata labels applied to resources (for example, CostCenter=Finance or Env=Prod) that enable cost allocation and reporting. AZ-900 expects you to know that tags don’t enforce security by themselves; they help organize and report.
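Tagging for cost allocation can be sketched like this; the resource group name and tag values are hypothetical, and budgets themselves are typically configured in Cost Management + Billing.

```shell
# Illustrative cost-allocation tags on a resource group (hypothetical names).
# Tags organize and report spend; they do not enforce security or cap costs.
az group update --name rg-demo --set tags.CostCenter=Finance tags.Env=Prod
```

Once tagged, cost analysis can group spend by CostCenter, which is the "chargeback/showback" pattern the exam alludes to.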
Exam Tip: Before deployment: Pricing Calculator / TCO Calculator. After deployment: Cost Management + Billing. If the question includes “chargeback/showback” or “allocate costs to departments,” tags are usually part of the solution.
Common traps: thinking budgets “cap” spending automatically. Budgets primarily alert; they don’t inherently stop services (unless you implement automation). Another trap is confusing “cost optimization recommendations” (often from Azure Advisor) with “cost tracking and budgets” (Cost Management). Also watch wording around “estimate monthly bill” (Pricing Calculator) versus “compare on-prem vs Azure” (TCO).
Governance is about guardrails: ensuring resources meet organizational standards continuously. Azure Policy evaluates resources against rules (for example, “only allow certain VM sizes,” “require tags,” “deploy resources only in approved regions”). Policy can deny noncompliant deployments, audit them, or deploy settings automatically in some cases. If the prompt is “ensure compliance at scale,” “enforce standards,” or “prevent creation,” Azure Policy is a prime candidate.
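A deny-style guardrail can be sketched as a policy assignment. The sketch below assigns what I believe is the built-in "Allowed locations" definition (the GUID and the subscription scope are illustrative assumptions); once assigned, deployments outside the listed regions are denied.

```shell
# Illustrative policy assignment (hypothetical scope): deny non-approved regions.
# The GUID is assumed to be the built-in "Allowed locations" definition.
az policy assignment create \
  --name "approved-regions" \
  --scope "/subscriptions/<subscription-id>" \
  --policy "e56962a6-4747-49cd-b67b-bf8b01975c4c" \
  --params '{"listOfAllowedLocations": {"value": ["eastus", "westeurope"]}}'
```

Assigned at the subscription, the rule applies to every resource group beneath it, which is the "enforce at scale" behavior the exam rewards.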
Initiatives (policy sets) bundle multiple policies into a single assignment, often aligning to standards such as ISO or organizational baselines. Management groups organize subscriptions into a hierarchy so you can apply policy and RBAC at scale above the subscription level. This is frequently tested as a “large enterprise with many subscriptions” scenario.
RBAC (role-based access control) answers “who can do what” by assigning roles (Reader, Contributor, Owner, or custom roles) to users, groups, or service principals at a scope (management group, subscription, resource group, resource). Questions that mention “least privilege” or “allow a user to manage VMs but not networking” point to RBAC role assignment and scope selection.
Resource locks protect against accidental deletion or modification. The two lock types are "CanNotDelete" (resources can be read and modified but not deleted) and "ReadOnly" (no changes at all). Locks are not a permission system—admins may still remove locks if they have sufficient rights—so locks complement RBAC and Policy rather than replace them.
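The two lock types map directly onto a single CLI flag; the resource group name below is hypothetical.

```shell
# Illustrative CanNotDelete lock (hypothetical resource group):
# resources inside can still be read and modified, but not deleted.
az lock create \
  --name "no-delete-prod" \
  --lock-type CanNotDelete \
  --resource-group rg-prod
```

Swapping --lock-type to ReadOnly would block modifications as well, which is the distinction the exam's "prevent accidental delete" scenarios hinge on.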
Exam Tip: “Enforce” standards = Policy; “Group policies together” = Initiative; “Apply across many subscriptions” = Management groups; “Access control” = RBAC; “Prevent accidental delete” = Locks.
Common trap: confusing Policy with RBAC. Policy controls resource properties and allowed states; RBAC controls actions by identities. Another trap: assuming locks are the best answer for compliance; locks are safety mechanisms, while Policy/Initiatives provide compliance enforcement and reporting.
AZ-900 expects foundational understanding of how Microsoft helps you manage security posture and compliance in Azure. Microsoft Defender for Cloud (formerly Azure Security Center) provides security posture management and threat protection recommendations across your Azure (and often multi-cloud/hybrid) resources. In exam terms, it helps you identify misconfigurations (for example, open management ports, missing disk encryption), provides secure score, and can surface alerts. If the scenario says “recommendations to improve security” or “security posture,” Defender for Cloud is a strong match.
Compliance offerings are usually tested at the “where do you find information” level. Microsoft provides documentation on regulatory compliance, certifications, and audit reports through compliance resources (commonly referenced via the Microsoft compliance documentation and portals). The Trust Center is a central resource for understanding Microsoft’s approach to security, privacy, compliance, and transparency. If the question asks where to learn about Microsoft’s privacy principles or how Microsoft handles data protection, Trust Center is often the intended answer.
Exam Tip: If the prompt includes “security recommendations,” “secure score,” or “improve posture,” choose Defender for Cloud. If it includes “privacy,” “compliance reports,” “certifications,” or “how Microsoft protects data,” choose Trust/Compliance resources rather than operational tools like Monitor.
Common traps: treating Defender for Cloud as an identity/access product (that’s more Entra ID/RBAC) or as a pure logging tool (that’s Monitor/Log Analytics). Another trap is assuming “compliance” means Azure Policy only; Policy can help enforce configurations, but compliance questions frequently ask where to review Microsoft’s compliance commitments, which points to Trust Center and compliance documentation.
1. You need to ensure that only specific Azure VM SKUs can be deployed in a subscription. The requirement is preventative (deny noncompliant deployments) and should be centrally enforced. Which Azure feature should you use?
2. A team accidentally deleted a production resource group last month. You need to prevent deletion of the resource group while still allowing administrators to modify resources inside it. What should you configure?
3. You are evaluating moving an on-premises workload to Azure. Management asks for an estimate of cost savings that includes current on-premises expenses such as servers, power, cooling, and IT labor. Which tool should you use?
4. Your finance team needs to receive an alert when monthly spending for a subscription is forecasted to exceed $10,000. What should you configure?
5. You need to automate a consistent deployment of multiple Azure resources using an Infrastructure as Code approach. The solution must be declarative and support repeatable deployments. What should you use?
This chapter is where your preparation becomes exam performance. AZ-900 rewards breadth, clean definitions, and the ability to pick the best option when several look plausible. You will run a full mock exam in two parts (to mimic real pacing and fatigue), diagnose weak spots by objective area, and finish with an objective-by-objective sprint that tightens recall for the exam’s most frequently tested concepts.
Your goal is not just a high mock score—it’s a repeatable process: answer under time pressure, review explanations with intent, convert misses into a remediation plan, and then re-test until your outcomes are stable. Throughout this chapter, you’ll also see the recurring traps that cause otherwise well-prepared candidates to lose points (wording tricks, service mix-ups, and “best answer” logic).
Exam Tip: Treat every explanation as a miniature lesson mapped to an objective (cloud concepts; Azure architecture/services; Azure management/governance). If you can’t say which objective a question belongs to, you’re reviewing passively—and passive review doesn’t stick under exam pressure.
Practice note for Mock Exam Part 1: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Mock Exam Part 2: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Weak Spot Analysis: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Exam Day Checklist: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Practice note for Final Objective-By-Objective Sprint: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Run your mock under “exam-real” conditions: a quiet room, no notes, no browsing, and a single uninterrupted sitting. AZ-900 is designed to test foundational judgment across domains, so your practice should train rapid classification: “What domain is this?” followed by “What single concept decides the answer?” Set a timer that approximates exam pacing and commit to moving on when you start debating two similar options for too long.
Use a two-pass approach. Pass one: answer everything, flag uncertain items, and avoid deep rereads. Pass two: return only to flagged items and make a final call. The biggest score gains often come from preventing time sinks on early questions that reduce attention later.
Reviewing explanations is where the learning happens, but only if it’s structured. For each incorrect or guessed question, write a one-line “why” in your own words and attach it to an objective area (Cloud Concepts; Architecture/Services; Management/Governance). Then create a micro-drill: a single definition, comparison, or scenario rule you can recall in 10 seconds.
Exam Tip: Don’t just memorize the correct option—identify the “disqualifier” that makes the other choices wrong. AZ-900 often includes distractors that are true statements but don’t answer the question asked.
Common review mistake: re-reading explanations until they feel familiar. Familiarity is not recall. Close the explanation and restate the rule from memory. If you can’t, it goes into your weak spot list for Section 6.3 remediation.
This lesson corresponds to Mock Exam Part 1 and Mock Exam Part 2. Split the 100-question mock into two blocks of 50 with a short reset between them to simulate the mental shift you’ll need on exam day. The AZ-900 blueprint expects you to move fluidly between broad cloud concepts (CapEx vs OpEx, elasticity, shared responsibility), core Azure architecture and services (regions, availability zones, compute, networking, storage, identity), and management/governance (cost tools, security posture, policy, compliance, resource organization).
During the mock, practice “domain tagging” in your head. When you recognize a Cloud Concepts item, your decision should hinge on a definition: public vs private vs hybrid cloud, IaaS/PaaS/SaaS responsibilities, or core benefits (scalability, high availability, fault tolerance, agility). For Architecture/Services, the deciding factor is usually what the service is and what problem it solves (VMs vs containers, VNets vs VPN Gateway, Blob vs Disk vs Files, Entra ID vs RBAC). For Management/Governance, the deciding factor is the control plane tool: Policy vs RBAC vs Blueprints (legacy concept) vs resource locks vs Microsoft Defender for Cloud, plus cost governance (budgets, cost analysis, tags).
Exam Tip: When two answers both sound correct, look for scope words: “subscription,” “resource group,” “management group,” “tenant,” “region,” “zone,” “global.” AZ-900 loves scope alignment—correct service, wrong scope equals wrong answer.
Keep pacing consistent. If you’re stuck, ask: “What is the exam testing here?” Usually it’s a single contrast: availability zones vs region pairs, DDoS protection vs firewall, NSG vs Azure Firewall, authentication vs authorization, or IaaS vs PaaS responsibility boundaries.
After Part 1, don’t review yet—complete Part 2 first. Reviewing midstream can artificially inflate performance and hides fatigue effects. Your goal is a realistic baseline before remediation.
Convert your mock score into a plan, not a mood. Start by calculating domain-level performance: Cloud Concepts, Architecture/Services, and Management/Governance. A single overall percentage can hide a critical weakness—especially if you’re strong in definitions but shaky on governance tooling, or vice versa.
Sort misses into three buckets: knowledge gaps (you didn't know the underlying concept), service confusion (you mixed up two similar services or scopes), and wording errors (you misread a qualifier, a negative, or the question actually being asked). Each bucket has a different remedy—re-study, drill the contrast pair, or slow down your first read.
Build a 3-day remediation loop: Day 1 re-study only the weakest domain; Day 2 do targeted drills only from that domain; Day 3 re-take a mixed set. The purpose is to confirm improvement transfers to mixed conditions, not just same-topic practice.
Exam Tip: Track “confusion pairs.” If you repeatedly mix up two items (e.g., Azure Policy vs RBAC; NSG vs Azure Firewall; Azure Monitor vs Service Health), make a one-line differentiator and review it before every practice set.
Finally, set a “confidence threshold.” For foundational exams, aim for stable performance across two different mixed mocks rather than a single peak score. Consistency is your exam-day advantage.
AZ-900 is not trying to trick you with obscure facts; it tests whether you can interpret exam wording and select the best fit. The most common trap is “true but not the answer.” For example, multiple options might improve security, but the question asks for the tool that enforces a rule (Policy) rather than the tool that grants permissions (RBAC).
Watch for “best answer” qualifiers: lowest administrative effort, highest availability, most cost-effective, or fastest to deploy. These qualifiers usually point toward managed services and platform features. If the question hints at minimizing maintenance, the exam likely expects PaaS over IaaS (or a managed database over a self-managed VM). If it emphasizes granular network traffic filtering within a VNet/subnet, think NSG; if it emphasizes centralized, stateful, enterprise filtering, think Azure Firewall.
Service confusion patterns to drill: Azure Policy vs RBAC (enforce resource rules vs authorize identities), NSG vs Azure Firewall (subnet-level filtering vs centralized stateful filtering), Azure Monitor vs Service Health (your workload vs the Azure platform), Entra ID vs RBAC (authentication vs authorization), and availability zones vs region pairs (resilience within a region vs across regions).
Exam Tip: If an option is “more powerful” but requires heavier management, it may be wrong when the question says “minimize administrative overhead.” The exam frequently rewards the simplest service that satisfies the requirement.
Reading trap: negatives (“NOT,” “except,” “least likely”). Circle it mentally before reading answers. Many otherwise correct candidates miss these under time pressure.
This is your Final Objective-By-Objective Sprint. Use three one-page “review sheets” (mental or written) and rehearse them until recall is automatic.
Cloud concepts sheet: Know the definitions and why they matter. Public/private/hybrid cloud, plus consumption-based pricing. Benefits: scalability/elasticity (scale out/in), high availability, reliability, agility, fault tolerance, and global reach. Shared responsibility: Microsoft always secures the cloud; you secure what you put in it—more responsibility in IaaS, less in SaaS. CapEx vs OpEx: upfront hardware vs pay-as-you-go operating spend.
Architecture and services sheet: Scope and geography: regions, region pairs, availability zones. Core services: compute (VMs, App Service, containers concepts), networking (VNet, VPN Gateway, ExpressRoute at a high level, NSG), storage (Blob vs Files vs Disk; redundancy options conceptually), and identity (Entra ID, MFA, SSO as ideas). Practice mapping a requirement to the smallest correct building block.
Management and governance sheet: Cost management (budgets, cost analysis, tagging), security posture (Defender for Cloud concepts), governance controls (Policy, RBAC, locks), compliance resources (Service Trust Portal conceptually), and resource management tooling (Azure Portal, Azure CLI, PowerShell, ARM templates at a high level). The exam often asks which tool enforces, which tool reports, and which tool grants access.
Exam Tip: Your final sprint should be recall-first. Cover the sheet, recite the key contrasts, then check. If you only read, you’ll overestimate readiness.
Exam day is an execution problem. Reduce uncertainty with a checklist and a repeatable routine.
During the exam, manage pacing and anxiety with process cues. Start with controlled breathing for 20–30 seconds before you begin, then commit to your two-pass strategy. If you feel stuck, label it (“service confusion” or “scope issue”), pick the best available option, flag it, and move on. Momentum reduces stress and improves accuracy later.
Exam Tip: Don’t change answers impulsively. Only change when you can name the specific concept that proves your first choice wrong (e.g., “Policy enforces; RBAC authorizes,” “zones are within a region,” “tags don’t enforce”).
Finally, do a quick mental scan of your three review sheets (cloud concepts; architecture/services; management/governance) right before check-in—definitions, scopes, and the top confusion pairs. This primes recall and prevents early-question jitters from erasing easy points.
1. A company wants to deploy a web app in Azure. During the AZ-900 exam you see several plausible answers. The requirement is to choose a compute service that provides managed hosting for web apps without managing VMs. Which Azure service should you select?
2. You are reviewing a mock exam miss: a question asked which feature provides fault tolerance by distributing resources across multiple datacenters within an Azure region. Which concept should you associate with that requirement?
3. A team wants to enforce that all newly created Azure resources must be tagged with 'CostCenter'. They also want to deny deployments that do not include the tag. Which Azure governance feature should they use?
4. You run the first half of a full mock exam and notice you are slow on questions that ask 'Which tool should you use to review service health issues and planned maintenance?' Which Azure service best matches this requirement?
5. A company wants to reduce costs by using a pricing benefit that applies an hourly rate based on a 1-year or 3-year commitment for eligible compute services. Which pricing option should they choose?