
AI Governance Basics: Write Clear Workplace AI Use Rules

AI Ethics, Safety & Governance — Beginner

Write simple, enforceable rules for safe AI use at work.

Beginner ai-governance · ai-policy · ai-ethics · workplace-ai

Why this course exists

AI tools are showing up in everyday work—writing emails, summarizing meetings, drafting job posts, answering customer questions, and more. Many teams adopt these tools quickly, but the rules for using them often arrive late (or not at all). That gap creates avoidable problems: private data pasted into public tools, inaccurate outputs copied into documents, unclear accountability when something goes wrong, and confusion about what is allowed.

This beginner-friendly course gives you a practical starting point for AI governance. You will learn how to write clear workplace rules that protect people and the business—without needing a law degree, a technical background, or a complicated framework.

What “AI governance” means in plain language

Governance is simply how decisions get made and enforced. In AI, that means answering questions like: Which AI tools can we use? What data can we put into them? When do we need human review? Who approves new use cases? What do we do if AI causes harm or a serious mistake?

Instead of abstract theory, you will build a simple structure you can actually use. Each chapter adds one building block, so by the end you have a clear, workable set of rules and a lightweight process for keeping them up to date.

Who this is for

This course is designed for absolute beginners. If you are in operations, HR, marketing, finance, customer support, IT, compliance, or management—and you need a starting point for “how we use AI at work”—you are in the right place.

  • No AI knowledge needed
  • No coding or data science required
  • Focused on everyday workplace scenarios

What you will build as you learn

You will create a practical first version of AI governance for a team or organization. That includes an AI use inventory, simple risk levels, clear data and privacy rules, and guidelines for quality, bias, and transparency. You will also set roles, approvals, and an incident process so the rules can be followed and improved over time.

  • A one-page inventory of AI tools and use cases
  • Plain-language principles and “allowed vs. not allowed” guidance
  • Do/don’t rules for sensitive data and safe prompting
  • Human review and verification expectations
  • Roles, approvals, exceptions, and incident reporting

How the 6 chapters work (book-style progression)

You will start by learning the core idea of governance and why it matters. Next, you will inventory real AI use so you are not writing rules in the dark. Then you will set simple principles and risk tiers, followed by clear rules for data, privacy, and security. After that, you will add rules for output quality and fairness. Finally, you will make the program operational with roles, approvals, incident handling, and rollout steps.

Get started

If you want a clear, beginner-safe path to responsible AI use at work, this course is your starting point. Register free to begin, or browse all courses to compare options.

What You Will Learn

  • Explain what AI governance is and why workplaces need clear AI rules
  • Map common workplace AI use cases and spot where risk can appear
  • Write plain-language AI policy statements people can follow
  • Define roles and responsibilities for approving, using, and reviewing AI
  • Create basic rules for privacy, security, and data handling with AI tools
  • Set simple guardrails for accuracy, bias, and human review
  • Build a lightweight process for exceptions, incidents, and improvements
  • Publish and roll out an AI use policy with training and checklists

Requirements

  • No prior AI or coding experience required
  • Comfort reading and writing simple workplace documents
  • Access to a computer or tablet for note-taking (optional)
  • Willingness to think through your organization’s everyday workflows

Chapter 1: What AI Governance Means (In Plain English)

  • Define AI in everyday workplace terms
  • Understand what “governance” is and what it is not
  • Identify who is affected by AI rules at work
  • Choose a practical goal for your first AI policy

Chapter 2: Inventory AI Use at Work (So You Can Govern It)

  • List where AI is already used (including “shadow AI”)
  • Group AI use cases by purpose and impact
  • Identify data inputs, outputs, and who sees them
  • Create a one-page AI use inventory

Chapter 3: Set Principles and Risk Levels (Your Policy Backbone)

  • Write 5–7 guiding principles for AI use
  • Define risk levels anyone can understand
  • Decide which uses are allowed, limited, or not allowed
  • Add “human in the loop” rules for critical work

Chapter 4: Write Clear Rules for Data, Privacy, and Security

  • Choose simple data categories (public, internal, sensitive)
  • Write do/don’t rules for entering data into AI tools
  • Set basic security expectations for accounts and access
  • Create a short checklist employees can follow

Chapter 5: Rules for Quality, Bias, and Responsible Outputs

  • Set accuracy and citation expectations for AI-assisted work
  • Define how to handle errors and uncertain answers
  • Add simple bias and fairness checks
  • Create output labeling rules (when to disclose AI use)

Chapter 6: Make It Real: Roles, Approvals, Incidents, and Rollout

  • Assign simple roles (owner, approver, user, reviewer)
  • Build a lightweight approval and exception process
  • Create an incident response plan for AI mishaps
  • Publish, train, and improve your AI governance over time

Sofia Chen

AI Governance & Risk Specialist

Sofia Chen designs practical AI governance programs for small and mid-sized organizations, focusing on clear policies people actually follow. She has helped cross-functional teams across HR, Legal, IT, and Operations reduce AI-related risk while enabling responsible adoption.

Chapter 1: What AI Governance Means (In Plain English)

AI governance sounds like something only lawyers, auditors, or big-tech companies need. In reality, it’s a practical workplace skill: deciding what AI tools people can use, for which tasks, with what data, and under what human oversight. If your team writes emails with a chatbot, summarizes meetings, screens resumes, drafts code, or analyzes customer feedback with a model, you are already “doing AI.” The only question is whether you are doing it consistently and safely.

This chapter sets a plain-English foundation. You’ll define AI in everyday workplace terms, understand what “governance” is (and what it is not), identify who is affected by AI rules, and choose a realistic goal for your first AI policy. Along the way, you’ll see where risks typically show up—privacy leaks, security gaps, inaccurate outputs, bias, and unclear accountability—and how clear rules prevent small mistakes from becoming expensive incidents.

Think of governance as the bridge between values (privacy, fairness, safety, quality) and daily behavior (what you may paste into a tool, what you must verify, who approves a use case). The best governance is boring in the best way: simple, repeatable decisions that let the business move quickly without surprises.

Practice note: for each milestone in this chapter (defining AI in everyday workplace terms, understanding what “governance” is and is not, identifying who is affected by AI rules at work, and choosing a practical goal for your first AI policy), document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 1.1: AI vs. automation—what’s different
Section 1.2: Why AI needs rules (speed, scale, and mistakes)
Section 1.3: Governance as “how decisions get made”
Section 1.4: Policies, standards, and procedures—simple definitions
Section 1.5: The AI lifecycle: choose, use, monitor, improve
Section 1.6: Start small—minimum viable governance

Section 1.1: AI vs. automation—what’s different

In workplaces, people often label any “smart” software as AI. For governance, you need a working definition that fits everyday decisions. A useful distinction is this: automation follows predefined rules; AI makes probabilistic predictions or generates content based on patterns learned from data.

Automation is like a spreadsheet formula or a workflow rule: “If a ticket is urgent, route it to Team A.” The behavior is deterministic and predictable when inputs are known. AI is different: it may classify, rank, recommend, summarize, or generate text/images/code, but it does so with uncertainty. Two people can ask the same question and get different wording, emphasis, or even different conclusions.

In plain workplace terms, AI includes tools such as large language models (drafting and summarizing), machine-learning scoring models (risk scores, lead scoring, fraud flags), speech-to-text and sentiment tools (call center analytics), and computer vision (document extraction). The governance challenge is not just what the tool can do, but what people will assume it can do. A common mistake is treating AI output as if it were a fact or a policy decision, when it is often a suggestion.

  • Rule of thumb: If the system can “hallucinate” (confidently produce wrong content) or “drift” (change performance over time), it needs stronger human review and monitoring.
  • Practical outcome: Your policy should define AI broadly enough to cover new tools, but concretely enough that employees recognize it in their day-to-day work.

Start your course notes with an everyday definition you can reuse in policy language: “AI tools generate or predict outputs from data and may be wrong, biased, or inconsistent; therefore they require appropriate review before use in decisions.”
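The rule of thumb above can be sketched as a tiny decision helper. This is an illustration, not official policy language; the function name, flags, and review levels are assumptions for the sketch:

```python
def required_review(can_hallucinate: bool, can_drift: bool, high_impact: bool) -> str:
    """Map tool traits to a review level (illustrative, not a prescribed standard).

    Tools that can confidently produce wrong content, or whose performance
    can change over time, need stronger human review and monitoring.
    """
    if high_impact and (can_hallucinate or can_drift):
        return "mandatory human review + ongoing monitoring"
    if can_hallucinate or can_drift:
        return "human review before use"
    return "spot checks"

# A deterministic workflow rule: predictable, so spot checks suffice.
print(required_review(can_hallucinate=False, can_drift=False, high_impact=False))
# A chatbot drafting customer-facing replies: the strongest controls apply.
print(required_review(can_hallucinate=True, can_drift=True, high_impact=True))
```

The point of the sketch is that review requirements follow from how the system can fail, not from the tool’s brand name.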

Section 1.2: Why AI needs rules (speed, scale, and mistakes)

Workplaces need AI rules because AI changes three variables at once: speed, scale, and the nature of mistakes. A person can draft one flawed email; AI can draft a hundred in minutes. A person can misread one resume; an AI screening model can systematically downgrade an entire group if the training data reflects past bias. The same capability that makes AI valuable also multiplies risk.

AI mistakes are also different from typical human errors. They can be persuasive, consistent, and hard to detect. A model may invent citations, misstate a contract clause, or output plausible-but-wrong compliance advice. If employees trust the tone and formatting, the error can pass through quickly—especially in fast-moving teams like sales, support, marketing, and engineering.

Rules are not about banning AI. They are about making sure the organization gets the upside (productivity, better service, faster analysis) while preventing predictable failures: privacy leaks from pasting sensitive data into public tools, security incidents from uploading proprietary code, reputational harm from biased language, and operational harm from inaccurate recommendations.

  • Common risk hotspots: hiring and performance decisions, customer communications, financial approvals, safety-related guidance, legal and HR content, and any workflow that touches regulated or confidential data.
  • Engineering judgment: The higher the impact of a decision, the stronger the required controls (human review, testing, restricted data, logging).

AI governance begins by mapping use cases (what people want to do) and identifying where risk can appear (what could go wrong, for whom, and how you would notice). This chapter will help you name the affected groups and set expectations for review before AI output becomes a decision, a customer message, or a stored record.

Section 1.3: Governance as “how decisions get made”

Governance is not a document. Governance is the operating system for decisions: who can approve an AI tool, who can use it, what checks are required, and who is accountable when something goes wrong. In plain English, AI governance answers: “How do we decide what is allowed, and how do we keep it working safely?”

It helps to clarify what governance is not. It is not only legal compliance, and it is not a one-time review. It is not a single team’s job either. Good governance coordinates multiple roles: business owners (who need outcomes), IT/security (who manage access and risk), privacy (who manage personal data), legal/compliance (who interpret obligations), HR (who shapes employee expectations), and front-line users (who know the real workflows).

Identify who is affected by AI rules at work. This includes employees (who need clarity and training), customers (who may receive AI-generated content or decisions), job candidates (who may be evaluated), vendors (whose tools you adopt), and internal stakeholders (finance, audit, leadership). A common mistake is writing rules only for “AI developers,” when many risks come from non-technical staff using general-purpose tools.

  • Roles to define early: an AI Use Case Owner (business), an Approver (risk/privacy/security), an Operator/User (day-to-day), and a Reviewer (quality/audit).
  • Practical outcome: When a new AI use case appears, people should know the path: request → review → approve → monitor → improve or retire.

When governance is working, employees don’t guess. They know what data is allowed, when human review is mandatory, and where to report issues. That clarity is what prevents shadow AI use and inconsistent behavior across teams.
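The request → review → approve → monitor → improve-or-retire path above can be sketched as a minimal state machine. State names here are illustrative assumptions, not a prescribed standard:

```python
# Allowed moves along the use-case path; anything else is an undefined jump.
ALLOWED_TRANSITIONS = {
    "request": {"review"},
    "review": {"approved", "rejected"},
    "approved": {"monitor"},
    "monitor": {"improve", "retired"},
    "improve": {"monitor"},
}

def advance(state: str, next_state: str) -> str:
    """Move a use case along the path; refuse transitions that skip a step."""
    if next_state not in ALLOWED_TRANSITIONS.get(state, set()):
        raise ValueError(f"cannot go from {state!r} to {next_state!r}")
    return next_state

# A use case that is approved, then improved based on monitoring feedback.
state = "request"
for step in ["review", "approved", "monitor", "improve", "monitor"]:
    state = advance(state, step)
print(state)  # monitor
```

Writing the path down this explicitly makes the common failure obvious: a use case that jumps from “request” straight to “approved” never had a review.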

Section 1.4: Policies, standards, and procedures—simple definitions

Workplace AI rules typically come in three layers. Keeping the definitions simple helps you write documents people will follow and leaders will approve.

Policy is the “what” and “why”: the rule at a level that applies across the organization. Example: “Employees must not enter confidential customer data into non-approved AI tools.” A policy is stable and short; it sets direction and boundaries.

Standard is the “how good is good enough”: measurable requirements that support the policy. Example: “Approved AI tools must provide enterprise access controls, data retention settings, and audit logs.” Standards make expectations testable.

Procedure is the “how to”: step-by-step instructions for a specific process. Example: “To request approval for an AI tool, complete the intake form, attach a data classification, run a pilot, and obtain sign-off from Security and Privacy.” Procedures change more often because tools and workflows change.

  • Common mistake: writing a policy that reads like a procedure (too detailed) or a procedure that reads like a policy (too vague).
  • Plain-language test: Could a new employee follow it in their first week without asking three people what it means?

This course outcome includes writing policy statements people can follow. A practical pattern is: “You may do X for Y purpose, but you must do Z safeguard.” For example: “You may use AI to draft internal summaries, but you must verify factual claims and remove personal data before sharing externally.” This keeps rules usable while still setting guardrails for accuracy, privacy, and human review.
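The “You may do X for Y purpose, but you must do Z safeguard” pattern can be treated as a fill-in template. A minimal sketch (the helper name and fields are illustrative):

```python
def policy_statement(action: str, purpose: str, safeguard: str) -> str:
    """Render one policy rule in the may/must pattern."""
    return f"You may {action} for {purpose}, but you must {safeguard}."

print(policy_statement(
    "use AI to draft internal summaries",
    "routine reporting",
    "verify factual claims and remove personal data before sharing externally",
))
```

Drafting each rule against the same template keeps policies short and makes missing safeguards easy to spot in review.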

Section 1.5: The AI lifecycle: choose, use, monitor, improve

AI governance becomes manageable when you treat AI like a lifecycle rather than a one-time purchase. A simple lifecycle is: choose → use → monitor → improve (or retire). This aligns with how risk actually appears: not just at adoption, but during everyday use and over time.

Choose: Decide whether a tool or model is appropriate for a task. This is where you map use cases and spot risk: What data will be used? Who is impacted? Is this a high-stakes decision (hiring, credit, safety) or a low-stakes productivity task (drafting internal notes)? Choose also includes vendor review, access controls, and whether the tool can meet privacy/security needs.

Use: Define how people should use it safely. This is where clear rules matter most: data handling (no sensitive data in unapproved tools), security (approved accounts only), accuracy (verify before acting), bias (avoid using AI to justify discriminatory outcomes), and human review (who signs off before external release).

Monitor: Decide what “healthy” looks like and how you will notice problems. Monitoring can be lightweight at first: sampling outputs for quality, tracking incidents, and watching for drift (e.g., customer complaints rising after a chatbot change). A common mistake is assuming the vendor will monitor impact for you; you still own the business outcome.

Improve: Use feedback to update prompts, training, workflows, and rules. Improvement may also mean narrowing scope or turning off a feature. Governance should make improvement easy by assigning responsibility and setting a review cadence.

  • Practical outcome: For each use case, you can name an owner, allowed data types, required human review, and a simple monitoring plan.

This lifecycle framing also supports roles and responsibilities: someone chooses and approves, someone operates, and someone reviews. When those roles are unclear, risk becomes “everyone’s problem,” which usually means “no one’s job.”
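The practical outcome above (an owner, allowed data types, required review, and a monitoring plan per use case) can be captured as a small record. Field names are assumptions for illustration, not a required schema:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One lightweight lifecycle record per use case (illustrative fields)."""
    name: str
    owner: str            # who answers for the business outcome
    allowed_data: set     # data types permitted as inputs
    human_review: str     # when a person must sign off
    monitoring_plan: str  # how you would notice problems

    def is_complete(self) -> bool:
        # Governance-ready only if every lifecycle question has an answer.
        return all([self.name, self.owner, self.allowed_data,
                    self.human_review, self.monitoring_plan])

case = AIUseCase(
    name="Draft internal meeting summaries",
    owner="Ops team lead",
    allowed_data={"public", "internal"},
    human_review="verify facts before sharing outside the team",
    monitoring_plan="sample 5 summaries per month for accuracy",
)
print(case.is_complete())  # True
```

An empty field is a signal, not a formality: a use case with no named owner or no monitoring plan is exactly where risk becomes “no one’s job.”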

Section 1.6: Start small—minimum viable governance

Many organizations delay AI governance because they imagine a large compliance program. A better approach is minimum viable governance: the smallest set of rules and roles that prevents the most likely harms while enabling useful experimentation. Your first AI policy should have a practical goal—something employees can remember and leaders can enforce.

A good first goal is to control data exposure and set expectations for human review. These two guardrails cover a large portion of real-world incidents. Minimum viable governance typically includes: (1) a clear definition of AI tools covered by the policy, (2) an “approved tools” concept (even if the initial list is short), (3) data handling rules tied to your existing data classifications, (4) required review for external communications or high-impact decisions, and (5) a simple approval and exception process.

  • Example minimum rules (plain language): Use only approved AI accounts; do not paste confidential or personal data into non-approved tools; label AI-generated drafts internally; verify factual statements before sharing; do not use AI output as the sole basis for hiring, disciplinary, or financial decisions.
  • Who approves what: Team leads approve low-risk productivity use; Privacy/Security approve anything involving personal/confidential data; Legal/Compliance review customer-facing or regulated use cases.
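The example minimum rules above can be sketched as a pre-use check. The data classes, tool flag, and decision names here are illustrative assumptions, not an official scheme:

```python
def may_enter_data(data_class: str, tool_is_approved: bool) -> bool:
    """Return True if this class of data may go into this AI tool."""
    if data_class in {"confidential", "personal"}:
        return tool_is_approved          # sensitive data: approved tools only
    return True                          # public or internal data: allowed

def needs_extra_review(decision: str) -> bool:
    """AI output must never be the sole basis for these decisions."""
    return decision in {"hiring", "disciplinary", "financial"}

# Pasting personal data into a non-approved tool is blocked by the first rule.
print(may_enter_data("personal", tool_is_approved=False))  # False
# A hiring decision always requires human judgment beyond the AI output.
print(needs_extra_review("hiring"))                        # True
```

Two short checks like these cover the two guardrails the chapter recommends starting with: data exposure and human review.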

Common mistakes at this stage include writing rules that are too abstract (“use AI responsibly”), creating an approval process so heavy that teams bypass it, or ignoring the needs of non-technical staff. Aim for clarity, not perfection. You can expand over time by adding standards (logging, retention, testing) and procedures (intake forms, monitoring checklists) as your AI footprint grows.

By the end of this chapter, you should be able to say—in one paragraph—what AI governance means in your workplace, who it affects, and what your first policy is trying to achieve. That paragraph becomes the backbone for the rules you’ll write in the next chapters.

Chapter milestones
  • Define AI in everyday workplace terms
  • Understand what “governance” is and what it is not
  • Identify who is affected by AI rules at work
  • Choose a practical goal for your first AI policy
Chapter quiz

1. In this chapter, AI governance is best described as:

Correct answer: Deciding what AI tools people can use, for which tasks, with what data, and under what human oversight
The chapter defines AI governance as practical workplace decision-making about tools, tasks, data, and oversight.

2. Which example shows a team is already "doing AI" at work, according to the chapter?

Correct answer: Using a chatbot to write emails or summarize meetings
The chapter lists everyday uses like drafting emails and summarizing meetings as AI use.

3. What is the main problem AI governance is meant to solve in a typical workplace?

Correct answer: Whether AI use is consistent and safe
The chapter says the key question is not if you use AI, but whether you use it consistently and safely.

4. Which set of risks is highlighted as common when AI is used at work?

Correct answer: Privacy leaks, security gaps, inaccurate outputs, bias, and unclear accountability
These are the typical risk areas the chapter explicitly calls out.

5. The chapter describes governance as a bridge between:

Correct answer: Values (privacy, fairness, safety, quality) and daily behavior (what to share, verify, and approve)
Governance connects stated values to concrete everyday actions like what data can be pasted in and what must be verified.

Chapter 2: Inventory AI Use at Work (So You Can Govern It)

You cannot govern what you have not named. In most workplaces, AI is already embedded in everyday tasks—sometimes through official tools, sometimes through “quick fixes” that never went through review. This chapter gives you a practical method to surface where AI is used, describe those uses in plain language, and capture enough detail (data in, data out, and who can see it) to write realistic rules later.

An AI inventory is not a compliance exercise or a hunt for “gotchas.” Done well, it becomes a shared map: what teams are using, why they use it, what it touches, and what could go wrong. It also reduces friction. When employees know the organization understands their needs, they are more willing to use approved tools and follow guardrails.

Engineering judgment matters here: the goal is not perfect completeness on day one; the goal is a repeatable process that finds the important uses first, captures enough context to evaluate risk, and can be updated as tools and workflows change. You’ll work from the outside in: start with common uses, then look for shadow AI, then document inputs/outputs/storage, and finally classify by impact to decide where governance effort should go.

  • Outcome of this chapter: a one-page AI use inventory you can circulate and maintain.
  • Scope: AI tools employees use directly (chatbots, writing assistants) and AI embedded in products/services (recommendations, routing, fraud flags).

In the sections that follow, you will list where AI is already used, group use cases by purpose and impact, identify data paths and access, and consolidate everything into a reusable template.

Practice note: for each milestone in this chapter (listing where AI is already used, including “shadow AI”; grouping use cases by purpose and impact; identifying data inputs, outputs, and who sees them; and creating a one-page AI use inventory), document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 2.1: Common workplace AI uses (writing, search, support)
Section 2.2: Shadow AI—how it happens and why it matters
Section 2.3: Inputs, outputs, and storage—where data can leak

Section 2.1: Common workplace AI uses (writing, search, support)

Start your inventory with the “normal” uses—the ones people will admit quickly because they feel harmless or productivity-focused. This reduces defensiveness and gives you a baseline for later comparison. In many organizations, the most frequent uses fall into three buckets: writing, search, and support.

Writing and rewriting includes drafting emails, summarizing meeting notes, creating slide outlines, rephrasing sensitive messages, translating content, generating job descriptions, and producing first drafts of policies or SOPs. The governance relevance is not the writing itself, but what the writing contains: internal strategy, customer data, HR information, or regulated content. A common mistake is inventorying only the tool name (“ChatGPT”) rather than the task (“summarize customer escalation emails for weekly report”). The task description is what later determines policy rules.

Search and retrieval includes asking an AI to find answers in documentation, searching across internal wikis, or using “AI search” features inside SaaS products. These uses often blend internal and external data sources. A practical inventory question is: “Is the AI searching your internal documents, the public web, or both?” Another mistake is assuming “search” is safe because it is read-only; the query itself can contain sensitive details, and some tools store prompts.

Support and service work includes customer support agents generating responses, IT helpdesk triage, ticket categorization, call summarization, and knowledge-base article drafting. These are high-leverage workflows, which is exactly why they deserve careful documentation: support systems often contain personal data, account identifiers, and complaint narratives.

  • Practical step: ask each department lead to list the top 5 tasks where AI saves time today (not the top 5 tools).
  • Capture “human-in-the-loop” reality: do people copy/paste output directly, or do they edit and verify?

Once you have a rough list, group each use case by purpose (write/search/support) and by audience (internal-only vs customer-facing). This simple grouping makes patterns visible and prepares you for impact-based prioritization later.
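The grouping step above can be done in a few lines once you have a rough list. The use cases below are invented examples for illustration:

```python
from collections import defaultdict

# Illustrative first-pass inventory entries: (task, purpose, audience).
use_cases = [
    ("summarize customer escalation emails", "write", "internal"),
    ("draft job descriptions", "write", "internal"),
    ("AI search across internal wiki", "search", "internal"),
    ("generate customer support replies", "support", "customer-facing"),
]

# Group by (purpose, audience) so patterns become visible.
groups = defaultdict(list)
for task, purpose, audience in use_cases:
    groups[(purpose, audience)].append(task)

for key, tasks in sorted(groups.items()):
    print(key, "->", len(tasks), "use case(s)")
```

Even this simple grouping shows where to start: customer-facing uses are few but carry the most exposure, while internal writing uses are numerous and need clear data rules.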

Section 2.2: Shadow AI—how it happens and why it matters

“Shadow AI” is AI use that bypasses official approval: personal accounts, browser extensions, unvetted plugins, or features quietly turned on inside existing tools. Shadow AI is usually not malicious. It happens because employees feel urgency (“I need an answer now”), friction (“the approved tool is slow”), or uncertainty (“nobody said I couldn’t”). If you treat it like wrongdoing, people hide it; if you treat it like signal, you learn what the business actually needs.

Shadow AI matters because governance assumptions break. Your organization may believe prompts are not stored, that data stays within certain regions, or that access is controlled—none of which is true if people are using consumer tools, free tiers, or personal accounts. Shadow AI also creates inconsistency: one team may use AI to screen resumes while another refuses, producing uneven outcomes and potential fairness concerns.

To discover shadow AI, use multiple channels. Interviews and surveys are necessary but incomplete. Also review:

  • Expense reports (AI subscriptions, transcription services, “assistant” tools).
  • Browser extension allowlists/denylists and endpoint management data (where appropriate and lawful).
  • Procurement inquiries (“Can I buy this tool?”) and helpdesk tickets (“How do I connect this plugin?”).
  • System logs for API usage if your environment supports it.

A key judgment call: distinguish between an unauthorized tool and an unauthorized use. Sometimes the tool is approved, but the use case is not (e.g., an approved chatbot used to paste customer medical information). Your inventory should capture both the tool and the use case so you can write rules that target real behavior.

Common mistake: trying to eliminate shadow AI before offering alternatives. The practical outcome you want is safe adoption, which often requires a clear “approved path” (enterprise accounts, privacy settings, and training) alongside a clear “no” list (classes of tools or data types that must never be used).

Section 2.3: Inputs, outputs, and storage—where data can leak

An AI inventory becomes governance-ready when you document data flow: what goes into the tool, what comes out, where it is stored, and who can see it. Many AI risks are not about “AI” in the abstract—they are about routine data handling mistakes amplified by speed and scale.

For each use case, identify inputs (prompts, files, copied text), outputs (generated text, summaries, classifications, scores), and storage (chat history, vendor logs, internal databases, exported documents). Then capture access: which roles can view inputs/outputs and whether they are shared beyond the immediate user (team workspace, admin console, vendor support).

  • Input examples: customer email threads, support tickets, contracts, source code, HR notes, financial forecasts, images, voice recordings.
  • Output examples: “next best reply,” risk flags, sentiment labels, summarized call notes, rewritten policy language.
  • Storage examples: vendor prompt retention, model training use, internal ticketing system attachments, shared drive exports.

Watch for “copy/paste bridges.” Even if the AI tool is approved, people may paste output into systems with different retention rules or broader visibility. Conversely, they may paste sensitive data from a restricted system into a less controlled AI interface. Another frequent leak path is attachments: users upload spreadsheets or PDFs that contain more sensitive data than the immediate task requires.

Engineering judgment: you do not need to map every network hop. You do need to capture the decisive points for policy: data classification (public/internal/confidential), retention (how long is it kept), and sharing (who can access). A practical outcome is the ability to write rules like “Do not paste customer identifiers into external tools” because you have already identified which workflows currently do that.
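A minimal pre-check along these lines can flag obvious identifiers before text leaves a restricted system. The patterns below are illustrative assumptions (the account-ID format is invented), not a complete detection rule:

```python
import re

# Illustrative patterns only -- real identifier formats vary by organization.
IDENTIFIER_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "account_id": re.compile(r"\bACCT-\d{6,}\b"),   # hypothetical internal format
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def find_identifiers(text):
    """Return identifier types found in text before it is sent to an external tool."""
    return sorted(
        kind for kind, pattern in IDENTIFIER_PATTERNS.items() if pattern.search(text)
    )

prompt = "Summarize the complaint from jane.doe@example.com about ACCT-0042917."
print(find_identifiers(prompt))  # → ['account_id', 'email']
```

A check like this is a speed bump, not a guarantee: it catches routine mistakes, while the policy rule ("do not paste customer identifiers into external tools") covers what patterns miss.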

Section 2.4: Internal vs. external tools—what changes

Not all AI tools are governed the same way. A core distinction for your inventory is whether the tool is internal (built or hosted within your controlled environment) or external (a third-party service, SaaS feature, or consumer app). The same use case—say, summarizing meeting notes—has different risks depending on where the data goes and what contractual controls exist.

Internal tools typically offer stronger alignment with enterprise controls: single sign-on, access logging, data residency, and integration with your data classification and retention policies. However, internal does not mean safe by default. Internal tools can still leak data through misconfigured permissions, overly broad access, or poor separation between environments (dev/test/prod). They can also introduce model risks if training data includes sensitive content without controls.

External tools introduce vendor and supply-chain considerations: prompt retention, use for training, subprocessors, regional storage, and support access. Even when a vendor offers “enterprise privacy,” you need to verify the settings and contract terms actually match your expectations. A common mistake is assuming a consumer account behaves like an enterprise account. Your inventory should capture account type (personal/free, team, enterprise) and configured settings (history on/off, training opt-out, sharing disabled, etc.).

  • Inventory fields that matter here: vendor name, plan tier, authentication method, where the tool is accessed (browser, plugin, API), and whether it is embedded in another product.
  • Practical governance outcome: clear approval paths (e.g., “external AI tools require procurement and security review”) grounded in real usage.

When you later write workplace AI rules, this distinction often becomes a simple policy structure: what is allowed internally with guardrails, what is allowed externally with stricter data limits, and what is prohibited entirely (for example, uploading regulated data to any external system).

Section 2.5: High-impact vs. low-impact use cases

Once you can see the landscape, you must prioritize. Not every AI use deserves the same governance effort. The most useful next step is to classify use cases as high-impact or low-impact based on the potential harm if the AI is wrong, biased, leaked, or misused.

Low-impact uses are typically internal productivity tasks with minimal consequences if the output is imperfect, such as drafting internal meeting agendas, brainstorming names, or rewriting non-sensitive text. These still need basic data handling rules, but they rarely require formal approvals.

High-impact uses affect people’s rights, opportunities, finances, health, legal exposure, or safety—or they make decisions that customers or employees cannot easily contest. Examples include: screening candidates, recommending disciplinary action, approving credit/discounts/refunds automatically, generating legal advice sent to customers, diagnosing issues, or producing compliance-critical reports. High-impact also includes any use that handles highly sensitive data (e.g., medical, payroll, government IDs) or that is customer-facing at scale (one error replicated thousands of times).

  • Impact signals: decisions about hiring, pay, promotion, termination; eligibility or access decisions; safety-critical operations; regulated communications; automated actions without human review.
  • Likelihood signals: frequent use, time pressure, low user expertise, unclear instructions, or known model limitations (hallucinations, uneven performance across groups).

A practical method is a 2x2: impact (high/low) vs control maturity (strong/weak). High impact + weak controls is your first governance target. Common mistake: prioritizing by visibility (“everyone uses it”) rather than by consequence (“it can harm people or create legal risk”).
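The 2x2 can be sketched as a simple ranking. The example use cases and the exact ordering of the two middle cells are judgment calls, shown here as one reasonable choice:

```python
def governance_priority(impact, control_maturity):
    """Rank a cell of the 2x2: impact (high/low) vs. control maturity (strong/weak)."""
    grid = {
        ("high", "weak"):   1,  # first governance target
        ("high", "strong"): 2,  # monitor; keep controls current
        ("low", "weak"):    3,  # basic safe-use rules
        ("low", "strong"):  4,  # lowest effort
    }
    return grid[(impact, control_maturity)]

cases = [
    ("Screen candidates", "high", "weak"),
    ("Draft meeting agendas", "low", "strong"),
    ("Generate customer replies", "high", "strong"),
]
ordered = sorted(cases, key=lambda c: governance_priority(c[1], c[2]))
# Consequence, not visibility, drives the ordering.
print([name for name, _, _ in ordered])
```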

This classification prepares you to assign roles and responsibilities later: low-impact uses can follow standard “safe use” rules, while high-impact uses may require documented approval, testing, monitoring, and explicit human review steps.

Section 2.6: A simple inventory template you can reuse

Your goal is a one-page AI use inventory that is easy to complete, easy to update, and detailed enough to drive policy decisions. If the template is too long, teams will avoid it; if it is too short, it will not support governance. The template below is intentionally simple and focuses on the minimum viable fields that connect to privacy, security, accuracy, bias, and human review.

  • Use case name (verb + object): “Summarize customer calls,” “Draft job postings,” “Classify inbound tickets.”
  • Business purpose: writing, search, support, analysis, automation, other.
  • Team/owner: accountable person and department.
  • Tool: product name, vendor, version; internal vs external; account tier.
  • Workflow: where it happens (browser, plugin, API), frequency, and whether output is customer-facing.
  • Inputs: data types used (include whether any personal/confidential/regulated data).
  • Outputs: what the AI produces (text, score, label, decision suggestion) and how it is used.
  • Human review: required? who reviews? what is checked (facts, tone, policy compliance).
  • Storage & retention: chat history, logs, exports, where stored, how long kept.
  • Access: who can view inputs/outputs; sharing settings; admin visibility.
  • Impact rating: high vs low, with a one-sentence rationale.
  • Known issues/controls: prompt guidance, redaction steps, banned data types, monitoring.

Workflow to produce your first inventory in one week: (1) run a 30-minute intake with each department; (2) draft entries yourself to reduce burden; (3) send back for validation; (4) flag high-impact or unclear data flows for follow-up; (5) publish a living document with an owner and review cadence (monthly or quarterly).
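One way to make the template concrete is a structured record; the field names below mirror the bullet list above, but the class itself and its sample values are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One row of the one-page AI use inventory (field names are illustrative)."""
    name: str                # verb + object, e.g. "Summarize customer calls"
    purpose: str             # write / search / support / analysis / automation
    owner: str               # accountable person and department
    tool: str                # product, vendor, internal vs. external, account tier
    inputs: list = field(default_factory=list)   # data types entered
    outputs: list = field(default_factory=list)  # what the AI produces
    human_review: str = "required"               # who reviews and what is checked
    storage: str = ""        # chat history, logs, exports, retention
    access: str = ""         # who can view inputs/outputs
    impact: str = "low"      # high/low with a one-sentence rationale
    known_issues: list = field(default_factory=list)

entry = AIUseCase(
    name="Classify inbound tickets",
    purpose="support",
    owner="Support team lead",
    tool="External SaaS, enterprise tier",
    inputs=["ticket text (may contain personal data)"],
    outputs=["category label"],
    impact="high",
)
print(entry.impact, entry.human_review)
```

A shared spreadsheet with these columns serves the same purpose; the structure matters more than the medium.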

Common mistakes to avoid: treating the inventory as static, leaving out embedded AI features in existing software, and failing to record who is responsible for the use case. The practical outcome is immediate: you now have a credible map of AI usage that makes the next chapters—writing clear rules, assigning approvals, and setting guardrails—concrete rather than theoretical.

Chapter milestones
  • List where AI is already used (including “shadow AI”)
  • Group AI use cases by purpose and impact
  • Identify data inputs, outputs, and who sees them
  • Create a one-page AI use inventory
Chapter quiz

1. Why does the chapter argue you must inventory AI use before writing governance rules?

Show answer
Correct answer: Because you cannot govern what you have not named, and rules must match real workflows
The chapter frames inventorying as necessary to make realistic rules based on actual AI use across the workplace.

2. Which sequence best matches the chapter’s recommended “outside in” approach to building an AI inventory?

Show answer
Correct answer: Start with common uses, then find shadow AI, then document inputs/outputs/storage, then classify by impact
The chapter specifies an outside-in order that surfaces key uses first and adds detail needed for risk evaluation.

3. What information must each AI use case capture to support later risk evaluation and rule-writing?

Show answer
Correct answer: Data inputs, data outputs, and who can see/access them
The chapter emphasizes mapping data paths (in/out) and visibility/access so risks can be evaluated realistically.

4. How does the chapter characterize a well-done AI inventory in relation to employees and adoption of guardrails?

Show answer
Correct answer: A shared map that reduces friction and increases willingness to use approved tools and follow guardrails
It’s positioned as a shared map that builds trust and reduces friction, improving compliance with guardrails.

5. Which set of AI uses is explicitly in scope for the chapter’s inventory?

Show answer
Correct answer: AI tools employees use directly and AI embedded in products/services
The scope includes both direct-use tools (e.g., chatbots) and embedded AI (e.g., recommendations, fraud flags).

Chapter 3: Set Principles and Risk Levels (Your Policy Backbone)

Policies fail when they read like legal disclaimers or when they give people only one tool: “don’t.” This chapter gives you a practical backbone for workplace AI rules: a small set of guiding principles, a simple risk model anyone can use, clear decisions about what’s allowed vs. limited vs. not allowed, and explicit “human in the loop” checkpoints for critical work.

Think of this backbone as the operating system for the rest of your governance. Principles tell people why the rules exist and how to make judgment calls. Risk levels tell them how careful to be for a given use. Approval paths and review requirements turn those ideas into repeatable workflow.

A common mistake is starting with a list of tools (“ChatGPT is allowed, Tool X is not”). Tools change weekly; your values and risk approach should not. Another mistake is over-building a risk framework that only compliance experts can use. Your best policy backbone is understandable to a front-line employee in five minutes and still defensible to leadership and regulators.

In the sections that follow, you’ll draft 5–7 guiding principles, define risk as a combination of harm and likelihood, map uses into a low/medium/high tier, and write plain-language rules for when human review is required. You’ll also establish “red lines” that remove ambiguity and close the loop by explaining tradeoffs: how to enable real productivity while reducing risk.

Practice note for Write 5–7 guiding principles for AI use: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Practice note for Define risk levels anyone can understand: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Practice note for Decide which uses are allowed, limited, or not allowed: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Practice note for Add “human in the loop” rules for critical work: document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 3.1: Principles: fairness, privacy, safety, accountability

Guiding principles are short statements that help employees make consistent decisions when your policy doesn’t explicitly cover a situation. Aim for 5–7 principles, written in plain language, each with an action implication. If a principle can’t be translated into a behavior (“do X / don’t do Y”), it’s too abstract.

Start with four foundational principles most workplaces need: fairness, privacy, safety, and accountability. Then add one to three that fit your context (for example: transparency, security-by-design, or purpose limitation). Below is a practical set you can adapt, each tied to how people should work.

  • Fairness: We do not use AI in ways that create unjustified disparities across protected or vulnerable groups. Action: test and monitor for bias in hiring, performance, credit, benefits, and customer decisions; document mitigations.
  • Privacy: We minimize personal data shared with AI tools and follow our data classification rules. Action: do not paste confidential or personal data into unapproved tools; use anonymization and approved environments.
  • Safety: AI outputs must not introduce unacceptable risk to people, customers, or operations. Action: require review for safety-sensitive advice, instructions, or decisions.
  • Accountability: Humans remain responsible for outcomes. Action: name an owner for each AI use case; keep records of approvals, changes, and incidents.
  • Transparency: We disclose AI assistance where it matters to trust or compliance. Action: label AI-generated content in external communications when required; keep internal notes on material AI involvement.
  • Quality & accuracy: AI is a draft partner, not an authority. Action: verify facts, citations, calculations, and policy statements before use.

Common mistake: listing principles without defining “who does what.” Pair your principles with roles: the business owner defines the purpose and impact; IT/security validates tooling and data flows; legal/compliance defines constraints; managers enforce day-to-day behavior; users follow the rules and report issues. Principles are your compass—roles turn the compass into a route people can follow.

Section 3.2: What “risk” means: harm, likelihood, and scale

To govern AI well, you need a shared definition of “risk” that doesn’t require a statistics background. In workplace AI, risk is usually a combination of: (1) how bad the harm could be, (2) how likely it is to happen, and (3) how widely the harm could spread.

Harm includes more than financial loss. It can be privacy exposure (leaking customer data), safety issues (incorrect instructions), legal/regulatory violations (unlawful discrimination), reputational damage (misleading claims), or operational harm (bad decisions at scale).

Likelihood is the chance the harm occurs given your context: the tool’s reliability, how trained the users are, whether guardrails exist, and whether there is review. A weak process can turn a moderately capable model into a high-likelihood risk.

Scale captures “blast radius.” One wrong internal email draft is small. The same error in a customer-facing template that is reused across thousands of accounts is large. Scale is also about speed: AI can spread mistakes quickly through automation and reuse.

  • Risk goes up when AI outputs directly affect people’s rights, access, pay, hiring, credit, healthcare, or safety.
  • Risk goes up when personal, confidential, regulated, or export-controlled data is involved.
  • Risk goes up when the AI action is automated or reused broadly (templates, auto-approvals, bulk processing).
  • Risk goes down when outputs are clearly non-critical, used for brainstorming, and reviewed before any external or binding use.

Engineering judgment matters here: “likelihood” is not only model quality. It’s workflow design. If people routinely copy-paste outputs into customer contracts, your likelihood of harm is high even if the model is “usually right.” Treat risk as a system property—tool + data + users + process.
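One way to operationalize this definition is a rough score. The 1–3 rating scale and the multiplication are an assumption for illustration, not a prescribed method; the point is that all three factors contribute:

```python
def risk_score(harm, likelihood, scale):
    """Treat risk as a system property: harm x likelihood x scale, each rated 1-3."""
    for value in (harm, likelihood, scale):
        if value not in (1, 2, 3):
            raise ValueError("rate each factor 1 (low) to 3 (high)")
    return harm * likelihood * scale

# Internal brainstorming draft: low harm, low likelihood, low scale.
print(risk_score(1, 1, 1))   # → 1
# Reusable customer-facing template with weak review: high on all three.
print(risk_score(3, 3, 3))   # → 27
```

Crude scores like this are most useful for comparison and conversation (“why did you rate scale a 3?”), not as precise measurements.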

Section 3.3: A beginner-friendly risk tier model (low/medium/high)

A three-tier model works well for most organizations starting AI governance. It’s simple enough for employees to use and structured enough to drive approvals and controls. Your goal is to help someone answer, “Is this allowed, limited, or not allowed?” without a meeting.

Low risk means minimal harm if wrong, limited data sensitivity, and easy reversibility. Typical examples: brainstorming, rewriting internal text, summarizing non-confidential notes, generating draft meeting agendas, or producing code snippets for non-production experiments. Controls: use approved tools; no confidential/personal data; user checks for obvious errors; no automated publishing.

Medium risk means meaningful impact is possible, the work may be customer-facing, or sensitive data might be involved (even if masked). Examples: drafting customer emails, creating marketing copy, summarizing support tickets with identifiers removed, creating internal policies, assisting analysts with reports that influence decisions, or generating code for systems that could reach production. Controls: approved tools and environments; stronger data handling rules; required human review; documentation of prompts/inputs for traceability when appropriate.

High risk means potential for serious harm, legal exposure, safety issues, or rights-impacting decisions. Examples: hiring screening recommendations, performance and compensation decisions, credit/insurance eligibility, medical or safety advice, security-sensitive automation, or generating customer contract terms without legal oversight. Controls: formal approval; testing and monitoring; documented model limitations; access restrictions; incident response plan; and mandatory human oversight with sign-off.

  • Allowed (Low): employees can use with standard rules.
  • Limited (Medium): allowed only with required review steps and approved data handling.
  • Not allowed (High without controls): prohibited unless an explicit governance process approves and safeguards it.

Common mistake: labeling a use “low risk” because it’s “just a draft.” If the draft becomes a template used across the organization, the scale increases and the tier may change. Build a habit: reassess tier when the audience changes (internal → external), when automation is added, or when sensitive data enters the workflow.
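A simplified sketch of the tier decision, using yes/no attributes from this section. The exact mapping is an illustrative assumption that each organization should tune:

```python
def assign_tier(customer_facing, sensitive_data, rights_impacting, automated):
    """Map a use case's attributes to the low/medium/high tier (simplified sketch)."""
    if rights_impacting or (sensitive_data and automated):
        return "high"       # formal approval, testing, mandatory human sign-off
    if customer_facing or sensitive_data or automated:
        return "medium"     # required human review, approved data handling
    return "low"            # standard safe-use rules

# Internal brainstorming starts low; re-tier when the audience changes.
print(assign_tier(False, False, False, False))  # → low
print(assign_tier(True, False, False, False))   # → medium (now customer-facing)
print(assign_tier(True, False, True, False))    # → high (rights-impacting)
```

Encoding the tier rules this explicitly, even just in a flowchart, makes the “reassess when scope changes” habit concrete: change one input, and the tier visibly changes.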

Section 3.4: When human review is required and why

“Human in the loop” is not a slogan; it’s a control that prevents AI from making unverified decisions or statements in high-impact contexts. The key is to define when review is required, what the reviewer must check, and what “approval” means in your organization.

Require human review whenever AI output is: (1) customer- or public-facing, (2) used to make or justify a decision affecting someone’s rights, access, pay, or safety, (3) based on sensitive data, or (4) likely to be reused at scale (templates, scripts, automated workflows). In practice, this usually covers most medium-risk uses and all high-risk uses.

  • Accuracy checks: verify facts, calculations, dates, pricing, citations, and any claims about policies, law, or medical/safety guidance.
  • Bias and fairness checks: look for disparate treatment signals (language that encodes protected traits, unjustified proxies, inconsistent criteria).
  • Privacy checks: confirm no personal/confidential data was entered into unapproved tools; ensure output doesn’t reveal sensitive details.
  • Security checks: for code or configurations, review for vulnerabilities, secrets, and unsafe defaults.

Define review depth by tier. For medium risk, a competent peer review may be enough (manager or designated reviewer signs off). For high risk, require a documented approval with named accountable owner, plus domain experts (legal, HR, security, safety) as applicable.

Common mistake: “human review required” but no time is allocated, so people rubber-stamp. Make the workflow realistic: add checklists, require reviewers to edit or comment, and ensure the organization accepts slightly slower throughput for higher confidence work. Governance is engineering: you’re designing a process that reliably catches failures, not hoping individuals will be vigilant forever.
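A lightweight way to keep reviews from becoming rubber stamps is to record which checks were actually done. The checklist items below paraphrase this section’s bullets; the record function itself is an illustrative sketch:

```python
REVIEW_CHECKLIST = {
    "accuracy": ["facts", "calculations", "dates", "pricing", "citations"],
    "fairness": ["protected-trait language", "unjustified proxies", "inconsistent criteria"],
    "privacy":  ["no personal data in unapproved tools", "no sensitive details in output"],
    "security": ["vulnerabilities", "secrets", "unsafe defaults"],
}

def review_record(use_case, reviewer, checks_done):
    """Record a human review; incomplete checklists are flagged, not silently passed."""
    missing = sorted(set(REVIEW_CHECKLIST) - set(checks_done))
    return {
        "use_case": use_case,
        "reviewer": reviewer,
        "approved": not missing,
        "missing_checks": missing,
    }

record = review_record("Draft customer email", "team lead", ["accuracy", "privacy"])
print(record["approved"], record["missing_checks"])  # → False ['fairness', 'security']
```

The same idea works as a form or ticket template; what matters is that “approved” cannot be reached without naming the checks that were done.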

Section 3.5: Red lines: prohibited uses (and clear examples)

Every policy backbone needs a short list of red lines: uses that are never allowed, or not allowed without a formal exception process. Red lines remove ambiguity, protect employees from pressure (“just try it”), and reduce organizational exposure. Keep them concrete and example-driven.

  • No entering restricted data into unapproved AI tools: personal data, customer confidential information, credentials, proprietary source code, regulated data (PHI/PCI), or trade secrets. Example: pasting a customer contract with names and pricing into a public chatbot.
  • No fully automated high-impact decisions: AI cannot be the sole basis for hiring, firing, compensation, credit, benefits, or access decisions. Example: auto-rejecting candidates based on an AI score without human review and documented criteria.
  • No impersonation or deceptive content: don’t generate messages that misrepresent identity, endorsements, or approvals. Example: creating a “CEO-approved” statement or fake customer testimonials.
  • No unsafe instructions: do not use AI to generate or distribute instructions that could cause harm without expert review. Example: safety procedures, medical advice, or hazardous equipment steps without qualified sign-off.
  • No bypassing security controls: AI must not be used to write malware, exploit code, or to circumvent authentication/monitoring. Example: asking for ways to disable endpoint detection.

Write red lines in “do not” language and include at least one example for each. Another practical tip: specify what to do instead. For instance: “Use the approved enterprise AI environment for any work involving internal documents; otherwise use synthetic or anonymized examples.” Red lines should feel protective and actionable, not punitive.

Section 3.6: Tradeoffs: enabling value while reducing risk

The purpose of governance is not to stop AI use; it’s to make AI use dependable. A strong policy backbone balances value and risk by steering people toward safer pathways rather than forcing “shadow AI” behavior. If rules are too strict or unclear, employees will route around them.

Make the “safe path” the easiest path. Approve a small set of tools, provide templates for low- and medium-risk prompts, and give people a simple decision flow: identify the data type, identify who will see the output, assign a tier, then follow the matching controls. When employees can self-serve the basics, governance scales.
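The decision flow above can be sketched as a tiny lookup. The data-type names, audience labels, and tier mapping below are illustrative assumptions, not a standard:

```python
CONTROLS = {
    "low":    "approved tools; no confidential data; check for obvious errors",
    "medium": "approved tools; required human review; documented data handling",
    "high":   "formal approval; named owner; expert review and sign-off",
}

def decision_flow(data_type, audience):
    """Self-serve decision flow: data type + audience -> tier -> controls (sketch)."""
    if data_type == "sensitive":
        tier = "high"
    elif data_type == "internal" or audience == "external":
        tier = "medium"
    else:  # public data, internal audience
        tier = "low"
    return tier, CONTROLS[tier]

tier, controls = decision_flow("internal", "external")
print(tier)  # → medium
```

When employees can run this flow in their heads, governance scales without a meeting for every prompt.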

  • Enable with guardrails: allow low-risk drafting and brainstorming broadly, but restrict sensitive data and external publishing.
  • Reduce risk through design: prefer retrieval from approved knowledge bases over pasting documents into chat; use redaction/anonymization; log usage in enterprise tools.
  • Invest where it matters: spend review time and testing budget on high-impact use cases; don’t over-control trivial tasks.

Also plan for change. Risk levels shift as you add automation, integrate with systems of record, or expand to new audiences. Build a lightweight review cadence: quarterly re-check of medium/high use cases, a place to report incidents, and a trigger to re-tier when scope changes.

Practical outcome: by the end of this chapter, you should have (1) 5–7 principles employees can repeat, (2) a shared definition of risk (harm × likelihood × scale), (3) a low/medium/high tier model tied to allowed/limited/prohibited decisions, and (4) explicit human review rules for critical work. This backbone will make the rest of your AI policy clearer, shorter, and easier to enforce.

Chapter milestones
  • Write 5–7 guiding principles for AI use
  • Define risk levels anyone can understand
  • Decide which uses are allowed, limited, or not allowed
  • Add “human in the loop” rules for critical work
Chapter quiz

1. Why does Chapter 3 recommend starting with guiding principles and a risk approach instead of a list of approved AI tools?

Show answer
Correct answer: Because tools change frequently, but principles and risk approach stay stable and transferable
The chapter warns that tool-first policies become outdated quickly; principles and risk levels help people make consistent decisions even as tools change.

2. What makes a risk model effective according to the chapter?

Show answer
Correct answer: It is understandable to a front-line employee in five minutes and still defensible to leaders and regulators
The chapter emphasizes a simple, usable framework that works for everyday staff while remaining credible to leadership and regulators.

3. How does the chapter define risk for workplace AI use?

Show answer
Correct answer: Risk is a combination of potential harm and the likelihood of that harm occurring
It explicitly frames risk as harm × likelihood, which then maps into tiers like low/medium/high.

4. What is the main purpose of defining uses as allowed, limited, or not allowed?

Show answer
Correct answer: To turn principles and risk levels into clear, repeatable decisions about what people can do
The chapter stresses moving from abstract values to clear operational rules and workflows (including approvals and reviews).

5. What does adding “human in the loop” rules accomplish for critical work?

Show answer
Correct answer: It sets explicit human review checkpoints so high-impact decisions aren’t made solely by AI
Human-in-the-loop checkpoints are described as plain-language rules for when human review is required, especially for critical work.

Chapter 4: Write Clear Rules for Data, Privacy, and Security

If Chapters 1–3 helped your workplace decide why you need AI rules and how to write them in plain language, this chapter turns to the question employees ask the moment they open an AI tool: “What am I allowed to put in here?” Most AI incidents in workplaces are not dramatic hacking stories. They are ordinary mistakes—pasting the wrong snippet of text, using a personal account, uploading a spreadsheet without thinking, or assuming the tool is “private” when it is not. Clear governance prevents these errors by making data categories simple, creating do/don’t rules people can remember, setting basic account expectations, and giving everyone a short checklist that fits into normal work.

The goal is not to write a perfect legal definition. The goal is to reduce confusion and variability. When policies are vague (“be careful with data”), employees fill the gaps with assumptions. When policies are concrete (“never paste customer lists, contracts, or credentials”), people can comply even under deadline pressure. The best rules also match engineering judgment: they recognize that risk depends on the data and the tool and who can access the outputs. Throughout this chapter you will build a practical structure: three simple data categories (public, internal, sensitive), a few specific restrictions for AI prompts and uploads, basic security expectations for accounts, and a repeatable checklist.

As you draft, remember a useful principle: write rules as if the reader is smart, busy, and not thinking about security right now. That is real life. Good governance succeeds in real life.

Practice note: for each milestone in this chapter (choosing simple data categories, writing do/don’t rules for entering data into AI tools, setting basic security expectations for accounts and access, and creating a short employee checklist), document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 4.1: What counts as sensitive data (plain-language examples)

Start by choosing simple data categories your whole organization can apply consistently. A practical set is: Public, Internal, and Sensitive. These categories are not about whether something feels “important.” They are about what could realistically go wrong if the data is shared outside approved channels—whether by mistake, through tool training, through logging, or by someone else gaining access.

Public data is information you would be comfortable seeing on your website or in a press release. Examples: published marketing copy, job postings, public product documentation, already-announced pricing, public research articles. Employees can usually use public data in most AI tools, but they must still watch for accuracy and brand voice.

Internal data is non-public business information that is not intended for external sharing. Examples: internal process docs, meeting notes that don’t include sensitive details, draft project plans, internal org charts, internal metrics that are not customer- or employee-identifying. Internal data may be allowed only in approved tools where your organization has configured privacy settings and access controls.

Sensitive data is where you should draw a bright line. Give employees examples they can recognize in seconds. Common sensitive items include: customer lists and contact details; employee HR records; individual performance notes; payroll details; passwords, API keys, tokens, private certificates; source code for unreleased products; security incident details; unreleased financial results; contracts and legal correspondence; non-public product roadmaps; regulated data (medical information, government IDs); and any document marked “Confidential.”

A common mistake is to define “sensitive” so narrowly that only obvious regulated data qualifies. In practice, credentials, customer identifiers, and contractual documents cause frequent problems. Your rule should be memorable: if it identifies a person, grants access, or exposes confidential business plans, treat it as sensitive. Also state what to do when unsure: default to “sensitive” and ask the designated approver.
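The three categories above can be encoded as a small lookup that applies the "when unsure, treat it as sensitive" default. This is a minimal sketch: the keyword lists are illustrative assumptions, not an official taxonomy, and a real deployment would adapt them to your organization's documents.

```python
# Minimal sketch of a data-category lookup. The keyword lists are
# illustrative placeholders, not an official taxonomy; adapt them.
SENSITIVE_KEYWORDS = {"password", "api key", "payroll", "contract",
                      "customer list", "medical", "confidential",
                      "hr record", "incident"}
INTERNAL_KEYWORDS = {"internal", "draft", "meeting notes", "org chart"}
PUBLIC_KEYWORDS = {"published", "press release", "job posting", "public"}

def classify(description):
    """Return the data category for a short description of the data."""
    d = description.lower()
    if any(k in d for k in SENSITIVE_KEYWORDS):
        return "Sensitive"
    if any(k in d for k in INTERNAL_KEYWORDS):
        return "Internal"
    if any(k in d for k in PUBLIC_KEYWORDS):
        return "Public"
    return "Sensitive"  # policy default: when unsure, treat as Sensitive
```

Note the ordering: sensitive keywords win over internal and public ones, so a "draft contract" is classified Sensitive, which matches the bright-line rule above.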

Section 4.2: Personal data and confidentiality—core concepts

Employees often confuse “personal data” with “sensitive data.” Clarify the relationship: personal data is information that identifies or can reasonably be linked to an individual (customers, employees, partners). Some personal data is low-risk (a business email in a public directory), while other personal data is high-risk (home address, ID numbers, health information). In governance, the safest practical approach is to treat personal data as at least Internal, and often Sensitive, depending on context and volume.

Confidentiality is broader than privacy. It covers any information your organization has promised (explicitly or implicitly) to keep limited—through contracts, ethics, competitive advantage, or trust. For AI use rules, you want employees to pause on two questions before they paste text into a tool: (1) Is this about a person? and (2) Is this something the organization would not share externally?

Make the concept operational by specifying handling expectations. For example: if a task needs personal data (support ticket analysis, hiring notes, account management), require either (a) an approved internal AI tool with a documented privacy mode, or (b) de-identification: remove names, emails, account numbers, addresses, and any unique identifiers. Explain what “de-identification” means in plain terms: replace “Jane Smith” with “Customer A,” and remove reference numbers that could be used to look someone up.

Engineering judgment matters here. People often think removing the name is enough, but combinations of details can re-identify someone (role, location, exact dates, unique complaint). A good policy sets the expectation: reduce to the minimum data needed. If the AI task does not require identity, do not include identity. The practical outcome is fewer accidental disclosures and fewer reasons to block useful AI assistance for routine work.
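The de-identification steps described above can be sketched as a small helper that masks emails, long digit runs (account or reference numbers), and known names. The regex patterns and the "Customer A" placeholder scheme are illustrative assumptions; real redaction still needs human review, for exactly the re-identification reasons just discussed.

```python
import re

# Minimal de-identification sketch: masks emails, long digit runs
# (account/reference numbers), and a caller-supplied list of names.
# Patterns are illustrative; real redaction still needs human review.
def deidentify(text, names=()):
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\b\d{6,}\b", "[REF]", text)  # long identifiers
    for i, name in enumerate(names):
        # Replace "Jane Smith" with "Customer A", the next name with
        # "Customer B", and so on.
        text = text.replace(name, f"Customer {chr(ord('A') + i)}")
    return text
```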

Section 4.3: Prompting safely: what not to paste into AI

Now convert your categories into do/don’t rules that employees can follow while prompting. The easiest format is a short “Allowed / Allowed with conditions / Not allowed” block. In day-to-day work, “Not allowed” must be explicit. Avoid vague phrases like “avoid confidential data” without examples.

Don’t paste credentials (passwords, API keys, session tokens), even into approved tools. AI tools may store prompts in logs, and credentials are instantly exploitable. Also don’t paste customer lists, full contracts, HR files, incident reports, or unreleased financial results into general-purpose external chatbots. If you allow certain sensitive workflows at all, route them through a controlled tool and process.

Do use safer prompting techniques that achieve the same outcome without risky data. For example: summarize instead of pasting verbatim; use synthetic examples; replace identifiers; ask the AI for a template (“Write a customer email apology template”) rather than providing the real incident details. When employees need help with analysis, encourage them to paste only the minimum excerpt needed and to remove names and reference numbers.

Call out a subtle mistake: employees sometimes paste an entire document “because it’s faster,” then ask for a tiny output. Governance should teach the opposite: share less, ask more. Another mistake is copying internal code or configuration snippets into an external tool to debug them. Instead, encourage generic reproduction steps, redacted config samples, or use of an approved coding assistant configured for your environment.

Practical policy language you can reuse: “If you cannot explain why the AI needs a specific piece of information, remove it.” This sets a simple engineering standard and reduces risk without stopping productivity.
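The "don't paste credentials" rule can be backed by a pre-paste scan. This is a rough sketch, not a complete secret detector: the patterns below are generic shapes (password fields, key-like strings, PEM headers) chosen for illustration.

```python
import re

# Sketch of a pre-paste secret scan. Patterns are illustrative
# examples of common secret shapes, not a complete detector.
SECRET_PATTERNS = [
    r"(?i)password\s*[:=]",                    # password: ... / password=...
    r"(?i)api[_-]?key",                        # api_key, api-key, apikey
    r"(?i)bearer\s+[A-Za-z0-9._-]{16,}",       # bearer tokens
    r"-----BEGIN [A-Z ]*PRIVATE KEY-----",     # PEM private keys
]

def contains_secret(prompt):
    """Return True if the prompt appears to contain a credential."""
    return any(re.search(p, prompt) for p in SECRET_PATTERNS)
```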

Section 4.4: Vendor and tool considerations (without legal jargon)

Your rules should acknowledge an uncomfortable truth: the same prompt is not equally safe in every tool. Rather than forcing employees to read vendor contracts, define a small set of tool “types” and what they are allowed to handle. Keep the language practical and centered on observable controls.

Identify, at minimum, three tool buckets: (1) Public/general AI tools accessed on the open internet; (2) Approved enterprise AI tools where your organization manages accounts and settings; and (3) Internal AI systems hosted and monitored by your organization. Then state your baseline: sensitive data is only permitted in buckets (2) or (3), and only when the specific tool is listed as approved for that data category.

When evaluating a vendor, focus on a short set of questions employees can understand: Does the tool allow the organization to turn off training on your data? Can you control who has access? Does it support single sign-on? Can you export audit logs? Can you delete conversations or files? Where is data stored, and is it encrypted? If you can’t answer these, the tool is not ready for internal or sensitive use.

Also address file uploads and connectors. Uploading a document or connecting a drive can expose far more information than a single prompt. A common mistake is allowing “internal” data in a chat tool while forgetting that the same tool also supports long-term memory, shared workspaces, or auto-indexing of uploads. Write one clear expectation: features that expand data sharing (connectors, team workspaces, memory, plugins) must be explicitly approved.

The practical outcome is a tool list employees trust: they know which tools are safe for which tasks, and you reduce shadow AI usage caused by unclear or overly restrictive rules.
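The three tool buckets and the "sensitive only in approved tools" baseline can be sketched as a small policy table. The bucket names and the example tool id are assumptions for illustration; the real list would live on your approved-tools page.

```python
# Sketch of the three tool buckets and what each may handle.
# Bucket names and the approved-tool id are illustrative assumptions.
BUCKET_ALLOWS = {
    "public": {"Public"},
    "enterprise": {"Public", "Internal"},  # Sensitive handled below
    "internal": {"Public", "Internal", "Sensitive"},
}
# Enterprise tools explicitly approved for Sensitive data:
SENSITIVE_APPROVED = {"enterprise-hr-suite"}  # hypothetical tool id

def is_allowed(bucket, tool_id, category):
    """Check whether a data category may go into a given tool."""
    if bucket == "enterprise" and category == "Sensitive":
        return tool_id in SENSITIVE_APPROVED
    return category in BUCKET_ALLOWS.get(bucket, set())
```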

Section 4.5: Access control basics: who can use which tools

Governance is not only about what data can go into AI. It is also about who can use which AI tools, under what account setup, and with what oversight. Keep this section simple: define a default permission model and a small number of roles responsible for approvals and reviews.

Start with an expectation that employees use company-managed accounts for any approved AI tool. This supports consistent security settings, prompt logging where appropriate, and offboarding. Prohibit using personal emails for work AI tasks. Then set baseline security expectations: strong unique passwords (or password manager), multi-factor authentication where available, and single sign-on for enterprise tools. State that sharing AI accounts is not allowed; shared accounts destroy auditability and increase leakage risk.

Next, map tools to data categories. Example policy: “Public tools: public data only. Approved enterprise tools: public + internal; sensitive only with feature controls enabled. Internal systems: may process sensitive data for approved workflows.” This gives managers a practical way to say yes while staying within guardrails.

Define who approves what. A workable pattern is: team leads approve public/internal use cases in approved tools; the security or privacy owner approves any sensitive-data workflow; and IT/admin approves new tools and integrations. Make the review cadence realistic (e.g., quarterly tool list review) and assign a named owner for the “approved tools” page so employees can find the answer quickly.

Common mistakes include granting everyone access to everything “for innovation,” then discovering later that sensitive workflows were happening in uncontrolled spaces. A tighter access model reduces incidents and makes it easier to expand access safely over time.
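The approval pattern above (team leads for routine use, security/privacy owner for sensitive workflows, IT for new tools) can be sketched as a routing function. The role names and request fields are placeholders for whatever owners your organization designates.

```python
# Sketch of the approval-routing pattern described above.
# Role names and request fields are placeholders.
def approver_for(request):
    """request: dict with 'category' and 'new_tool' keys."""
    if request.get("new_tool"):
        return "IT admin"  # new tools and integrations
    if request["category"] == "Sensitive":
        return "security/privacy owner"  # any sensitive-data workflow
    return "team lead"  # public/internal use in approved tools
```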

Section 4.6: A practical “before you use AI” checklist

Finally, give employees a short checklist they can run in under a minute. Checklists work because they fit real workflows: the moment before pasting text, uploading a file, or connecting a data source. Keep it short enough that people actually use it, and specific enough that it changes behavior.

  • 1) What data category is this? Public, Internal, or Sensitive. If you’re unsure, treat it as Sensitive and ask.
  • 2) Does the AI tool match the data? Public tools get Public data only. Internal/Sensitive requires an approved, company-managed tool.
  • 3) Did you minimize the data? Remove names, IDs, account numbers, reference numbers, and anything not required for the task.
  • 4) Are you about to paste secrets? Never include passwords, API keys, tokens, private URLs, or security details.
  • 5) Are uploads/connectors involved? If you’re uploading a document or connecting a drive, confirm that feature is approved for this data category.
  • 6) Are your account settings correct? Use your company account, MFA on, no shared logins.
  • 7) Can you explain the purpose? If the AI doesn’t need a detail to complete the task, don’t include it.

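The checklist above can be sketched as a gate function that returns whatever failed. The task field names (category, approved_tool, contains_secrets, uses_connector, connector_approved, company_account) are assumptions for illustration; an empty result means go ahead.

```python
# Sketch of the "before you use AI" checklist as a gate function.
# Field names are illustrative assumptions.
def pre_use_check(task):
    problems = []
    if task.get("category") not in ("Public", "Internal", "Sensitive"):
        problems.append("Unknown category: treat as Sensitive and ask")
    if task.get("category") != "Public" and not task.get("approved_tool"):
        problems.append("Internal/Sensitive data needs an approved tool")
    if task.get("contains_secrets"):
        problems.append("Never include passwords, keys, or tokens")
    if task.get("uses_connector") and not task.get("connector_approved"):
        problems.append("Connector/upload not approved for this category")
    if not task.get("company_account"):
        problems.append("Use your company account with MFA")
    return problems  # empty list means go ahead
```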
Teach employees how to act on a “no.” The checklist should not be a dead end. Provide a fallback: use a template prompt with synthetic examples, use a redacted excerpt, switch to an approved enterprise tool, or request approval for the workflow. This turns governance into a productivity enabler rather than a blocker.

When these rules and the checklist are in place, you get a practical outcome: employees know what to do without guessing, managers can approve workflows consistently, and security/privacy teams see fewer preventable incidents. That is the core of effective AI governance—clear, actionable rules that match how work actually happens.

Chapter milestones
  • Choose simple data categories (public, internal, sensitive)
  • Write do/don’t rules for entering data into AI tools
  • Set basic security expectations for accounts and access
  • Create a short checklist employees can follow
Chapter quiz

1. What problem is Chapter 4 primarily trying to prevent when employees use AI tools at work?

Correct answer: Ordinary mistakes like pasting the wrong text, uploading files without thinking, or assuming a tool is private
The chapter emphasizes that most workplace AI incidents come from everyday errors, so rules should prevent those.

2. Why does the chapter recommend using simple data categories like public, internal, and sensitive?

Correct answer: To reduce confusion and variability so employees can make consistent decisions quickly
The goal is clarity and consistency, not perfect legal precision.

3. Which rule style best matches the chapter’s guidance for making AI data policies usable under deadline pressure?

Correct answer: Concrete do/don’t rules (e.g., never paste customer lists, contracts, or credentials)
Specific examples reduce guesswork, while vague rules force employees to rely on assumptions.

4. According to the chapter, what should good AI governance rules account for when judging risk?

Correct answer: The data involved, the tool being used, and who can access the outputs
The chapter notes risk depends on the data, the tool, and access to outputs.

5. What is the most practical reason the chapter recommends a short employee checklist?

Correct answer: It helps people follow the rules within normal work routines, even when they are busy and not thinking about security
The checklist is meant to fit real life: smart, busy employees who may not be focused on security in the moment.

Chapter 5: Rules for Quality, Bias, and Responsible Outputs

Most workplaces adopt AI because it makes work faster: drafting emails, summarizing documents, generating code, preparing slide outlines, or answering customer questions. The risk is that “faster” can quietly become “sloppier” unless you set clear rules for quality and responsible outputs. This chapter focuses on guardrails you can write in plain language: accuracy expectations, how to handle uncertain answers, simple bias checks, and when to disclose AI involvement.

Good governance does not require turning every AI-assisted task into a formal review process. Instead, you want consistent habits that scale. The practical goal is to reduce three common failure modes: (1) incorrect statements presented confidently, (2) biased or unfair content that harms people or creates legal exposure, and (3) unclear ownership—nobody knows whether an output was AI-generated, who verified it, or what it was based on.

When you write rules for quality and responsible outputs, use “if/then” triggers and define a minimum verification standard. For example: if the output contains numbers, quotes, policy claims, medical guidance, legal interpretations, or hiring recommendations, then it requires a human check against authoritative sources. Also define what “authoritative” means in your context: official internal systems, signed contracts, published policies, regulated guidance, or peer-reviewed sources.

This chapter gives you a simple, repeatable workflow: treat AI outputs as drafts; verify facts and sources; check for bias and sensitive-domain risks; label AI assistance where required; and leave lightweight documentation that proves you did the right thing without burying teams in bureaucracy.

Practice note: for each milestone in this chapter (setting accuracy and citation expectations for AI-assisted work, defining how to handle errors and uncertain answers, adding simple bias and fairness checks, and creating output labeling rules), document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 5.1: AI can be wrong: hallucinations and overconfidence

AI tools can produce fluent text that is factually wrong. In governance language, this is often called “hallucination,” but the workplace impact is simpler: the tool may invent details, misread context, or stitch together plausible-sounding claims that were never true. The most dangerous version is overconfidence—an answer delivered with strong certainty even when the model is guessing.

Your rules should assume AI output is a draft, not an authority. Write policy statements that set expectations such as: “AI-generated content must be treated as unverified until checked,” and “Users are accountable for the final output, even if AI drafted it.” This clarifies responsibility and prevents the “the tool said so” defense.

Define common triggers for extra caution. AI is more likely to be wrong when asked for: exact numbers, dates, legal clauses, citations, current events, or details outside the provided context. A practical rule is to require humans to validate any factual claim that could change a decision, spend, customer promise, or compliance outcome.

  • Common mistake: Copying a confident AI answer into a report without verifying the primary source.
  • Engineering judgment: Decide which outputs must be “correct-by-construction” (pulled from systems of record) versus “draft-by-default” (language suggestions that a human refines).
  • Practical outcome: Fewer retractions, fewer customer corrections, and clearer accountability.

Also set a rule for uncertainty: if the AI cannot reliably know (missing context, conflicting sources, or non-deterministic interpretation), the output must say so. A good standard is: “When uncertain, the AI-assisted draft must include assumptions, open questions, and what would confirm the answer.”
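The "extra caution" triggers above can be sketched as a scan that flags claim types requiring human verification before a draft ships. The patterns are rough illustrations, not a reliable detector; a flagged draft simply routes to a human check.

```python
import re

# Sketch of the "extra caution" triggers: scan a draft for claim
# types that require human verification. Patterns are rough
# illustrations, not a reliable detector.
TRIGGERS = {
    "number": r"\d",                              # figures, dates
    "quote": r"[\"“].{3,}[\"”]",                  # quoted wording
    "citation": r"\((?:19|20)\d{2}\)|et al\.",    # reference-like text
}

def verification_flags(draft):
    """Return which trigger types appear in the draft."""
    return [name for name, pat in TRIGGERS.items() if re.search(pat, draft)]
```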

Section 5.2: Verification habits: cross-checking and sources

Verification is not a one-time “fact check”; it is a habit built into your workflow. The goal is to create a minimum standard that is easy to follow. Start by requiring that important outputs include either (a) links to sources, (b) citations to internal documents, or (c) a note that no authoritative source was available and the content is a best-effort draft.

For workplace policy writing, a useful rule is: “If an AI-assisted output contains factual claims, it must include verifiable sources or be rewritten to remove unverifiable claims.” This encourages staff to either back statements up or reframe them as suggestions, hypotheses, or questions.

Teach a simple cross-check routine people can do in minutes:

  • Triangulate: Confirm the claim using at least two independent sources (e.g., internal policy + official regulator guidance).
  • Quote check: If the AI provides a quote, find the original document and verify the exact wording and context.
  • Number check: Recalculate or pull numbers from the system of record; do not trust generated figures.
  • Scope check: Confirm geography, date, product line, or customer segment—many errors are “right thing, wrong scope.”

Define what counts as an acceptable source. For example, internal: HR handbook, approved SOPs, finance system reports, legal-approved templates. External: official government sites, standards bodies, vendor documentation. Discourage “citation laundering” where an AI invents references; your rule can state: “Users must open and review cited sources; citations that cannot be opened and verified must be removed.”

Finally, set expectations for citation format. You do not need academic rigor, but you do need traceability. A lightweight approach is “source + date + link or document ID.” That alone makes audits and peer review dramatically easier.
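The "source + date + link or document ID" standard can be sketched as a tiny record type, which makes the format consistent enough to grep through later. Field names are assumptions; the example document ID is hypothetical.

```python
from dataclasses import dataclass

# Sketch of the lightweight "source + date + link or document ID"
# traceability standard. Field names are assumptions.
@dataclass
class SourceNote:
    source: str
    date: str
    link_or_doc_id: str

    def render(self):
        return f"{self.source} ({self.date}), {self.link_or_doc_id}"
```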

Section 5.3: Bias basics: how unfair outcomes can appear

Bias in AI-assisted work is not only about the model’s training data; it also comes from your prompts, your data inputs, and how humans interpret outputs. Unfair outcomes can appear as stereotypes in text, uneven tone across groups, or “default assumptions” that disadvantage certain people. Bias also shows up in omissions: whose perspective is missing, which risks are downplayed, or what success looks like.

Your governance rules should define a simple fairness check that fits daily work. You can require that AI-assisted content affecting people (customers, candidates, employees, patients) must be reviewed for: (1) inappropriate references to protected characteristics, (2) unequal standards, and (3) unsupported generalizations.

  • Language scan: Look for proxies (e.g., “culture fit,” “energetic,” “native speaker”) that can encode discrimination.
  • Consistency scan: Apply the same criteria across individuals and groups; avoid shifting standards.
  • Evidence scan: Ask, “What observable evidence supports this claim?” If none, remove or reframe.
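The language scan can be sketched as a simple phrase check. The proxy list below is illustrative and deliberately short; a real policy would maintain and review its own list.

```python
# Sketch of the language scan: flag proxy phrases that can encode
# bias. The phrase list is illustrative, not exhaustive.
PROXY_PHRASES = ["culture fit", "energetic", "native speaker",
                 "digital native", "recent graduate"]

def bias_flags(text):
    """Return proxy phrases found in the text, for human review."""
    lowered = text.lower()
    return [p for p in PROXY_PHRASES if p in lowered]
```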

Bias checks work best when they are concrete. For example, for performance feedback drafted with AI, require managers to verify that critiques reference measurable behavior, not personality traits. For customer communications, require that instructions and disclaimers are accessible and not targeted in ways that exclude users.

Also address “automation bias,” where humans over-trust AI recommendations. A practical policy statement is: “AI suggestions must not be the sole basis for decisions impacting employment, compensation, access to services, credit, or eligibility.” Even if you are not running a formal model, AI-assisted summaries and rankings can effectively function as decision systems.

Section 5.4: Sensitive domains: hiring, finance, health, legal

Some domains are sensitive because errors and bias have high consequences and may trigger regulatory obligations. Your rules should identify these domains explicitly—hiring/HR, finance, health, and legal—and set stricter guardrails. The purpose is not to ban AI; it is to require a higher verification bar, clearer approvals, and stronger documentation.

Hiring/HR: AI can help draft job descriptions or summarize interview notes, but it must not decide who advances. Require that interview summaries be checked against the original notes or recordings, and prohibit generating “candidate scores” unless a formally approved process exists. Add a rule: “Do not include protected characteristics or inferred traits (age, health status, religion) in AI prompts or outputs.”

Finance: AI can draft variance explanations or customer invoices, but numbers must come from systems of record. Require sign-off for external-facing financial statements and forbid AI-generated investment or credit advice unless reviewed by qualified personnel. A simple trigger: any output with pricing, tax, revenue, or forecast figures requires manual reconciliation.

Health: In workplace settings this might include benefits guidance, wellness programs, or occupational health. Require that AI outputs avoid diagnosis or treatment instructions and instead point to approved resources. A practical rule: “AI may provide general information but must not provide individualized medical advice; route to clinicians or approved materials.”

Legal: AI can summarize contracts, but it can misstate obligations. Require that any legal interpretation be reviewed by legal counsel and that templates come from approved libraries. For customer promises, require legal-approved language and prohibit “made-up” policy citations.

Across all sensitive domains, define escalation: when a user is unsure, they must stop and consult the domain owner. This turns uncertainty into a safety mechanism rather than a hidden defect.

Section 5.5: Transparency: labeling AI-assisted content

Transparency protects your organization in two ways: it prevents accidental misrepresentation (claiming a human wrote or verified something that was not), and it helps downstream reviewers apply the right level of scrutiny. Labeling does not have to be heavy-handed, but it should be consistent.

Start by defining when disclosure is required. Common triggers include: external communications, customer support answers, published marketing content, policy documents, training materials, and any content that could be relied on for decisions. Internal brainstorming notes may not need disclosure, but final deliverables often should.

  • Internal label example: “Drafted with AI assistance; human reviewed for accuracy and policy alignment.”
  • External label example: “This content was prepared with AI assistance and reviewed by our team.”
  • High-risk label: “AI-assisted summary; verify against the source document before acting.”

Make labeling rules practical by linking them to channels. For instance: emails to customers require a disclosure line if AI wrote more than minor edits; support knowledge base articles require an editor sign-off and an “AI-assisted” tag in the CMS metadata; reports for executives require a methods note describing what was AI-generated and what was validated.
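The channel-linked labeling rules can be sketched as a lookup from channel to required disclosure text. The channel names and wording below are assumptions to adapt locally; a missing entry means no disclosure is required.

```python
# Sketch mapping channels to the disclosure labels discussed above.
# Channel names and wording are assumptions to adapt locally.
CHANNEL_LABELS = {
    "customer_email": ("This content was prepared with AI assistance "
                       "and reviewed by our team."),
    "kb_article": "AI-assisted; editor sign-off required before publish.",
    "exec_report": ("Methods note required: state what was AI-generated "
                    "and what was validated."),
}

def required_label(channel):
    # Returns None for channels with no disclosure requirement,
    # e.g. internal brainstorming notes.
    return CHANNEL_LABELS.get(channel)
```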

Also address a subtle transparency risk: AI can paraphrase copyrighted or confidential content in ways that blur ownership. Your policy can require that users confirm rights to reuse text and avoid pasting third-party content into public AI tools. Transparency includes being honest about provenance—where content came from, and whether it is permitted to be reused.

Section 5.6: Documentation: keeping notes without heavy process

Documentation is how you prove responsible use without turning daily work into paperwork. The goal is “just enough” traceability: what tool was used, what inputs mattered, what checks were performed, and who approved the final output. This is especially important when an error occurs—good notes help you correct quickly and prevent repeats.

A lightweight standard is to require a short “AI use note” for medium- and high-impact outputs. This can live in a comment, ticket, or document footer. Keep it consistent so it becomes muscle memory.

  • Tool + version: Which AI system was used (and if possible, which model/version).
  • Purpose: Drafting, summarization, translation, classification, brainstorming.
  • Inputs: High-level description (avoid copying sensitive data into the note).
  • Verification: What sources were checked, and what was reconciled.
  • Edits: Major changes the human made (e.g., removed claims without sources).
  • Reviewer/approver: Name/role for sensitive or external outputs.

Include an “error handling” rule: when an AI-assisted output is found to be wrong or biased, teams must (1) correct the artifact, (2) notify downstream users if they might rely on it, and (3) record a brief note describing the cause (missing source, ambiguous prompt, outdated policy). This creates organizational learning without blame.

Finally, document your thresholds. Not every Slack message needs an audit trail. Define tiers: low-risk (no documentation), medium-risk (short AI use note), high-risk (note + approval + saved sources). That tiering is governance that people will actually follow.
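The three documentation tiers can be sketched as a requirements table plus a gap check, which is the kind of lightweight rule a ticket template or review bot could enforce. The requirement keys are assumptions mirroring the text.

```python
# Sketch of the documentation tiers: low-risk needs nothing,
# medium needs a short AI use note, high needs note + approval +
# saved sources. Requirement keys are assumptions.
TIER_REQUIREMENTS = {
    "low": [],
    "medium": ["ai_use_note"],
    "high": ["ai_use_note", "approval", "saved_sources"],
}

def missing_documentation(tier, provided):
    """Return which required artifacts are still missing."""
    return [r for r in TIER_REQUIREMENTS[tier] if r not in provided]
```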

Chapter milestones
  • Set accuracy and citation expectations for AI-assisted work
  • Define how to handle errors and uncertain answers
  • Add simple bias and fairness checks
  • Create output labeling rules (when to disclose AI use)
Chapter quiz

1. Why does Chapter 5 argue workplaces need explicit quality and responsibility rules when adopting AI?

Correct answer: Because speed from AI can quietly turn into sloppier work without guardrails
The chapter warns that “faster” can become “sloppier” unless clear quality and responsibility rules are set.

2. Which set best matches the three common failure modes this chapter aims to reduce?

Correct answer: Confident incorrect statements; biased/unfair content; unclear ownership of AI-generated work
The chapter highlights incorrect confident claims, bias/fairness harms, and unclear ownership/verification as key risks.

3. What is the recommended way to write quality rules so they are practical and scalable?

Correct answer: Use “if/then” triggers and define a minimum verification standard
The chapter recommends simple, repeatable rules using if/then triggers and a minimum verification standard.

4. According to the chapter’s example, which output should trigger a human check against authoritative sources?

Correct answer: A draft that includes numbers and policy claims
The chapter lists numbers, quotes, policy claims, medical guidance, legal interpretations, or hiring recommendations as triggers for human verification.

5. Which workflow best reflects the chapter’s suggested repeatable process for responsible AI outputs?

Correct answer: Treat outputs as drafts; verify facts/sources; check bias and sensitive-domain risks; label AI use where required; leave lightweight documentation
The chapter proposes a lightweight, consistent workflow that includes verification, bias checks, required labeling, and minimal documentation.

Chapter 6: Make It Real: Roles, Approvals, Incidents, and Rollout

Policies fail most often not because the words are wrong, but because the workplace cannot operate them. People do not know who is allowed to approve an AI use, how to ask for permission, what to do when something goes wrong, or how the rules change over time. This chapter turns your AI governance from “a document” into a working system: simple roles, lightweight approvals, a controlled way to handle exceptions, and an incident plan that reduces harm quickly.

Good governance balances speed and safety. If you make approvals too heavy, teams will route around the policy and use tools in shadow IT. If you make it too loose, sensitive data leaks, inaccurate outputs get published, and the organization loses trust. The goal is a practical operating model: clear accountability, predictable decisions, and an improvement loop that keeps up with tools and risks.

We will use four building blocks that fit most organizations: (1) assign simple roles (owner, approver, user, reviewer), (2) build a lightweight approval and exception process, (3) create an incident response plan for AI mishaps, and (4) publish, train, and improve your governance over time. The rest is execution discipline.

Practice note (applies to each of the four building blocks above — roles, approvals, incident response, and ongoing publication and training): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 6.1: RACI made simple: who does what

Start by making responsibilities explicit. A simple RACI (Responsible, Accountable, Consulted, Informed) avoids the common failure mode where “everyone owns it,” which means no one does. For workplace AI, keep the role set small and repeatable across departments.

  • Owner (Accountable): accountable for the AI use case or tool integration. Owns the documented purpose, data flows, and controls. Decides what “good” means and ensures monitoring happens.
  • Approver (Accountable/Consulted): grants permission to use an AI tool for a defined scope. In practice this is often a functional leader plus a risk partner (Security/Privacy/Legal) depending on the data type and impact.
  • User (Responsible): uses the AI tool according to the rules. Ensures prompts/inputs follow data handling requirements and that outputs are reviewed before use.
  • Reviewer (Responsible): performs the required human review (accuracy, bias, tone, compliance). Reviewer can be a peer, manager, QA function, or domain expert depending on risk.

Engineering judgment matters in matching roles to risk. Low-risk uses (brainstorming internal copy with no sensitive data) can assign the “Reviewer” as the user’s manager on a sampling basis. Higher-risk uses (customer communications, HR, finance, safety-critical decisions) require a named reviewer, documented review steps, and sometimes second-line review.

Common mistakes: assigning approvals to a committee that meets monthly, making the approver also the user (no independence), or forgetting accountability for monitoring after launch. A practical outcome is a one-page “role map” per use case: names, backups, and what evidence each role must produce (e.g., approval ticket, review checklist, incident log).

Section 6.2: Approvals: when you need them and how to request

Approvals work when they are predictable and fast. Define “approval triggers” so employees do not guess. A good default: require approval when any of the following are true: sensitive data is entered, outputs go to customers or the public, decisions affect people (employment, credit, pricing, eligibility), the system integrates with internal data sources, or the tool/vendor has not been vetted.
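
Those default triggers reduce to a single any-of check, which is what makes them answerable in under a minute. A sketch of that check; the flag names are ours, chosen to mirror the list above:

```python
def needs_approval(*, sensitive_data: bool, external_output: bool,
                   affects_people: bool, internal_integration: bool,
                   vendor_vetted: bool) -> bool:
    """Default approval triggers: any single trigger requires approval."""
    return (sensitive_data            # sensitive data is entered
            or external_output        # outputs go to customers/public
            or affects_people         # employment, credit, pricing, eligibility
            or internal_integration   # integrates with internal data sources
            or not vendor_vetted)     # tool/vendor has not been vetted

# Brainstorming internal copy with a vetted tool: no approval needed.
print(needs_approval(sensitive_data=False, external_output=False,
                     affects_people=False, internal_integration=False,
                     vendor_vetted=True))
```

Keyword-only arguments force callers to name each trigger explicitly, which doubles as a self-check when someone asks "do I need approval?"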

Create a lightweight request workflow that fits existing tools (ticketing system, procurement intake, or a simple form). The request should capture only what the approver needs to make a decision, not a dissertation.

  • Use case description: what problem, who uses it, who is impacted.
  • Tool and deployment: vendor/tool name, version, where it runs (browser, API, on-prem), and whether it stores data.
  • Data classification: public/internal/confidential/regulated; examples of what will be entered.
  • Output handling: where outputs go, retention, and required human review steps.
  • Risk notes: known failure modes (hallucinations, bias, prompt injection), mitigations, and fallback plan.
  • Owner/Reviewer: named roles, with a proposed monitoring plan (even if simple).

Set service-level targets so approvals do not become bottlenecks. For example: low-risk approvals within 2 business days; medium-risk within 10; high-risk requires a review meeting. Also define approval outcomes: approved, approved-with-conditions (e.g., “no customer data,” “use only approved tenant,” “mandatory disclaimer”), or rejected with a clear reason and a path forward.

Practical outcome: employees can answer “Do I need approval?” in under a minute, and if yes, submit a request in under 15 minutes. That is how you prevent shadow usage.

Section 6.3: Exceptions: handling edge cases without chaos

No policy covers every situation. Exceptions are inevitable; unmanaged exceptions become loopholes. Treat exceptions as a controlled process with a tight definition: a temporary, documented deviation from a rule, approved by an accountable person, with compensating controls and an end date.

First, clarify what is not an exception. “I’m busy,” “the tool is convenient,” or “everyone is doing it” are not valid reasons. Valid reasons include urgent business continuity needs, regulatory deadlines, or technical constraints that prevent immediate compliance.

  • Exception request: cite the specific rule being bypassed, the justification, scope (teams/systems), and duration.
  • Compensating controls: added steps that reduce risk (e.g., stronger human review, redaction, using synthetic data, disabling storage, limiting access).
  • Expiration: a date and a plan to return to standard compliance (migration, vendor change, process update).
  • Recordkeeping: log exceptions in a shared register so repeats are visible.
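
An entry in that shared register needs little more than the four items above plus an expiry check. A sketch with illustrative field names (nothing here is a mandated schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExceptionRecord:
    """One entry in a shared exception register."""
    rule: str                      # the specific rule being bypassed
    justification: str             # why the deviation is needed
    scope: str                     # teams/systems covered
    compensating_controls: list    # added steps that reduce risk
    expires: date                  # every exception needs an end date

    def is_expired(self, today: date) -> bool:
        return today > self.expires

exc = ExceptionRecord(
    rule="approved-tools-only",
    justification="regulatory deadline; vetted tool unavailable until Q4",
    scope="finance reporting team",
    compensating_controls=["no sensitive inputs", "manual fact-check",
                           "internal drafts only"],
    expires=date(2025, 9, 30),
)
print(exc.is_expired(date(2025, 10, 1)))
```

Making `expires` a required field encodes the rule that indefinite exceptions are not allowed: a record simply cannot be created without an end date.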

Engineering judgment is required in choosing compensating controls. If a team must use an unvetted model for a short period, you might prohibit sensitive inputs, require manual fact-checking, and restrict outputs to internal drafts only. If the exception involves regulated data, the right answer is often “no” until a compliant pathway exists.

Common mistakes: granting indefinite exceptions, failing to reassess when the tool changes, and allowing exceptions to multiply without learning. Practical outcome: exceptions teach you where the policy is too rigid or unclear, and they create a prioritized backlog for governance improvements.

Section 6.4: Incident basics: reporting, triage, and fixes

Incidents will happen: an employee pastes confidential data into the wrong tool, an AI-generated customer email contains false claims, a model output reflects bias, or an integration exposes data through a prompt-injection attack. The goal is not zero incidents; it is fast detection, containment, and learning.

Define what counts as an AI incident and how to report it. Make reporting simple: a dedicated email alias or ticket category, with “report within 24 hours” guidance. Encourage reporting by focusing on safety rather than blame, while still enforcing consequences for deliberate misuse.

  • Report: who reported, tool/use case, what happened, what data was involved, where outputs went.
  • Triage: severity level (e.g., low/medium/high) based on data sensitivity, external exposure, and human impact.
  • Contain: stop the bleeding (disable access keys, revoke links, pause automation, notify recipients if needed).
  • Investigate: root cause (process gap, training gap, tool misconfiguration, unclear policy, vendor behavior).
  • Fix: corrective actions (prompt/input guardrails, access controls, review steps, updated rules, vendor changes).
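
The triage step can be made predictable with a small severity mapping from the three factors named above (data sensitivity, external exposure, human impact). The cutoffs below are illustrative; each organization should set its own:

```python
def triage_severity(*, data_sensitivity: str, external_exposure: bool,
                    human_impact: bool) -> str:
    """Rough severity mapping for AI incident triage (illustrative cutoffs).

    data_sensitivity: one of "public", "internal", "confidential", "regulated".
    """
    if data_sensitivity == "regulated" or human_impact:
        return "high"    # regulated data or impact on people: escalate
    if data_sensitivity == "confidential" or external_exposure:
        return "medium"  # confidential data or anything externally visible
    return "low"         # internal-only, non-sensitive

# An AI-drafted email with false claims sent externally, internal data only:
print(triage_severity(data_sensitivity="internal",
                      external_exposure=True, human_impact=False))
```

Writing the mapping down in advance is what lets the on-call person triage consistently instead of debating severity during the incident.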

Involve your existing Security/Privacy incident response team. AI incidents are rarely “AI-only”; they intersect with data handling, communications, and operational risk. Decide in advance who can authorize containment actions, who communicates externally, and how evidence is preserved (logs, prompts, outputs).

Common mistakes: treating incidents as one-off embarrassments, failing to notify affected stakeholders, or “fixing” by banning all AI use. Practical outcome: every incident produces a short post-incident note and at least one measurable control improvement (e.g., a new redaction step, a revised approval trigger, or an updated review checklist).

Section 6.5: Training and adoption: making rules usable

Publishing a policy is not adoption. People follow rules they can remember under deadline pressure. Your rollout should translate governance into daily habits: what to do, what not to do, and how to get help.

Design training by audience and task. “All employees” training should be short and concrete: approved tools list, data do’s/don’ts, required human review, and where to request approval. Role-based training should go deeper: owners learn how to document data flows and monitoring; reviewers learn how to fact-check and detect bias; approvers learn how to apply triggers consistently.

  • Job aids: one-page checklists (e.g., “Before you paste: classify the data,” “Before you send: verify claims and sources”).
  • Examples: approved vs. non-approved prompts; compliant vs. non-compliant outputs; what a good review looks like.
  • Tool configuration: make the safe path the easy path (SSO, approved tenants, disabled training on customer data where possible).
  • Internal comms: announce what changed, why it matters, and where to ask questions.

Adoption improves when people see practical outcomes: fewer rework cycles, clearer approvals, and reduced risk anxiety. Common mistakes: overly legalistic language, training that ignores real workflows, and failing to update onboarding for new hires. A practical outcome is a “governance starter kit” that teams can reuse: templates, pre-approved use cases, and a clear escalation path.

Section 6.6: Review cadence and continuous improvement

AI tools and risks change faster than annual policy cycles. Set a review cadence that matches the pace of change without creating constant churn. A workable model: quarterly governance review for metrics and policy tweaks, plus ad-hoc reviews for major tool changes, new regulations, or significant incidents.

Track a small set of governance metrics to guide decisions. Focus on signals that indicate whether the system is working: number of approval requests and average time to decision, top reasons for rejection, exception volume and duration, incident counts by severity, training completion rates, and audit findings (e.g., whether required human review evidence exists).
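
Most of these metrics are trivial to compute from data you already have in a ticketing system. As one example, average time to decision from (submitted, decided) date pairs; the function name and input shape are our own assumptions:

```python
from datetime import date

def avg_days_to_decision(requests) -> float:
    """Average approval turnaround from (submitted, decided) date pairs."""
    if not requests:
        return 0.0
    total = sum((decided - submitted).days for submitted, decided in requests)
    return total / len(requests)

sample = [(date(2025, 3, 3), date(2025, 3, 5)),    # 2 days
          (date(2025, 3, 4), date(2025, 3, 10))]   # 6 days
print(avg_days_to_decision(sample))  # (2 + 6) / 2 = 4.0
```

Comparing this number against the service-level targets from Section 6.2 tells you quickly whether approvals are becoming a bottleneck.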

  • Quarterly: review metrics, update approved tools list, refine approval triggers, and publish clarifications.
  • After incidents: update controls and training; decide whether similar use cases need re-approval.
  • When vendors change: reassess data handling, retention, model behavior, and integration security.

Use engineering judgment to avoid “policy thrash.” Not every new headline requires rewriting the rules; prioritize changes that reduce real risk and improve usability. Also watch for governance debt: outdated approved lists, stale exceptions, and unclear ownership when teams reorganize.

Practical outcome: governance becomes a living system—stable enough that teams trust it, but responsive enough that it keeps you safe. When someone asks, “Can we use this AI for that task?”, your organization can answer quickly, consistently, and with evidence.

Chapter milestones
  • Assign simple roles (owner, approver, user, reviewer)
  • Build a lightweight approval and exception process
  • Create an incident response plan for AI mishaps
  • Publish, train, and improve your AI governance over time
Chapter quiz

1. According to Chapter 6, why do AI governance policies most often fail in workplaces?

Correct answer: Because the workplace cannot operate the policy in practice (roles, approvals, incident steps, updates)
The chapter emphasizes that failure usually comes from operational gaps—people don’t know who approves, how to request permission, what to do in incidents, or how rules evolve.

2. What is the main purpose of the chapter’s approach to turn governance from “a document” into a working system?

Correct answer: To create a practical operating model with clear accountability, predictable decisions, and an improvement loop
Chapter 6 frames governance as an operating model that balances speed and safety while staying adaptable over time.

3. What risk does the chapter warn about if the approval process is too heavy?

Correct answer: Teams will route around the policy and use AI tools in shadow IT
Overly burdensome approvals push teams to bypass official processes, creating shadow IT and reducing real oversight.

4. What does Chapter 6 identify as the goal of balancing speed and safety in governance?

Correct answer: A practical operating model that reduces harm and maintains trust
The chapter stresses that governance should be practical—fast enough to be used and safe enough to prevent leaks, inaccurate publication, and loss of trust.

5. Which set of “four building blocks” does Chapter 6 propose for most organizations?

Correct answer: Assign roles; build a lightweight approval and exception process; create an incident response plan; publish, train, and improve over time
The chapter explicitly lists these four building blocks as a practical foundation for execution discipline.