AI Sales Scripts for Beginners: Discovery & Objection Handling

AI In Marketing & Sales — Beginner

Write, practice, and improve sales conversations with AI—step by step.

Beginner · AI sales · sales scripts · objection handling · discovery questions

Learn sales scripting with AI—without sounding fake

This beginner course is a short, book-style guide to creating AI-assisted sales scripts you can actually use. You will learn how to ask better discovery questions, respond to objections with confidence, and practice real conversations using an AI chatbot as your training partner. No coding, no data science, and no complicated tools—just clear steps and repeatable templates.

Many new sellers struggle with two moments: (1) figuring out what to ask so they understand the buyer, and (2) knowing what to say when the buyer pushes back. AI can help, but only if you give it the right inputs and apply a simple quality check. This course teaches you both: how to prompt AI for useful scripts and how to edit those scripts so they sound human, honest, and on-brand.

What you’ll build as you go

By the end, you will have a complete “sales script kit” you can reuse across calls, email, and DMs. Each chapter adds one piece, so you never feel lost or overwhelmed.

  • A one-page product and customer brief that guides AI outputs
  • A discovery question bank (with follow-ups) for common situations
  • A practical objection-handling library for price, timing, trust, and competitors
  • Channel-specific scripts: call openers, follow-up emails, DMs, and voicemails
  • A practice routine using AI role-play, plus a simple way to score and improve your scripts

How the course is taught (beginner-friendly)

This course avoids jargon and explains everything from first principles. You’ll learn a simple conversation structure, then use AI to draft scripts, and then refine them with a “human check” so you don’t rely on AI blindly. You’ll also learn how to protect privacy and avoid making claims you can’t back up.

Each chapter includes clear milestones and small practice steps, like rewriting awkward lines, generating follow-up questions, and running short objection drills. The goal is not to memorize perfect lines. The goal is to build skill: staying calm, asking useful questions, and guiding the buyer to a next step.

Who this is for

If you are new to sales, new to AI, or simply tired of scripts that feel pushy or robotic, this course is for you. It works for individuals improving their career skills, small businesses building a basic sales process, and teams in larger organizations that want consistent messaging.

Get started

You can begin right away and build your script kit chapter by chapter. Register free to save progress, or browse all courses to find related training in AI for marketing and sales.

When you finish, you won’t just have “AI-generated scripts.” You’ll have a repeatable method to create, practice, and improve sales conversations that feel natural—and help you move deals forward.

What You Will Learn

  • Explain what an AI chatbot can and cannot do in sales conversations
  • Create a simple product-and-customer brief that makes AI outputs accurate
  • Write beginner-friendly discovery questions for calls, emails, and DMs
  • Generate role-play practice scripts and follow-up questions using AI
  • Handle common objections (price, timing, trust, competitors) with calm replies
  • Turn a messy conversation into clean call notes, next steps, and a follow-up message
  • Build a reusable prompt library for your sales process and tone
  • Check AI-written scripts for accuracy, honesty, and compliance basics

Requirements

  • No prior AI or coding experience required
  • Basic ability to read and write in English
  • Any computer or mobile device with internet access
  • Willingness to practice short scripts out loud

Chapter 1: Sales Conversations and AI—The Basics

  • Define your goal: discovery, demo, or follow-up
  • Meet the AI tool: what it does well vs. poorly
  • Set your voice: friendly, direct, or consultative
  • Create your first simple script with AI
  • Practice: read it out loud and spot awkward lines

Chapter 2: Give AI the Right Inputs (So Outputs Make Sense)

  • Write a one-page product snapshot (who, what, why)
  • Describe your ideal customer in simple terms
  • List outcomes, proof, and boundaries (what you can’t promise)
  • Build a reusable prompt that includes context + tone
  • Test and improve: compare two prompt versions

Chapter 3: Discovery Questions That Don’t Sound Robotic

  • Create a discovery question bank for your offer
  • Generate follow-up questions that dig deeper (without pressure)
  • Practice active listening lines and summaries
  • Turn answers into a tailored next step
  • Mini role-play: 10-minute discovery call simulation

Chapter 4: Objection Handling—Stay Calm and Be Helpful

  • Learn the 4-step objection handling structure
  • Generate objection replies for your top 10 objections
  • Role-play: short objection drills with AI
  • Rewrite replies to match your voice and ethics
  • Build a “safe alternatives” script when it’s not a fit

Chapter 5: Scripts by Channel—Call, Email, and DM

  • Create a call opener + agenda that earns permission
  • Write 3 follow-up emails (no reply, recap, next step)
  • Draft 5 DM messages that don’t feel spammy
  • Create a short voicemail script and subject lines
  • A/B test: generate two versions and choose based on clarity

Chapter 6: Practice, Improve, and Use Scripts Responsibly

  • Create a weekly practice routine with AI role-play
  • Turn calls into clean notes, next steps, and follow-ups
  • Build your personal script library and quick prompts
  • Set quality checks: accuracy, honesty, and privacy basics
  • Final project: your complete beginner sales script kit

Sofia Chen

Sales Enablement Specialist (AI Conversation Design)

Sofia Chen designs sales playbooks and training for small teams and growing companies. She focuses on using AI to create natural-sounding discovery questions, objection responses, and call practice that beginners can apply immediately.

Chapter 1: Sales Conversations and AI—The Basics

Sales conversations feel “natural” when they’re well prepared. That doesn’t mean they’re improvised; it means the rep has a clear goal, a simple structure, and language that fits their voice. In this course, you’ll use AI to speed up preparation—discovery questions, role-play practice, objection replies, and follow-ups—without losing the human judgment that makes a conversation credible.

The first skill is defining your goal before you generate anything: are you doing discovery (learning), a demo (showing), or a follow-up (moving the deal forward)? The second skill is giving the AI a clean, simple brief so its output matches your product, your customer, and your brand voice. The third skill is learning to “human check” the draft so you don’t ship awkward lines, risky claims, or tone that doesn’t fit the moment.

This chapter builds your foundation: what a script is, how a basic conversation flows, what AI can and cannot do, where it helps most, the mistakes beginners make, and the final read-out-loud check that turns a draft into something you can confidently use on calls, emails, and DMs.

Practice note for each milestone in this chapter (defining your goal, meeting the AI tool, setting your voice, creating your first script, and reading it out loud): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 1.1: What a sales script is (and what it is not)

A sales script is a prepared set of openers, questions, and transitions that helps you run a consistent conversation. It’s not a word-for-word performance you recite. Think of it as guardrails: it keeps you from forgetting key questions, losing the thread, or jumping into the pitch too early. A good script sounds like you on a good day—clear, calm, and curious—while still leaving room to respond naturally.

For beginners, scripts solve two common problems: (1) freezing and filling silence with a rushed pitch, and (2) “random walk” conversations where you learn a few facts but can’t summarize pain points, impact, and next steps. A simple script gives you repeatability: you can run five discovery calls and compare notes because the core questions stayed consistent.

A script is not a magic persuasion document. It can’t replace product knowledge, pricing clarity, or genuine listening. It also isn’t a list of clever lines designed to “handle” customers. If a customer raises a concern, your job is to understand it, confirm it, and respond with relevant information—not to trap them in wordplay.

  • Use scripts for: opening, agenda setting, discovery questions, transitions, confirming understanding, next steps, follow-up messages.
  • Don’t use scripts for: pretending you know details you haven’t learned, making guarantees, or pushing past a clear “not a fit.”

When you introduce AI, the role of a script stays the same. AI helps you draft options faster, but you still decide what belongs in your script based on your goal (discovery vs. demo vs. follow-up) and your customer’s context.

Section 1.2: The flow of a simple sales conversation

Most beginner sales conversations improve immediately when you follow a basic flow. The goal isn’t to sound “salesy”—it’s to make the conversation easy to follow for the other person. You will also use this flow as the template you ask AI to generate.

Start by defining the goal for this touchpoint. Discovery is about understanding needs and fit. A demo is about showing specific value tied to what you learned. A follow-up is about removing friction and agreeing on next steps. If you skip this step, AI will often mix the three and you’ll get a messy script that feels pushy or confusing.

  • 1) Opening: greet, confirm time, and establish purpose in one or two sentences.
  • 2) Agenda: “I’ll ask a few questions, share what I’m hearing, and if it makes sense we’ll discuss next steps.”
  • 3) Discovery: ask about current process, pain, impact, decision criteria, and timing.
  • 4) Confirm: summarize in the customer’s language and ask if you got it right.
  • 5) Value mapping: connect your solution to their priorities (not a full pitch).
  • 6) Next steps: propose a clear action (demo, trial, stakeholder call) with a date/time.

This same flow works in email or DMs, just compressed: a purpose line, 1–2 discovery questions, a small relevant insight, then a clean next step. When you later generate objection handling, you’ll still use the flow—acknowledge, clarify, respond, confirm—rather than “counterpunching” objections.

The practical outcome: once you can name the goal and follow a simple structure, you can use AI to produce targeted drafts instead of generic scripts that talk too much and ask too little.

Section 1.3: What AI is in plain language

In this course, “AI” usually means a language model: software trained on large amounts of text to predict what words should come next. It’s good at producing plausible, well-structured writing—questions, summaries, role-plays, and alternative phrasing. It is not a mind reader, and it does not automatically know your product, your policies, or what happened in your last call unless you provide that information.

Here’s the practical way to think about it: AI is a drafting assistant. It can create a first version quickly, offer variations in tone (friendly, direct, consultative), and help you practice by role-playing a prospect. It can also transform messy input—bullet points, call transcripts, chat logs—into cleaner outputs like call notes, action items, and follow-up emails.

But AI has limitations you must engineer around:

  • It can hallucinate: invent features, pricing, legal claims, or “case studies.”
  • It can overgeneralize: produce scripts that sound professional but say nothing specific.
  • It can miss context: if your buyer is technical, it may oversimplify; if your buyer is sensitive, it may sound too blunt.
  • It can mirror your prompt: vague input creates vague output; biased input can create risky messaging.

Your job is to supply a simple, accurate product-and-customer brief, then choose outputs that match your goal. The best results come from treating AI like a junior teammate: fast, helpful, and in need of review.

Section 1.4: Where AI helps in sales scripting

AI is strongest where salespeople spend time but don’t need to reinvent the wheel: drafting, rephrasing, and practicing. If you use it with a clear goal and a tight brief, it can give you a usable script in minutes, plus backup options when a conversation goes off track.

Start with a simple product-and-customer brief. Keep it short and factual so the model doesn’t improvise. Include:

  • Product: what it is, top 3 outcomes, top 3 features, pricing range (or “starts at”), and key limitations.
  • Customer: target role/title, industry, common pain points, and what “success” looks like.
  • Proof: 1–2 true examples (metrics only if verified), plus compliance constraints (what you cannot claim).
  • Goal + channel: discovery call vs. email vs. DM follow-up, plus length constraints.
  • Voice: friendly, direct, or consultative (define what that means in your words).

Then ask AI for specific outputs: beginner-friendly discovery questions, short call openers, transitions, and follow-up questions that dig deeper without turning the call into an interrogation. You can also generate role-play scripts: “Act as a skeptical operations manager. Give short answers. Raise one objection about timing.” This is a fast way to practice handling price, timing, trust, or competitor comparisons while staying calm.

Finally, AI can help after the conversation: paste rough notes and request clean call notes, next steps, and a follow-up message that matches the agreed plan. The practical outcome is speed with consistency—while you keep control over accuracy and tone.

Section 1.5: Common beginner mistakes with AI-written scripts

Most problems with AI scripts come from unclear goals, missing context, or treating the draft as “done.” Beginners often copy-paste a script and wonder why prospects disengage. The fix is straightforward: tighten the prompt, shorten the output, and run a human check.

  • Mistake: mixing goals. A “discovery script” that starts pitching features and pushing demos too early feels jarring. Tell AI the goal: discovery, demo, or follow-up.
  • Mistake: vague customer. “Small businesses” is too broad. Specify role, industry, and situation (e.g., “owner-operator HVAC company with 5–15 techs”).
  • Mistake: generic questions. “What are your pain points?” is easy to ignore. Ask grounded questions about workflow, tools, volume, handoffs, and constraints.
  • Mistake: overlong scripts. AI tends to write paragraphs. You need short lines and clear checkpoints.
  • Mistake: unnatural tone shifts. AI can jump from friendly to overly formal. Lock your voice: friendly, direct, or consultative—and define it.
  • Mistake: risky claims. AI may imply guarantees, compliance coverage, or ROI you can’t promise. Remove or qualify anything uncertain.

A useful rule: if a line would feel weird to say to a real person, it will be weird. AI can’t feel that. You can. Treat the draft as raw material, not a final asset.

Section 1.6: The “human check” before you use a script

Before you use any AI-written script with a real prospect, run a short “human check.” This is where engineering judgment shows up: you’re validating truth, tone, and usefulness under real conversation pressure. The goal is not perfection—it’s preventing avoidable mistakes and making the script easy to deliver.

Use this checklist and read the script out loud (yes, out loud). Reading silently hides awkward phrasing and overly long sentences.

  • Accuracy check: Are product features, integrations, pricing, and results accurate and approved?
  • Fit check: Does it match the channel (call vs. email vs. DM) and the stage (discovery vs. follow-up)?
  • Voice check: Does it sound like you—friendly, direct, or consultative—without sudden formality?
  • Question quality check: Do questions invite real answers (specific, contextual) instead of yes/no?
  • Length check: Can you deliver the opener in 15–25 seconds? Can the prospect respond quickly?
  • Friction check: Remove pressure words (“just,” “quick,” “obviously”) and replace them with calm, direct phrasing.

Then do a micro-practice: run the opener and the first three discovery questions. If you stumble, shorten the line or swap words until it flows. Finally, prepare one follow-up question after each core question. That way, when a prospect answers briefly, you’re ready to go deeper without sounding scripted.

Once the script passes the human check, it becomes a tool you can rely on—consistent enough to repeat, flexible enough to adapt, and grounded enough to keep trust.

Chapter milestones
  • Define your goal: discovery, demo, or follow-up
  • Meet the AI tool: what it does well vs. poorly
  • Set your voice: friendly, direct, or consultative
  • Create your first simple script with AI
  • Practice: read it out loud and spot awkward lines
Chapter quiz

1. Why do sales conversations often feel “natural” when done well, according to the chapter?

Correct answer: Because they are well prepared with a clear goal, simple structure, and voice-matched language
The chapter says “natural” comes from preparation: goal, structure, and language that fits the rep’s voice—not improvisation.

2. Before generating anything with AI, what is the first skill you should apply?

Correct answer: Define whether you need discovery, a demo, or a follow-up
The chapter emphasizes defining the goal first: discovery (learning), demo (showing), or follow-up (moving the deal forward).

3. What is the main reason to give the AI a clean, simple brief?

Correct answer: So the output matches your product, customer, and brand voice
A clear brief helps the AI produce language aligned to your product, audience, and voice.

4. What does the chapter recommend as the “human check” step before using an AI-generated script?

Correct answer: Review for awkward lines, risky claims, and tone that doesn’t fit the moment
The human check is meant to catch awkward phrasing, risky claims, and mismatched tone.

5. How does the chapter position AI’s role in sales preparation?

Correct answer: AI speeds up preparation while keeping human judgment to maintain credibility
AI helps generate drafts quickly, but the rep’s judgment is needed to keep the conversation credible.

Chapter 2: Give AI the Right Inputs (So Outputs Make Sense)

AI can draft sales scripts fast, but it cannot guess your product, your market, or your standards. When beginners say “the AI wrote something generic,” it’s usually because the prompt was generic. In sales, small differences in positioning, audience, and promises matter. A script for “HR software” sounds very different depending on whether you sell to a 50-person agency, a 500-person manufacturer, or a global enterprise—and AI needs those specifics.

This chapter is about building the inputs that make outputs usable: a one-page product snapshot (who/what/why), a simple ideal-customer description, clear outcomes and boundaries (what you can’t promise), and a reusable prompt that carries context and tone. You’ll also learn a practical habit: test two prompt versions side-by-side and keep the one that produces fewer “fixes” downstream.

Think of AI like a junior sales assistant. If you hand them a clear brief, they can draft discovery questions, role-play objections, and create follow-up messages quickly. If you hand them vague directions, they will fill gaps with assumptions. Your job is not to “prompt harder.” Your job is to reduce ambiguity.

Practice note for each milestone in this chapter (writing the one-page product snapshot, describing your ideal customer, listing outcomes, proof, and boundaries, building a reusable context-plus-tone prompt, and comparing two prompt versions): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 2.1: The minimum info AI needs to write good scripts

Before you ask AI for discovery questions or objection replies, give it the minimum viable context. This is where most beginners skip steps. A strong script comes from knowing: who the buyer is, what problem you solve, what success looks like, and what constraints exist. If you provide only “write me a cold DM,” AI will default to buzzwords and overpromises.

Create a one-page product snapshot. Keep it plain language, not marketing fluff. Include: (1) who it’s for, (2) what it does, (3) why it’s different, and (4) how it’s bought (price range, contract type, onboarding time). Add your sales situation: inbound lead vs. outbound prospect, warm referral vs. cold list, and whether you’re booking a call or closing in chat.

  • Product: name, category, what it replaces, typical setup time
  • Buyer: title/role, company size, industry, level of urgency
  • Use case: the “day in the life” moment when they feel the pain
  • Offer: what you want next (15-minute call, demo, trial)
  • Constraints: what you cannot do or claim (compliance, guarantees)

Engineering judgment: aim for “specific enough to guide language” without turning the prompt into a novel. If AI consistently asks irrelevant questions, your buyer/use case is too vague. If AI writes scripts that ignore a key restriction (like “guaranteed ROI”), your constraints are missing or buried.

Section 2.2: Writing a beginner-friendly value proposition

A value proposition is the anchor for everything AI generates—questions, summaries, objection handling, and follow-ups. Beginners often write value props that are either too broad (“save time and money”) or too technical (“multi-tenant architecture with advanced analytics”). Your goal is a sentence a busy buyer can understand in one read.

Use a simple pattern: For [ideal customer], [product] helps you [achieve outcome] by [how it works], without [common downside]. Then add one supporting line that clarifies the use case. This becomes the “north star” you paste into prompts so AI stays on-message.

Common mistake: describing features before outcomes. AI will mirror your input. If you give it a feature list, it will produce feature-heavy scripts. If you give it outcomes tied to a situation, it will write buyer-centered discovery.

  • Bad (generic): “We use AI to automate your workflow.”
  • Better: “For small recruiting teams, our ATS reduces time-to-screen by organizing applicants and auto-scheduling interviews, so you don’t lose candidates to faster competitors.”

Practical outcome: once your value proposition is clear, you can ask AI for three versions by channel (call opener, email intro, DM hook) and they’ll stay consistent because the core promise is stable.

Section 2.3: Customer pains vs. customer goals

Discovery works when you separate “pain” from “goal.” Pain is the current friction (what’s broken, costly, risky). Goals are the future state (what they want to achieve). Many beginners only prompt AI with pains (“they struggle with reporting”), which produces negative, pressure-heavy scripts. If you add goals (“they want weekly visibility without manual work”), AI can generate balanced questions that feel helpful, not pushy.

Describe your ideal customer in simple terms: role, context, and what they care about. Then list 3–5 pains and 3–5 goals in their language. Avoid internal jargon. A buyer rarely says “we need a CRM migration.” They say “we can’t find anything and follow-ups slip.”

Use this distinction to produce better discovery questions across channels:

  • Pain questions uncover cost and urgency: “What happens when a lead isn’t followed up within 24 hours?”
  • Goal questions uncover priorities: “If this process worked perfectly, what would your team be spending time on instead?”
  • Constraint questions uncover boundaries: “Are there compliance or security requirements that would block certain tools?”

Engineering judgment: prompt AI to ask questions that are observable and measurable. Replace “Do you have a problem with X?” with “How are you doing X today?” and “What does it cost you when X happens?” This produces cleaner call notes later and makes objection handling easier because you’ve quantified impact.

Section 2.4: Social proof, examples, and safe claims

AI can write persuasive copy, but it will also invent proof if you don’t provide it. This is a critical boundary. You must supply real examples, approved claims, and what you cannot promise. Otherwise you risk credibility damage or compliance issues—especially around ROI, earnings, medical outcomes, or legal guarantees.

Build a proof-and-boundaries list. Proof includes: customer logos you can name, anonymized case snippets, metrics that are verified, and credible “soft proof” (years in business, certifications, reviews). Boundaries include: things you don’t do, results you can’t guarantee, and situations where your product is not a fit.

  • Proof (hard): “Reduced onboarding time from 10 days to 4 (n=12 customers, 2025 survey).”
  • Proof (soft): “Used by teams in fintech and healthcare; SOC 2 Type II certified.”
  • Safe claim: “Many customers see…” + the conditions.
  • Unsafe claim: “You will save 30%” (unless you can guarantee and document it).

Practical workflow: paste your proof list into your reusable prompt and instruct AI: “Use only the proof provided. If proof is missing, write a neutral line like ‘I can share an example from a similar team.’” This prevents hallucinated testimonials and keeps objection replies calm and credible.

Section 2.5: Tone, length, and channel (call, email, DM)

Even with perfect context, scripts fail when tone and channel expectations are wrong. A call opener can be slightly longer and conversational. A DM must be short, scannable, and easy to reply to. Email can carry more structure, but still needs a clear “why you, why now” in the first two lines.

Decide tone settings before you prompt: friendly vs. formal, direct vs. consultative, high-energy vs. calm, and how assertive you want to be. Beginners often ask for “persuasive” and receive aggressive language. If your brand is trust-first, specify: “calm, professional, no hype, no pressure, assume the prospect is busy.”

Also specify length limits. AI is happy to write paragraphs; buyers are not. Use constraints like: “DM under 350 characters,” “email under 120 words,” or “call opener under 20 seconds.”

  • Call: include a permission-based opener and 2–3 discovery questions.
  • Email: include a relevant trigger, one proof point, and one clear CTA.
  • DM: one sentence of relevance + one question that’s easy to answer.

Common mistake: reusing the same script across channels. Instead, reuse the same inputs (snapshot, customer, proof, boundaries) and let AI adapt the format per channel.

Section 2.6: Prompt template: context → task → format

A reusable prompt is your “sales script generator,” but only if it carries the right context and forces a usable output format. Use a three-part template: context → task → format. This keeps AI from drifting into generic advice and makes it produce copy you can paste into your CRM, email tool, or role-play practice.

Context should include your one-page product snapshot, ideal customer description, pains/goals, proof, and boundaries. Task is what you want generated (discovery questions, objection replies, role-play, follow-up). Format defines length, channel, tone, and structure.

Template you can reuse:

  • Context: “Product: … Ideal customer: … Top pains: … Top goals: … Proof allowed: … Boundaries: …”
  • Task: “Write (1) a 20-second call opener, (2) 6 discovery questions, and (3) 4 calm replies to objections: price, timing, trust, competitor.”
  • Format: “Use bullet points. No jargon. No guarantees. Include one follow-up question after each objection reply.”

Test and improve by comparing two prompt versions. Version A: minimal context. Version B: full snapshot + proof + boundaries + tone + length. Run the same task and compare outputs for specificity, compliance, and edit time. Keep the version that needs fewer corrections, then iterate one variable at a time (tone, proof detail, length). This small discipline compounds: better prompts lead to better discovery questions, cleaner call notes, and follow-ups that sound like you—not like a template.

Chapter milestones
  • Write a one-page product snapshot (who, what, why)
  • Describe your ideal customer in simple terms
  • List outcomes, proof, and boundaries (what you can’t promise)
  • Build a reusable prompt that includes context + tone
  • Test and improve: compare two prompt versions
Chapter quiz

1. Why do beginners often feel that AI-generated sales scripts are “generic”?

Correct answer: Because the prompt lacked specific product, market, and standards details
Generic outputs usually come from generic prompts that don’t include clear specifics about the product, audience, and expectations.

2. What is the main purpose of creating a one-page product snapshot (who/what/why) before prompting AI?

Correct answer: To give AI a clear brief so it can draft usable, on-position scripts
The snapshot reduces ambiguity and helps the AI write messages aligned with your positioning.

3. Which input best helps AI avoid making risky or misleading sales claims?

Correct answer: Clear outcomes, proof, and boundaries (what you can’t promise)
Outcomes, proof, and boundaries define what you deliver and what you cannot promise, preventing incorrect assumptions.

4. What does the chapter recommend as a practical habit for improving prompt quality?

Correct answer: Test two prompt versions side-by-side and keep the one requiring fewer fixes
Comparing two prompt versions helps you choose the one that produces more usable output with less downstream editing.

5. According to the chapter, what is your real job when working with AI on sales scripts?

Correct answer: Reduce ambiguity by providing clear context and tone
Treat AI like a junior sales assistant: better inputs (context + tone) lead to better, more accurate outputs.

Chapter 3: Discovery Questions That Don’t Sound Robotic

Discovery is where beginners either build trust—or accidentally sound like a checklist. If your questions feel generic (“What are your goals?”), prospects give generic answers (“More leads”), and you can’t tailor a next step. The goal of this chapter is to help you create a discovery question bank that fits your offer, generate follow-up questions that dig deeper without pressure, and practice active listening lines so the conversation feels human. You’ll also learn how to turn answers into a clear next step, then use AI to role-play a 10-minute discovery call simulation so you can practice without waiting for real prospects.

A practical way to think about discovery: you are mapping a before-and-after story. The “before” includes the current situation, pain, constraints, and decision process. The “after” is the outcome they want, why it matters now, and what success would look like. Your questions should move naturally through that map—like a conversation, not an interview. When you use AI, your job is to provide enough context (offer + customer + constraints) so the model generates questions in your voice and for your market. Without that context, AI often defaults to robotic templates.

Throughout this chapter, you’ll build three assets you can reuse: (1) a question bank for calls, emails, and DMs; (2) a set of follow-up “dig deeper” probes; and (3) a summary-and-next-step template that turns messy answers into clean call notes and a tailored follow-up message.

Practice note for this chapter’s milestones (the question bank, dig-deeper follow-ups, active listening and summaries, tailored next steps, and the 10-minute role-play simulation): for each one, document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 3.1: What discovery is and why it matters

Discovery is a structured conversation that helps you decide whether you can help, and helps the prospect decide whether they want to proceed. It is not a pitch. The biggest beginner mistake is treating discovery as a “pre-pitch warmup,” where every question is secretly designed to corner the prospect into saying yes. That approach triggers resistance and short answers.

Good discovery does three jobs at once: it clarifies the prospect’s situation, it builds credibility through relevance, and it produces the raw material for a tailored next step. In practice, your questions should uncover: (1) the problem and impact, (2) what they’ve tried, (3) what they want instead, (4) what constraints exist (time, budget, tools, approvals), and (5) what a reasonable next step is (demo, audit, trial, proposal, or a polite “not a fit”).

Engineering judgment matters here: you do not need every detail. You need enough signal to recommend a next action that is low-friction and aligned with their urgency. A useful rule is “diagnose to the level needed to prescribe the next step.” If your offer is simple (e.g., a one-time service), discovery can be short. If your offer is complex (e.g., multi-stakeholder software), discovery must include decision process and success criteria.

Practical outcome: by the end of discovery, you should be able to summarize their current state, desired outcome, and the next step in two sentences. If you can’t, your questions weren’t targeted enough.

Section 3.2: Open vs. closed questions (simple rules)

Open questions create range; closed questions create precision. Beginners often pick one style and overuse it. If you ask only open questions, you get long stories with missing facts. If you ask only closed questions, you sound like a form. The skill is sequencing: open first to learn what matters, then closed to quantify and confirm.

Use open questions when you need context, language, and priorities. Examples: “Walk me through how you’re handling this today,” or “What prompted you to look at this now?” Use closed questions when you need a decision-relevant detail: “How many inbound leads per month?” “Is the goal to reduce churn, yes or no?”

Simple rules that prevent robotic delivery:

  • One question at a time. Avoid stacking: “What are your goals and challenges and timeline?”
  • Prefer “what” and “how” over “why” early. “Why” can feel accusatory; “what led to…” is softer.
  • Ask for examples. “Can you share a recent example?” makes answers concrete.
  • Mirror their words. Reuse the exact phrase they used (e.g., “quality leads”) to show you’re listening.

Workflow tip: create a question bank in three formats—call, email, DM. Calls can handle broader questions (“Talk me through…”). Email and DM should be shorter and more specific, often ending with a light binary to make replying easy (e.g., “Is this mostly a volume problem or a conversion problem?”). This is where AI helps: you can generate variations that keep intent but change tone.

Section 3.3: Problem, process, and priority questions

A practical discovery bank is easiest to build if you categorize questions. In beginner sales, three categories cover most conversations: problem questions, process questions, and priority questions. Together, they create a clear “before” picture and reveal whether action is likely.

Problem questions identify the pain and impact. The key is to move from symptoms to consequences without sounding dramatic. Start broad, then narrow: “What’s not working as well as you’d like?” → “Where does that show up—in lead quality, speed, or close rate?” → “What happens if it stays the same for the next quarter?”

Process questions reveal how work actually happens today (tools, handoffs, workflow). These prevent you from proposing something that won’t fit their reality. Examples: “How are leads currently captured and routed?” “Who owns follow-up?” “What tools are involved?” “Where do things typically slow down?” Process questions also create natural follow-ups because each answer contains a new branch (“You mentioned spreadsheets—what’s hardest about keeping them updated?”).

Priority questions determine urgency and importance without pressure. They’re about trade-offs: “What else is competing for attention right now?” “If you fixed one part of this first, what would you choose?” “What would make this a ‘must-do’ instead of a ‘nice-to-have’?”

Common mistake: asking “How urgent is this?” too early. Better: ask about triggers and deadlines: “Is there a date this needs to be working by?” or “What’s changing in the business that makes this timely?” Practical outcome: after 5–7 well-chosen questions across these categories, you should know what they want, what blocks them, and what they’ll prioritize next.

Section 3.4: Budget and decision questions without being awkward

Budget and decision questions feel awkward when they appear out of nowhere or when they sound like an interrogation. The fix is framing: explain that you’re asking to recommend the right next step and avoid wasting their time. Then ask in a way that matches your offer type.

For budget, start with range and constraints instead of demanding a number. Examples: “Have you set aside a budget range for solving this?” “Are you looking for something lightweight to start, or are you prepared to invest if the ROI is clear?” If your offer has a typical range, you can anchor gently: “Projects like this are usually between X and Y depending on scope—does that feel in the right neighborhood?” The goal is not to win a negotiation in discovery; it’s to confirm feasibility.

For decision process, ask about steps, people, and criteria. Examples: “Besides you, who else needs to weigh in?” “What does the approval process look like?” “What would you need to see to feel confident moving forward?” “If we get to a recommendation, what are the next steps on your side?” These questions reduce ghosting because you learn how decisions actually get made.

Common mistakes: (1) asking budget before value is clear, (2) skipping decision questions and then being surprised by a hidden stakeholder, and (3) sounding defensive (“Do you even have budget?”). Practical outcome: you should finish discovery knowing whether a next step is realistic and who needs to be involved, so your follow-up message includes the right attendees, materials, and timeline.

Section 3.5: Summaries that confirm understanding

Active listening is what makes discovery feel human. It’s also what turns your notes into a tailored next step. Your summaries should do two things: confirm you understood, and invite correction. The easiest pattern is: Situation → Impact → Desired outcome → Next step.

Use short listening lines throughout: “That makes sense.” “Got it.” “Tell me more about that.” But the higher-skill move is the mid-call summary: “Let me pause and make sure I’m tracking.” Then deliver a 20–40 second recap in their words. For example: “So right now leads are coming in from ads and referrals, but follow-up is inconsistent because it’s split across two inboxes. The biggest impact is missed demos, and you want a simple system where every lead gets a response within an hour. Did I get that right?”

When they confirm, you earn the right to propose a next step that fits: “Based on that, the most useful next step is a 30-minute workflow audit where we map routing and response times. If it looks like a fit, I’ll share a plan and a fixed price.” That is tailored, calm, and non-pushy.

Common mistake: summarizing only the problem (“Sounds like things are messy”) without impact or outcome. Another mistake is turning the summary into a pitch. Keep it neutral and precise. Practical outcome: your summary becomes your follow-up email/DM almost verbatim: recap, agreed next step, who’s involved, and timing.

Section 3.6: AI prompt patterns for discovery and follow-ups

AI is best used as a question generator and role-play partner—not as the person conducting your call. Your inputs determine whether it produces natural questions or robotic scripts. Before prompting, prepare a simple brief: your offer, ideal customer, common pains, proof points, constraints (price range, onboarding time), and your preferred tone (friendly, direct, consultative).

Prompt pattern 1: Discovery question bank. Ask for categories and channels: “Create 25 discovery questions for [offer] sold to [ICP]. Group into problem/process/priority/budget/decision. Provide versions for (a) live call, (b) email, (c) LinkedIn DM. Keep tone conversational and avoid jargon.” This builds your reusable bank.

Prompt pattern 2: Dig-deeper follow-ups. Feed a sample answer and request probes: “Prospect said: ‘We’re getting leads but they’re low quality.’ Give 10 non-pushy follow-up questions that diagnose sources, qualification, and conversion points. Include 3 that ask for examples and 2 closed questions to quantify.” This is how you generate depth without pressure.

Prompt pattern 3: Active listening lines + summaries. Provide notes and ask for a recap: “Here are my raw notes… Write (a) two mid-call summary options in the prospect’s language, (b) a 3-sentence end-of-call recap, and (c) a follow-up email with next steps.” This directly supports clean call notes and a tailored follow-up message.

Prompt pattern 4: Mini role-play (10-minute discovery call simulation). Instruct the AI to act as the prospect and to resist realistically: “Role-play as a [ICP] prospect. I’m the seller. Start with a brief context and answer naturally. Only reveal information if I ask good questions. After 10 minutes, stop and critique my questioning: where I sounded robotic, what I missed, and 5 better follow-ups.” This creates deliberate practice and builds your instinct for sequencing open-to-closed questions.

Chapter milestones
  • Create a discovery question bank for your offer
  • Generate follow-up questions that dig deeper (without pressure)
  • Practice active listening lines and summaries
  • Turn answers into a tailored next step
  • Mini role-play: 10-minute discovery call simulation
Chapter quiz

1. Why do generic discovery questions (e.g., “What are your goals?”) often lead to weak next steps?

Correct answer: They produce generic answers that don’t give enough detail to tailor a next step
Generic questions tend to trigger generic replies (like “more leads”), leaving you without specifics to personalize what happens next.

2. In this chapter’s framework, what is the practical way to think about discovery?

Correct answer: A before-and-after story you map through conversation
Discovery is described as mapping a before-and-after story so the conversation flows naturally and leads to a tailored next step.

3. Which set best represents what belongs in the “before” part of the discovery map?

Correct answer: Current situation, pain, constraints, and decision process
The “before” includes where they are now: situation, pain, constraints, and how decisions get made.

4. According to the chapter, what helps AI generate discovery questions that don’t sound robotic?

Correct answer: Providing enough context (offer + customer + constraints) so questions match your voice and market
Without specific context, AI tends to default to robotic templates; context helps it produce natural, market-fit questions.

5. Which option correctly lists the three reusable assets you build in this chapter?

Correct answer: A question bank; follow-up “dig deeper” probes; a summary-and-next-step template
The chapter specifies three assets: a discovery question bank, a set of deeper follow-ups, and a summary-and-next-step template.

Chapter 4: Objection Handling—Stay Calm and Be Helpful

Objections are not interruptions to “get past.” They are signals: the buyer is telling you what is currently unsafe, unclear, or untrue in their mind. Your job is to keep the conversation steady, reduce confusion, and help them make a good decision—even if that decision is “not now” or “not us.”

AI is useful here because it can generate many calm, structured replies quickly, and it can role-play to help you practice. But AI cannot read a buyer’s real intent, detect legal/compliance boundaries, or decide what’s ethical in your market. You still own judgment: when to press, when to pause, and when to recommend a safer alternative.

In this chapter you’ll learn a simple 4-step structure for handling objections, use AI to generate replies for your top objections, run short objection drills, rewrite AI drafts into your voice, and build a respectful “safe alternatives” script for when it isn’t a fit. The goal is not to “win.” The goal is clarity and next steps.

  • Use one repeatable structure so you don’t panic or ramble.
  • Generate 10+ objection replies with AI, then edit them to match your tone and ethics.
  • Practice with short, realistic role-plays until calm responses become automatic.
  • Know when to stop pushing and how to exit gracefully without burning trust.

The best objection handling sounds like a helpful colleague, not a debater. You’ll ask one good question, answer precisely, and confirm what happens next.

Practice note for this chapter’s milestones (the 4-step structure, replies to your top 10 objections, role-play drills, voice-and-ethics rewrites, and the “safe alternatives” script): for each one, document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Section 4.1: What an objection really means

An objection is usually a proxy for something else. People rarely say, “I don’t trust you yet,” or “I’m worried I’ll look bad if this fails.” Instead you hear: “Too expensive,” “We’re all set,” “Send me something,” or “We’re using a competitor.” Treat objections as information, not rejection.

Most objections fall into four buckets: (1) Value (they don’t believe the outcome is worth the cost), (2) Fit (they’re unsure it works for their situation), (3) Risk/Trust (they fear downside, vendor lock-in, or hidden costs), and (4) Priority/Timing (they have other fires or no decision process). Your first task is to identify the bucket before you answer.

Common mistake: replying to the literal words instead of the underlying concern. Example: “It’s too expensive.” If you immediately discount, you may confirm their fear that your pricing isn’t grounded in value. Another mistake: over-explaining. Long speeches sound defensive and create new questions.

Practical workflow with AI: paste 10 anonymized objections you hear most often and ask the model to categorize them into value/fit/risk/timing and propose one clarifying question for each. This simple step improves your engineering judgment: you respond to the real issue, not the surface phrasing.

Outcome: you stay calm because you know what an objection is—data. You also protect ethics by not “handling” objections that indicate misfit. Some objections are actually boundary statements; your job is to respect them.

Section 4.2: The 4 steps: acknowledge, clarify, respond, confirm

Use a consistent 4-step structure so every objection feels manageable: Acknowledge → Clarify → Respond → Confirm. Think of it as conversational control without being controlling.

  • Acknowledge: validate the concern without agreeing with false assumptions. “That makes sense.” “I hear you.”
  • Clarify: ask one short question to locate the real issue. “When you say expensive, compared to what?”
  • Respond: answer in a few sentences with evidence, options, or a next step. Avoid a monologue.
  • Confirm: check if it’s resolved and propose a clear next action. “Does that address it?” “If yes, should we…”

Here’s a template you can reuse in calls, email, and DMs:

Acknowledge: “Totally fair.”
Clarify: “Is the main concern budget this quarter, or not seeing enough ROI to justify it?”
Respond: “If it’s ROI, we can map the outcome to one metric you care about and compare it to your current baseline. If it’s budget timing, we can discuss a smaller start or a pilot.”
Confirm: “Which of those is closer? And if we solve that, are you open to the next step?”

How AI helps: ask for 3 variations per step (more direct, more warm, more concise) so you can match different buyer personalities. Then run short objection drills: prompt the AI to role-play a skeptical buyer and only allow yourself two sentences per step. This builds the habit of clarity under pressure.

Common mistake: skipping the confirm step. Without confirmation, you can “answer” perfectly and still drift into vague endings. Confirmation creates momentum and clean next steps for your call notes and follow-up message.

Section 4.3: Price objections: value, options, and framing

Price objections are often value objections wearing a budget mask. The buyer is saying, “I don’t yet believe the outcome is worth the trade-off.” Your job is to connect price to a concrete result, then offer sensible options without racing to discount.

Use the 4 steps:

Acknowledge: “That’s fair—budget matters.”
Clarify: “Is it out of range entirely, or do you need help justifying it internally?”
Respond: tie to one metric: “Most teams use us to reduce X or increase Y. If we move Y by even Z%, it typically covers the cost.” Offer options: pilot, phased rollout, or a lower tier.
Confirm: “If we can show a realistic path to ROI, would you want to review a simple plan?”

Framing techniques that stay ethical:

  • Anchor to baseline: compare to their current spend, time, or loss—only with numbers you can justify.
  • Option value: “Start small” should still create a meaningful result; avoid “cheap entry” that can’t succeed.
  • Cost of inaction: use cautiously, as a question: “What happens if nothing changes for 6 months?”

AI prompt to generate replies for your top 10 objections (edit later): “You are my sales coach. For each objection, write a reply using Acknowledge/Clarify/Respond/Confirm. Keep it under 90 words. Provide one version for phone and one for email.” Paste your real objections. Then rewrite in your voice: remove hype, add truthful specifics (case study, typical range, pilot length), and align to your policies (no deceptive scarcity, no pressure).

Common mistake: negotiating against yourself. If you discount before clarifying, you teach buyers to object to get a better deal. Earn the right to discuss pricing by first making the value legible.

Section 4.4: Timing and priority objections: next steps and nudges

“Not now” can mean “not ever,” but it can also mean “I can’t see the path from here to a decision.” Timing objections are often process problems: unclear stakeholders, missing urgency, or a busy quarter. You can be helpful by creating a low-friction next step.

Apply the structure:

Acknowledge: “Makes sense—timing is everything.”
Clarify: “What would need to be true for this to become a priority—budget cycle, a project kickoff, or an internal approval?”
Respond: propose a tiny action: a 15-minute scoping call, a pilot plan, or an email summary for their boss. Share a decision map: “Typically it’s step A → B → C; we can stop after A if it’s not compelling.”
Confirm: “Should we pencil a check-in for the week after your planning meeting, or would you prefer I send a 3-bullet recap now?”

Engineering judgment: “nudge” without pestering. Set a date and a purpose. A follow-up that says “just checking in” is lazy and forces them to do the work. Instead, bring value: a short ROI model, a one-page implementation outline, or answers to the last unanswered risk.

Role-play drill with AI: ask the model to be a prospect who is busy and slightly evasive. Practice ending with a concrete next step in under 30 seconds. Then ask AI to convert the conversation into clean call notes: decision timeline, stakeholders, risks, and the exact follow-up message you should send.

Common mistake: accepting “circle back later” with no date. If there is no scheduled next step, you have no pipeline—only hope.

Section 4.5: Trust and competitor objections: proof and differentiation

Trust objections show up as: “We haven’t heard of you,” “How do we know this will work?” or “We’re already using X.” Your goal is not to attack the competitor or over-promise. Your goal is to reduce perceived risk with credible proof and clear differentiation.

Acknowledge: “Totally fair—choosing a vendor is a risk.”
Clarify: “What would make you confident—references, security details, a pilot, or seeing it on your data?”
Respond: provide proof: a relevant case study, reference call, security documentation, trial terms, or a transparent limitation. For competitors: “If X is working well, keep it. Where we tend to help is when teams need A and B without C.”
Confirm: “If we can validate this with a pilot or reference, would you be open to comparing side-by-side?”

Practical differentiation method: pick one axis that matters to them (time-to-value, integrations, support model, accuracy, compliance posture). Then contrast calmly using verifiable claims. Avoid vague statements like “we’re better” or “more innovative.”

Use AI carefully: it can draft comparison language, but it may hallucinate competitor features or invent metrics. Your job is to supply facts. Prompt: “Draft a neutral competitor comparison table with only these verified points: [paste]. Add 3 ethical talk tracks that do not insult the competitor.” Then edit to ensure accuracy and compliance.

Common mistake: dumping testimonials. Proof works when it matches their context (industry, size, use case) and addresses the specific risk they named (security, adoption, results).

Section 4.6: When to stop pushing: respectful exits and referrals

Some objections are actually disqualifiers: lack of budget with no path, a required feature you don’t have, a timeline you can’t meet, or values misalignment. Pushing past these doesn’t “save the deal”—it creates churn, refunds, bad reviews, or compliance risk. Advanced selling includes the ability to exit cleanly.

Build a safe alternatives script: a short set of sentences you can use when it’s not a fit, plus a helpful next direction. This protects your reputation and often keeps the relationship for later.

Example (call or email):
Acknowledge: “Thanks for being direct—this is helpful.”
Clarify: “Just to confirm, the non-negotiable is [requirement]?”
Respond: “In that case, we’re probably not the right tool today. We don’t want you to buy something that can’t succeed. Two alternatives I’ve seen work are [option A] and [option B], depending on your constraints.”
Confirm: “Would it be useful if I introduced you to someone who specializes in that, or should we reconnect if your needs change?”

How AI helps: generate a few respectful exit scripts for different scenarios (feature gap, budget mismatch, compliance risk, timeline). Then rewrite to match your voice and ethics: remove manipulation (“last chance”), remove guilt (“you’ll fall behind”), and add clarity (“here’s what we can and cannot do”).

Common mistake: ghosting after disqualification. A short, kind close-out message with notes and referrals can turn a “no” into future inbound or a referral. Practical outcome: cleaner pipeline, less stress, and a brand that feels trustworthy.

Chapter milestones
  • Learn the 4-step objection handling structure
  • Generate objection replies for your top 10 objections
  • Role-play: short objection drills with AI
  • Rewrite replies to match your voice and ethics
  • Build a “safe alternatives” script when it’s not a fit
Chapter quiz

1. In this chapter, how should you interpret a buyer’s objection?

Show answer
Correct answer: As a signal that something feels unsafe, unclear, or untrue to the buyer
The chapter frames objections as signals about the buyer’s current concerns, not obstacles to bulldoze through.

2. What is the primary goal of objection handling according to the chapter?

Show answer
Correct answer: To create clarity and agree on next steps—even if it’s “not now” or “not us”
The stated goal is clarity and next steps, not “winning.”

3. Which approach best matches the recommended objection response style?

Show answer
Correct answer: Use a repeatable structure, ask one good question, answer precisely, and confirm what happens next
The chapter emphasizes calm structure: one good question, precise answer, and a clear next step.

4. What is AI useful for in objection handling, and what can it NOT do for you?

Show answer
Correct answer: It can generate structured replies and role-play practice, but you must supply judgment on intent, compliance, and ethics
AI helps generate drafts and practice scenarios, but you still own judgment around intent, boundaries, and ethics.

5. When should you use a “safe alternatives” script?

Show answer
Correct answer: When it isn’t a fit, so you can exit gracefully without burning trust
The chapter advises knowing when to stop pushing and offering a respectful exit with safer alternatives.

Chapter 5: Scripts by Channel—Call, Email, and DM

Beginners often try to reuse one “good script” everywhere. That’s the fastest way to sound robotic and get ignored. The same message must change shape depending on the channel. A call is synchronous: you can ask, listen, and adjust in real time. Email is asynchronous: you need crisp context and a single next step. DMs sit in between: they’re informal and quick, but also high-trust and easy to abuse.

This chapter gives you practical, channel-specific script building blocks: a call opener and agenda that earns permission, follow-up emails for common situations, non-spammy DMs, a short voicemail, and a simple A/B testing method. You’ll also learn how to use AI as a drafting partner—fast at options and formatting, but only accurate when you provide a clear product-and-customer brief and strong constraints.

Engineering judgment matters here: choose the channel that matches urgency and relationship, then choose the shortest script that still creates clarity. The goal is not “more persuasive words.” The goal is an easy decision for the buyer: what this is, whether it’s relevant, and what happens next.

Practice note (applies to every milestone in this chapter—the call opener, the three follow-up emails, the five DMs, the voicemail, and the A/B test): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 5.1: Choosing the right channel for the situation

Channel choice is strategy. Before you ask AI to write anything, decide what you’re trying to accomplish in this moment: start a conversation, confirm understanding, move to a next step, or recover a stalled thread. Then pick the channel that makes that outcome easiest.

Use a call when you need nuance: discovery, multi-stakeholder dynamics, negotiation, or any objection where tone matters (price, trust, competitors). Use email when you need a record: recaps, proposals, scheduling, or internal forwarding. Use DM when you have light rapport and want a low-friction “permission check” without forcing a meeting.

  • High urgency + high complexity → call first, then recap email.
  • Low urgency + medium complexity → email with clear CTA.
  • Low complexity + early-stage → DM to ask permission, then move to email/call.
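If it helps to see these heuristics as explicit logic, here is a minimal Python sketch. The mapping mirrors the bullets above; the function name and the email fallback are illustrative assumptions, not part of the chapter's framework:

```python
def choose_channel(urgency: str, complexity: str, stage: str = "mid") -> str:
    """Map the chapter's heuristics to a first-touch channel.

    urgency/complexity take "low", "medium", or "high"; stage is "early" or "mid".
    """
    if urgency == "high" and complexity == "high":
        return "call, then recap email"
    if urgency == "low" and complexity == "medium":
        return "email with clear CTA"
    if complexity == "low" and stage == "early":
        return "DM to ask permission, then move to email/call"
    # Fallback: email is a safe default because it leaves a written record
    return "email with clear CTA"
```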

Common mistakes: defaulting to DM because it feels easier (but can read as spam); sending long emails to “explain everything”; or calling without a stated agenda. Practical outcome: you’ll reduce wasted touches by matching the channel to the decision the buyer needs to make.

Section 5.2: Phone/call scripts: openings, transitions, closings

A good call script is mostly structure, not paragraphs. Your job is to earn permission, set an agenda, ask beginner-friendly discovery questions, and close with an explicit next step. AI can draft variations, but you choose the one that sounds like you and fits the context.

Call opener + agenda (permission-based):
“Hi {Name}, it’s {You} from {Company}. Did I catch you with 30 seconds? … Thanks. The reason I’m calling: we help {ICP} with {problem}. If it’s relevant, I’d love to ask 2–3 questions and then we can decide if a longer chat makes sense. Fair?”

Discovery transitions (keep it simple):
“To make sure I’m not guessing—how are you handling {process} today?”
“What prompted you to look at this now?”
“What’s the impact if nothing changes?”
“Who else would weigh in on a decision like this?”

Closing (clear next step):
“Based on what you said, I think it’s worth a deeper look. Next step: a 20-minute call with {role} to walk through {specific outcome}. Are you open to Tuesday or Thursday?”

Short voicemail script:
“Hi {Name}, {You} at {Company}. Calling because we help {ICP} reduce {pain} by {mechanism}. If you’re the right person, call me at {number}; if not, who should I speak with? Again {number}. I’ll also send a quick email.”

Common mistakes: launching into a full pitch, asking five questions in a row, or “checking in” at the end with no decision. Practical outcome: you’ll run calls that feel calm, efficient, and permission-based—making objections less likely and easier to handle when they appear.

Section 5.3: Email scripts: structure, brevity, clear CTA

Email wins when it is skimmable and specific. Aim for 4–8 short lines, one topic, one CTA. AI often over-writes; you must constrain it: “max 90 words,” “one question,” “no buzzwords,” “single CTA.”

Follow-up email #1: No reply
Subject: “Quick check—{topic}”
“Hi {Name}—circling back on {original context}. We typically help {ICP} with {problem} by {method}. Is this a priority this quarter, or should I close the loop?”

Follow-up email #2: Recap (after call)
Subject: “Recap + next step: {desired outcome}”
“Thanks for your time today. My notes: (1) {pain} (2) {current approach} (3) {success metric}. Next step we agreed: {meeting/action} by {date}. Did I capture that correctly?”

Follow-up email #3: Next step (scheduling)
Subject: “Time options for {meeting name}”
“If helpful, I can walk you through {specific thing} and share a 2-minute example. Do any of these times work: {Option A}, {Option B}? Or send a link and I’ll book around you.”

Subject lines (practical set): “{Name}—question about {process}”, “Following up on {company}”, “Recap: {metric}”, “Next step: {date}”, “Is this a ‘not now’?”

Common mistakes: multiple asks in one email, vague CTAs (“let me know”), or writing a mini-brochure. Practical outcome: you’ll get more replies because the buyer can answer in one sentence.
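Constraints like “max 90 words” and “single CTA” can also be checked mechanically before you hit send. A minimal sketch, assuming a hand-picked buzzword list and treating question marks as a rough proxy for CTAs (both are illustrative assumptions):

```python
BUZZWORDS = {"synergy", "revolutionary", "game-changing", "cutting-edge"}  # illustrative list

def check_email(text: str, min_words: int = 80, max_words: int = 110) -> list:
    """Return a list of constraint violations for a draft follow-up email."""
    issues = []
    words = text.split()
    if not min_words <= len(words) <= max_words:
        issues.append(f"word count {len(words)} outside {min_words}-{max_words}")
    if text.count("?") != 1:  # one question mark as a rough proxy for one CTA
        issues.append("should contain exactly one question (single CTA)")
    found = BUZZWORDS & {w.strip(".,!?").lower() for w in words}
    if found:
        issues.append("buzzwords: " + ", ".join(sorted(found)))
    return issues
```

An empty list means the draft passes; otherwise each string tells you what to fix before sending.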

Section 5.4: DM scripts: personalization and respectful pacing

DMs require restraint. The goal is not to pitch; it’s to earn permission for a next step. Your DM should feel like a human note: specific personalization, short message, and an easy “yes/no” question. AI helps by generating variants, but you must provide a real personalization token (post they wrote, role change, hiring signal) and keep the tone natural.

Five DM messages that don’t feel spammy:

  • 1) Permission check: “Hi {Name}—quick one. Are you open to a brief question about how you’re handling {process} at {Company}?”
  • 2) Personalization + relevance: “Saw your post on {topic}. Curious—does {problem} show up for your team too, or is it mostly solved?”
  • 3) Value-first (no link): “We’ve seen {ICP} reduce {metric} by fixing {issue}. If helpful, I can share the 3-step checklist here—want it?”
  • 4) Right-person routing: “Not sure if you’re the right contact for {area}. Who owns {decision/process} on your side?”
  • 5) Gentle follow-up: “Should I pause this thread, or is {month} better to revisit {topic}?”

Respectful pacing: one DM, wait, one follow-up, then stop. Common mistakes: sending links immediately, using “Just following up,” or writing paragraphs. Practical outcome: you’ll convert DMs into permission to email or book a call without damaging trust.

Section 5.5: Handling objections in writing (without long paragraphs)

Written objections are easy to mishandle because you can over-explain. Your rule: acknowledge, clarify with one question, offer a small next step. Keep it to 3–5 lines. If the objection is emotionally loaded (trust, competitors), consider moving to a call: “Happy to talk live for 10 minutes.”

Price:
“Totally fair. To sanity-check fit: is the bigger issue budget, or proving ROI? If you tell me your target outcome ({metric}), I can share whether we’re a match and the leanest option.”

Timing / not now:
“Makes sense. What would need to be true for this to be a priority—{event}, {budget}, or {headcount}? If it helps, I can follow up in {specific month}.”

Trust / risk:
“I get the concern. Would it help if I shared (1) a relevant reference, and (2) a small pilot plan with clear exit criteria? If yes, what’s the main risk you’re trying to avoid?”

Competitor:
“Good choice to compare. When you evaluate {competitor} vs. alternatives, is your top criterion {speed}, {cost}, or {control}? If you share that, I’ll be direct about where we win/lose.”

Common mistakes: debating, sending a wall of text, or ignoring the objection and pushing for a meeting. Practical outcome: you’ll keep momentum while sounding calm and credible.

Section 5.6: Prompt templates for each channel and length

AI outputs improve when you specify: channel, audience, context, tone, length, and CTA. Include your product-and-customer brief (who it’s for, problem, outcomes, proof, constraints). Then ask for two versions (A/B) and choose based on clarity: the version that a busy buyer can answer fastest usually wins.

Call opener prompt:
“Write 2 call openers for a cold call. ICP: {role/industry}. Problem: {pain}. Offer: {what we do}. Tone: calm, permission-based. Include a 10-second agenda and ask to proceed. Max 45 seconds.”

Email prompts (three follow-ups):
“Draft 3 follow-up emails: (1) no reply, (2) recap after call, (3) propose next step. Include subject lines. Constraints: 80–110 words, 1 CTA, no buzzwords, no attachments.”

DM prompt (5 messages):
“Generate 5 LinkedIn DMs with respectful pacing. Personalization token: {token}. Goal: permission to ask 1 question. Each under 240 characters. No links.”

Voicemail + subject lines prompt:
“Write a 20-second voicemail and 8 subject lines that match it. Must be specific to {ICP} and {pain}. Avoid hype.”

A/B testing workflow:

  • Ask AI for Version A (direct) and Version B (softer).
  • Choose by clarity: can the buyer reply with “yes/no” or pick a time?
  • Keep the winner, then iterate one variable (subject line, CTA, first sentence).

Common mistakes: vague prompts (“write a sales email”), judging by what sounds clever, or changing multiple variables at once. Practical outcome: you’ll build a repeatable system that produces channel-appropriate scripts you can actually use.

Chapter milestones
  • Create a call opener + agenda that earns permission
  • Write 3 follow-up emails (no reply, recap, next step)
  • Draft 5 DM messages that don’t feel spammy
  • Create a short voicemail script and subject lines
  • A/B test: generate two versions and choose based on clarity
Chapter quiz

1. Why does reusing the same “good script” across call, email, and DM often fail?

Show answer
Correct answer: Because each channel requires the message to change shape to match how people read/respond
Calls, emails, and DMs work differently, so a one-size script can sound robotic and get ignored.

2. What best describes how a strong email script should be structured in this chapter?

Show answer
Correct answer: Provide crisp context and a single next step
Email is asynchronous, so it needs clear context and one clear action to take next.

3. What is the key advantage of a call compared to email, according to the chapter?

Show answer
Correct answer: You can ask, listen, and adjust in real time
Calls are synchronous, enabling live discovery and adaptation based on what the buyer says.

4. Which guideline best fits creating DMs that don’t feel spammy?

Show answer
Correct answer: Treat DMs as high-trust and easy to abuse, so keep them quick and respectful
DMs are informal and fast but high-trust, so misuse quickly feels spammy.

5. What does the chapter say is the real goal of these channel-specific scripts?

Show answer
Correct answer: To create an easy decision by clarifying what it is, relevance, and what happens next
The aim is clarity and an easy next step, not piling on persuasion or features.

Chapter 6: Practice, Improve, and Use Scripts Responsibly

By now you have discovery questions, objection replies, and a sense for how AI can help you draft and organize sales language. This chapter turns those pieces into a practical operating system: a weekly practice routine, a method to improve scripts without losing your authentic voice, and a safe process for turning messy conversations into CRM-ready notes and follow-ups.

Beginners often treat scripts like “final answers.” In reality, scripts are hypotheses. You test them in role-play, refine them with feedback, and then use them in real conversations where you must still listen, adapt, and stay honest. AI is excellent at generating variations and spotting gaps, but you are responsible for accuracy, tone, privacy, and ethical use. Your goal is not to sound like a bot—it’s to sound prepared.

We’ll work through six practical steps that match a real sales week: set up role-plays with clear scenarios, score what you wrote, iterate using AI critiques, convert calls into clean notes and next steps, apply privacy rules, and finish with a personal “script kit” you can reuse in calls, emails, and DMs.

Practice note (applies to every milestone in this chapter—the weekly practice routine, call notes and follow-ups, your script library, quality checks, and the final project): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.

Sections in this chapter
Section 6.1: Role-play setups: buyer personas and scenarios

A weekly practice routine works when your role-plays feel like the conversations you actually have. Start by defining 2–3 buyer personas and 3–5 scenarios. Keep them simple, specific, and tied to your product-and-customer brief so AI has enough context to play a realistic buyer.

Persona template (copy/paste): Role, industry, daily responsibilities, top KPI, current tools, constraints (budget, security, time), likely objections, and “what success looks like.” Then define a scenario: inbound demo request, cold outbound reply, renewal risk, competitor comparison, or “price pushback after proposal.”

  • Scenario A: New lead is curious but busy; you have 10 minutes to qualify.
  • Scenario B: Prospect says “We already have a vendor.” You need to explore gaps without arguing.
  • Scenario C: Prospect likes it but says “Not this quarter.” You need next steps and a future date.

Role-play prompt: “Act as a {persona}. I’m the seller of {product}. Your goal is to evaluate if this is worth a meeting. Be realistic, slightly skeptical, and answer briefly unless I ask good questions. Include one objection: {price/trust/timing/competitor}. After the role-play, tell me what I did well and what I missed.”

Common mistake: practicing only “happy path” calls where the buyer agrees quickly. Instead, schedule one hard role-play per week. Over time, you’ll build calmness under pressure and learn which discovery questions actually unlock useful details.

Section 6.2: Scoring your script: clarity, tone, and flow

Practice without measurement turns into repetition. After each role-play or real call, score your script using three criteria: clarity, tone, and flow. This keeps improvement objective and beginner-friendly.

  • Clarity: Did the buyer understand what you do and why it matters in one or two sentences? Did your questions avoid jargon? Did you ask for a concrete next step?
  • Tone: Did you sound calm, curious, and respectful? Did you acknowledge concerns before explaining? Did you avoid defensiveness when objections came up?
  • Flow: Did the conversation move from context → problem → impact → fit → next steps? Or did it jump around? Did you “stack” too many questions at once?

Practical scoring method: rate each category 1–5, then write one sentence: “If I could change one line, it would be ___.” That single sentence is your improvement target for the week.
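The 1–5 rubric is easy to track in a spreadsheet or a few lines of code. A minimal sketch—the function name and output shape are illustrative, not a prescribed tool:

```python
def score_call(clarity: int, tone: int, flow: int) -> dict:
    """Average the three rubric scores and flag the weakest area to improve next."""
    scores = {"clarity": clarity, "tone": tone, "flow": flow}
    for name, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be between 1 and 5")
    return {
        "average": round(sum(scores.values()) / 3, 1),
        "improve_next": min(scores, key=scores.get),  # lowest score = this week's target
    }
```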

Engineering judgment matters here. A script can be “clear” but too long; it can be “friendly” but vague; it can “flow” but miss qualification. Your scores should reflect your sales motion: a 10-minute qualification call needs faster clarity; a high-trust enterprise deal needs slower pacing and more proof points.

Common mistakes to catch with scoring: opening with a feature dump, asking leading questions (“You’d want to save time, right?”), skipping impact (“What happens if this stays the same?”), and ending without a specific calendar action.

Section 6.3: Iteration loop: ask AI to critique and revise

Once you score your script, run an iteration loop. The goal is to revise with intention, not randomly request “make it better.” Give AI your original lines, your scores, and the situation. Ask for a critique first, then a rewrite, then a version that preserves your voice.

Iteration loop prompt (3 steps):

  • Step 1 – Critique: “Here’s my opener + 6 questions + my objection reply. Critique for clarity, tone, flow. Point out any risky claims or assumptions.”
  • Step 2 – Rewrite: “Rewrite with fewer words. Keep it beginner-friendly. Provide two variants: direct and warm.”
  • Step 3 – Voice lock: “Now rewrite again but keep my phrasing style: short sentences, no hype, ask one question at a time.”

Add constraints so the output is usable: maximum length for an opener, number of questions, and what must be included (a permission-based transition, a pricing deflection line, a competitor-neutral response). If you sell via email or DMs, also ask for channel-specific versions that keep the same intent but adjust formatting and brevity.

Common mistake: letting AI introduce promises you cannot guarantee (“We’ll cut costs by 30%”). Build a habit of scanning for absolutes, numbers, and compliance language. Replace them with honest, evidence-based wording: “Typically,” “in similar teams,” “depending on,” and “we can validate during a pilot.”
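Scanning for absolutes and unsourced numbers can be partly automated. A minimal sketch—the patterns are illustrative and deliberately incomplete, and a human still makes the final call on every flag:

```python
import re

# Illustrative patterns: absolute promises and unsourced numbers
RISKY_PATTERNS = [
    r"\bguarantee[ds]?\b",
    r"\balways\b",
    r"\bnever\b",
    r"\b\d+%",  # percentages usually need a cited source
]

def flag_risky_claims(text: str) -> list:
    """Return phrases that deserve a manual fact/compliance check before sending."""
    hits = []
    for pattern in RISKY_PATTERNS:
        hits.extend(re.findall(pattern, text, flags=re.IGNORECASE))
    return hits
```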

Practical outcome: by the end of a month, you should have a small set of tested lines (openers, transitions, objection replies) that feel natural and consistently score higher.

Section 6.4: Turning conversations into CRM-ready notes

One of the highest-leverage uses of AI is post-call cleanup: turning a messy conversation into clean notes, next steps, and a follow-up message. The key is structure. If you feed AI a transcript (or rough notes), tell it exactly what format your CRM expects and what counts as a “decision-worthy” detail.

CRM note structure (example):

  • Summary: 2–3 sentences, no fluff.
  • Pain / goal: what they’re trying to achieve and why now.
  • Current approach: tools, workflow, gaps.
  • Decision process: stakeholders, timeline, procurement steps.
  • Risks/objections: price, timing, trust, competitor comparison.
  • Next steps: exact actions, owners, dates.

Prompt: “Convert the notes below into CRM-ready fields using the structure above. Separate facts from assumptions. Flag missing info as questions for my next call. Then draft a follow-up email that recaps value in their words, confirms next steps, and asks one clear question.”

Engineering judgment: insist on factual language. AI will sometimes infer motivations or budgets that weren’t stated. Ask it to label uncertainties explicitly (“Not confirmed”). This protects accuracy and builds trust internally when others rely on your CRM notes.

Common mistakes: capturing only what you said (product features) instead of what they said (needs and constraints), forgetting stakeholders, and sending follow-ups that restate your pitch rather than confirming agreements. When done well, this workflow shortens your admin time and improves handoffs.
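The note structure above translates directly into a template you can enforce in code or a spreadsheet. A minimal sketch—the field names and the “NOT CONFIRMED” convention follow this section, while the function itself is an illustrative assumption:

```python
REQUIRED_FIELDS = [
    "summary", "pain_or_goal", "current_approach",
    "decision_process", "risks_objections", "next_steps",
]

def build_crm_note(**fields) -> tuple:
    """Fill the CRM note structure; unanswered fields become questions for the next call."""
    note, open_questions = {}, []
    for field in REQUIRED_FIELDS:
        value = fields.get(field)
        if value:
            note[field] = value
        else:
            note[field] = "NOT CONFIRMED"
            open_questions.append(f"Ask about {field.replace('_', ' ')} on the next call")
    return note, open_questions
```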

Section 6.5: Privacy, sensitive data, and safe sharing rules

Using scripts responsibly means protecting customer data and representing your offering honestly. Treat AI like a powerful assistant that should only see what it must see. Your default posture should be: minimize, anonymize, and verify.

  • Minimize: Don’t paste full contracts, full call recordings, or complete customer lists when a summary will do.
  • Anonymize: Replace names and identifiers with placeholders (Company A, Buyer 1). Remove emails, phone numbers, addresses, and account IDs.
  • Verify: Never send AI-generated claims to prospects without checking. If you cite security, compliance, pricing, or results, confirm against official sources.
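The anonymize step can be partly automated before you paste anything into an AI tool. A minimal regex sketch—the patterns are illustrative and will miss edge cases, so always review the output manually:

```python
import re

# Illustrative patterns only — not exhaustive; manual review is still required
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(text: str) -> str:
    """Replace emails and phone-like numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text
```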

Sensitive data to avoid sharing: payment details, passwords, API keys, health information, legal disputes, non-public financials, and any confidential roadmap. Also be cautious with transcripts that include private employee information or internal customer strategy.

Honesty basics: scripts must not imply endorsements, guaranteed outcomes, or capabilities you don’t have. If the buyer asks a question you can’t answer, the responsible script is: “I don’t want to guess. I’ll confirm and follow up by {time}.” That line protects trust better than a polished but wrong answer.

Finally, align with your company policies. If your org has approved tools, retention settings, or redaction requirements, use them. Responsible use is part of your professionalism, not an optional extra.

Section 6.6: Your final deliverables: scripts, objections, and prompts

Your final project is a complete beginner sales script kit you can use immediately and improve over time. You are building a personal library: a small number of high-quality assets, not a giant folder of untested text.

  • 1) Product-and-customer brief (1 page): who it’s for, problem, outcomes, differentiators, proof points, constraints, and “not a fit” cases.
  • 2) Discovery script set: one call opener, one agenda, 10 core questions, and 6 follow-up probes (why/impact/process/priority/timeline).
  • 3) Objection handling sheet: price, timing, trust, competitor, and “need to think” with calm acknowledgments + 2 questions each.
  • 4) Role-play pack: 3 personas × 3 scenarios with prompts you can reuse weekly.
  • 5) Post-call workflow: CRM note template + follow-up email and DM templates.
  • 6) Quick prompts library: rewrite prompts (shorter/warmer/more direct), critique prompts, and “turn notes into next steps” prompts.

Quality checks before you ship anything: Is it accurate? Is it honest? Does it respect privacy? If you can’t confidently answer “yes” three times, revise. Then test the kit in one real conversation, capture what happened, and run the iteration loop again. This is how beginners become consistent: a repeatable practice system that produces better conversations, cleaner follow-ups, and stronger trust.

Chapter milestones
  • Create a weekly practice routine with AI role-play
  • Turn calls into clean notes, next steps, and follow-ups
  • Build your personal script library and quick prompts
  • Set quality checks: accuracy, honesty, and privacy basics
  • Final project: your complete beginner sales script kit
Chapter quiz

1. In Chapter 6, why are scripts described as “hypotheses” rather than “final answers”?

Correct answer: Because they should be tested in role-play, refined with feedback, and adapted in real conversations
The chapter emphasizes testing and iterating scripts, then adapting them while listening in real conversations.

2. What is the best way to use AI during weekly practice, according to the chapter?

Correct answer: Set up role-plays with clear scenarios, score what you wrote, and iterate using AI critiques
Chapter 6 outlines a practice loop: role-play with scenarios, evaluate, and improve using AI feedback.

3. When turning messy conversations into CRM-ready output, what does the chapter recommend producing?

Correct answer: Clean notes, next steps, and follow-ups
The chapter highlights converting calls into usable notes plus clear next steps and follow-ups.

4. Which responsibility remains with you even if AI generates strong variations and points out gaps?

Correct answer: Ensuring accuracy, tone, privacy, and ethical use
The chapter stresses that you are accountable for honesty, accuracy, privacy, and overall responsible use.

5. What is the main goal of the final project described in Chapter 6?

Correct answer: Finish a personal “script kit” you can reuse across calls, emails, and DMs
The chapter ends with assembling a reusable beginner sales script kit for multiple channels.