Prompt Engineering — Beginner
Turn everyday goals into clear prompts that deliver usable plans.
This beginner course is a short, book-style guide to prompt engineering you can use immediately. You won’t learn coding. You won’t need technical terms. Instead, you’ll learn how to “talk to AI” in a clear, structured way so it can produce plans you can actually follow—trip itineraries, meal plans, and simple budgets.
Many first-time users try AI once, get a generic answer, and give up. The real difference is not the tool—it’s the prompt. When you learn how to provide the right context, limits, and format, the quality of the output improves fast. This course teaches that step-by-step, with reusable templates you can keep using after you finish.
By the final chapter, you’ll have your own small “prompt library”: fill-in-the-blank prompts that reliably generate usable results. You’ll know how to start with a messy idea (like “I want to eat healthier” or “I need a cheap weekend trip”) and turn it into a clear request that produces a plan, a checklist, or a table—without feeling overwhelmed.
The six chapters build on each other. You start with the simplest possible prompt and learn what an AI response really is: a draft you can shape. Next, you add constraints and quality checks so your answers become more reliable. Then you apply the same approach to three everyday domains—travel, food, and money. Finally, you combine everything into reusable templates and a capstone workflow.
Each chapter includes clear milestones (small wins) and focused subtopics. You’ll practice by copying proven prompts, then editing them to match your life. This makes it easy to learn even if you’ve never used AI tools before.
You’ll also learn simple safety habits: how to avoid sharing sensitive information, how to ask the AI to state assumptions, and how to spot common issues like vague answers or made-up details. The goal is not perfection—it’s getting practical value while staying in control of the final decision.
This course is for individuals who want to use AI for personal productivity: planning a trip, organizing meals, or getting clarity on a budget. If you’ve ever stared at a blank page without knowing where to start, the prompt templates in this course will give you a clear first step every time.
If you’re ready to learn prompt engineering in plain language and create plans you can use today, join the course and start practicing. Register free or browse all courses to find your next skill.
Prompt Engineering Educator and Productivity Systems Designer
Sofia Chen designs beginner-friendly AI workflows that help people make decisions faster and communicate more clearly. She has trained teams and individuals to turn messy goals into structured prompts that produce reliable plans and checklists.
You don’t need technical terms to use AI chat well. Think of it like asking a very fast assistant to produce a first draft. You write a message (a prompt), the assistant replies with a draft you can accept, edit, or steer. This chapter shows how that “steering” works using plain language and a simple template you will reuse for trips, meals, and budgets.
The key mindset shift is this: AI chat tools are not mind-readers, and they are not guaranteed to be correct. They are pattern-based draft-makers. Your job is to be a clear manager: give the goal, provide the situation, set limits, and request a specific kind of output. When you do that, you get results that feel surprisingly tailored—without starting over each time.
By the end of this chapter, you’ll write your first practical prompt, fix a vague request with one extra sentence, and learn how to save the good outputs so you can build on them later.
Let’s start with what an AI chat tool actually does.
Practice note (apply it to each subtopic in this chapter: “You send a message, you get a draft: how AI chat works in plain language”; “The 5-part prompt: goal, context, constraints, format, tone”; “A first win: ask for a simple checklist you can use today”; “Fixing vague prompts with one extra sentence”; and “Saving good outputs: notes, copy/paste, and versioning basics”): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
An AI chat tool takes your message and predicts a helpful reply based on patterns it learned from lots of examples of writing. In everyday terms: it’s like an autocomplete that can finish not just a sentence, but an email, a plan, or a checklist. That’s why it feels “smart”—it usually produces a coherent answer in the style you expect.
But it’s important to know what it is not. It is not a human expert with real-life experience, and it is not automatically connected to your bank account, calendar, or local grocery store inventory. Unless you provide the details (or explicitly use a tool that connects data), the AI will fill gaps with reasonable-sounding assumptions. That can be useful for brainstorming, but risky for decisions where accuracy matters (prices, opening hours, medical advice, legal rules).
Good prompt engineering is simply good communication. You’re briefing the assistant on your situation through the message you send: what you want, what matters, what you can’t do, and what “done” looks like. When the prompt includes those pieces, the tool’s output becomes more reliably usable for practical outcomes like “a three-day itinerary under $500” or “a week of dinners that take 20 minutes.”
Engineering judgment for beginners: treat AI output as a draft to review. If something would require you to check a website or do math, assume you still should verify it. Use the AI for speed, structure, and options—then apply your common sense and constraints.
AI chat has a simple loop: you send a message, you get a response. The response should be treated as a draft, not a final product. This one idea will save you frustration. If you think of the response as a final answer, you’ll be annoyed by imperfections. If you think of it as a draft, you’ll naturally edit, ask for tweaks, and improve it quickly.
Your prompt is the input. The output is the assistant’s best attempt at a response given the words you used. If your prompt is missing details, the assistant has two choices: ask questions, or guess. Many tools guess because it keeps the conversation moving. That’s why vague prompts create vague or overly generic outputs.
A practical workflow looks like this: send a clear prompt, read the draft against your constraints, ask one targeted follow-up to fix what’s off, and save the version you like. Repeat until the plan fits your life.
Common beginner mistake: rewriting the entire prompt from scratch each time. You usually don’t need to. If the first draft is close, your best tool is a targeted follow-up like: “Keep the same structure, but make it cheaper and slower-paced.” That preserves what already works and focuses the next output on what doesn’t.
“Asking” is what most people do: “Plan me a trip to Chicago.” The assistant can respond, but it must guess your dates, budget, interests, pace, and lodging style. “Specifying” is what gets you usable results: you provide the variables that matter so the tool can make good tradeoffs.
Here’s the difference in practice. Asking: “Make a meal plan for this week.” Specifying: “Make a meal plan for this week. Vegetarian dinners only, each ready in 25 minutes or less, and reuse ingredients across meals so the shopping list stays short.”
Notice what changed: the second version sets boundaries and success criteria. The AI can now choose meals that share ingredients, avoid meat, and stay under a time limit. You didn’t become “technical”—you just became clear.
Fixing vague prompts with one extra sentence often works. Keep your original request, then add a single line that forces the assistant to make choices the way you would. Examples: “Keep total groceries under $60.” “No activities before 9 a.m.” “Every option must be kid-friendly.”
Engineering judgment: specify the constraints that would cause you to reject the answer. If you hate early mornings, say so. If you can’t exceed $60 for groceries, say so. If you need kid-friendly options, say so. Constraints are not “extra”—they are the difference between generic output and a plan you’ll actually follow.
To make prompting repeatable, use a simple five-part template. You can write it in one paragraph or as labeled lines. The parts are: Goal, Context, Constraints, Format, and Tone. This template works for trips, meals, and budgets because those tasks all require tradeoffs.
Example prompt (you can copy and adapt):
Goal: Create a simple weekly personal budget starter plan.
Context: I’m paid $3,200/month after tax. I pay $1,450 rent, $120 internet/phone, and $260 minimum debt payments. I’m trying to save for an emergency fund.
Constraints: Keep spending realistic; include groceries, transport, fun, and misc; leave at least $300/month for savings.
Format: A table with category, monthly amount, and short notes, then 3 “what-if” adjustments (e.g., if rent increases, if income drops).
Tone: Clear and practical.
What this does: it tells the assistant what “good” looks like, and it prevents common failure modes (forgetting categories, not doing totals, ignoring savings). In later chapters you’ll refine this template for itinerary pacing, dietary restrictions, and budget scenarios—but the structure stays the same.
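This course never requires code, but if you happen to be comfortable with a spreadsheet or a few lines of Python, you can verify the totals yourself instead of trusting the draft. Below is a minimal sketch of that check; the income and the first three costs mirror the example above, while the grocery, transport, fun, misc, and savings amounts are made-up placeholders you would swap for the AI’s numbers.

    # Minimal sketch: verify an AI-drafted budget adds up.
    # Income and the first three lines mirror the example prompt above;
    # the remaining amounts are illustrative placeholders.
    income = 3200
    budget = {
        "rent": 1450,
        "internet/phone": 120,
        "debt minimums": 260,
        "groceries": 450,   # placeholder
        "transport": 150,   # placeholder
        "fun": 200,         # placeholder
        "misc": 150,        # placeholder
        "savings": 420,     # placeholder
    }
    total = sum(budget.values())
    print(f"Total allocated: ${total} of ${income} income")
    assert total <= income, "Budget exceeds income - revise a category"
    assert budget["savings"] >= 300, "Savings fell below the $300 constraint"

If either check fails, that becomes your targeted follow-up: ask the AI to revise the failing category, not to rewrite the whole budget.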
One of the fastest ways to improve AI output is to ask for structure. Structure turns a long paragraph into something you can use immediately: a checklist you can follow, a table you can copy into notes, or step-by-step instructions you can execute.
A “first win” prompt is to request a checklist you can use today. For example:
Goal: Make a travel packing checklist for a 3-day city trip.
Context: I’ll be walking a lot, weather 45–60°F, staying in a hotel, traveling with a carry-on only.
Constraints: Keep it minimal; include essentials, clothing, and tech; avoid duplicates.
Format: Checklist grouped by category; include a short “last-minute before leaving” section.
Tone: Simple and encouraging.
For meals, structure might be: “Give me a 5-day dinner plan in a table with: day, meal, cook time, and leftover plan, then a shopping list grouped by produce/protein/pantry.” For budgets: “Output a table, then bullet-point recommendations.”
Follow-up questions should preserve structure while changing requirements. Examples: “Keep the same table, but replace the two longest recipes with 15-minute options.” “Same checklist, but assume rain all weekend.” “Same budget table, but show what changes if rent rises by $100.”
Saving good outputs matters because you’ll iterate. Copy the structured result into a notes app, a document, or a spreadsheet. Add a short label at the top like “Meal Plan v1” and date it. If you make a change, save “v2” rather than overwriting. Versioning sounds formal, but it’s simply keeping your best drafts so you don’t lose progress.
This short lab builds the habit of improving prompts without jargon. Start with one vague prompt and rewrite it three ways: (1) vague, (2) better with one extra sentence, (3) best using the full 5-part template. Choose a task you actually need—trip, meals, or budget—so the result is useful.
Example task: meal planning
Version A (vague): “Make a meal plan for this week.”
Version B (one extra sentence that fixes it): “Make a meal plan for this week. Dinners only, vegetarian, and each dinner must take 25 minutes or less.”
Version C (full template):
Goal: Create a 5-dinner plan for Monday–Friday.
Context: Two adults. I have a stove and microwave. I can cook once per day and want leftovers for lunch twice this week.
Constraints: Vegetarian, no mushrooms, max 25 minutes active cooking time, keep the ingredient list efficient (reuse at least 6 ingredients across multiple meals), estimated cost under $75 total.
Format: Table with day, dish, active time, leftover note; then a shopping list grouped by produce/dairy/pantry; then a 10-minute prep plan for Sunday.
Tone: Direct and practical.
Now practice the follow-up habit. Don’t restart. Ask one targeted revision question, such as: “Keep everything the same, but replace the most expensive dinner with a cheaper option and update the shopping list.” This is how you build better outputs step by step—like editing a document—rather than hoping the first draft is perfect.
Finally, save the output with a clear name (e.g., “Dinner Plan Mar 27 v1”). When you revise it, save “v2.” That simple versioning habit makes AI useful over time, not just in one-off chats.
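For readers who like to automate, here is an optional, minimal Python sketch of the same versioning habit: each draft gets its own dated, numbered file instead of overwriting the last one. The naming scheme is just an illustration; a notes app works equally well.

    # Minimal sketch: save each AI draft as its own versioned file.
    from datetime import date
    from pathlib import Path

    def save_version(name: str, version: int, content: str) -> Path:
        # Produces names like "dinner-plan_2025-03-27_v1.txt" (scheme is illustrative)
        path = Path(f"{name}_{date.today()}_v{version}.txt")
        path.write_text(content, encoding="utf-8")
        return path

    save_version("dinner-plan", 1, "Mon: lentil soup\nTue: veggie tacos\n...")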
1. In this chapter, what is the most accurate way to think about how AI chat works?
2. What is the main mindset shift the chapter asks you to adopt when prompting?
3. Which set matches the chapter’s 5-part prompt template?
4. When a prompt is vague, what does the chapter recommend to improve results without starting over?
5. Why does the chapter suggest saving good outputs (notes/copy-paste/versioning)?
In Chapter 1 you learned how to “ask” an AI for help. In real life, the next problem is reliability: you want a trip itinerary that fits your dates, not a generic list; a meal plan that respects your dietary needs, not a random recipe dump; a budget that adds up, not a set of rough guesses. This chapter teaches you how to make AI answers sturdier using two tools you already understand from everyday life: rules and checking your work.
Think of an AI chat tool like a fast assistant who has read a lot, but doesn’t have your calendar, your bank balance, or your pantry unless you provide it—and it may confidently fill in missing details. The fix isn’t to “argue” with the tool. The fix is to engineer the conversation so the model has fewer chances to guess and more chances to confirm. You will learn how to add constraints (time, money, preferences, limits), choose the right output format (table vs checklist vs short plan), and run a quick quality check to catch errors early. You’ll also learn how to correct wrong answers without restarting and how to set boundaries so you don’t share more than you should.
A reliable workflow is simple: (1) state your goal, (2) provide context, (3) write constraints as rules, (4) request a specific output format, and (5) ask for a built-in review step. If you practice those five moves, you’ll save time and reduce frustration across trips, meals, and budgets.
Practice note (apply it to each subtopic in this chapter: “Add constraints that matter: time, money, preferences, and limits”; “Choose the right output: table vs checklist vs short plan”; “Create a quick ‘quality check’ prompt to catch errors”; “When the answer is wrong: how to correct without frustration”; and “Set boundaries: what not to share and how to stay safe”): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
AI chat tools produce answers by predicting what text should come next based on patterns in training data and your prompt. That means they can sound confident even when details are missing. If you ask, “Plan a 3-day trip to Chicago,” the tool has no idea whether you like museums or sports, whether you arrive at 8 a.m. or 8 p.m., or whether your budget is $150/day or $500/day. In the absence of those details, it will often guess what most people might want.
Everyday analogy: imagine telling a friend, “Pick dinner for us.” If you don’t mention you’re vegetarian, they might choose a steakhouse—not because they ignored you, but because you didn’t share the constraint. AI behaves similarly, except it tries to be helpful by filling gaps with plausible defaults.
Common beginner mistake: treating an AI response as a verified plan rather than a draft. Another mistake is assuming the tool can “remember” your constraints unless you repeat them clearly, especially when the conversation gets long.
The practical outcome: you will get better results when you treat AI outputs as proposals that must be pinned down with rules and checked. Reliability comes from reducing uncertainty, not from asking the same question louder.
Constraints are the single biggest lever you have for reliability. A constraint is a rule that narrows the solution space: dates, time windows, budget caps, pace, dietary needs, “must include” items, and “do not include” limits. When you provide constraints, you are not being picky—you are supplying the missing inputs that make a plan workable.
Write constraints in measurable terms whenever possible. “Cheap” is vague; “under $120/day including lodging and local transit” is actionable. “Quick meals” is vague; “weeknight dinners in 25 minutes, one pan preferred” is actionable. “Not too busy” is vague; “max 2 major activities per day, 60–90 minutes of downtime in the afternoon” is actionable.
Also name constraints that often get forgotten: opening hours, travel time between stops, cooking equipment, and recurring commitments. For budgets, specify what categories are included and excluded (rent, utilities, subscriptions, debt payments). For trips, define the pace and transport assumptions (walking vs rideshare vs metro).
Finally, choose an output format that makes constraints visible. A table is ideal for schedules and budgets because you can scan for totals and time conflicts. A checklist is ideal for packing, shopping, or “steps to do.” A short plan is best when you need a quick overview before you commit to details.
When you cannot supply all details upfront, ask the AI to list assumptions and ask you clarifying questions before it generates a final plan. This prevents the common loop where you get an answer, notice it doesn’t fit, and start over.
A practical prompt move is to separate “draft” from “final.” For example: “First, ask up to 5 clarifying questions. If I don’t answer, proceed with clearly stated assumptions.” This keeps the conversation moving while still making uncertainty visible.
Ask for assumptions in categories: time, budget, preferences, and limits. This mirrors real planning. For trips, assumptions might include average transit times, typical ticket prices, or the “center” neighborhood you stay in. For meal plans, assumptions might include pantry staples, spice availability, and whether leftovers are acceptable. For budgets, assumptions might include monthly averages for utilities or variable spending.
Engineering judgment tip: you do not need to answer every question. Answer the ones that affect decisions. If an AI asks, “Which museums do you like?” and you only care about “one museum max,” that rule matters more than the museum name. Provide the rule.
This habit makes the tool collaborative: it surfaces gaps early and reduces the chance of confident-but-wrong details.
You do not need to become a professional researcher to fact-check AI output. You need a small set of repeatable checks that catch the most common failures: math errors, missing constraints, and unrealistic timing. Think of it like proofreading a document—quick passes for known trouble spots.
Start with “constraint checks.” Scan the answer and confirm the rules you gave are actually followed. For a trip: dates match, activities fit arrival/departure times, pace is reasonable, and the budget is not quietly exceeded. For meals: dietary restrictions are respected, cooking times are within limits, and the shopping list matches the recipes. For budgets: totals add up, caps are clear, and the remainder is computed correctly.
Next do “reality checks.” Does an itinerary include three neighborhoods in one morning with 10 minutes between them? Does a meal plan assume you already own specialty ingredients? Does a budget double-count a bill? These are common because AI is good at structure but not always good at physical logistics.
Use the AI itself to run a quality check prompt. You can ask it to verify its own output against your constraints and highlight conflicts. Then you, the human, do a quick external verification for critical items: opening hours, prices, reservation requirements, and current transit routes. Beginners often over-check trivia while missing the big risks; focus on anything that would waste money or time.
The outcome is not perfection; it’s confidence that the plan is safe enough to act on.
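If your plan comes back as a table, the constraint check can even be scripted. The sketch below is an optional illustration, not a required step: it scans a made-up meal plan for two of the rules discussed above, an active-time cap and an allergen list.

    # Minimal sketch: scan a meal plan for constraint violations (data is illustrative).
    meals = [
        {"day": "Mon", "dish": "veggie stir-fry", "active_min": 20,
         "ingredients": ["rice", "broccoli", "tofu"]},
        {"day": "Tue", "dish": "peanut noodles", "active_min": 30,
         "ingredients": ["noodles", "peanuts"]},
    ]
    MAX_ACTIVE = 25
    ALLERGENS = {"peanuts", "peanut oil"}

    for meal in meals:
        if meal["active_min"] > MAX_ACTIVE:
            print(f'{meal["day"]}: over the {MAX_ACTIVE}-minute active-time limit')
        banned = ALLERGENS.intersection(meal["ingredients"])
        if banned:
            print(f'{meal["day"]}: contains allergen(s): {", ".join(banned)}')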
Reliability is not only about correctness; it is also about safety. When you plan trips, meals, and budgets, it is easy to share sensitive details without noticing. A good boundary is: share what the AI needs to generate a useful draft, but avoid unique identifiers and anything that could be used for fraud or tracking.
Do not paste: full names tied to addresses, passport or driver’s license numbers, booking confirmation codes, bank account numbers, full credit card numbers, or screenshots containing barcodes/QR codes. Be careful with exact home address, exact daily routines, and your employer’s confidential details. For budgets, you rarely need to share specific account logins or complete transaction histories—summaries and category totals are enough.
Use safer alternatives. Replace identity details with placeholders (“Traveler A”), replace exact addresses with general areas (“staying near Union Square”), and round numbers if precision is not required (“about $3,200 net monthly”). If you need tailored results without personal exposure, provide constraints that describe the situation rather than revealing it: “red-eye flight arrival,” “mobility limit,” “no alcohol,” “low-sodium,” “needs toddler-friendly options.”
Also set boundaries on what you want the AI to do. You can instruct it: “Do not request personal identifiers. If needed, ask for generalized inputs.” This keeps the conversation aligned with your comfort level and reduces accidental oversharing while still producing practical plans.
When an answer is wrong, you do not need to restart. The fastest path is to keep the parts that work and revise the parts that violate constraints. Professionals do this by using a “review and revise” pattern: first diagnose, then edit, then re-check.
Here is a reusable prompt you can paste after any AI-generated plan. It works for itineraries, meal plans, and budgets: “Review the plan above against my stated constraints. First, list each constraint and mark it met or not met. Then revise only the items that fail, keeping everything that passes unchanged. End with a one-line summary of what changed and why.”
This pattern reduces frustration because it turns “wrong” into “debuggable.” You are not debating; you are specifying a test and requesting a fix. Two common mistakes to avoid: (1) adding new constraints every round without confirming priorities (which can make the plan impossible), and (2) asking for a completely new plan instead of a targeted revision (which discards good work).
Practical outcome: you will iteratively converge on plans that fit your dates, pace, dietary needs, and budget limits—with the AI doing most of the rewriting and you doing the decision-making and final verification.
1. Why do AI outputs often feel unreliable in real-life planning tasks like trips, meals, and budgets?
2. Which prompt change best reduces the AI’s chances of making incorrect assumptions?
3. According to the chapter, what is the main purpose of choosing a specific output format (table vs checklist vs short plan)?
4. What is a “quality check” prompt meant to do in your workflow?
5. When the AI gives a wrong answer, what approach does the chapter recommend?
Trip planning is where prompt engineering starts to feel “real.” A good itinerary isn’t a list of famous places—it’s a plan that fits your dates, energy level, budget, and the messy details of real life: check-in times, transit delays, weather, and the fact that you don’t want to sprint across town three times a day.
In this chapter you’ll learn a repeatable workflow: (1) write a trip brief, (2) ask for an itinerary in a usable format with time blocks and travel time, (3) make it budget-aware with a daily spend limit, (4) generate packing and prep timelines, and (5) keep it flexible with swaps and rainy-day backups. You’ll also practice asking follow-up questions that improve results instead of restarting from scratch.
Remember what AI chat tools are and are not. They are great at assembling options, structuring plans, and adapting to constraints you give. They are not a live travel agent with guaranteed up-to-date hours or availability. Your prompts should therefore include “assumptions” and “verification steps” (for example: “flag anything that likely needs reservations” and “estimate transit time but label it as an estimate”). That mindset—engineering judgment—turns a generic plan into one you’d actually take.
Common mistake: asking “Plan my trip to Tokyo” and hoping the AI guesses everything. Instead, treat your prompt like a briefing note you’d give a smart friend. The more clearly you specify pace, priorities, and limits, the more the itinerary will match your reality.
Practice note (apply it to each subtopic in this chapter: “Build a trip brief: dates, pace, interests, and non-negotiables”; “Generate an itinerary with time blocks and travel time”; “Budget-aware travel prompts: set a daily spend limit”; “Packing list and prep timeline prompts”; and “Make it flexible: one-day swaps and rainy-day backups”): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
The fastest way to get a useful itinerary is to start with a “trip brief.” This is a compact set of inputs that prevents the AI from filling gaps with assumptions you don’t want. Your brief should include destination (and neighborhoods if relevant), dates and arrival/departure times, and your travel style: relaxed, moderate, or packed. Add non-negotiables (must-do) and hard constraints (budget, walking tolerance, dietary needs, kid nap times, etc.).
Here’s a practical prompt template you can reuse. Notice how it forces clarity without becoming a novel: “Trip brief. Destination: ____. Dates: ____, arriving ____ and departing ____. Pace: relaxed / moderate / packed. Interests: ____. Non-negotiables: ____. Hard constraints: daily budget $____, walking tolerance ____, dietary needs ____. Build a day-by-day plan that respects every item above, and flag any assumptions you make.”
Engineering judgment: choose constraints that actually drive the plan. “We like museums” is helpful; “We like fun” is not. “We have a 09:00 meeting on day 2” matters more than “We like coffee.” If you’re unsure, ask the AI to help you build the brief: “Ask me up to 8 questions to complete a trip brief; prioritize questions that affect pacing and cost.”
Common mistake: leaving out starting point and time windows. If you land at 17:00 and the itinerary starts with a 10:00 museum, you’ll spend time fixing avoidable errors. Put arrival/departure and check-in/check-out right in the brief.
Once your brief is set, request an itinerary in a format you can execute. The two most usable formats for beginners are (1) day-by-day with time blocks and (2) morning/afternoon/evening (MAE). Both help you see pacing and prevent the “laundry list” problem.
Ask explicitly for travel time and buffers. Without that, the AI may schedule back-to-back activities across town. A strong output request sounds like:
Prompt: “Create a 3-day itinerary in morning/afternoon/evening blocks. For each block include: activity, neighborhood, estimated transit time from prior stop (walk/metro/taxi), and a 20–40 minute buffer. Keep to max 12k steps/day. End each day with 2 dinner options in different price ranges.”
If you want more precision, ask for a time-block schedule. For example: “09:00–11:00, 11:00–12:00 transit/break, 12:00–13:30 lunch…” This makes it easier to spot overload and to align with real-world constraints like museum hours or family routines.
Common mistake: asking for “hidden gems” without defining what counts. Add a constraint: “avoid anything requiring a car,” or “include 1 lesser-known spot per day within 20 minutes of the main area.” Practical outcome: you get novelty without logistical chaos.
Follow-up technique: instead of restarting, tweak one variable. Example: “Day 2 feels too busy. Reduce walking by 30%, keep the same non-negotiables, and swap in one seated activity (boat tour or viewpoint café).”
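Buffers are easy to eyeball in a time-block schedule, but if you prefer a mechanical check, here is a minimal optional sketch that flags any pair of consecutive blocks with less than a 20-minute gap. The times and activities are illustrative.

    # Minimal sketch: flag itinerary blocks with under a 20-minute buffer between them.
    from datetime import datetime

    blocks = [  # (start, end, activity) - illustrative times
        ("09:00", "11:00", "museum"),
        ("11:10", "12:30", "old town walk"),
        ("13:15", "14:30", "lunch"),
    ]

    def minutes(t: str) -> int:
        dt = datetime.strptime(t, "%H:%M")
        return dt.hour * 60 + dt.minute

    for (_, end, name), (start, _, next_name) in zip(blocks, blocks[1:]):
        gap = minutes(start) - minutes(end)
        if gap < 20:
            print(f"Only {gap} min between {name} and {next_name} - add a buffer")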
AI can generate endless ideas, which is exactly why you should limit it on purpose. A simple rule: ask for “3 choices” whenever you’re deciding. This keeps you in control and makes comparison easy.
Use “3 choices” at key decision points: neighborhoods to stay in, day-trip candidates, dinner options, or which museum fits your interests. Specify the comparison criteria so the choices are meaningfully different:
Prompt: “Give me 3 options for a half-day activity on Day 1 afternoon that fit: low walking, mostly indoors (weather risk), and under €25 per person. For each option include: why it matches, estimated duration, transit from Baixa, and one nearby casual dinner suggestion.”
This pattern is also perfect for budget-aware planning. Instead of one itinerary, ask: “Provide 3 versions of Day 2: budget, mid-range, and splurge, each with estimated cost per person.” You’ll quickly see where the money goes and can choose intentionally.
Common mistake: requesting “top 10” lists and then feeling overwhelmed. Ten items still require you to do the hard work of narrowing. Asking for three with clear constraints forces the AI to do the first round of prioritization—where it shines.
Practical outcome: you get decision-ready output that matches your pace and daily spend limit, without spending an hour scrolling through suggestions you won’t use.
Trips succeed or fail on comfort factors: stairs, heat, restroom access, seating breaks, sensory overload, or whether someone gets motion sick. Good prompts name these needs plainly and ask the AI to plan around them without judgment.
Start by stating the need as a constraint, then tell the AI how to adapt: “minimize stairs,” “prefer elevators,” “include a seated break every 2 hours,” or “avoid long bus rides.” If you have mobility limitations, ask for routes that cluster activities by neighborhood and prioritize accessible transit. For families, include nap windows and playground time.
Prompt: “Plan Day 1 for Barcelona with these comfort constraints: max 8k steps, avoid steep hills and long stair climbs, include restroom access at least every 2–3 hours, and schedule a 60-minute seated break mid-afternoon. Provide 2 lunch options with vegetarian-friendly choices. Output in time blocks with short walking segments.”
Common mistake: hiding constraints until the end (“Also we can’t walk much”). Put them near the top of the prompt so they guide every choice. Practical outcome: the itinerary feels humane—built around your body and preferences, not an idealized traveler.
Real trips include friction: attractions close unexpectedly, transit runs late, and popular places require reservations. You can prompt the AI to reduce risk by identifying “fragile” parts of the plan and adding buffers, alternatives, and checkpoints.
First, ask for estimated transit times and label them as estimates. Second, ask the AI to flag anything likely to need advance booking. Third, request a “verification checklist” you can complete in 10 minutes (hours, ticket links to search, holiday closures, etc.).
Prompt: “Build a 4-day itinerary for Rome. For each day: include estimated transit time between stops, a 30-minute buffer before any timed entry, and mark items that likely require reservations. Add a short verification checklist at the end: what to confirm (hours, closures, ticket requirements), and what to book first.”
Common mistake: over-optimizing the schedule with no slack. A plan that only works if everything is perfect will fail. Practical outcome: you get an itinerary that survives delays, long lines, and weather changes.
This is also where one-day swaps and rainy-day backups belong. Ask: “For each day, include one indoor backup within 15 minutes if weather turns.” Or: “Provide a swap plan: if Day 3 morning gets rained out, swap with the best indoor block from another day while keeping reservations intact.”
This lab walks you from zero to a usable 3-day itinerary using the same template every time. Choose any destination you like; the point is practicing the prompting workflow, not the city itself.
Step 1 — Write a trip brief (copy/paste and fill): “Destination: ____. Dates and arrival/departure times: ____. Pace: relaxed / moderate / packed. Interests: ____. Non-negotiables: ____. Daily spend limit: $____. Comfort constraints (walking tolerance, dietary needs, kid schedules): ____.”
Step 2 — Generate the first itinerary: “Using the trip brief above, create a 3-day itinerary in morning/afternoon/evening blocks. Include estimated transit time between blocks and a rough cost per day that stays within my daily spend limit. Keep activities clustered by neighborhood.”
Step 3 — Improve with follow-ups (don’t restart): Pick one improvement at a time. Examples: “Day 2 afternoon is too expensive—replace it with 3 cheaper options under $20.” “We want more rest—add a café break each afternoon and reduce walking.” “Add a packing list and a prep timeline: 2 weeks, 3 days, and morning-of.”
Step 4 — Add flexibility: “Provide 2 one-day swap options that keep reservations safe, plus one rainy-day backup per day.”
Common mistake: changing five things at once and getting an unpredictable rewrite. Engineering judgment here means iterating in small, testable edits. Practical outcome: a plan with clear time blocks, realistic transit, a daily spend limit, a packing list, and backup options—something you can actually follow when you land.
1. Which prompt best reflects the chapter’s recommended workflow for planning a trip?
2. Why does the chapter recommend requesting time blocks and travel time in an itinerary output?
3. What is the main purpose of setting a daily spend limit in your travel prompt?
4. Which statement best matches the chapter’s guidance on what AI chat tools are and are not?
5. According to the chapter, what’s the best way to improve a travel plan after the first AI response?
Meal planning is a perfect “real life” use case for prompt engineering because the problem has clear goals, lots of constraints, and a need for structured outputs. A good plan should match your diet, your schedule, your cooking skill, and your budget—while still being realistic to shop for and cook. This chapter gives you a repeatable workflow and prompt patterns that produce (1) a weekly meal plan, (2) a leftovers strategy so you cook once and eat twice, (3) beginner-friendly recipes, and (4) a categorized shopping list with quantities and priorities.
The key idea: don’t ask the AI for “healthy meals.” Ask it to solve a planning brief. Your brief is the “contract” that defines what counts as a good answer. Then you choose an output format that is easy to use (a table beats a paragraph), and you add follow-up questions that tighten the plan instead of restarting.
Throughout this chapter, you’ll reuse the prompt template you learned earlier, lightly adapted: Goal (what you want), Context (who/where/how you cook), Constraints (diet, time, budget), and Output (table, list, quantities); here, “Output” folds together the Format and Tone parts from Chapter 1. When you build prompts this way, you’re exercising engineering judgment: deciding what variables matter, defining them clearly, and requesting outputs you can act on.
Common mistakes to avoid: leaving portions ambiguous (“for a family”); mixing non-negotiables (allergies) with preferences (dislikes) without ranking them; forgetting pantry items you already have; and accepting a meal plan that doesn’t convert cleanly into a shopping list. The sections below fix those problems with concrete prompt patterns.
Practice note (apply it to each subtopic in this chapter: “Meal-planning brief: dietary needs, cooking time, and budget target”; “Weekly meal plan prompts that reuse ingredients to save money”; “Turn meals into a categorized shopping list”; “Leftovers plan: cook once, eat twice prompts”; and “Recipe simplifier: make meals beginner-friendly and faster”): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
Start meal planning by locking down constraints. This is not busywork—it prevents unusable results. AI will happily suggest peanut satay to someone with a peanut allergy unless you state it clearly and early. In your brief, separate hard constraints (must follow) from soft constraints (preferences). Hard constraints include allergies, medical dietary rules, religious restrictions, and “must be gluten-free.” Soft constraints include dislikes, “try to reduce sugar,” or “prefer chicken over beef.”
Next, define portions and appetite assumptions. “Serves 4” is not enough if two people are adults and one is a toddler, or if you want leftovers for lunch. Specify: number of people, whether lunches are included, and whether you want leftovers. This one change dramatically improves shopping quantities and prevents underbuying protein or vegetables.
Prompt pattern (meal-planning brief): “Goal: Create a 7-day dinner plan. Context: 2 adults; beginner cook; typical grocery store. Constraints: peanut allergy (avoid all peanuts/peanut oil); dislikes: mushrooms, olives; target 4 servings per dinner so we have 2 leftover lunches each weeknight; include at least 3 vegetarian dinners. Output: table with day, meal, prep time, and note on leftovers.”
Engineering judgment tip: if something is safety-critical (allergies), repeat it in both the constraints and the output request: “Add an ‘allergen check’ note per meal.” Repetition reduces error.
Most meal plans fail because they ignore time and energy. Tuesday night is not the same as Sunday afternoon. Your prompt should reflect two cooking modes: weeknight (low time/low energy) and weekend (batch cooking, longer recipes). A practical prompt asks the AI to assign meals based on these modes and to include prep notes that move work to easier days.
Define time limits in minutes, not vague terms like “quick.” Also define your equipment: stovetop, oven, slow cooker, air fryer, rice cooker, microwave. Equipment constraints let the AI recommend realistic methods (e.g., sheet-pan meals if you have an oven; slow-cooker chili if you don’t want active cooking).
Prompt pattern (weeknight vs weekend): “Goal: Plan dinners for 7 days. Context: I cook Mon–Fri after work; weekends flexible. Constraints: Mon/Wed/Thu must be ≤25 minutes active time; Tue is ‘no-cook’ or microwave-friendly; Sat can be a 90-minute batch cook that creates components used in 2 other meals. Output: table with day, meal, active time, total time, and a ‘prep ahead’ note.”
Common mistake: asking for “30-minute meals” and then getting plans that require 45 minutes of chopping. Fix it by specifying “active time” and requesting that the AI suggest shortcuts (pre-chopped veg, frozen vegetables, rotisserie chicken) when appropriate.
Cost control is easier when you treat it as a design constraint. Instead of “make it cheap,” pick a weekly target (for example, “$85 for dinners + lunches”) or a per-serving ceiling (for example, “≤$3.50 per serving”). Then ask the AI to use budget levers: seasonal produce, store brands, frozen vegetables, dried beans, eggs, and stretching meat with grains and legumes.
One of the best money-saving tactics is ingredient reuse. If you buy cilantro, plan two meals that use it. If you open a jar of salsa, plan a second recipe that finishes it. Your prompt should instruct the model to reuse ingredients across multiple meals and to avoid “one-off” ingredients that appear once and then sit in the fridge.
Prompt pattern (cost + swaps): “Goal: 7-day plan under $90 total for 2 adults (dinners + 4 lunches). Constraints: prioritize low-cost proteins (beans, eggs, canned tuna, chicken thighs); avoid one-off ingredients; reuse at least 8 ingredients across 3+ meals. Output: for each meal, include ‘cost-saving swap’ suggestions and list the top 10 reused ingredients.”
Engineering judgment tip: ask for alternatives, not perfect accuracy. Grocery prices vary by location, so request “relative cost tiers” (low/medium/high) and swaps that reduce cost without changing the dish’s core flavor.
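Per-serving ceilings are just division, so they are easy to sanity-check yourself. The sketch below is an optional illustration of the tier idea; the dishes and cost estimates are placeholders, since real prices vary by store and region.

    # Minimal sketch: estimated per-serving cost tiers (all figures illustrative).
    CEILING = 3.50
    meals = {  # dish: (estimated total cost, servings)
        "bean chili": (9.00, 6),
        "chicken thigh tray bake": (16.00, 4),
    }
    for name, (est_cost, servings) in meals.items():
        per_serving = est_cost / servings
        tier = "low" if per_serving < 2 else "medium" if per_serving <= CEILING else "high"
        print(f"{name}: ~${per_serving:.2f}/serving ({tier})")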
Output format is part of the prompt, not an afterthought. A paragraph plan is hard to execute and almost impossible to shop from. You want a table that makes decisions visible: what you’re eating, when you’re cooking, and what you should prep ahead. Think like a project manager: each day is a task with dependencies (marinate chicken, cook rice, chop vegetables).
Ask for a table with consistent columns. At minimum: day, meal, servings, active time, and prep notes. If you’re using leftovers, include “makes leftovers? (Y/N)” and “leftovers used when.” If you want beginner-friendly cooking, request “skill level” and “key technique” (boil, roast, sauté, assemble). This is also where you add your “cook once, eat twice” plan: tell the AI to schedule a larger cook on one day and to assign the leftovers to another day.
Prompt pattern (table output): “Output: Provide a Markdown table with columns: Day, Dinner, Servings, Active Time, Total Time, Prep-Notes (1 sentence), Leftovers (what/when), Beginner Shortcut. Keep prep notes short and action-based (e.g., ‘Roast extra broccoli for Thursday bowls’).”
Common mistake: letting the AI add extra commentary between rows. Prevent this by saying “No extra text outside the table except a 5-bullet prep summary.”
A meal plan becomes useful when it turns into a shopping list you can trust. Don’t just ask for “a shopping list”—ask for a categorized list with quantities and priorities. Categories should match how stores are laid out (produce, dairy, meat/seafood, pantry, frozen, bakery, spices/condiments). Quantities should be in common shopping units (e.g., “2 lb,” “1 bunch,” “2 cans”). Priorities help when you need to cut cost or the store is missing items.
Also tell the AI what you already have. Otherwise it will buy salt, oil, and rice every time. Include a “pantry on hand” line in your prompt and request that the shopping list exclude those items. If you want to minimize waste, request that the AI consolidate overlapping ingredients (e.g., “1 large onion” instead of “½ onion” twice).
Prompt pattern (shopping list): “Using the 7-day table above, generate a shopping list grouped by store section. Include quantities summed across all meals, and label each item as P1/P2/P3. Exclude pantry items I already have: olive oil, salt, pepper, soy sauce, rice. After the list, add a ‘swap list’ with cheaper substitutes for the top 8 cost drivers.”
Engineering judgment tip: quantities are estimates. If precision matters (baking), ask for exact units; otherwise, ask for “practical shopping units” and a note where rounding happens (e.g., “buy 2 limes; plan uses ~1.5”).
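Consolidation itself is simple bookkeeping: sum each ingredient’s quantity across all recipes, then round up to practical shopping units. If you ever want to double-check the AI’s list, here is a minimal optional sketch; the recipes and quantities are illustrative.

    # Minimal sketch: consolidate ingredient quantities across recipes (data illustrative).
    from collections import defaultdict

    recipes = {
        "veggie tacos": {"onion": 0.5, "black beans (cans)": 1, "limes": 1},
        "bean chili":   {"onion": 1.0, "black beans (cans)": 2, "limes": 0.5},
    }
    totals = defaultdict(float)
    for ingredients in recipes.values():
        for item, qty in ingredients.items():
            totals[item] += qty

    for item, qty in sorted(totals.items()):
        print(f"{item}: {qty:g}")  # e.g., "limes: 1.5" -> buy 2, per the rounding note above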
This lab shows how to generate a complete one-week plan and shopping list from just five inputs. The goal is not to create the “perfect” plan on the first try; it’s to practice a workflow where you can adjust one variable (time, budget, diet) without rewriting everything.
Your 5 inputs: (1) people and portions, (2) cooking level and tools, (3) allergies, must-avoids, and dislikes, (4) time limits for weeknights and the weekend batch cook, and (5) a weekly budget cap.
Copy/paste lab prompt (fills every lesson in this chapter):
“Goal: Create a 7-day meal plan (dinners + 4 leftover lunches) and a shopping list. Context: People/portions: [e.g., 2 adults; dinners produce 4 servings so we have leftovers]. Cooking level/tools: [beginner; oven + stovetop; no slow cooker]. Constraints: Allergies/must-avoid: [e.g., no peanuts, no shellfish]. Dislikes: [e.g., mushrooms, olives]. Time: Weeknights ≤25 minutes active time; weekend batch cook up to 90 minutes that creates leftovers for 2 meals. Budget: ≤$[X] for the week; reuse ingredients across meals; avoid one-off ingredients. Output: (1) A table with Day, Dinner, Servings, Active Time, Total Time, Prep Notes, Leftovers Plan, Beginner Shortcut. (2) A categorized shopping list with quantities summed for the week and P1/P2/P3 priorities. (3) A short ‘cook once, eat twice’ summary describing what is cooked in batches and where it’s reused. (4) A ‘recipe simplifier’ note for any meal that seems complex: reduce steps to 5–7 and suggest one shortcut.”
Follow-up questions (improve results without starting over): If the plan is too expensive, ask: “Replace the two highest-cost dinners with cheaper options that reuse existing ingredients; keep the rest unchanged.” If time is the issue, ask: “Convert Wednesday and Thursday to no-chop meals using frozen/pre-cut items; keep calories similar.” If a meal is intimidating, ask: “Rewrite that recipe for a beginner: 6 steps max, list tools, and include timing cues.”
When you can reliably go from five inputs to a table + shopping list, you’ve learned the core skill: turning a vague desire (“eat better this week”) into a constrained plan you can execute and afford.
1. Why does the chapter describe meal planning as a strong “real life” use case for prompt engineering?
2. What is the key shift the chapter recommends instead of asking the AI for “healthy meals”?
3. Which prompt structure does the chapter say you should reuse to build effective meal-planning prompts?
4. Which output format does the chapter highlight as typically more usable for a meal plan than a paragraph?
5. Which is a common mistake the chapter warns against that can make the meal plan hard to shop for?
Budgets are perfect “prompt engineering” territory because they are mostly structured thinking: you decide categories, define constraints, and ask for clear outputs. The AI can’t see your bank account, and it shouldn’t be trusted as a financial authority—but it can act like a fast worksheet builder and a scenario simulator. In this chapter you’ll learn a repeatable way to (1) create a simple monthly budget with categories and totals, (2) spot “spending leaks” with small changes, (3) plan toward goals like savings, debt payoff, or a trip fund, (4) run what-if scenarios (rent change, new subscription, lower groceries), and (5) write a reusable weekly money check-in script so you improve your plan without starting over.
The core skill is asking for math and structure explicitly. Many beginners type “make me a budget” and get generic advice. Instead, you’ll use the same prompt template you’ve practiced: goal (what you want), context (your numbers and situation), constraints (rules like “savings first” or “no dining out cuts below $X”), and output (tables, totals, and next actions). When you do this, the AI becomes a spreadsheet assistant: it organizes categories, totals them, summarizes, and helps you compare options side-by-side.
One more important mindset: budgeting is iterative. You don’t need a “perfect” plan on day one. Your goal is a plan that is easy to check weekly, easy to adjust, and honest about uncertainty. You’ll build that step-by-step in the sections below.
Practice note (apply it to each subtopic in this chapter: “Create a simple monthly budget with categories and totals”; “Find spending leaks: prompts that suggest small changes”; “Plan for goals: savings, debt payoff, or a trip fund”; “Run what-if scenarios: rent change, new subscription, lower groceries”; and “Write a weekly money check-in script you can reuse”): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
A simple monthly budget starts with four building blocks: income, fixed costs, variable costs, and savings/financial goals. The prompt mistake to avoid is mixing them together. If you list expenses without separating fixed vs variable, it becomes hard to run what-if scenarios later (because rent behaves differently than groceries).
Workflow: First, tell the AI what “monthly” means for you (paycheck timing matters). Then provide your after-tax income and the expenses you already know. Finally, set a default savings rule (for example, “aim for 10% savings” or “save $300 first, then allocate the rest”).
Example prompt (copy and fill):
Goal: Build a simple monthly budget with totals and leftover cash.
Context: My take-home income is $____ per month. Fixed costs: rent $____, utilities $____, insurance $____, phone $____, transit pass $____. Variable costs: groceries $____, dining out $____, gas $____, entertainment $____, personal care $____. Current savings/debt: I want to save $____ and pay $____ toward debt monthly.
Constraints: Treat fixed costs as non-negotiable. Keep groceries realistic. If there is a deficit, suggest 3 small reductions from variable categories only.
Output: A table with category, monthly amount, and a final line showing totals and leftover/deficit.
This structure directly supports the chapter outcomes: you get a full monthly budget with categories and totals, plus a starting point for “spending leak” detection (small, realistic changes) and goal planning (savings or debt payoff).
Categories are not about judgment; they are about making decisions easier. A practical approach is to sort spending into needs and wants using plain language you can stick to. Needs are the costs that keep life running (housing, basic groceries, necessary transport). Wants are optional or flexible (streaming, restaurants, upgrades, hobbies). Many people fail at budgeting because categories are either too vague (“misc”) or too detailed (“coffee—latte—oat milk”).
Engineering judgment: Choose categories at the level where you can take action. If “groceries” is consistently high, you can act on it. If you split groceries into 15 subcategories, you will stop tracking. Start simple and add detail only where it changes behavior.
Prompt to refine categories:
Goal: Help me set up budget categories I will actually use.
Context: Here are my recurring expenses and common spending areas: ____ (list).
Constraints: Use 8–12 total categories. Label each as Need or Want. Create one “Sinking Funds” category for irregular expenses (car repairs, annual fees).
Output: A simple category list with 1–2 examples per category and a one-sentence rule for what belongs there.
This step makes later leak-finding easier: the AI can’t fix what isn’t visible. If you group all flexible spending into a few clear buckets, you can reduce the right thing without feeling like you’re cutting “everything.”
AI chat tools will do math only as well as you specify the task. If you ask, “Is this budget good?” you’ll get opinions. If you ask, “Total these categories, calculate percentages, and summarize,” you’ll get usable analysis. The key is to request the calculations and the format, then ask for a short interpretation.
Common mistakes: (1) forgetting to state whether numbers are monthly, weekly, or per paycheck; (2) mixing pre-tax and after-tax income; (3) not asking the AI to show its totals; (4) accepting a calculation without checking it. Use the AI to compute quickly, but verify important totals (especially if you’ll use them for real decisions).
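Mistake (1) deserves a quick worked example (hypothetical numbers): if you take home $1,600 every two weeks, your monthly income is not $3,200, because there are 26 paychecks a year; the monthly figure is $1,600 × 26 ÷ 12 ≈ $3,467. Stating "use monthly math; income $3,467" up front keeps every percentage on the right base.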
Prompt for clear math:
Goal: Summarize my budget with totals and percentages of income.
Context: Monthly take-home income: $____. Categories and amounts: rent $____, utilities $____, groceries $____, dining $____, transport $____, insurance $____, subscriptions $____, savings $____, debt payoff $____, other $____.
Constraints: Use monthly math. If any category is missing, ask me a clarifying question rather than guessing.
Output: 1) A table with amount and % of income, 2) totals for Needs, Wants, Savings/Debt, 3) a 5-bullet summary of what stands out.
Once you have percentages, you can set practical targets (for example, reduce “wants” from 22% to 18%) and create goal-focused plans, like increasing a trip fund by reallocating a single category rather than trying to “spend less” everywhere.
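To make a percentage target concrete, translate it into dollars (using a hypothetical income of $3,200, matching this chapter's lab numbers): 22% of $3,200 is $704 per month and 18% is $576, so moving "wants" from 22% to 18% means finding about $128 of monthly cuts. Asking the AI to show the dollar amount next to every percentage keeps targets actionable.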
“What-if” planning is where AI feels like a superpower—because it can duplicate your budget and adjust one variable fast. This is how you decide between options: a rent increase, adding a subscription, lowering groceries, changing debt payments, or speeding up savings for a trip. The best scenario prompts keep everything constant except the specific change, then compare outcomes side-by-side.
Workflow: Start from your baseline budget (a single source of truth). Then define scenarios A/B/C with only the changes listed. Ask for a comparison table and a short recommendation based on your constraints (like “keep at least $200 buffer” or “save $500/month for travel”).
Example scenario prompt:
Goal: Run budget scenarios and compare them side-by-side.
Context: Baseline monthly income $____. Baseline expenses: (paste your table).
Constraints: Maintain minimum savings of $____ per month and a $____ buffer (leftover cash). Do not reduce fixed costs. Only adjust variable categories as needed.
Scenarios: A) Rent +$150. B) Add an $18 subscription. C) Reduce groceries by $60 and dining out by $40.
Output: A 4-column table (Baseline, A, B, C) showing totals, savings, leftover/deficit, plus a brief note on tradeoffs for each.
Use this to plan for goals: if your trip fund needs $1,200 in six months, ask the AI to translate that into a monthly target and show which scenario gets you there with the least pain. You’re not asking the AI to “decide for you”—you’re using it to expose the consequences quickly.
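That translation is simple enough to sanity-check by hand: $1,200 ÷ 6 months = $200 per month. Any scenario whose leftover cash is below $200 cannot fund the trip on schedule, no matter how tidy its comparison table looks.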
Budgeting with AI requires guardrails because the tool will happily produce neat tables even when inputs are incomplete. Your job is to control uncertainty. Good prompts tell the AI when to estimate, when to ask questions, and how to label assumptions.
Guardrail rules you can include in prompts: do not invent numbers; label every estimate with a range and ask for confirmation; ask a clarifying question instead of guessing; show calculation steps for totals; and remind you to verify against bank or credit statements.
Example guardrail prompt block (add to any prompt):
Constraints: Do not invent numbers. If you must use an estimate, label it “ESTIMATE,” provide a range, and ask me to confirm. Show calculation steps for totals. Remind me to verify against bank/credit statements.
Practical outcome: your budget becomes trustworthy enough to act on. You’ll also avoid a common beginner trap: making decisions based on a polished-looking budget that is missing irregular expenses (annual fees, car maintenance, gifts). This is why “sinking funds” matter and why you should schedule a weekly check-in to correct the plan before small errors become big surprises.
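Sinking funds are just annualized arithmetic you can verify yourself (hypothetical numbers): $600 of yearly car maintenance plus $120 of annual fees is $720 per year, or $60 per month ($720 ÷ 12). Setting aside that $60 each month turns a future "surprise" into a planned expense.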
In this practice lab, you’ll feed the AI a simple list of numbers and have it build a budget, identify leaks, create a goal plan, run a what-if, and draft a reusable weekly check-in script. Treat this as a template you can reuse monthly.
Use these sample numbers (or replace with your own): Income (take-home) $3,200. Rent $1,350. Utilities $160. Phone $55. Internet $60. Insurance $120. Transit $90. Groceries $420. Dining out $180. Subscriptions $35. Gas/ride-share $70. Personal/household $95. Entertainment $60. Minimum debt payment $150.
Lab prompt (one message):
Goal: Build my monthly budget, find small spending leaks, and create a weekly check-in script.
Context: Here are my monthly numbers: Income $3,200. Expenses: rent 1,350; utilities 160; phone 55; internet 60; insurance 120; transit 90; groceries 420; dining 180; subscriptions 35; gas/ride-share 70; personal/household 95; entertainment 60; minimum debt 150.
Constraints: Separate fixed vs variable. Add a “sinking funds” line item equal to 3% of income. Aim to save at least $250/month. If savings is below target, suggest 5 leak fixes of $10–$30 each (no extreme cuts).
Output: 1) Budget table with totals and leftover, 2) Needs vs Wants totals, 3) leak fixes with estimated monthly impact, 4) a goal plan: “trip fund $900 in 6 months” and “debt payoff extra $50/month,” 5) two what-if scenarios side-by-side: groceries -$50 and rent +$100, 6) a reusable weekly money check-in script (5–8 bullet steps) I can run every Sunday.
What to look for in the result: Are fixed costs clearly identified? Are the totals correct? Did the AI keep leak fixes small and realistic? Did the weekly check-in script tell you what to review (recent transactions, category drift, upcoming irregular expenses) and what decision to make (adjust one category, cancel/keep one subscription, increase/decrease the trip fund)? If anything feels off, don’t restart—follow up: “Recalculate totals,” “Move this expense to fixed,” or “Replace estimates with my actual numbers.” That iterative loop is the skill you’re building.
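Before trusting the output, it's worth checking the headline math yourself. With the sample numbers above: fixed costs (rent, utilities, phone, internet, insurance, transit) total $1,835; variable costs (groceries, dining, subscriptions, gas/ride-share, personal/household, entertainment) total $860; minimum debt is $150; and sinking funds at 3% of income is $96. That is $2,941 out, leaving $259, which just clears the $250 savings target. If the AI's totals differ, ask it to recalculate before you act on anything else.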
1. Why does the chapter describe budgets as good “prompt engineering” territory?
2. What is the key improvement over typing “make me a budget”?
3. Which description best matches the role the AI should play in this chapter?
4. What is the main purpose of running “what-if” scenarios in budgeting prompts?
5. What mindset does the chapter recommend for budgeting over time?
By now you can produce solid itineraries, meal plans, and budgets with AI. The next step is keeping your best work so you can repeat it on demand. This chapter is about building a personal prompt library: a small set of prompts you trust, written as templates, organized so you can find them quickly, and designed to produce outputs you can act on immediately.
A prompt library is not a folder of random “good chats.” It’s a set of reusable tools. Each tool has a job (plan a week of dinners, draft a weekend itinerary, run a budget what‑if), a consistent input format (your variables), and a predictable output format (tables, checklists, next steps). When you treat prompts like tools, you reduce rework and avoid the common trap of rewriting from scratch every time.
In this chapter you will: (1) standardize your prompt structure, (2) turn your best prompts into fill‑in‑the‑blank templates, (3) create a one‑message “planner” prompt that asks you questions first, (4) use small examples to show what “good” looks like, (5) iterate with a draft‑critique‑revise loop, and (6) combine trips, meals, and budgets into one capstone weekly plan.
As you read, copy the templates into a notes app and start naming them. Your goal is not to memorize prompts; your goal is to build a system that makes planning faster than thinking about planning.
Practice note (apply it to each subtopic in this chapter: turning your best prompts into fill-in-the-blank templates; creating a one-message "planner" prompt that asks you questions first; improving results with examples that show the AI what "good" looks like; making outputs actionable with next steps, deadlines, and checklists; and the capstone, a complete week plan combining meals, spending, and one outing): document your objective, define a measurable success check, and run a small experiment before scaling. Capture what changed, why it changed, and what you would test next. This discipline improves reliability and makes your learning transferable to future projects.
The fastest way to improve results is to stop improvising your prompt format. A consistent structure reduces ambiguity and makes it easier to diagnose what went wrong. Use a simple template every time: Goal → Context → Constraints → Output. When you store prompts in a library, this structure becomes your “house style.”
Goal is a single sentence that defines success. “Create a 3-day itinerary” is okay; “Create a 3-day itinerary that balances museums and food with a relaxed pace” is better. Context is what the AI needs to know: dates, location, who’s going, preferences, and what you already have (hotel booked, pantry staples, fixed bills). Constraints are the rules: budget ceilings, dietary requirements, time limits, walking limits, cooking equipment, and must-do items. Output is the format: a table, a checklist, a shopping list grouped by store section, or a budget with categories and totals.
Engineering judgment: write constraints like you would write acceptance criteria. “Cheap meals” is vague; “$80 total groceries for 5 dinners + 5 lunches” is testable. Common mistake: mixing constraints into the goal without numbers (“on a budget,” “quick”) and then being surprised by results. Practical outcome: once you standardize the structure, you can swap in new variables (city, diet, paycheck) while keeping the same prompt skeleton.
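A quick before-and-after sketch, with placeholder numbers to adapt: vague: "Plan cheap, quick meals for the week." Testable: "Plan 5 dinners and 5 lunches. Constraints: groceries $80 total; each dinner under 30 minutes of active cooking; no specialty equipment." The second version can fail visibly, which is exactly what makes it improvable.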
Tip: add one line that prevents overreach: "If information is missing, ask up to 7 questions before planning." This single sentence dramatically reduces hallucinated assumptions (wrong opening hours, unrealistic grocery totals) and keeps the conversation efficient.
Templates are prompts with blanks. Your goal is to capture the “shape” of a great request and leave only the personal details as variables. Use clear placeholders like {CITY}, {DATES}, {BUDGET}, {DIET}, and {PACE}. This turns one good prompt into a permanent tool.
Start by taking a prompt that produced a result you liked. Copy it into your notes app and replace the specifics with variables. Then add a short "variables block" at the top so you can fill it quickly, like the sketch below; scanning one block is much faster than hunting through paragraphs for details to edit.
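A minimal variables-block sketch (the placeholder names are only a suggestion; use whatever you'll remember):
Variables: {CITY} = ____ | {DATES} = ____ | {BUDGET} = ____ | {DIET} = ____ | {PACE} = ____ (default: moderate)
Paste the block above the template body, fill the blanks, and send the whole thing as one message.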
Now create a one-message “planner” prompt that asks you questions first. This is the opposite of guessing: you instruct the AI to interview you before it plans. The trick is to limit the questions and make them multiple choice when possible.
Planner prompt template (copy/paste): “You are my planning assistant. First, ask me up to 8 clarifying questions (grouped: Trip, Meals, Budget). Use checkboxes and short answer fields. After I answer, produce: (1) trip/outing plan, (2) meal plan with shopping list, (3) weekly spending plan. Don’t assume prices or schedules without stating assumptions.”
Common mistakes: using too many placeholders (you spend more time filling blanks than planning), and forgetting defaults. Add defaults like “If {PACE} is blank, use moderate pace” or “If {COOK_TIME_MAX} is blank, use 30 minutes.” Practical outcome: templates make your work repeatable and reduce the fear of “starting over” because you never start from zero again.
Sometimes your prompt is clear, but the output is the wrong style: too wordy, not actionable, missing the tone you want, or organized poorly. Few-shot prompting fixes this by giving the AI tiny samples that demonstrate what “good” looks like. You are not teaching facts; you are teaching format and decision style.
A few-shot example can be as small as one mini-output. For trips, you might show a day plan in a compact table with time blocks and transit notes. For meals, you might show how you prefer recipes summarized (“ingredients + steps + time + leftovers”). For budgets, you might show categories and a final “safe-to-spend” number.
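A stored budget example might be as small as this one-liner (the amounts are placeholders, not recommendations): Example output (style): Groceries $95 | Dining $25 | Transport $20 | Fun $30 | Safe-to-spend: $45. One compact line like this shows the model the shape and brevity you want without anchoring it to your real numbers.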
Engineering judgment: few-shot examples should be short and consistent. If your example includes unrealistic detail (exact ticket prices or opening hours), the model may imitate that certainty. Use examples to guide structure, and keep factual parts clearly labeled as placeholders or assumptions.
Common mistake: providing examples that conflict with your constraints. If your example shows fancy restaurants but your budget is low, the AI will try to imitate the restaurant-heavy style. Practical outcome: one good example inside your template acts like a “style lock,” reducing the need for follow-up edits.
When storing examples in your library, label them “Example output (style)” so you remember why they exist. They are part of the tool, not extra fluff.
You do not need a perfect prompt on the first try. You need a reliable loop. The simplest loop is: Draft → Critique → Revise. This keeps you from restarting the whole conversation and teaches the AI to self-check.
Draft: ask for the initial plan using your standard template. Critique: ask the AI to evaluate the plan against your constraints and identify gaps, risks, and assumptions. Revise: instruct it to update the plan while preserving what already works.
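Two small reusable prompts make the loop concrete; the wording below is a suggested sketch, not a required formula. Critique: "Check the plan above against my constraints. List anything that breaks a cap, any assumption you made, and anything missing. Do not rewrite yet." Revise: "Fix the issues you listed. Keep everything that already meets the constraints unchanged."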
Engineering judgment: decide what you will optimize for first. You can’t maximize cheap, healthy, fast, and exciting all at once. Tell the AI your priority order: e.g., “Priority: (1) stay under budget, (2) keep cook time under 30 minutes, (3) include variety.” This makes revisions coherent instead of chaotic.
Common mistakes: vague feedback (“make it better”), changing multiple constraints at once (you won’t know what caused the improvement), and letting the AI introduce new assumptions during revision. Practical outcome: you end up with a “final” output that is traceable—each change has a reason—and you also end up with a better template because you can copy the critique questions into your library for next time.
Store your iteration loop prompts as separate tools: “Planner v1 Draft,” “Planner v1 Audit,” “Planner v1 Revise.” You will reuse them constantly.
A prompt library only saves time if you can find the right tool in seconds. Treat your prompts like files in a toolbox: clear names, consistent tags, and a tiny usage guide. A notes app works fine; a document with headings works too. The key is retrieval, not software.
Use a naming convention that starts with the domain and ends with the version. Examples: TRIP_WeekendItinerary_v2, MEAL_WeeknightPlan_30min_v3, BUDGET_MonthlyBaseline_50-30-20_v1, WEEK_AllInPlanner_v1. Put the highest-signal terms first so search works.
Engineering judgment: don’t store everything. Keep a “Favorites” set of 5–10 prompts you actually use. Archive the rest. Too many options increase decision time and defeat the purpose.
Common mistakes: saving full chat transcripts (hard to reuse), forgetting to capture the variables that made it work (you can’t reproduce it), and allowing templates to drift (different output formats each time). Practical outcome: within a month, you’ll have a small, stable library where each prompt produces a predictable format you can paste into a calendar, grocery app, or spreadsheet.
Finally, add an “actionability” checklist to your library standard: every plan must end with next steps, deadlines, and a checklist. If an output can’t be executed, it isn’t finished.
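A minimal sketch of that ending block (adapt the items to the plan at hand): Next steps: 1) order groceries, 2) book outing tickets. Deadlines: groceries by Thursday evening, tickets by Friday noon. Checklist: [ ] groceries ordered, [ ] tickets booked, [ ] Sunday prep done. If a plan can't be reduced to lines like these, send it back for one more revision.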
Your capstone is a single weekly plan that combines meals, spending, and one outing. The goal is coordination: the outing affects the budget; the budget affects grocery choices; the meal plan affects time available for the outing. A combined prompt prevents conflicting plans.
Use a one-message framework that (1) asks clarifying questions, then (2) outputs three connected deliverables with action steps. Here is a practical capstone template you can store as WEEK_AllInPlanner_v1 and reuse weekly.
Capstone prompt (fill-in-the-blank):
“You are my weekly planning assistant. First ask up to 8 clarifying questions (Trip/Outing, Meals, Budget). Use short fields and multiple-choice options. After I answer, create a 7-day plan that includes:
(A) Meals: {DIET} meals for {PEOPLE_COUNT} people, max {COOK_TIME_MAX} minutes on weekdays, include leftovers for lunches {LUNCHES_YN}. Provide a shopping list grouped by aisle and a prep schedule (what to prep on {PREP_DAY}).
(B) Budget: Weekly spending plan with caps: groceries {GROCERY_CAP}, dining/coffee {DINING_CAP}, transport {TRANSPORT_CAP}, entertainment {FUN_CAP}, misc {MISC_CAP}. Include a ‘safe-to-spend’ number and 3 cut options if we exceed caps.
(C) Outing/Trip: One outing on {OUTING_DAY} in/near {CITY} matching {INTERESTS} and {PACE}. Include estimated costs (clearly labeled assumptions), transit/parking notes, and a fallback option if weather is bad.
Output format: 1 table for meals, 1 table for budget, 1 itinerary block. End with next steps + deadlines + a checklist. If any info is missing, ask questions before planning.”
Common mistakes: asking for “a perfect week” without setting caps, forgetting to specify cook-time limits, and not requiring a fallback plan. Practical outcome: you get one integrated plan you can execute—meals aligned to busy nights, spending aligned to pay cycle, and a realistic outing that fits both time and money.
When this capstone works well, freeze it as your baseline. Then create variants: “WEEK_AllInPlanner_LowSpend,” “WEEK_AllInPlanner_HighProtein,” or “WEEK_AllInPlanner_NoCar.” That is how a personal prompt library becomes a personal planning system.
1. What best describes a “prompt library” in this chapter?
2. Why turn your best prompts into fill-in-the-blank templates?
3. What is the main purpose of a one-message “planner” prompt that asks you questions first?
4. How do examples improve results according to the chapter?
5. Which set of elements makes AI outputs more actionable in this chapter?