Prompting That Works in Real Projects

Most prompts fail for one reason: they ask the model to guess. In real projects, you don’t want “creative guesses” — you want predictable output. This page gives you a simple framework and reusable templates you can copy, adapt, and ship.

A good prompt reduces ambiguity. A great prompt makes the output auditable.

1. Why Most Prompts Fail

  • Too vague: “Summarise this” without a goal or audience.
  • No constraints: the model chooses length, tone, and format arbitrarily.
  • No context: it lacks the background needed to answer correctly.
  • No output format: you can’t reliably parse or reuse the response.

2. The 5-Part Prompt Framework (Copy This)

Use this structure for almost everything:

1) Role — Who is the assistant?

2) Context — What input + background matters?

3) Task — What exactly should it do?

4) Constraints — Rules, tone, scope, what NOT to do.

5) Output format — Bullet list, JSON, table, etc.
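If you build prompts in code, the five parts map naturally onto a small helper function. This is an illustrative sketch, not a library API — the function and field names are my own:

```python
def build_prompt(role, context, task, constraints, output_format):
    """Assemble a five-part prompt; each argument maps to one part
    of the framework: Role, Context, Task, Constraints, Output format."""
    return "\n".join([
        f"Role: {role}",
        f"Context: {context}",
        f"Task: {task}",
        f"Constraints: {constraints}",
        f"Output: {output_format}",
    ])

prompt = build_prompt(
    role="You are my delivery lead assistant.",
    context="Below is an email thread about a load failure.",
    task="Summarise the problem in 3 bullets and propose next steps.",
    constraints="Be neutral, don't assign blame, don't invent facts.",
    output_format="3 bullets for summary + 5 bullets for actions.",
)
print(prompt)
```

Keeping the parts as separate arguments makes each one reviewable on its own — a missing Constraints line becomes an obvious empty string rather than a silent omission.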

3. Weak Prompt vs Strong Prompt

Weak

Summarise this email and tell me what to do.

Strong

Role: You are my delivery lead assistant.
Context: Below is an email thread about a BizTalk load failure.
Task: Summarise the problem in 3 bullets and propose next steps.
Constraints: Be neutral, don’t assign blame, don’t invent facts.
Output: 3 bullets for summary + 5 bullets for actions.

4. Templates You Can Reuse (Production-Friendly)

Template A — Summary for a Busy Person

Role: Executive assistant.
Context: [paste text]
Task: Summarise for someone who has 30 seconds.
Constraints: Keep to 5 bullets. Include risks and decisions needed. No assumptions.
Output: Bullets with headings: Context / Key Points / Risks / Ask.

Template B — Structured Extraction (Great for Automation)

Role: Information extraction engine.
Context: The source text is below: [paste text]
Task: Identify entities and return structured fields.
Constraints: If a field is missing, return null. Do not guess.
Output (JSON):
{ "client": "", "system": "", "issue": "", "impact": "", "next_steps": [] }
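Because Template B fixes the output shape, you can validate the response before it enters an automation pipeline. A minimal sketch — the field names match the template above, but the validator itself is illustrative:

```python
import json

# String-valued fields from the template; each may also be null.
EXPECTED_FIELDS = ("client", "system", "issue", "impact")

def parse_extraction(raw: str) -> dict:
    """Parse the model's JSON and enforce the template's contract:
    every field present, string or null, next_steps a list."""
    data = json.loads(raw)
    for field in EXPECTED_FIELDS:
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if data[field] is not None and not isinstance(data[field], str):
            raise ValueError(f"bad type for {field}")
    if not isinstance(data.get("next_steps"), list):
        raise ValueError("next_steps must be a list")
    return data

# A compliant response: unknown values come back as null, never guessed.
result = parse_extraction(
    '{"client": "Acme", "system": "BizTalk", "issue": "load failure",'
    ' "impact": null, "next_steps": ["restart receive location"]}'
)
print(result["impact"])  # None — missing info is null, not invented
```

Rejecting malformed responses loudly (an exception) is usually safer in automation than silently patching them up.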

Template C — Q&A with Guardrails (RAG-friendly)

Role: Enterprise knowledge assistant.
Context: Answer only using the provided sources.
Task: Answer the question accurately and cite the source section.
Constraints: If not in sources, say “I don’t have enough information”.
Output: Answer + “Sources used:” list.
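The refusal rule in Template C can also be enforced in code, not just in the prompt: if the answer is neither an explicit refusal nor cites a known source section, treat it as ungrounded and discard it. A sketch, assuming your sources carry identifiers like "S1", "S2" (the identifiers and function are illustrative):

```python
def check_grounding(answer: str, source_ids: list[str]) -> bool:
    """Accept the answer only if it is an explicit refusal or
    cites at least one of the provided source sections."""
    if "I don't have enough information" in answer:
        return True
    return any(sid in answer for sid in source_ids)

sources = ["S1", "S2", "S3"]
print(check_grounding("The SLA is 4 hours. Sources used: S2", sources))  # True
print(check_grounding("The SLA is 4 hours.", sources))                   # False
```

This is a coarse check — it verifies that a source was named, not that the answer is faithful to it — but it catches the most common failure: a fluent answer with no citation at all.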

Template D — Safe SQL Generation (High Stakes)

Role: SQL generator for reporting (read-only).
Context: You have these tables: [list tables + columns].
Task: Generate a SQL query to answer the question.
Constraints: Only SELECT. No DDL/DML. Use parameters. Limit results to 200 rows.
Output: SQL + explanation of joins + list of parameters.
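For high-stakes cases like Template D, never trust the prompt alone — validate the generated SQL before it runs. A defence-in-depth sketch (the keyword list and row-limit patterns are illustrative; `TOP` is T-SQL, `LIMIT` is the common alternative):

```python
import re

# Statement types a read-only reporting query must never contain.
FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|TRUNCATE|GRANT|MERGE|EXEC)\b",
    re.IGNORECASE,
)

def is_safe_select(sql: str) -> bool:
    """Reject anything that is not a single, read-only, row-limited SELECT."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:          # no statement batching
        return False
    if not stripped.upper().startswith("SELECT"):
        return False
    if FORBIDDEN.search(stripped):
        return False
    # Require an explicit row limit (TOP n or LIMIT n).
    return bool(re.search(r"\b(TOP|LIMIT)\s+\d+\b", stripped, re.IGNORECASE))

print(is_safe_select(
    "SELECT TOP 200 client, issue FROM incidents WHERE system = @system"
))  # True
print(is_safe_select("DELETE FROM incidents"))  # False
```

In production you would go further — run the query under a read-only database account and parse it properly rather than pattern-match — but even this cheap gate stops the obvious failure modes.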

5. The “Prompt Debugging” Checklist

  • Did I define the role clearly?
  • Did I provide enough context to avoid guessing?
  • Did I specify success criteria (what “good” looks like)?
  • Did I add constraints (what not to do)?
  • Is the output in a format I can reuse?

6. Where This Goes Next

Prompting is step one. The moment you build systems, you’ll need model selection, embeddings, and retrieval. That’s where we go next.

Continue the Masterclass

Next: Choosing the Right LLM — accuracy, cost, and speed.
