Turn Claude into a thinking partner, not a chatbot
The gap between "meh" and "wow" outputs isn't the model. It's eight principles you can stack on every prompt.
The Simple Version
Imagine a new hire on their first day. They're brilliant — top of their class, can outwork anyone in the room. But they're also literal. Whatever you tell them, they'll do exactly that. Not what you meant. What you said.
If you walk up and say "do a competitive analysis of our biggest rival," you'll get a Wikipedia-style summary. Generic. Useless. Not because they're not smart — they had no idea what your company sells, who your customer is, what "useful" looks like to your team, or how long the answer should be.
So you go back to your desk, take a deep breath, and write a real brief. "Here's our product. Here's our customer. Here's the rival's most recent move. I want a one-page memo with three sections — features they beat us on, features we beat them on, and the one tactical move you'd take next quarter. Don't include marketing fluff." Now you get a great memo.
Prompting Claude is exactly that. The model is the new hire. Your prompt is the brief. Almost every "Claude isn't impressive" complaint is a brief problem, not a brain problem. Linas Beliūnas read every page of Anthropic's prompting docs, tested hundreds of prompts, and distilled it down to eight stackable principles. Each one fixes a specific way that briefs go wrong.
How It Actually Works
Now the technical version. The principles below stack — each one assumes you've done the ones above. Skip clarity, and structure won't save you. Skip context, and constraints don't matter.
Be clear and direct
Specificity beats word count. Vague briefs produce vague output every time. Tell Claude the task, the audience, the depth, and the angle.
Use XML tags for structure
Claude was trained to recognise XML tags as structural markers. Wrap each part of your prompt — context, instructions, constraints — in named tags so the model never confuses input with directive.
Show examples (multishot)
Describing "good" loses to demonstrating "good". One to three examples teaches Claude the pattern far more reliably than adjectives.
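In code, a multishot prompt is just a loop over example pairs. A minimal sketch in Python — the `build_multishot_prompt` helper and the `<example>` tag names are illustrative conventions, not an official API:

```python
def build_multishot_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a few-shot prompt: each (input, output) pair becomes an <example> block."""
    blocks = []
    for source, rewrite in examples:
        blocks.append(
            f"<example>\n<input>{source}</input>\n<output>{rewrite}</output>\n</example>"
        )
    return f"{task}\n\n" + "\n".join(blocks)

prompt = build_multishot_prompt(
    "Rewrite each status update in our terse changelog style.",
    [
        ("We are thrilled to announce dark mode!", "Added: dark mode."),
        ("There was an issue with exports that is now resolved.", "Fixed: CSV export crash."),
    ],
)
print(prompt)
```

Two or three pairs like this teach the terse-changelog pattern better than a paragraph of adjectives ever would.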
Let Claude think (chain of thought)
For complex problems, ask the model to reason in a scratchpad before concluding. This pulls it out of pattern-matching and into actual analysis. Extended thinking is this principle, formalised.
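A minimal way to phrase the scratchpad request — the `<thinking>` and `<answer>` tag names are a common convention, not a requirement:

```python
# A prompt that asks for visible reasoning before a conclusion.
cot_prompt = (
    "A vendor quotes $12 per unit for 500 units, or $10 per unit for 1,000 units. "
    "We expect to sell 700 units this quarter.\n\n"
    "Think through the trade-offs step by step inside <thinking> tags "
    "(total cost, leftover inventory, cash tied up), then give your "
    "recommendation inside <answer> tags."
)
print(cot_prompt)
```

Asking for the reasoning in its own tag also makes it trivial to strip the scratchpad and keep only the answer downstream.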
Provide context and background
Give the model the documents, data, audience, goals, and constraints it needs. It cannot guess what's in your head. Put critical instructions at the start and the end of long prompts.
Specify output format
Don't leave structure to chance. Ask for a table, a memo, a JSON object, a 200-word summary, three bullet points. Vague format wastes a turn.
Define constraints and anti-patterns
Tell the model what not to do. Block phrases like "in today's rapidly evolving landscape". Cap length. Forbid hedging. Constraints sharpen output as much as instructions do.
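Constraints are also checkable after the fact. A tiny post-check — the `BANNED` list and `violates_constraints` helper are hypothetical, just one way to catch regressions before an output ships:

```python
BANNED = [
    "in today's rapidly evolving landscape",
    "game-changer",
]

def violates_constraints(text: str, max_words: int = 600) -> list[str]:
    """Return a list of constraint violations; an empty list means the output passes."""
    problems = [f"banned phrase: {p!r}" for p in BANNED if p in text.lower()]
    word_count = len(text.split())
    if word_count > max_words:
        problems.append(f"over length cap: {word_count} words")
    return problems

draft = "In today's rapidly evolving landscape, our product is a game-changer."
print(violates_constraints(draft))  # two banned-phrase hits
```

The same list you hand Claude as a constraint doubles as a lint rule on its output.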
Prefill the response (API)
If you're using the API, start the assistant message with the first token of the format you want — {, <memo>, a heading. Claude will continue from there instead of preambling.
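With the Anthropic Messages API, prefilling just means making the last message an assistant turn that already starts with your opening token. A sketch of the request payload — no network call here, and the model name is illustrative:

```python
payload = {
    "model": "claude-sonnet-4-20250514",  # illustrative model ID
    "max_tokens": 1024,
    "messages": [
        {
            "role": "user",
            "content": "Summarise this memo as JSON with keys 'risks' and 'actions'. ...",
        },
        # Prefill: Claude continues from this "{" instead of writing a preamble.
        {"role": "assistant", "content": "{"},
    ],
}
print(payload["messages"][-1])
```

The response then begins mid-JSON, so you prepend the `{` yourself when parsing.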
The same task, two briefs
"Do a competitive analysis of Notion vs. our product."
Same task, but wrapped in XML tags with full company context, three named output sections (executive summary, feature table, tactical moves), an audience (the founder), a length cap, and a banned-phrases list.
Same model. Same task. Two completely different artefacts. The vague version returns a Wikipedia-style summary you could have generated in 2022. The stacked version returns an actual decision document.
The XML pattern, in code
This is the workhorse template. Drop your content into the right tags and the structure does most of the work for you:
<context>
We're a B2B SaaS for inventory ops. ARR ~$4M.
Founder asked for a decision memo by Friday.
</context>
<instructions>
Compare Notion and our product for our ICP (warehouse managers).
For each: features, pricing, customer sentiment, vulnerabilities.
End with one tactical move we should make this quarter.
</instructions>
<constraints>
- 600 words max.
- No marketing fluff. No "in today's landscape" phrasing.
- Ground every claim in something checkable.
</constraints>
<output_format>
1. Executive summary (3 bullets)
2. Feature comparison table
3. One tactical move with reasoning
</output_format>
Tag names are flexible — use whatever makes sense. What matters is that each region of the prompt is fenced and named. Claude follows the structure because it was trained to.
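If you build these prompts in code, the same four-section template is a few lines of string assembly — the `tagged` helper is just an illustration, not a special API:

```python
def tagged(name: str, body: str) -> str:
    """Fence a region of the prompt in a named XML tag."""
    return f"<{name}>\n{body.strip()}\n</{name}>"

prompt = "\n\n".join([
    tagged("context", "B2B SaaS for inventory ops. ARR ~$4M. Decision memo due Friday."),
    tagged("instructions", "Compare Notion and our product for warehouse managers."),
    tagged("constraints", "- 600 words max.\n- No marketing fluff."),
    tagged("output_format", "1. Executive summary\n2. Feature table\n3. One tactical move"),
])
print(prompt)
```

Swap the section bodies per task; the fencing stays constant, which is what makes the template reusable.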
The reason these techniques work isn't magic — they all line up with how Claude was trained. XML markers were in the training data. Examples activate pattern-matching. Reasoning steps unlock the model's analytical mode. You're not tricking Claude. You're using its actual interface.
Why most people stay stuck at "meh"
Most people prompt Claude like Google. One line. No context. No structure. No format. Then they conclude the model isn't impressive. It's the same as a manager who barks "figure it out" at a new hire and then complains the work is generic.
The single highest-leverage shift — before you learn any of this — is to stop treating the prompt box as a search bar. It's a brief. The brief is the product. The output is just the receipt.
Key Takeaways
- Claude is a brilliant, literal new hire. Brief it like one.
- The eight principles stack. Clarity → Structure → Examples → Reasoning → Context → Format → Constraints → Prefill.
- XML tags are the workhorse. They were in the training data; they're the cleanest way to keep input separate from instructions.
- Examples beat adjectives. Show "good" instead of describing it.
- Constraints sharpen as much as instructions. Banning generic phrasing and capping length is doing real work.
- The gap between mediocre and exceptional outputs is the brief, not the model.
Source: Linas Beliūnas, "Turn Claude From a Chatbot Into a Thinking Partner 🧠". Framework distilled from Anthropic's official prompt engineering documentation.