Hook
Similarity in, similarity out.
Problem
LLMs are highly sensitive to the context they are given, but they are also strongly pulled toward familiar patterns. When tasks share the same audience, framing, tone, and constraints, the outputs begin to converge.
This is why landing pages start sounding alike, blog posts repeat the same shape, and code suggestions lean on the same implementation patterns. The model is not necessarily failing. It is following the strongest signals available.
Users often respond by asking for the output to be more creative, more original, or more detailed. But stronger adjectives and longer answers do not create more variation. If the underlying context stays the same, the model keeps producing answers from the same neighbourhood of likely outputs.
The real lever is contrast. If you want a different result, you need to provide meaningfully different conditioning information.
Why it matters
Without contextual contrast, LLM outputs tend to collapse into defaults.
In marketing, that produces generic copy and weak positioning.
In engineering, that produces generic implementations and hidden assumptions.
In both cases, it increases editing effort because the user must force distinctiveness back into the output.
It also reduces strategic value because the answer sounds competent while failing to reflect what is unique about the business, the system, or the audience.
Contrast matters because it changes the model's decision space.
Instead of choosing among many generic possibilities, the model is guided toward a narrower, more specific, and more differentiated answer.
Signals you are here
- Different outputs share the same structure or tone
- Landing pages feel interchangeable
- Blog posts repeat the same arguments
- Code suggestions default to familiar patterns
- The model uses brand-safe but undifferentiated language
- Requests for more creativity produce only superficial variation
Anti-patterns
- Asking for originality without changing the brief
- Using the same audience and framing across multiple tasks
- Relying on adjectives like creative or unique as the main instruction
- Leaving the model to infer what makes this task different
- Using generic prompts for differentiated work
Try this
- Change at least two dimensions of context before regenerating
- Specify the audience and their exact tension
- State what language, framing, or patterns to avoid
- Anchor the task in distinct evidence, constraints, or examples
- Use structure as a lever, not just tone
- Tell the model what makes this piece different from the last one
Example
A marketer asks:
> Write landing page copy for our IAM service.
That prompt invites default SaaS security language.
A higher-contrast version is:
> Write landing page copy for a CTO at a mid-sized SaaS company dealing with identity sprawl after acquisitions. Lead with operational control and audit readiness, not fear. Avoid generic cybersecurity buzzwords. Do not use the phrase "digital landscape" or the words "seamless", "robust", or "powerful".
A developer asks:
> Write a webhook handler.
That prompt invites a default implementation.
A higher-contrast version is:
> Write a webhook handler for a high-volume Node service running in Lambda. It must be idempotent, safe for retries, reject malformed payloads early, emit structured logs, and avoid class-based design.
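The difference shows up directly in the generated code. Below is a minimal sketch of the kind of handler the higher-contrast prompt steers toward, assuming an API Gateway proxy event and the @types/aws-lambda type definitions; the payload fields, the in-memory `seenEventIds` store, and `processEvent` are hypothetical stand-ins, not a definitive implementation.

```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

// Hypothetical idempotency store. In-memory only survives warm invocations;
// a real service would back this with DynamoDB or similar.
const seenEventIds = new Set<string>();

// Structured, single-line JSON logs instead of free-form strings.
const log = (level: string, message: string, fields: Record<string, unknown> = {}) =>
  console.log(JSON.stringify({ level, message, ...fields, ts: new Date().toISOString() }));

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Reject malformed payloads early, before any side effects.
  let payload: { eventId?: string; type?: string };
  try {
    payload = JSON.parse(event.body ?? "");
  } catch {
    log("warn", "rejected malformed payload");
    return { statusCode: 400, body: "invalid JSON" };
  }
  if (!payload.eventId || !payload.type) {
    log("warn", "rejected payload missing required fields");
    return { statusCode: 400, body: "missing eventId or type" };
  }

  // Idempotency: a retried delivery of the same event is acknowledged, not reprocessed.
  if (seenEventIds.has(payload.eventId)) {
    log("info", "duplicate delivery ignored", { eventId: payload.eventId });
    return { statusCode: 200, body: "already processed" };
  }

  // Mark as seen only after the work succeeds, so a failed attempt stays retryable.
  await processEvent(payload);
  seenEventIds.add(payload.eventId);
  log("info", "event processed", { eventId: payload.eventId, type: payload.type });
  return { statusCode: 200, body: "ok" };
};

// Hypothetical placeholder for the actual domain logic.
async function processEvent(payload: { eventId?: string; type?: string }): Promise<void> {}
```

Note how every constraint in the prompt maps to a visible decision in the sketch: plain functions rather than classes, validation before side effects, duplicate detection for retries, and JSON logs throughout.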
In both cases, contrast produces output that is more specific and more useful.
Reflection prompt
What meaningful differences have I given the model, beyond just asking it to say more?