
For months I refined my prompts. System prompts, few-shot examples, chain-of-thought reasoning — I used every trick. Then I realized the bad outputs weren't bad because of the prompt.
They were bad because the model wasn't getting the information it needed.
Prompt engineering is dead — well, almost
Prompt engineering didn't become useless. But the center of gravity shifted. A well-written prompt accounts for maybe 10–15% of the context window. The remaining 85–90%? Retrieved documents, conversation history, tool outputs, user state.
If the retrieved documents are irrelevant, or the conversation history is truncated at the wrong point, or the tool results are incomplete — your perfect prompt won't save you.
I experienced this daily in my own product. The question was never how to ask the AI. It was what context to give it alongside the ask.
Context engineering: what actually matters
Context engineering is the deliberate design of what the model sees during each call. You're not polishing the prompt text — you're designing the entire input space:
- Which documents to retrieve, and how many
- How to compress prior conversation
- Which tool outputs to include
- How to persist relevant memory across calls
Every token entering the context window is a decision. Context engineering is the discipline of making those decisions deliberately.
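To make that concrete, here is a minimal sketch of what "making those decisions deliberately" can look like in code. Everything in it is an illustrative assumption, not a prescribed recipe: the whitespace-based token counter stands in for a real tokenizer, the priority order (system prompt, then tool outputs, then retrieved documents, then recent history) is one reasonable choice among many, and the budget number is arbitrary.

```python
def count_tokens(text: str) -> int:
    # Rough stand-in for a real tokenizer: ~1 token per whitespace word.
    return len(text.split())


def assemble_context(system_prompt, documents, history, tool_outputs, budget=1000):
    """Build the model input within a fixed token budget.

    Priority order (an assumption, not a rule): system prompt,
    tool outputs, retrieved documents, then as much recent
    conversation history as still fits.
    """
    sections = [system_prompt]
    used = count_tokens(system_prompt)

    # Tool outputs first: they usually answer the current question directly.
    for out in tool_outputs:
        cost = count_tokens(out)
        if used + cost > budget:
            break
        sections.append(out)
        used += cost

    # Retrieved documents, assumed pre-sorted by relevance.
    for doc in documents:
        cost = count_tokens(doc)
        if used + cost > budget:
            continue  # skip an oversized doc; a smaller one may still fit
        sections.append(doc)
        used += cost

    # Conversation history: keep the most recent turns that still fit,
    # then restore chronological order.
    kept = []
    for turn in reversed(history):
        cost = count_tokens(turn)
        if used + cost > budget:
            break
        kept.append(turn)
        used += cost
    sections.extend(reversed(kept))

    return "\n\n".join(sections), used
```

The point of the sketch is not the specific heuristics but that every section competes for the same budget, and something explicit decides who wins. Swapping the priority order, or compressing history instead of truncating it, changes what the model talks about without touching the prompt text at all.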
Why this matters for product builders
Because users don't see the prompt — they see the result. And the quality of the result depends on the quality of the context.
When an AI feature in Phora underperformed, the fix was rarely rewriting the prompt. It was restructuring the context: what the model receives, in what order, in what quantity.
Prompt engineering tells the model how to talk. Context engineering determines what it talks about.
The first is a skill. The second is architecture.
There's always a next level.
If you like what you see — whether you're building a product or a team — I'd love to hear about it.