I can't read code. When my AI agent pushed API keys to a public GitHub repo for the third time, I couldn't spot it in the diff. I couldn't open the file and think "that line shouldn't be there." I just got the alert, felt the panic, and rotated the keys.

That's when I stopped trying to write better prompts and started building the system around them.

The internet is calling it "context engineering" this week. Harvard and Bloomberg just ran pieces on the state of vibe coding. Indie hackers are debating whether the real AI skill is writing prompts or designing the information environment around them. I have been doing this for months without a name for it. To me it was just "the thing I had to build because the agents kept making the same mistakes."

The problem is not the prompt

A prompt says "build me a login page." Context engineering is everything else.

It is the security rule that loads automatically so the agent does not store passwords in plaintext. It is the memory from last session where the same agent broke authentication by piping a trailing newline into an environment variable. It is the architecture document that tells it which database to use and how the auth flow works.

Without that context, the agent will build you a login page. It just might not be the right one. And if you can't read code, you won't know until something breaks in production.
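If you do read code, the difference fits in a few lines. This is a sketch of the idea, not my actual setup; every file name in it is a stand-in.

```python
# Sketch only: the prompt is one line, the context is everything around it.
# All file names here are stand-ins, not a real tool or a standard layout.
from pathlib import Path

def build_context(task: str) -> str:
    """Wrap a one-line prompt in the standing context the agent needs."""
    parts = [
        Path("rules.md").read_text(),         # "never store passwords in plaintext"
        Path("memory.md").read_text(),        # last session's trailing-newline incident
        Path("architecture.md").read_text(),  # which database, how the auth flow works
        "Task: " + task,
    ]
    return "\n\n".join(parts)

prompt = build_context("Build me a login page.")
```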

Three things that actually matter

After five months of building an AI system from the command line with zero coding experience, I have found that context engineering comes down to three things.

1. Turn every failure into a rule

The API keys in a public repo? That became a rule that loads every session: "All repositories MUST be created as private. No exceptions." Untested code breaking my app for twenty minutes? That became a gate: nothing deploys without tests passing first. An agent fabricating, in a public Reddit comment, credentials I don't have? That became a constraint on what the agent can claim.

I have a growing file of these rules. Every one traces back to a real failure. The agents read them before they do anything. It is not documentation - it is scar tissue encoded as context.

You can start this today. Next time your agent makes a mistake, don't just fix it. Write the rule. Put it somewhere the agent reads automatically. That failure never happens again.
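Here is a sketch of what "write the rule" can look like, assuming the rules live in one markdown file the agent reads at session start. The path and format are mine, not a standard.

```python
# Hedged sketch: append each failure as a dated rule to the always-loaded file.
from datetime import date
from pathlib import Path

RULES = Path("rules.md")  # hypothetical: the file every session loads first

def add_rule(failure: str, rule: str) -> None:
    """Encode a real failure as a standing constraint the agent reads before acting."""
    with RULES.open("a") as f:
        f.write(f"\n## {date.today()}: {failure}\n{rule}\n")

# The repo incident from above, turned into scar tissue:
add_rule("API keys pushed to a public repo",
         "All repositories MUST be created as private. No exceptions.")
```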

2. Not everything belongs in the conversation

Most people paste everything into one context window. Their whole codebase, every previous conversation, all their notes. Then they wonder why the agent loses focus.

The skill is deciding what to load automatically, what to search on demand, and what to leave out entirely.

I use three layers. The first is a small set of files the agent reads every session - identity, rules, and current state. Lightweight. Always loaded. The second is a searchable database where agents store important decisions and findings. They don't load it all - they search when they need something specific. The third is an index file that points to deeper context the agent can pull up if relevant.
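In code, the three layers might look like this. SQLite full-text search stands in for the searchable database; the layering is the point, the specific tools are not.

```python
# Sketch of the three layers. Everything named here is a stand-in, and
# SQLite's FTS5 plays the "searchable database"; any search backend would do.
import sqlite3
from pathlib import Path

ALWAYS = ["identity.md", "rules.md", "state.md"]  # layer 1: read every session
INDEX = Path("index.md")                          # layer 3: pointers, not content

def core_context() -> str:
    """Layer 1: small, lightweight, always loaded."""
    return "\n\n".join(Path(name).read_text() for name in ALWAYS)

def recall(query: str, limit: int = 5) -> list[str]:
    """Layer 2: agents search stored decisions instead of loading them all."""
    con = sqlite3.connect("memory.db")  # assumes an FTS5 table: notes(body)
    rows = con.execute(
        "SELECT body FROM notes WHERE notes MATCH ? LIMIT ?", (query, limit)
    ).fetchall()
    con.close()
    return [body for (body,) in rows]

def pointers() -> str:
    """Layer 3: an index the agent can follow when something deeper is relevant."""
    return INDEX.read_text()
```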

The key insight: if the agent doesn't need it for this task, don't load it. Context pollution is real. A marketing task doesn't need engineering architecture docs. A code review doesn't need last week's content calendar.

3. What you exclude matters more than what you include

This one took me a while to learn.

When I build a product, the agent that writes the tests is deliberately blocked from seeing the implementation plan. It only sees the requirements. If it sees how the code was built, it writes tests that confirm the implementation instead of verifying the requirements. That is useless.

The quality reviewer always runs on a different AI model from the builder. Same reason. I don't want the reviewer confirming its own work.
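As a sketch, the withholding can be made explicit instead of accidental: each role declares what it may see and which model runs it. Role names, model names, and file names here are all hypothetical.

```python
# Sketch: exclusion as policy. The tester never receives the implementation
# plan, and the reviewer runs on a different model than the builder.
# Every name below is a placeholder, not a recommendation.
ROLES = {
    "builder":  {"model": "model-a", "sees": ["requirements.md", "plan.md"]},
    "tester":   {"model": "model-a", "sees": ["requirements.md"]},  # no plan.md
    "reviewer": {"model": "model-b", "sees": ["requirements.md", "diff.patch"]},
}

def context_for(role: str, docs: dict[str, str]) -> str:
    """Load only what this role is allowed to see; everything else stays out."""
    return "\n\n".join(docs[name] for name in ROLES[role]["sees"] if name in docs)
```

Writing the exclusion down means it is enforced every session instead of remembered on the good days.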

These are context engineering decisions. Not what to add - what to withhold and why.

The non-coder advantage

Here is the part nobody talks about. Not being able to code is an advantage for context engineering.

Developers trust themselves to catch errors. So they under-invest in the system. They write a prompt, read the output, spot the bug, fix it manually. The context stays thin because the human is the safety net.

I can't be the safety net. So my context has to be. And that means my agents are more consistent and make fewer repeated mistakes than they would with a developer hand-holding every session. Not because the model is better. Because the information environment is better.

Context engineering is not a new skill. It is what you are forced to build when you can't fall back on reading the code yourself.