There’s an old saying: “A craftsman is only as good as their mastery of their tools.” Carpenters master their saws and chisels, surgeons master their scalpels, and software engineers master their IDEs, languages, and frameworks. Today, AI coding assistants are our newest tools. The question is: do you know how to master them?
The AI revolution has levels
We’ve moved through distinct levels of AI assistance over the past few years. We started at level zero with no AI at all, then progressed to copy-pasting from ChatGPT. We got autocomplete, then inline editing. Now we have tools that can make project-wide changes and act as agents. The most advanced level is where AI helps with architectural thinking itself.
But here’s the thing: more power requires more skill. And that’s where we’re seeing a problem.
The AI-generated slop problem
AI coding tools are great for greenfield projects. Give them a blank slate and they can scaffold out entire applications. But they struggle with complex, brownfield codebases. The real world is messy.
I’ve seen it happen repeatedly: junior engineers use AI to generate code quickly, but skill gaps lead to what I can only describe as “slop.” Then senior engineers get frustrated cleaning up the mess. The promise was that AI would make us faster. Instead, it’s often making things slower and introducing technical debt at scale.
The problem isn’t the AI itself. It’s how we’re using it.
What is context engineering?
Context Engineering is the practice of managing AI context windows to maximize output quality. It matters because LLMs are stateless. Better tokens in equals better tokens out. Every tool selection is influenced by conversation history, and there are hundreds of potential right or wrong steps at each turn.
I think about context engineering across four key dimensions:
- Correctness - No incorrect information should be in the context
- Completeness - All necessary information must be present
- Size - Stay lean and focused, don’t bloat the context
- Trajectory - Mind the pattern of how the conversation evolves
These dimensions aren’t independent. They interact in complex ways.
The smart zone and the dumb zone
Here’s something most people don’t realize about context windows: they’re typically around 170k tokens total (though this varies by model). Some of that is reserved for output and compaction. But there’s a critical threshold at around 40% utilization.
Working above 40% of your context window leads to diminishing returns. You start seeing poor tool selection, repeated mistakes, and context overload. I call this the “dumb zone.” Below that threshold is the “smart zone” where AI assistants perform well.
Context engineering is fundamentally about staying in the smart zone. It’s about being intentional with what goes into that context window and when.
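To make the smart-zone heuristic concrete, here is a minimal sketch of a utilization check. It assumes the 170k-token window mentioned above and a rough 4-characters-per-token estimate; both are approximations, not exact figures for any particular model:

```python
# Rough context-utilization check for the "smart zone" heuristic.
# Assumes a 170k-token window and ~4 characters per token -- both
# rough estimates, not exact figures for any specific model.

CONTEXT_WINDOW_TOKENS = 170_000
SMART_ZONE_THRESHOLD = 0.40  # above this, quality tends to degrade

def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token for English text."""
    return len(text) // 4

def utilization(history: list[str]) -> float:
    """Fraction of the context window consumed by the conversation so far."""
    used = sum(estimate_tokens(turn) for turn in history)
    return used / CONTEXT_WINDOW_TOKENS

def in_smart_zone(history: list[str]) -> bool:
    return utilization(history) < SMART_ZONE_THRESHOLD

history = ["x" * 100_000]  # roughly 25k tokens of conversation
print(in_smart_zone(history))  # ~15% utilization -> True
```

In practice you would use a real tokenizer rather than a character count, but even a crude estimate like this is enough to notice when a session is drifting toward the dumb zone.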
Context engineering as a hammer
Context engineering enables three key phases: research, planning, and implementation. In the research phase, you compress truth from codebases. In the planning phase, you create high-leverage, explicit roadmaps. In the implementation phase, you execute reliably with minimal context.
The key techniques include intentional compaction (compress existing context), sub-agents (isolate expensive operations), on-demand documentation (generate truth from code, not stale docs), and progressive disclosure (only load what you need, when you need it).
Think of it as AI’s hammer. It’s a tool that makes other tools work better.
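As an illustration of intentional compaction, the core move is to replace older turns with a compact summary while keeping recent turns verbatim. In the sketch below, `summarize` is a deliberate placeholder that just keeps the first line of each old turn; a real implementation would ask the model itself to write the summary:

```python
# Sketch of intentional compaction: keep the last few turns verbatim,
# collapse everything older into a single summary message.
# `summarize` is a placeholder -- a real implementation would ask the
# model to produce the summary.

def summarize(turns: list[str]) -> str:
    """Placeholder summarizer: keep only the first line of each old turn."""
    headlines = [t.splitlines()[0] for t in turns if t.strip()]
    return "Summary of earlier conversation:\n- " + "\n- ".join(headlines)

def compact(history: list[str], keep_recent: int = 4) -> list[str]:
    """Replace all but the most recent turns with one summary message."""
    if len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent

history = [f"turn {i}: decision made\nlong supporting details" for i in range(10)]
compacted = compact(history)
print(len(compacted))  # 5: one summary message plus 4 recent turns
```

The same shape applies to sub-agents: the expensive exploration happens in an isolated context, and only the compressed result comes back into the main one.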
The research-plan-implement method
The research phase is about understanding how the system works. You find relevant files and patterns, use sub-agents for vertical slices of the codebase, and output a compressed truth document. This isn’t just reading code; it’s synthesizing understanding.
The planning phase is where you outline exact steps with file names and line numbers. Include code snippets and test steps. The purpose is two-fold: mental alignment and leverage. It’s much easier to review a plan than to review 1000+ lines of generated code, and catching a bad line at the plan stage is far cheaper than catching its consequences in the diff.
Only after research and planning do you move to implementation. And the implementation phase can be relatively straightforward if the previous phases were done well.
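One lightweight way to keep plans reviewable is to give every step the same fixed shape. The structure below is a hypothetical sketch built from the ingredients described above (target file, line range, the change, a test step); it is not a standard format, just one way to make missing details jump out during review:

```python
# Hypothetical plan-step structure reflecting the ingredients above:
# exact file, the change, and how to verify it. Not a standard format.
from dataclasses import dataclass

@dataclass
class PlanStep:
    file: str          # exact file to touch
    change: str        # what to do, ideally with a code snippet
    test: str          # how to verify this step
    lines: str = ""    # optional line range, e.g. "120-140"

def review_plan(steps: list[PlanStep]) -> list[str]:
    """Flag steps that are missing the details a human reviewer needs."""
    problems = []
    for i, step in enumerate(steps, 1):
        if not step.file:
            problems.append(f"step {i}: no target file")
        if not step.test:
            problems.append(f"step {i}: no test step")
    return problems

plan = [
    PlanStep(file="auth/session.py", change="Add token expiry check",
             test="pytest tests/test_session.py", lines="42-60"),
    PlanStep(file="", change="Refactor", test=""),
]
print(review_plan(plan))  # ['step 2: no target file', 'step 2: no test step']
```

The file names and test command here are invented for illustration; the point is that a plan with this much structure is something a human can actually verify before any code gets generated.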
Tools that help
For the research phase, let AI summarize what code does. Tools like LLMFeeder (a Firefox extension) and Paste To Markdown (a web page converter) can help you feed information to AI in the right format from external sources, such as official documentation. You can also dump a data model to SQL, or ask AI for a “project brief,” or a set of “design principles.”
The key is to iterate before starting the execution phase. Don’t rush to implementation.
Don’t fall for the spec-driven development trap
There are frameworks that promise to “fix” AI coding. OpenSpec (14k stars) offers lightweight specs with change tracking for brownfield projects. Spec-Kit (58k stars) provides structured /specify and /plan commands for greenfield work. BMAD (26k stars) is a full multi-agent framework simulating entire dev teams.
I think these tools can be useful, but they’re putting the cart before the horse. Master the fundamentals first. Tooling can come later. These frameworks are trying to solve the context engineering problem, but you need to understand the problem deeply before you can evaluate whether a given framework actually helps.
Human thinking cannot be outsourced
This is the most critical point: a bad line of code is just a bad line of code. A bad line in a plan becomes 100+ bad lines of code. A bad line in research can waste the entire effort. The leverage multiplies at each phase.
Human-in-the-loop requirements are non-negotiable. Review research for correctness. Validate plans before implementation. Ensure understanding at each phase. The goal isn’t to remove humans from the process; it’s to move human effort to the highest leverage points.
AI amplifies thinking. It cannot replace your judgment, only amplify what you bring to it.
Not everything is a nail
Context engineering is a powerful hammer, but not everything is a nail. Use it for complex features across multiple files or repositories. Use it for brownfield codebases requiring deep understanding. Use it when you’re hitting the “dumb zone” with your AI assistant.
Don’t use it for simple one-line changes like button colors. Don’t use it for quick experiments or prototypes where you’re exploring ideas. And definitely don’t use it when human architectural thinking needs to come first.
The craft is knowing when to use which tool.
Conclusion
Quality over quantity. AI makes volume easy, but quality requires discipline. I’ve seen too many projects drown in AI-generated slop because teams didn’t apply the fundamentals of context engineering.
Stay in the smart zone by managing your context windows intentionally. Master compaction, research, and planning. Never skip the human review at each phase. Remember that AI amplifies your thinking… it doesn’t replace it.
Context engineering is the skill that separates those who struggle with AI coding assistants from those who master them effectively. It’s the difference between generating slop and generating true value.
Links
- YouTube - No Vibes Allowed: Solving Tough Problems in Complex Codebases
- Inspired-IT - The AI Coding Ladder - From Copy-Paster to Architect
- Don’s Blog - Spec-Driven Development: OpenSpec vs Spec-Kit vs BMAD
- GitHub - OpenSpec - Spec-driven development for AI coding assistants
- GitHub - Spec Kit - Build high-quality software faster
- GitHub - BMAD-METHOD - AI-Driven Agile Development That Scales