
Why Coding Agents Still Need Context: The Missing Link in AI Development
The era of agentic software development has arrived. From Cursor to GitHub Copilot, software teams can move faster than ever. As so many of us have experienced, agents do far better when we give them specific context, plans, and instructions. But now that we can run several of them in parallel, we're starting to drown in reviews and rewrites as the agents drift further from that context and go off the rails.
When Coding Agents Go Wrong: Real Examples
The agents are awesome when they're grounded, but it's tough to get them back on track once they've started down the wrong path. Here are some of the most common failures we've seen:
The Security Nightmare
The three most common security issues we see are:
- Hardcoding API keys and database credentials directly in frontend JavaScript files. The agents know this isn't best practice, but mid-task they compromise to make things work.
- Building SQL queries with string concatenation, opening massive SQL injection vulnerabilities. Again, when asked to review the code, the agents usually recognize the problem. We see them do it anyway because they lack context about the ORMs, access patterns, and conventions used elsewhere in the codebase.
- Implementing payment workflows in client-side code. Setting the price on the client, for example, means a user could edit the frontend code and submit a request with whatever price they choose.
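The string-concatenation pattern above is easy to see in miniature. The sketch below (an illustrative example, not code from any real codebase) contrasts the vulnerable form with the parameterized form agents should reach for:

```python
import sqlite3

# A throwaway in-memory database with two users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# A classic injection payload supplied as "user input".
user_input = "alice' OR '1'='1"

# Vulnerable: the input is spliced directly into the SQL string,
# so the OR clause becomes part of the query and matches every row.
unsafe = conn.execute(
    "SELECT id FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the input as data, never as SQL,
# so the malicious string matches nothing.
safe = conn.execute(
    "SELECT id FROM users WHERE name = ?", (user_input,)
).fetchall()
```

With the concatenated query, `unsafe` contains every user in the table; the parameterized query returns no rows for the same input.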
The Spaghetti Code Generator
As you build, the coding agent:
- Keeps editing the same files until they are thousands of lines long and mix responsibilities: data access, business logic, UI components, and more.
- Rewrites the same logic over and over across the codebase instead of extracting common helpers and shared functions.
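The duplication problem is the easiest to picture. A hypothetical sketch: instead of an agent re-implementing the same validation in every module it touches, the logic lives in one shared helper that every call site imports (the function and regex here are made up for illustration):

```python
import re

# One compiled pattern, one definition, instead of N slightly
# different copies scattered across the codebase.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    """Single source of truth for email validation."""
    return bool(EMAIL_RE.match(address))
```

Every module that needs the check calls `is_valid_email` rather than pasting its own regex, so a fix lands once instead of everywhere.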
Why This Happens: The Context Problem
As we mentioned above, this isn't a knowledge problem; the agents and models actually know these are not best practices. It's a context and reward-function problem. Unless the agents are grounded in what you're trying to build and which principles to follow, they will keep falling into these traps.
Here’s why:
1. Tunnel Vision on Goals
Coding agents are optimized to autonomously reach specific objectives. When you say “make this faster,” they focus intensely on that goal without understanding:
- How this code fits into the larger system
- What trade-offs are acceptable in your specific context
- What constraints and requirements aren’t explicitly stated
- How changes might ripple through other parts of your application
2. Missing Architectural Understanding
Most coding agents work with isolated code snippets or single files. They can’t see:
- The overall system architecture and design patterns
- Dependencies between different components
- Business logic that spans multiple modules
- Non-functional requirements like scalability, security, or compliance needs
3. Lack of Historical Context
Agents don’t understand why code was written the way it was:
- Previous architectural decisions and their reasoning
- Performance optimizations based on real-world usage patterns
- Workarounds for specific bugs or limitations
- Evolution of requirements over time
4. No Domain Knowledge
Without context about your business domain, agents might:
- Violate industry-specific regulations or standards
- Ignore domain-specific performance requirements
- Make assumptions that don’t match your users’ needs
- Implement solutions that work technically but fail business requirements
So What Can You Do About It?
The solution isn't to abandon coding agents; they're incredibly powerful tools when used correctly. Instead, you need to give them the context to make good decisions. At gjalla, we believe models and agents will keep improving, but better models and agents alone will not solve this problem. Why not? Because your context is not the same as the next company's. So how can you fix this?
1. Start with Architecture Documentation
When your agent has access to system-level architectural information, it can ground its designs and decisions in the context of the broader system. You should maintain up-to-date architecture documentation, data traces, and product requirements.
2. Document Decisions and Priorities
In addition to the functional pieces above, agents make technical decisions left and right. To keep them aligned with where you want to go, provide:
- Architectural decision records (ADRs)
- Why certain patterns were chosen over alternatives
- Business trade-offs you are willing to make versus those you are not
- Principles you want the entire team to follow
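ADRs don't need to be heavy. A lightweight record in the common Status/Context/Decision/Consequences template is often enough; the decision below is a made-up illustration of the format:

```markdown
# ADR 007: Use the ORM for all database access

## Status
Accepted

## Context
Raw SQL built via string concatenation has caused injection risks,
and query patterns were diverging across services.

## Decision
All new persistence code goes through the shared ORM layer; raw SQL
requires an explicit review exception.

## Consequences
Slightly more boilerplate per query, in exchange for consistent
access patterns and parameterized queries by default.
```

Records like this give an agent the "why" behind a pattern, so it stops re-litigating decisions your team already made.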
How gjalla Bridges the Context Gap
This is where gjalla unlocks productivity for your team. Instead of spending hours rewriting AI-generated code or maintaining documentation by hand, gjalla automatically generates and maintains the contextual documentation that agents need to make good decisions:
- Live architecture documentation that stays up to date as your codebase changes
- Functional capabilities, goals, and principles to keep the agents aligned
- A source of truth for human and AI team members
Now what?
Coding agents are the biggest developer-productivity unlock we've seen in decades, but they're not magic. They need the same contextual understanding that human developers rely on to make good decisions. The way our teams work is changing, and the work of keeping everything aligned, so we can ship without worry, is moving up the stack.
The gap between code and context is widening. Stop the chaos. Align your teams. Move fast, with confidence, with gjalla.