Better Software Engineering with AI: Tools That Actually Work
Tired of AI hallucinating outdated code? Context7 eliminates guesswork with real-time docs. Coding agents like Roo, Cline, and Aider handle the rest. Here's what actually works.
November 24, 2024
•
4 min read
•
Updated June 18, 2025
AI writes confident garbage. Deprecated APIs. Hallucinated functions. Code that looks right but breaks.
I fixed this with two things: Context7 for real-time docs, and coding agents that respect project rules.
Context7: Stop the Hallucinations
Context7 injects the latest official documentation directly into the AI's context. Version-specific. Not guesses.
Building with Next.js? It pulls current Next.js docs. Using Supabase? Real-time API reference. Switch libraries? It adjusts.
Before: "Use getServerSideProps."
Deprecated in App Router.
After: "Use generateMetadata per Next.js 15 docs."
Correct. Works.
Add Context7 as an MCP to your coding agent: https://context7.com/
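For most MCP-capable agents, that's one entry in the agent's MCP settings. The shape below follows Context7's README at the time of writing; check the site for your agent's exact format:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```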
The more projects indexed, the better AI coding gets for everyone.
Coding Agents: Automation That Respects Context
Autocomplete tools suggest lines. Agents complete workflows.
I've tried several—Roo Code, Cline, Goose. Each has strengths, but I landed on Roo for daily use.
Why Roo?
It ships features faster than alternatives. The team iterates constantly. Features I needed showed up within weeks.
Security by default. .rooignore works like .gitignore—add patterns for sensitive files like .env, credentials, or secrets, and they're kept out of the LLM's reach. I can also filter which commands the agent can run. This matters when you're giving an AI access to your entire codebase.
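A minimal .rooignore might look like this (patterns are examples; the syntax follows .gitignore):

```
# .rooignore -- keep secrets out of the model's context
.env
.env.*
*.pem
credentials.json
secrets/
```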
Context management actually works. I've run Roo continuously for hours on complex refactors. It doesn't lose track. It doesn't forget decisions made 50 messages ago. The semantic memory implementation is solid.
Built-in browser use. The LLM can open webpages, interact with them, and verify changes. I've had it check my deployed site, find UI issues, and fix them—all without me manually testing.
Bring your own key. Roo lets you use your own API keys for any model: Claude, GPT, Gemini, or open-source models like Llama and Mistral. You're not locked into one provider. It also supports GitHub Copilot models, the Z.AI coding subscription, and Claude Code as backends.
Open Source Matters
I prefer open source tools in this space. With superintelligence on the horizon, the stakes are too high to trust closed systems. When AI becomes powerful enough to write entire codebases, we need transparency. We need to understand how these tools make decisions, what data they collect, and who controls them.
Open source means auditability. Community verification. No vendor lock-in.
What matters isn't which agent you pick; it's how well it handles context, security, and long-running tasks.
The Context Problem
Every AI conversation hits token limits. Long chats lose coherence. The agent forgets earlier decisions. You repeat yourself.
Good agents condense context without losing information. They summarize completed work, retain critical details, and keep conversations focused.
Bad agents dump entire chat histories into every request. Tokens explode. Costs spike. Performance degrades.
The best agents maintain semantic memory—what matters, not just what was said.
AGENT.md: Teach Your Agent Once
Here's the key: standardize your project rules in an AGENT.md file.
Think of it as a configuration file for AI. Most coding agents honor it—Roo, Cline, Aider, and others automatically read AGENT.md for project-specific instructions.
Example from my Next.js project:
```markdown
# Agent Rules Standard (AGENT.md)

## Setup
- Next.js app using Chakra UI
- Static content for portfolio + blogging

## Instructions
- No background/foreground colors (use Chakra themes for dark/light mode)
- Use Context7 MCP for documentation verification
- Use pnpm for package manager
- Server runs with `pnpm run dev` (webpack hot reload)
```
Now every agent knows:
- Don't hardcode colors (breaks dark mode)
- Verify docs with Context7
- Use pnpm, not npm/yarn
One file. Every agent obeys. No repeating yourself per conversation.
The Workflow
- AGENT.md defines project rules. Stack, conventions, constraints.
- Context7 provides real-time docs. No hallucinated APIs.
- Coding agent handles implementation. Reads codebase, makes changes, iterates.
- Model (Claude/GPT/Gemini/GLM) powers it all. Pick based on task.
I describe what I want. The agent figures out how. Context7 keeps it accurate. AGENT.md keeps it consistent.
Why This Matters
Context management is the bottleneck. Not model intelligence. Not prompt engineering.
If your agent can't condense 200 messages into actionable context, it fails. If it can't remember your project uses Chakra themes, you repeat yourself every conversation.
AGENT.md solves consistency. Context7 solves accuracy. Good agents solve memory.
Try This
Two things:
- Add Context7 as an MCP to your coding agent. Stop hallucinations. https://context7.com/
- Create an AGENT.md file. Document your stack, rules, conventions. Let agents read it.
Pick any agent—Roo, Cline, Aider, whatever. Test it on one task. See if it respects AGENT.md and uses Context7 effectively.
The skill isn't writing better prompts. It's structuring your workflow so AI amplifies productivity instead of creating chaos.
---
What's your setup? Using AGENT.md? Found better ways to manage context? Different agents working well?
This space evolves fast. I'm always looking for better workflows.