What I learned using AI to code for a year
Tips and tricks for software engineers using AI to code
In my last post/mail I explained a bit of how I've been using AI to code daily for over a year now. While walking through that workflow, I kept noticing patterns that might be valuable for you too:
Learn or review the fundamentals. This is crucial. Learn how to design systems. Learn how to review code fast and how to architect a project. Read existing open source projects. Go deeper into whatever framework or tech stack you're using. The AI will be constrained by how much you know about a field; you're the only one able to validate what it's generating, so you need to get good at it.
Use AI to learn. I have this habit of opening the ChatGPT app on my phone while lying in bed and asking it to generate snippets of code for some design I have in mind. I do that for hours and learn a lot in the process. Give it API documentation or existing code bases and ask about whatever you want to know. Sketch little systems. Discuss the design with it. Ask it to adapt to your own case. You'll learn a lot, tailored to what you need, and really fast!
Review every diff and every plan. Imagine you're pair programming with a junior engineer who is eager to dump hundreds of lines of code. You're the experienced engineer, the one in control who has to tame the beast, and every single line of code is still your responsibility. So review with care, and refactor and clean up your code base frequently if needed.
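For example, a quick review pass in the terminal might look like this (a minimal sketch, assuming the agent left its changes uncommitted in a Git working tree):

```bash
# See which files changed and how much
git diff --stat

# Walk through the full diff before accepting anything
git diff

# Stage hunks interactively so every change gets a conscious yes or no
git add -p
```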
Create experiments. With AI coding you have a button that dumps hundreds of lines of code for you in a blink. Leverage that! Especially when you're trying some new API, or aren't sure about a design, create experiments to validate it and throw them away when finished, if needed.
Git is your best friend. Create a branch for every new experiment. Once the experiment is done, branch again into a feature branch. Merge only when you're happy with the result. Roll back or just ditch entire branches if you're not OK with them.
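A sketch of that flow in plain Git (the branch names are just made-up examples):

```bash
# Throwaway experiment on its own branch
git switch -c experiment/streaming-api
# ...let the agent generate code and validate the idea...

# Convinced? Start a clean feature branch from main
git switch main
git switch -c feature/streaming-api
# ...redo the work properly, in small reviewed steps...

# Merge only when you're happy with it
git switch main
git merge feature/streaming-api

# Not convinced? Just delete the experiment
git branch -D experiment/streaming-api
```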
Use an editor. Even if you use Claude Code or another terminal-based agent, make sure you can follow along with what it's generating in your preferred editor. Remember, you should be in control.
Take good care of the context. Yep, context engineering is really important, but it's also super easy to do. Create one chat/session for each little thing you need. Select only the source files you know are important for the feature you're currently implementing. Break large features into smaller ones and create specific chats for each of them. Keep an eye on the context metrics in your tool and don't cross more than 50-60% of the window.
Use Plan mode. If your AI agent/tool doesn't support it, just "simulate" it by asking it to only plan and not edit any code. Let the LLM leverage its chain of thought.
MCPs are good, but most of the time having good tools is much better, especially if you're on a Linux/Unix box. Agents thrive in a Unix environment because of its philosophy/design: small programs doing one thing well and communicating through plain text. So all those terminal programs you see around? They all talk text and each does one thing well. They can give your text-hungry LLM agent everything it needs!
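A few examples of ordinary terminal tools that already speak the agent's language (a sketch assuming ripgrep, jq and curl are installed; the search term and URL are made up):

```bash
# Find every call site of a function across the repo
rg -n "parse_config" --glob '*.py'

# List source files while skipping dependency noise
find . -name '*.ts' -not -path './node_modules/*'

# Turn an API response into structured text the agent can reason about
curl -s https://api.example.com/status | jq '.'
```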
Give as much valuable context as you can. Help the agent help you. Visual cues are pretty great here. If you're doing web dev, plugging the agent into a browser is really valuable! Or at least give it screenshots of what needs to be fixed or changed on a given page.
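One low-tech way to do that (a sketch assuming macOS; the screenshot command differs per platform, and many agents can read an image file you point them to):

```bash
# Capture the broken part of the page interactively (macOS)
screencapture -i /tmp/broken-layout.png

# Then reference the file in your prompt, for example:
# "The header overlaps the nav bar, see /tmp/broken-layout.png. Fix the CSS."
```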
Persist. While you can trash as much generated code as you want, remember that you're still adding pieces to a long-term project. Try to incorporate the AI tools/agents into your existing code base. Do it in small bits, in safe branches you can ditch later. Just like a musical instrument, these tools take time to get used to, but when you do, they feel like part of you.
Keep yourself updated. AI for coding is one of the most profitable markets for labs like OpenAI and Anthropic, so you should expect that this field won't stop moving fast any time soon. Follow their updates (and this newsletter ;-), take new LLM models and tools for a spin, check their official guidelines, keep learning from other engineers, and share your own experience as much as you can.
Those are the ones that come to mind right now, but I'm sure I forgot some, and more patterns will emerge and tools will change, so please follow along here and on X for more.

