Reviewing generated code

Published on June 26, 2025

These past few weeks, several friends and colleagues have asked me for advice on how to review AI-generated code. This post summarizes my thoughts on the matter; I will add to it as tooling evolves and my thinking matures.

Keeping up with the pace

Reviewing code used to be a steady, manageable part of the engineering cycle. Now, with coding agents and generative tools, the volume and speed of pull requests have increased dramatically. Senior engineers are often left wondering how to keep up, and juniors are questioning the value of code quality when machines can generate so much, so quickly.

While generative engineering techniques will eventually enable faster PR review and feedback incorporation, the current tooling, for example GitHub Copilot's integration with GitHub and its Coding Agent, as well as Cursor's BugBot, is not quite there yet.

You will always be the villain

Being the reviewer can sometimes feel like playing the bad guy. Trust me, this feeling is nothing new; I was on both sides long before GenAI came along. These days, though, fixing code based on review feedback can take longer than writing it in the first place, which leads to frustration on both sides. Pick your battles, but stay firm on what really matters.

Automate the obvious (now with GenAI)

The best way to reduce friction is to automate what you can, so you do not have to repeat yourself in every review. The more you automate, the more you can focus on what matters. Agree on coding guidelines with the team, then enforce them in CI by failing the build on violations.
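
As a minimal sketch, assuming a Python codebase with Ruff as the agreed linter (swap in whatever matches your stack), a GitHub Actions workflow that breaks the build on guideline violations could look like this:

```yaml
# .github/workflows/lint.yml -- minimal sketch; assumes a Python codebase and Ruff
name: lint
on: pull_request

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install ruff
      # Ruff exits non-zero on any violation, which fails the check on the PR
      - run: ruff check .
```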

This approach is not new. However, today's tooling allows us to share guidelines in a more "actionable" way than a Markdown file in your docs repository. Consider implementing Cursor rules or GitHub's instruction files. By integrating your coding guidelines into these files, you can significantly reduce the number of review comments needed. This is especially effective for establishing consistent design patterns across your codebase.

While traditional CI cannot enforce these nuanced guidelines, instruction files can now guide coding agents to avoid over-engineering solutions and maintain appropriate complexity levels.
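
For illustration, a repository-level instruction file, here a hypothetical .github/copilot-instructions.md (Cursor rules work much the same way), might encode guidelines like these; the specific rules below are examples, not prescriptions:

```markdown
<!-- .github/copilot-instructions.md -- illustrative sketch, not a real team's guidelines -->
# Coding guidelines for this repository

- Prefer small, composable functions over deep class hierarchies.
- Do not add new dependencies without calling them out in the PR description.
- Follow the existing error-handling pattern instead of inventing a new one.
- Keep solutions as simple as the task requires; avoid speculative abstractions and layers.
```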

Shift the burden

Use a PULL_REQUEST_TEMPLATE.md to set clear expectations and ensure the basics are covered before reviewers dive in. Ask for self-reviews, screenshots, logs, and, in a data science context, metrics: anything that proves the author has thoroughly scrutinized their own code.
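
A minimal sketch of such a template, with illustrative checklist items, could look like this:

```markdown
<!-- .github/PULL_REQUEST_TEMPLATE.md -- illustrative sketch -->
## What and why

<!-- Short summary of the change and the problem it solves -->

## Self-review checklist

- [ ] I read the full diff myself and left comments where the reviewer needs extra context
- [ ] Screenshots or logs are attached for anything user-facing or operational
- [ ] Metrics or evaluation results are attached (data science changes)
- [ ] Tests cover the new behaviour
```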

Share the burden

If coding agents allow engineers to write code faster, the result should not only be more PRs. As coding agents take on more traditional junior-level tasks, junior engineers need to develop senior-level skills more quickly.

Let juniors learn by reviewing each other's PRs first, followed by a senior review, then close the loop by having them reflect on what they missed. This builds stronger teams and distributes the review load.

Moving target

The tools and techniques for code review will keep evolving, and so will the challenges we face. For now, focus on what you can automate, set clear expectations, and help your team grow into their expanding responsibilities.