AI coding best practices

#ai

Treat AI coding assistants as very enthusiastic junior developers who have a tendency to get carried away.

AI assistants are eager, fast, and broadly knowledgeable, but they lack context about your specific codebase, your business logic, and the consequences of their suggestions. Assume they will confidently do the wrong thing unless constrained. You need to set boundaries, define "done", and verify.

Keep tasks small and atomic

The single most important practice is constraining scope. Each task should be the smallest viable change that is independently testable, reviewable, and deployable. Chunks of work larger than this should be broken down into a series of tasks. Once a task has been manually verified, commit the changes and move on to the next one.

Use an explore, plan, code, verify workflow

While the details will change depending on the size of what you're trying to achieve, the core workflow remains the same.

Each step will involve a dialogue with the assistant as you refine the outputs until you're happy with them.

Explore

  • Provide a prompt describing the issue or feature in as much detail as possible
  • Define acceptance criteria
  • Pass relevant files and messages to the assistant
  • Use an AGENTS.md file for constraints that apply to every task
  • For bug fixes, ask the assistant to create a test that fails because of the bug

Plan

Before the assistant changes any files, have it list what it's going to do. Critique this plan closely. Ask if there are alternative ways to achieve the same goal. If you don't understand why something is being done in a particular way, ask it to explain. Ensure the work is broken down into atomic tasks, each with its own tests and acceptance criteria. For bigger pieces of work, have it save the plan to a file so it can be referred back to in new sessions.
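A saved plan might look something like this (the file name, feature, and tasks are invented for illustration):

```markdown
# Plan: add CSV export to reports

## Task 1: extract report data into a serialisable structure
- Acceptance: existing HTML report output is unchanged; unit tests pass

## Task 2: add a CSV serialiser with tests
- Acceptance: generated CSV round-trips through a CSV parser

## Task 3: wire up the export button
- Acceptance: clicking "Export CSV" downloads a file matching the on-screen report
```

Each heading is one atomic task: complete it, verify it, commit it, then point the assistant at the next one.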

Code

Tell the assistant to implement a single task with accompanying tests.

Verify

This is the most important part. Go through the code line by line, ensuring you understand what's going on. If you don't understand why something has been done in a particular way, ask about it.

Git diffs are good for this, but it can be easier to review via a merge request. Check that the tests actually test the things you care about. Ensure all automated tests and linting checks pass. Manually test the changes to confirm they work the way you want. Using a separate AI agent to evaluate the changes can help, but everything still needs to be verified manually.

Once you're happy with the task, ensure all the changes have been committed and move on to the next one.

In short:

  • Define acceptance criteria before you ask for code
  • Make the AI propose a plan before it writes code
  • Verify with diffs and merge requests

Provide rich context

AI assistants don't know your coding standards, your team's conventions, or why that seemingly redundant validation exists. Before asking for code, briefly explain the context: what the code needs to do, what constraints exist, what patterns you're already using. "We're using dependency injection throughout this project" or "this needs to work with PHP 8.1" saves a lot of back-and-forth.

Use an AGENTS.md file to provide information about the project and the tools and standards used.

Symlink CLAUDE.md to AGENTS.md so the same file can be used by Claude Code and other AI agents.
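An AGENTS.md might contain entries like these (the project details are invented for illustration):

```markdown
# Project notes for AI agents

- PHP 8.1, Symfony 6; follow PSR-12 formatting
- Use constructor-based dependency injection throughout
- Run the test suite before declaring a task done
- Never edit files under `vendor/` or generated migrations
- Keep each change small enough to review in a single merge request
```

Keep it short: a handful of hard constraints the assistant must always respect beats a long style guide it will skim.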


Links

  • [[2025-W52]]