
Code Review Shouldn't Catch Lint Errors: What to Automate

Senior engineers waste hours catching formatting issues in code review. Learn what should be automated so code review focuses on what humans do best.

"Can you fix the formatting on line 47?"

"This import isn't sorted correctly."

"You're using single quotes but we use double quotes."

These comments appear in code reviews every day. They're correct. They're also a waste of senior engineer time.

Code review exists for humans to catch what automation can't: logic errors, design issues, security vulnerabilities, and maintainability concerns. When reviewers spend time on formatting and style, they have less attention for what actually matters.

The principle is simple: automate the automatable. But teams struggle with where to draw the line.

The Hierarchy of Code Review Value

Not all code review comments provide equal value. There's a clear hierarchy:

High Value (Humans Only)

These require human judgment and can't be automated:

  • Business logic correctness: Does this code do what it's supposed to?
  • Design decisions: Is this the right abstraction? Will it scale?
  • Security implications: Could this be exploited? Are there edge cases?
  • Performance concerns: Will this be slow with real data?
  • Maintainability: Will future developers understand this?

Medium Value (Humans with AI Assistance)

These benefit from human judgment but AI can help:

  • Error handling completeness: Are all failure cases covered?
  • Test coverage adequacy: Are the right things being tested?
  • API design: Is this interface intuitive?
  • Naming quality: Are these names clear and consistent?

Low Value (Automate Completely)

These should never reach human reviewers:

  • Formatting: Indentation, spacing, line length
  • Import organization: Order, grouping, unused imports
  • Style rules: Quotes, semicolons, bracket placement
  • Simple type issues: Missing annotations, obvious type errors
  • Unused code: Dead code, unused variables

If a computer can detect it with 100% accuracy, a computer should fix it.

What Reviewers Currently Waste Time On

Surveys of code review comments reveal a frustrating pattern: a significant share of comments address issues that should be automated:

Style and Formatting (15-25% of comments)

  • "Fix indentation"
  • "Add trailing comma"
  • "Line too long"
  • "Wrong quote style"

These are objectively right or wrong. No human judgment needed.

Simple Syntax Issues (10-15% of comments)

  • "Missing semicolon"
  • "Unused import"
  • "Variable declared but never used"
  • "Console.log left in code"

Linters catch these with 100% accuracy. Why are humans spending time on them?

Obvious Type Errors (5-10% of comments)

  • "This should return a string, not number"
  • "Missing required property"
  • "Type mismatch on argument"

TypeScript or similar type systems catch these automatically.

Documentation of the Obvious (5-10% of comments)

  • "Add JSDoc to this function"
  • "Missing parameter description"
  • "Update the README"

Some documentation is valuable. Documentation that just restates the obvious is busywork.

Total Waste: 35-60% of Review Time

In many teams, over a third of code review effort goes to issues that could be caught and fixed automatically. That's a senior engineer's afternoon spent on work a computer should do.

Building the Automation Stack

Here's how to automate each category so reviewers can focus on what matters.

Layer 1: Formatting (Zero Human Involvement)

Tools: Prettier, Black, gofmt, rustfmt

Configure formatters to run automatically, using husky and lint-staged to format staged files on every commit:

// package.json
{
  "scripts": {
    "format": "prettier --write ."
  },
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "**/*": "prettier --write --ignore-unknown"
  }
}

Formatting happens on save or commit. No human ever sees formatting issues because they're fixed before code is committed.
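
As a backstop, CI can also verify that nothing unformatted slipped through. A minimal check step (assuming a standard Node setup) looks like:

- name: Check formatting
  run: npx prettier --check .

Unlike --write, --check fails the build without modifying files, so the actual fix stays on the developer's machine.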

Rule: If Prettier (or equivalent) can handle it, humans never discuss it.

Layer 2: Linting (Automated Detection, Automated Fixing)

Tools: ESLint, Pylint, Rubocop

Configure linters with auto-fix:

// .eslintrc.json
{
  "plugins": ["import"],
  "rules": {
    "no-unused-vars": "error",
    "import/order": ["error", { "newlines-between": "always" }],
    "no-console": "warn"
  }
}

Run with auto-fix in CI:

- name: Lint and fix
  run: npm run lint -- --fix

- name: Commit fixes
  run: |
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add -A
    git commit -m "Auto-fix lint issues" || true
    git push

Most lint issues have deterministic fixes. Automate them.

Rule: If ESLint --fix handles it, humans never comment on it.

Layer 3: Type Checking (Strict by Default)

Tools: TypeScript, mypy, Flow

Enable strict mode:

// tsconfig.json
{
  "compilerOptions": {
    "strict": true,
    "noImplicitAny": true,
    "noImplicitReturns": true
  }
}

Type errors block CI. No human needs to comment "this should be typed" because untyped code doesn't pass CI.
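
In CI, this is a single no-emit compile; a minimal step, assuming the tsconfig.json above:

- name: Type check
  run: npx tsc --noEmit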

Rule: If TypeScript catches it, the PR doesn't reach reviewers until it's fixed.

Layer 4: Security Scanning (Automated Detection)

Tools: npm audit, Snyk, Dependabot alerts

Run in CI:

- name: Security scan
  run: npm audit --audit-level=high

Known vulnerabilities are detected automatically. Reviewers don't need to memorize CVEs.
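
To get dependency updates proposed automatically rather than just flagged, a basic Dependabot configuration (assuming an npm project hosted on GitHub) is enough:

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"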

Rule: Security scanners catch known issues. Reviewers focus on novel security concerns.

Layer 5: AI-Powered Fixes (Complex but Pattern-Based)

Tools: Devonair, AI maintenance tools

For issues that are pattern-based but complex to fix manually:

@devonair fix all lint violations
@devonair update deprecated API usage
@devonair standardize error handling

AI handles the tedious transformation work. Reviewers verify the output.

Rule: If AI can reliably fix it, let AI fix it. Humans verify rather than create.

What Human Reviewers Should Focus On

With automation handling the mechanical, reviewers can focus on high-value feedback.

Logic and Correctness

  • Does this algorithm handle edge cases?
  • Are the business rules implemented correctly?
  • What happens when the input is unexpected?
  • Is this behavior consistent with the rest of the system?

Design and Architecture

  • Is this the right level of abstraction?
  • Will this approach scale with usage?
  • Does this fit with existing patterns?
  • Will this be maintainable in six months?

Security Thinking

  • Could this be exploited by malicious input?
  • Are there authorization checks where needed?
  • Is sensitive data handled appropriately?
  • What's the blast radius if this fails?

Performance Implications

  • Will this be fast with production data volumes?
  • Are there N+1 queries or similar issues?
  • Is this caching strategy appropriate?
  • What are the memory implications?

Knowledge Sharing

  • Would a less experienced developer understand this?
  • Are there non-obvious gotchas to document?
  • Is there context that should be captured in comments?
  • Does this align with team conventions?

The Culture Shift

Moving from "review everything" to "automate the automatable" requires culture change.

Stop Commenting on Automated Issues

If your pipeline has Prettier configured, don't comment on formatting, even if the output looks odd to you. The formatter made a choice. Trust the tool or change the tool's configuration.

Update Your Review Checklist

Old checklist:

  • [ ] Code is properly formatted
  • [ ] No linting errors
  • [ ] Types are correct
  • [ ] Logic is correct
  • [ ] Design is appropriate

New checklist:

  • [ ] CI passes (covers formatting, linting, types)
  • [ ] Logic handles edge cases
  • [ ] Design is appropriate for the problem
  • [ ] Security implications considered
  • [ ] Performance is acceptable

Praise Different Things

Old praise: "Nice clean code, well formatted!"

New praise: "Good abstraction choice here—this will scale well."

Celebrate the things humans contribute that automation can't.

Trust the Pipeline

If CI passes, the mechanical stuff is fine. Start your review assuming the code is clean, and focus on whether it's correct.

Measuring the Shift

How do you know you've successfully shifted review focus?

Comment Category Analysis

Categorize review comments for a month. Track what percentage are about:

  • Formatting/style (should approach 0%)
  • Lint issues (should approach 0%)
  • Type errors (should approach 0%)
  • Logic/design (should increase)
  • Security (should stay consistent or increase)

Review Time vs. Value

Track time spent reviewing vs. issues caught.

Before automation: 2 hours of review → 15 comments, 10 about style

After automation: 1 hour of review → 5 comments, all about logic/design

Less time, higher value.

Reviewer Satisfaction

Survey reviewers:

  • "Do you spend time on mechanical issues?"
  • "Do you have enough time for deep design review?"
  • "Is code review valuable or tedious?"

Answers should improve as automation takes over grunt work.

The End State

In a well-automated codebase:

  1. Developers commit code → Pre-commit hooks format and fix simple issues
  2. CI runs → Linting, type checking, security scanning block bad code
  3. AI fixes patterns → Complex but mechanical issues are auto-fixed
  4. Human reviews → Focuses exclusively on logic, design, security, performance
  5. Better software ships → Humans spent their time on what matters

Code review becomes a high-value activity where experienced engineers share knowledge and catch issues that require judgment. Not a gatekeeping exercise where everyone nitpicks semicolons.

Your senior engineers didn't spend years learning software design to comment on quote styles. Stop wasting their expertise on work computers should do.


FAQ

What if developers bypass pre-commit hooks?

Make CI the final gate. Even if pre-commit is bypassed, CI runs the same checks. Code that fails linting doesn't merge, regardless of how it got to that state.
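
Concretely, the CI job runs the same tools in check-only mode (no --fix, no --write), so a bypassed hook just means the pipeline fails. A sketch, assuming the scripts shown earlier:

- name: Check formatting
  run: npx prettier --check .

- name: Lint
  run: npm run lint

Pair this with branch protection that requires these checks to pass before merging.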

What about style preferences the team disagrees on?

Pick a standard, configure the formatter, and move on. The specific choice (tabs vs. spaces, single vs. double quotes) matters less than having a consistent automated standard. Don't spend human review cycles on preferences.

How do we handle legacy code that doesn't pass new rules?

Introduce rules gradually. Use ESLint's per-file overrides or gradually roll out stricter checking. AI tools can help migrate legacy code to new standards without manual file-by-file work.
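
For example, ESLint's overrides field can keep a rule as a warning in legacy directories while new code gets the strict version; a sketch, assuming legacy code lives under src/legacy/:

// .eslintrc.json
{
  "rules": {
    "no-unused-vars": "error"
  },
  "overrides": [
    {
      "files": ["src/legacy/**/*.js"],
      "rules": {
        "no-unused-vars": "warn"
      }
    }
  ]
}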

What if automation makes wrong fixes?

Every fix goes through CI and code review. Wrong fixes fail tests or get caught by reviewers. The safety nets exist whether changes are human or AI generated.

Won't this make code review too fast?

Good. The goal isn't lengthy reviews—it's effective reviews. If automation handles 50% of what reviewers used to catch, reviewers can spend that time on deeper analysis or review more PRs.