
What Semi-Automated Code Reviews with Claude Code Look Like at Grove
P.S. I took the images on my widescreen monitor while working on a real feature. If there’s interest, I’m happy to put together smaller screenshots or a walkthrough video.

TL;DR

Our workflow today looks roughly like this:

- Kick off an agent review locally via /grove_*_review
- The agent reviews the diff, runs best-practice checks, and spins up the system locally
- Full E2E flows run automatically (happy, sad, chaotic paths)
- Failures are diagnosed with context instead of just “test failed”
- A PR sweep catches regressions and tech-debt risks
- Humans review mission-critical logic manually
- A final PR description is generated automatically

...
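For readers unfamiliar with the mechanism behind the first step: Claude Code lets you define project-level slash commands as markdown files under `.claude/commands/`, and a command like /grove_*_review would live there. The file name and prompt below are a hypothetical sketch of what such a command might contain, not Grove’s actual command:

```markdown
<!-- .claude/commands/grove_backend_review.md — hypothetical example -->
Review the current branch against main.

1. Run `git diff main...HEAD` and read every changed file.
2. Check the changes against our best-practice guidelines.
3. Spin up the system locally and run the full E2E suite,
   covering happy, sad, and chaotic paths.
4. For every failure, report a diagnosis with context
   (what broke, where, and the likely cause), not just the
   name of the failing test.
```

Inside a Claude Code session this would then be invoked as `/grove_backend_review`, and the agent follows the prompt as its review checklist.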