0.6 - Auto Review & Code Bridge #454
Replies: 2 comments 4 replies
Review works great! One thing I kind of wish is that we could lean on Opus more outside of auto sessions. I often find that model knows the solution while codex is still searching for it. My non-auto prompts often include,
@zemaj have you seen 0.6.0 deleting files it's trying to edit? I've never seen this before. It restores the version from git without even asking. It also keeps saying it doesn't have time in this session. I only noticed both of these behaviors after the last update. This is with codex-max (medium).
Two big updates just landed in the 0.6 release: Auto Review and Code Bridge. Both are aimed at making Code spend less time spitting out code and more time checking whether that code actually works.
Auto Review
I have been trying to ship this feature since I first started building Code. I have thrown away a lot of versions. This is the first one that actually feels like it "just works", and it is a huge upgrade.
Auto Review is on by default. Any time an agent turn changes code, a background review thread spins up. It is a bit like having a patient, slightly obsessive pair programmer sitting behind you, watching every change, pointing out mistakes and handing you ready-to-apply patches.
Under the hood it works like this:
- A background review runs (similar to /review), but with a carefully trimmed slice of conversation context so it does not undo deliberate changes or introduce functional regressions.
- Reviews run on codex-5.1-mini-high, which is fast and very light on tokens.

It will not catch every single bug, but in real use it has been a very noticeable improvement to code quality. I will run it through Terminal Bench soon to get some numbers, but so far in day-to-day coding it has been pretty incredible.
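The flow described above can be sketched roughly like this. This is a toy illustration, not Code's actual implementation: `review_diff`, `on_agent_turn`, and the `findings` queue are all made-up names, and the "reviewer" just flags leftover debug prints where the real thing calls a model.

```python
import queue
import threading

# Toy sketch of the Auto Review flow: every agent turn that changes code
# kicks off a background review thread, and findings come back async.
# `review_diff` is a stand-in for the real model call.
findings: queue.Queue = queue.Queue()

def review_diff(diff: str) -> list[str]:
    # Pretend-reviewer: flag added lines that look like leftover debug prints.
    return [
        f"possible leftover debug print: {line[1:].strip()}"
        for line in diff.splitlines()
        if line.startswith("+") and "print(" in line
    ]

def on_agent_turn(diff: str) -> None:
    # The main session is never blocked; the review runs on a daemon thread
    # and its findings arrive on the queue whenever they are ready.
    threading.Thread(
        target=lambda: [findings.put(f) for f in review_diff(diff)],
        daemon=True,
    ).start()

on_agent_turn("+x = compute()\n+print('debug', x)")
print(findings.get(timeout=2))
```

The point of the queue is that review results are advisory and asynchronous: the main session keeps going, and patches are surfaced when they land.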
Code Bridge
Code Bridge is my attempt to fix the "last mile" problem for coding CLIs.
CLIs are at their best when they can run and see the programs they write. For shell scripts and command line tools the feedback loop is tight: you run a command, Code sees stdout and stderr, and can react. As soon as you move to web apps, mobile apps or desktop apps, that loop breaks. Half the time it feels like I am just copy pasting console output into Code by hand.
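That tight loop for command-line programs is easy to picture: run the command, capture the exit code plus stdout and stderr, and hand all of it back to the model. A minimal sketch (the function name is illustrative, not part of Code):

```python
import subprocess
import sys

def run_and_observe(cmd: list[str]) -> dict:
    """Run a command and capture everything an agent needs to react to."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return {
        "exit_code": proc.returncode,
        "stdout": proc.stdout,
        "stderr": proc.stderr,
    }

# The agent can inspect the result and decide what to do next.
result = run_and_observe([sys.executable, "-c", "print('hello')"])
print(result["exit_code"], result["stdout"].strip())  # prints: 0 hello
```

Web, mobile, and desktop apps have no equivalent of this call: their errors surface in a browser console or device log the CLI cannot see, which is the gap Code Bridge targets.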
Code Bridge is a local version of something like Sentry that plugs your app straight into your coding CLI and streams data back in real time. It is safe to ship in your codebase. It is disabled by default in production, so it only affects dev builds.
You can install it by asking Code to pull in https://github.com/just-every/code-bridge. In a lot of setups it will simply work: when you run your app, errors automatically flow into Code without you wiring anything up. In more advanced setups the bridge lets Code "see" into the app, grab screenshots and even control a running app in real time.
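To make the "errors automatically flow in" idea concrete, here is a rough sketch of the kind of hook such a bridge could install in a Python app. Everything here is hypothetical: the event shape, the `APP_ENV` guard, and `BRIDGE_EVENTS` (a real bridge would stream events to a local server, not a list); the production check mirrors the disabled-by-default behavior described above.

```python
import json
import os
import sys
import traceback

# Hypothetical stand-in for the bridge's local event stream; a real bridge
# would POST these events to a local endpoint instead of appending to a list.
BRIDGE_EVENTS: list[str] = []

def report_to_bridge(exc_type, exc, tb) -> None:
    """Serialize an uncaught exception as a dev-only bridge event."""
    if os.environ.get("APP_ENV") == "production":
        return  # disabled in production builds, matching the default above
    BRIDGE_EVENTS.append(json.dumps({
        "type": "uncaught_exception",
        "error": exc_type.__name__,
        "message": str(exc),
        "stack": "".join(traceback.format_exception(exc_type, exc, tb)),
    }))

# Installing it as the excepthook wires every crash into the stream,
# with no per-call-site changes to the app itself.
sys.excepthook = report_to_bridge
```

The same pattern generalizes: hook the runtime's global error channel once (excepthook, `window.onerror`, a crash handler), serialize, and stream, so the app needs no manual instrumentation.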
We also expose an MCP server, so you can use it with any MCP compatible CLI, not just Code. Code does have a bit of extra integration on top, but the basics are shared.
Code Bridge is newer and not as battle-tested as Auto Review, but it is already usable. Over the next few weeks I will be expanding it with things like AI-based filters and build loops, so you can run full error-driven development cycles with almost no manual glue.
I am very excited about this release. It feels like the beginning of a shift away from "can the model write this file" toward "did we actually verify that what it wrote is correct". That is where most of the real leverage is, and I think you will see a lot of movement in this direction from coding CLIs over the next six months.