Why we stopped allowing autonomous fixes in production (even when tests pass)

Over the last few months, I’ve been experimenting with removing humans from a very narrow engineering execution path: fixing failing CI automatically.

The goal was not speed. It was correctness under pressure.

What I learned surprised me.

Autonomous fixes can work reliably in some cases — but there are classes of failures where allowing automatic execution is actually more dangerous than slow human intervention, even with full test coverage.

Some examples:

- Tests that validate behavior but not invariants (see the sketch after this list)
- Implicit contracts that aren’t encoded anywhere
- Fixes that restore correctness locally while degrading system-wide guarantees
- “Successful” patches that narrow future recovery paths
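To make the first class concrete, here’s a minimal hypothetical (the function and test are invented for illustration, not from a real incident). The test pins behavior, the invariant lives only in callers’ heads, and an autonomous patch can satisfy one while breaking the other:

```python
def dedupe(items):
    # The original implementation preserved first-seen order, an implicit
    # invariant callers depend on but no test encodes. An autonomous "fix"
    # that swaps in set() still passes the test below while silently
    # dropping the ordering guarantee.
    return list(set(items))

def test_dedupe():
    # Green: validates membership only, says nothing about order.
    assert sorted(dedupe([3, 1, 3, 2])) == [1, 2, 3]
```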

In several cases, the correct action for the system was to stop entirely, even though a patch existed and tests were green.

I ended up introducing hard constraints where autonomy must halt, and treating refusal as a valid outcome rather than a failure.
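For concreteness, the halting logic ended up shaped roughly like this (a simplified sketch; the flag names and halt conditions are illustrative, not our actual rule set):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    APPLIED = auto()
    REFUSED = auto()  # a first-class terminal state, not an error

@dataclass
class Decision:
    outcome: Outcome
    reason: str = ""

# Hard constraints where autonomy must halt, even with green tests.
# (Illustrative examples only.)
HALT_CONDITIONS = (
    ("touches_schema_migration", "schema changes narrow recovery paths"),
    ("widens_public_api", "implicit contracts may not be encoded in tests"),
    ("crosses_service_boundary", "a local fix can degrade system-wide guarantees"),
)

def decide(patch_flags: dict[str, bool], tests_green: bool) -> Decision:
    for flag, reason in HALT_CONDITIONS:
        if patch_flags.get(flag):
            return Decision(Outcome.REFUSED, reason)
    if not tests_green:
        return Decision(Outcome.REFUSED, "tests failing")
    return Decision(Outcome.APPLIED)
```

The part that mattered in practice: REFUSED is returned, not raised, so downstream tooling records it as a normal terminal state instead of an error to retry around.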

I’m curious how others reason about this boundary:

Where should autonomous systems stop, even if they can technically proceed?

How do you encode that line without just re-introducing humans everywhere?
