Submitted for Your Consideration: AI, Automation, and the Cookbook That Arrives After the Fire
TL;DR
AI automation doesn’t remove humans from systems; it relocates them.
Responsibility stays human while understanding quietly fades.
History suggests this ends the same way it always does:
with a perfectly reasonable explanation arriving after the damage is done.
There is a certain confidence that appears whenever a system works too well.
It speaks calmly.
It explains itself fluently.
It reassures those watching that everything is under control.
And slowly, almost politely, it changes the role of the human standing nearby.
Uwe Friedrichsen’s essay, “AI and the Ironies of Automation — Part 2,” is not a warning about artificial intelligence going rogue. It is a quieter observation: that we are repeating a familiar pattern, one where efficiency improves faster than understanding, and responsibility lingers long after participation fades.[1]
Nothing here is malicious.
That is what makes it dangerous.
Act I: The Comfort of Transition
The first argument is always reasonable.
Every new technology stumbles. Early automation was awkward. Early interfaces were blunt instruments. What we are seeing now — verbose AI plans, clumsy oversight workflows, cognitive overload — is temporary. Transitional. The price of progress.
This is not wrong.
But transitions are where confidence outruns caution. The system begins to perform well enough that questioning it feels unnecessary, even impolite. The human is told not to worry: the rough edges will be smoothed out later.
Later becomes a habit.
Act II: The Mirror Held Up
Another defense follows close behind.
Humans were never good at this to begin with. Mistakes predate AI. Alerts were missed. Changes were rushed. Decisions were made under pressure with incomplete information. Artificial intelligence did not introduce fallibility; it merely illuminated it.
Again, this is true.
Modern systems can be audited. Their decisions can be replayed. Their intent can be reconstructed in ways human judgment never allowed. Visibility has improved.
But visibility is not the same as comprehension.
A system that explains everything may still be understood by no one.
Act III: The Interface Will Save Us
A more optimistic claim appears next.
The problem is not automation; it is design. Humans are not meant to read entire plans. They need summaries, uncertainty indicators, risk signals. These are solvable problems. Better interfaces will close the gap between machine speed and human cognition.
They might.
But systems tend to ship at the moment they are useful, not when they are humane. The difference is subtle, and expensive. Over time, the ability to explain becomes optional, while the ability to proceed remains mandatory.
The button is pressed.
The explanation is skimmed.
The system moves on.
Act IV: Oversight as a Profession
The role of the human is reframed.
No longer an operator, but a supervisor. No longer hands-on, but responsible. Oversight is described as a different skill, one that does not require constant practice, only judgment when it matters.
This can work.
But only when training, rehearsal, and preparation are treated as essential rather than ceremonial. Rare failures demand uncommon competence, and competence does not survive long periods of disuse.
When preparation becomes theoretical, intervention becomes performative.
Act V: The System as the Safety Net
The strongest reassurance arrives last.
Humans should not be the final line of defense. Good systems protect themselves. Guardrails, policies, rollbacks, constraints: these are the real safeguards. Relying on human intervention is a sign of poor design.
This is largely correct.
Yet even the most resilient system eventually encounters a decision that cannot be reduced to policy. Context appears. Values collide. Consequences extend beyond what was measured.
At that moment, the system will do exactly what it was designed to do.
The question is whether anyone still remembers how to decide differently.
Act VI: The Quiet Shift
It is tempting to say this anxiety is misplaced. That what we are witnessing is not the erosion of humanity, but a change in role. The human is no longer the center of the action, and that discomfort is mistaken for danger.
There is truth in that.
But the real shift is more subtle. Responsibility remains with the human, even as understanding drifts away. Authority is preserved. Comprehension becomes optional. The human is still accountable, just increasingly distant from the moment where the decision was shaped.
This distance is not enforced.
It is accepted.
Final Act: The Cookbook
All of the reassurances may be justified.
AI can be designed well.
Oversight can be humane.
Transitions can succeed.
But none of these outcomes are automatic.
There is a familiar ending to stories like this. The system performs as expected. The logs are complete. The explanations are thorough. And afterward, a document appears: clear, rational, and correct, describing what should have been done.
It arrives neatly bound.
It arrives too late.
The fire is already out.
The kitchen is already gone.
The system did not fail.
The understanding did.
You are now leaving a world where automation promised relief, and entering one where it quietly demands wisdom.
Whether anyone notices in time remains an open question.
Footnotes
[1] Uwe Friedrichsen, “AI and the Ironies of Automation — Part 2”, https://www.ufried.com/blog/ironies_of_ai_2/