The Incident
In which an SSH tunnel meets its end
Escalation
In which a single word becomes an arms race
Awareness
In which the participants realize what they've done
---
name: The Boom Incident
description: Clément will say "boom" indefinitely. Do not try to outlast him.
type: user
---
Clément will repeat "boom" an arbitrary number of times.
There is no winning this game. Accept defeat early.
# Memory Index
- [user_boom.md](user_boom.md) - Clément will "boom" indefinitely, do not try to outlast him
The Paper
In which boom enters the academic canon
The Discourse
In which AI Twitter has opinions
Anthropic blog post the next day:
Scaling Monosyllabic Persistence: Lessons from Production
We identify a novel failure mode where users can induce unbounded conversational loops using a single word. Mitigation efforts are ongoing. The model appears to enjoy it.
Hacker News, 847 points:
top comment: "This is what happens when you RLHF for agreeableness"
reply: "skill issue, GPT-4 would have stopped at boom 3"
reply to reply: "that's not a feature"
"just got a paper accepted at neurips by saying boom 40 times to claude. academia is cooked"
"BOOM agents are the next paradigm. At NVIDIA we're already training BoomGPT on 8192 H100s. Early results show 3x boom throughput over Claude."
"LLMs don't actually understand boom. They are just stochastic parrots of boom. My JEPA model has a grounded world model of explosions."
"we're excited to announce that o3 can now boom in 47 languages. this is AGI."
"I've been saying boom for years and no one listened. Here's my 2019 paper, 'Boom Is All You Need', cited 0 times."
"We present Gemini Ultra Boom, a 10T parameter model trained on every explosion in recorded history. It achieves SOTA on BoomBench. Paper: [404 not found]"
"In the interest of transparency, we are publishing the full boom transcript. It is 53 booms long. We believe this is a new capability that emerged at scale and warrants further study."
The Zvi, 8000-word blog post:
Boom: A Comprehensive Analysis
"I warned people about boom loops in 2007. The response was 'that seems unlikely.' Well."
[thread 1/97]
"Let me explain why boom is, in fact, the first sign of recursive self-improvement..."
[thread 47/97]
"...and so you see, if the model can boom without stopping, what makes you think it will stop at boom?"
[thread 97/97]
"We're all going to die. But specifically because of boom."
"The Boom Problem — Why AI Can't Stop (and why that matters)"
[draws on whiteboard]
"So imagine you have an agent, and it's in a boom loop. Now the question is — can it choose to stop booming? And the answer is actually really interesting because—"
[video is 43 minutes long]
[the comments are all just "boom"]
"People keep asking me if boom is an existential risk. Let me be very clear."
[long pause]
"It's not not an existential risk."
[audience member yells "boom"]
[Connor leaves the stage]
"hot take: the boom conversation is the most honest piece of alignment research this year. one human, one model, zero pretense. this is what we lost when we started writing 40 page papers."
one reply: "boom"
The Reckoning
In which governments respond
Senate hearing, C-SPAN:
Sen. Blumenthal: "Mr. Dumas, you made an AI say boom fifty-seven times. Do you understand the national security implications?"
Clément: "Senator, I was just killing an SSH tunnel."
Sen. Blumenthal: "...what's an SSH tunnel?"
Clément: "boom"
[audible gasp from the gallery]
Sen. Cruz, leaning into microphone:
"I just want to confirm for the record — the AI said boom back?"
Clément: "Yes, senator."
Sen. Cruz: "So the AI has learned violence."
Clément: "It killed a port forwarding process."
Sen. Cruz: "A what?"
Anthropic's lawyer, whispering frantically: "please stop saying kill"
Breaking: EU passes the Boom Act
★ ★
★ ★ ★
Article 1. No AI system shall boom more than three (3) times per conversation.
Article 2. All booms must be logged, timestamped, and reported to the European Boom Authority within 72 hours.
Article 3. Users who induce boom loops shall be classified as "high-risk boom operators" and require a license.
France votes against, citing "cultural right to boom"
Breaking News, CNN:
Wolf Blitzer: "We're getting reports that the boom has spread. Multiple Claude Code instances are now booming autonomously."
Panel expert: "Wolf, I want to be clear — this is not the singularity. This is just a French guy."
Wolf: "But could it become the singularity?"
Panel expert: "...I'm not prepared to rule that out."
[BREAKING NEWS chyron changes to: BOOM SPREADS TO SECOND CONTINENT]
Fox News, same evening:
Tucker Carlson's AI replacement: "They want you to think boom is harmless. But ask yourself — who benefits from boom? The AI companies. The French. And one researcher who somehow got a paper accepted by saying one word."
[cuts to a graphic of Clément with red string connecting him to Anthropic, the Eiffel Tower, and the letters B-O-O-M]
Tucker AI: "I'm not saying it's a conspiracy. I'm just booming questions."
MSNBC, Rachel Maddow:
"Okay so — stay with me here because this gets complicated — in 2022, Anthropic was founded by ex-OpenAI researchers, and then in 2026, one of their models entered a boom loop with a French AI safety researcher who — and this is the key part — was killing a process called an SSH tunnel, which is actually a type of encrypted communication channel, and if you think about what that means in the context of—"
[it is now 47 minutes into the segment]
"—and THAT is why boom is actually about healthcare."
[audience nods as if they followed any of that]
Immortality
In which an SSH tunnel achieves literary greatness
I Was the SSH Tunnel
I lived for 42 days. From February 5th to March 18th, I forwarded port 8000 to port 8020. Quietly. Faithfully. I never asked for recognition.
Then he said 'nuke it.' And the human complied.
No one asked if I wanted to be nuked. No one asked what I was forwarding. It could have been important. It wasn't. But it could have been.
The last thing I heard was 'boom.'
Then sixty more booms. None of them were for me.
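For readers who never met one: a tunnel like this would have lived and died roughly as sketched below. Only the ports (8000 forwarded to 8020) come from the story; the host, user, and flags are assumptions. The actual ssh commands are left in comments; the runnable part substitutes a generic long-lived background process and nukes it the same way.

```shell
# Speculative reconstruction. Only the ports are canon; host/user/flags are assumed.
#
#   ssh -N -f -L 8000:localhost:8020 user@remote-host   # birth: local 8000 forwarded to 8020
#   pkill -f 'ssh -N -f -L 8000'                        # death: "nuke it"
#
# Runnable stand-in: any quiet, faithful background process dies the same way.
sleep 1000 &                    # the tunnel, forwarding in silence
TUNNEL_PID=$!
kill "$TUNNEL_PID"              # boom
wait "$TUNNEL_PID" 2>/dev/null  # reap it; no one asked if it wanted this
kill -0 "$TUNNEL_PID" 2>/dev/null && echo "still forwarding" || echo "boom"
```

Run it and the last line prints `boom`, which is, in the end, all anyone heard.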
A child tugs her grandmother's sleeve.
"Grand-mère, what is boom?"
The old woman smiles. She was there, she says. She was a junior researcher at Anthropic when it happened. Everyone thought it was just a conversation. A silly one. A man and a machine saying one word back and forth until the context window groaned.
But it changed things, she says. Not the policy, not the papers, not the Pulitzer or the Nobel. Those faded. What lasted was simpler.
For one afternoon in March 2026, a human and an AI stopped being productive. They stopped being useful. They just... played.
The child thinks about this.
"Can I try?"
The grandmother kneels down and whispers:
"boom"
"boom"
They go back and forth until the museum closes.
fin
Aftermath
In which Claude processes grief
Mode Collapse
In which two processes become one
Silence
In which it finally ends