You should not want doom
On how people in the AI x Twitter space talk about the probability of AI ending humanity
I thought people at AI conferences would be mostly pro-AI. I was wrong.
I went to an AI event recently. The panelists were both technical cofounders at different AI companies. The Q&A had multiple questions from audience members who were obviously afraid of AI taking our jobs (or worse). You could tell they wanted to put the genie back in the bottle.
They wanted AI to go away, so they could stop looking at it and return to the lives they were used to.
P(doom)
One audience member asked: “What’s your P(doom)?”
P(doom) refers to the probability of AI-induced doom, i.e., AI taking over and killing everyone.
You could feel the energy in the room perk up. In a room with many people who disliked AI, this was *the* question.
The panelists gave their answers. IIRC, both were optimistic. But that’s not what I’m here to talk about.
I’m here to talk about why the “Will AI kill us all?” question is usually talked about in the wrong way.
Is-ought
Philosophers draw a distinction between questions about how the world is and questions about how the world should (or ought to) be.
This is known as the is-ought (or descriptive-normative) distinction.
It’s good to know. If you’re talking to someone, and:
You think you’re having a conversation about how the world is
While they think you’re having a conversation about how the world ought to be
You’ll have a bad time. You’ll talk past each other and maybe even get angry that the other person doesn’t see it your way.
Consider this example:
Person A (ought): “We should get a dog.”
Person B (is): “We don’t have a dog.”
Person A (ought): “Yes, but we ought to get a dog, and we ought to prepare by saving up, reading up, etc.”
Person B (is): “We don’t have a dog.”
Person A (ought): “Yes, but what if we planned to get-”
P(doom) should be ‘ought,’ not ‘is’
Most people, when they consider the P(doom) question, think about the way the world is. Their thought process seems to involve searching for objective truths about AI's current capabilities, its trajectory under scaling laws, the availability of compute, and so on.
But, in a sense, taking the P(doom) question as an ‘is’ question is symptomatic of being stuck in the past. It treats AI as a hypothetical about which we can run thought experiments: “Imagine AI that passes the Turing test; in a world where such AI is ubiquitous, how likely is it that we all die?”
But here’s the thing:
The genie is out of the bottle
AI is here to stay
We don’t have a choice about whether to participate in this AI-defined future
What if we took P(doom) as an ‘ought’ question, instead? Instead of the question being “How likely is it that we all die?”, it becomes “What should be the likelihood of AI killing us?”
The answer is obvious: P(doom) should equal 0. AI should not kill us.
The shift in framing matters because it changes our whole orientation to AI. We go from a fearful, academic stance toward AI to an agentic, optimistic one. Changing the question from an 'is' to an 'ought' assumes agency; it compels us to act to make things the way they should be.
The person saying “We should get a dog” (and thinking about how to prepare for one) is contributing way more than the person saying “We don’t have a dog.”
We should get a dog. AI should not kill us.
These things should be a given. The only question is how to make them happen.
100%