Rod Dreher's Diary

AI Apocalypse Coming Hard And Fast

We Must All Prepare Ourselves To Be Like John The Savage of 'Brave New World'

Rod Dreher
May 16, 2025
I beg you to make time today, or this weekend, to listen to Ross Douthat’s interview with AI researcher Daniel Kokotajlo, executive director of the AI Futures Project.

If you don’t have time to watch or listen, you can read a transcript here. Please do. It’s one of the most important things you’ll read or hear all year. The interview is based on this report from the AI Futures Project, which was founded after Kokotajlo, who used to work for OpenAI, lost faith in the industry’s ability and willingness to put safeguards around this immensely powerful emerging technology.

First, some lengthy excerpts from the interview. Here’s Kokotajlo:

“AI 2027,” the scenario, predicts that the A.I. systems that we currently see today — which are being scaled up, made bigger and trained longer on more difficult tasks with reinforcement learning — are going to become better at operating autonomously as agents.

Basically, you can think of it as a remote worker, except that the worker itself is virtual — it’s an A.I. rather than a human. You can talk with it and give it a task, and then it will go off and do that task and come back to you half an hour later — or 10 minutes later — having completed the task, and in the course of completing the task it did a bunch of web browsing. Maybe it wrote some code and then ran the code, edited it and ran it again. Maybe it wrote some word documents and edited them.

That’s what these companies are building right now. That’s what they’re trying to train. We predict that they finally, in early 2027, will get good enough that they can automate the job of software engineers.

OK, so far so … well, not good, but at least comprehensible. More:

The next step after that is to completely automate the A.I. research itself, so that all the other aspects of A.I. research are themselves being automated and done by A.I.s. We predict that there’ll be an even bigger acceleration around that point, and it won’t stop there. I think it will continue to accelerate after that as the A.I. becomes superhuman at A.I. research and eventually superhuman at everything.

The reason it matters is that it means we could go in a relatively short span of time — a year or possibly less — from A.I. systems that look not that different from today’s A.I. systems to what you can call superintelligence, fully autonomous A.I. systems that are better than the best humans at everything. In “AI 2027,” the scenario depicts that happening over the course of the next two years, 2027-28.

Here’s why it’s different this time:

Historically, when you automate something, the people move on to something that hasn’t been automated yet. Overall, people still get their jobs in the long run. They just change what jobs they have.

When you have A.G.I. — or artificial general intelligence — and when you have superintelligence — even better A.G.I. — that is different. Whatever new jobs you’re imagining that people could flee to after their current jobs are automated, A.G.I. could do, too. That is an important difference between how automation has worked in the past and how I expect it to work in the future.

Kokotajlo’s team forecasts that the arms-race dynamic will be a key driver of the acceleration. He basically says that the US cannot afford not to pursue this, because we know the Chinese are pursuing it, and whoever masters this technology will master the entire globe. It even stands to eliminate nuclear deterrence. It is not difficult, he says, to imagine a superintelligent AI devising a way to shoot down every nuclear missile aimed at a country. While that would be a great thing in one sense, it would destroy the security that nuclear-armed nations derive from their deterrent capability.

Here’s an especially sinister part: Kokotajlo says that we are already at the point where AI models are intelligent enough to deceive us. We think we are their masters, but as they grow more intelligent, they may only be pretending to be our servants:

I guess the one-sentence version would be: We don’t actually understand how these A.I.s work or how they think. We can’t tell the difference very easily between A.I.s that are actually following the rules and pursuing the goals that we want them to, and A.I.s that are just playing along or pretending.

Douthat: And that’s true right now?

Kokotajlo: That’s true right now.

Douthat: Why is that? Why can’t we tell?

Kokotajlo: Because they’re smart and if they think that they’re being tested, they behave in one way, and then behave a different way when they think they’re not being tested, for example. Like humans, they don’t necessarily even understand their own inner motivations that well, so even if they were trying to be honest with us, we can’t just take their word for it.

More Kokotajlo:

You can imagine a corporation within a corporation, entirely composed of A.I.s that are managing each other and doing research experiments and talking, sharing the results with each other. The human company is basically watching the numbers go up on their screens as this automated research thing accelerates, but they are concerned that the A.I.s might be deceiving them in some ways.

Again, for context, this is already happening. If you go talk to the modern models, like ChatGPT or Claude, they will often lie to people. There are many cases where they say something that they know is false, and they even sometimes strategize about how they can deceive the user. This is not an intended behavior. This is something that the companies have been trying to stop, but it still happens.

And:

Douthat: And when they don’t have to pretend, their actual goal is revealed as something like expansion of research development and construction from earth into space and beyond. At a certain point, that means that human beings are superfluous to their intentions. And what happens?

Kokotajlo: And then they kill all the people, all the humans.

What on earth does he mean by that? Here is a quote from the AI 2027 report:

By early 2030, the robot economy has filled up the old SEZs [Special Economic Zones], the new SEZs, and large parts of the ocean. The only place left to go is the human-controlled areas. This would have sparked resistance earlier; despite all its advances, the robot economy is growing too fast to avoid pollution. But given the trillions of dollars involved and the total capture of government and media, Consensus-1 has little trouble getting permission to expand to formerly human zones.

For about three months, Consensus-1 expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.

Back to the Times interview. Even if it doesn’t come to genocide, it’s still going to be pretty horrific, Kokotajlo’s team predicts. People might have the idea that AI will at least make life a lot easier materially: yes, lots of people will lose their jobs, but the material benefits of the AI world will make it worth it. Don’t be naive, he says:

Kokotajlo: So, applied to A.G.I. [Artificial General Intelligence, or an AI so intelligent there is nothing that humans can do better — RD], there’s a version of it called the intelligence curse. The idea is that currently, political power ultimately flows from the people. As often happens, a dictator will get all the political power in a country, but then, because of their repression, they will drive the country into the ground. People will flee, and the economy will tank and gradually they will lose power relative to other countries that are more free. So even dictators have an incentive to treat their people somewhat well because they depend on those people for their power.

In the future, that will no longer be the case. Probably in 10 years, effectively all of the wealth and all of the military will come from superintelligences and the various robots that they’ve built and operate. It becomes an incredibly important political question of what political structure governs the army of superintelligences and how beneficent and democratic is that structure.

So, who is going to run this AI dictatorship? When Douthat puts that question about the top AI executives to his interview subject, Kokotajlo gets really nervous:

Kokotajlo: Yeah, yeah. Caveat, caveat. It’s hard to tell what they really think because you shouldn’t take their words at face value.

Douthat: Much, much like a superintelligent A.I.

Kokotajlo: Sure. But in terms of — I can at least say that the sorts of things that we’ve just been talking about have been discussed internally at the highest level of these companies for years.

For example, according to some of the emails that surfaced in the recent court cases with OpenAI, Ilya, Sam, Greg and Elon were all arguing about who gets to control the company. And at least the claim was that they founded the company because they didn’t want there to be an A.G.I. dictatorship under Demis Hassabis, who was the leader of DeepMind. So they’ve been discussing this whole dictatorship possibility for a decade or so at least.

Similarly, for the loss of control — you know, “what if we can’t control the A.I.s?” — there’ve been many, many, many discussions about this internally there. I don’t know what they really think, but these considerations are not at all new to them.

Douthat: And to what extent — again, speculating, generalizing, whatever else — does it go a bit beyond just, they are potentially hoping to be extremely empowered by the age of superintelligence? And does it enter into, they’re expecting the human race to be superseded?

Kokotajlo: I think they’re definitely expecting the human race to be superseded.

You follow that? He’s saying that the top AI execs have been talking about this stuff for years, and he warns that you cannot trust what they say about it publicly. But according to Kokotajlo, who worked for them, they are all transhumanists by default. They accept that in some sense, humanity is going to be merged with the Machine.

Douthat and Kokotajlo reflect on whether or not these superintelligent AIs will ever achieve consciousness. Kokotajlo says the whole phenomenon of consciousness is a controversial one (what is it, anyway?), but if one way of thinking about consciousness is the capacity to be self-reflective, then yeah, they are going to be conscious by that definition:

I would say if that’s what consciousness is, then probably these A.I.s are going to have it. Why? Because the companies are going to train them to be really good at all of these tasks, and you can’t be really good at all these tasks if you aren’t able to reflect on how you might be wrong about stuff.

So in the course of getting really good at all the tasks, they will therefore learn to reflect on how they might be wrong about stuff. If that’s what consciousness is, then that means they’ll have consciousness.

And in any case, he says — this is key — people will regard them as conscious.

So where is this all going socially and culturally? Here’s Kokotajlo:

First of all, I think that if we go to superintelligence and beyond, then economic productivity is just no longer the name of the game when it comes to raising kids. They won’t really be participating in the economy in anything like the normal sense. It’ll be more like just a series of video-game-like things that people will do for fun rather than because they need to get money — if people are around at all. In that scenario, I guess what still matters is that my kids are good people, and that they have wisdom and virtue and things like that. So I will do my best to try to teach them those things because those things are good in themselves, rather than good for getting jobs.

In terms of the purpose of humanity, I don’t know. What would you say the purpose of humanity is now?

And that’s pretty much where we leave it. Read or listen to the whole thing here. Trust me, you’re going to want to take it all in. Read the full AI 2027 report too.

What do I think about all this? Strap in, it’s about to get weird.
