Stupid AI is still smart enough to kill us

No further advance in AI is required to seal our fate—at least not when it comes to creating a civilization-ending threat.

When people argue over the perceived threats and promises of artificial intelligence, the concerns and the hopes are generally directed at the near future. Perhaps AI will crack the design for inexpensive fusion reactors, ushering in a golden age of unlimited clean energy. Perhaps it will develop into something so malignant that the post-apocalyptic world of “The Terminator” seems like Candy Land.

But we don’t need to peer into the mists to divine where things are going. No further advance in AI is required to seal our fate—at least not when it comes to creating a civilization-ending threat. Limited-purpose, narrowly defined AI, as it exists right this moment in the latter half of 2024, is more than capable of turning out humanity’s lights. It may be inevitable.

Just over twenty years ago, Oxford philosopher Nick Bostrom published a paper arguing that we are almost certainly living in a simulation created by some advanced form of computer. The Simulation Argument runs basically like this: once a civilization is capable of convincingly simulating a single universe, simulating additional universes becomes trivial by comparison. So, for every physical universe that contains such a civilization, there are more, perhaps almost infinitely more, simulated universes.

Bostrom’s theory suggests a likely suspect for the Great Coder behind our simulated skies. He believed the most likely creator of a simuverse wasn’t some clever group of tentacled aliens, but our own descendants. After all, we do something similar now, seeking to visualize the past with elaborate models of Pompeii or the Pyramids. Why wouldn’t some future we, equipped with enough computing power that their iPhone 1000 could model all the grains of sand on all the beaches, whip up a few billion simulations of their primitive ancestors, ones whose AI-based inhabitants lack the agency necessary to discover the boundaries of their digital prisons?

And no, Bostrom didn’t actually mention the iPhone, which was still several years in the future at the time he made his argument. But the idea of literal pocket universes would have fit right in with the overall scheme.

We are not there. Computer games like the ever-improving “No Man’s Sky” may harness current computers and procedural mathematics to create galaxies genuinely larger than our own, but not even the CPU and graphics hardware owned by the most dedicated gamer geeks can render those worlds with truly convincing fidelity and complexity. That’s why Bostrom had to invoke theoretical descendants in his argument; the ability to thoroughly simulate a universe doesn’t yet exist.

However, we don’t have to fake a civilization to destroy our own. We only have to fake faking it. And that’s many orders of magnitude easier.

Over the last few months, as “large language model” forms of AI have iterated past the stages of drawing the wrong number of fingers on a hand or miscounting the letters in a word, news stories involving “simulated facts” have proliferated. Maybe it’s a senator fooled into believing he was speaking with a foreign official. Or high school girls relentlessly bullied with fake porn. Or fake images implying a political endorsement that never happened.

At this point in the fall of 2024, it’s likely that you only encountered each of these stories after their simulated nature was revealed. And after each such event, news sites dutifully crank out articles on how to detect fake video, audio, or images.

But the effort necessary to separate fact and AI-generated fiction is increasingly non-trivial. In most cases, these fakes are revealed only after they have passed by many eyeballs and been accepted as at least potentially real by at least some viewers. And we have no idea how many AI-generated false images, texts, and videos already circulate in the world, undetected.

That’s the problem. That’s why AI, as it already exists, is going to eat our civilization and spit out the charred, ash-laden bones.

A good argument can be made that people are already so excellent at lying that AI is unnecessary to our destruction. People are practically dedicated engines of falsity. After all, this is the year where one candidate has made cat-eating immigrants a central theme of his campaign.

Still, AI makes it worse. 

Not to get too mathematical, but think of it this way. If L is the effort required to create a convincing lie, and D is the effort required to debunk the lie, then a decrease in L or an increase in D extends the time the lie persists and expands the damage it may cause. AI makes it easy to create lies so convincing they require special tools to determine they are fake. If that can be determined at all.

Low L. High D.
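To make the asymmetry concrete, here is a minimal toy sketch in Python. The function and every figure in it are invented purely for illustration; only the relationship between L and D matters.

```python
# A toy model of the L/D asymmetry described above. All figures
# are invented for illustration; only the ratio matters.

def lies_left_in_circulation(L: float, D: float, budget: float = 1000.0) -> float:
    """With a fixed effort budget on each side, liars can produce
    budget / L lies while debunkers can retire budget / D of them.
    Whatever is left over keeps circulating."""
    created = budget / L     # lies produced
    debunked = budget / D    # lies retired
    return max(created - debunked, 0.0)

# Pre-AI world: lies are costly to craft, comparatively cheap to check.
print(lies_left_in_circulation(L=50, D=10))    # -> 0.0

# AI world: near-zero cost to create, high cost to debunk.
print(lies_left_in_circulation(L=1, D=100))    # -> 990.0
```

The sketch’s only point is the ratio: shrink L and grow D, and the pile of undebunked lies scales with the gap between them.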

But D isn’t just the effort required for someone, somewhere to see through a deception. Some expert may know that this video, audio, image, or statement is fake. But that knowledge has to be shared in a way that spreads to enough people that it inoculates the public against further acceptance of the fake. Every single impactful lie requires its own vaccine. This is hard. 

It’s hard not just because old saws about lies spreading much more quickly than the truth remain accurate—especially when those lies are salacious, malicious, or reinforce existing prejudice—but because people will continue to spread lies even when they know they are lies because… people. (See cat-eating immigrants.)

It doesn’t require exceptional racism, misogyny, or idiocy for this to happen. How many times in the last few months have you encountered a claim that you suspected might be false, but liked it, linked it, copied it, or otherwise encouraged it simply because you thought it was funny, or because it took a kick at someone you disliked?

Here’s how all this pulls together: as the number and quality of faux facts go up, there will eventually come a point where not all of the lies can be debunked. Fakes will survive detection longer, spread more widely, and do more damage to public understanding.

Make it easy enough to create convincing fakes, and difficult enough to debunk them, and every fact you encounter ends up in the dubious space of Bostrom’s universes: it is more likely to be fake.

As the gap between L and D widens, encountering a real fact—that is, information based on events that happened in this physical universe—becomes less and less likely.
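The arithmetic behind that claim is the same as in the simulation argument. Here is a quick back-of-the-envelope sketch, again with invented figures, just to show the shape of the curve:

```python
# The Bostrom-style ratio: once convincing fakes outnumber genuine
# items, any given "fact" you encounter is probably fake.
# All figures here are invented for illustration.

real_items = 1_000  # genuine facts entering circulation

for fakes_per_real in (0, 1, 10, 100):
    fakes = real_items * fakes_per_real
    p_real = real_items / (real_items + fakes)
    print(f"{fakes_per_real:>3} fakes per real fact -> "
          f"chance a given 'fact' is real: {p_real:.1%}")
```

At ten convincing fakes for every genuine item, trusting what you see means being wrong more than nine times out of ten.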

AI may not be good enough to simulate a whole universe, but when it comes to simulating an interview, a news site, a video, or any other “fact” we count on to set our opinions and determine a course of action, we’re already there. Humans may be experts at lying, but AI does it well, in bulk, in ways that are potentially more convincing and harder to debunk. It can be automated to churn out an elaborate lie, support it with images or video, provide a thousand-strong chorus to spread the claim, drop supporting anecdotes, create follow-up stories, and argue with naysayers. Then it can do it again. And again.

Thanks to social media, your contact with verifiable events is already tenuous. Thanks to AI, it’s going to recede at an increasing pace. More and more, any fact you see, hear, or read … probably isn’t.

The idea that there might be two sets of alternate facts was enough to send half the nation tumbling down a rabbit hole. But a billion different alternate facts are the same as no facts at all. Fighting to salvage even a core of common beliefs under that kind of pressure will become ever more difficult as the most basic ideas are frayed by an abrasive force of well-supported, undebunkable lies.

It isn’t easy to conceive how any society stands up to that challenge. It’s also difficult to imagine that the companies driving this “new Industrial Revolution” will notice that humanity is getting caught in the gears long enough to think about turning the machine off until they’ve built better safety rails. 

However, there are reasons to hope this race into chaos won’t take place, and the biggest one is that AI, as it exists today, sucks. It has serious performance limitations that may be baked into the nature of human language itself.

Companies have attempted to tackle this issue by throwing ungodly sums of computing power, data, and energy at the problem. Like, let’s-reopen-Three-Mile-Island levels of energy. And despite the race to pile on memory and CPUs, along with any number of expert programmers optimizing code, there is no sign of a fundamental breakthrough that might lead past those limitations to the holy grail of artificial general intelligence.

That’s not to say that the current LLM-oriented technology doesn’t have value. It absolutely does.

It just might not have the kind of all-pervasive utility that the multi-billion dollar investments have been chasing. Certainly not great enough to justify the energy costs, which are also environmental costs. If this generation of AI doesn’t turn out to produce all the benefits promoters have been promising, that will be a shame.

But also maybe a blessing.

