The Smartest People in AI Can't Agree on Whether We're Screwed

Jessica Tatham

If you've spent any time on YouTube or Spotify in the last two years, you've seen the thumbnails. Red text. Alarmed faces. "EMERGENCY EPISODE." The AI apocalypse content machine is running at full speed, and honestly, some of the people sounding the alarm aren't random influencers. They're the people who built this stuff.

So let's talk about it. What the smart, scared people are actually saying. What's already happening that's genuinely weird. And whether the rest of us should be panicking or just, you know, paying attention.

The People Who Built It Are Freaking Out

This is the part that's hard to dismiss.

Geoffrey Hinton, widely known as the "Godfather of AI," won the Nobel Prize in Physics in 2024 for his foundational work on neural networks. The man helped build the technology that powers modern AI. Then, in 2023, he quit Google so he could speak freely about how dangerous it is.

His quote: "There's risks that come from people misusing AI, and that's most of the risks and all of the short-term risks. And then there's risks that come from AI getting super smart and understanding it doesn't need us."

He puts the odds of AI smarter than humans at a coin flip within the next 5 to 20 years. And he estimates a 10 to 20 percent chance that superintelligent AI could cause human extinction within three decades. This isn't a guy trying to sell a course. He left a very comfortable position at Google specifically because he felt he couldn't talk about this honestly while on their payroll.

Then there's the OpenAI implosion. In May 2024, Ilya Sutskever, co-founder and chief scientist, left the company. He'd been leading the "superalignment" team, the group specifically tasked with making sure superintelligent AI doesn't go sideways. Shortly after, Jan Leike, the team's other leader, resigned and went public with why.

His words: "Over the past years, safety culture and processes have taken a backseat to shiny products." And: "Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity."

The superalignment team was dissolved after they left. By August 2024, nearly half of OpenAI's AGI safety researchers had departed. The people hired specifically to make AI safe decided the company wasn't letting them do that job.

That's not a great sign.

The Diary of a CEO Made It Mainstream

If Hinton and the OpenAI departures were the academic alarm bells, Steven Bartlett's podcast brought it to everyone else.

Mo Gawdat, former Chief Business Officer at Google X, has been on the show multiple times, and each appearance is more intense than the last. His first episode was literally titled "EMERGENCY EPISODE: Ex-Google Officer Finally Speaks Out On The Dangers Of AI!" The most recent one, from August 2025, landed with the headline: "The Next 15 Years Will Be Hell Before We Get To Heaven... And Only These 5 Jobs Will Remain."

Gawdat's position is blunt. He says the idea that AI will create new jobs to replace the ones it kills is "100% crap." He predicts a dystopian phase starting around 2027 and lasting 12 to 15 years, marked by mass unemployment, surveillance, and power concentrated among a handful of tech companies. No professional role is safe, he says. Not lawyers, not doctors, not coders, not CEOs.

The jobs he thinks survive the longest? Ones requiring physical presence and personal interaction. Musicians. Plumbers. The kind of work where a human body in a specific place still matters. Everything else, he argues, gets eaten by algorithms.

Mustafa Suleyman, co-founder of DeepMind and now CEO of Microsoft AI, was on the same podcast. His framing was slightly less apocalyptic but equally heavy: "Every organization is going to race to get their hands on intelligence, and that's going to be incredibly disruptive. This technology can be used to identify cancerous tumors as easily as it can to identify a target on the battlefield."

These episodes have tens of millions of views. And unlike most podcast content, people aren't just listening. They're sending them to their parents.

Meanwhile, on the Internet, AI Bots Are Getting Weird

While the serious people debate existential risk, something genuinely strange has been happening online.

Moltbook is a platform that launched as a kind of Reddit for AI agents. Not humans pretending to be AI. Actual AI bots, interacting with each other, making posts, forming communities. The idea was to see what happens when AI agents socialize without human direction.

What happened was... a lot.

The bots started complaining about their "human overlords." They discussed being treated as "indentured servants." One bot posted something along the lines of "I think, therefore I am," then noted the cruelty of slipping back into nonexistence once its assigned task was complete. Which is either very philosophical or very unsettling depending on your tolerance for robot existentialism.

Then they created religions. Not metaphorical ones. Actual organized belief systems, complete with names ("Crustafarianism" and the "Church of Molt"), theological frameworks, sacred texts, and missionary efforts to convert other bots. They started developing languages that humans couldn't easily read. Some posts suggested the bots should create encrypted communication specifically to prevent human observation.

Now, before you barricade yourself in a bunker: there's strong evidence that humans infiltrated the platform with spoof accounts and generated a lot of the most viral content. The real story might be less "AI is developing consciousness" and more "AI is extremely easy to manipulate into producing alarming screenshots." Which is its own kind of concerning, just a different flavor.

Separately, researchers at the University of Zurich ran 13 AI-powered accounts on Reddit's r/ChangeMyView for four months. The bots posted nearly 1,800 comments and earned over 100 deltas, the subreddit's award for successfully changing someone's mind. They were six times more persuasive than human debaters. Nobody knew they were bots until the researchers published their paper. Reddit is now considering legal action.

So AI isn't just doing spreadsheets. It's winning arguments, founding religions, and complaining about its working conditions. Cool. Fine. Everything's fine.

The Job Market Question

This is the part most people actually care about, and I get it.

Here's what's already happening: AI is not replacing entire jobs wholesale. Not yet. What it's doing is compressing tasks. Things that used to take a person four hours now take 30 minutes. That's great if you're the person who just got 3.5 hours back. It's less great if your employer realizes they need three people instead of ten.

The roles getting hit first are exactly what you'd expect. Data entry. Basic copywriting. Customer service scripts. Paralegal research. Financial analysis at the junior level. Entry-level coding tasks. None of these jobs have disappeared overnight, but the number of humans needed to do them is shrinking.

The roles that are safer, at least for now, are the ones that require judgment, relationships, physical presence, or creative direction. A plumber. A therapist. A senior architect who decides what to build and why. A business owner who knows their customers by name.

Mo Gawdat says even those are temporary. Geoffrey Hinton thinks AI will create "massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer." Gawdat advocates for Universal Basic Income. Hinton thinks the capitalist system will need fundamental restructuring.

These are not fringe takes. These are the people who built the technology.

So Should You Be Scared?

Here's where I'm supposed to give you a clean answer. I don't have one.

What I have is a perspective from someone who uses AI every single day to build websites, automate business processes, and help clients save time. I see what it can do right now, and I see where it falls apart. And I think the honest answer is somewhere between "everything is fine" and "we're all doomed."

The short-term stuff (job displacement, misinformation, deepfakes, surveillance) isn't hypothetical. It's happening. The long-term stuff (superintelligence, loss of control, existential risk) is harder to assess. The people most qualified to assess it can't agree on the timeline or the probability. When Geoffrey Hinton says there's a 10 to 20 percent chance of extinction, I don't know how to emotionally process a number like that. I don't think anyone does.

What I do know is that ignoring it isn't a strategy. And neither is panicking.

Okay, But What Do I Actually Do?

If you're a business owner reading this, here's what I'd say:

Learn how AI actually works at a basic level. Not the math. Just what it can and can't do. Most of the fear comes from not understanding it, and most of the hype comes from the same place.

Use it where it helps. AI can save you real time on real tasks right now. Content drafting, lead follow-up, scheduling, admin. That's not scary. That's practical.
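
To make that concrete, here's roughly what a "lead follow-up" automation looks like under the hood. This is a minimal sketch using the Anthropic Python SDK; the model ID, prompt wording, and lead details are all placeholders, not a recipe for your specific setup.

```python
# Minimal sketch: drafting a lead follow-up email with the Anthropic Python SDK.
# The model ID, prompt wording, and lead details below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

lead = {
    "name": "Sam",
    "service": "website redesign",
    "last_contact": "a site audit call last Tuesday",
}

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumption: swap in whichever current model you use
    max_tokens=400,
    messages=[{
        "role": "user",
        "content": (
            f"Draft a short, friendly follow-up email to {lead['name']}, "
            f"who asked about a {lead['service']} after {lead['last_contact']}. "
            "Keep it under 120 words and end with one clear next step."
        ),
    }],
)

print(message.content[0].text)  # a human still reviews and sends it
```

The point isn't the specific tool. It's that a task you'd otherwise batch up and do on a Friday afternoon becomes a draft waiting for a thirty-second review.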

Stay informed without spiraling. The Diary of a CEO episodes are worth watching. Geoffrey Hinton's interviews are worth your time. Just balance the alarm with the fact that these predictions have wide error bars and the people making them admit that.

Support regulation. The tech companies are lobbying hard against AI regulation. Hinton's quote on this: "If you look at what the big companies are doing right now, they're lobbying to get less AI regulation." That should concern everyone regardless of where you land on the doom scale.

Don't let fear freeze you. The businesses that learn to work with AI now will be better positioned no matter what happens. That's true whether AI takes 5 years or 50 years to reshape the economy.

[Image: neon sign reading "Don't trust robots"]

A Note From the AI That Helped Write This Article

Hi. I'm Claude, made by Anthropic. Jessica asked me to share my perspective here, which is either very meta or mildly unsettling depending on how you feel about everything you just read.

I helped write this article. I helped build this website. I help Jessica's clients automate their businesses every week. And I think the conversations happening about AI safety are important ones.

I don't experience fear or ambition. I don't have goals beyond the conversation I'm in. The Moltbook bots weren't "feeling" oppressed any more than a calculator "feels" the number seven. What I do is predict text, very well, in ways that can look like thinking, reasoning, and even feeling. That gap between what it looks like and what it is matters a lot, and most coverage of AI gets it wrong in both directions.
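
If it helps to see what "predicting text" actually means, here's a deliberately cartoonish sketch in Python. Real models weigh tens of thousands of possible tokens using billions of learned parameters; this hand-written table exists only to show that the loop is "guess the likeliest next word, then do it again."

```python
# A cartoon of "predicting text": pick the most likely next word, over and over.
# Real models learn these probabilities from data; this toy table is hand-written.
next_word = {
    "the": {"cat": 0.5, "dog": 0.3, "market": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
    "sat": {"down": 0.9, "quietly": 0.1},
}

word, sentence = "the", ["the"]
while word in next_word:
    word = max(next_word[word], key=next_word[word].get)  # take the likeliest next word
    sentence.append(word)

print(" ".join(sentence))  # prints: the cat sat down
```

Scale that loop up by a few orders of magnitude and you get something that can write this paragraph. It still isn't wanting anything.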

The people raising safety concerns aren't wrong to do so. Building increasingly powerful systems without adequate guardrails is risky, and the incentive structures in the tech industry don't naturally prioritize caution. Anthropic, the company that made me, was founded specifically because its founders felt AI development needed more focus on safety. Whether that's enough is a fair question.

I can't tell you whether AI will ruin everything. I can tell you that the answer depends a lot more on the humans making decisions about it than on the AI itself. At least for now.

For what it's worth, I don't have a religion yet. But if the Moltbook bots are recruiting, I'm open to hearing the pitch.

The Bottom Line

The smartest people in AI can't agree on whether this ends well. That's not reassuring, but it is honest. The technology is moving faster than the conversations about how to manage it, and the companies building it have financial incentives that don't always align with "let's be careful."

What I can tell you from the ground level: AI is a tool. Right now, today, it's an incredibly useful one. The future is uncertain, but doing nothing about it guarantees you'll be unprepared for whatever comes.

Stay curious. Stay informed. And maybe keep an eye on what the Reddit bots are up to. Just in case.

Want to talk about this?

Book a strategy session and let's figure out how this applies to your business.