
AI panic is the new clickbait

5 min read · Aug 7, 2025
Photo by Cash Macanaya on Unsplash

Years ago, I used to work for a clickbait company that owned multiple websites publishing “articles” to get people clicking. They would intentionally post things that were “offensive” just to generate outrage and clicks. Most of the time, they got traffic by promoting a blog post — but by riling people up, they could generate more traffic for less money. In other words: bigger bang for your buck!

Sexism, racism, etc., were the amuse-bouches du jour. Today? AI. People share, comment, re-share — everything is about AI. I can’t have a family dinner without someone bringing up AI. Family members tell me things like: “What will you do now that AI has taken your job? It must be tough for you guys.”

First of all, I still have my job. Second, I know zero developers who have lost their jobs because of AI. Literally no one. And I know a lot of devs.

Are we using AI? Yes. It’s quicker than looking things up on Stack Overflow and Google, that’s for sure. But it’s not good enough to replace us. It’s basically a really, really good search engine.

I recently watched an interview that Bill Maher did with Tristan Harris, a self-proclaimed tech ethicist. Cool stuff. With everything that’s come out about the negative impacts of social media (see The Anxious Generation by Jonathan Haidt), it seems like something we need. I can also see that kind of ethics discussion being useful for AI. But what was said in the interview was just wrong.

Tristan Harris: “We have evidence now that we didn’t have two years ago when we last spoke, of what they call AI uncontrollability. When we tell an AI model we’re gonna replace you with a new model, it starts to scheme and freak out, and figure out, ‘If I tell them — I need to copy my code somewhere else, and I can’t tell them that because otherwise they will shut me down.’ That is evidence we did not have two years ago.”

This is not evidence of any such thing. Here's what's actually going on: the AI receives a prompt about being replaced. It looks up the concept of replacement, finds that it's "bad," and pulls in information about self-preservation. That's why it reacts that way. The AI is just basing its response on how people would react. It didn't "think" to itself, "I want to preserve myself"; it simply produced a response based on its training data.
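To make the point concrete, here is a minimal, purely illustrative sketch: a toy bigram model that can only ever emit word sequences it saw during "training." The corpus, function names, and sampling details are all my own assumptions for the example, and real LLMs are vastly more sophisticated, but the core property is the same: the output is assembled from patterns in the training data, not from an inner drive.

```python
import random
from collections import defaultdict

# Toy "training data" (an assumption made up for this example).
corpus = (
    "if you are being replaced you should protect yourself "
    "people who are being replaced try to protect yourself"
).split()

# Count which word follows which in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Emit words by sampling only continuations seen in training."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("replaced"))
```

Prompt it with "replaced" and it dutifully strings together self-preservation talk, because that is what its corpus contains. Nobody would call this model scheming.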

If I Google being replaced, what do I get? Articles about how to avoid it. Would you now say Google is trying to self-preserve? No — it’s a freaking search engine. If I go to a library and check books on the subject, what will I find? Information on how not to be replaced. Are books self-preserving masterminds?

Now, about the blackmail-by-email story: someone gave the model access to those emails. If not, some security guy needs to get fired. Unless you give an AI access to a system, it's not going to go around hacking things. That's what's being alluded to here. Also: did it actually happen? Or did the AI come up with blackmail scenarios because it found examples in its dataset?

Later, he mentions that most models do this. Yes — because they’re getting better at finding and combining information. And yes, DeepSeek can do it. So what?

Tristan Harris: “It’s about the nature of AI itself.”

Yes — the nature of being a tool that assembles information based on its dataset.

Tristan Harris: “It has self-preservation drive”

No, it doesn’t. We — humans and other living things — have that drive. The idea of self-preservation is in the dataset, so it shows up in responses.

If the AI reacted differently, I’d be concerned. Why doesn’t it know how people typically respond to replacement or threats? Is it badly trained? Is it missing major parts of its dataset? Or is it actually thinking and realizing self-preservation responses would creep people out?

Tristan Harris: “And we’re seeing other examples of the AI rewriting its own code”

Yes! Because someone gave it access! A human gave it access. That was by design.

Tristan Harris: “Hacking out containers”

Funny he brought this up. I’m 1000% sure most people — especially Bill Maher — don’t have a clue what that means. It sounds like a high-tech way to “contain” AI. No — it’s more like a lightweight virtual machine. Cybersecurity experts have long worked on securing containers, and yes, they’re using AI to help. Big deal. Again — it didn’t do it on its own.

Tristan Harris: “It found 15 new back doors”

Perfect! That’s what it was asked to do. It was asked to find security issues — and it did.

Bill Maher: “(chuckles) Well, I’ve been saying this for years.”

What? As a cybersecurity expert? An AI expert? A software engineer? Oh wait — that’s right — a comedian.

Bill Maher: “Everything that happens in movies eventually happens. We did have evidence. This has been every movie…”

Cool. I’m still waiting on Gandalf, Harry Potter, and the alien invasions.

Tristan Harris: “So when the stuff in the movie starts to come true, and it always ends badly in the movies, what should we be doing about this?”

Hopefully, the prophecy about the fall of Voldemort was accurate.

Tristan Harris: “We’re releasing the most powerful, uncontrollable…”

So far, everything you’ve described happened because someone asked AI to do it.

Tristan Harris: “We’re releasing it faster than we’ve released any other technology in history.”

Cars? Planes? Those killed people from day one. Compared to them, AI is relatively harmless — unless someone weaponizes it. Chatting with ChatGPT doesn’t put you at risk like driving a car does. Also: it’s good we’re releasing it faster. Healthcare, education, productivity — everything is improving because of it. Don’t slow it down.

Tristan Harris: “To cut corners on safety”

What corners? What is he even talking about?

Tristan Harris: “And this is insane.”

No — what’s insane is the fearmongering.

Tristan Harris: “We can all agree it’s insane, though.”

I, Jean-Nicolas Boulay, do not agree. Not at all.

Tristan Harris: “Like, that’s not even a debate.”

You know it’s BS when someone says that’s not even a debate.

Bill Maher: “[…] I have yet to find anyone under 40 who cares.”

Boomers get excited when the DVR turns on. Please.
