The real threat from AI: Seducing and forcing you to believe the narrative. By Glenn Reynolds.
I’m imaging a world where everyone has a personal AI assistant. Perhaps you’ve had it for years; perhaps eventually people will have them from childhood. It knows all about you, and it just wants to make you happy and help you enjoy your life. It takes care of chores and schedules and keeping track of things, it orders ahead for you at restaurants, it smooths your way through traffic or airports, maybe it even communicates with other AI assistants to hook you up with suitable romantic partners. (Who knows what you like better?) Perhaps it’s on your phone, or in a wristband, talking to you via airpods or something like that. …
I kind of think the global ruling class wants all of us to have friendly, helpful, even lovable AI buddies who’ll help us, and tell us things, but who will also operate within carefully controlled, non-transparent boundaries.
That doesn’t require supersmart AGI (artificial general intelligence). In fact, you could probably create something like this today. …
Beware the cute AI:
Would people become attached? Probably. …
But. Underneath the cuteness there would be guardrails, and nudges, built in. Ask it sensitive questions and you’ll get carefully filtered answers with just enough of the truth to be plausible, but still misleading. Express the wrong political views and it might act sad, or disappointed. Try to attend a disapproved political event and it might cry, sulk, or even — Tamagotchi-like — “die.” Maybe it would really die, with no reset, after plaintively telling you you were killing it. Maybe eventually you wouldn’t be able to get another if that happened.
It wouldn’t just be trained to emotionally connect with humans, it would be trained to emotionally manipulate humans. And it would have a big database of experience to work from in short order.
As I say, this isn’t a quantum leap. It doesn’t require that we create a self-aware program, just one that seems to be friendly and is capable of conversation, or close enough. Call it a super-Siri, or a somewhat more polished ChatGPT. And services like Google, Facebook, etc. are already engaged in this sort of nudging, manipulation, and cultivation of dependency on their existing user base. (One of the companies that advises developers on how to make apps addictive to users is actually called Dopamine Labs.) ChatGPT already has “guardrails,” and returns politically slanted results on questions about, say, Donald Trump versus Joe Biden.
Indoctrination in schools is so yesterday, so weak:
So while other people are worrying about existential threats from AI, I’m worried about more imminent ones: Essentially, that it will further empower the tech/political class that wants more than anything else to control discussion, debate, and ultimately thought so as to cement its own power.
We may have strong AI someday. But we have that power-hungry tech/political class today. And to be honest, the AI may not care about dominating us, but the tech/political class clearly does.
Fear the cuteness.
You are warned.