AI is becoming dangerous

By Josh Code at The Free Press.

Last week, Anthropic announced that it had developed a cutting-edge model, Mythos, with hacking capabilities that are the stuff of science fiction.

Mythos found vulnerabilities in every major operating system and web browser, some of which would allow hackers to remotely crash any device running them. Some of the bugs it found were deep in old code and had gone undetected by decades of human-run security tests. …

Mythos’s skill at finding and exploiting coding vulnerabilities is so great, Anthropic believes, that it has limited the preview release to around 40 tech companies, to fortify their cyber infrastructure in an effort Anthropic calls Project Glasswing. Perhaps even more telling, Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent called an emergency meeting with top bank executives to prepare for the cybersecurity risks posed by Mythos to American banks. The consequences of Mythos “falling into the wrong hands,” Anthropic said in a press release, “could be severe.”

Nationalize AI, like we did with nuclear weapons?

That’s putting it lightly. In fact, the release of Mythos has given new urgency to the debate over just whose hands AI should be in — even raising the question of whether so-called frontier AI models should be nationalized.

On one side are technologists who believe AI must be handled with the care and caution that nuclear weapons were accorded at the dawn of the nuclear age. And on the opposing side are those who think handing over AI to the government will cripple American innovation and cede ground to adversaries. …

Pushback from the economically threatened:

But the Pentagon is far from the only force that is hostile to AI; in just the last few weeks, gunshots were fired at the home of an Indianapolis city councilor who spoke out in favor of data centers, and OpenAI CEO Sam Altman’s residence has reportedly been both shot at and targeted with a Molotov cocktail.

While bomb throwing is the work of a few extremists, a backlash to AI is also reflected in the halls of Congress, where the idea of nationalization as a solution to the worst of AI’s risks is getting a warm reception. …

[AI thinker and entrepreneur Charles Jennings] argues that an exclusive consortium of some 40 tech companies isn’t enough. Just as every new drug ends up in a Food and Drug Administration lab before it reaches patients, he says, every frontier model like Mythos should end up in a national AI lab — staffed by experts, insulated from corporate pressure, and empowered to say no. …

Last month, Alex Karp, the CEO of the AI and data analysis company Palantir, warned his Silicon Valley peers of where things were headed if AI developers didn’t find a way to improve their image and work with the government.

“If Silicon Valley believes we’re going to take everyone’s white-collar jobs and screw the military,” he told an audience at a top venture capital conference, and “you don’t think that’s going to lead to the nationalization of our technology — you’re retarded.”

Violence starting up. By Maya Sulkin at The Free Press.

On Friday, April 10, at 3:45 a.m., Daniel Alejandro Moreno-Gama allegedly threw a Molotov cocktail at Altman’s San Francisco home while Altman, his husband, and his baby were asleep. It set the exterior gate on fire.

Moreno-Gama, 20, then allegedly drove to OpenAI’s headquarters and struck the glass doors of OpenAI’s offices, threatening “to burn it down and kill anyone inside,” according to a federal criminal complaint filed in San Francisco. The complaint says he was holding a three-part anti-artificial intelligence manifesto that “discussed the purported risk AI poses to humanity” and his intention to kill Altman, and listed several other AI executives and their addresses. …

Three people have been arrested. No one has been hurt. And in some corners of the internet, the reaction was disappointment. …

“Sam Altman has even admitted there’s about a 25 percent chance AI will destroy humanity and is proceeding with it at breakneck speed and fighting any kind of regulation along the way.” …

Post after post framed the attacks as an overdue reckoning: the working class finally striking back against the billionaires who are automating it into obsolescence and buying bunkers to protect themselves from a future they helped create. …

Salty-Plantain-4299 put it more bluntly: “What do they honestly think is going to happen when that fucker and his other billionaire tech bros are actively working to destroy people’s lives and livelihoods. 500,000+ in the U.S. alone laid off because of AI 2024-2025. Let that sink in. How can anyone really be surprised when the pitchforks come out?”

For example:

Lukas, a 15-year-old living in New York, equated the three suspects to John Brown, a 19th-century abolitionist who was hanged for a raid aimed at freeing slaves: “Sometimes peaceful protest is so hard, nearly impossible, that you need to be violent.”

I reached out to Lukas after reading some of his alarming posts on several Reddit boards. When I got him on the phone, he told me he empathized with the three alleged attackers. After all, Altman is threatening human extinction. “Sam Altman is violent. He’s harming people. So we’re just fighting violence with violence.” …

The possibility of an AI apocalypse has made him “feel depressed lately,” Lukas told me.

He continued, “I feel like I won’t be useful in the future. I’ll be easily replaced by some AI model. So I can understand people’s anger. We have our whole future still ahead of us.”

The attackers, Lukas reasoned, “want to live the rest of their lives without it being cut short by AI extinction,” he said, adding, “It’s unfair, because everyone in control has already lived their lives.”

Stay tuned.