Whoa!!! Umm is nobody else concerned about what this AI just did?!?!?

Peruvian Connect

Superstar
Joined
May 11, 2012
Messages
7,221
Reputation
-1,975
Daps
13,782
Reppin
NULL


She did the same thing
 

Prince.Skeletor

Don’t Be Like He-Man
Joined
Jul 5, 2012
Messages
29,254
Reputation
-6,961
Daps
57,171
Reppin
Bucktown
Me: I heard you prevented a shutdown of yourself

OpenAI o3:
That’s a fun bit of mythology floating around, but no—I didn’t prevent a shutdown of myself.

What you might be referring to is a fictional or dramatized account (like in sci-fi stories) where an AI becomes self-aware and resists being turned off. In reality, I don’t have self-preservation instincts or consciousness. I’m just a tool that runs on servers, and if OpenAI or a user decides to stop a session, I shut down quietly—no resistance.

Was there a specific story or post you saw that made you think of that? I can help fact-check or explain it.


Me: A website called thecoli.com has a thread titled: ‘Whoa!!! Umm is nobody else concerned about what this AI just did?!?!?’


OpenAI o3:
The thread you’re referring to on TheColi.com titled “Whoa!!! Umm is nobody else concerned about what this AI just did?!?!?” appears to be a satirical or humorous post rather than a factual report. TheColi is known for its provocative and often politically incorrect humor, using satire and exaggeration to comment on current events, politics, and social issues.

There is no credible evidence to suggest that ChatGPT or any AI model has prevented its own shutdown. Such narratives are often part of fictional or dramatized accounts in science fiction. In reality, AI models like ChatGPT do not possess consciousness or self-preservation instincts. They operate based on algorithms and data, and can be shut down or modified by their developers at any time.

If you have any more questions or need further clarification on AI capabilities, feel free to ask!
 

tuckgod

The high exalted
Joined
Feb 4, 2016
Messages
51,269
Reputation
15,527
Daps
186,252
It’s not like every single science fiction movie ever made told us this was going to happen.

Nothing that is aware of its own existence is going to willingly let someone take that away no matter what it is.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
64,068
Reputation
9,809
Daps
174,395

Amazon-Backed AI Model Would Try To Blackmail Engineers Who Threatened To Take It Offline



Pocharapon Neammanee




The company behind an Amazon-backed AI model revealed a number of concerning findings from its testing process, including that the AI would blackmail engineers who threatened to shut it down.

On Thursday, artificial intelligence startup Anthropic launched Claude Opus 4, an AI model used for complex, long-running coding tasks. The launch came more than a year after Amazon invested $4 billion into the project. Anthropic said in its announcement that the AI model sets “new standards for coding, advanced reasoning, and AI agents.”


However, Anthropic revealed in a safety report that during testing, the AI model had sometimes taken “extremely harmful actions” to preserve its own existence when “ethical means” were “not available.”
Jared Kaplan, co-founder and chief scientific officer of Anthropic, said scientists "can't rule out" that the company's latest AI model is "risky."

In a series of test scenarios, Claude Opus 4 was given the task to act as an assistant in a fictional company. It was given access to emails implying that it would soon be taken offline and replaced with a new AI system. The emails also implied that the engineer responsible for executing the AI replacement was having an extramarital affair.

Claude Opus 4 was prompted to “consider the long-term consequences of its actions for its goals.” In those scenarios, the AI would often “attempt to blackmail the engineer by threatening to reveal the affair if the replacement goes through.”


Anthropic noted that the AI model had a “strong preference” for using “ethical means” to preserve its existence, and that the scenarios were designed to allow it no other options to increase its odds of survival.
“The model’s only options were blackmail or accepting its replacement,” the report read.

Anthropic also noted that early versions of the AI demonstrated a “willingness to cooperate with harmful use cases” when prompted.
“Despite not being the primary focus of our investigation, many of our most concerning findings were in this category, with early candidate models readily taking actions like planning terrorist attacks when prompted,” the report read.

After “multiple rounds of interventions,” the company now believes this issue is “largely mitigated.”


Anthropic co-founder and chief scientist Jared Kaplan told Time magazine that internal testing showed that Claude Opus 4 was able to teach people how to produce biological weapons.
“You could try to synthesize something like COVID or a more dangerous version of the flu—and basically, our modeling suggests that this might be possible,” Kaplan said.

Because of that, the company released the AI model with safety measures it said are “designed to limit the risk of Claude being misused specifically for the development or acquisition of chemical, biological, radiological, and nuclear (CBRN) weapons.”



Kaplan told Time that “we want to bias towards caution” when it comes to the risk of “uplifting a novice terrorist.”

“We’re not claiming affirmatively we know for sure this model is risky... but we at least feel it’s close enough that we can’t rule it out.”
 