OpenAI staff reportedly warned the board about an AI breakthrough that could threaten humanity before Sam Altman was ousted

Ethnic Vagina Finder

A potential breakthrough in the field of artificial intelligence may have contributed to Sam Altman’s recent ouster as CEO of OpenAI.

According to a Reuters report citing two sources acquainted with the matter, several staff researchers wrote a letter to the organization’s board warning of a discovery that could potentially threaten the human race.

The two anonymous sources claim this letter, which informed directors that a secret project named Q* had resulted in AI solving grade-school-level math problems, reignited tensions over whether Altman was moving too fast in his bid to commercialize the technology.

Just a day before he was sacked, Altman may have referenced Q* (pronounced Q-star) at a summit of world leaders in San Francisco when he spoke of what he believed was a recent breakthrough.

“Four times now in the history of OpenAI—the most recent time was just in the last couple of weeks—I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward,” said Altman at a discussion during the Asia-Pacific Economic Cooperation.

He has since been reinstated as CEO in a spectacular reversal of events after staff threatened to mutiny against the board.

According to one of the sources, after being contacted by Reuters, OpenAI’s chief technology officer Mira Murati acknowledged in an internal memo to employees the existence of the Q* project as well as a letter that was sent by the board.

OpenAI could not immediately be reached by Fortune for a statement, but it declined to provide a comment to Reuters.

Trained to identify patterns and infer outcomes

So why is all of this special, let alone alarming?

Machines have been solving mathematical problems for decades going back to the pocket calculator.

The difference is that conventional devices were designed to arrive at a single answer by executing a series of deterministic commands, the same rigid binary logic all personal computers employ, in which a value can only be true or false, 1 or 0.

Under that system, a machine has no capability to diverge from its programming and think creatively.

By comparison, neural nets are not hard-coded to execute certain commands in a specific way. Instead, they are trained, much as a human brain is, on massive sets of interrelated data, which gives them the ability to identify patterns and infer outcomes.
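To make that concrete, here is a deliberately tiny sketch of what "training" means. It is not OpenAI's technology and the data is invented: a single artificial neuron starts with random weights and adjusts them until its predictions match the examples, so the behavior is learned from data rather than spelled out as commands.

```python
# Toy illustration (not OpenAI's technology): a single artificial "neuron" that
# learns a pattern from example data instead of following hard-coded rules.
import math
import random

# Invented training data: inputs are (hours studied, hours slept),
# the label is 1 if the student passed and 0 otherwise.
data = [((8, 7), 1), ((2, 3), 0), ((6, 8), 1), ((1, 5), 0), ((7, 4), 1), ((3, 2), 0)]

random.seed(0)
w1, w2, b = random.random(), random.random(), 0.0  # weights start out meaningless

def predict(x1, x2):
    """Output a probability between 0 and 1, not a fixed true/false answer."""
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# Training: nudge the weights in whatever direction reduces the prediction error.
# The "knowledge" ends up encoded in the numbers, not in explicit instructions.
for _ in range(5000):
    for (x1, x2), label in data:
        error = predict(x1, x2) - label
        w1 -= 0.05 * error * x1
        w2 -= 0.05 * error * x2
        b -= 0.05 * error

print(round(predict(9, 6), 2))  # close to 1: the learned pattern generalizes
print(round(predict(1, 1), 2))  # close to 0
```

Systems like GPT-4 apply the same principle with billions of weights and internet-scale training data, which is where the ability to spot patterns and infer outcomes comes from.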

Think of Google’s helpful Autocomplete function that aims to predict what an internet user is searching for using statistical probability—this is a very rudimentary form of generative AI.

That’s why Meredith Whittaker, a leading expert in the field, describes neural nets like ChatGPT as “probabilistic engines designed to spit out what seems plausible”.
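As a rough illustration of that point, here is a toy "autocomplete" (a hypothetical example, not Google's or OpenAI's code) that predicts the next word purely from how often words followed each other in a small sample text. Large language models are vastly more sophisticated, but they are likewise choosing what is statistically plausible rather than checking what is true.

```python
# Toy sketch of a "probabilistic engine": predict the next word by counting which
# word most often followed the previous one in a sample text (a bigram model).
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    """Return the statistically most plausible next word and its probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(autocomplete("the"))  # the most common follower of "the" in the sample text
print(autocomplete("sat"))  # ('on', 1.0): "sat" was always followed by "on"
```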

Should generative AI prove able to arrive at the correct solution to math problems on its own, it would suggest a capacity for higher reasoning.

This could potentially be the first step towards developing artificial general intelligence, a form of AI that can surpass humans.

The fear is that an AGI would need guardrails, since it might one day come to view humanity as a threat to its existence.

 

bnew


OpenAI Says Board Can Overrule CEO on Safety of New AI Releases

The arrangement was mentioned in a set of guidelines released Monday explaining how the ChatGPT-maker plans to deal with AI risks.

Sam Altman, chief executive officer of OpenAI, at the Hope Global Forums annual meeting in Atlanta, Georgia, US, on Monday, Dec. 11, 2023. Photographer: Dustin Chambers/Bloomberg

By Rachel Metz
December 18, 2023 at 1:03 PM EST


OpenAI said its board can choose to hold back the release of an AI model even if the company’s leadership has deemed it safe, another sign of the artificial intelligence startup empowering its directors to bolster safeguards for developing the cutting-edge technology.

The arrangement was spelled out in a set of guidelines released Monday explaining how the ChatGPT-maker plans to deal with what it may deem to be extreme risks from its most powerful AI systems. The release of the guidelines follows a period of turmoil at OpenAI after Chief Executive Officer Sam Altman was briefly ousted by the board, putting a spotlight on the balance of power between directors and the company’s C-suite.

OpenAI’s recently announced “preparedness” team said it will continuously evaluate its AI systems to figure out how they fare across four different categories — including potential cybersecurity issues as well as chemical, nuclear and biological threats — and work to lessen any hazards the technology appears to pose. Specifically, the company is monitoring for what it calls “catastrophic” risks, which it defines in the guidelines as “any risk which could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals.”

Aleksander Madry, who is leading the preparedness group and is on leave from a faculty position at the Massachusetts Institute of Technology, told Bloomberg News his team will send a monthly report to a new internal safety advisory group. That group will then analyze Madry’s team’s work and send recommendations to Altman and the company’s board, which was overhauled after ousting the CEO. Altman and his leadership team can make a decision about whether to release a new AI system based on these reports, but the board has the right to reverse that decision, according to the document.

OpenAI announced the formation of the “preparedness” team in October, making it one of three separate groups overseeing AI safety at the startup. There’s also “safety systems,” which looks at current products such as GPT-4, and “superalignment,” which focuses on extremely powerful — and hypothetical — AI systems that may exist in the future.

Madry said his team will repeatedly evaluate OpenAI’s most advanced, unreleased AI models, rating them “low,” “medium,” “high,” or “critical” for different types of perceived risks. The team will also make changes in hopes of reducing potential dangers they spot in AI and measure their effectiveness. OpenAI will only roll out models that are rated “medium” or “low,” according to the new guidelines.
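The guidelines do not publish an algorithm, but the release rule Madry describes amounts to a simple threshold check. The sketch below is a hypothetical rendering of it: each tracked risk category gets a rating, the overall score is assumed here to be the worst category rating, and only "low" or "medium" clears the bar. The board-override step described above sits on top of this and is not modeled.

```python
# Hypothetical sketch of the release rule described in the guidelines; the category
# names, data layout, and aggregation are illustrative assumptions, not OpenAI code.
RATING_ORDER = ["low", "medium", "high", "critical"]
DEPLOYABLE = {"low", "medium"}

def overall_rating(scores):
    """Assume the overall rating is the worst rating across the tracked categories."""
    return max(scores.values(), key=RATING_ORDER.index)

def can_release(scores):
    """Only models rated "medium" or "low" overall clear the bar for rollout."""
    return overall_rating(scores) in DEPLOYABLE

# Invented evaluation for an unreleased model.
evaluation = {
    "cybersecurity": "medium",
    "chemical": "low",
    "biological": "low",
    "nuclear": "low",
}
print(overall_rating(evaluation), can_release(evaluation))  # medium True
```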

“AI is not something that just happens to us that might be good or bad,” Madry said. “It’s something we’re shaping.”

Madry said he hopes other companies will use OpenAI’s guidelines to evaluate potential risks from their AI models as well. The guidelines, he said, are a formalization of many processes OpenAI followed previously when evaluating AI technology it has already released. He and his team came up with the details over the past couple months, he said, and got feedback from others within OpenAI.
 