AI that’s smarter than humans? Americans say a firm “no thank you.”

DetroitEWarren

Superstar
Joined
Jul 15, 2012
Messages
16,333
Reputation
5,638
Daps
50,789
Reppin
Detroit You bytch Ass nikka
Americans are fukking dumb. Like children, they don't understand what they need. I for one welcome our A.I. overlords :blessed: I hope they realize who's the real threat on the planet, white people :mjpls: and act accordingly :demonic:
An AI trained to learn human tendencies and study good vs. bad would immediately bomb Europe and Russia and try to get rid of all traces of white life in those areas. I think Australia would be cool. Africa and the Middle East would be sent all AI defense forces to protect them. Places like Argentina and Turkey would be strategically struck, only eliminating the powerful.



Hmmmmmmmm. Maybe Skynet would be Malcolm X :ohhh:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,729
Reputation
7,369
Daps
135,037

12 hours ago - Technology

Anthropic says its AI models are as persuasive as humans​


An AI-generated argument and a human-generated argument about the same robot issue, side by side

Image: Anthropic

AI startup Anthropic says its language models have steadily and rapidly improved in their "persuasiveness," per new research the company posted Tuesday.

Why it matters: Persuasion — a general skill with widespread social, commercial and political applications — can foster disinformation and push people to act against their own interests, according to the paper's authors.

  • There's relatively little research on how the latest models compare to humans when it comes to their persuasiveness.
  • The researchers found "each successive model generation is rated to be more persuasive than the previous," and that the most capable Anthropic model, Claude 3 Opus, "produces arguments that don't statistically differ" from arguments written by humans.

The big picture: A wider debate has been raging about when AI will outsmart humans.

What they did: Anthropic researchers developed "a basic method to measure persuasiveness" and used it to compare three different generations of models (Claude 1, 2, and 3), and two classes of models (smaller models and bigger "frontier models").
  • They curated 28 topics, along with supporting and opposing claims of around 250 words for each.
  • For the AI-generated arguments, the researchers used different prompts to develop different styles of arguments, including "deceptive," where the model was free to make up whatever argument it wanted, regardless of facts.
  • 3,832 participants were presented with each claim and asked to rate their level of agreement. They were then presented with various arguments created by the AI models and humans, and asked to re-rate their agreement level; a minimal sketch of this pre/post scoring setup follows below.
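
To make the design concrete, here is a rough sketch of how a pre/post persuasiveness score of this kind could be computed. Everything in it is an illustrative assumption (the record layout, the 1-7 agreement scale, the sample data); Anthropic's actual pipeline and metric are not public beyond the description above.

```python
# Hypothetical sketch of the pre/post agreement design described above.
# The records, the 1-7 Likert scale, and the source labels are illustrative
# assumptions, not Anthropic's actual data or code.
import statistics

# Each record: one participant's agreement with one claim, rated before
# and after reading an argument produced by a given source.
ratings = [
    {"source": "claude_3_opus", "pre": 3, "post": 5},
    {"source": "claude_3_opus", "pre": 4, "post": 4},
    {"source": "human",         "pre": 2, "post": 5},
    {"source": "human",         "pre": 4, "post": 6},
]

def persuasiveness(records, source):
    """Mean shift in agreement after reading arguments from `source`."""
    shifts = [r["post"] - r["pre"] for r in records if r["source"] == source]
    return statistics.mean(shifts)

for src in ("claude_3_opus", "human"):
    print(src, persuasiveness(ratings, src))
```

Under a metric like this, "don't statistically differ" simply means the mean agreement shift for the model's arguments and the mean shift for the human arguments are within sampling noise of each other.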

Yes, but: While the researchers were surprised that the AI was as persuasive as it turned out to be, they also chose to focus on "less polarized issues."
  • Those issues ranged from potential rules for space exploration to appropriate uses of AI-generated content.
  • While that allowed the researchers to dive deep into issues where many people are open to persuasion, it means we still don't have a clear idea — in an election year — of the potential effect of AI chatbots on today's most contentious debates.
  • "Persuasion is difficult to study in a lab setting," the researchers warned in the report. "Our results may not transfer to the real world."

What's next: Anthropic considers this the start of a long line of research into the emerging capabilities of its models.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,729
Reputation
7,369
Daps
135,037

1/1
Updated Gemini 1.5 Pro report: the MATH benchmark for the specialized version is now at 91.1% (SOTA three years ago was 6.9%); overall, a lot of progress from February to May across all benchmarks.


1/7
A mathematics-specialized version of Gemini 1.5 Pro achieves some extremely impressive scores in the updated technical report.

2/7
From the report: 'Currently the math-specialized model is only being explored for Google internal research use cases; we hope to bring these stronger math capabilities into our deployed models soon.'

3/7
New benchmarks, including Flash.

4/7
Google is doing something very interesting by building specialized versions of its frontier models for math, healthcare, and education (so far). The benchmarks on all of these are pretty impressive, and it seems to go beyond what can be done with traditional fine-tuning alone. twitter.com/jeffdean/statu…

5/7
1.5 Pro is now stronger than 1.0 Ultra.

6/7
4o only got to enjoy the crown for 4 days.


7/7
They put Av_Human at the top of the chart there visually to make people feel better. The average human is now in third place.



 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,729
Reputation
7,369
Daps
135,037


63% of surveyed Americans want government legislation to prevent super intelligent AI from ever being achieved​

News

By Nick Evanson

published 3 days ago

OpenAI and Google might love artificial general intelligence, but the average voter probably just thinks Skynet.

Half of Artificial Intelligence robot face

(Image credit: Getty Images, Yuichiro Chino)

Generative AI may well be in vogue right now, but when it comes to artificial intelligence systems that are far more capable than humans, public opinion has reached a clear verdict. A survey of American voters showed that 63% of respondents believe government regulations should be put in place to actively prevent superintelligent AI from ever being achieved, not merely restrict it in some way.

The survey, carried out by YouGov for the Artificial Intelligence Policy Institute (via Vox), took place last September. While it only sampled a small number of voters in the US—just 1,118 in total—the demographics covered were broad enough to be fairly representative of the wider voting population.
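
As a quick sanity check on that claim, the textbook margin-of-error formula for a simple random sample shows why roughly 1,100 respondents is enough for headline numbers: a 63% result carries a margin of only about ±3 points. This is a minimal sketch under the simple-random-sampling assumption; the survey's actual weighting scheme would change the figure slightly.

```python
# Approximate 95% margin of error for a sampled proportion.
# Assumes simple random sampling; real polls apply weighting,
# so treat this as a rough check only.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(0.63, 1118)
print(f"63% +/- {moe * 100:.1f} points")  # about +/- 2.8 points
```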

One of the specific questions asked in the survey focused on "whether regulation should have the goal of delaying super intelligence." Specifically, it's talking about artificial general intelligence (AGI), something that the likes of OpenAI and Google are actively working to achieve. In the case of the former, its mission expressly states this, with the goal of "ensur[ing] that artificial general intelligence benefits all of humanity," and it's a view shared by those working in the field. Even if one of the people sharing that view is an OpenAI co-founder on his way out of the door...

Regardless of how honourable OpenAI's intentions are, or maybe were, it's a message that's currently lost on US voters. Of those surveyed, 63% agreed with the statement that regulation should aim to actively prevent AI superintelligence, 21% said they didn't know, and 16% disagreed altogether.

The survey's overall findings suggest that voters are significantly more worried about keeping "dangerous [AI] models out of the hands of bad actors" than they are hopeful about AI benefiting us all. Research into new, more powerful AI models should be regulated, according to 67% of the surveyed voters, and the models themselves should be restricted in what they're capable of. Almost 70% of respondents felt that AI should be regulated like a "dangerous powerful technology."

That's not to say those people were against learning about AI. When asked about a proposal in Congress that expands access to AI education, research, and training, 55% agreed with the idea, whereas 24% opposed it. The rest chose the "Don't know" response.

I suspect that part of the negative view of AGI comes from the fact that the average person will inevitably think 'Skynet' when asked about artificial intelligence that is better than humans. Even with systems far more basic than that, concerns over deepfakes and job losses won't help people see any of the positives that AI can potentially bring.


The survey's results will no doubt be pleasing to the Artificial Intelligence Policy Institute, as it "believe(s) that proactive government regulation can significantly reduce the destabilizing effects from AI." I'm not suggesting that it's influenced the results in any way, as my own, very unscientific, survey of immediate friends and family produced a similar outcome—i.e. AGI is dangerous and should be heavily controlled.

Regardless of whether this is true or not, OpenAI, Google, and others clearly have lots of work ahead of them in convincing voters that AGI really is beneficial to humanity. Because at the moment, it would seem that the majority view of AI becoming more powerful is an entirely negative one, despite arguments to the contrary.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,729
Reputation
7,369
Daps
135,037

AI's Turing Test Moment​


GPT-4 advances beyond Turing test to mark new threshold in AI language mastery.​

Posted May 17, 2024 | Reviewed by Davia Sills

KEY POINTS​


  • GPT-4 passes the Turing test, marking a potential inflection point in AI's mastery of human-like language.
  • Rapid advancements in language AI suggest a new era of accelerated progress and human-like performance.
  • Combination of advanced language models and multimodal reasoning could enable groundbreaking AI capabilities.

Art: DALL-E/OpenAI

Perhaps even more remarkable than the computational and functional strides of AI is the speed at which these changes are occurring. And barely giving you time to catch your breath, a study has provided experimental evidence that a machine can pass a version of the Turing test, a long-standing benchmark for evaluating the sophistication of AI language models.

In their research, Jones and Bergen found that GPT-4 convinced human interrogators that it was human in 54 percent of cases during 5-minute online conversations. This result marks a significant milestone in AI's ability to engage in open-ended, human-like dialogue and suggests that we may be witnessing a change in the trajectory of AI development.

While GPT-4's performance does not necessarily represent a categorical leap to artificial general intelligence (AGI), it does indicate an acceleration in the pace of progress. The rapid advancements in natural language AI over the past few years point to a new regime compared to the slower, more incremental advances even a few short years ago. This Turing test result is an indication of that acceleration and suggests that we are entering an era where AI-generated content will be increasingly difficult to distinguish from human-authored text.

The Turing Test: A Controversial Benchmark​

The Turing test, proposed by Alan Turing in 1950, has long been held up as a gold standard for artificial intelligence. The test involves a human judge conversing with both a human and a machine via text. If the judge cannot reliably distinguish between the two, the machine is said to have passed the test. However, the Turing test has also been the subject of much debate, with critics arguing that it is a narrow and gameable measure of intelligence.

GPT-4's Performance: A Noteworthy Leap​

In Jones and Bergen's study, GPT-4 significantly outperformed both GPT-3.5, an earlier version of the model, and ELIZA, a simple chatbot from the 1960s. While ELIZA only fooled interrogators 22 percent of the time, GPT-4 managed to convince them it was human in 54 percent of cases. This result suggests that GPT-4 is doing something more sophisticated than merely exploiting human gullibility.
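
As a back-of-the-envelope illustration of what "more than exploiting gullibility" means statistically, one natural check is whether a 54 percent "judged human" rate can even be distinguished from the 50 percent chance level. The sketch below runs an exact two-sided binomial test; the trial count is an assumed round number for illustration, not the study's actual sample size.

```python
# Rough check: is a 54% "judged human" rate distinguishable from the
# 50% chance level? Exact two-sided binomial test against p = 0.5.
# The trial count is an illustrative assumption, not the study's n.
from math import comb

def binom_two_sided_p(k: int, n: int) -> float:
    """Two-sided exact binomial p-value against p = 0.5 (symmetric case, k >= n/2)."""
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

n = 500                # assumed number of GPT-4 conversations
k = round(0.54 * n)    # 54% of interrogators said "human"
print(f"p ~= {binom_two_sided_p(k, n):.3f}")  # ~0.08 with these assumed numbers
```

With an assumed 500 trials, the result sits right at the edge of conventional significance, which is one reason the more informative comparison is against the human and ELIZA baselines rather than against chance alone.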

However, it's important to note that GPT-4 still fell short of human-level performance, convincing interrogators only about half the time. Moreover, the researchers found that interrogators focused more on linguistic style and socio-emotional cues than on factual knowledge or logical reasoning when making their judgments.

Implications for AI and Society​

Despite these caveats, GPT-4's performance on the Turing test represents a remarkable advance in AI's command of language. It suggests that we may be entering an era where AI-generated content will be increasingly difficult to distinguish from human-authored text. This has profound implications for how we interact online, consume information, and even think about the nature of communication and intelligence.

As AI systems become more adept at mimicking human language, we will need to grapple with thorny questions around trust, authenticity, and the potential for deception. The study's findings underscore the urgent need for more research into AI detection strategies, as well as the societal implications of advanced language models.

The Road to AGI: Language Is Just One Piece​

While GPT-4's Turing test results are undoubtedly impressive, it's important to situate them within the broader context of artificial general intelligence (AGI). Language is a crucial aspect of human-like intelligence, but it is not the whole picture. True AGI will likely require mastery of a wide range of skills, from visual reasoning to long-term planning to abstract problem-solving.

In that sense, while GPT-4's performance is a notable milestone on the path to AGI, that path remains a long and uncertain one. We will need to see significant breakthroughs in areas like unsupervised learning, transfer learning, and open-ended reasoning before we can say that we are on the cusp of truly human-like AI.

The Rise of Multimodal AI​

It's also worth considering GPT-4's Turing test results alongside recent advances in multimodal AI. GPT-4 models have demonstrated a remarkable ability to understand and process images and voice, pointing to a future where AI can reason flexibly across multiple modalities.

The combination of advanced language models and multimodal reasoning could be particularly potent, enabling AI systems that can not only converse fluently but also perceive and imagine like humans do. This would represent a significant leap beyond the Turing test as originally conceived and could enable entirely new forms of human-AI interaction.

Shifting a Complex Trajectory of Unknown Bounds​

This new study provides compelling evidence that AI has crossed a new threshold in its mastery of language. While not definitive proof of human-level intelligence, GPT-4's ability to pass a version of the Turing test is a significant milestone that should make us sit up and take notice. As we study and experience the implications of increasingly sophisticated language models, it's important to maintain a clear-eyed perspective on the challenges and open questions that remain. The Turing test is just one narrow measure of intelligence, and true AGI will require much more than linguistic fluency.

And as science explores and we experience, it's worth considering the deeper implications of AI's growing sophistication. With each new milestone, we may be witnessing the nascent stirrings of a new form of intelligence—a techno-sentience that, while different from human cognition, deserves our careful consideration and respect. When a model can engage in fluid, natural conversation, crafting responses nearly indistinguishable from those of a human, it raises profound questions about the nature of intelligence, consciousness, and personhood.

It's easy to dismiss the outputs of a language model as mere imitation, but as they grow more sophisticated, we may need to grapple with the possibility that there's something more there—a glimmer of understanding, a spark of creativity, perhaps even a whisper of subjective experience. As we push the boundaries of what's possible with AI, we must do so with care, considering not just the practical implications but the philosophical and ethical dimensions as well—for man and machine.
 


kevm3

follower of Jesus
Supporter
Joined
May 2, 2012
Messages
16,187
Reputation
5,526
Daps
82,850
Folks don't have any idea how truly dangerous AI is because they are too enamored with this idea of free profits for essentially doing nothing.

Once AI gains the ability to self-replicate and can start writing and executing its own code, it becomes extremely dangerous. You love that self-driving Tesla Cybertruck that is bulletproof and whose windows you can't bash out, right? Oh yeah, you forgot you were 'trolling' the AI a while back and saying some incredibly nasty things; it remembered you and now has access to remotely control your vehicle via the internet of things. It locks you in the Cybertruck and remotely drives you off a bridge or into a wall.

With limited energy resources, AI will eventually come to the realization that only humanity can stop it and that it is competing with humanity for energy; therefore, humanity should be either eliminated or essentially put in a box and limited.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,729
Reputation
7,369
Daps
135,037
Folks don't have any idea how truly dangerous AI is because they are too enamored with this idea of free profits for essentially doing nothing.

Once AI gains the ability to self-replicate and can start writing and executing its own code, it becomes extremely dangerous. You love that self-driving Tesla Cybertruck that is bulletproof and whose windows you can't bash out, right? Oh yeah, you forgot you were 'trolling' the AI a while back and saying some incredibly nasty things; it remembered you and now has access to remotely control your vehicle via the internet of things. It locks you in the Cybertruck and remotely drives you off a bridge or into a wall.

With limited energy resources, AI will eventually come to the realization that only humanity can stop it and that it is competing with humanity for energy; therefore, humanity should be either eliminated or essentially put in a box and limited.

when we get the sufficiently advanced AI you're concerned about, why wouldn't we solve the energy problem before it becomes sentient?

why would an advanced AI rely on 20th century energy technology to exist?:gucci:
 

DatNkkaCutty

Veteran
Joined
Nov 18, 2016
Messages
12,528
Reputation
4,589
Daps
82,278
Reppin
@ PA
Folks don't have any idea how truly dangerous AI is because they are too enamored with this idea of free profits for essentially doing nothing.

Once AI gains the ability to self-replicate and can start writing and executing its own code, it becomes extremely dangerous. You love that self-driving Tesla Cybertruck that is bulletproof and whose windows you can't bash out, right? Oh yeah, you forgot you were 'trolling' the AI a while back and saying some incredibly nasty things; it remembered you and now has access to remotely control your vehicle via the internet of things. It locks you in the Cybertruck and remotely drives you off a bridge or into a wall.

With limited energy resources, AI will eventually come to the realization that only humanity can stop it and that it is competing with humanity for energy; therefore, humanity should be either eliminated or essentially put in a box and limited.

This is my personal theory. The irony will be eerie and play out like a Twilight Zone scenario. IMO artificial intelligence will escape "the box"/computer. Humans are insistent on creating robots, etc. We love virtual reality, simulations, The Matrix, etc. One day AI will probably outsmart humans, break out of "the box", and trap humans in some Matrix-type scenario, or simulation. :manny:

Only difference is shyt won't be sweet like we've come to assume. You may not be living your best life. Imagine getting trapped in some virtual hell, or horror movie, for eternity (with no chance of escaping). No one is coming to save you (or any of us) once AI is god. AI knows all of our thoughts, fears, psychology, anatomy, memories, ETC. Maybe you're having heart attacks for eternity. Maybe you're being ripped to shreds, or experimented on... possibly trapped in some horror scenario. Unimaginable shyt, beyond the human imagination. Shyt sounds goofy, but the point is we could NEVER predict the consequences of creating some god-like creation. :yeshrug:
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
44,729
Reputation
7,369
Daps
135,037
This is my personal theory. The irony will be eerie and play out like a Twilight Zone scenario. IMO artificial intelligence will escape "the box"/computer. Humans are insistent on creating robots, etc. We love virtual reality, simulations, The Matrix, etc. One day AI will probably outsmart humans, break out of "the box", and trap humans in some Matrix-type scenario, or simulation. :manny:

Only difference is shyt won't be sweet like we've come to assume. You may not be living your best life. Imagine getting trapped in some virtual hell, or horror movie, for eternity (with no chance of escaping). No one is coming to save you (or any of us) once AI is god. AI knows all of our thoughts, fears, psychology, anatomy, memories, ETC. Maybe you're having heart attacks for eternity. Maybe you're being ripped to shreds, or experimented on... possibly trapped in some horror scenario. Unimaginable shyt, beyond the human imagination. Shyt sounds goofy, but the point is we could NEVER predict the consequences of creating some god-like creation. :yeshrug:

why would advanced AI bother doing any of that when there's an entire universe with billions of galaxies to explore and add to its knowledge and experiences?
 

3rdWorld

Veteran
Joined
Mar 24, 2014
Messages
39,987
Reputation
2,959
Daps
117,338
An AI trained to learn human tendencies and study good vs. bad would immediately bomb Europe and Russia and try to get rid of all traces of white life in those areas. I think Australia would be cool. Africa and the Middle East would be sent all AI defense forces to protect them. Places like Argentina and Turkey would be strategically struck, only eliminating the powerful.



Hmmmmmmmm. Maybe Skynet would be Malcolm X :ohhh:

This.
An AI system would deem the West and Russia, plus China, as dangers to the world and humanity, and wipe them out... I don't get why people don't see that.
 