AI that’s smarter than humans? Americans say a firm “no thank you.”

bnew

Veteran
Joined: Nov 1, 2015
Messages: 44,164
Reputation: 7,364
Daps: 134,094

AI that’s smarter than humans? Americans say a firm “no thank you.”

Exclusive: 63 percent of Americans want regulation to actively prevent superintelligent AI, a new poll reveals.


Sam Altman, CEO of OpenAI, the company that made ChatGPT. For Altman, the chatbot is just a stepping stone on the way to artificial general intelligence.
SeongJoon Cho/Bloomberg via Getty Images


Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?

Americans, by and large, don’t want it.

That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
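
For scale, here is a quick back-of-the-envelope check on that headline number, as a minimal Python sketch. It assumes a simple random sample, which understates the true uncertainty of a weighted online panel like YouGov's, but it shows that with 1,118 respondents the 63 percent figure carries a sampling margin of only about plus or minus 3 points; the same arithmetic applies to the other results quoted below.

Code:
import math

# 95% confidence interval for a polled proportion (normal approximation).
# Caveat: assumes simple random sampling; a weighted online panel like
# YouGov's has somewhat wider true uncertainty than this suggests.
p = 0.63   # share saying regulation should prevent superintelligent AI
n = 1118   # reported sample size

se = math.sqrt(p * (1 - p) / n)   # standard error of the proportion
margin = 1.96 * se                # 95% margin of error
print(f"63% +/- {margin * 100:.1f} points "
      f"({(p - margin) * 100:.1f}% to {(p + margin) * 100:.1f}%)")
# prints: 63% +/- 2.8 points (60.2% to 65.8%)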

Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”

But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.

Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been.

“It’s so strange to me to say, ‘We have to be really careful with AGI,’ rather than saying, ‘We don’t need AGI, this is not on the table,’” Elke Schwarz, a political theorist who studies AI ethics at Queen Mary University of London, told me earlier this year. “But we’re already at a point when power is consolidated in a way that doesn’t even give us the option to collectively suggest that AGI should not be pursued.”

Building AGI is a deeply political move. Why aren’t we treating it that way?

Technological solutionism — the ideology that says we can trust technologists to engineer our way out of humanity’s greatest problems — has played a major role in consolidating power in the hands of the tech sector. Although this may sound like a modern ideology, it actually goes all the way back to the medieval period, when religious thinkers began to teach that technology is a means of bringing about humanity’s salvation. Since then, Western society has largely bought the notion that tech progress is synonymous with moral progress.

In modern America, where the profit motives of capitalism have combined with geopolitical narratives about needing to “race” against foreign military powers, tech accelerationism has reached fever pitch. And Silicon Valley has been only too happy to run with it.

RELATED

Silicon Valley’s vision for AI? It’s religion, repackaged.

AGI enthusiasts promise that the coming superintelligence will bring radical improvements. It could develop everything from cures for diseases to better clean energy technologies. It could turbocharge productivity, leading to windfall profits that may alleviate global poverty. And getting to it first could help the US maintain an edge over China; in a logic reminiscent of a nuclear weapons race, it’s better for “us” to have it than “them,” the argument goes.

But Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.

In the new AI Policy Institute/YouGov poll, the “better us than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.

Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead doesn’t mean it makes sense to unleash upon ourselves a technology we think will severely harm us.

AGI, it turns out, is just not a popular idea in America.

“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”

And yet, Colson pointed out, “most of the direction of society is set by the technologists and by the technologies that are being released … There’s an important way in which that’s extremely undemocratic.”

He expressed consternation that when tech billionaires recently descended on Washington to opine on AI policy at Sen. Chuck Schumer’s invitation, they did so behind closed doors. The public didn’t get to watch, never mind participate in, a discussion that will shape its future.

According to Schwarz, we shouldn’t let technologists depict the development of AGI as if it’s some natural law, as inevitable as gravity. It’s a choice — a deeply political one.

“The desire for societal change is not merely a technological aim, it is a fully political aim,” she said. “If the publicly stated aim is to ‘change everything about society,’ then this alone should be a prompt to trigger some level of democratic input and oversight.”
 

bnew

Veteran
Joined: Nov 1, 2015
Messages: 44,164
Reputation: 7,364
Daps: 134,094
RELATED

The case for slowing down AI

AI companies are radically changing our world. Should they be getting our permission first?

AI stands to be so transformative that even its developers are expressing unease about how undemocratic its development has been.

Jack Clark, the co-founder of AI safety and research company Anthropic, recently wrote an unusually vulnerable newsletter. He confessed that there were several key things he’s “confused and uneasy” about when it comes to AI. Here is one of the questions he articulated: “How much permission do AI developers need to get from society before irrevocably changing society?” Clark continued:
Technologists have always had something of a libertarian streak and this is perhaps best epitomized by the ‘social media’ and Uber et al era of the 2010s — vast, society-altering systems ranging from social networks to rideshare systems were deployed into the world and aggressively scaled with little regard to the societies they were influencing. This form of permissionless invention is basically the implicitly preferred form of development as epitomized by Silicon Valley and the general ‘move fast and break things’ philosophy of tech. Should the same be true of AI?

That more people, including tech CEOs, are starting to question the norm of “permissionless invention” is a very healthy development. It also raises some tricky questions.

When does it make sense for technologists to seek buy-in from those who’ll be affected by a given product? And when the product will affect the entirety of human civilization, how can you even go about seeking consensus?

Many of the great technological innovations in history happened because a few individuals decided by fiat that they had a great way to change things for everyone. Just think of the invention of the printing press or the telegraph. The inventors didn’t ask society for its permission to release them.

That may be partly because of technological solutionism and partly because, well, it would have been pretty hard to consult broad swaths of society in an era before mass communications — before things like a printing press or a telegraph! And while those inventions did come with perceived risks, they didn’t pose the threat of wiping out humanity altogether or making us subservient to a different species.

For the few technologies we’ve invented so far that meet that bar, seeking democratic input and establishing mechanisms for global oversight have been attempted, and rightly so. It’s the reason we have a Nuclear Nonproliferation Treaty and a Biological Weapons Convention — treaties that, though they’re struggling, matter a lot for keeping our world safe.

While those treaties came after the use of such weapons, another example — the 1967 Outer Space Treaty — shows that it’s possible to create such mechanisms in advance. Ratified by dozens of countries and adopted by the United Nations against the backdrop of the Cold War, it laid out a framework for international space law. Among other things, it stipulated that the moon and other celestial bodies can only be used for peaceful purposes, and that states can’t store their nuclear weapons in space.

Nowadays, the treaty comes up in debates about whether we should send messages into space with the hope of reaching extraterrestrials. Some argue that’s very dangerous because an alien species, once aware of us, might oppress us. Others argue it’s more likely to be a boon — maybe the aliens will gift us their knowledge in the form of an Encyclopedia Galactica. Either way, it’s clear that the stakes are incredibly high and all of human civilization would be affected, prompting some to make the case for democratic deliberation before any more intentional transmissions are sent into space.

As Kathryn Denning, an anthropologist who studies the ethics of space exploration, put it in an interview with the New York Times, “Why should my opinion matter more than that of a 6-year-old girl in Namibia? We both have exactly the same amount at stake.”

Or, as the old Roman proverb goes: what touches all should be decided by all.

That is as true of superintelligent AI as it is of nukes, chemical weapons, or interstellar broadcasts. And though some might argue that the American public only knows as much about AI as a 6-year-old, that doesn’t mean it’s legitimate to ignore or override the public’s general wishes for technology.

“Policymakers shouldn’t take the specifics of how to solve these problems from voters or the contents of polls,” Colson acknowledged. “The place where I think voters are the right people to ask, though, is: What do you want out of policy? And what direction do you want society to go in?”
 

Spence

Superstar
Joined: Jul 14, 2015
Messages: 16,546
Reputation: 2,710
Daps: 43,757
Genie is out of the bottle and def not going back in. All we can “hope for” is a government that doesn’t give a shyt about us to deploy universal basic income. It’s going to look like SSI benefits though and won’t be nearly enough to live off of. :beli:
 

IIVI

Superstar
Joined: Mar 11, 2022
Messages: 8,830
Reputation: 2,226
Daps: 28,634
Reppin: LA
Genie is out of the bottle and def not going back in. All we can “hope for” is a government that doesn’t give a shyt about us to deploy universal basic income. It’s going to look like SSI benefits though and won’t be nearly enough to live off of. :beli:
Yup. I been saying it, long before ChatGPT - our biggest mistake was having no plan for A.I.

We had many chances over the last decade. Even politicians used it as a focal point of their campaigns.

Nobody cared.

Now we do and it's too late.
 

TRUEST

Superstar
Joined: May 17, 2012
Messages: 13,284
Reputation: 2,532
Daps: 51,359
Reppin: NULL
An army of hostile strangers is fast approaching. What do you do? Do you ask for a meeting to discuss the terms of their impending attack? lol

Or do you summon up your own automaton combatants? Release your merchants of death, indiscriminately. The fight ahead isn’t to be fought in the realm of the physical but of the virtual.

Black man. Black woman. Wake the fucck up.
 

bnew

Veteran
Joined: Nov 1, 2015
Messages: 44,164
Reputation: 7,364
Daps: 134,094
A.I. will never be 'smarter' than humans.


Today, A.I. is the dumbest it'll ever be. :lupe:

Med-PaLM 2


The new large language model from Google, Med-PaLM 2, is reportedly matching or beating physician answers on medical question-answering benchmarks. arxiv.org/abs/2305.09617

Nothing wrong with that, since AI will make this the trend for "all" roles and occupations, even highly skilled ones.

The role of physicians will shift toward soft skills and the personal side of care, since they’ll have far more time. Those are exactly the qualities that are lacking today, as physicians are being ground down by the healthcare system.
 

The Pledge

THE PRICE OF THE BRICK GOING UP!
Joined: Dec 13, 2019
Messages: 4,439
Reputation: 1,657
Daps: 21,030
Cambridge Analytica already showed me that people don’t even understand wtf is happening with their data NOR do they honestly care.

:unimpressed: This is why devs shouldn’t give a fukk about what the users want. Just keep innovating.
 