Biden issues U.S.' first AI executive order, requiring safety assessments, civil rights guidance, research on labor market impact

Kyle C. Barker

Migos VERZUZ Mahalia Jackson
Joined
Feb 5, 2015
Messages
26,718
Reputation
9,042
Daps
114,804
I heard a YouTuber make a good point about ai

It’s not AI workers should be worried about, it’s the 7% interest rate that’s killing jobs

I guess this YouTuber was talking about the fed rate (which is actually in the 5s).

In any case, they were being dramatic. The rates can be lowered in the future but you can never put the AI genie back in the bottle.
 

Gritsngravy

Superstar
Joined
Mar 11, 2022
Messages
7,567
Reputation
486
Daps
14,936
I guess this YouTuber was talking about the fed rate (which is actually in the 5s).

In any case, they were being dramatic. The rates can be lowered in the future but you can never put the AI genie back in the bottle.
We can adapt to AI, but that won’t be possible if the powers that be are committed to trying to keep the dollar as top dog, making risky decisions that backfire on everybody, especially citizens
 

JLova

Veteran
Joined
May 6, 2012
Messages
55,326
Reputation
3,664
Daps
164,883
I guess, I’m just saying people should be way more afraid of the fact that the education system is terrible in America

Sure, there are many issues right now but the government doesn't benefit from an educated population.
 

Gritsngravy

Superstar
Joined
Mar 11, 2022
Messages
7,567
Reputation
486
Daps
14,936
Sure, there are many issues right now but the government doesn't benefit from an educated population.
You know I’ve heard that saying before but that shyt doesn’t make no sense if you really think about it

An uneducated population is the reason why America is going to fall, is falling I should say
 

JLova

Veteran
Joined
May 6, 2012
Messages
55,326
Reputation
3,664
Daps
164,883
You know I’ve heard that saying before but that shyt doesn’t make no sense if you really think about it

An uneducated population is the reason why America is going to fall, is falling I should say

The government would not be able to do a lot of the shyt they do with a more educated public. Not just government, but corps as well. The system would collapse. Maybe it's for the greater good, but not for the government.
 

Gritsngravy

Superstar
Joined
Mar 11, 2022
Messages
7,567
Reputation
486
Daps
14,936
The government would not be able to do a lot of the shyt they do with a more educated public. Not just government, but corps as well. The system would collapse. Maybe it's for the greater good, but not for the government.
The system wouldn’t collapse, it would just be harder to hide corruption

them neglecting the citizens has put the government in a situation where they not only fukked the country up but fukked themselves up as well

It’s dumb as hell to not take education seriously

Them crackers thought they shyt ain’t stink and now they about to get moved off the block in a geopolitical sense
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
45,193
Reputation
7,433
Daps
136,400

McConnell opposes bill to ban use of deceptive AI to influence elections

BY ALEXANDER BOLTON - 05/15/24 11:41 AM ET

Minority Leader Mitch McConnell (R-Ky.) addresses reporters after the weekly policy luncheon on Tuesday, October 31, 2023. (Greg Nash)

Senate Republican Leader Mitch McConnell (R-Ky.) announced Wednesday he will oppose bipartisan legislation coming out of the Senate Rules Committee that would ban the use of artificial intelligence (AI) to create deceptive content about federal candidates to influence elections.

McConnell, a longtime opponent of campaign finance restrictions, warned that the bills coming out of the Rules Committee “would tamper” with what he called the “well-developed legal regime” for taking down false ads and “create new definitions that could reach well beyond deepfakes.”

He argued that if his colleagues on the Rules panel viewed a dozen political ads, they “would differ on which ones were intentionally misleading.”

“The core question we’re facing is whether or not politicians should have another tool to take down speech they don’t like,” he said. “But if the amendment before us extends this authority to unpaid political speech, then we’re also talking about an extension of speech regulation that has not happened in the 50 years of our modern campaign finance regime.”

The Protect Elections from Deceptive AI Act, which would ban the use of AI to create misleading content, is backed by Senate Rules Committee Chair Amy Klobuchar (D-Minn.) and Sens. Josh Hawley (R-Mo.), Chris Coons (D-Del.), Susan Collins (R-Maine), Michael Bennet (D-Colo.) and Pete Ricketts (R-Neb.).

But McConnell, citing testimony from Sen. Bill Hagerty (R-Tenn.), said the definitions in the bills to crack down on deepfakes are “nebulous, at best, and overly censorious if they’re applied most cynically.”

“They could wind up barring all manner of photos and videos as long as the ill-defined ‘reasonable person’ could deduce an alternative meaning from the content,” he said.

The Rules Committee also marked up Wednesday the AI Transparency in Elections Act, which requires disclaimers on political ads with images, audio or video generated by AI, and the Preparing Election Administrators for AI Act, which requires federal agencies to develop voluntary guidelines for election offices.

McConnell said the proposal to require new disclaimers could be used to regulate content, which he opposes.

“I also have concerns about the disclaimer provisions and their application. Our political disclaimer regime has for its entire history served a singular purpose: to help voters understand who is paying for or endorsing an advertisement. It has never been applied to political advertisements as a content regulation tool,” he said.

He urged his colleagues to spend more time on the issue to reach consensus and announced he would oppose the AI-related bills moving forward.

“Until Congress reaches a consensus understanding of what AI is acceptable and what is not, leading with our chin is not going to cut it in the domain of political speech. So I will oppose S. 2770 or S. 3875 at this time. And I would urge my colleagues to do the same,” he said.

All three bills cleared the Rules Committee.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
45,193
Reputation
7,433
Daps
136,400


63% of surveyed Americans want government legislation to prevent super intelligent AI from ever being achieved

News

By Nick Evanson

published 3 days ago

OpenAI and Google might love artificial general intelligence, but the average voter probably just thinks Skynet.

(Image credit: Getty Images, Yuichiro Chino)

Generative AI may well be in vogue right now, but when it comes to artificial intelligence systems that are far more capable than humans, voters' views are clear. A survey of American voters showed that 63% of respondents believe government regulations should be put in place to actively prevent superintelligent AI from ever being achieved, not merely to restrict it.

The survey, carried out by YouGov for the Artificial Intelligence Policy Institute (via Vox), took place last September. While it only sampled a small number of voters in the US—just 1,118 in total—the demographics covered were broad enough to be fairly representative of the wider voting population.

One of the specific questions asked in the survey focused on "whether regulation should have the goal of delaying super intelligence." Specifically, it's talking about artificial general intelligence (AGI), something that the likes of OpenAI and Google are actively working on trying to achieve. In the case of the former, its mission expressly states this, with the goal of "ensur[ing] that artificial general intelligence benefits all of humanity" and it's a view shared by those working in the field. Even if that is one of the co-founders of OpenAI on his way out of the door...

Regardless of how honourable OpenAI's intentions are, or maybe were, it's a message that's currently lost on US voters. Of those surveyed, 63% agreed with the statement that regulation should aim to actively prevent AI superintelligence, 21% said they didn't know, and 16% disagreed altogether.

The survey's overall findings suggest that voters are significantly more worried about keeping "dangerous [AI] models out of the hands of bad actors" rather than it being of benefit to us all. Research into new, more powerful AI models should be regulated, according to 67% of the surveyed voters, and they should be restricted in what they're capable of. Almost 70% of respondents felt that AI should be regulated like a "dangerous powerful technology."

That's not to say those people were against learning about AI. When asked about a proposal in Congress that expands access to AI education, research, and training, 55% agreed with the idea, whereas 24% opposed it. The rest chose the "Don't know" response.

I suspect that part of the negative view of AGI is that the average person will undoubtedly think 'Skynet' when questioned about artificial intelligence smarter than humans. Even with systems far more basic than that, concerns over deepfakes and job losses won't help anyone see the positives that AI can potentially bring.


The survey's results will no doubt be pleasing to the Artificial Intelligence Policy Institute, as it "believe(s) that proactive government regulation can significantly reduce the destabilizing effects from AI." I'm not suggesting that it's influenced the results in any way, as my own, very unscientific, survey of immediate friends and family produced a similar outcome—i.e. AGI is dangerous and should be heavily controlled.

Regardless of whether this is true or not, OpenAI, Google, and others clearly have lots of work ahead of them in convincing voters that AGI really is beneficial to humanity. Because at the moment, it would seem that the majority view of AI becoming more powerful is an entirely negative one, despite arguments to the contrary.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
45,193
Reputation
7,433
Daps
136,400

Democratic consultant indicted for Biden deepfake that told people not to vote


Steven Kramer charged with voter suppression and faces possible $6 million fine.

JON BRODKIN - 5/23/2024, 3:17 PM

President Joe Biden at a Rose Garden event at the White House on May 1, 2023, in Washington, DC. (Getty Images | Alex Wong)


A Democratic consultant was indicted on charges of voter suppression and impersonation of a candidate after admitting that he commissioned a robocall that used artificial intelligence to imitate President Joe Biden's voice. The political consultant, Steven Kramer, is also facing a $6 million fine proposed by the Federal Communications Commission.

The fake Biden robocall urged Democrats not to vote and was placed to New Hampshire residents before the state's presidential primary in January. Kramer, who was working for a candidate running against Biden, acknowledged that he was responsible for the robocall in February.

Kramer, a 54-year-old from New Orleans, "has been charged with 13 felony counts of voter suppression... and 13 misdemeanor counts of impersonation of a candidate," New Hampshire Attorney General John Formella announced today. "The charges are spread across four counties based on the residence of thirteen New Hampshire residents who received the Biden robocalls."

Formella said his office is still investigating the incident. "New Hampshire remains committed to ensuring that our elections remain free from unlawful interference and our investigation into this matter remains ongoing," he said.

Separately, the FCC today proposed a $6 million fine against Kramer in a Notice of Apparent Liability for Forfeiture. Kramer will be given a chance to respond before the FCC makes a final determination on the fine.

"Political consultant Steve Kramer was responsible for the calls," which "apparently violated the Truth in Caller ID Act by maliciously spoofing the number of a prominent local political consultant," the FCC said. "The robocalls, made two days prior to the election, used a deepfake of President Biden's voice and encouraged voters to not vote in the primary but rather to 'save your vote for the November election.'"

Kramer defended fake Biden call

Kramer defended his actions after his role was revealed by an NBC News article. "Kramer claimed he planned the fake robocall from the start as an act of civil disobedience to call attention to the dangers of AI in politics," NBC News wrote in February after talking to Kramer. "He compared himself to American Revolutionary heroes Paul Revere and Thomas Paine. He said more enforcement is necessary to stop people like him from doing what he did."

"This is a way for me to make a difference, and I have," Kramer told NBC News in an interview. "For $500, I got about $5 million worth of action, whether that be media attention or regulatory action."

Kramer was working as a consultant for Democrat Dean Phillips, a US representative from Minnesota who ran against Biden in the New Hampshire Democratic primary. Phillips suspended his long-shot presidential campaign in March.

"Phillips and his campaign have denounced the robocall, saying they had no knowledge of Kramer's involvement and would have immediately terminated him if they had known," NBC News wrote. Kramer also said that Phillips had nothing to do with the robocall.

In early February, the New Hampshire AG's office said the robocall was traced to a Texas company called Life Corporation and a person named Walter Monk. But the AG's office said it was "continuing to investigate whether Life Corporation worked with or at the direction of any other persons or entities."

Texas-based phone company Lingo Telecom was found to have transmitted the calls and is now facing an FCC fine. The FCC today proposed a $2 million fine against Lingo Telecom for "incorrectly labeling [the calls] with the highest level of caller ID attestation and making it less likely that other providers could detect the calls as potentially spoofed."

The FCC alleged that Lingo Telecom violated rules related to the STIR/SHAKEN Caller ID authentication system. "Lingo Telecom failed to follow 'Know Your Customer' principles by applying the highest level attestation—signifying trust in the caller ID information—to apparently illegally spoofed calls without making any effort to verify the accuracy of the information," the FCC said.
 