Top AI expert ‘completely terrified’ of 2024 election, which is shaping up to be a ‘tsunami of misinformation’

bnew

Veteran
Joined
Nov 1, 2015
Messages
54,189
Reputation
8,082
Daps
153,839

FCC moves to outlaw AI-generated robocalls​

Devin Coldewey @techcrunch / 3:23 PM EST•January 31, 2024

An illustration of a humanoid robot emerging from a smartphone screen

Image Credits: Golden Sikorka / Getty Images

No one likes robocalls to begin with, but using AI-generated voices of people like President Biden makes them even worse. As such, the FCC is proposing that using voice-cloning tech in robocalls be ruled fundamentally illegal, making it easier to charge the operators of these frauds.

You may ask why this is necessary if robocalls are already illegal. In fact, some automated calls are necessary and even desirable, and it’s only when a call operation is found to be breaking the law in some way that it becomes the business of the authorities.

For example, regarding the recent fake Biden calls in New Hampshire telling people not to vote, the attorney general there can (and did) say with confidence that the messages “appear to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters.”

Under the law there, voter suppression is illegal, and so, when they track down the perpetrators (and I’m emailing them regularly to find out if they have, by the way), that will be what they are charged with, likely among other things. But it remains that a crime must be committed, or reasonably suspected to have been committed, for the authorities to step in.

If employing voice-cloning tech in automated calls, like the tech obviously used to imitate Biden, is itself illegal, that makes charging robocallers that much easier.

“That’s why the FCC is taking steps to recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers,” said FCC Chairwoman Jessica Rosenworcel in a news release. They previously announced that they were looking into this back when the problem was relatively fresh.

The FCC already uses the Telephone Consumer Protection Act as the basis for charging robocallers and other telephone scammers. The TCPA already prohibits “artificial” voices, but it is not clear that cloned voices fall under that category. It’s arguable, for instance, that a company could use the generated voice of its CEO for legitimate business purposes.

But the fact is that legal applications of the tech are fewer in number and less immediately important than the illegal applications. Therefore the FCC proposes to issue a Declaratory Ruling that AI-powered voice cloning causes a call to fall under the “artificial” heading.

The law here is being rapidly iterated as telephone, messaging and generative voice tech all evolve. So don’t be surprised if it isn’t entirely clear what is and isn’t illegal, or why some calls or scams, despite being obviously illegal, seem to operate with impunity. It’s a work in progress.

Update: FCC spokesman Will Wiquist told me that, procedurally, this proposal will be circulated internally and voted on at the Commissioners’ discretion. It will only be made public if and when it is adopted.
 

bnew


Texas firm allegedly behind fake Biden robocall that told people not to vote​

Tech and telecom firms helped New Hampshire AG trace call to "Life Corporation."​

JON BRODKIN - 2/7/2024, 4:04 PM

US President Joe Biden speaks on the phone in the Rose Garden of the White House in Washington, DC, on May 1, 2023.
Getty Images | Brendan Smialowski

An anti-voting robocall that used an artificially generated clone of President Biden's voice has been traced to a Texas company called Life Corporation "and an individual named Walter Monk," according to an announcement by New Hampshire Attorney General John Formella yesterday.

The AG office's Election Law Unit issued a cease-and-desist order to Life Corporation for violating a New Hampshire law that prohibits deterring people from voting "based on fraudulent, deceptive, misleading, or spurious grounds or information," the announcement said.

As previously reported, the fake Biden robocall was placed before the New Hampshire Presidential Primary Election on January 23. The AG's office said it is investigating "whether Life Corporation worked with or at the direction of any other persons or entities."

"What a bunch of malarkey," the fake Biden voice said. "You know the value of voting Democratic when our votes count. It's important that you save your vote for the November election. We'll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday."

The artificial Biden voice seems to have been created using a text-to-speech engine offered by ElevenLabs, which reportedly responded to the news by suspending the account of the user who created the deepfake.

The robocalls "illegally spoofed their caller ID information to appear to come from a number belonging to a former New Hampshire Democratic Party Chair," the AG's office said. Formella, a Republican, said that "AI-generated recordings used to deceive voters have the potential to have devastating effects on the democratic election process."



Tech firms helped investigation​

Formella's announcement said that YouMail and Nomorobo helped identify the robocalls and that the calls were traced to Life Corporation and Walter Monk with the help of the Industry Traceback Group run by the telecom industry. Nomorobo estimated the number of calls to be between 5,000 and 25,000.

"The tracebacks further identified the originating voice service provider for many of these calls to be Texas-based Lingo Telecom. After Lingo Telecom was informed that these calls were being investigated, Lingo Telecom suspended services to Life Corporation," the AG's office said.

The Election Law Unit issued document preservation notices and subpoenas for records to Life Corporation, Lingo Telecom, and other entities "that may possess records relevant to the Attorney General’s ongoing investigation," the announcement said.

Media outlets haven't had much luck in trying to get a comment from Monk. "At his Arlington office, the door was locked when NBC 5 knocked," an NBC 5 Dallas-Fort Worth article said. "A man inside peeked around the corner to see who was ringing the doorbell but did not answer the door."

The New York Times reports that "a subsidiary of Life Corporation called Voice Broadcasting Corp., which identifies Mr. Monk as its founder on its website, has received numerous payments from the Republican Party’s state committee in Delaware, most recently in 2022, as well as payments from congressional candidates in both parties."

A different company, also called Life Corporation, posted a message on its home page that said, "We are a medical device manufacturer located in Florida and are not affiliated with the Texas company named in current news stories."



FCC warns carrier​

The Federal Communications Commission said yesterday that it is taking action against Lingo Telecom. The FCC said it sent a letter demanding that Lingo "immediately stop supporting unlawful robocall traffic on its networks," and a K4 Order that "strongly encourages other providers to refrain from carrying suspicious traffic from Lingo."

"The FCC may proceed to require other network providers affiliated with Lingo to block its traffic should the company continue this behavior," the agency said.

The FCC is separately planning a vote to declare that the use of AI-generated voices in robocalls is illegal under the Telephone Consumer Protection Act.
 

ORDER_66

Demon Time coming 2024
Joined
Feb 2, 2014
Messages
146,361
Reputation
15,794
Daps
584,273
Reppin
Queens,NY

Texas firm allegedly behind fake Biden robocall that told people not to vote​

...and none of these people was arrested or hemmed up???:francis:
 

bnew


FCC votes to outlaw scam robocalls that use AI-generated voices​

By Brian Fung, CNN

2 minute read

Updated 11:51 AM EST, Thu February 8, 2024



dramalens/iStockphoto/Getty Images

Washington (CNN) —

The Federal Communications Commission said Thursday it is immediately outlawing scam robocalls featuring fake, artificial intelligence-created voices, cracking down on so-called “deepfake” technology that experts say could undermine election security or supercharge fraud.

The unanimous FCC vote extends anti-robocall rules to cover unsolicited AI deepfake calls by recognizing those voices as “artificial” under a federal law governing telemarketing and robocalling.

The FCC’s move gives state attorneys general more legal tools to pursue illegal robocallers that use AI-generated voices to fool Americans, the FCC said.

“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” said FCC Chairwoman Jessica Rosenworcel in a statement. “We’re putting the fraudsters behind these robocalls on notice.”

The decision to interpret the 1991 Telephone Consumer Protection Act (TCPA) more broadly to include AI-generated voices comes weeks after a fake robocall that impersonated President Joe Biden targeted thousands of New Hampshire voters and urged them not to participate in the state’s primary.

Authorities said this week they had linked those fake calls to a Texas man and two companies in an ongoing investigation that could lead to civil and criminal penalties.

In its announcement Thursday, the FCC said those who wish to send robocalls “must obtain prior express consent from the called party before making a call that utilizes artificial or prerecorded voice simulated or generated through AI technology.”

With Thursday’s change, scam robocalls featuring cloned voices would be subject to the same fines and consequences associated with illegal robocalls that do not use the technology. The FCC had announced it was considering the proposal last week.

Violations of the TCPA can carry stiff civil penalties. In 2021, the FCC announced a $5 million proposed fine against right-wing operatives Jacob Wohl and Jack Burkman for allegedly using illegal robocalls to discourage voting in the 2020 election.

The number of robocalls placed in the US peaked at around 58.5 billion in 2019, according to estimates by YouMail, a robocall blocking service. Last year, the figure was closer to 55 billion.
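The decline YouMail's estimates describe works out to roughly a six percent drop from the 2019 peak; a quick sanity check using only the figures quoted in the article (both are estimates, so treat the result as a ballpark):

```python
# Year-over-year change in US robocall volume, per YouMail's estimates
# quoted above: ~58.5 billion calls in 2019 (the peak) vs. ~55 billion last year.

PEAK_2019 = 58.5e9
RECENT = 55e9

decline = PEAK_2019 - RECENT             # 3.5 billion fewer calls
pct_decline = decline / PEAK_2019 * 100  # about 6% below the 2019 peak

print(f"{decline / 1e9:.1f} billion fewer calls, {pct_decline:.1f}% below peak")
```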

As the FCC updates its interpretation of federal law, some US lawmakers have proposed revising the law directly to further deter illegal robocallers. House Democrats unveiled legislation this year that would double the TCPA’s maximum penalties when a robocall violation involves the use of AI.

This story has been updated.
 

bnew


Tech companies sign accord to combat AI-generated election trickery​

FILE - Meta’s president of global affairs Nick Clegg speaks at the World Economic Forum in Davos, Switzerland, Jan. 18, 2024. Adobe, Google, Meta, Microsoft, OpenAI, TikTok and other companies are gathering at the Munich Security Conference on Friday to announce a new voluntary framework for how they will respond to AI-generated deepfakes that deliberately trick voters. (AP Photo/Markus Schreiber, File)

BY MATT O’BRIEN AND ALI SWENSON

Updated 1:18 PM EST, February 16, 2024

Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Tech executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new voluntary framework for how they will respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies — including Elon Musk’s X — are also signing on to the accord.

“Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

The accord is largely symbolic, but targets increasingly realistic AI-generated images, audio and video “that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election, or that provide false information to voters about when, where, and how they can lawfully vote.”

The companies aren’t committing to ban or remove deepfakes. Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide “swift and proportionate responses” when that content starts to spread.

The vagueness of the commitments and lack of any binding requirements likely helped win over a diverse swath of companies, but may disappoint pro-democracy activists and watchdogs looking for stronger assurances.

“The language isn’t quite as strong as one might have expected,” said Rachel Orey, senior associate director of the Elections Project at the Bipartisan Policy Center. “I think we should give credit where credit is due, and acknowledge that the companies do have a vested interest in their tools not being used to undermine free and fair elections. That said, it is voluntary, and we’ll be keeping an eye on whether they follow through.”

Clegg said each company “quite rightly has its own set of content policies.”

“This is not attempting to try to impose a straitjacket on everybody,” he said. “And in any event, no one in the industry thinks that you can deal with a whole new technological paradigm by sweeping things under the rug and trying to play whack-a-mole and finding everything that you think may mislead someone.”

Tech executives were also joined by several European and U.S. political leaders at Friday’s announcement. European Commission Vice President Vera Jourova said while such an agreement can’t be comprehensive, “it contains very impactful and positive elements.” She also urged fellow politicians to take responsibility to not use AI tools deceptively.

She stressed the seriousness of the issue, saying the “combination of AI serving the purposes of disinformation and disinformation campaigns might be the end of democracy, not only in the EU member states.”

The agreement at the German city’s annual security meeting comes as more than 50 countries are due to hold national elections in 2024. Some have already done so, including Bangladesh, Taiwan, Pakistan, and most recently Indonesia.

Attempts at AI-generated election interference have already begun, such as when AI robocalls that mimicked U.S. President Joe Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.

Just days before Slovakia’s elections in November, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but they were already widely shared as real across social media.

Politicians and campaign committees also have experimented with the technology, from using AI chatbots to communicate with voters to adding AI-generated images to ads.

Friday’s accord said in responding to AI-generated deepfakes, platforms “will pay attention to context and in particular to safeguarding educational, documentary, artistic, satirical, and political expression.”

It said the companies will focus on transparency to users about their policies on deceptive AI election content and work to educate the public about how they can avoid falling for AI fakes.

Many of the companies have previously said they’re putting safeguards on their own generative AI tools that can manipulate images and sound, while also working to identify and label AI-generated content so that social media users know if what they’re seeing is real. But most of those proposed solutions haven’t yet rolled out and the companies have faced pressure from regulators and others to do more.

That pressure is heightened in the U.S., where Congress has yet to pass laws regulating AI in politics, leaving AI companies to largely govern themselves. In the absence of federal legislation, many states are considering ways to put guardrails around the use of AI, in elections and other applications.

The Federal Communications Commission recently confirmed AI-generated audio clips in robocalls are against the law, but that doesn’t cover audio deepfakes when they circulate on social media or in campaign advertisements.

Misinformation experts warn that while AI deepfakes are especially worrisome for their potential to fly under the radar and influence voters this year, cheaper and simpler forms of misinformation remain a major threat. The accord noted this too, acknowledging that “traditional manipulations (‘cheapfakes’) can be used for similar purposes.”

Many social media companies already have policies in place to deter deceptive posts about electoral processes — AI-generated or not. For example, Meta says it removes misinformation about “the dates, locations, times, and methods for voting, voter registration, or census participation” as well as other false posts meant to interfere with someone’s civic participation.

Jeff Allen, co-founder of the Integrity Institute and a former data scientist at Facebook, said the accord seems like a “positive step” but he’d still like to see social media companies taking other basic actions to combat misinformation, such as building content recommendation systems that don’t prioritize engagement above all else.

Lisa Gilbert, executive vice president of the advocacy group Public Citizen, argued Friday that the accord is “not enough” and AI companies should “hold back technology” such as hyper-realistic text-to-video generators “until there are substantial and adequate safeguards in place to help us avert many potential problems.”

In addition to the major platforms that helped broker Friday’s agreement, other signatories include chatbot developers Anthropic and Inflection AI; voice-clone startup ElevenLabs; chip designer Arm Holdings; security companies McAfee and TrendMicro; and Stability AI, known for making the image-generator Stable Diffusion.

Notably absent from the accord is another popular AI image-generator, Midjourney. The San Francisco-based startup didn’t immediately return a request for comment Friday.

The inclusion of X — not mentioned in an earlier announcement about the pending accord — was one of the biggest surprises of Friday’s agreement. Musk sharply curtailed content-moderation teams after taking over the former Twitter and has described himself as a “free speech absolutist.”

But in a statement Friday, X CEO Linda Yaccarino said “every citizen and company has a responsibility to safeguard free and fair elections.”

“X is dedicated to playing its part, collaborating with peers to combat AI threats while also protecting free speech and maximizing transparency,” she said.
 

bnew


Joe Biden’s voice was imitated by AI in a call to New Hampshire voters discouraging them from voting in the state’s Democratic primary. Photograph: Evan Vucci/AP

New Orleans magician says he made AI Biden robocall for aide to challenger​

Paul David Carpenter says he was paid by consultant for Democrat Dean Phillips to mimic Biden’s voice in New Hampshire primary

Ramon Antonio Vargas in New Orleans

Fri 23 Feb 2024 09.33 EST

A magician in New Orleans says he was the person who used artificial intelligence to create an audio recording of Joe Biden used in an infamous robocall and that he was paid by a consultant for the president’s primary challenger, Dean Phillips.


NBC News reported Paul David Carpenter, who holds several world records and also works as a hypnotist, provided it with text messages, call logs and payment documentation to back up his claims.

Carpenter claimed he was hired by Steve Kramer, a consultant for Phillips’s campaign, to use AI to mimic Biden’s voice discouraging people from voting in New Hampshire’s 23 January primary.

“I created the audio used in the robocall [but] I did not distribute it,” Carpenter reportedly told NBC. “I was in a situation where someone offered me some money to do something and I did it.

“There was no malicious intent. I didn’t know how it was going to be distributed.”

The audio recording is currently under investigation by law enforcement officials, and prompted the US government to outlaw robocalls using AI-generated voices.

Carpenter told NBC it was “so scary” how easy it was for him to produce the fake audio, saying it took less than 20 minutes and cost him $1. In return, he was paid $150, as documented in Venmo payments from Kramer and his father, Bruce Kramer, that Carpenter reportedly supplied to NBC.
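The economics Carpenter describes are striking when worked through; a small calculation using only the figures reported above (the 20 minutes is his stated upper bound, so the hourly figure is a floor):

```python
# Carpenter's account, per the article: the fake audio cost him $1 and
# under 20 minutes to make, and he was paid $150 for it.

cost_dollars = 1
payment_dollars = 150
minutes_spent = 20  # "less than 20 minutes", so an upper bound

profit = payment_dollars - cost_dollars      # $149
hourly_rate = profit / (minutes_spent / 60)  # at least ~$447/hour

print(f"${profit} profit, at least ${hourly_rate:.0f}/hour")
```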

He also shared what he described as the original robocall audio file, which he manufactured with software from ElevenLabs, an AI firm that touts its ability to create a voice clone from existing speech samples.

NBC said Kramer, a veteran political operative, did not comment on Carpenter’s version of events and would soon publish an opinion piece that would “explain all”.

In a statement, Phillips’ campaign said it was “disgusted to learn that Mr Kramer is allegedly behind this call”.

“If it is true that Mr Kramer had any involvement in the creation of deepfake robocalls, he did so of his own volition, which had nothing to do with our campaign,” said the campaign’s press secretary, Katie Dolan.

“The fundamental notion of our campaign is the importance of competition, choice and democracy,” she added. “If the allegations are true, we absolutely denounce his actions.”

Federal Election Commission records show that in December and January, the Phillips campaign paid nearly $260,000 to Kramer, who once worked on the 2020 presidential campaign for Ye, formerly known as Kanye West.

NBC said it found no evidence to suggest the Minnesota congressman’s campaign had instructed Kramer to produce the audio or disseminate the robocall.

Carpenter describes himself as a “digital nomad artist”, and perhaps his biggest previous claim to fame was setting the world records for fastest straitjacket escape and most fork bends in under a minute.

“The only thing missing from the political circus is a magician, and here I am,” Carpenter joked.

Carpenter – who didn’t immediately respond to a request for comment from the Guardian – has no fixed address but lists himself as a resident of New Orleans. Videos and images online show him in the streets of the city’s famed French Quarter neighborhood.

By 6 February, New Hampshire authorities had issued cease-and-desist orders and subpoenas to two Texas companies believed to be linked to the robocall – Life Corporation, which investigators alleged was the robocall’s source, and Lingo Telecom, which they said transmitted it.

After news of the robocall became known, the Federal Communications Commission ruled unanimously to either fine companies using AI voices in their calls or block any service providers that carry them.

Phillips’ campaign has done little to affect Biden’s status as the presumptive Democratic nominee for November’s presidential election. On Thursday, the congressman floated the idea of running for the White House on a “unity ticket” with Nikki Haley, who was on track to lose the Republican primary to Biden’s presidential predecessor Donald Trump.

Edward Helmore contributed reporting
 

bnew


Microsoft's new AI tool is a deepfake nightmare machine​

By Daniel John

published yesterday

VASA-1 can create videos from a single image.

Faces generated with Microsoft VASA-1

(Image credit: Microsoft)

It almost seems quaint to remember when all AI could do was generate images from a text prompt. Over the last couple of years, generative AI has become more and more powerful, making the jump from photos to videos with the advent of tools like Sora. And now Microsoft has introduced a powerful tool that might be the most impressive (and terrifying) we've seen yet.

VASA-1 is an AI image-to-video model that can generate videos from just one photo and a speech audio clip. Videos feature synchronised facial and lip movements, as well as "a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness."


On its research website, Microsoft explains how the tech works. "The core innovations include a holistic facial dynamics and head movement generation model that works in a face latent space, and the development of such an expressive and disentangled face latent space using videos. Through extensive experiments including evaluation on a set of new metrics, we show that our method significantly outperforms previous methods along various dimensions comprehensively. Our method not only delivers high video quality with realistic facial and head dynamics but also supports the online generation of 512x512 videos at up to 40 FPS with negligible starting latency. It paves the way for real-time engagements with lifelike avatars that emulate human conversational behaviours."
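The stated output spec (512x512 frames at up to 40 FPS with negligible starting latency) implies a concrete generation budget; a back-of-the-envelope check using only those two published numbers:

```python
# Throughput implied by VASA-1's stated output, per Microsoft's research
# page: 512x512 video at up to 40 FPS. The clip length below is just an
# illustrative example, not a figure from Microsoft.

WIDTH, HEIGHT, FPS = 512, 512, 40

def frames_needed(clip_seconds: float, fps: int = FPS) -> int:
    """Frames the model must generate for a clip of the given length."""
    return int(clip_seconds * fps)

def pixels_per_second(fps: int = FPS) -> int:
    """Raw pixel throughput implied by the stated resolution and rate."""
    return WIDTH * HEIGHT * fps

print(frames_needed(10))    # 400 frames for a 10-second clip
print(pixels_per_second())  # 10,485,760 pixels generated per second
```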

"Microsoft just dropped VASA-1. This AI can make a single image sing and talk from an audio reference expressively. Similar to EMO from Alibaba. 10 wild examples: 1. Mona Lisa rapping Paparazzi" pic.twitter.com/LSGF3mMVnD — April 18, 2024

In other words, it's capable of creating deepfake videos based on a single image. It's notable that Microsoft insists the tool is a "research demonstration and there's no product or API release plan." Seemingly in an attempt to allay fears, the company is suggesting that VASA-1 won't be making its way into users' hands any time soon.

From Sora AI to Will Smith eating spaghetti, we've seen all manner of weird and wonderful (but mostly weird) AI generated video content, and it's only going to get more realistic. Just look how much generative AI has improved in one year.
 

the cac mamba

Veteran
Joined
May 21, 2012
Messages
100,346
Reputation
13,416
Daps
293,219
Reppin
NULL
this election is a national disgrace :yeshrug:

trump is a corrupt, criminal, treasonous fakkit who should die in guantanamo bay. and yet i have to vote for biden, and actually pretend that he's fit to be president until january of 2029 :snoop:

biden could drop dead, tomorrow, and it would be perfectly normal. open any obituary page of any newspaper in the country :mjlol:
 

bnew


McConnell opposes bill to ban use of deceptive AI to influence elections​

BY ALEXANDER BOLTON - 05/15/24 11:41 AM ET

Minority Leader Mitch McConnell (R-Ky.)
Greg Nash

Minority Leader Mitch McConnell (R-Ky.) addresses reporters after the weekly policy luncheon on Tuesday, October 31, 2023.

Senate Republican Leader Mitch McConnell (R-Ky.) announced Wednesday he will oppose bipartisan legislation coming out of the Senate Rules Committee that would ban the use of artificial intelligence (AI) to create deceptive content about federal candidates to influence elections.

McConnell, a longtime opponent of campaign finance restrictions, warned that the bills coming out of the Rules Committee “would tamper” with what he called the “well-developed legal regime” for taking down false ads and “create new definitions that could reach well beyond deepfakes.”

He argued that if his colleagues on the Rules panel viewed a dozen political ads, they “would differ on which ones were intentionally misleading.”

“The core question we’re facing is whether or not politicians should have another tool to take down speech they don’t like,” he said. “But if the amendment before us extends this authority to unpaid political speech, then we’re also talking about an extension of speech regulation that has not happened in the 50 years of our modern campaign finance regime.”

The Protect Elections from Deceptive AI Act, which would ban the use of AI to create misleading content, is backed by Senate Rules Committee Chair Amy Klobuchar (D-Minn.) and Sens. Josh Hawley (R-Mo.), Chris Coons (D-Del.), Susan Collins (R-Maine), Michael Bennet (D-Colo.) and Pete Ricketts (R-Neb.).

But McConnell, citing testimony from Sen. Bill Hagerty (R-Tenn.), said the definitions in the bills to crack down on deepfakes are “nebulous, at best, and overly censorious if they’re applied most cynically.”

“They could wind up barring all manner of photos and videos as long as the ill-defined ‘reasonable person’ could deduce an alternative meaning from the content,” he said.

The Rules Committee also marked up Wednesday the AI Transparency in Elections Act, which requires disclaimers on political ads with images, audio or video generated by AI, and the Preparing Election Administrators for AI Act, which requires federal agencies to develop voluntary guidelines for election offices.

McConnell said the proposal to require new disclaimers could be used to regulate content, which he opposes.

“I also have concerns about the disclaimer provisions and their application. Our political disclaimer regime has for its entire history served a singular purpose: to help voters understand who is paying for or endorsing an advertisement. It has never been applied to political advertisements as a content regulation tool,” he said.

He urged his colleagues to spend more time on the issue to reach consensus and announced he would oppose the AI-related bills moving forward.

“Until Congress reaches a consensus understanding of what AI is acceptable and what is not, leading with our chin is not going to cut it in the domain of political speech. So I will oppose S. 2770 or S. 3875 at this time. And I would urge my colleagues to do the same,” he said.

All three bills cleared the Rules Committee.
 

bnew


Democratic consultant indicted for Biden deepfake that told people not to vote​


Steven Kramer charged with voter suppression and faces possible $6 million fine.​

JON BRODKIN - 5/23/2024, 3:17 PM

President Joe Biden at a Rose Garden event at the White House on May 1, 2023, in Washington, DC. (Getty Images | Alex Wong)


A Democratic consultant was indicted on charges of voter suppression and impersonation of a candidate after admitting that he commissioned a robocall that used artificial intelligence to imitate President Joe Biden's voice. The political consultant, Steven Kramer, is also facing a $6 million fine proposed by the Federal Communications Commission.

The fake Biden robocall urged Democrats not to vote and was placed to New Hampshire residents before the state's presidential primary in January. Kramer, who was working for a candidate running against Biden, acknowledged that he was responsible for the robocall in February.

Kramer, a 54-year-old from New Orleans, "has been charged with 13 felony counts of voter suppression... and 13 misdemeanor counts of impersonation of a candidate," New Hampshire Attorney General John Formella announced today. "The charges are spread across four counties based on the residence of thirteen New Hampshire residents who received the Biden robocalls."

Formella said his office is still investigating the incident. "New Hampshire remains committed to ensuring that our elections remain free from unlawful interference and our investigation into this matter remains ongoing," he said.

Separately, the FCC today proposed a $6 million fine against Kramer in a Notice of Apparent Liability for Forfeiture. Kramer will be given a chance to respond before the FCC makes a final determination on the fine.

"Political consultant Steve Kramer was responsible for the calls," which "apparently violated the Truth in Caller ID Act by maliciously spoofing the number of a prominent local political consultant," the FCC said. "The robocalls, made two days prior to the election, used a deepfake of President Biden's voice and encouraged voters to not vote in the primary but rather to 'save your vote for the November election.'"

Kramer defended fake Biden call​

Kramer defended his actions after his role was revealed by an NBC News article. "Kramer claimed he planned the fake robocall from the start as an act of civil disobedience to call attention to the dangers of AI in politics," NBC News wrote in February after talking to Kramer. "He compared himself to American Revolutionary heroes Paul Revere and Thomas Paine. He said more enforcement is necessary to stop people like him from doing what he did."

"This is a way for me to make a difference, and I have," Kramer told NBC News in an interview. "For $500, I got about $5 million worth of action, whether that be media attention or regulatory action."

Kramer was working as a consultant for Democrat Dean Phillips, a US representative from Minnesota who ran against Biden in the New Hampshire Democratic primary. Phillips suspended his long-shot presidential campaign in March.

"Phillips and his campaign have denounced the robocall, saying they had no knowledge of Kramer's involvement and would have immediately terminated him if they had known," NBC News wrote. Kramer also said that Phillips had nothing to do with the robocall.

In early February, the New Hampshire AG's office said the robocall was traced to a Texas company called Life Corporation and a person named Walter Monk. But the AG's office said it was "continuing to investigate whether Life Corporation worked with or at the direction of any other persons or entities."

Texas-based phone company Lingo Telecom was found to have transmitted the calls and is now facing its own penalty: the FCC today proposed a $2 million fine against the company for "incorrectly labeling [the calls] with the highest level of caller ID attestation and making it less likely that other providers could detect the calls as potentially spoofed."

The FCC alleged that Lingo Telecom violated rules related to the STIR/SHAKEN Caller ID authentication system. "Lingo Telecom failed to follow 'Know Your Customer' principles by applying the highest level attestation—signifying trust in the caller ID information—to apparently illegally spoofed calls without making any effort to verify the accuracy of the information," the FCC said.
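To make the attestation angle concrete, here is a minimal, purely illustrative Python sketch (not a real telecom API) of the three STIR/SHAKEN attestation levels the FCC is referring to, and how a downstream provider might translate them into a coarse spoofing-risk label. The risk mapping is a hypothetical simplification; real carriers combine attestation with call analytics and traceback data.

```python
# Illustrative sketch of STIR/SHAKEN attestation levels.
# Under the framework, the originating provider signs each call with
# one of three attestation levels; downstream providers use that level
# when judging how much to trust the displayed caller ID.

ATTESTATION_LEVELS = {
    "A": "Full: provider verified the customer and their right to use the number",
    "B": "Partial: provider knows the customer, but not the number's ownership",
    "C": "Gateway: provider only knows where it received the call from",
}


def spoofing_risk(attestation: str) -> str:
    """Map an attestation level to a coarse spoofing-risk label.

    The thresholds are hypothetical, for illustration only.
    """
    risk = {"A": "low", "B": "medium", "C": "high"}
    return risk.get(attestation.upper(), "unknown")


print(spoofing_risk("A"))  # low
print(spoofing_risk("C"))  # high
```

In these terms, the FCC's allegation against Lingo Telecom is essentially that "A"-level (full) attestation was applied to spoofed calls that had never been verified at all.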
 