Top AI expert ‘completely terrified’ of 2024 election, shaping up to be ‘tsunami of misinformation’

bnew


Top AI expert ‘completely terrified’ of 2024 election, shaping up to be ‘tsunami of misinformation’​

BY ALI SWENSON, CHRISTINE FERNANDO AND THE ASSOCIATED PRESS

December 28, 2023 at 10:22 AM EST


Oren Etzioni poses for photos at the Allen Institute for Artificial Intelligence where he serves as advisor & board member, Friday, Dec. 8, 2023, in Seattle.

AP PHOTO/JOHN FROSCHAUER

Nearly three years after rioters stormed the U.S. Capitol, the false election conspiracy theories that drove the violent attack remain prevalent on social media and cable news: suitcases filled with ballots, late-night ballot dumps, dead people voting.

Experts warn it will likely be worse in the coming presidential election contest. The safeguards that attempted to counter the bogus claims the last time are eroding, while the tools and systems that create and spread them are only getting stronger.

Many Americans, egged on by former President Donald Trump, have continued to push the unsupported idea that elections throughout the U.S. can’t be trusted. A majority of Republicans (57%) believe Democrat Joe Biden was not legitimately elected president.

Meanwhile, generative artificial intelligence tools have made it far cheaper and easier to spread the kind of misinformation that can mislead voters and potentially influence elections. And social media companies that once invested heavily in correcting the record have shifted their priorities.

“I expect a tsunami of misinformation,” said Oren Etzioni, an artificial intelligence expert and professor emeritus at the University of Washington. “I can’t prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified.”



AI DEEPFAKES GO MAINSTREAM​

Manipulated images and videos surrounding elections are nothing new, but 2024 will be the first U.S. presidential election in which sophisticated AI tools that can produce convincing fakes in seconds are just a few clicks away.

The fabricated images, videos and audio clips known as deepfakes have started making their way into experimental presidential campaign ads. More sinister versions could easily spread without labels on social media and fool people days before an election, Etzioni said.

“You could see a political candidate like President Biden being rushed to a hospital,” he said. “You could see a candidate saying things that he or she never actually said. You could see a run on the banks. You could see bombings and violence that never occurred.”

High-tech fakes already have affected elections around the globe, said Larry Norden, senior director of the elections and government program at the Brennan Center for Justice. Just days before Slovakia’s recent elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false, but they were shared as real across social media regardless.

These tools might also be used to target specific communities and hone misleading messages about voting. That could look like persuasive text messages, false announcements about voting processes shared in different languages on WhatsApp, or bogus websites mocked up to look like official government ones in your area, experts said.

Faced with content that is made to look and sound real, “everything that we’ve been wired to do through evolution is going to come into play to have us believe in the fabrication rather than the actual reality,” said misinformation scholar Kathleen Hall Jamieson, director of the Annenberg Public Policy Center at the University of Pennsylvania.

Republicans and Democrats in Congress and the Federal Election Commission are exploring steps to regulate the technology, but they haven’t finalized any rules or legislation. That’s left states to enact the only restrictions so far on political AI deepfakes.

A handful of states have passed laws requiring deepfakes to be labeled or banning those that misrepresent candidates. Some social media companies, including YouTube and Meta, which owns Facebook and Instagram, have introduced AI labeling policies. It remains to be seen whether they will be able to consistently catch violators.




SOCIAL MEDIA GUARDRAILS FADE​

It was just over a year ago that Elon Musk bought Twitter and began firing its executives, dismantling some of its core features and reshaping the social media platform into what’s now known as X.

Since then, he has upended its verification system, leaving public officials vulnerable to impersonators. He has gutted the teams that once fought misinformation on the platform, leaving the community of users to moderate itself. And he has restored the accounts of conspiracy theorists and extremists who were previously banned.

The changes have been applauded by many conservatives who say Twitter’s previous moderation attempts amounted to censorship of their views. But pro-democracy advocates argue the takeover has shifted what once was a flawed but useful resource for news and election information into a largely unregulated echo chamber that amplifies hate speech and misinformation.

Twitter used to be one of the “most responsible” platforms, showing a willingness to test features that might reduce misinformation even at the expense of engagement, said Jesse Lehrich, co-founder of Accountable Tech, a nonprofit watchdog group.
 

“Obviously now they’re on the exact other end of the spectrum,” he said, adding that he believes the company’s changes have given other platforms cover to relax their own policies. X didn’t answer emailed questions from The Associated Press, only sending an automated response.

In the run-up to 2024, X, Meta and YouTube have together removed 17 policies that protected against hate and misinformation, according to a report from Free Press, a nonprofit that advocates for civil rights in tech and media.

In June, YouTube announced that while it would still regulate content that misleads about current or upcoming elections, it would stop removing content that falsely claims the 2020 election or other previous U.S. elections were marred by “widespread fraud, errors or glitches.” The platform said the policy was an attempt to protect the ability to “openly debate political ideas, even those that are controversial or based on disproven assumptions.”

Lehrich said even if tech companies want to steer clear of removing misleading content, “there are plenty of content-neutral ways” platforms can reduce the spread of disinformation, from labeling months-old articles to making it more difficult to share content without reviewing it first.

X, Meta and YouTube also have laid off thousands of employees and contractors since 2020, some of whom were content moderators.

The shrinking of such teams, which many blame on political pressure, “sets the stage for things to be worse in 2024 than in 2020,” said Kate Starbird, a misinformation expert at the University of Washington.

Meta explains on its website that it has some 40,000 people devoted to safety and security and that it maintains “the largest independent fact-checking network of any platform.” It also frequently takes down networks of fake social media accounts that aim to sow discord and distrust.

“No tech company does more or invests more to protect elections online than Meta – not just during election periods but at all times,” the posting says.

Ivy Choi, a YouTube spokesperson, said the platform is “heavily invested” in connecting people to high-quality content on YouTube, including for elections. She pointed to the platform’s recommendation and information panels, which provide users with reliable election news, and said the platform removes content that misleads voters on how to vote or encourages interference in the democratic process.

The rise of TikTok and other less regulated platforms, such as Telegram, Truth Social and Gab, also has created more information silos online where baseless claims can spread. Some apps that are particularly popular among communities of color and immigrants, such as WhatsApp and WeChat, rely on private chats, making it hard for outside groups to see the misinformation that may spread.

“I’m worried that in 2024, we’re going to see similar recycled, ingrained false narratives but more sophisticated tactics,” said Roberta Braga, founder and executive director of the Digital Democracy Institute of the Americas. “But on the positive side, I am hopeful there is more social resilience to those things.”


THE TRUMP FACTOR​

Trump’s front-runner status in the Republican presidential primary is top of mind for misinformation researchers who worry that it will exacerbate election misinformation and potentially lead to election vigilantism or violence.

The former president still falsely claims to have won the 2020 election.

“Donald Trump has clearly embraced and fanned the flames of false claims about election fraud in the past,” Starbird said. “We can expect that he may continue to use that to motivate his base.”

Without evidence, Trump has already primed his supporters to expect fraud in the 2024 election, urging them to intervene to “guard the vote” to prevent vote rigging in diverse Democratic cities. Trump has a long history of suggesting elections are rigged if he doesn’t win and did so before voting in 2016 and 2020.

That continued wearing away of voter trust in democracy can lead to violence, said Bret Schafer, a senior fellow at the nonpartisan Alliance for Securing Democracy, which tracks misinformation.

“If people don’t ultimately trust information related to an election, democracy just stops working,” he said. “If a misinformation or disinformation campaign is effective enough that a large enough percentage of the American population does not believe that the results reflect what actually happened, then Jan. 6 will probably look like a warm-up act.”



ELECTION OFFICIALS RESPOND​

Election officials have spent the years since 2020 preparing for the expected resurgence of election denial narratives. They’ve dispatched teams to explain voting processes, hired outside groups to monitor misinformation as it emerges and beefed up physical protections at vote-counting centers.

In Colorado, Secretary of State Jena Griswold said informative paid social media and TV campaigns that humanize election workers have helped inoculate voters against misinformation.

“This is an uphill battle, but we have to be proactive,” she said. “Misinformation is one of the biggest threats to American democracy we see today.”

Minnesota Secretary of State Steve Simon’s office is spearheading #TrustedInfo2024, a new online public education effort by the National Association of Secretaries of State to promote election officials as a trusted source of election information in 2024.

His office also is planning meetings with county and city election officials and will update a “Fact and Fiction” information page on its website as false claims emerge. A new law in Minnesota will protect election workers from threats and harassment, bar people from knowingly distributing misinformation ahead of elections and criminalize the nonconsensual sharing of deepfake images meant to hurt a political candidate or influence an election.

“We hope for the best but plan for the worst through these layers of protections,” Simon said.

In a rural Wisconsin county north of Green Bay, Oconto County Clerk Kim Pytleski has traveled the region giving talks and presentations to small groups about voting and elections to boost voters’ trust. The county also offers equipment tests in public so residents can observe the process.

“Being able to talk directly with your elections officials makes all the difference,” she said. “Being able to see that there are real people behind these processes who are committed to their jobs and want to do good work helps people understand we are here to serve them.”
 

Professor Emeritus

hl allstars will disagree


Based on what? I've been warning all year about how AI is going to lead to a massive influx of misinformation from the right.

You should take your own advice from our last two conversations:

Apparently the choice of attacking me personally instead of discussing the subject was more pressing for the people who responded.

That's really disappointing.
 

88m3

Based on what? I've been warning all year about how AI is going to lead to a massive influx of misinformation from the right.

You should take your own advice from our last two conversations:

I'm happy for you. Hopefully they don't @ you with snake emojis
 

bnew




This Brazilian fact-checking org uses a ChatGPT-esque bot to answer reader questions​

“Instead of giving a list of URLs that the user can access — which requires more work for the user — we can answer the question they asked.”


By HANAA' TAMEEZ @hanaatameez Jan. 9, 2024, 12:33 p.m.

In the 13 months since OpenAI launched ChatGPT, news organizations around the world have been experimenting with the technology in hopes of improving their news products and production processes — covering public meetings, strengthening investigative capabilities, powering translations.

For all its flaws, including the ability to produce false information at scale and generate plain old bad writing, newsrooms are finding ways to make the technology work for them. In Brazil, one outlet has been integrating the OpenAI API with its existing journalism to produce a conversational Q&A chatbot. Meet FátimaGPT.

FátimaGPT is the latest iteration of Fátima, a fact-checking bot that Aos Fatos first launched in 2019 on WhatsApp, Telegram, and Twitter. Aos Fatos (“The Facts” in Portuguese) is a Brazilian investigative news outlet that focuses on fact-checking and disinformation.


Bruno Fávero, Aos Fatos’s director of innovation, said that although the chatbot had over 75,000 users across platforms, it was limited in function. When users asked it a question, the bot would search Aos Fatos’s archives and use keyword comparison to return a (hopefully) relevant URL.

Fávero said that when OpenAI launched ChatGPT, he and his team started thinking about how they could use large language models in their work. “Fátima was the obvious candidate for that,” he said.

“AI is taking the world by storm, and it’s important for journalists to understand the ways it can be harmful and to try to educate the public on how bad actors may misuse AI,” Fávero said. “I think it’s also important for us to explore how it can be used to create tools that are useful for the public and that helps bring them reliable information.”

This past November, Aos Fatos launched the upgraded FátimaGPT, which pairs a large language model with Aos Fatos’s archives to give users a clear answer to their question along with a list of source URLs. It’s available on WhatsApp, Telegram, and the web. In its first few weeks of beta, Fávero said, 94% of the answers analyzed were “adequate,” while 6% were “insufficient,” meaning the answer was in the database but FátimaGPT didn’t provide it. There were no factual mistakes in any of the results, he said.

I asked FátimaGPT through WhatsApp if COVID-19 vaccines are safe and it returned a thorough answer saying yes, along with a source list. On the other hand, I asked FátimaGPT for the lyrics to a Brazilian song I like and it answered, “Wow, I still don’t know how to answer that. I will take note of your message and forward it internally as I am still learning.”


Aos Fatos was concerned at first about implementing this sort of technology, particularly because of “hallucinations,” where ChatGPT presents false information as true. To guard against them, Aos Fatos uses a technique called retrieval-augmented generation, which links the large language model to a specific, reliable database to pull information from. In this case, the database is all of Aos Fatos’s journalism.

“If a user asks us, for instance, if elections in Brazil are safe and reliable, then we do a search in our database of fact-checks and articles,” Fávero explained. “Then we extract the most relevant paragraphs that may help answer this question. We put that in a prompt as context to the OpenAI API and [it] almost always gives a relevant answer. Instead of giving a list of URLs that the user can access — which requires more work for the user — we can answer the question they asked.”
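The flow Fávero describes is a textbook retrieval-augmented generation loop: retrieve the most relevant paragraphs, pack them into the prompt as context, and ask the model to answer from that context alone. Here is a minimal sketch of the idea in Python, using the current OpenAI SDK; the toy archive, the naive keyword scorer, and the model name are illustrative assumptions, not Aos Fatos’s actual code:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ARCHIVE = [
    # (url, publication date, paragraph): hypothetical stand-ins for real fact-checks
    ("https://example.org/fact-check-elections", "2022-10-02",
     "Brazil's electronic voting machines are audited before, during and after elections."),
    ("https://example.org/fact-check-vaccines", "2023-03-14",
     "COVID-19 vaccines approved in Brazil passed large-scale safety trials."),
]

def search_archive(question: str, k: int = 2) -> list:
    """Naive retrieval: rank archive paragraphs by keyword overlap with
    the question. A production system would use embeddings or a search
    index instead."""
    words = set(question.lower().split())
    ranked = sorted(ARCHIVE,
                    key=lambda item: len(words & set(item[2].lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(question: str) -> str:
    """Build a prompt from the retrieved paragraphs and have the model
    answer only from that context, citing URLs and dates."""
    context = "\n\n".join(f"[{url}, published {date}] {text}"
                          for url, date, text in search_archive(question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; the article does not name a model
        messages=[
            {"role": "system",
             "content": "Answer only from the context below. Cite the source "
                        "URLs and mention publication dates so the reader can "
                        "tell if the information may be outdated.\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("Are elections in Brazil safe and reliable?"))
```

Grounding the prompt in retrieved paragraphs is what lets a bot like this decline to answer questions that fall outside its archive, as FátimaGPT did with the song lyrics, instead of guessing.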

Aos Fatos has been experimenting with AI for years. Fátima is an audience-facing product, but Aos Fatos has also used AI to build Escriba, a subscription audio transcription tool for journalists. Fávero said the idea came from the fact that journalists in his own newsroom would manually transcribe their interviews because it was hard to find a tool that transcribed Portuguese well. In 2019, Aos Fatos also launched Radar, a tool that uses algorithms to monitor disinformation campaigns across social media platforms in real time.

Other newsrooms in Brazil are also using artificial intelligence in interesting ways. In October 2023, investigative outlet Agência Pública started using text-to-speech technology to read stories aloud to users. It uses AI to develop the story narrations in the voice of journalist Mariana Simões, who has hosted podcasts for Agência Pública. Núcleo, an investigative news outlet that covers the impacts of social media and AI, developed Legislatech, a tool to monitor and understand government documents.

The use of artificial intelligence in Brazilian newsrooms is particularly interesting as trust in news in the country continues to decline. A 2023 study from KPMG found that 86% of Brazilians believe artificial intelligence is reliable, and 56% are willing to trust the technology.

Fávero said that one of the interesting trends in how people are using FátimaGPT is trying to test its potential biases. Users will often ask a question, for example, about the current president Luiz Inácio Lula da Silva, and then ask the same question about his political rival, former president Jair Bolsonaro. Or, users will ask one question about Israel and then ask the same question about Palestine to look for bias. The next step is developing FátimaGPT to accept YouTube links so that it can extract a video’s subtitles and fact-check the content against Aos Fatos’s journalism.
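The planned subtitle step could look something like the sketch below, which assumes the third-party youtube-transcript-api package; the article does not say which tools Aos Fatos will actually use.

```python
# Assumption: the youtube-transcript-api package (pip install youtube-transcript-api).
from youtube_transcript_api import YouTubeTranscriptApi

def transcript_text(video_id: str) -> str:
    """Fetch a video's Portuguese subtitles and join them into one string."""
    segments = YouTubeTranscriptApi.get_transcript(video_id, languages=["pt"])
    return " ".join(segment["text"] for segment in segments)
```

The resulting text could then be checked claim by claim against the outlet’s archive through the same retrieval-augmented flow described above.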

FátimaGPT’s results can only be as good as Aos Fatos’s existing coverage, which can be a challenge when users ask about a topic that hasn’t gotten much attention. To get around that, Fávero’s team programmed FátimaGPT to provide publication dates for the information it shares, so users know when the information they’re getting may be outdated.

“If you ask something about, [for instance], how many people were killed in the most recent Israeli-Palestinian conflict, it’s something that we were covering, but we’re [not] covering it that frequently,” Fávero said. “We try to compensate that by training [FátimaGPT] to be transparent with the user and providing as much context as possible.”
 

bnew


KATE KNIBBS

JAN 24, 2024 7:00 AM

Most Top News Sites Block AI Bots. Right-Wing Media Welcomes Them​

Nearly 90 percent of top news outlets like The New York Times now block AI data collection bots from OpenAI and others. Leading right-wing outlets like NewsMax and Breitbart mostly permit them.




As media companies haggle over licensing deals with artificial intelligence powerhouses like OpenAI that are hungry for training data, they’re also throwing up a digital blockade. New data shows that over 88 percent of top-ranked news outlets in the US now block web crawlers used by artificial intelligence companies to collect training data for chatbots and other AI projects. One sector of the news business is a glaring outlier, though: right-wing media lags far behind its liberal counterparts when it comes to bot-blocking.

Data collected in mid-January on about 40 top news sites by Ontario-based AI detection startup Originality AI shows that almost all of them block AI web crawlers, including newspapers like The New York Times, The Washington Post, and The Guardian, general-interest magazines like The Atlantic, and special-interest sites like Bleacher Report. OpenAI’s GPTBot is the most widely blocked crawler. But none of the top right-wing news outlets surveyed, including Fox News, the Daily Caller, and Breitbart, block any of the most prominent AI web scrapers, which also include Google’s AI data collection bot. Pundit Bari Weiss’ new website The Free Press also does not block AI scraping bots.

Most of the right-wing sites didn’t respond to requests for comment on their AI crawler strategy, but researchers contacted by WIRED had a few different guesses to explain the discrepancy. The most intriguing: Could this be a strategy to combat perceived political bias? “AI models reflect the biases of their training data,” says Originality AI founder and CEO Jon Gillham. “If the entire left-leaning side is blocking, you could say, come on over here and eat up all of our right-leaning content.”

Originality tallied which sites block GPTBot and other AI scrapers by surveying the robots.txt files that websites use to tell automated web crawlers which pages they may visit and which are off-limits. The startup used Internet Archive data to establish when each website started blocking AI crawlers; many did so soon after OpenAI announced in August 2023 that its crawler would respect robots.txt flags. Originality’s initial analysis focused on the top news sites in the US by estimated web traffic. Only one of those sites had a significantly right-wing perspective, so Originality also looked at nine of the most well-known right-leaning outlets. None of the nine blocked GPTBot.
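For readers curious about the mechanics: a site opts a crawler out by naming its user-agent token in robots.txt, and a survey like Originality’s can be reproduced with Python’s standard library alone. A minimal sketch, with an illustrative site list (GPTBot, Google-Extended, and CCBot are the published tokens for OpenAI, Google’s AI training crawler, and Common Crawl):

```python
# Blocking OpenAI's crawler takes two lines in a site's robots.txt:
#   User-agent: GPTBot
#   Disallow: /
from urllib import robotparser

AI_BOTS = ["GPTBot", "Google-Extended", "CCBot"]  # OpenAI, Google AI, Common Crawl
SITES = ["https://www.nytimes.com", "https://www.foxnews.com"]  # examples only

for site in SITES:
    parser = robotparser.RobotFileParser(site + "/robots.txt")
    parser.read()  # fetch and parse the live robots.txt file
    for bot in AI_BOTS:
        status = "allowed" if parser.can_fetch(bot, site + "/") else "blocked"
        print(f"{site}: {bot} {status}")
```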

Bot Biases

Conservative leaders in the US (and also Elon Musk) have expressed concern that ChatGPT and other leading AI tools exhibit liberal or left-leaning political biases. At a recent hearing on AI, Senator Marsha Blackburn recited an AI-generated poem praising President Biden as evidence, claiming that generating a similar ode to Trump was impossible with ChatGPT. Right-leaning outlets might see their ideological foes’ decisions to block AI web crawlers as a unique opportunity to redress the balance.

David Rozado, a data scientist based in New Zealand who developed an AI model called RightWingGPT to explore bias he perceived in ChatGPT, says that’s a plausible-sounding strategy. “From a technical point of view, yes, a media company allowing its content to be included in AI training data should have some impact on the model parameters,” he says.

However, Jeremy Baum, an AI ethics researcher at UCLA, says he’s skeptical that right-wing sites declining to block AI scraping would have a measurable effect on the outputs of finished AI systems such as chatbots. That’s in part because of the sheer volume of older material AI companies have already collected from mainstream news outlets before they started blocking AI crawlers, and also because AI companies tend to hire liberal-leaning employees.

“A process called reinforcement learning from human feedback is used right now in every state-of-the-art model,” to fine-tune its responses, Baum says. Most AI companies aim to create systems that appear neutral. If the humans steering the AI see an uptick of right-wing content but judge it to be unsafe or wrong, they could undo any attempt to feed the machine a certain perspective.

OpenAI spokesperson Kayla Wood says that in pursuit of AI models that “deeply represent all cultures, industries, ideologies, and languages” the company uses broad collections of training data. “Any one sector—including news—and any single news site is a tiny slice of the overall training data, and does not have a measurable effect on the model’s intended learning and output,” she says.

Rights Fights

The split in which news sites block AI crawlers could also reflect an ideological divide on copyright. The New York Times is currently suing OpenAI for copyright infringement, arguing that the AI upstart’s data collection is illegal. Other leaders in mainstream media also view this scraping as theft. Condé Nast CEO Roger Lynch recently said at a Senate hearing that many AI tools have been built with “stolen goods.” (WIRED is owned by Condé Nast.) Right-wing media bosses have been largely absent from the debate. Perhaps they quietly allow scraping because they endorse the argument that harvesting content to build AI tools is protected by the fair use doctrine.

For a couple of the nine right-wing outlets WIRED contacted about why they permit AI scrapers, the explanation appears less ideological. The Washington Examiner did not respond to questions about its intentions but began blocking OpenAI’s GPTBot within 48 hours of WIRED’s request, suggesting it may not have previously known about or prioritized the option to block web crawlers.

Meanwhile, the Daily Caller admitted that its permissiveness toward AI crawlers had been a simple mistake. “We do not endorse bots stealing our property. This must have been an oversight, but it's being fixed now,” says Daily Caller cofounder and publisher Neil Patel.

Right-wing media is influential, and notably savvy at leveraging social media platforms like Facebook to share articles. But outlets like the Washington Examiner and the Daily Caller are small and lean compared to establishment media behemoths like The New York Times, which have extensive technical teams.

Data journalist Ben Welsh keeps a running tally of news websites blocking AI crawlers from OpenAI, Google, and the nonprofit Common Crawl project, whose data is widely used in AI. His tally shows that approximately 53 percent of the 1,156 media publishers surveyed block one of those three bots, a much lower share than among the top-ranked sites Originality AI examined. His sample also includes smaller and less popular news sites, suggesting that outlets with larger staffs and higher traffic are more likely to block AI bots, perhaps because of better resourcing or technical knowledge.

At least one right-leaning news site is considering how it might leverage the way its mainstream competitors are trying to stonewall AI projects to counter perceived political biases. “Our legal terms prohibit scraping, and we are exploring new tools to protect our IP. That said, we are also exploring ways to help ensure AI doesn’t end up with all of the same biases as the establishment press,” Daily Wire spokesperson Jen Smith says. As of today, GPTBot and other AI bots were still free to scrape content from the Daily Wire.
 

bnew


OpenAI suspends bot developer for presidential hopeful Dean Phillips​

It’s the ChatGPT maker’s first known action against the use of its technology in a political campaign​

By Elizabeth Dwoskin

Updated January 22, 2024 at 6:11 p.m. EST|Published January 20, 2024 at 5:33 p.m. EST



Rep. Dean Phillips (D-Minn.), right, on Thursday in Hanover, N.H. (Matt McClain/The Washington Post)


The artificial intelligence company OpenAI banned the developer of a bot mimicking long-shot Democratic presidential hopeful Rep. Dean Phillips, the first action the maker of ChatGPT has taken in response to what it sees as a misuse of its AI tools in a political campaign.

Dean.Bot was the brainchild of Silicon Valley entrepreneurs Matt Krisiloff and Jed Somers, who had started a super PAC supporting Phillips (Minn.) ahead of the New Hampshire primary on Tuesday. The PAC had received $1 million from hedge fund manager Bill Ackman, the billionaire activist who led the charge to oust Harvard University president Claudine Gay.

The bot was powered by OpenAI’s ChatGPT conversational software, which the company has made available to outside developers.

The super PAC, called We Deserve Better, had contracted with AI start-up Delphi to build the bot. OpenAI suspended Delphi’s account late Friday in response to a Washington Post story on the super PAC, noting that OpenAI’s rules ban the use of its technology in political campaigns. Delphi took down Dean.Bot after the account suspension.

“Anyone who builds with our tools must follow our usage policies,” OpenAI spokeswoman Lindsey Held said in a statement. “We recently removed a developer account that was knowingly violating our API usage policies which disallow political campaigning, or impersonating an individual without consent.”

Delphi co-founder Dara Ladjevardian told The Post on Monday that the company “incorrectly” believed that OpenAI’s terms of service would let “a political action committee that supports Dean Phillips create a clone of him using our platform.”

The start-up “did not understand that … [the super PAC] may not and did not coordinate with or seek permission from candidates they are supporting,” Ladjevardian said, adding that he had refunded the super PAC and updated the company’s terms of service to ban engagement with political campaigns.

Dean.Bot, which could converse with voters in real time through a website, was an early use of an emerging technology that researchers have said could cause significant harm to elections.

The bot included a disclaimer explaining that it was an AI tool and not the real Dean Phillips, and required that voters consent to its use. But researchers told The Post that such technologies could lull people into accepting a dangerous tool, even when disclaimers are in place.

Proponents, including We Deserve Better, argue that the bots, when used appropriately, can educate voters by giving them an entertaining way to learn more about a candidate.

Without disclaimers, experts have said, the technologies could enable mass robocalls to voters who think they’re talking to actual candidates or supporters. AI systems can also produce disinformation in ads or content, such as fake websites, at scale.



After The Post asked We Deserve Better about OpenAI’s prohibitions on Thursday, Krisiloff said he had asked Delphi to remove ChatGPT from the bot and instead rely on the open-source conversational technologies that had also gone into the bot’s design.

The bot remained available to the public without ChatGPT until late Friday, when Delphi took the bot down in response to the suspension, Krisiloff said.

Krisiloff did not have further comment. Delphi did not immediately respond to a request for comment.

Krisiloff is a former chief of staff to OpenAI CEO Sam Altman. Altman has met with Phillips but has no involvement in the super PAC, Krisiloff said.
 