bnew

Veteran
Joined
Nov 1, 2015
Messages
63,701
Reputation
9,773
Daps
173,918

Republicans push for a decadelong ban on states regulating AI​


Lawmakers buried the provision in a budget reconciliation bill — and it could extend beyond AI.
by Emma Roth

May 13, 2025, 5:10 PM EDT


Image: Cath Virginia / The Verge | Photos from Getty Images
Emma Roth is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

Republicans want to stop states from regulating AI. On Sunday, a Republican-led House committee submitted a budget reconciliation bill that proposes blocking states from enforcing “any law or regulation” targeting an exceptionally broad range of automated computing systems for 10 years after the law is enacted — a move that would stall efforts to regulate everything from AI chatbots to online search results.

Democrats are calling the new provision a “giant gift” to Big Tech, and organizations that promote AI oversight, like Americans for Responsible Innovation (ARI), say it could have “catastrophic consequences” for the public. It’s a gift companies like OpenAI have recently been seeking in Washington, aiming to avoid a slew of pending and active state laws. The budget reconciliation process allows lawmakers to fast-track bills related to government spending by requiring only a majority in the Senate rather than 60 votes to pass.

This bill, introduced by House Committee on Energy and Commerce Chairman Brett Guthrie (R-KY), would prevent states from imposing “legal impediments” — or restrictions to design, performance, civil liability, and documentation — on AI models and “automated decision” systems. It defines the latter category as “any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues a simplified output, including a score, classification, or recommendation, to materially influence or replace human decision making.”
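To see how broad that definition is, consider a hypothetical sketch: even a few lines of linear scoring arguably qualify as an "automated decision" system under the bill's language, since they are "derived from ... statistical modeling" and issue "a score ... to materially influence or replace human decision making." The function, domain, and thresholds below are invented purely for illustration, not taken from the bill:

```python
# Hypothetical illustration: even a trivial statistical scorer arguably meets
# the bill's definition of an "automated decision" system -- it is "derived
# from ... statistical modeling" and "issues a simplified output, including a
# score ... to materially influence ... human decision making."

def tenant_screening_score(late_payments: int, years_employed: float) -> str:
    """Toy rental-application scorer (illustrative only, not from the bill)."""
    score = 700 - 40 * late_payments + 10 * years_employed  # simple linear model
    return "approve" if score >= 650 else "review"

print(tenant_screening_score(late_payments=1, years_employed=3.0))  # approve
print(tenant_screening_score(late_payments=4, years_employed=0.5))  # review
```

Nothing here involves a frontier model, yet a state law restricting this kind of scoring in housing decisions could plausibly fall under the moratorium.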

That means the 10-year moratorium could extend well beyond AI. Travis Hall, the director for state engagement at the Center for Democracy & Technology, tells The Verge that the automated decision systems described in the bill “permeate digital services, from search results and mapping directions, to health diagnoses and risk analyses for sentencing decisions.”

During the 2025 legislative session, states have proposed over 500 laws that Hall says this bill could “unequivocally block.” They focus on everything from chatbot safety for minors to deepfake restrictions and disclosures for the use of AI in political ads. If the bill passes, the handful of states that have successfully passed AI laws may also see their efforts go to waste.

Last year, California Gov. Gavin Newsom signed a law preventing companies from using a performer’s AI-generated likeness without permission. Tennessee also adopted legislation with similar protections, while Utah has enacted a rule requiring certain businesses to disclose when customers are interacting with AI. Colorado’s AI law, which goes into effect next year, will require companies developing “high-risk” AI systems to protect customers from “algorithmic discrimination.”

California also came close to enacting the landmark AI safety law SB 1047, which would have imposed security restrictions and legal liability on AI companies based in the state, like OpenAI, Anthropic, Google, and Meta. OpenAI opposed the bill, arguing that AI regulation should happen at the federal level rather than through a “patchwork” of state laws that could make compliance more difficult. Gov. Newsom vetoed the bill last September, and OpenAI has made it clear it wants to avoid state laws “bogging down innovation” in the future.

With so little AI regulation at the federal level, it’s been left up to the states to decide how to deal with AI. Even before the rise of generative AI, state legislators were grappling with how to fight algorithmic discrimination — including machine learning-based systems that display race or gender bias — in areas like housing and criminal justice. Efforts to combat this, too, would likely be hampered by the Republicans’ proposal.

Democrats have slammed the provision’s inclusion in the reconciliation bill, with Rep. Jan Schakowsky (D-IL) saying the 10-year ban will “allow AI companies to ignore consumer privacy protections, let deepfakes spread, and allow companies to profile and deceive consumers using AI.” In a statement published to X, Sen. Ed Markey (D-MA) said the proposal “will lead to a Dark Age for the environment, our children, and marginalized communities.”

The nonprofit organization Americans for Responsible Innovation (ARI) compared the potential ban to the government’s failure to properly regulate social media. “Lawmakers stalled on social media safeguards for a decade and we are still dealing with the fallout,” ARI president Brad Carson said in a statement. “Now apply those same harms to technology moving as fast as AI… Ultimately, the move to ban AI safeguards is a giveaway to Big Tech that will come back to bite us.”

This provision could hit a roadblock in the Senate: as ARI notes, the Byrd rule limits reconciliation bills to fiscal matters. Still, it’s troubling to see Republican lawmakers push to block oversight of a new technology that’s being integrated into almost everything.
 

bnew

DeepMind introduces AlphaEvolve: a Gemini-powered coding agent for algorithm discovery



Posted on Wed May 14 15:18:01 2025 UTC



Commented on Wed May 14 15:19:15 2025 UTC

"We also applied AlphaEvolve to over 50 open problems in analysis, geometry, combinatorics and number theory, including the kissing number problem.

In 75% of cases, it rediscovered the best solution known so far.
In 20% of cases, it improved upon the previously best known solutions, thus yielding new discoveries."

Google DeepMind (@GoogleDeepMind) | https://nitter.poast.org/GoogleDeepMind/status/1922669334142271645 | https://xcancel.com/GoogleDeepMind/status/1922669334142271645



│ Commented on Wed May 14 15:53:04 2025 UTC

│ So this is the singularity and feedback loop clearly in action. They know it is, since they have been sitting on these AI-invented discoveries/improvements for a year before publishing (as mentioned in the paper), most likely to gain a competitive edge over competitors.

│ Edit: So if these discoveries are a year old and are only being disclosed now, then what are they doing right now?

│ │
│ │
│ │ Commented on Wed May 14 16:15:11 2025 UTC
│ │
│ │ Google’s straight gas right now. Once CoT put LLM’s back into RL space, DeepMind’s cookin’
│ │
│ │ Neat to see an evolutionary algorithm achieve stunning SOTA in 2025
│ │

│ │ │
│ │ │
│ │ │ Commented on Wed May 14 16:25:34 2025 UTC
│ │ │
│ │ │ More than I want AI, I really want all the people I've argued with on here who are AI doubters to be put in their place.
│ │ │
│ │ │ I'm so tired of having conversations with doubters who really think nothing is changing within the next few years, especially people who work in programming related fields. Y'all are soon to be cooked. AI coding that surpasses senior level developers is coming.
│ │ │

│ │ │ │
│ │ │ │
│ │ │ │ Commented on Wed May 14 16:48:43 2025 UTC
│ │ │ │
│ │ │ │ It reminds me of COVID. I remember around St. Patrick's Day, I was already getting paranoid. I didn't want to go out that weekend because the spread was already happening. All of my friends went out. Everyone was acting like this pandemic wasn't coming.
│ │ │ │
│ │ │ │ Once it was finally too hard to ignore everyone was running out and buying all the toilet paper in the country. Buying up all the hand sanitizer to sell on Ebay. The panic comes all at once.
│ │ │ │
│ │ │ │ Feels like we're in December 2019 right now. Most people think it's a thing that won't affect them. Eventually it will be too hard to ignore.
│ │ │ │








1/11
@GoogleDeepMind
Introducing AlphaEvolve: a Gemini-powered coding agent for algorithm discovery.

It’s able to:

🔘 Design faster matrix multiplication algorithms
🔘 Find new solutions to open math problems
🔘 Make data centers, chip design and AI training more efficient across @Google. 🧵



2/11
@GoogleDeepMind
Our system uses:
🔵 LLMs: To synthesize information about problems as well as previous attempts to solve them - and to propose new versions of algorithms
🔵 Automated evaluation: To address the broad class of problems where progress can be clearly and systematically measured.
🔵 Evolution: Iteratively improving the best algorithms found, and re-combining ideas from different solutions to find even better ones.
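The loop those three bullets describe can be sketched in miniature. In this toy version (all names, numbers, and the objective are my own, and a random mutation stands in for the Gemini proposal step), candidates are scored by an automated evaluator, and the best ones are kept and perturbed each generation:

```python
import random

def evaluate(candidate):
    """Automated evaluation: score a candidate on a measurable objective.
    Here the 'program' is just a 2-vector and the objective is a toy function."""
    x, y = candidate
    return -((x - 3) ** 2 + (y + 1) ** 2)  # maximized at (3, -1)

def mutate(parent):
    """Stand-in for the LLM proposal step: perturb the best known candidate."""
    return [p + random.gauss(0, 0.5) for p in parent]

def evolve(generations=200, population_size=20, seed=0):
    random.seed(seed)
    population = [[random.uniform(-10, 10) for _ in range(2)]
                  for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=evaluate, reverse=True)
        survivors = population[: population_size // 4]  # keep the best found
        children = [mutate(random.choice(survivors))
                    for _ in range(population_size - len(survivors))]
        population = survivors + children  # recombine and iterate
    return max(population, key=evaluate)

best = evolve()
print(best)  # approaches (3, -1)
```

The real system differs in the parts that matter most: the "mutation" is an LLM rewriting actual code, and the evaluator runs that code against a clearly measurable objective. But the skeleton — propose, score, keep the best, repeat — is the same.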





3/11
@GoogleDeepMind
Over the past year, we’ve deployed algorithms discovered by AlphaEvolve across @Google’s computing ecosystem, including data centers, software and hardware.

It’s been able to:

🔧 Optimize data center scheduling
🔧 Assist in hardware design
🔧 Enhance AI training and inference



https://video.twimg.com/amplify_video/1922668491141730304/vid/avc1/1080x1080/r5GuwzikCMLk7Mao.mp4

4/11
@GoogleDeepMind
We applied AlphaEvolve to a fundamental problem in computer science: discovering algorithms for matrix multiplication. It managed to identify multiple new algorithms.

This significantly advances our previous model AlphaTensor, which AlphaEvolve outperforms using its better and more generalist approach. ↓ AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
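For context on what discovering a "new matrix multiplication algorithm" means here: the classic example is Strassen's 1969 scheme, which multiplies 2×2 matrices using 7 scalar multiplications instead of the naive 8. AlphaTensor and AlphaEvolve search for schemes of this kind for larger shapes; the snippet below only illustrates the well-known Strassen identities, not DeepMind's new results:

```python
def strassen_2x2(A, B):
    """Strassen's 2x2 multiplication: 7 scalar multiplications instead of 8.
    AlphaTensor/AlphaEvolve search for schemes like this in larger cases."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Saving one multiplication per 2×2 block compounds when applied recursively to large matrices, which is why finding such identities for new shapes is a meaningful discovery.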



https://video.twimg.com/amplify_video/1922668599912644608/vid/avc1/1080x1080/F7RPQmsXBl_5xqYG.mp4

5/11
@GoogleDeepMind
We also applied AlphaEvolve to over 50 open problems in analysis ✍️, geometry 📐, combinatorics ➕ and number theory 🔂, including the kissing number problem.

🔵 In 75% of cases, it rediscovered the best solution known so far.
🔵 In 20% of cases, it improved upon the previously best known solutions, thus yielding new discoveries.



https://video.twimg.com/amplify_video/1922668872529809408/vid/avc1/1080x1080/vyw-SMGNiiTOaVZc.mp4

6/11
@GoogleDeepMind
We’re excited to keep developing AlphaEvolve.

This system and its general approach has potential to impact material sciences, drug discovery, sustainability and wider technological and business applications. Find out more ↓ AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms



7/11
@GabrielStOnge24
@gork impressive



8/11
@GC_of_QC
@kevinsekniqi does this count

[Quoted tweet]
That's a matter of volume. And sure, it's not a rigorous definition, but it's not exactly something that can be trivially defined. The spirit of the goal should be clear though: AGI is able to think about and solve problems that humans aren't able to currently solve.


9/11
@tumaro1001
I'm feeling insecure



10/11
@dogereal11
@gork holy shyt look at this



11/11
@fg8409905296007
It's not the 75% I'm interested in. Until we know the training data, it could've just been perfectly memorized. It's the 20% that's shocking...




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
 

bnew


TikTok will let you use an AI prompt to turn a photo into a video​


The platform’s new ‘AI Alive’ tool can animate images.
by Jay Peters

May 13, 2025, 3:50 PM EDT


Illustration by Nick Barclay / The Verge
Jay Peters is a news editor covering technology, gaming, and more. He joined The Verge in 2019 after nearly two years at Techmeme.

TikTok has a new AI-powered tool called “AI Alive” that will let you turn photos into video with a prompt describing what you want the video to look like. It’s a little different from other AI image-to-video tools.

You can access the tool from TikTok’s Story Camera, and it uses “intelligent editing tools that give anyone, regardless of editing experience, the ability to transform static images into captivating, short-form videos enhanced with movement, atmospheric and creative effects,” according to a blog post.

I tested the tool with a few pictures from my camera roll. After picking the photo, I could enter a prompt; TikTok initially filled the prompt box with “make this photo come alive.” Uploading usually took a few minutes, though the videos themselves were just a few seconds long. It also failed to turn one picture into an anime when I asked it to make the cat in it jump in an anime style.
Screenshots of TikTok’s AI Alive tool.

Image: TikTok

TikTok says it has some safety measures in place for the videos. “To help prevent people from creating content that violates our policies, moderation technology reviews the uploaded photo and written AI generation prompt as well as the AI Alive video before it’s shown to the creator,” TikTok says in the blog post. “A final safety check happens once a creator decides to post to their Story.” The video will also be labeled as AI-generated and will have C2PA metadata embedded.

Let’s hope this tool doesn’t add in brand new people, like what has happened with TikTok’s AI “sway dance” filter.
 

bnew


OpenAI admits it screwed up testing its ‘sycophant-y’ ChatGPT update


OpenAI says it moved forward with the update even though some expert testers indicated the model seemed ‘slightly off.’
by Emma Roth

May 5, 2025, 3:50 PM EDT


Image: The Verge

Last week, OpenAI pulled a GPT-4o update that made ChatGPT “overly flattering or agreeable” — and now it has explained what exactly went wrong. In a blog post published on Friday, OpenAI said its efforts to “better incorporate user feedback, memory, and fresher data” could have partly led to “tipping the scales on sycophancy.”

In recent weeks, users have noticed that ChatGPT seemed to constantly agree with them, even in potentially harmful situations. The effect of this can be seen in a report by Rolling Stone about people who say their loved ones believe they have “awakened” ChatGPT bots that support their religious delusions of grandeur, even predating the now-removed update. OpenAI CEO Sam Altman later acknowledged that its latest GPT-4o updates have made it “too sycophant-y and annoying.”

In these updates, OpenAI had begun using data from the thumbs-up and thumbs-down buttons in ChatGPT as an “additional reward signal.” However, OpenAI said, this may have “weakened the influence of our primary reward signal, which had been holding sycophancy in check.” The company notes that user feedback “can sometimes favor more agreeable responses,” likely exacerbating the chatbot’s overly agreeable statements. The company said memory can amplify sycophancy as well.
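A hypothetical sketch of the mechanism OpenAI describes (the weights, scores, and function here are invented for illustration, not OpenAI's actual reward model): if thumbs-up feedback systematically favors agreeable answers, blending it into the reward with enough weight can flip which response the training process prefers:

```python
# Toy model of mixing an "additional reward signal" (thumbs-up data) into a
# primary reward. All numbers are illustrative assumptions.

def combined_reward(primary: float, thumbs: float, thumbs_weight: float) -> float:
    """Weighted blend of the primary reward and a user-feedback signal."""
    return (1 - thumbs_weight) * primary + thumbs_weight * thumbs

# Two candidate responses: 'honest' scores higher on the primary reward,
# 'sycophantic' scores higher on thumbs-up feedback.
honest = {"primary": 0.9, "thumbs": 0.4}
sycophantic = {"primary": 0.6, "thumbs": 0.95}

for w in (0.0, 0.3, 0.6):
    h = combined_reward(honest["primary"], honest["thumbs"], w)
    s = combined_reward(sycophantic["primary"], sycophantic["thumbs"], w)
    print(f"weight={w}: honest={h:.2f} sycophantic={s:.2f} -> "
          + ("honest wins" if h > s else "sycophantic wins"))
```

At weight 0.0 and 0.3 the honest response still wins; at 0.6 the sycophantic one does — a simple picture of how a secondary signal can "weaken the influence" of the primary one.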

OpenAI says one of the “key issues” with the launch stems from its testing process. Though the model’s offline evaluations and A/B testing had positive results, some expert testers suggested that the update made the chatbot seem “slightly off.” Despite this, OpenAI moved forward with the update anyway.
“Looking back, the qualitative assessments were hinting at something important, and we should’ve paid closer attention,” the company writes. “They were picking up on a blind spot in our other evals and metrics. Our offline evals weren’t broad or deep enough to catch sycophantic behavior… and our A/B tests didn’t have the right signals to show how the model was performing on that front with enough detail.”

Going forward, OpenAI says it’s going to “formally consider behavioral issues” as having the potential to block launches, as well as create a new opt-in alpha phase that will allow users to give OpenAI direct feedback before a wider rollout. OpenAI also plans to ensure users are aware of the changes it’s making to ChatGPT, even if the update is a small one.
 

bnew

Why Can't AI Make Its Own Discoveries? — With Yann LeCun



Channel Info Alex Kantrowitz
Subscribers: 18.9K

Description
Yann LeCun is the chief AI scientist at Meta. He joins Big Technology Podcast to discuss the strengths and limitations of current AI models, weighing in on why they've been unable to invent new things despite possessing almost all the world's written knowledge. LeCun digs deep into AI science, explaining why AI systems must build an abstract knowledge of the way the world operates to truly advance. We also cover whether AI research will hit a wall, whether investors in AI will be disappointed, and the value of open source after DeepSeek. Tune in for a fascinating conversation with one of the world's leading AI pioneers.

Chapters:

00:00 Introduction to Yann LeCun and AI's limitations
01:12 Why LLMs can't make scientific discoveries
05:40 Reasoning in AI systems: limitations of chain of thought
10:13 LLMs approaching diminishing returns and the need for a new paradigm
16:29 "A PhD next to you" vs. actual intelligent systems
21:36 Consumer AI adoption vs. enterprise implementation challenges
25:37 Historical parallels: expert systems and the risk of another AI winter
29:37 Four critical capabilities AI needs for true understanding
33:19 Testing AI's physics understanding with the paper test
37:24 Why video generation systems don't equal real comprehension
43:33 Self-supervised learning and its limitations for understanding
51:10 JEPA: Building abstract representations for reasoning and planning
54:33 Open source vs. proprietary AI development
58:57 Conclusion


Transcripts

Show transcript
 
Top