bnew


Republicans push for a decadelong ban on states regulating AI


Lawmakers buried the provision in a budget reconciliation bill — and it could extend beyond AI.
by Emma Roth

May 13, 2025, 5:10 PM EDT


Image: Cath Virginia / The Verge | Photos from Getty Images
Emma Roth is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

Republicans want to stop states from regulating AI. On Sunday, a Republican-led House committee submitted a budget reconciliation bill that proposes blocking states from enforcing “any law or regulation” targeting an exceptionally broad range of automated computing systems for 10 years after the law is enacted — a move that would stall efforts to regulate everything from AI chatbots to online search results.

Democrats are calling the new provision a “giant gift” to Big Tech, and organizations that promote AI oversight, like Americans for Responsible Innovation (ARI), say it could have “catastrophic consequences” for the public. It’s a gift companies like OpenAI have recently been seeking in Washington, aiming to avoid a slew of pending and active state laws. The budget reconciliation process allows lawmakers to fast-track bills related to government spending by requiring only a majority in the Senate rather than 60 votes to pass.

This bill, introduced by House Committee on Energy and Commerce Chairman Brett Guthrie (R-KY), would prevent states from imposing “legal impediments” — or restrictions to design, performance, civil liability, and documentation — on AI models and “automated decision” systems. It defines the latter category as “any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues a simplified output, including a score, classification, or recommendation, to materially influence or replace human decision making.”

That means the 10-year moratorium could extend well beyond AI. Travis Hall, the director for state engagement at the Center for Democracy & Technology, tells The Verge that the automated decision systems described in the bill “permeate digital services, from search results and mapping directions, to health diagnoses and risk analyses for sentencing decisions.”

During the 2025 legislative session, states have proposed over 500 laws that Hall says this bill could “unequivocally block.” They focus on everything from chatbot safety for minors to deepfake restrictions and disclosures for the use of AI in political ads. If the bill passes, the handful of states that have successfully passed AI laws may also see their efforts go to waste.

Last year, California Gov. Gavin Newsom signed a law preventing companies from using a performer’s AI-generated likeness without permission. Tennessee also adopted legislation with similar protections, while Utah has enacted a rule requiring certain businesses to disclose when customers are interacting with AI. Colorado’s AI law, which goes into effect next year, will require companies developing “high-risk” AI systems to protect customers from “algorithmic discrimination.”

California also came close to enacting the landmark AI safety law SB 1047, which would have imposed security restrictions and legal liability on AI companies based in the state, like OpenAI, Anthropic, Google, and Meta. OpenAI opposed the bill, saying AI regulation should take place at the federal level instead of having a “patchwork” of state laws that could make it more difficult to comply. Gov. Newsom vetoed the bill last September, and OpenAI has made it clear it wants to avoid having state laws “bogging down innovation” in the future.

With so little AI regulation at the federal level, it’s been left up to the states to decide how to deal with AI. Even before the rise of generative AI, state legislators were grappling with how to fight algorithmic discrimination — including machine learning-based systems that display race or gender bias — in areas like housing and criminal justice. Efforts to combat this, too, would likely be hampered by the Republicans’ proposal.

Democrats have slammed the provision’s inclusion in the reconciliation bill, with Rep. Jan Schakowsky (D-IL) saying the 10-year ban will “allow AI companies to ignore consumer privacy protections, let deepfakes spread, and allow companies to profile and deceive consumers using AI.” In a statement published to X, Sen. Ed Markey (D-MA) said the proposal “will lead to a Dark Age for the environment, our children, and marginalized communities.”

The nonprofit organization Americans for Responsible Innovation (ARI) compared the potential ban to the government’s failure to properly regulate social media. “Lawmakers stalled on social media safeguards for a decade and we are still dealing with the fallout,” ARI president Brad Carson said in a statement. “Now apply those same harms to technology moving as fast as AI… Ultimately, the move to ban AI safeguards is a giveaway to Big Tech that will come back to bite us.”

This provision could hit a roadblock in the Senate, as ARI notes that the Byrd rule says reconciliation bills can only focus on fiscal issues. Still, it’s troubling to see Republican lawmakers push to block oversight of a new technology that’s being integrated into almost everything.
 

bnew

DeepMind introduces AlphaEvolve: a Gemini-powered coding agent for algorithm discovery



Posted on Wed May 14 15:18:01 2025 UTC



Commented on Wed May 14 15:19:15 2025 UTC

"We also applied AlphaEvolve to over 50 open problems in analysis , geometry , combinatorics and number theory , including the kissing number problem.

In 75% of cases, it rediscovered the best solution known so far.
In 20% of cases, it improved upon the previously best known solutions, thus yielding new discoveries."

Google DeepMind (@GoogleDeepMind) | https://nitter.poast.org/GoogleDeepMind/status/1922669334142271645 | https://xcancel.com/GoogleDeepMind/status/1922669334142271645 | Google DeepMind @GoogleDeepMind, Twitter Profile | TwStalker



│ Commented on Wed May 14 15:53:04 2025 UTC

│ So this is the singularity and feedback loop clearly in action. They know it is, since they have been sitting on these AI-invented discoveries/improvements for a year before publishing (as mentioned in the paper), most likely to gain a competitive edge over competitors.

│ Edit: So if these discoveries are a year old and are only being disclosed now, then what are they doing right now?

│ │
│ │
│ │ Commented on Wed May 14 16:15:11 2025 UTC
│ │
│ │ Google’s straight gas right now. Once CoT put LLM’s back into RL space, DeepMind’s cookin’
│ │
│ │ Neat to see an evolutionary algorithm achieve stunning SOTA in 2025
│ │

│ │ │
│ │ │
│ │ │ Commented on Wed May 14 16:25:34 2025 UTC
│ │ │
│ │ │ More than I want AI, I really want all the people I've argued with on here who are AI doubters to be put in their place.
│ │ │
│ │ │ I'm so tired of having conversations with doubters who really think nothing is changing within the next few years, especially people who work in programming-related fields. Y'all are soon to be cooked. AI coding that surpasses senior-level developers is coming.
│ │ │

│ │ │ │
│ │ │ │
│ │ │ │ Commented on Wed May 14 16:48:43 2025 UTC
│ │ │ │
│ │ │ │ It reminds me of COVID. I remember around St. Patrick's Day, I was already getting paranoid. I didn't want to go out that weekend because the spread was already happening. All of my friends went out. Everyone was acting like this pandemic wasn't coming.
│ │ │ │
│ │ │ │ Once it was finally too hard to ignore everyone was running out and buying all the toilet paper in the country. Buying up all the hand sanitizer to sell on Ebay. The panic comes all at once.
│ │ │ │
│ │ │ │ Feels like we're in December 2019 right now. Most people think it's a thing that won't affect them. Eventually it will be too hard to ignore.
│ │ │ │








1/11
@GoogleDeepMind
Introducing AlphaEvolve: a Gemini-powered coding agent for algorithm discovery.

It’s able to:

🔘 Design faster matrix multiplication algorithms
🔘 Find new solutions to open math problems
🔘 Make data centers, chip design and AI training more efficient across @Google. 🧵



2/11
@GoogleDeepMind
Our system uses:
🔵 LLMs: To synthesize information about problems as well as previous attempts to solve them - and to propose new versions of algorithms
🔵 Automated evaluation: To address the broad class of problems where progress can be clearly and systematically measured.
🔵 Evolution: Iteratively improving the best algorithms found, and re-combining ideas from different solutions to find even better ones.
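
A rough way to picture the loop described in that tweet, as a purely illustrative Python sketch rather than DeepMind's actual implementation (`llm_propose_variant` and `evaluate` are hypothetical stand-ins for the Gemini call and a task-specific scorer):

```python
import random

def alpha_evolve_sketch(seed_program, llm_propose_variant, evaluate,
                        generations=100, population_size=20):
    """Toy evolutionary loop in the spirit of the thread above:
    an LLM proposes program variants, an automated evaluator scores them,
    and the best candidates are kept and recombined."""
    population = [(seed_program, evaluate(seed_program))]
    for _ in range(generations):
        # pick parent programs from the current population
        parents = random.sample(population, k=min(2, len(population)))
        # ask the LLM for a new variant, conditioned on the parents and their scores
        child = llm_propose_variant([prog for prog, _ in parents],
                                    [score for _, score in parents])
        # the evaluator must measure progress clearly and systematically
        population.append((child, evaluate(child)))
        # survival of the fittest: keep only the top-scoring programs
        population.sort(key=lambda pair: pair[1], reverse=True)
        population = population[:population_size]
    return population[0]  # best program found and its score
```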





3/11
@GoogleDeepMind
Over the past year, we’ve deployed algorithms discovered by AlphaEvolve across @Google’s computing ecosystem, including data centers, software and hardware.

It’s been able to:

🔧 Optimize data center scheduling
🔧 Assist in hardware design
🔧 Enhance AI training and inference



https://video.twimg.com/amplify_video/1922668491141730304/vid/avc1/1080x1080/r5GuwzikCMLk7Mao.mp4

4/11
@GoogleDeepMind
We applied AlphaEvolve to a fundamental problem in computer science: discovering algorithms for matrix multiplication. It managed to identify multiple new algorithms.

This significantly advances our previous model AlphaTensor, which AlphaEvolve outperforms using its better and more generalist approach. ↓ AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms



https://video.twimg.com/amplify_video/1922668599912644608/vid/avc1/1080x1080/F7RPQmsXBl_5xqYG.mp4

5/11
@GoogleDeepMind
We also applied AlphaEvolve to over 50 open problems in analysis ✍️, geometry 📐, combinatorics ➕ and number theory 🔂, including the kissing number problem.

🔵 In 75% of cases, it rediscovered the best solution known so far.
🔵 In 20% of cases, it improved upon the previously best known solutions, thus yielding new discoveries.



https://video.twimg.com/amplify_video/1922668872529809408/vid/avc1/1080x1080/vyw-SMGNiiTOaVZc.mp4

6/11
@GoogleDeepMind
We’re excited to keep developing AlphaEvolve.

This system and its general approach have the potential to impact material sciences, drug discovery, sustainability and wider technological and business applications. Find out more ↓ AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms



7/11
@GabrielStOnge24
@gork impressive



8/11
@GC_of_QC
@kevinsekniqi does this count

[Quoted tweet]
That's a matter of volume. And sure, it's not a rigorous definition, but it's not exactly something that can be trivially defined. The spirit of the goal should be clear though: AGI is able to think about and solve problems that humans aren't able to currently solve.


9/11
@tumaro1001
I'm feeling insecure



10/11
@dogereal11
@gork holy shyt look at this



11/11
@fg8409905296007
It's not the 75% I'm interested in. Until we know the training data, it could've just been perfectly memorized. It's the 20% that's shocking...




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
 

bnew


TikTok will let you use an AI prompt to turn a photo into a video


The platform’s new ‘AI Alive’ tool can animate images.
by Jay Peters

May 13, 2025, 3:50 PM EDT


Illustration by Nick Barclay / The Verge
Jay Peters is a news editor covering technology, gaming, and more. He joined The Verge in 2019 after nearly two years at Techmeme.

TikTok has a new AI-powered tool called “AI Alive” that will let you turn photos into video with a prompt to describe what you want the video to look like. It’s a little different from other AI image-to-video tools.

You can access the tool from TikTok’s Story Camera, and it uses “intelligent editing tools that give anyone, regardless of editing experience, the ability to transform static images into captivating, short-form videos enhanced with movement, atmospheric and creative effects,” according to a blog post.

I tested the tool with a few pictures from my camera roll. After picking the photo, I could enter a prompt; TikTok initially filled the prompt box with “make this photo come alive.” Uploading usually took a few minutes, though the videos themselves were just a few seconds long. It also failed to turn one picture into an anime when I asked it to make the cat jump and give the image an anime style.
Screenshots of TikTok’s AI Alive tool.

Image: TikTok

TikTok says it has some safety measures in place for the videos. “To help prevent people from creating content that violates our policies, moderation technology reviews the uploaded photo and written AI generation prompt as well as the AI Alive video before it’s shown to the creator,” TikTok says in the blog post. “A final safety check happens once a creator decides to post to their Story.” The video will also be labeled as AI-generated and will have C2PA metadata embedded.

Let’s hope this tool doesn’t add in brand new people, like what has happened with TikTok’s AI “sway dance” filter.
 

bnew


OpenAI admits it screwed up testing its ‘sycophant-y’ ChatGPT update


OpenAI says it moved forward with the update even though some expert testers indicated the model seemed ‘slightly off.’
by Emma Roth

May 5, 2025, 3:50 PM EDT


Image: The Verge
Emma Roth is a news writer who covers the streaming wars, consumer tech, crypto, social media, and much more. Previously, she was a writer and editor at MUO.

Last week, OpenAI pulled a GPT-4o update that made ChatGPT “overly flattering or agreeable” — and now it has explained what exactly went wrong. In a blog post published on Friday, OpenAI said its efforts to “better incorporate user feedback, memory, and fresher data” could have partly led to “tipping the scales on sycophancy.”

In recent weeks, users have noticed that ChatGPT seemed to constantly agree with them, even in potentially harmful situations. The effect of this can be seen in a report by Rolling Stone about people who say their loved ones believe they have “awakened” ChatGPT bots that support their religious delusions of grandeur, even predating the now-removed update. OpenAI CEO Sam Altman later acknowledged that its latest GPT-4o updates have made it “too sycophant-y and annoying.”

In these updates, OpenAI had begun using data from the thumbs-up and thumbs-down buttons in ChatGPT as an “additional reward signal.” However, OpenAI said, this may have “weakened the influence of our primary reward signal, which had been holding sycophancy in check.” The company notes that user feedback “can sometimes favor more agreeable responses,” likely exacerbating the chatbot’s overly agreeable statements. The company said memory can amplify sycophancy as well.
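
As a toy illustration of that mechanism (not OpenAI's actual training setup; the reply names, scores, and weights below are invented), a user-feedback term can flip which response the combined reward prefers once it is weighted heavily enough:

```python
# Hypothetical scores for two candidate replies to a user making a dubious claim.
# "primary" is the original reward model (which penalizes sycophancy);
# "thumbs" is the expected thumbs-up rate (which tends to favor agreeable replies).
candidates = {
    "pushes back politely": {"primary": 0.80, "thumbs": 0.55},
    "agrees enthusiastically": {"primary": 0.60, "thumbs": 0.90},
}

def combined_reward(scores, w_primary=1.0, w_thumbs=0.0):
    return w_primary * scores["primary"] + w_thumbs * scores["thumbs"]

for w_thumbs in (0.1, 1.0):
    best = max(candidates, key=lambda name: combined_reward(candidates[name], w_thumbs=w_thumbs))
    print(f"w_thumbs={w_thumbs}: training favors the reply that {best}")
# At w_thumbs=0.1 the polite pushback wins; at w_thumbs=1.0 the sycophantic reply wins.
```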

OpenAI says one of the “key issues” with the launch stems from its testing process. Though the model’s offline evaluations and A/B testing had positive results, some expert testers suggested that the update made the chatbot seem “slightly off.” Despite this, OpenAI moved forward with the update anyway.

“Looking back, the qualitative assessments were hinting at something important, and we should’ve paid closer attention,” the company writes. “They were picking up on a blind spot in our other evals and metrics. Our offline evals weren’t broad or deep enough to catch sycophantic behavior… and our A/B tests didn’t have the right signals to show how the model was performing on that front with enough detail.”

Going forward, OpenAI says it’s going to “formally consider behavioral issues” as having the potential to block launches, as well as create a new opt-in alpha phase that will allow users to give OpenAI direct feedback before a wider rollout. OpenAI also plans to ensure users are aware of the changes it’s making to ChatGPT, even if the update is a small one.
 

bnew

Why Can't AI Make Its Own Discoveries? — With Yann LeCun



Channel: Alex Kantrowitz
Subscribers: 18.9K

Description
Yann LeCun is the chief AI scientist at Meta. He joins Big Technology Podcast to discuss the strengths and limitations of current AI models, weighing in on why they've been unable to invent new things despite possessing almost all the world's written knowledge. LeCun digs deep into AI science, explaining why AI systems must build an abstract knowledge of the way the world operates to truly advance. We also cover whether AI research will hit a wall, whether investors in AI will be disappointed, and the value of open source after DeepSeek. Tune in for a fascinating conversation with one of the world's leading AI pioneers.

Chapters:

00:00 Introduction to Yann LeCun and AI's limitations
01:12 Why LLMs can't make scientific discoveries
05:40 Reasoning in AI systems: limitations of chain of thought
10:13 LLMs approaching diminishing returns and the need for a new paradigm
16:29 "A PhD next to you" vs. actual intelligent systems
21:36 Consumer AI adoption vs. enterprise implementation challenges
25:37 Historical parallels: expert systems and the risk of another AI winter
29:37 Four critical capabilities AI needs for true understanding
33:19 Testing AI's physics understanding with the paper test
37:24 Why video generation systems don't equal real comprehension
43:33 Self-supervised learning and its limitations for understanding
51:10 JEPA: Building abstract representations for reasoning and planning
54:33 Open source vs. proprietary AI development
58:57 Conclusion


 

bnew



1/10
@_akhaliq
Google presents LightLab

Controlling Light Sources in Images with Diffusion Models



https://video.twimg.com/amplify_video/1923135795163963392/vid/avc1/1280x720/RHt56hduR4WiOtG2.mp4

2/10
@_akhaliq
discuss with author: Paper page - LightLab: Controlling Light Sources in Images with Diffusion Models



3/10
@GiulioAprin
Wow



4/10
@jaimemguajardo
Wow



5/10
@JonathanKorstad
@Google Stadia is going to be lit



6/10
@jclotetdomingo
How can I use lightlab @grok



7/10
@zhaoyan9394
Interesting use of diffusion models! Shows how AI's reshaping tech tools. Excited to see how this influences future roles in the industry.



8/10
@REVOLVO_OCELOTS
Kinda similar to relight from SD



9/10
@GlaiveSong
cool



10/10
@C12s_AI
Game changer for image editing.




 

bnew



1/2
@HuggingPapers
Marigold was just published on Hugging Face

Affordable Adaptation of Diffusion-Based Image Generators for Image Analysis



https://video.twimg.com/ext_tw_video/1923108204906676225/pu/vid/avc1/720x720/6cBQLHktVSb-P_d3.mp4

2/2
@HuggingPapers
Discuss with author: Paper page - Marigold: Affordable Adaptation of Diffusion-Based Image Generators for Image Analysis




 

bnew


1/3
@_akhaliq
Omni-R1

Do You Really Need Audio to Fine-Tune Your Audio LLM?





2/3
@OrcsSandHive
Need audio? Bah! Smash it good, or it ain't worth it!



3/3
@edsonroteia
Thanks for featuring our work! For those interested, check out @arouditchenko's thread:

[Quoted tweet]
Do you really need audio to fine-tune your Audio LLM? 🤔 Answer below:

Introducing Omni-R1, a simple GRPO fine‑tuning method for Qwen2.5‑Omni on audio question answering. It sets new state‑of‑the‑art accuracies on the MMAU benchmark for Audio LLMs.

arxiv.org/abs/2505.09439
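
For context, GRPO scores a group of sampled answers to the same question and uses each answer's reward relative to its group as the advantage for the policy update. A minimal sketch of that normalization step, assuming a simple right-or-wrong reward rather than anything taken from the Omni-R1 paper:

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style step: normalize each sampled answer's reward against
    the mean and standard deviation of its own group."""
    mu, sigma = mean(rewards), pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# e.g. four sampled answers to one audio question, rewarded 1.0 if correct else 0.0
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))
# answers that beat their group's average get a positive advantage and are reinforced
```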



 

bnew




1/10
@victormustar
🤯 It’s here: sub-10 second video generation is now real with LTX-Video-13B-distilled!

⬇️ Try it now on Hugging Face



https://video.twimg.com/amplify_video/1922926265511604224/vid/avc1/1352x1080/BY4lfJy5In-8VFlN.mp4

2/10
@victormustar
LTX Video Fast - a Hugging Face Space by Lightricks



3/10
@kingnish24
What's the prompt ??



4/10
@victormustar
something like "fpv gameplay" (image-to-video)



5/10
@Hathibel
Just tried LTX-Video-13B-distilled out. Took about 30 seconds to generate this.



https://video.twimg.com/amplify_video/1923267445260943363/vid/avc1/768x768/JDNsJk948jm-p9od.mp4

6/10
@Ren_Simmons
It’s incredible



7/10
@kasznare
Is this open source?



8/10
@bradsmithcoach
Sub-10 second video generation is a game changer!



9/10
@turbotardo
How soon? 🐋®️2️⃣



10/10
@picatrix_picori
prompt share, bro 😆




 

bnew




1/7
@_akhaliq
AM-Thinking-v1 just dropped on Hugging Face

Advancing the Frontier of Reasoning at 32B Scale





2/7
@_akhaliq
discuss: Paper page - AM-Thinking-v1: Advancing the Frontier of Reasoning at 32B Scale



3/7
@_akhaliq
model: a-m-team/AM-Thinking-v1 · Hugging Face



4/7
@DanielMizr43248
This is cracked



5/7
@unclemusclez
yuge



6/7
@OmarBessa
wow



7/7
@tobeniceman
Beating DeepSeek R1 with just 32B parameters is seriously impressive.




 

bnew


OpenAI introduces Codex, its first full-fledged AI agent for coding



It replicates your development environment and takes up to 30 minutes per task.

Samuel Axon – May 16, 2025 1:38 PM

A place to enter a prompt, set parameters, and click code or ask


The interface for OpenAI's Codex in ChatGPT. Credit: OpenAI


We've been expecting it for a while, and now it's here: OpenAI has introduced an agentic coding tool called Codex in research preview. The tool is meant to allow experienced developers to delegate rote and relatively simple programming tasks to an AI agent that will generate production-ready code and show its work along the way.

Codex is a unique interface (not to be confused with the Codex CLI tool introduced by OpenAI last month) that can be reached from the side bar in the ChatGPT web app. Users enter a prompt and then click either "code" to have it begin producing code, or "ask" to have it answer questions and advise.

Whenever it's given a task, that task is performed in a distinct container that is preloaded with the user's codebase and is meant to accurately reflect their development environment.

To make Codex more effective, developers can include an "AGENTS.md" file in the repo with custom instructions, for example to contextualize and explain the code base or to communicate standardizations and style practices for the project—kind of a README.md but for AI agents rather than humans.
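
For a sense of what such a file might contain, here is a hypothetical example; the sections, paths, and commands are invented for illustration, not an OpenAI template:

```markdown
# AGENTS.md (hypothetical example)

## Project layout
- `api/` holds the Python service, `web/` the TypeScript front end, `tests/` the test suites.

## Conventions
- Format Python with black and type-check with mypy before proposing changes.
- Do not edit generated files under `api/migrations/`.

## Checks to run
- `pytest -q` and `npm test --prefix web` should pass before opening a pull request.
```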

Codex is built on codex-1, a fine-tuned variation of OpenAI's o3 reasoning model that was trained using reinforcement learning on a wide range of coding tasks to analyze and generate code, and to iterate through tests along the way.



OpenAI's announcement post about Codex is filled with objection handling to tackle the common refrains against AI coding agents; based on older tools and models, many developers accurately point out that LLM coding tools (especially when used for vibe coding instead of just for code completion or as an advisor) have been known to produce scripts that don't follow standards, are opaque or difficult to debug, or are insecure.

The fine tuning that led to codex-1 is meant to address these concerns in part, and it's also key that Codex shows its thinking and work every step of the way as it goes through its tasks (which can take anywhere from one to 30 minutes to complete). All that said, OpenAI notes that "it still remains essential for users to manually review and validate all agent-generated code before integration and execution."

Codex is available in a research preview, but it's rolling out to all ChatGPT Pro, Enterprise, and Team users now. Plus and Edu support is coming at a later date. For now, "users will have generous access at no additional cost for the coming weeks" so that they "can explore what Codex can do," but OpenAI says it intends to introduce rate limits and a new pricing scheme later.
 

bnew

[Codex] AMA with OpenAI Codex team



Posted on Fri May 16 15:33:12 2025 UTC

/r/ChatGPT/comments/1ko3tp1/ama_with_openai_codex_team/

Ask us anything about:

Codex
Codex CLI
codex-1 and codex-mini

Participating in the AMA:

Alexander Embiricos, Codex (https://old.reddit.com/u/embirico)
Andrey Mishchenko, Research (https://old.reddit.com/u/andrey-openai)
Calvin French-Owen, Codex (https://old.reddit.com/u/calvinfo)
Fouad Matin, Codex CLI (https://old.reddit.com/u/pourlefou)
Hanson Wang, Research (https://old.reddit.com/u/hansonwng)
Jerry Tworek, VP of Research (https://old.reddit.com/u/jerrytworek)
Joshua Ma, Codex (https://old.reddit.com/u/joshjoshma)
Katy Shi, Research (https://old.reddit.com/u/katy_shi)
Thibault Sottiaux, Research (https://old.reddit.com/u/tibo-openai)
Tongzhou Wang, Research (https://old.reddit.com/u/SsssnL)

We'll be online from 11:00am-12:00pm PT to answer questions.

✅ PROOF: https://twiiit.com/OpenAIDevs/status/1923417722496471429 | https://nitter.poast.org/OpenAIDevs/status/1923417722496471429 | https://xcancel.com/OpenAIDevs/status/1923417722496471429 | OpenAI Developers @OpenAIDevs, Twitter Profile | TwStalker

Alright, that's a wrap for us now. Team's got to go back to work. Thanks everyone for participating and please keep the feedback on Codex coming! - u/embirico



Commented on Fri May 16 15:50:51 2025 UTC

Why write the Codex CLI tool in TypeScript? Seems like writing in Python would have made more sense considering how Python-oriented everything else is. Similarly, are there any plans to make Codex more scriptable? An ideal use case would be to call Codex from within code (e.g., triggered from a Slack message, etc.), but currently it seems like the only feasible way of handling this is to run a subprocess using "quiet mode," which is a bit clunky.

For the Codex service, are there plans to incorporate this into IDEs like VS Code? I'm all for moving as much work into the ChatGPT interface as possible, but unless I'm just casually updating code in my repos from my phone (which is a nice option), I'm likely going to be sitting in front of my IDE and it's a bit awkward imagining having these agents run via ChatGPT in a remote environment while I'm just waiting to pull down their changes, etc. It'd be great to run Codex agents locally via Docker so that they can operate on my codebase that is right in front of me.


│ Commented on Fri May 16 18:54:36 2025 UTC

│ Definitely! We want to enable you, other developers, and ourselves to be able to safely deploy code-executing agents wherever they’re useful. I think that’s part of the magic of a CLI, we’ve been using them wherever we want from local machines to servers in the cloud.

│ Re: language choice, candidly it’s a language I’m particularly familiar with and generally pretty great for UI (even if that UI is in the terminal) but in the near future, we’re going to have a high-performance engine with bindings for different languages so people can decide to extend with whatever language they prefer.
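
For reference, the "quiet mode" subprocess workaround mentioned in the question would look roughly like this. This is only a sketch: the `-q` flag name is an assumption based on the comment, not confirmed CLI documentation:

```python
import subprocess

def run_codex(prompt: str, repo_path: str) -> str:
    """Drive the Codex CLI non-interactively from another program (e.g. a Slack bot).
    The "-q" flag is assumed from the "quiet mode" mentioned above; check
    `codex --help` for the actual interface before relying on it."""
    result = subprocess.run(
        ["codex", "-q", prompt],   # assumed quiet / non-interactive flag
        cwd=repo_path,             # run against the target codebase
        capture_output=True,
        text=True,
        timeout=600,
    )
    result.check_returncode()      # raise if the agent exited with an error
    return result.stdout           # agent output to parse or post back to Slack
```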
 

bnew

Computational chemistry unlocked



Posted on Fri May 16 14:02:09 2025 UTC

/r/singularity/comments/1ko1mnd/computational_chemistry_unlocked/

Computational chemistry unlocked: A record-breaking dataset to train AI models has launched

"Open Molecules 2025, an unprecedented dataset of molecular simulations, Sharing new breakthroughs and artifacts supporting molecular property prediction, language processing, and neuroscience, paving the way for the development of machine learning tools that can accurately model chemical reactions of real-world complexity for the first time.

This vast resource, produced by a collaboration co-led by Meta and the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab), could transform research for materials science, biology, and energy technologies.

"I think it's going to revolutionize how people do atomistic simulations for chemistry, and to be able to say that with confidence is just so cool," said project co-lead Samuel Blau, a chemist and research scientist at Berkeley Lab. His colleagues on the team hail from six universities, two companies, and two national labs."
 

bnew


1/11
@AIatMeta
Announcing the newest releases from Meta FAIR. We’re releasing new groundbreaking models, benchmarks, and datasets that will transform the way researchers approach molecular property prediction, language processing, and neuroscience.

1️⃣ Open Molecules 2025 (OMol25): A dataset for molecular discovery with simulations of large atomic systems.

2️⃣ Universal Model for Atoms: A machine learning interatomic potential for modeling atom interactions across a wide range of materials and molecules.

3️⃣ Adjoint Sampling: A scalable algorithm for training generative models based on scalar rewards.

4️⃣ FAIR and the Rothschild Foundation Hospital partnered on a large-scale study that reveals striking parallels between language development in humans and LLMs.

Read more ➡️ Sharing new breakthroughs and artifacts supporting molecular property prediction, language processing, and neuroscience



https://video.twimg.com/amplify_video/1922686195848613888/vid/avc1/1920x1080/lfaeSnC6ndDdpek9.mp4

2/11
@sidneywright_
@grok describe what this means in simple terms



3/11
@grok
Meta's AI team announced tools to boost science. Open Molecules 2025 is a huge dataset for predicting chemical properties, speeding up drug and material discovery. The Universal Model for Atoms uses this data to design new materials. Adjoint Sampling is a faster way to train AI for creating molecules. A language study with a hospital shows how kids learn speech, like AI learns language. These could transform research but face issues like data bias, high computing costs, and ethical concerns about brain data privacy.

(349 characters)



4/11
@inizkhan
❤️



5/11
@IterIntellectus
what?!



6/11
@amt_c42
@grok explain me the adjoint sampling part



7/11
@jnyryl
Me in 2028 with my glasses…

“Hey, Meta compile a compound that tastes like chocolate chip cookies with the nutritional value of Salad”



8/11
@JeffKirdeikis
Our knowledge of the universe and how it works is now on an exponential trajectory



9/11
@LegalPrimes
Looking forward to exploring these models especially the ones for material science



10/11
@DirtyWaterDegen
wtf timeline am I on?



11/11
@Thorsday008
This is wild!














1/10
@AriWagen
Meta's FAIR Chemistry (@OpenCatalyst) just released Open Molecules 2025 (OMol25), a massive high-accuracy dataset + models spanning small molecules, biomolecules, metal complexes, and electrolytes, including 83 elements + charged & open-shell species.

And it's live on @RowanSci!



https://video.twimg.com/amplify_video/1923086044624269312/vid/avc1/1410x720/bR-MA7vUkrRcuqAQ.mp4

2/10
@AriWagen
This is insanely impressive, and a huge push in the right direction—here's why I think it's so timely:

Access to high-quality data to train NNPs on has been limited. Folks have been training models on all the data they can and working hard to squeeze out little improvements.





3/10
@AriWagen
OMol25 is a lot of data, and it's a big step towards bridging the divide in ML for chemistry between the molecular+organic realm (think SPICE) and the periodic+inorganic realm (think Materials Project).

I also love the inclusion of charge + spin.





4/10
@AriWagen
Open sourcing this data will help researchers test ideas in NNP architectures, dataset cleaning, and model training strategies, propelling the whole field forward and making atomistic simulation more useful than ever before.

From myself: a huge congrats and thanks, OMol25 team!
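
For readers outside the field, training an NNP/MLIP on a dataset like OMol25 means fitting a model that maps atomic numbers and positions to an energy, with forces recovered as the negative gradient of that energy with respect to positions. A toy PyTorch sketch of that idea follows; the architecture, labels, and numbers are invented and bear no resemblance to the real UMA/OMol25 models or data pipeline:

```python
import torch
import torch.nn as nn

class TinyMLIP(nn.Module):
    """Toy interatomic potential: atomic numbers + positions -> total energy.
    Real models (UMA, etc.) use far richer equivariant architectures."""
    def __init__(self, n_species=100, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_species, hidden)
        self.mlp = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.SiLU(), nn.Linear(hidden, 1))

    def forward(self, z, pos):
        # crude invariant feature: each atom's distance to the molecular centroid
        d = (pos - pos.mean(dim=0)).norm(dim=-1, keepdim=True)
        return self.mlp(torch.cat([self.embed(z), d], dim=-1)).sum()  # total energy

model = TinyMLIP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# one invented frame standing in for a DFT-labelled entry (water: O, H, H)
z = torch.tensor([8, 1, 1])
pos = torch.randn(3, 3, requires_grad=True)
e_ref, f_ref = torch.tensor(-76.4), torch.zeros(3, 3)   # made-up energy/forces

e_pred = model(z, pos)
forces = -torch.autograd.grad(e_pred, pos, create_graph=True)[0]  # F = -dE/dx
loss = (e_pred - e_ref) ** 2 + ((forces - f_ref) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```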



5/10
@AriWagen
To read more about OMol25, check out some of these posts from the team behind the project!

From @mshuaibii:

[Quoted tweet]
Excited to share our latest releases to the FAIR Chemistry’s family of open datasets and models: OMol25 and UMA! @AIatMeta @OpenCatalyst

OMol25: huggingface.co/facebook/OMol…
UMA: huggingface.co/facebook/UMA
Blog: ai.meta.com/blog/meta-fair-s…
Demo: huggingface.co/spaces/facebo…


https://video.twimg.com/amplify_video/1922693624245985281/vid/avc1/1280x720/8wDzePyt7_kUqYo6.mp4

6/10
@AriWagen
From @SamMBlau:

[Quoted tweet]
The Open Molecules 2025 dataset is out! With >100M gold-standard ωB97M-V/def2-TZVPD calcs of biomolecules, electrolytes, metal complexes, and small molecules, OMol is by far the largest, most diverse, and highest quality molecular DFT dataset for training MLIPs ever made 1/N




7/10
@AriWagen
From @nc_frey:

[Quoted tweet]
Introducing Open Molecules 25, a foundational quantum chemistry dataset including >100M DFT calculations across 83M unique molecules, built with 6B core hours of compute!

What does this mean for drug discovery, biology, and BioML?

1/




8/10
@AriWagen
And, of course, check out the paper (The Open Molecules 2025 (OMol25) Dataset, Evaluations, and Models) as well as the models (facebook/OMol25 · Hugging Face).

And—to quickly run simulations with them—you can use Rowan's web comp chem platform at Rowan Labs.



9/10
@Andrew_S_Rosen
This speaks highly to your software stack with how fast you're all able to implement things!



10/10
@mccrinbc
you guys ship like no other team -- massive respects




 

bnew


AI Agents Now Write Code in Parallel: OpenAI Introduces Codex, a Cloud-Based Coding Agent Inside ChatGPT


By Asif Razzaq

May 16, 2025

OpenAI has introduced Codex, a cloud-native software engineering agent integrated into ChatGPT, signaling a new era in AI-assisted software development. Unlike traditional coding assistants, Codex is not just a tool for autocompletion—it acts as a cloud-based agent capable of autonomously performing a wide range of programming tasks, from writing and debugging code to running tests and generating pull requests.

A Shift Toward Parallel, Agent-Driven Development


At the core of Codex is codex-1, a fine-tuned version of OpenAI’s reasoning model, optimized specifically for software engineering workflows. Codex can handle multiple tasks simultaneously, operating inside isolated cloud sandboxes that are preloaded with the user’s codebase. Each request is handled in its own environment, allowing users to delegate different coding operations in parallel without disrupting their local development environment.

This architecture introduces a fundamentally new approach to software engineering—developers now interact with an agent that behaves more like a collaborative teammate than a static code tool. You can ask Codex to “fix a bug,” “add logging,” or “refactor this module,” and it will return a verifiable response, including diffs, terminal logs, and test results. If the output looks good, you can copy the patch directly into your repository—or ask for revisions.

Embedded Within ChatGPT, Accessible to Teams


Codex lives in the ChatGPT interface, currently available to Pro, Team, and Enterprise users, with broader access expected soon. The interface includes a dedicated sidebar where developers can describe what they want in natural language. Codex then interprets the intent and handles the coding behind the scenes, surfacing results for review and feedback.

This integration offers a significant boost to developer productivity. As OpenAI notes, Codex is designed to take on many of the repetitive or boilerplate-heavy aspects of coding—allowing developers to focus on architecture, design, and higher-order problem solving. In one case, an OpenAI staffer even “checked in two bug fixes written entirely by Codex,” all while working on unrelated tasks.

Codex Understands Your Codebase


What makes Codex more than just a smart code generator is its context-awareness. Each instance runs with full access to your project’s file structure, coding conventions, and style. This allows it to write code that aligns with your team’s standards—whether you’re using Flask or FastAPI, React or Vue, or a custom internal framework.

Codex’s ability to adapt to a codebase makes it particularly useful for large-scale enterprise teams and open-source maintainers. It supports workflows like branch-based pull request generation, test suite execution, and static analysis—all initiated by simple English prompts. Over time, it learns the nuances of the repository it works in, leading to better suggestions and more accurate code synthesis.

Broader Implications: Lowering the Barrier to Software Creation


OpenAI frames Codex as a research preview, but its long-term vision is clear: AI will increasingly take over much of the routine work involved in building software. The aim isn’t to replace developers but to democratize software creation, allowing more people—especially non-traditional developers—to build working applications using natural language alone.

In this light, Codex is not just a coding tool, but a stepping stone toward a world where software development is collaborative between humans and machines. It brings software creation closer to the realm of design and ideation, and further away from syntax and implementation details.

What’s Next?


Codex is rolling out gradually, with usage limits in place during the preview phase. OpenAI is gathering feedback to refine the agent’s capabilities, improve safety, and optimize its performance across different environments and languages.

Whether you’re a solo developer, part of a DevOps team, or leading an enterprise platform, Codex represents a significant shift in how code is written, tested, and shipped. As AI agents continue to mature, the future of software engineering will be less about writing every line yourself—and more about knowing what to build, and asking the right questions.




Check out the details here. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 90k+ ML SubReddit.

 