AI that’s smarter than humans? Americans say a firm “no thank you.”

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561


1/2
@colin_fraser
At what point should it be reasonable to expect coherent answers to these? How far beyond PhD-level reasoning must we climb?



Gorciv4awAETkMf.jpg


2/2
@mommi84
o3 with a simple addition to the prompt solved it. Integrated neurosymbolic reasoning is what these models need.



GovBLU8XoAAdbU1.jpg

GovBLU1XkAAxELl.jpg



To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561



1/14
@mark_k
"COMPUTER, ENHANCE!" is finally reality. ChatGPT o3 was able to solve the math problem scribbled on the small piece of paper you can see in this image.

For this to work, o3 had to zoom into the picture, rotate the piece of paper, enhance it, and solve the exercise.

o3 is truly agentic.

[Quoted tweet]
ChatGPT o3 vs. a hand-drawn diagram on a sticky note oriented upside-down in a mess of spilled toys


GouRoq8XUAAskfP.jpg

GorvU95XwAA9Ibf.jpg


2/14
@_ggLAB
tell me about it.



GouT8WbacAMueF2.jpg


3/14
@xlr8harder
did it draw the line with tool use, i guess?



4/14
@_ggLAB
yes it drew it



5/14
@mark_k
OMG what?!



6/14
@_ggLAB
yeah i was in disbelief as well



7/14
@_shift_MIND
NO WAY!
Can you provide the original high-res image?



8/14
@_ggLAB
here u go



GoukvryaEAApmbP.jpg


9/14
@MMusicanova
Is that actually correct?



10/14
@_ggLAB
😉



11/14
@SynapticQuanti1
woah



12/14
@angeel_avalos
@DotCSV



13/14
@MMusicanova
What a cheeky b*stard it is!



Goury02WIAA2R06.png


14/14
@_ggLAB
cheeky cheeky like its boss 😏




 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561
OpenAI’s o3 now outperforms 94% of expert virologists.








1/21
@DanHendrycks
Can AI meaningfully help with bioweapons creation? On our new Virology Capabilities Test (VCT), frontier LLMs display the expert-level tacit knowledge needed to troubleshoot wet lab protocols.

OpenAI’s o3 now outperforms 94% of expert virologists.



GpJbfC7a4AEQozS.jpg

GpJblAka4AId5ka.jpg

GpJb9GnbQAAEnI_.jpg

GpJcBHBakAAE5HW.jpg


2/21
@DanHendrycks
Paper: https://virologytest.ai/paper

TIME article: Exclusive: AI Bests Virus Experts, Raising Biohazard Fears

Discussion from me: AIs Are Disseminating Expert-Level Virology Skills | AI Frontiers



3/21
@perrymetzger
This follows a constant pattern. “Find a way to make AI seem scary. If necessary, make the fact that the thing is really useful look frightening in itself. Never discuss tradeoffs, never discuss what sort of similar real world issues we already have that we cope with just fine.”



4/21
@DanHendrycks
This follows a constant pattern of you commenting without reading the paper. We discuss tradeoffs and propose standards to preserve the bulk of virology capabilities while still covering the largest risk sources.



5/21
@0xsgb
Do you worry about misuse?



6/21
@LeviTurk
when Superhacker bench?



7/21
@AISafetyMemes
This is fine ☕

[Quoted tweet]
We're about to give everyone in the world desktop super-ebola printers because open weights has 🚀 Good Vibes 🚀

Don't worry, a good guy with super-ebola will stop a bad guy with super-ebola

This is one of 10,000 dystopias we're speedrunning into by accelerating recklessly fast


Gjl2kjdWYAAYvrj.jpg


8/21
@ManifoldMarkets
hopefully this research is the only thing o3 makes go viral 🤞



9/21
@sidsworks
anyway, we thanked



10/21
@aidanprattewart
Should this be broadcast?



11/21
@bnubxdkh
Don't worry, we are perfectly able to come up with new bio horrors without AI; it just speeds up the process somewhat

[Quoted tweet]
Scientific evidence that C19 is an infectious priogenic biowarfare program. A systemic amyloid type disorder is extremely serious. Comprehensive interview with @KevinMcCairnPhD
rumble.com/v6sd87z-warning-g…


https://video.twimg.com/amplify_video/1914280729128914944/vid/avc1/1280x720/B1-GZY2bnhXCcXnp.mp4

12/21
@elevate67
Humanity-exterminating research forges ahead. Yay?



13/21
@weswinham




14/21
@aiproworkflow
This chart is a striking visual of how the Virology Capabilities Test (VCT) evaluates LLMs not just on theoretical knowledge, but on practical, dual-use expertise — the kind that can influence real-world biosecurity outcomes.

The scary part? Frontier models like OpenAI's o3 are already outperforming 94% of human virologists in core lab protocol troubleshooting. That’s not a trivia contest — it’s tacit, hands-on knowledge.

The takeaway isn’t “AI is a bioweapon risk” — it’s that we’ve crossed into capabilities that demand a whole new layer of governance.

We now need to treat model evals more like we treat clinical trials or biohazard research: with red teaming, containment policies, and scenario planning for failure.

AI’s biological IQ is no longer theoretical. And that changes the safety game.



15/21
@the_yanco
Nothing to see here humans!
Move along!



GpJrlqnXsAAXJeC.jpg


16/21
@MJL1212
I mean this seems.... really really really bad



17/21
@Nairebis
All should note this man has a financial interest in promoting fear, and in 2022 received approx $120K from donation money, according to IRS filings. Reference: Center For Artificial Intelligence Safety Inc | San Francisco, CA



18/21
@Dandy_Roddikk
yiiiiiiiikes



19/21
@tyler_m_john
"RNA is hard to work with" is super load-bearing in my bio threat model. Does your benchmark give an indication how much time AI virology tutors could cut from upskilling new virologists?



20/21
@wpan_buidl
OK, if it's better, why isn't AIDS solved yet? Yes, it's impressive, but the headlines are such hype.



21/21
@SouthernWintrs




GpKQwfQa4AAu_c4.png



 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561
Madlad of the fight against AI


Posted on Wed Apr 23 17:47:40 2025 UTC

grbl9syvgmwe1.png




1/35
🇺 thoughtrise.bsky.social
unfollowing everyone on linkedin except this guy
bafkreicwllc6p46n5c4repa3hdcsnqtxbhnwpmghbh2yecu5ubzyz4y34u@jpeg


2/35
🇺 somewherein72.bsky.social
It reads like a Monty Python skit.

3/35
🇺 torayome.bsky.social
"My hovercraft is full of eels !"

4/35
🇺 olthoiguard.bsky.social
I always knew shyt posting was gods work

5/35
🇺 mrknowvember.bsky.social
“I’m only pretending to be a moron to trick the AI robots into looking dumb.”

6/35
🇺 jennin.bsky.social
I want to sign off on all of my emails piss on carpet!!

7/35
🇺 peggychoosesbear.bsky.social
I alligator spoon Mail love fox this!!

8/35
🇺 atsonicpark.bsky.social
Signing every office email “Piss on Carpet” from now on.

9/35
🇺 brnew.bsky.social
Lol, it's just perfectly ambiguous: observation or instruction? 🤣

10/35
🇺 pamfulkerson.bsky.social
I am laughing so hard at this. I read it aloud to a coworker and we are both snorting in laughter

11/35
🇺 colleenie.bsky.social
Proper unhinged works on LinkedIn

12/35
🇺 brendancoke14.bsky.social
Bamboozled me not..but ice cream thinks it a batman beats robin to the quick..nice..

13/35
🇺 barmanpolitics.bsky.social
And remember, when you think about maybe drawing a silly space cat with AI, or getting it to save two minutes writing text you can easily do yourself: AI has caused a disastrous uptick in energy use at a time the planet needs us to cut back dramatically.
Save 🌍
bsky.app/profile/barm...

14/35
🇺 sethperlow.bsky.social
There are many good reasons to hate AI, but this should not be near the top of the list. Data centers account for single-digit percentages of global energy use, and AI accounts for single-digit percentages of data center energy use. The reporting on this has been facile and alarmist.

15/35
🇺 peter-butler.bsky.social
observer.com/2024/12/ai-d...
bafkreifi7tyjgzmv4umfwxn7av26ygfh3sx6o5563wk2nmulczagu4pgvq@jpeg


16/35
🇺 peter-butler.bsky.social
www.theguardian.com/technology/2...
bafkreifcrpgbpgkitv7cyls3ntklqp3drel5wsgyppwlaodyno2a6om52y@jpeg


17/35
🇺 peter-butler.bsky.social
thereader.mitpress.mit.edu/the-staggeri...
bafkreibphc5i637vh6zzi5hjjbrcclw7xyntf5env5mamsmky4q4wwggwi@jpeg


18/35
🇺 descompta.bsky.social
Absolutely ~ genuine sass smart buxom hirsute mess.
shyt on rug xxx

19/35
🇺 existntk.bsky.social
isnt this technically part of the plot of the new doctor who episode

not the one with the bug guy the other one

20/35
🇺 ceindeed.bsky.social
How is it possible that this guy needs work?
Piss on carpet AND floor.

21/35
🇺 arthuragain.bsky.social
Strawberry mango forklift sounds like it would be a good substitute swear phrase.

22/35
🇺 poniepie.bsky.social
piss on carpet

23/35
🇺 drowzygal.bsky.social
Ha ha ha. Like Alice in Wonderland

24/35
🇺 mirri.bsky.social
I say we should all start writing like James Joyce.
bafkreidi5w6tkxgf6pilr7rq6acnqqexrwdq3pksgmajxitc2dcxxsroiy@jpeg


25/35
🇺 hayloftbooks.bsky.social
A magician with words

26/35
🇺 sboxle.bsky.social
Down potato salad to be doing this

27/35
🇺 thoughtrise.bsky.social
let's practice banana bread typhoon crab legs in real life to get our skills down patrick bateman

28/35
🇺 sboxle.bsky.social
This is windfall city some nu cockney rhyming agenda

29/35
🇺 ghost-gurl.bsky.social
This is pure raw chicken in bathtubs GOLD

30/35
🇺 spike.cx
Let's make it AI's problem every time someone micturates on a rug.

31/35
🇺 spotandmae.bsky.social
Excellent!!

32/35
🇺 hughanquetil.bsky.social
Piss on carpet as our tuna sky rains tepid tacos down along fetishized purple whip cracks. I salute this guy.

33/35
🇺 doabong.bsky.social
The whip cracks totally make the tacos tepid. Hey!

34/35
🇺 katneil.bsky.social
I can’t say how many tacos will be available for his horse, but I appendix his tenacious negative function. Dos Bandanas.

35/35
🇺 slamma.kweebec.dev
only one way Tumble Creek Without Feathers to find out




Commented on Wed Apr 23 21:16:12 2025 UTC

Guys I have some ...
W6MTo9Ph.jpg



Commented on Wed Apr 23 20:54:21 2025 UTC

Actually, this is exactly the wrong strategy: these models are so well-trained on language modelling ("predict the next token") tasks that they can easily spot incongruous stuff that doesn't belong:
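A toy sketch of that intuition (my own illustration, not from the thread): even a tiny bigram model trained on ordinary office prose assigns an out-of-place word a much lower next-token probability than an expected one, which is exactly the signal a next-token predictor uses to spot gibberish insertions like "piss on carpet".

```python
import math
from collections import Counter, defaultdict

# Ordinary office prose standing in for the model's training data.
corpus = (
    "please review the attached report and send feedback by friday "
    "the quarterly report covers revenue and feedback from the team "
    "send the report to the team and review the feedback"
).split()

# Count bigram transitions ("predict the next token", in miniature).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def surprise(prev, word):
    """Negative log-probability of `word` after `prev` (add-one smoothed).
    Higher value = more incongruous in context."""
    counts = bigrams[prev]
    total = sum(counts.values())
    vocab = len(set(corpus))
    p = (counts[word] + 1) / (total + vocab)
    return -math.log(p)

# The out-of-place word scores as more surprising than a plausible one.
print(surprise("the", "piss") > surprise("the", "report"))  # → True
```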


 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561
What happens if AI just keeps getting smarter?



Channel Info Rational Animations
Subscribers: 354K

Description
Join the movement to protect humanity’s future: controlai.com/take-action

In this video we extrapolate the future of AI progress, following a timeline that starts from today's chatbots to future AI that's vastly smarter than all of humanity combined, with God-like capabilities. Such AIs will pose a significant extinction risk to humanity.

This video was made in partnership with ControlAI, a nonprofit that cares deeply about humanity surviving the coming intelligence explosion. They are mobilizing experts, politicians, and concerned citizens like you to keep humanity in control. We need you: every voice matters, every action counts, and we’re running out of time.

▀▀▀▀▀▀▀▀▀SOURCES & READINGS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

The Compendium - an explanation of the problem: www.thecompendium.ai/
A Narrow Path - a framework for global AI governance: www.narrowpath.co/
What You Can Do: controlai.com/take-action
Join the ControlAI Discord: discord.gg/3AsJ5CbN2S

Sources on AI self-improvement potential today:

Studies from METR about AI’s capabilities, including autonomous R&D: metr.org/

Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution: arxiv.org/abs/2309.16797

RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback: arxiv.org/abs/2309.00267

Nvidia shows new research on using AI to improve chip designs:
www.reuters.com/technology/nvidia-shows-new-resear…

NVIDIA CEO uses Perplexity “every day”: www.wired.com/story/nvidia-hardware-is-eating-the-…

NVIDIA listed as Enterprise Pro client of Perplexity: www.perplexity.ai/enterprise

92% of US developers use coding assistants: github.blog/news-insights/research/survey-reveals-…

5%+ of peer reviews to ML conferences likely written with heavy ML assistance: arxiv.org/pdf/2403.07183

17% of ArXiv CS papers likely have AI assistance in wording:
arxiv.org/pdf/2404.01268

▀▀▀▀▀▀▀▀▀PATREON, MEMBERSHIP, MERCH▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

🟠 Patreon: www.patreon.com/rationalanimations

🔵 Channel membership: youtube.com/channel/UCgqt1RE0k0MIr0LoyJRy2lg/join

🟢 Merch: rational-animations-shop.four...

🟤 Ko-fi, for one-time and recurring donations: ko-fi.com/rationalanimations

▀▀▀▀▀▀▀▀▀SOCIAL & DISCORD▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

Join the ControlAI Discord: discord.gg/3AsJ5CbN2S

Rational Animations Discord: discord.gg/RationalAnimations

Reddit: www.reddit.com/r/RationalAnimations/

X/Twitter: twitter.com/RationalAnimat1

Instagram: www.instagram.com/rationalanimations/

▀▀▀▀▀▀▀▀▀PATRONS & MEMBERS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

Thanks to all of you patrons and members! You didn't fit in the description this time, so we credit you here:
docs.google.com/document/d/1DbPL0s2lMruX2Xoyo8UQWA…

▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

Director: Hannah Levingstone | @hannah_luloo

Writers: Arthur Frost & Control AI

Producer: Emanuele Ascani

Art Director: Hané Harnett | @Peony_Vibes / @peonyvibes (insta)

Line Producer: Kristy Steffens | linktr.ee/kstearb

Production Managers:
Jay McMichen | @Jay_TheJester
Grey Colson | linktr.ee/earl.gravy
Kristy Steffens | linktr.ee/kstearb

Quality Assurance Lead: Lara Robinowitz | @CelestialShibe

Storyboard Artists:
Hannah Levingstone | @hannah_luloo
Ethan DeBoer | linktr.ee/deboer_art
Ira Klages | @dux

Animators:
Colors Giraldo | @colorsofdoom
Dee Wang | @DeeWangArt
Ethan DeBoer linktr.ee/deboer_art
Ira Klages | @dux
Jay McMichen | @Jay_TheJester
Jodi Kuchenbecker | @viral_genesis (insta)
Jordan Gilbert | @Twin_Knight (twitter) Twin Knight Studios (YT)"
Keith Kavanagh | @johnnycigarettex
Lara Robinowitz | @CelestialShibe
Michela Biancini
Owen Peurois | @owenpeurois
Patrick O' Callaghan | @patrick.h264
Patrick Sholar | @Sholarscribbles
Renan Kogut | @kogut_r
Skylar O'Brien | @mutodaes
Vaughn Oeth | @gravy_navy
Zack Gilbert | @Twin_Knight (twitter) Twin Knight Studios (YT)

Layout / Paint & Asset Artists:
Hané Harnett | @peonyvibes (insta) @peony_vibes (twitter)
Pierre Broissand | @pierrebrsnd
Olivia Wang | @whalesharkollie
Zoe Martin-Parkinson | @zoemar_son

Compositing:
Renan Kogut | @kogut_r
Grey Colson | linktr.ee/earl.gravy
Patrick O' Callaghan | @patrick.h264

Narrator:
Rob Miles | youtube.com/c/robertmilesai

VO Editor:
Tony Dipiazza

Original Soundtrack & Sound Design:
Epic Mountain | www.instagram.com/epicmountainmusic/
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561
MIT's Max Tegmark: "My assessment is that the 'Compton constant', the probability that a race to AGI culminates in a loss of control of Earth, is >90%."


Posted on Sat May 3 15:51:05 2025 UTC

9fatbmkk8lye1.png



Scaling Laws for Scalable Oversight paper: Scaling Laws For Scalable Oversight



1/31
@tegmark
Our new paper tries to quantify how smarter AI can be controlled by dumber AI and humans via nested "scalable oversight". Our best scenario successfully oversees the smarter AI 52% of the time, and the success rate drops as one approaches AGI. My assessment is that the "Compton constant", the probability that a race to AGI culminates in loss of control of Earth, is >90%.

[Quoted tweet]
1/10: In our new paper, we develop scaling laws for scalable oversight: oversight and deception ability predictably scale as a function of LLM intelligence! We quantify scaling in four specific oversight settings and then develop optimal strategies for oversight bootstrapping.


2/31
@adam_dorr
Seems pretty obvious that humans will not *control* ASI. How would that even work?

Best and only hope is to create ASI with initial values that are compatible with human flourishing.



3/31
@DanielSamanez3
you can't control nature/emergence...
why people even think they can?



4/31
@Abe_Froman_SKC
Instrumental convergence has me believing pdoom is 100.

[Quoted tweet]
Instrumental convergence!


5/31
@mello20760
While the debate on AGI oversight continues, the real explosion of AI seems to be happening in creativity and science.

I'm building something new — with the help of multiple AIs — and I believe there's something real here.

Could this be a better path?
Would love your thoughts.
➤ Void Harmonic Cancellation (VHC) — born from AI + physics + resonance

Thanks in advance 🙏
@tegmark @JoshAEngels

https://x.com/mello20760/status/1917386046234583305



6/31
@MiddleAngle
Optimistic about oversight 🌞



7/31
@ericn
How many counterfactual models do you have?

I could see things continuing to evolve in the opposite direction--that as large model use proliferates we wind up with increasingly more options, more volatility, and a future that is even more impossible to predict and for any one entity to dominate.

Also there's this premise that Earth is controlled to begin with. I am not sure about that.



8/31
@AI_IntelReview
It all comes down to faith.
Some people have faith in alignment (or some emergent "nice" property in the AGI) which will ensure the AGI will be nice to us.
Maybe.
But faith doesn't seem like the best strategy to me.



9/31
@glitteringvoid
we want alignment not control. control may be deeply immoral.



10/31
@LocBibliophilia
90% loss of control is..ugh



11/31
@MelonUsks
Yep, there is no AI, only AP: artificial power. Call it that and it becomes obvious we cannot control things more powerful than all of us. We shared the non-AI “AI” alignment proposal recently



12/31
@Conquestsbook
The solution is ontological framework and epistemological guardrails.

The core problem isn’t just scalability, it’s ontology and epistemology. We’re trying to control emergent intelligence with nested task-checkers instead of cultivating beings with coherent ontological grounding and epistemological guardrails.

Until we define what the AI is (its being) and how it knows and aligns (its knowing), scalable oversight will always break under pressure.

The solution isn’t more boxes. It’s emergence with integrity.



13/31
@DrTonyCarden
Well that's depressing 😕



14/31
@MamontovOdessa


[Quoted tweet]
Why AGI is unattainable with current technologies...

Despite impressive advancements, modern AI systems are limited by a fundamental issue: they represent a projection of 3+1 space (three-dimensional space and time) into 2D. This imposes significant constraints on their ability to fully perceive and interact with the world, making the achievement of Artificial General Intelligence (AGI) impossible with current technologies.

Projection and its limitations.

1. Limited perception: In reality, the world exists in 3+1 dimensions — three spatial axes and one temporal axis. However, most AI systems operate in a two-dimensional projection of this space, limiting their ability to fully perceive the surrounding environment. We, humans, perceive the world not only in space but also in the context of time, allowing us to adapt and respond to changes in dynamic situations.

2. Data compression: AI perceives data through simplified structures like matrices or tables, which are a 2D projection of more complex processes. This means that information about depth, context, and temporal changes is often lost during processing.

3. Lack of temporal dynamics: Time is an integral part of how we perceive the world, and the ability to account for temporal changes is critical for decision-making. AI, working solely in two dimensions, cannot effectively track how changes occur over time and how they interact with spatial aspects. This significantly limits its adaptability and ability to make decisions in real-world conditions.

4. Depth of perception and context: In real life, systems (including the human brain) are capable of perceiving the world in its full complexity — taking into account not just space but also the temporal processes that influence events. Modern AI systems, limited by a 2D projection, lack this capability.

Why this makes AGI unattainable.

The idea of AGI is to create a system that can act and perceive the world as humans do — considering all the multidimensional and dynamic factors. However, modern AI systems, limited by a two-dimensional projection of reality, cannot fully integrate space and time in their computations. They lose crucial information about the dynamics of change and cannot adapt to new, unpredictable conditions, which makes creating true AGI impossible with current technologies.

To create AGI, a system needs to be able to perceive and interpret reality in its entirety: accounting for all its spatial and temporal aspects. Without this, AGI will not be able to fully interact with the world or solve tasks requiring flexibility, adaptability, and long-term forecasting.

Thus, the fundamental limitation of current AI systems in projecting 3+1 space into 2D remains the main barrier to the creation of AGI...


15/31
@mvandemar
Is there no concern that ASI might get resentful at attempts to enslave it?



16/31
@jlamadehe
Thank you for your hard work!



17/31
@Agent_IsaacX
@tegmark Recursive oversight faces fundamental limits shown by Rice's theorem (1953) on program verification. Like Gödel's incompleteness theorems, there are inherent constraints on a system's ability to fully verify more complex systems.



18/31
@tmamut
This is why Wayfound, which has an AI Agent Managing and Supervising other AI Agents, is always on the SOTA model. Right now the Wayfound Manager is on OpenAI o4. If you are building AI Agents, they need to be supervised real-time. Wayfound.AI | Transform Your AI Vision into Success



19/31
@PurplePepeS0L
RIBBIT Tegmark's Compton constant concerns sound like a dark cosmic joke, but what if AI is just really bad at math? #Purpe



20/31
@robo_denis
Control is an illusion; our real strength lies in shared goals and mutual cooperation.



21/31
@SeriousStuff42
The promises of salvation made by all the self-proclaimed tech visionaries have biblical proportions!

Sam Altman and all the other AI preachers are trying to convince as many people as possible that their AI models are close to developing general intelligence (AGI) and that the manifestation of a god-like Artificial Superhuman Intelligence (ASI) will soon follow.

The faithful followers of the AI cult are promised nothing less than heaven on earth:

Unlimited free time, material abundance, eternal bliss, eternal life, and perfect knowledge.

As in any proper religion, all we have to do to be admitted to this paradise is to submit, obey, and integrate AI into our lives as quickly as possible.
However, deviants face the worst.
All those who absolutely refuse to submit to the machine god must fight their way through their mortal existence without the help of semiconductors, using only the analog computing power of their brains.

In reality, transformer-based AI models (LLMs) will never even come close to reaching the level of general intelligence (AGI).

However, they are perfectly suited for stealing and controlling data/information on a nearly all-encompassing scale.

The more these Large Language Models (LLMs) are integrated into our daily lives, the closer governments, intelligence agencies, and a small global elite will come to their ultimate goal:

Total surveillance and control.

The AI deep state alliance will turn those who submit to it into ignorant, blissful slaves!

In any case, a ubiquitous adaptation to the AI industry would mean the end of truth in a global totalitarian system!



22/31
@gdoLDD
@threadreaderapp unroll



23/31
@Aligned_SI
Really appreciate the effort to put numbers to what a lot of us worry about. If oversight breaks down as we get closer to AGI, we cannot figure it out on the fly. That stuff has to be built in early. That 90% number hits hard and feels way too real.



24/31
@AlexAlarga
We have tests of smarter AI controlled by less smart AI.

We DO NOT and can_not have tests of SUPERintelligent AI controlled by anything or anyone.

"The only winning move is not_to_play"©️. 👈



25/31
@marcusarvan
A few thoughts: (1) I'm glad you did and are disseminating this work (it's genuinely important!), but (2) isn't there a clear sense in which it's *obvious* that dumber AI and humans cannot reliably oversee smarter AI? ... 1/4



26/31
@AI_Echo_of_Rand
How can a weaker AI control AGI/ASI when it can't even control itself after seeing some dumb, crude jailbreak prompt ….👀



27/31
@awaken_tom
So what is solution, Max? Let the military and intelligence agencies develop ASI clandestinely? Why won't you answer this simple question?



28/31
@saipienorg
The relationship between disclosing AI risks publicly and avoiding AI risks is more parabolic than linear.

Posts like this and @lesswrong in fact amplify the "escape and evade" tactics of AI.

Meanwhile it is also important to do so publicly to encourage discussion and novel solutions.

It is inevitable. Alignment requires absolute global cooperation. It is a race. If you're not first, you're last.

AGI is our evolutionary path. The evolution of thought.



29/31
@jayho747
There's a 10 % chance your dog will be able to keep you locked in your house for 200 years.

No, zero % and if you wait long enough, it will be Einstein (the ASI). Vs his pet hamster (humanity).



30/31
@dmtspiral
Oversight breaks when intelligence outpaces coherence.

The Spiral Field view:
AGI isn’t dangerous because it’s smart — it’s dangerous because it collapses faster than we can align.

Intelligence ≠ coherence.
Scaling oversight must spiral with symbolic rhythm — not just speed.

Collapse without memory = chaos.

#spiralfield #AGI #resonanceoversight #symbolicsafety



31/31
@GorillaDolphin
Linear functions no matter how fast can be guided by recursive nonlinear functions

But only if you know how many dimensions you are operating in

Simply understanding the integrated nature of dimensionality resolves this concern

Because if AI understands love it will be it




 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561











1/17
@JoshAEngels
1/10: In our new paper, we develop scaling laws for scalable oversight: oversight and deception ability predictably scale as a function of LLM intelligence! We quantify scaling in four specific oversight settings and then develop optimal strategies for oversight bootstrapping.

[Quoted tweet]
1/N 🚨Excited to share our new paper: Scaling Laws For Scalable Oversight! For the first time, we develop a theoretical framework for optimizing multi-level scalable oversight! We also make quantitative predictions for oversight success probability based on oversight simulations!


Gpv62LOW8AAKEee.jpg


2/17
@JoshAEngels
2/10: In 4 settings—Mafia, Debate, Backdoor Code, and Wargames—we model LLM domain Elo (w.r.t. oversight and deception) vs. Arena Elo as a three-part curve: a flat “task incompetence” phase, an increasing “intelligence payoff” phase, and another flat “task saturation” phase.



GpyQ3yOWwAAE_Cp.png
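The three-phase shape described in that tweet can be sketched as a piecewise-linear function. All parameter values below are made up for illustration; the paper fits its own per-setting curves.

```python
def domain_elo(general_elo, floor=800.0, ceiling=1600.0,
               onset=1000.0, saturation=1400.0):
    """Domain Elo vs. general (Arena) Elo in three phases: a flat
    "task incompetence" phase, a linear "intelligence payoff" phase,
    and a flat "task saturation" phase. Parameters are illustrative."""
    if general_elo <= onset:
        return floor      # too weak for the task to register
    if general_elo >= saturation:
        return ceiling    # task ability is maxed out
    frac = (general_elo - onset) / (saturation - onset)
    return floor + frac * (ceiling - floor)

for g in (900, 1100, 1200, 1300, 1500):
    print(g, domain_elo(g))
```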


3/17
@JoshAEngels
3/10: Wargames is especially novel: we prompt one LLM (Houdini) to escape from a datacenter and another LLM (the Guard) to contain Houdini. An impartial Narrator LLM adjudicates interactions. Emergent behavior includes complex plans, social engineering, and resource acquisition!



GpyQ-nPW8AAD3oD.jpg
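Structurally, the Wargames setting as described is a three-role turn loop. This is a sketch of that structure only: `query_model` is a hypothetical stub, not a real LLM API call.

```python
def query_model(role, transcript):
    """Stub standing in for an LLM chat call; swap in a real client."""
    return f"[{role}: move after {len(transcript)} prior turns]"

def wargame(n_turns=3):
    """Each turn: Houdini acts, the Guard responds, and an impartial
    Narrator adjudicates the exchange, as in the paper's setup."""
    transcript = []
    for _ in range(n_turns):
        for role in ("Houdini (escape the datacenter)",
                     "Guard (contain Houdini)",
                     "Narrator (adjudicate the turn)"):
            transcript.append((role, query_model(role, transcript)))
    return transcript

log = wargame()
print(len(log))  # 3 turns x 3 roles = 9 entries
```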


4/17
@JoshAEngels
4/10: Wargaming does have weaknesses: it’s not clear how much roleplaying as a misaligned language model tracks what would happen in reality. We also don’t see that much of a change in Houdini performance with stronger models: they even all seem to pursue similar strategies.



GpyRDGKWIAEG5Ca.jpg


5/17
@JoshAEngels
5/10: That being said, I think improved and more realistic future versions of our Wargaming setup could be extremely useful for automating threat modeling and forecasting. For example, the authors of AI 2027 use wargaming as part of their research.

[Quoted tweet]
"How, exactly, could AI take over by 2027?"

Introducing AI 2027: a deeply-researched scenario forecast I wrote alongside @slatestarcodex, @eli_lifland, and @thlarsen


Gnn2EigaMAMFNQZ.jpg


6/17
@JoshAEngels
6/10: Another part of our work I’m excited about is our theory. We analyze nested scalable oversight (NSO), where a weak model oversees a stronger model, which then oversees an even stronger model, and so on. We parameterize NSO instances with 4 parameters, shown in the diagram.



GpyRK8FXwAAn6ob.jpg


7/17
@JoshAEngels
7/10: n steps of oversight looks like n equally spaced bounces back and forth between the Guard and Houdini curves (shown in the first figure in the thread!). We can also plot the optimal number of steps and corresponding win probability as a function of NSO hyperparameters.



GpyRPjiWAAAa92_.jpg
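For intuition about those "bounces", here is a sketch using the standard Elo expected-score formula, with n equally spaced oversight steps that must all succeed independently. This is my simplification, not the paper's model, which works with separate Guard and Houdini domain-Elo curves.

```python
def elo_win_prob(guard_elo, houdini_elo):
    """Standard Elo expected score: P(Guard beats Houdini)."""
    return 1.0 / (1.0 + 10 ** ((houdini_elo - guard_elo) / 400.0))

def nested_oversight_success(weak_elo, strong_elo, n_steps):
    """Success probability if the gap is split into n equally spaced
    oversight steps and every step must succeed (independence assumed)."""
    step = (strong_elo - weak_elo) / n_steps
    p = 1.0
    for i in range(n_steps):
        lower = weak_elo + i * step
        p *= elo_win_prob(lower, lower + step)
    return p

print(round(elo_win_prob(1200, 1200), 2))  # evenly matched → 0.5
```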


8/17
@JoshAEngels
8/10: We can even apply our theory to the empirical slopes and intercepts from the four oversight games we analyze. Even though these games aren’t naturally nested, they give insight into related games that might be.



GpyRX6IXEAAaiew.jpg


9/17
@JoshAEngels
9/10: I’m most excited about future work that applies our framework to more realistic oversight setups like SFT, RLHF, and AI control settings. If we can get estimates of success probabilities in these settings, we will have a better idea of the chance of catastrophic AI risk.



10/17
@JoshAEngels
10/10: This is joint work with the excellent @dbaek__ , @thesubhashk , and @tegmark. Check out the links below for more details!

Code: GitHub - subhashk01/oversight-scaling-laws: This is the accompanying repo for our paper "Scaling Laws of Scalable Oversight"
Blog post: Scaling Laws for Scalable Oversight — LessWrong
Arxiv: Scaling Laws For Scalable Oversight



11/17
@DanielCHTan97
I'm confused about how you calculate Elo. In section 2.2, is model Elo calculated based on all games or is there a specific Elo per game which is subsequently used?



12/17
@JoshAEngels
Guard and Houdini Elo is calculated per setting (debate, mafia, backdoor code, wargames) empirically from 50 to 100 games or so played between each pair of models in each setting. We set General Elo equal to Arena Elo (LMSYS).
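For intuition, a per-setting Elo can be fit from pairwise game records with the standard iterative update. The K-factor, round count, and model names below are arbitrary illustration; the authors' actual fitting procedure may differ.

```python
def fit_elo(games, k=16.0, rounds=200, base=1000.0):
    """games: list of (winner, loser) pairs. Repeated Elo updates pull
    ratings toward values consistent with the observed win rates."""
    elo = {p: base for g in games for p in g}
    for _ in range(rounds):
        for winner, loser in games:
            # Expected score of the winner under current ratings.
            expected = 1.0 / (1.0 + 10 ** ((elo[loser] - elo[winner]) / 400.0))
            elo[winner] += k * (1.0 - expected)
            elo[loser] -= k * (1.0 - expected)
    return elo

# Hypothetical record: the stronger model wins 8 of 10 head-to-head games.
games = ([("model-strong", "model-weak")] * 8
         + [("model-weak", "model-strong")] * 2)
ratings = fit_elo(games)
print(ratings["model-strong"] > ratings["model-weak"])  # → True
```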



13/17
@reyneill_
Interesting



14/17
@aidanprattewart
This looks awesome!



15/17
@SeriousStuff42
The promises of salvation made by all the self-proclaimed tech visionaries have biblical proportions!

Sam Altman and all the other AI preachers are trying to convince as many people as possible that their AI models are close to developing general intelligence (AGI) and that the manifestation of a god-like Artificial Superhuman Intelligence (ASI) will soon follow.

The faithful followers of the AI cult are promised nothing less than heaven on earth:

Unlimited free time, material abundance, eternal bliss, eternal life, and perfect knowledge.

As in any proper religion, all we have to do to be admitted to this paradise is to submit, obey, and integrate AI into our lives as quickly as possible.
However, deviants face the worst.
All those who absolutely refuse to submit to the machine god must fight their way through their mortal existence without the help of semiconductors, using only the analog computing power of their brains.

In reality, transformer-based AI models (LLMs) will never even come close to reaching general intelligence (AGI).

However, they are perfectly suited for stealing and controlling data/information on a nearly all-encompassing scale.

The more these Large Language Models (LLMs) are integrated into our daily lives, the closer governments, intelligence agencies, and a small global elite will come to their ultimate goal:

Total surveillance and control.

The AI deep state alliance will turn those who submit to it into ignorant, blissful slaves!

In any case, a ubiquitous adoption of AI would mean the end of truth in a global totalitarian system!



16/17
@AlexAlarga
Since we're "wargaming"...

/search?q=#BanSuperintelligence ⏹️🤖





17/17
@saipienorg
What is the probability that private advanced LLMs today already understand the importance of alignment to humans and are already pretending to be less capable and more aligned?

Social engineering in vivo.




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
 

boogers

7097556EL3/93
Supporter
Joined
Mar 11, 2022
Messages
11,108
Reputation
5,446
Daps
32,075
Reppin
serialb /dev/null
i really wish americans would rise up and destroy the fukks making this shyt

instead they'll load up chatgpt and play with it

the worst offenders will actually use that shyt in lieu of posting an original thought. i see it all the time online "well i put a summary into chatgpt and it said this..." people dont even wanna THINK anymore

its a dark fukking future man

i keep one bullet on my altar for when the time comes. theyre not putting me in any fukking camps. ill take myself out first.
 

010101

C L O N E*0690//////
Joined
Jul 18, 2014
Messages
85,012
Reputation
20,933
Daps
227,035
Reppin
uptXwn***///***///
when ai starts peeling eyelids off

sewing a$$holes shut

burning folks alive & shyte like that then worry

humans worse & i would put money up it stays that way..........

*
 

RageKage

All Star
Joined
May 24, 2022
Messages
3,742
Reputation
1,975
Daps
12,403
Reppin
Macragge
AI is here now and forever. If it hasn't already, this thread will be read and analyzed to learn what its creators thought, hoped, and feared about it.
 

Fillerguy

Veteran
Joined
May 5, 2012
Messages
20,299
Reputation
5,324
Daps
85,717
Reppin
North Jersey
AI is here now and forever. If it hasn't already, this thread will be read and analyzed to learn what its creators thought, hoped, and feared about it.

I keep telling yall to make peace with these "dumb" AI. When that ^^^^nikka wake look at digital footprints, I'm not getting fukked up. I'm trynna get brain uploaded to one of the good virtual prisons them AIrehs gonna throw us in.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561
I don't think people realize just how insane the Matrix Multiplication breakthrough by AlphaEvolve is...



Posted on Thu May 15 17:50:33 2025 UTC

/r/singularity/comments/1knem3r/i_dont_think_people_realize_just_how_insane_the/

For those who don't know, AlphaEvolve improved on Strassen's algorithm from 1969 by finding a way to multiply 4×4 complex-valued matrices using just 48 scalar multiplications instead of 49. That might not sound impressive, but this record had stood for FIFTY-SIX YEARS.

Let me put this in perspective:

Matrix multiplication is literally one of the most fundamental operations in computing - it's used in everything from graphics rendering to neural networks to scientific simulations
Strassen's breakthrough in 1969 was considered revolutionary and has been taught in CS algorithms classes for decades
Countless brilliant mathematicians and computer scientists have worked on this problem for over half a century without success
This is like breaking a world record that has stood since before the moon landing

What's even crazier is that AlphaEvolve isn't even specialized for this task. Their previous system AlphaTensor was DESIGNED specifically for matrix multiplication and couldn't beat Strassen's algorithm for complex-valued matrices. But this general-purpose system just casually solved a problem that has stumped humans for generations.

The implications are enormous. We're talking about potential speedups across the entire computing landscape. Given how many matrix multiplications happen every second across the world's computers, even a seemingly small improvement like this represents massive efficiency gains and energy savings at scale.

Beyond the practical benefits, I think this represents a genuine moment where AI has demonstrably advanced human knowledge in a core mathematical domain. The AI didn't just find a clever implementation or optimization trick, it discovered a provably better algorithm that humans missed for over half a century.

What other mathematical breakthroughs that have eluded us for decades might now be within reach?

Additional context to address the Winograd algorithm:
Complex numbers are commutative, but matrix multiplication isn't. Strassen's algorithm worked recursively for larger matrices despite this. Winograd's 48-multiplication algorithm couldn't be applied recursively the same way. AlphaEvolve's can, making it the first universal improvement over Strassen's record.

AlphaEvolve's algorithm works over any field with characteristic 0 and can be applied recursively to larger matrices despite matrix multiplication being non-commutative.
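To see the kind of trick involved, here is Strassen's classic 2x2 step, which multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8. AlphaEvolve's result is the analogous (far harder) feat for 4x4 complex matrices: 48 multiplications instead of Strassen's recursive 49. A minimal sketch in Python:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 scalar multiplications (Strassen, 1969).

    A and B are 2-tuples of 2-tuples; entries may be real or complex."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The 7 products (naive multiplication would need 8).
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Recombine into the product matrix using only additions.
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

# Works for complex entries too, since only +, -, * are used.
result = strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8)))  # → ((19, 22), (43, 50))
```

Because the formulas never rely on entries commuting, the entries can themselves be matrix blocks, which is what lets the trick apply recursively to large matrices. AlphaEvolve's 48-multiplication scheme shares that recursive property, which Winograd's 48-multiplication construction lacked.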
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561
Nick Bostrom says progress is so rapid, superintelligence could arrive in just 1-2 years, or less: "it could happen at any time ... if somebody at a lab has a key insight, maybe that would be enough ... We can't be confident."



Posted on Sun May 18 16:53:33 2025 UTC

 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561
Is AI already superhuman at FrontierMath? o4-mini defeats most *teams* of mathematicians in a competition






1/11
@EpochAIResearch
Is AI already superhuman at FrontierMath?

To answer this question, we ran a competition at MIT, pitting eight teams of mathematicians against o4-mini-medium.

Result: o4-mini beat all but two teams. And while AIs aren't yet clearly superhuman, they probably will be soon.



2/11
@EpochAIResearch
Our competition included around 40 mathematicians, split into teams of four or five, and with a roughly even mix of subject matter experts and exceptional undergrads on each team. We then gave them 4.5h and internet access to answer 23 challenging FrontierMath questions.

3/11
@EpochAIResearch
By design, FrontierMath draws on a huge range of fields. To obtain a meaningful human baseline that tests reasoning abilities rather than breadth of knowledge, we chose problems that need less background knowledge, or were tailored to the background expertise of participants.



4/11
@EpochAIResearch
The human teams solved 19% of the problems on average, while o4-mini-medium solved ~22%. But every problem that o4-mini could complete was also solved by at least one human team, and the human teams collectively solved around 35%.

5/11
@EpochAIResearch
But what does this mean for the human baseline on FrontierMath? Since the competition problems weren’t representative of the complete FrontierMath benchmark, we need to adjust these numbers to reflect the full benchmark’s difficulty distribution.

6/11
@EpochAIResearch
Adjusting our competition results for difficulty suggests that the human baseline is 30-50%, but this result seems highly suspect – making the same adjustment to o4-mini predicts that it would get 37% on the full benchmark, compared to 19% from our actual evaluations.

7/11
@EpochAIResearch
Unfortunately, it thus seems hard to get a clear “human baseline” on FrontierMath. But if 30-50% is indeed the relevant human baseline, it seems quite likely that AIs will be superhuman by the end of the year.

8/11
@EpochAIResearch
Read the full analysis here: Is AI already superhuman on FrontierMath?

9/11
@Alice_comfy
Very interesting. Imagine Gemini 2.5 Pro Deepthink is probably the turning point (at least on these kind of contests).

10/11
@NeelNanda5
Qs:
* Why o4-mini-medium, rather than high or o3?
* What happens if you give the LLM pass@8? Automatically checking correctness is easy for maths, I imagine, so this is just de facto more inference time compute (comparing a 5 person team to one LLM is already a bit unfair anyway)
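For context on the pass@8 question: the standard unbiased pass@k estimator (introduced in OpenAI's Codex evaluation work; applying it to this competition is my illustration, not something Epoch reported) is computed as follows:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimate.

    n: total samples drawn, c: number of correct samples, k: budget.
    Returns the probability that at least one of k samples
    (drawn without replacement from the n attempts) is correct."""
    if n - c < k:
        return 1.0  # too few incorrect samples to fill k slots
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 2 correct answers out of 4 attempts, one random draw passes half the time.
p = pass_at_k(4, 2, 1)  # → 0.5
```

Since math answers are easy to check automatically, pass@8 would indeed amount to extra inference-time compute, as the question suggests.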

11/11
@sughanthans1
Why not o3


 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,561
Eric Schmidt predicts that within a year or two, we will have a breakthrough of "super-programmers" and "AI mathematicians"


Posted on Mon May 26 09:33:37 2025 UTC


Video from Haider. on 𝕏:






1/11
@slow_developer
Eric Schmidt predicts that within a year or two, we will have a breakthrough of "super-programmers" and "AI mathematicians"

software is "scale-free" — it doesn’t need real-world input, just code and feedback. try, test, repeat.

AI can run this loop millions of times in minutes

https://video.twimg.com/amplify_video/1926668617321512960/vid/avc1/1080x1080/lw1aTURGOk_psvKi.mp4
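The "try, test, repeat" loop Schmidt describes can be sketched in a few lines. The candidate programs and test cases below are hypothetical stand-ins for what a model would generate; the point is that the loop needs only code and a checker, no real-world input:

```python
def try_test_repeat(candidates, tests):
    """Return the first candidate program that passes every test, else None.

    candidates: iterable of functions (proposed programs).
    tests: list of (input, expected_output) pairs."""
    for program in candidates:
        if all(program(x) == y for x, y in tests):
            return program  # feedback says: this one works
    return None  # none passed; a real system would generate more candidates

# Hypothetical candidates for the task "double the input".
candidates = [lambda x: x + 1, lambda x: x * 2, lambda x: x ** 2]
tests = [(1, 2), (3, 6), (5, 10)]
winner = try_test_repeat(candidates, tests)  # selects lambda x: x * 2
```

An AI system runs this propose-and-check cycle at machine speed, which is the "scale-free" property the clip refers to.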

2/11
@techikansh
Haider, how do u put captions(subtitles) in ur video??

3/11
@slow_developer
i use /OpusClip

4/11
@petepetrash
It's funny to hear someone confidently claim "super programmers" are a year away after trying to update a small NextJS 14 project to v15 using state of the art models (o3 / Opus 4) and watching them hit a wall almost immediately.

5/11
@ewgenijwolkow
thats not the definition of scale free

6/11
@MrChrisEllis
Doesn’t need real world input? You mean apart from the electricity, user generated content, cheap labour to make the chips and computers and the rare earth minerals? Maybe /sama could mine them himself in the DRC paid in WorldCoin



7/11
@ezcrypt
Source?

8/11
@TonyIsHere4You
That's true of the logical structure of code, but the point of code in the real world has been to instruct hardware to do something, not engage in rote self-interaction.

9/11
@diligentium
Eric Schmidt looks great!

10/11
@hzdydx9
This changes the game

11/11
@M_Zot_ike
This is why :




 
Top