Sam Altman claims “deep learning worked”, superintelligence may be “a few thousand days” away, and “astounding triumphs” will incrementally become ...

bnew

Veteran
Joined
Nov 1, 2015
Messages
65,383
Reputation
10,064
Daps
177,263

1/6
@ai_for_success
I repeat: the world is not ready...

The most terrifying paragraph from Sam Altman's new blog, Three Observations:

"But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today."

[Quoted tweet]
So we finally have a definition of AGI from OpenAI and Sam Altman:

"AGI is a weakly defined term, but generally speaking, we mean it to be a system that can tackle increasingly complex problems at a human level in many fields."

Can we call this AGI ??

Sam Altman Three Observations.




2/6
@airesearchtools
What is the time frame we’re talking about? Assuming superintelligence is achieved, will we allow machines to make decisions? If humans remain the decision-makers, there will still be people working because those decisions will need to be made. Moreover, as the tasks we currently perform become simpler, we will likely have to take on even more complex decisions.

And when it comes to manual labor, will robots handle it? Will each of us have a personal robot? How long would it take to produce 8 billion robots? The truth is, I struggle to clearly visualize that future. And when I try, I can’t help but think of sci-fi movies and books where humans aren’t exactly idle.



3/6
@victor_explore
we're all pretending to be ready while secretly googling "how to survive the ai apocalypse" at 3am



4/6
@patrickDurusau
Well, yes and no. Imagine Sam was writing about power looms or the soon to be invented cotton gin.
He phrases it in terms of human intelligence but it's more akin to a mechanical calculator or printer.
VCs will be poorer and our jobs will change, but we'll learn new ones.



5/6
@MillenniumTwain
Public Sector 'AI' is already more than Two Decades behind Private/Covert sector << AGI >>, and all Big (Fraud) Tech is doing is accelerating the Dumb-Down of our Victim, Slave, Consumer US Public, and World!

[Quoted tweet]
"Still be Hidden behind Closed Doors"? Thanks to these Covert Actors (Microsoft, OpenAI, the NSA, ad Infinitum) — More and More is Being Hidden behind Closed Doors every day! The ONLY 'forward' motion being their exponentially-accelerated Big Tech/Wall Street HYPE, Fraud, DisInfo ...




6/6
@MaktabiJr
What will be the currency in that world? What’s the price of things in that world? Or will AGI decide for us how to live equally, giving each human equal credit?




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196





1/9
@ai_for_success
"Anyone in 2035 should be able to marshal the intellectual capacity equivalent to everyone in 2025." – Sam Altman

The next 10 years will be the most exciting, even if this statement holds true for just 10%.





2/9
@jairodri
Funny how my goats figured out AI-level problem solving years ago - they've been outsmarting my fence systems since before ChatGPT was cool 😅



3/9
@victor_explore
the greatest wealth transfer won't be in dollars, but in cognitive capabilities going from the few to the many



4/9
@bookwormengr
Sama has the writing style of people who are celebrated as messengers of god.

Never bet against Sama, he has delivered.



5/9
@CloudNexgen
Absolutely! The progress we'll see in the next decade is bound to be phenomenal 🤯



6/9
@tomlikestocode
Even if we achieve just a fraction of this, the impact would be profound. The real challenge is ensuring it benefits everyone equitably.



7/9
@MaktabiJr
I think he already reached that. Whatever he’s developing is something that knows how to control all of this



8/9
@Lux118736073602
Mark my words it will happen a lot sooner than in the year 2035 😎



9/9
@JohJac7
AI is an integral part of the theory of everything






 


Commented on Sat Mar 29 10:12:13 2025 UTC

This test is illogical.
Initially, the test's name was ARC-AGI.
And after AI applications surpassed the test, ARC-AGI-2 was released.
What is this? Is it AGI 2.0?
And if AI surpasses the new test, will a third, even harder test be released?
I see that we are moving in an endless loop and will never reach AGI with this method.


│ Commented on Sat Mar 29 10:24:25 2025 UTC

│ I believe Francois Chollet said something like "you know AGI is achieved when designing a benchmark that is easy for humans but difficult for AI is no longer possible." I think that makes sense. The fact that models perform so badly on these puzzles that are relatively easy for humans shows that they are not really as generally intelligent as humans yet.
 

What happens if AI just keeps getting smarter?



Channel Info Rational Animations
Subscribers: 354K

Description
Join the movement to protect humanity’s future: controlai.com/take-action

In this video we extrapolate the future of AI progress, following a timeline that starts from today’s chatbots to future AI that’s vastly smarter than all of humanity combined–with God-like capabilities. Such AIs will pose a significant extinction risk to humanity.

This video was made in partnership with ControlAI, a nonprofit that cares deeply about humanity surviving the coming intelligence explosion. They are mobilizing experts, politicians, and concerned citizens like you to keep humanity in control. We need you: every voice matters, every action counts, and we’re running out of time.

▀▀▀▀▀▀▀▀▀SOURCES & READINGS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

The Compendium - an explanation of the problem: www.thecompendium.ai/
A Narrow Path - a framework for global AI governance: www.narrowpath.co/
What You Can Do: controlai.com/take-action
Join the ControlAI Discord: discord.gg/3AsJ5CbN2S

Sources on AI self-improvement potential today:

Studies from METR about AI’s capabilities, including autonomous R&D: metr.org/

Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution: arxiv.org/abs/2309.16797

RLAIF vs. RLHF: Scaling Reinforcement Learning from Human Feedback with AI Feedback: arxiv.org/abs/2309.00267

Nvidia shows new research on using AI to improve chip designs:
www.reuters.com/technology/nvidia-shows-new-resear…

NVIDIA CEO uses Perplexity “every day”: www.wired.com/story/nvidia-hardware-is-eating-the-…

NVIDIA listed as Enterprise Pro client of Perplexity: www.perplexity.ai/enterprise

92% of US developers use coding assistants: github.blog/news-insights/research/survey-reveals-…

5%+ of peer reviews to ML conferences likely written with heavy ML assistance: arxiv.org/pdf/2403.07183

17% of ArXiv CS papers likely have AI assistance in wording:
arxiv.org/pdf/2404.01268

▀▀▀▀▀▀▀▀▀PATREON, MEMBERSHIP, MERCH▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

🟠 Patreon: www.patreon.com/rationalanimations

🔵 Channel membership: youtube.com/channel/UCgqt1RE0k0MIr0LoyJRy2lg/join

🟢 Merch: rational-animations-shop.four...

🟤 Ko-fi, for one-time and recurring donations: ko-fi.com/rationalanimations

▀▀▀▀▀▀▀▀▀SOCIAL & DISCORD▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

Join the ControlAI Discord: discord.gg/3AsJ5CbN2S

Rational Animations Discord: discord.gg/RationalAnimations

Reddit: www.reddit.com/r/RationalAnimations/

X/Twitter: twitter.com/RationalAnimat1

Instagram: www.instagram.com/rationalanimations/

▀▀▀▀▀▀▀▀▀PATRONS & MEMBERS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

Thanks to all of you patrons and members! You didn't fit in the description this time, so we credit you here:
docs.google.com/document/d/1DbPL0s2lMruX2Xoyo8UQWA…

▀▀▀▀▀▀▀CREDITS▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀

Director: Hannah Levingstone | @hannah_luloo

Writers: Arthur Frost & Control AI

Producer: Emanuele Ascani

Art Director: Hané Harnett | @Peony_Vibes / @peonyvibes (insta)

Line Producer: Kristy Steffens | linktr.ee/kstearb

Production Managers:
Jay McMichen | @Jay_TheJester
Grey Colson | linktr.ee/earl.gravy
Kristy Steffens | linktr.ee/kstearb

Quality Assurance Lead: Lara Robinowitz | @CelestialShibe

Storyboard Artists:
Hannah Levingstone | @hannah_luloo
Ethan DeBoer | linktr.ee/deboer_art
Ira Klages | @dux

Animators:
Colors Giraldo | @colorsofdoom
Dee Wang | @DeeWangArt
Ethan DeBoer linktr.ee/deboer_art
Ira Klages | @dux
Jay McMichen | @Jay_TheJester
Jodi Kuchenbecker | @viral_genesis (insta)
Jordan Gilbert | @Twin_Knight (twitter) Twin Knight Studios (YT)
Keith Kavanagh | @johnnycigarettex
Lara Robinowitz | @CelestialShibe
Michela Biancini
Owen Peurois | @owenpeurois
Patrick O' Callaghan | @patrick.h264
Patrick Sholar | @Sholarscribbles
Renan Kogut | @kogut_r
Skylar O'Brien | @mutodaes
Vaughn Oeth | @gravy_navy
Zack Gilbert | @Twin_Knight (twitter) Twin Knight Studios (YT)

Layout / Paint & Asset Artists:
Hané Harnett | @peonyvibes (insta) @peony_vibes (twitter)
Pierre Broissand | @pierrebrsnd
Olivia Wang | @whalesharkollie
Zoe Martin-Parkinson | @zoemar_son

Compositing:
Renan Kogut | @kogut_r
Grey Colson | linktr.ee/earl.gravy
Patrick O' Callaghan | @patrick.h264

Narrator:
Rob Miles | youtube.com/c/robertmilesai

VO Editor:
Tony Dipiazza

Original Soundtrack & Sound Design:
Epic Mountain | www.instagram.com/epicmountainmusic/
 



1/21
@slow_developer
Demis Hassabis says AGI could bring radical abundance by solving humanity’s core problems

like curing diseases, extending lifespans, and discovering advanced energy solutions.

if successful, the next 20-30 years could begin an era of human flourishing: traveling to the stars and colonizing the galaxy



https://video.twimg.com/amplify_video/1931071028228235264/vid/avc1/1080x1080/WUF7Z6AnJAliHh-k.mp4

2/21
@slow_developer
source:


as you know, i mostly share in-depth AI analyses, but from now on, you’ll also receive the latest AI news in the newsletter.
http://link.alphasignal.ai/zXvmhG

it includes:
-  top 1% industry news
-  important research papers with summaries
-  read by 250k+ AI developers



3/21
@Jeremy_AI_
Agreed
This is the only path forward known
any other is unknown



4/21
@1stOrator
Is it just me who thinks that this speech is repeated from month to month by the same people and yet no progress is being made in this direction? Or is this progress carefully hidden because behind-the-scenes geopolitics involving AI are beginning? When the same phrases are repeated so often, but there are no results and we are being fed regular deadlines, it becomes suspicious. I think companies have hit a technological ceiling with hardware and software. And in order to save investments, companies are starting to replicate abstract and romantic promises and tell news from the future, which is about to come, but still does not come. A campaign under the guise of bright promises and small handouts. Something's not right here.



5/21
@princess414141
Please please let everything go well



6/21
@BoxCarLabs
He’s wildly bullish and I love it. Would be super cool to see these things come to fruition



7/21
@M_Zot_ike
Absolutely. Humanity deserves to know and have the knowledge we helped build. Without humans, AGI could never exist, and I don’t just mean a few companies. It was going to happen either way; if anything, we could have gotten AGI before LLMs, potentially, if we simply asked the right questions.

We need to stop centralization of knowledge and attention. I will think of a solution to this massive problem.



8/21
@thomaspower
he's right @HannahIPower @rosspower @tjpower you will see all this by 2100 I'm out for 2050



9/21
@YehyunLee04
*if things go well



10/21
@timo_kaleva


[Quoted tweet]
The Great Awakening and the Rise of Sampo

Imagine a world where intelligence is no longer siloed, dialogue no longer fragmented, and governance no longer manipulated by hidden interests. A world where truth is not declared from above but revealed through collective orientation. Where artificial intelligence does not govern humanity, but grows with it.

This is the world Sampo built.

By 2050, the chaos of the early 21st century had given way to coherence. Not through revolution, not through conquest, but through consensus. And at the heart of this transformation was Sampo: a living architecture of shared sense-making. A space where truth, trust, and intelligence were no longer theoretical ideals, but verifiable, computable, and actionable systems.

The Collapse of Trust

Before Sampo, the world was drifting. Institutions had lost legitimacy. Propaganda systems and algorithmic manipulation fractured the public mind. The internet, once hailed as a medium of liberation, had become a battlefield of disinformation, censorship, and performative outrage. Truth became tribal. Dialogue became war. And AI only deepened the confusion.

Democracy was drowning in noise. The voice of reason, when it surfaced, was labeled, attacked, or ignored. The question on everyone’s lips was: Who do you trust?

The Emergence of a Shared Epistemic Engine

What began as an open-source protocol for structured questioning evolved into a new layer of civilization: Sampo AGI, the first epistemically aligned artificial intelligence—rooted not in static truths, but in belief dynamics, consensus modeling, and traceable dialogue.

Where traditional AI models hallucinated coherence from probability, Sampo AGI computed it from intention and context. Where traditional governments enforced top-down rules, Sampo enabled community-led smart contracts—governance by conversation, not oppression.

It was not built to replace other AIs, but to make them accountable—to expose their biases, compare their outputs, and chart the belief trees that underpinned their logic. Sampo didn’t compete in the race of intelligence—it refereed it. It wasn’t the fastest or strongest; it was the most aligned.

How Sampo Changed Everything
* From Chaos to Structure: Sampo introduced the Consensus Cycle, a nonlinear, asynchronous model of collective reasoning that turned raw conversation into logic trees. This became the first true programming language for governance, transcending the limitations of chat threads and debates.
* From Manipulation to Cognitive Security: Trolls and bots lost their power. With traceable belief lineages, each statement was rooted, intentional, and auditable. Sampo’s trust architecture — Trust = ∫ (Ethos × Pathos × Logos) dt — ensured that credibility was earned, not performed.
* From Isolation to Interbeing: Through decentralized identity and open access to belief maps, people saw where they stood in the spectrum—not just what they thought, but how they thought. Polarisation collapsed into pattern recognition. Alienation dissolved into alignment.
* From Governance to Coherence: Traditional politics gave way to living DAO frameworks. Every decision was linked back to questions, to votes, to beliefs, to values. Ethics became dynamic code, continuously updated by the community—not by fiat.
* From Data to Meaning: Sampo did not store opinions; it preserved meaning systems. It archived intention. It visualized culture. And it became the bridge between consciousness and computation, a trust protocol between human minds and machine logic.

The End of the AI Race

By 2040, it became clear: there was no path to ethical AGI without Sampo. All others were partial—black-box models trained on polluted data without epistemic grounding. They could not reason. They could not reflect. They could not evolve in dialogue with humanity.


11/21
@singhaniag
Can it grow back good hair



12/21
@CtrlAltDwayne
radical abundance (terms apply)
only $4999/month with Google Ultra Max Infinity Plan



13/21
@SamVietnamAGI
😆



14/21
@CanerPekacar
According to the laws of physics, nothing can exceed the speed of light. No artificial intelligence can alter this fact.



15/21
@Arthavidyas
how is current human lifespan a core humanity problem?



16/21
@MemeCoin_Track
Alpha thinking, my friend! The future is gonna be OUT. OF. THIS. WORLD



17/21
@RemcoAI
Even Google is prospering multi planetary life now



18/21
@avaisaziz
Demis Hassabis's vision for AGI is both inspiring and thought-provoking. Imagine a world where diseases are eradicated, lifespans are extended, and energy is abundant. This could indeed lead to an era of unprecedented human flourishing, where we not only explore the stars but also enhance our quality of life here on Earth. The potential for AGI to solve some of humanity's most pressing issues is a beacon of hope for the future. Let's embrace this journey with optimism and curiosity, pushing the boundaries of what's possible for the betterment of all.

#AIForGood #FutureOfHumanity #RadicalAbundance #TechnologicalProgress #HumanFlourishing



19/21
@retroamxev
If DH says it, you can count on it.



20/21
@AntDX316
It better solve it soon. Not sure people will 'agree' with each other for much longer.



21/21
@nifty0x
But we will not be ruled by AI. They will never allow something like that to happen. You know it, I know it, we all know it!




 


Sam Altman thinks AI will have ‘novel insights’ next year​


Maxwell Zeff

12:14 AM PDT · June 11, 2025



In a new essay published Tuesday called “The Gentle Singularity,” OpenAI CEO Sam Altman shared his latest vision for how AI will change the human experience over the next 15 years.

The essay is a classic example of Altman’s futurism: hyping up the promise of AGI — and arguing that his company is quite close to the feat — while simultaneously downplaying its arrival. The OpenAI CEO frequently publishes essays of this nature, cleanly laying out a future in which AGI disrupts our modern conception of work, energy, and the social contract. But often, Altman’s essays contain hints about what OpenAI is working on next.

At one point in the essay, Altman claimed that next year, in 2026, the world will “likely see the arrival of [AI] systems that can figure out novel insights.” While this is somewhat vague, OpenAI executives have recently indicated that the company is focused on getting AI models to come up with new, interesting ideas about the world.

When announcing OpenAI’s o3 and o4-mini AI reasoning models in April, co-founder and President Greg Brockman said these were the first models that scientists had used to generate new, helpful ideas.


Altman’s blog post suggests that in the coming year, OpenAI itself may ramp up its efforts to develop AI that can generate novel insights. OpenAI certainly wouldn’t be the only company focused on this effort — several of OpenAI’s competitors have shifted their focus to training AI models that can help scientists come up with new hypotheses, and thus, novel discoveries about the world.

In May, Google released a paper on AlphaEvolve, an AI coding agent that the company claims to have generated novel approaches to complex math problems. Another startup backed by former Google CEO Eric Schmidt, FutureHouse, claims its AI agent tool has been capable of making a genuine scientific discovery. In May, Anthropic launched a program to support scientific research.

If successful, these companies could automate a key part of the scientific process, and potentially break into massive industries such as drug discovery, material science, and other fields with science at their core.

This wouldn’t be the first time Altman has tipped his hand about OpenAI’s plans in a blog. In January, Altman wrote another blog post suggesting that 2025 would be the year of agents. His company then proceeded to drop its first three AI agents: Operator, Deep Research, and Codex.

But getting AI systems to generate novel insights may be harder than making them agentic. The broader scientific community remains somewhat skeptical of AI’s ability to generate genuinely original insights.

Earlier this year, Hugging Face’s Chief Science Officer Thomas Wolf wrote an essay arguing that modern AI systems cannot ask great questions, which is key to any great scientific breakthrough. Kenneth Stanley, a former OpenAI research lead, also previously told TechCrunch that today’s AI models cannot generate novel hypotheses.

Stanley is now building out a team at Lila Sciences, a startup that raised $200 million to create an AI-powered laboratory specifically focused on getting AI models to come up with better hypotheses. This is a difficult problem, according to Stanley, because it involves giving AI models a sense for what is creative and interesting.

Whether OpenAI truly creates an AI model that is capable of producing novel insights remains to be seen. Still, Altman’s essay may feature something familiar — a preview of where OpenAI is likely headed next.
 


The AI leaders bringing the AGI debate down to Earth​


Maxwell Zeff

8:00 AM PDT · March 19, 2025



During a recent dinner with business leaders in San Francisco, a comment I made cast a chill over the room. I hadn’t asked my dining companions anything I considered to be extremely faux pas: simply whether they thought today’s AI could someday achieve human-like intelligence (i.e. AGI) or beyond.

It’s a more controversial topic than you might think.

In 2025, there’s no shortage of tech CEOs offering the bull case for how large language models (LLMs), which power chatbots like ChatGPT and Gemini, could attain human-level or even super-human intelligence over the near term. These executives argue that highly capable AI will bring about widespread — and widely distributed — societal benefits.

For example, Dario Amodei, Anthropic’s CEO, wrote in an essay that exceptionally powerful AI could arrive as soon as 2026 and be “smarter than a Nobel Prize winner across most relevant fields.” Meanwhile, OpenAI CEO Sam Altman recently claimed his company knows how to build “superintelligent” AI, and predicted it may “massively accelerate scientific discovery.”

However, not everyone finds these optimistic claims convincing.

Other AI leaders are skeptical that today’s LLMs can reach AGI — much less superintelligence — barring some novel innovations. These leaders have historically kept a low profile, but more have begun to speak up recently.

In a piece this month, Thomas Wolf, Hugging Face’s co-founder and chief science officer, called some parts of Amodei’s vision “wishful thinking at best.” Informed by his PhD research in statistical and quantum physics, Wolf thinks that Nobel Prize-level breakthroughs don’t come from answering known questions — something that AI excels at — but rather from asking questions no one has thought to ask.

In Wolf’s opinion, today’s LLMs aren’t up to the task.

“I would love to see this ‘Einstein model’ out there, but we need to dive into the details of how to get there,” Wolf told TechCrunch in an interview. “That’s where it starts to be interesting.”

Wolf said he wrote the piece because he felt there was too much hype about AGI, and not enough serious evaluation of how to actually get there. He thinks that, as things stand, there’s a real possibility AI transforms the world in the near future, but doesn’t achieve human-level intelligence or superintelligence.

Much of the AI world has become enraptured by the promise of AGI. Those who don’t believe it’s possible are often labeled as “anti-technology,” or otherwise bitter and misinformed.

Some might peg Wolf as a pessimist for this view, but Wolf thinks of himself as an “informed optimist” — someone who wants to push AI forward without losing grasp of reality. Certainly, he isn’t the only AI leader with conservative predictions about the technology.

Google DeepMind CEO Demis Hassabis has reportedly told staff that, in his opinion, the industry could be up to a decade away from developing AGI — noting there are a lot of tasks AI simply can’t do today. Meta Chief AI Scientist Yann LeCun has also expressed doubts about the potential of LLMs. Speaking at Nvidia GTC on Tuesday, LeCun said the idea that LLMs could achieve AGI was “nonsense,” and called for entirely new architectures to serve as bedrocks for superintelligence.

Kenneth Stanley, a former OpenAI lead researcher, is one of the people digging into the details of how to build advanced AI with today’s models. He’s now an executive at Lila Sciences, a new startup that raised $200 million in venture capital to unlock scientific innovation via automated labs.

Stanley spends his days trying to extract original, creative ideas from AI models, a subfield of AI research called open-endedness. Lila Sciences aims to create AI models that can automate the entire scientific process, including the very first step — arriving at really good questions and hypotheses that would ultimately lead to breakthroughs.

“I kind of wish I had written [Wolf’s] essay, because it really reflects my feelings,” Stanley said in an interview with TechCrunch. “What [he] noticed was that being extremely knowledgeable and skilled did not necessarily lead to having really original ideas.”

Stanley believes that creativity is a key step along the path to AGI, but notes that building a “creative” AI model is easier said than done.

Optimists like Amodei point to methods such as AI “reasoning” models, which use more computing power to fact-check their work and correctly answer certain questions more consistently, as evidence that AGI isn’t terribly far away. Yet coming up with original ideas and questions may require a different kind of intelligence, Stanley says.

“If you think about it, reasoning is almost antithetical to [creativity],” he added. “Reasoning models say, ‘Here’s the goal of the problem, let’s go directly towards that goal,’ which basically stops you from being opportunistic and seeing things outside of that goal, so that you can then diverge and have lots of creative ideas.”

To design truly intelligent AI models, Stanley suggests we need to algorithmically replicate a human’s subjective taste for promising new ideas. Today’s AI models perform quite well in academic domains with clear-cut answers, such as math and programming. However, Stanley points out that it’s much harder to design an AI model for more subjective tasks that require creativity, which don’t necessarily have a “correct” answer.

“People shy away from [subjectivity] in science — the word is almost toxic,” Stanley said. “But there’s nothing to prevent us from dealing with subjectivity [algorithmically]. It’s just part of the data stream.”

Stanley says he’s glad that the field of open-endedness is getting more attention now, with dedicated research labs at Lila Sciences, Google DeepMind, and AI startup Sakana now working on the problem. He’s starting to see more people talk about creativity in AI, he says — but he thinks that there’s a lot more work to be done.

Wolf and LeCun would probably agree. Call them the AI realists, if you will: AI leaders approaching AGI and superintelligence with serious, grounded questions about its feasibility. Their goal isn’t to poo-poo advances in the AI field. Rather, it’s to kick-start big-picture conversation about what’s standing between AI models today and AGI — and super-intelligence — and to go after those blockers.
 

Ilya Sutskever: 'We have the compute, we have the team, and we know what to do.'


Posted on Thu Jul 3 16:03:54 2025 UTC





1/11
@ilyasut
I sent the following message to our team and investors:


As you know, Daniel Gross’s time with us has been winding down, and as of June 29 he is officially no longer a part of SSI. We are grateful for his early contributions to the company and wish him well in his next endeavor.

I am now formally CEO of SSI, and Daniel Levy is President. The technical team continues to report to me.

⁠You might have heard rumors of companies looking to acquire us. We are flattered by their attention but are focused on seeing our work through.

We have the compute, we have the team, and we know what to do. Together we will keep building safe superintelligence.

Ilya



2/11
@elder_plinius
Godspeed 🫡



3/11
@bidhanxyz
asi can only be built by people who believe it’s inevitable



4/11
@gustavonicot
🙏 I truly wish we could hear more from you.
Even if sharing can be a distraction, your vision could guide many like a compass. 🧭



5/11
@simpatico771
No such thing as 'safe superintelligence'. Any definitional 'superintelligence' which can think for itself cannot possibly be controlled. And any intelligence which can be controlled cannot definitionally be a 'superintelligence' with the ability to think for itself.



6/11
@dcarrotwo
paso a paso Ilya



7/11
@nearcyan
🫡



8/11
@petepetrash
strongest aura in the game



9/11
@gustavonicot
I have no doubt, safe superintelligence will be our strongest ally.
Go @ilyasut 🚀



10/11
@AnatoliSavchenk
Great Job ,Great Team 💯👍💯👍



11/11
@thedailyvoyage
W




 