Top 40 occupations with highest AI applicability score (most at risk, sorted alphabetically):

bnew

The list is horribly wrong for academics such as mathematicians or historians. AI basically regurgitates a summary of knowledge that already exists; some of the more advanced models can infer a few things, too.



The problem with trying to apply AI to mathematicians or historians is that, as PhD holders, their job is to create new knowledge. They are trained to bring into existence things that have never existed, which is exactly what AI is incapable of doing.


The only thing I see AI affecting for mathematicians or historians is the bread-and-butter meta-analysis. These are papers academics publish just to pad their résumés; they often just involve summaries of existing research and, at best, an implication or two drawn from the results. But the rest of the day-to-day work of historians and mathematicians can't be done by AI.

They are literally working to create AI that can do just that: make novel insights.

Sama tweet on gold medal performance, also says GPT-5 soon



Posted on Sat Jul 19 14:10:21 2025 UTC




OpenAI researcher confirms IMO gold was achieved with pure language-based reasoning



Posted on Sat Jul 19 10:51:18 2025 UTC




[Discussion] What are the new techniques he's talking about?



Posted on Sat Jul 19 12:55:55 2025 UTC





Commented on Sat Jul 19 14:02:03 2025 UTC

Let's assume OpenAI employees are being forthcoming.

Jerry Tworek: all natural language proofs, no evaluation harness, little IMO-specific work, same RL system as agent/coder

Alexander Wei: no tools or internet, ~100 mins thinking, going beyond "clear-cut, verifiable rewards," general-purpose RL + test-time compute scaling

Sheryl Hsu: no tools like Lean or coding, completed the competition in 4.5 hours, the model tests different strategies/hypotheses and makes observations

What they're saying is that they've gone beyond RLVR (reinforcement learning from verifiable rewards), which is pretty wild. With RLVR, you only get reward feedback after completing an entire task, so the signal is faint. It sounds like they've figured out how to let the model reward itself for making progress by referencing an internal model of the task. It makes sense: let the model make competing predictions about how things will unfold, and it can use those to anchor its reasoning.


│ Commented on Sat Jul 19 16:04:06 2025 UTC

│ Noam and others have said RL for unverifiable rewards.

│ We know this is what they did, and we know it's a big deal. That paradigm scales up to writing great novels and doing hours of low-context work (as we saw in the coding competition this week).

│ We don't know what was actually done to make that paradigm work, but this is a good guess 👍
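If that guess is right, the shift is from outcome-only rewards to some kind of dense, self-generated progress signal. Below is a minimal, purely illustrative sketch of that contrast; the function names, the toy verifier, and the progress estimator are all assumptions made for exposition, not anything OpenAI has described.

```python
# Illustrative sketch only -- not OpenAI's method. It contrasts the sparse
# outcome-only reward of RLVR with a hypothetical dense "self-progress" reward
# in which the model scores its own intermediate reasoning steps.

from typing import Callable, List

def rlvr_reward(steps: List[str], verifier: Callable[[List[str]], bool]) -> List[float]:
    """Classic RLVR: no signal until the whole attempt is checked at the end."""
    solved = verifier(steps)
    return [0.0] * (len(steps) - 1) + [1.0 if solved else 0.0]

def self_progress_reward(steps: List[str],
                         progress: Callable[[List[str]], float]) -> List[float]:
    """Hypothetical dense variant: reward each step by how much the model's own
    estimate of progress toward a solution improved after taking it."""
    rewards = []
    prev = progress([])
    for i in range(1, len(steps) + 1):
        cur = progress(steps[:i])
        rewards.append(cur - prev)  # positive when the step looked like progress
        prev = cur
    return rewards

if __name__ == "__main__":
    steps = ["set up induction", "prove base case",
             "botch inductive step", "repair inductive step"]
    # Toy stand-ins: a verifier that checks the final transcript, and a crude
    # progress estimator (in a real system the model would be judging itself).
    verifier = lambda s: "repair inductive step" in s
    progress = lambda s: sum(1 for x in s if "botch" not in x) / len(steps)
    print("RLVR rewards:         ", rlvr_reward(steps, verifier))
    print("self-progress rewards:", [round(r, 2) for r in self_progress_reward(steps, progress)])
```

The point of the toy is only that the dense variant hands back a signal at every step, so learning no longer hinges on whether a multi-hour proof attempt happened to pan out at the very end.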


Commented on Sat Jul 19 13:17:37 2025 UTC

Since it seems DeepMind also has gold, their inevitable blogpost could give us some pointers.

Though going by past history, the super-impressive math results don't necessarily translate to capabilities in other areas, so their new techniques could be heavily tailored to math-oriented CoT; I have no idea.

Tackling the IMO specifically was already a well-known challenge being optimized for (I assume through math formalizers), so we'll need a lot more technical detail from them to know how "general" their general LLM actually is here. (EDIT: Jerry Tworek (@MillionInt) says they trained general models rather than optimizing specifically for the IMO: https://nitter.poast.org/MillionInt/status/1946551400365994077 | https://xcancel.com/MillionInt/status/1946551400365994077. Really impressive, damn. It's possible their new techniques still suit formal math proofs better than anything else, since that has been a heavily valued research area since 2023, but the fact that the model is actually a general reasoning LLM is seriously impressive.)

From what Noam said, though, it's definitely related to TTC (test-time compute).
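For anyone skimming, TTC means spending more inference on the same frozen model. A well-known public version of that idea is parallel sampling plus majority voting (self-consistency); the sketch below shows that generic recipe with a toy stand-in for the model, and is not a description of OpenAI's actual system.

```python
# Generic illustration of test-time compute (TTC) scaling, not OpenAI's recipe:
# sample many candidate answers from the same model and aggregate them by vote.

import random
from collections import Counter
from typing import List

def sample_answer(question: str, rng: random.Random) -> str:
    """Stand-in for one stochastic model rollout; a real system would call an LLM."""
    # Toy model: returns the right answer 60% of the time, otherwise a wrong guess.
    return "42" if rng.random() < 0.6 else str(rng.randint(0, 99))

def answer_with_ttc(question: str, n_samples: int, seed: int = 0) -> str:
    """Scale test-time compute by drawing more samples and majority-voting."""
    rng = random.Random(seed)
    candidates: List[str] = [sample_answer(question, rng) for _ in range(n_samples)]
    winner, votes = Counter(candidates).most_common(1)[0]
    print(f"n={n_samples:4d} samples -> answer {winner!r} with {votes}/{n_samples} votes")
    return winner

if __name__ == "__main__":
    for n in (1, 8, 64, 512):  # more test-time compute, same model
        answer_with_ttc("toy question", n)
```

As n grows, the voted answer gets more reliable purely from spending more inference; whatever OpenAI actually did is presumably far more sophisticated, but this is the general scaling knob the comment is pointing at.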


GPT-5 won't get IMO gold capabilities



Posted on Sat Jul 19 15:30:38 2025 UTC





With the new OpenAI thinking model, the order of magnitude of thinking time is now in the range of a standard work day.



Posted on Sat Jul 19 08:47:36 2025 UTC





Post too soon, and you publish a fossil



Posted on Sat Jul 19 09:45:08 2025 UTC




I am feeling extremely anxious over the ChatGPT Math Olympiad results; what exactly are humans supposed to do now?



Posted on Sat Jul 19 11:18:18 2025 UTC

/r/singularity/comments/1m3tras/i_am_feeling_extremely_anxious_over_the_chatgpt/

I loved learning new things and, on a personal level, always wanted to be smarter than my previous self.
I loved math and physics.

Now I feel all of that is in vain, as this LLM is going to do what I want to do, and do it even better.
The other day I spent half a day making a three-body-problem visualiser, but some guy on Twitter one-shotted a black hole visualiser using Grok Heavy.

I liked doing the "intellectually heavy" tasks. Now? I feel an LLM will beat me at this, if not today then two years from now. What exactly am I supposed to do? Art? Gone. Music? Gone. Programming, my passion? Gone. Math and physics? Going soon. The only thing left is to be a company founder of sorts, framing just the problem statement and using these tools to solve the problems. But I wanted to be the problem solver.

Edit: Art, music and other fun things may still be relevant. But when it's about pushing the boundaries of humanity, I feel humans will no longer be needed.

Sam Altman on the model



Posted on Sat Jul 19 14:20:37 2025 UTC




He is starting to believe



Posted on Sat Jul 19 15:35:48 2025 UTC





GPT-5 reasoning alpha





OpenAI achieved IMO gold with experimental reasoning model; they also will be releasing GPT-5 soon



Posted on Sat Jul 19 08:08:40 2025 UTC



 

The Plug

Tbh, if LLMs replace the majority of big-time corporate jobs and robots replace the majority of manual labor, damn. There will be too many people out of jobs.
 

Vandelay

It's gonna affect everything; the only way you'll really be safe is if those at the top fukk with you.

Feudalism-for-all... yay!

:francis:
 

Fillerguy

I'm sure AI wrote this list. Jobs like data scientist are seeing an uptick in work productivity, mine for example.

AI can't replace jobs that require a human to interpret information for other humans. Anyone who's using AI to make judgment decisions is setting themselves up for massive lawsuits. These LLMs will straight up ignore data and make shyt up. And if the prompt is wrong, it will lead to the wrong conclusions.
 

Neuromancer

│ I'm sure AI wrote this list. Jobs like data scientist are seeing an uptick in work productivity, mine for example.
│
│ AI can't replace jobs that require a human to interpret information for other humans. Anyone who's using AI to make judgment decisions is setting themselves up for massive lawsuits. These LLMs will straight up ignore data and make shyt up. And if the prompt is wrong, it will lead to the wrong conclusions.
I'm trying to figure out who wants to talk to AI on a customer service line, 'cause I hate that shyt.
 