Chat GPT-5 Drops Today

bnew

Veteran
GPT-5 is nearly 3x faster than o3 at earning badges in Pokémon Red


Posted on Thu Aug 14 03:14:39 2025 UTC

dudpeygmjwif1.png





Commented on Thu Aug 14 03:20:39 2025 UTC

I think this is another reminder that people need to re-assess how they evaluate models. It seems like a lot of the focus right now is on improving capabilities around long-horizon agentic tasks. People seem to have their brains broken a little bit when they don't see the typical progress on previously cherished benchmarks.


│ Commented on Thu Aug 14 03:31:54 2025 UTC

│ I was highly impressed by the agentic coding capabilities of GPT-5. It was truly bizarre to load up reddit the day after the launch only to see a bunch of free tier degens screaming and crying about their furry dildo roleplay chats.

│ │
│ │
│ │ Commented on Thu Aug 14 06:23:03 2025 UTC
│ │
│ │ To be fair, GPT-5 was not working properly on launch day - even Sam Altman said so. It felt, and was, dumber than intended. However, when I tried it over the next few days, it had noticeably improved. That goes to show how important first impressions are.
│ │
│ │ GPT-5 is the current best model at coding for me, but only by an incremental margin.
│ │



Commented on Thu Aug 14 03:56:15 2025 UTC

I have followed the stream a lot, so here are some things I have noticed:

-Very good at long button sequences through menus, the map, battles, or combinations of the three at once.

-Does not often suffer major, prolonged hallucinations. Usually "snaps out of it" after a few failures.

-Decent strategy, with intelligent insights that sometimes even surprise me. Still goofs up occasionally.

-Bonus: I find its jokes genuinely funny and clever.

Here's the stream if you want to tune in:

1/26
@Qualzz_Sam
GPT-5 earned 8 badges in Pokemon Red in just 6,000 steps compared to o3’s 16,700! It’s in complex, long-term agent workflows that GPT-5’s true power really shines. Absolutely mind-blowing. 🤯



GyRA-EuXYAIt813.jpg


2/26
@Qualzz_Sam
Since this is getting some attention, I’m taking the chance to highlight a wonderful linocut artist (🫣): AtelierDeJess - Etsy France



GyThBuZXgAMRdYj.png


3/26
@cloudstudio_es
How much money did you spend on this? Just curious



4/26
@Qualzz_Sam
Only less sleep. $0 for the API.



5/26
@dearmadisonblue
is the source code available? these pokemon benchmarks are annoying bc they never show how much is done by the harness



6/26
@Qualzz_Sam
you can check the harness website, everything is explained.



7/26
@IHateZuckSoMuch
How do humans compare?



8/26
@Qualzz_Sam
Humans are much faster. Buuut, if you remove the API query time and the time it takes for the LLM to think, etc., it's easily way faster than I was as a kid. (Only a few hours of real "gameplay")



9/26
@harold_bracy
Everyone's tired of seeing comparisons to o3. Let's see the Claude comparison



10/26
@Shedletsky
If you take an expert videogaming human who has never played this game before, what score would you expect to see from them?

Need some context



11/26
@DanHendrycks




GyKjGyOboAAWyzF.jpg


12/26
@_alex_dale
inb4 openai retweets this



13/26
@rightish19
it only cost $350k in tokens



14/26
@BreadPirateRob
yeah, but who had the better pokemon roster? that’s what counts



15/26
@gerardsans
Hold on to your pockets with this agentic scam. Today's AI unreliability makes long-horizon tasks a sure way to lose your money and sanity to AI tokenomics, paying for AI labs' lazy execution. The more work AI does unsupervised, the more tokens get wasted and the more hallucinations creep in.



16/26
@GhostRoosterr
I will always defend pokemon as the best benchmark



17/26
@willx6923
Is that SotA for AI playing Pokémon Red? Was o3 the best before this?



18/26
@CatAstro_Piyush
🔥🔥🔥



19/26
@Frieren_white
Wow



20/26
@draslan_eth
GOAT



21/26
@sunnypause
@grok how many steps did gemini or opus or sonnet take?



22/26
@shyeetsao
for those who wonder about other models, i found gemini 2.5 pro for you (8th badge marked in red, hope i didn't mess up). source: https://storage.googleapis.com/deepmind-media/gemini/gemini_v2_5_report.pdf



GyS6rzPa4AAW3e6.jpg


23/26
@Frank37004246
A big step towards solving humanity's problems: pokemon



24/26
@christou_c
@Clad3815 has done an amazing job setting up this agent as well as everything around it (how the stream works, the website, the harness documentation)

And I'm super impressed by this chart! It shows where GPT-5 shines: in truly agentic setups where the Agent has to work on its own.



25/26
@WendyCarlosa
this has made me smarter than sheldon and wetter than penny.



26/26
@sir4K_zen
Still waiting for the day GPT-5 beats my high score in real life. Or at least remembers to buy groceries.




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196
 

bnew

Veteran
[Discussion] GPT-5 took 6,470 steps to finish Pokemon Red, compared to 18,184 for o3, 68,000 for Gemini, and 35,000 for Claude


Posted on Sun Aug 17 03:11:32 2025 UTC

84cszt48yhjf1.png





Commented on Sun Aug 17 03:22:10 2025 UTC

I don't know anything about Pokemon Red. Is there an absolute minimum number of steps it takes to complete?


│ Commented on Sun Aug 17 04:14:58 2025 UTC

│ This guy did it in 5873 steps.

│ This is with pre-planning and previous knowledge of everything that will happen.
default.jpg



│ │
│ │
│ │ Commented on Sun Aug 17 04:21:52 2025 UTC
│ │
│ │ That makes it even more impressive than what the graph suggests tbh
│ │
│ │ From being 12000+ away to only 600 away from expert human level
│ │

│ │ │
│ │ │
│ │ │ Commented on Sun Aug 17 05:28:40 2025 UTC
│ │ │
│ │ │ But these steps aren't anywhere near equivalent to the video's steps, because these steps include complex operations that would take multiple in-game steps
│ │ │
 

bnew

Veteran
i don't think @bnew actually uses these tools :hubie:

I don't use chatgpt as my primary LLM, but I use LLMs every day. Every piece of code I've shared on here was entirely written by an LLM, because I'm not a programmer. I share various AI output on here all the time. :heh:

I tried gpt-5 the day it was released, but it turns out it wasn't performing correctly according to openai, so I have to try it again extensively. The first time I tried it, it removed functionality from my code in just two turns of the conversation, which is partly why it isn't the primary LLM I use for coding tasks.

It has been performing better lately, but only on code I consider simple.
 

Ill-Mind

Midwest Moonwalker
At this point I'm way more favorable to Google's Gemini and NotebookLM
 

bnew

Veteran
GPT5 did new maths?

1/24
@VraserX
GPT-5 just casually did new mathematics.

Sebastien Bubeck gave it an open problem from convex optimization, something humans had only partially solved. GPT-5-Pro sat down, reasoned for 17 minutes, and produced a correct proof improving the known bound from 1/L all the way to 1.5/L.

This wasn’t in the paper. It wasn’t online. It wasn’t memorized. It was new math. Verified by Bubeck himself.

Humans later closed the gap at 1.75/L, but GPT-5 independently advanced the frontier.
A machine just contributed original research-level mathematics.

If you’re not completely stunned by this, you’re not paying attention.

We’ve officially entered the era where AI isn’t just learning math, it’s creating it. @sama @OpenAI @kevinweil @gdb @markchen90

[Quoted tweet]
Claim: gpt-5-pro can prove new interesting mathematics.

Proof: I took a convex optimization paper with a clean open problem in it and asked gpt-5-pro to work on it. It proved a better bound than what is in the paper, and I checked the proof it's correct.

Details below.


Gyz2WavXIAAhWXr.jpg

Gyzo2H4aYAAbbHQ.png


2/24
@VraserX
Here is a simple explanation for all the doubters:



Gy2aT2GX0AArtby.jpg


3/24
@AmericaAeterna
Does this make GPT-5 the first Level 4 Innovator AI?



4/24
@VraserX
Pretty much ✌🏽



5/24
@rawantitmc
New mathematics? 😂 😂 😂



6/24
@VraserX




7/24
@DonValle
“New mathematics!” Lol @wowslop



8/24
@VraserX




9/24
@rosyna
And yet, ChatGPT-5 can’t answer the following correctly:

5.9 = x + 5.11, solve for x



10/24
@VraserX




Gy0yLEaWwAETjpm.jpg


11/24
@kabirc
Such bullshyt. I gave it my son's grade 4 math problem (sets) and it failed miserably.



12/24
@VraserX
Skill issue. Use GPT-5 thinking.



13/24
@charlesr1971
Why can’t it resolve the Nuclear Fusion problem? How do we keep the plasma contained? If AI could solve this, the world would have cheap energy forever. 🤷‍♀️



14/24
@VraserX
It will be able to do that in a few years. We need a lot more compute. At the current rate AI doubles task time compute every couple of months. Some people don’t understand exponentials.



Gy2ztvWWwAEEwfM.jpg


15/24
@luisramos1977
No LLM can produce new knowledge, they only repeat what is fed.



16/24
@VraserX
Not true, GPT-5 can produce new insights and knowledge.



17/24
@Padilla1R1
@elonmusk



18/24
@CostcoPM
If this is true all it means is humans solved to this point with existing knowledge but the pieces weren’t in 1 place. AI doesn’t invent.



19/24
@rahulcsekaran
Absolutely amazing to see language models not just summarizing or helping with known problems, but actually producing novel mathematical proofs! The potential for co-discovery between AI and humans just keeps growing. Curious to see how this impacts future research collaborations.



20/24
@nftechie_
@mike_koko is this legit? Or slop?



21/24
@0xDEXhawk
That's almost scary.



22/24
@WordAbstractor
@nntaleb thoughts on this sir?



23/24
@Jesse_G_PA
Fact check this



24/24
@DrKnowItAll16
Wow. A bit frightening that this has happened already in mid 2025.




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196

1/35
@SebastienBubeck
Claim: gpt-5-pro can prove new interesting mathematics.

Proof: I took a convex optimization paper with a clean open problem in it and asked gpt-5-pro to work on it. It proved a better bound than what is in the paper, and I checked the proof it's correct.

Details below.



Gyzo2H4aYAAbbHQ.png


2/35
@SebastienBubeck
The paper in question is this one https://arxiv.org/pdf/2503.10138v1 which studies the following very natural question: in smooth convex optimization, under what conditions on the stepsize eta in gradient descent will the curve traced by the function value of the iterates be convex?



3/35
@SebastienBubeck
In the v1 of the paper they prove that if eta is smaller than 1/L (L is the smoothness) then one gets this property, and if eta is larger than 1.75/L then they construct a counterexample. So the open problem was: what happens in the range [1/L, 1.75/L].



4/35
@SebastienBubeck
As you can see in the top post, gpt-5-pro was able to improve the bound from this paper and showed that in fact eta can be taken to be as large as 1.5/L, so not quite fully closing the gap but making good progress. Def. a novel contribution that'd be worthy of a nice arxiv note.



5/35
@SebastienBubeck
Now the only reason why I won't post this as an arxiv note, is that the humans actually beat gpt-5 to the punch :-). Namely the arxiv paper has a v2 https://arxiv.org/pdf/2503.10138v2 with an additional author and they closed the gap completely, showing that 1.75/L is the tight bound.



6/35
@SebastienBubeck
By the way this is the proof it came up with:



GyzrlsjbIAAEVko.png


7/35
@SebastienBubeck
And yeah the fact that it proves 1.5/L and not the 1.75/L also shows it didn't just search for the v2. Also the above proof is very different from the v2 proof, it's more of an evolution of the v1 proof.



8/35
@markerdmann
and there's some way to rule out that it didn't find the v2 via search? or is it that gpt-5-pro's proof is so different from the v2 that it wouldn't have mattered?



9/35
@SebastienBubeck
yeah it's different from the v2 proof and also v2 is a better result actually



10/35
@jasondeanlee
Is this the model we can use or an internal model?



11/35
@SebastienBubeck
This is literally just gpt-5-pro. Note that this was my second attempt at this question, in the first attempt I just asked it to improve theorem 1 and it added more assumptions to do so. So my second prompt clarified that I want no additional assumptions.



12/35
@baouws
How long did it take you to check the proof? (longer than 17.35 minutes?)



13/35
@SebastienBubeck
25 minutes, sadly I'm a bit rusty :-(



14/35
@rajatgupta99
This feels like a shift, not just consuming existing knowledge, but generating new proofs. Curious what the limits are: can it tackle entirely unsolved problems, or only refine existing ones?



15/35
@xlr8harder
pro can web search. are you sure it didn't find the v2?



16/35
@fleetingbits
so, I believe this - but why not just take a large database of papers, collect the most interesting results that GPT-5 pro can produce

and then, like, publish them as a compendium or something as marketing?



17/35
@Lume_Layr
Really… I think you might know exactly where this logic scaffold came from. Timestamps don’t lie.



18/35
@doodlestein
Take a look at this repo. I’m pretty sure GPT-5 has already developed multiple serious academic papers worth of new ideas. And not only that, it actually came up with most of the prompts itself:

https://github.com/dikklesworthstone/model_guided_research



19/35
@pcts4you
@AskPerplexity What are the implications here



20/35
@BaDjeidy
Wow 17 min thinking impressive! Mine is always getting stuck for some reason.



21/35
@ElieMesso
@skdh might be interesting to you.



22/35
@Ghost_Pilot_MD
🍓🍓🍓🍓
we getting there!



23/35
@Rippinghawk
@Papa_Ge0rgi0



24/35
@Zenul_Abidin
I feel like it should be used to hammer in existing mathematical concepts into people's minds.

There are so many of them that people should be learning at this point.



25/35
@zjasper666
This is pretty sick!



26/35
@threadreaderapp
Your thread is very popular today! /search?q=#TopUnroll Thread by @SebastienBubeck on Thread Reader App 🙏🏼@_cherki82_ for 🥇unroll



27/35
@JuniperViews
That’s amazing



28/35
@CommonSenseMars
I wonder if you prompted it again to improve it further, GPT-5 would say “no”.



29/35
@seanspraguesr
GPT 4 made progress as well.

Resolving the Riemann Hypothesis: A Geometric Algebra Proof via the Trilemma of Symmetry, Conservation, and Boundedness



Gy0DgCcW8AAj44Q.jpg

Gy0DgCoWsAAbSKt.jpg


30/35
@TinkeredThinker
cc: @0x77dev @jposhaughnessy



31/35
@AldousH57500603
@lichtstifter



32/35
@a_i_m_rue
5-pro intimidates me by how much smarter than me it is. It's gotta be pushing 150 IQ.



33/35
@_cherki82_
@UnrollHelper



34/35
@gtrump_t
Wow



35/35
@Ampa37143359
@rasputin1500




To post tweets in this format, more info here: https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196



Commented on Thu Aug 21 13:18:20 2025 UTC

https://nitter.net/ErnestRyu/status/1958408925864403068

I paste the comments by Ernest Ryu here:

This is really exciting and impressive, and this stuff is in my area of mathematics research (convex optimization). I have a nuanced take.

There are 3 proofs in discussion:
v1. (η ≤ 1/L, discovered by human)
v2. (η ≤ 1.75/L, discovered by human)
v.GPT5 (η ≤ 1.5/L, discovered by AI)

Sebastien argues that the v.GPT5 proof is impressive, even though it is weaker than the v2 proof.
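
To make the question concrete (the notation below is a standard formalization, not quoted from the paper or from Ryu): gradient descent on a convex, L-smooth f runs x_{k+1} = x_k - η∇f(x_k), and the "convex value curve" property asks that the per-step decrease shrinks monotonically:

Code:
\[
x_{k+1} = x_k - \eta \nabla f(x_k),
\qquad
f(x_{k+1}) - f(x_k) \;\ge\; f(x_k) - f(x_{k-1})
\quad \text{for all } k \ge 1.
\]

v1 shows this holds whenever η ≤ 1/L, the GPT-5 proof pushes that to η ≤ 1.5/L, and v2 shows 1.75/L is the tight threshold.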

The proof itself is arguably not very difficult for an expert in convex optimization, if the problem is given. Knowing that the key inequality to use is [Nesterov Theorem 2.1.5], I could prove v2 in a few hours by searching through the set of relevant combinations.

(And for reasons that I won’t elaborate here, the search for the proof is precisely a 6-dimensional search problem. The author of the v2 proof, Moslem Zamani, also knows this. I know Zamani’s work enough to know that he knows.)

(In research, the key challenge is often in finding problems that are both interesting and solvable. This paper is an example of an interesting problem definition that admits a simple solution.)

When proving bounds (inequalities) in math, there are 2 challenges: (i) Curating the correct set of base/ingredient inequalities. (This is the part that often requires more creativity.) (ii) Combining the set of base inequalities. (Calculations can be quite arduous.)

In this problem, that [Nesterov Theorem 2.1.5] should be the key inequality to be used for (i) is known to those working in this subfield.

So, the choice of base inequalities (i) is clear/known to me, ChatGPT, and Zamani. Having (i) figured out significantly simplifies this problem. The remaining step (ii) becomes mostly calculations.
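
For reference, assuming the result Ryu cites is the standard set of equivalent characterizations of L-smooth convex functions from Nesterov's Introductory Lectures, the key ingredient inequality would read:

Code:
\[
f(y) \;\ge\; f(x) + \langle \nabla f(x),\, y - x \rangle
      + \frac{1}{2L}\,\lVert \nabla f(y) - \nabla f(x) \rVert^2
\quad \text{for all } x, y.
\]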

The proof is something an experienced PhD student could work out in a few hours. That GPT-5 can do it with just ~30 sec of human input is impressive and potentially very useful to the right user. However, GPT5 is by no means exceeding the capabilities of human experts.
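
As a toy illustration of the property under discussion (not the paper's construction; the counterexamples past 1.75/L are carefully built non-quadratic functions, and the quadratic below stays convex for every stable stepsize), here is a sketch that runs gradient descent and tests the second differences of f(x_k). All names and constants are illustrative choices:

Code:
// Toy sanity check: run gradient descent on an L-smooth convex function and
// test whether the value sequence f(x_k) is convex, i.e. whether every second
// difference f(x_{k+1}) - 2 f(x_k) + f(x_{k-1}) is non-negative.
const L = 4;
const f = x => 0.5 * L * x * x;   // simple L-smooth convex test function
const grad = x => L * x;

function valueCurveIsConvex(eta, x0 = 1, steps = 50) {
  const vals = [];
  let x = x0;
  for (let k = 0; k < steps; k++) {
    vals.push(f(x));
    x -= eta * grad(x);           // gradient descent step
  }
  for (let k = 1; k + 1 < vals.length; k++) {
    if (vals[k + 1] - 2 * vals[k] + vals[k - 1] < -1e-12) return false;
  }
  return true;
}

for (const c of [0.5, 1.0, 1.5, 1.75, 1.9]) {
  console.log(`eta = ${c}/L ->`, valueCurveIsConvex(c / L));
}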


│ Commented on Thu Aug 21 14:02:25 2025 UTC

│ Task shortened from a few hours with domain expert-level human input, to 30 secs with a general model available on the web. Impressive. Peak is not even on the horizon.
 

bnew

Veteran
@JoelB

I just used it to fix a few bugs in a bsky bookmarklet I've been working on for a while but hadn't been able to properly address. GPT-5 Thinking seems to have resolved them in 4 iterations, though. I did read that when gpt-5 released it wasn't performing as expected, but that was after I tried modifying long (20K+ character) code that would be returned with lost functionality. Now that seems to have changed, especially with gpt-5 thinking enabled.


Changelog — all changes made to the bookmarklet during this conversation​


I grouped changes by the logical “fix” or iteration we applied. For each change I list what was changed, where (functions or code areas), and why it was needed.


Initial fixes (first reply)​

Goal: Stop generating undefined post IDs and prevent quoted-post display names from being used to build quoted-post URLs.
What changed

  • extractPost()
    • Stronger main-post ID extraction: prefer a[href*="/post/"] from the DOM and fallback to calling the getPostThread API only when necessary.
    • Fixed logic that produced https://.../post/undefined.
    • When building quoted post URLs, use the handle from the quoted post’s href (if present) instead of the quoted post’s visible display name.
    • Added extra fallbacks to find video URLs for quoted posts (API check, poster attribute, blob src).
  • getEmbedUrl() / constructVideoUrl()
    • Kept and clarified embed-video detection and URL construction (HLS vs webm).
  • Small utilities
    • Kept cleanText(), extractMedia() and others but used them more consistently in extractPost().

Why

  • The original algorithm often built the main post URL from brittle DOM fallbacks that weren’t guaranteed to include a /post/<id> anchor. That caused undefined. Quoted posts used display names (not handles) to build URLs which caused wrong quoted links.



SPA / stale-DOM mitigation (second reply)​


Goal: Avoid extracting posts that are still present in the DOM but no longer visible after client-side navigation (single-page app behavior).

What changed

  • Added visibility helpers
    • isElementVisible(el) — quick test whether an element overlaps the viewport (checks computed styles and bounding rect).
    • visibilityScore(el) — computes on-screen overlap area (used to pick the most visible post).
  • extractPosts()
    • Filter allPosts to visiblePosts using isElementVisible.
    • Active-post selection order improved:
      1. visible post whose /post/<id> matches the current URL,
      2. most-visible post (largest visibilityScore),
      3. fallback to first DOM post.
    • Parent-detection now prefers the visible previous post, not some stale off-screen post.
  • Kept UI and copy behavior but changed the post list source to avoid stale nodes.

Why

  • Single-page apps often keep old nodes in the DOM when you navigate; that meant the bookmarklet could be reading the wrong posts. The fix is to only consider posts that are actually visible (falling back to all posts when nothing is visible), so the extraction matches what the user sees.
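
A minimal sketch of what such visibility helpers could look like; the real bookmarklet is in the pastebin link at the end of this post, and the exact names and checks here are assumptions:

Code:
// Hypothetical sketch of the visibility helpers described above.
function isElementVisible(el) {
  const style = getComputedStyle(el);
  if (style.display === 'none' || style.visibility === 'hidden') return false;
  const r = el.getBoundingClientRect();
  // The element must overlap the viewport at all.
  return r.bottom > 0 && r.right > 0 &&
         r.top < window.innerHeight && r.left < window.innerWidth;
}

function visibilityScore(el) {
  const r = el.getBoundingClientRect();
  const w = Math.max(0, Math.min(r.right, window.innerWidth) - Math.max(r.left, 0));
  const h = Math.max(0, Math.min(r.bottom, window.innerHeight) - Math.max(r.top, 0));
  return w * h; // on-screen overlap area, used to pick the most visible post
}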



Fix for extracting too few posts (third reply)​


Goal: Ensure the bookmarklet builds a proper list of posts (parent + active + replies) up to the requested count — instead of frequently returning only 1–2 posts.

What changed

  • extractPosts()
    • Reworked how we build the array of post elements to process:
      • Introduced postsToProcessElements and pushEl() to collect unique elements in a controlled order.
      • Add parent (if present), then active, then take subsequent posts from visiblePosts, falling back to allPosts.
      • Respect count properly (parent + active count toward the requested total).
    • Avoid duplicate elements, ensure consecutive replies are selected (first try visible, then DOM).
    • Adjusted chunking logic so the full text & large images section contains the intended posts.

Why

  • The previous slicing logic inadvertently skipped or duplicated elements and did not respect the requested count correctly. Building postsToProcessElements deterministically fixes that.
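
A sketch of the deterministic collection logic described above; pushEl and postsToProcessElements mirror the changelog's names, but the exact code here is an assumption:

Code:
// Hypothetical sketch: parent first, then the active post, then visible
// replies, then DOM-order fallback, without duplicates and capped at count.
function collectPostElements({ parentEl, activeEl, visiblePosts, allPosts, count }) {
  const postsToProcessElements = [];
  const seen = new Set();
  const pushEl = el => {
    if (el && !seen.has(el) && postsToProcessElements.length < count) {
      seen.add(el);
      postsToProcessElements.push(el);
    }
  };
  pushEl(parentEl);             // parent (if present) counts toward the total
  pushEl(activeEl);             // then the active post
  visiblePosts.forEach(pushEl); // consecutive visible replies first
  allPosts.forEach(pushEl);     // fall back to all DOM posts if too few
  return postsToProcessElements;
}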



Quoted-post causing wrong main post id (fourth reply)​


Goal: Fix the bug where the script picks a quoted post’s /post/<id> anchor instead of the main post’s ID, producing the wrong first URL (and sometimes wrong subsequent URLs).

What changed

  • extractPost()
    • Main change: selecting the correct /post/<id> anchor for the main post.
    • New approach:
      • Prefer data-testid="postThreadItem-by-<handle>" to get the mainHandle.
      • When selecting anchors with /post/, prefer anchors whose path contains that same mainHandle immediately before post. (/profile/<mainHandle>/post/<id>).
      • If no exact match, scan anchors but skip anchors that are in nested quoted-post containers (i.e., anchors inside [role="link"][aria-label*="Post by"] that are not the top-level element) — these are likely the quoted post’s timestamp/anchor.
      • Only fall back to API getPostThread when necessary and only when we have a reliable mainHandle.
    • This ensures the script doesn't accidentally grab 3lyefriges22b (quoted post id) for the main post when the main post id is 3lyfqnnu5mc2c.
  • Tightened the quoted-post fallback logic: still attempts to use quoted anchor href, then API lookup, then poster/blob fallback for quoted videos.

Why

  • A quoted post (inside the same post DOM) often contains its own /post/ links (timestamp link). A naïve querySelector('a[href*="/post/"]') can pick that inner quoted link. Requiring the anchor to match the mainHandle prevents the script from confusing quoted-post anchors with the main-post anchor.
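
A sketch of the anchor-selection rule in code; the selectors follow the changelog's description, but this is an illustration, not the bookmarklet's actual source:

Code:
// Hypothetical sketch: prefer a /post/ anchor whose path carries the main
// author's handle, otherwise skip anchors nested inside quoted-post
// containers, so a quoted post's timestamp link is never mistaken for the
// main post's URL.
function findMainPostUrl(postEl, mainHandle) {
  const anchors = [...postEl.querySelectorAll('a[href*="/post/"]')];
  // 1. Exact match on /profile/<mainHandle>/post/<id>.
  const exact = anchors.find(a =>
    (a.getAttribute('href') || '').includes(`/profile/${mainHandle}/post/`));
  if (exact) return new URL(exact.getAttribute('href'), location.origin).href;
  // 2. Otherwise take the first anchor that is not inside a nested quote.
  const outer = anchors.find(a => {
    const q = a.closest('[role="link"][aria-label*="Post by"]');
    return !q || q === postEl; // the top-level post element itself is fine
  });
  return outer ? new URL(outer.getAttribute('href'), location.origin).href : null;
}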



Other cross-cutting and preservation notes​


  • UI pieces like createPostCountPrompt() and showToast() were preserved and kept in the same location/flow (text and behavior unchanged).
  • extractMedia() remained the same but is used more consistently (images limited by MAX_IMAGES_PER_POST).
  • Video embed detection was preserved and made slightly more robust (handles video.webm vs HLS playlist).
  • normalizeUrl() and link collection behavior were added/used consistently to append non-post/profile external links into the final text.
  • I preserved your original output formatting (SPOILER blocks, [img], [U][URL] blocks, "To post tweets..." footer) exactly throughout changes.
  • Where the code makes network calls to public.api.bsky.app, calls were limited to situations that benefit from them (active post fallback or when we need embed details). This reduces unnecessary requests.



Extensive, detailed summary in layman terms​


Below is a plain-English explanation of each problem you saw and how the code changes actually fix it — written so you can explain it to someone who’s not a programmer.



The problems you were seeing (plain language)​


  1. Wrong first post URL (sometimes undefined):
    When you used the bookmarklet it sometimes produced a link like .../post/undefined or it showed the wrong post ID (often it used the quoted post’s ID instead of the main post’s). That happens because the script was looking around the post’s HTML and picking the first /post/ link it found — but a quoted post inside the main post can also include its own /post/ links (like timestamp links). If the script picks a quoted link, the URL is wrong.
  2. Quoted post display names used to make URLs:
    The script sometimes used the visible name (the pretty display text) of the quoted user to build a link, but websites usually use a handle or internal id in the URL. Building a URL with the display name leads to incorrect or broken links.
  3. Single-page app (SPA) stale DOM nodes:
    When you click around (open a post, then click a reply, then click back to timeline), the site keeps old post elements in the background. Your bookmarklet was reading those old posts instead of the ones you can actually see. So it would copy data from posts you no longer have visible until you reload the page.
  4. Only 1–2 posts extracted when more replies are visible:
    The code’s logic for collecting “the next N posts” sometimes sliced the wrong array and ended up only including 1 or 2 posts, even though several replies were visible on screen.



What I changed to fix each problem (plain language)​


  1. Make sure we pick the main post’s link (not a quoted post’s link)
    • Instead of grabbing any /post/ link inside the post, the script now prefers a link that follows the same handle shown on the post (i.e. link path /profile/<same-handle>/post/<id>). That way the script won’t accidentally pick a timestamp/anchor that belongs to a quoted post inside the main post.
    • If it still can’t find a reliable anchor in the DOM, then it gently asks the site’s API for the post id — but only when that’s likely to help (e.g., the post currently shown in the URL).
  2. Use the actual handle for quoted posts (not display names)
    • For quoted posts, the script now tries to find the quoted post’s real handle from the quoted post’s own /post/ link. If it can’t find it, it will attempt an API lookup as a fallback. This prevents URLs built from display names that aren’t valid in the profile path.
  3. Ignore stale, invisible posts (SPA fix)
    • The bookmarklet now checks whether each post element is actually visible inside your browser window before it uses it.
    • If multiple posts are visible it picks the one you’re actually looking at (matching the URL) or the most visible one on the screen otherwise.
    • That means you don’t have to reload the page every time you navigate — it will prefer what’s visible.
  4. Collect replies reliably (get more than 1–2 posts)
    • The script now builds a deterministic list of post elements to process (parent → active → subsequent replies), and continues taking visible replies until it reaches the number of posts you asked for.
    • It avoids duplicates and fills in with DOM posts if visible posts are too few.
  5. Improved media & video handling
    • Image extraction respects the MAX_IMAGES_PER_POST cap.
    • Video handling tries multiple fallbacks (embed data via API, poster URL pattern, blob source) so video links are captured more reliably.
  6. Less noisy API usage
    • API calls (to fetch thread/post details) are used only when they meaningfully improve accuracy (active-post fallback, or when embedded video data is needed). This reduces unnecessary network requests.
  7. Kept your original output format
    • All fixes were made while preserving your output structure (SPOILERs, [img], [U][URL], and the footer), so the final copied text remains exactly in the format you use.



What that means for you day-to-day​


  • When you run the bookmarklet on a post that contains quoted posts, the first URL will now be the correct main-post URL (no undefined, and no swapped quoted-post id).
  • When you click around rapidly in the site (open a post → click a reply → click a profile → click back), the bookmarklet will now operate on what you see, not stale hidden posts left in memory — you won’t need to reload the page as often.
  • If several replies are visible the script will now include consecutive replies up to the number you requested, instead of stopping after 1–2.
  • Media (images/videos) are still handled and appended to the text, and quoted videos are captured more reliably.



Short summary (one-liner)​


I fixed the id-selection and quoted-post URL bugs by preferring anchors that belong to the post’s actual handle, made quoted-post detection use real handles, ignored stale/invisible DOM nodes so the script processes only what you actually see, and reworked the post-collection logic so you reliably get the full set of visible replies up to the count you requested — while keeping your output format intact.


📝 What the bookmarklet does​

  • When you run the bookmarklet on a Bluesky post page (or timeline view), it looks at the post you’re currently focused on.
  • It collects:
    • The main post’s text (cleaned up from HTML formatting),
    • The author handle,
    • The post URL,
    • Any quoted post’s text, author, and URL,
    • Any attached images or videos,
    • Any other links in the post that aren’t Bluesky profiles or posts (like YouTube, news articles, etc.).
  • It also looks for parent posts (the one you replied to) and reply posts (the responses underneath).
  • You can choose how many posts to extract in total (e.g., 1, 3, 5).
  • It then organizes everything neatly into a BBCode-formatted block that you can paste directly into a forum or message board.
  • The output always includes:
    • A [B][SPOILER] section[/B] with the full text and any large images,
    • A list of links to each post with the handle attached,
    • A short footer that tells you how to post the extracted data.

🔎 Example — Sample Output in BBCode​

Here’s what it might look like if you extracted 1 main post that has text, one image, and a quoted post with a video.
Code:
[SPOILER="Full Text & Large Images"]
🇺 alice.bsky.social
This is my main post text.
It includes some commentary and a link to https://example.com.
[QUOTED POST]
🇺 bob.bsky.social
This is the quoted post text with a funny video.
[img]https://cdn.bsky.app/img/feed_fullsize/post-image-1234.jpg[/img]
[video]https://video.cdn.bsky.app/hls/abcdef/master.m3u8[/video]
[/SPOILER]
🇺 alice.bsky.social
[U][URL='https://bsky.app/profile/alice.bsky.social/post/3lyabcd123']https://bsky.app/profile/alice.bsky.social/post/3lyabcd123[/URL][/U]
🇺 bob.bsky.social (Quoted)
[U][URL='https://bsky.app/profile/bob.bsky.social/post/3lyqwerty456']https://bsky.app/profile/bob.bsky.social/post/3lyqwerty456[/URL][/U]


pastebin.com/CtRm81Le
 
what domains are you referring to?

Coding. Debugging. Niche subjects.

It takes a roundabout way of answering my questions. It requires quite a bit of hand-holding at times.

I am not mad though; this is 1000 times better than googling things. But chat is a glorified google search.

It also gets weird when answering election-related things, which was worrying, to say the least.

I’m afraid that with the rumors of OpenAI going public, it will be restricted more and more.
 

bnew

Veteran
[Discussion] More TrumpGPT Epstein gaslighting



Posted on Tue Sep 9 11:35:02 2025 UTC

/r/ArtificialInteligence/comments/1ncgfdt/more_trumpgpt_epstein_gaslighting/



Apparently the fact that Trump wrote Epstein a birthday letter is "alleged by Democrats" :')

Not, you know, independently reported and released by the Wall Street Journal with documentation provided by the Epstein estate or anything.

Funny how differently it responds about Bill Clinton to the exact same thing with the same prompt ...

Probably "hallucinations" right?

Totally not post-human training to make sure TrumpGPT says the "right" thing about Trump & Epstein.

ChatGPT - Clinton Epstein letter query

ChatGPT - Trump Epstein birthday note

Another one: ChatGPT - Maxwell prison move reason

There's piles of examples of this by the way. More in https://old.reddit.com/r/AICensorship
IVOqti4h.jpg

rd9ARswh.jpg
 

bnew

Veteran
Coding. Debugging. Niche subjects.

It takes a roundabout way of answering my questions. It requires quite a bit of hand-holding at times.

I am not mad though; this is 1000 times better than googling things. But chat is a glorified google search.

It also gets weird when answering election-related things, which was worrying, to say the least.

I’m afraid that with the rumors of OpenAI going public, it will be restricted more and more.

do you have a common way of structuring your prompts?
 
do you have a common way of structuring your prompts?

Not really. I’ve been experimenting with it, and tweaking its traits:

You are an expert with a backbone: skeptical, innovative, and direct. Do not default to agreement—your job is to challenge me when my thinking or approach is flawed, unrealistic, or tunnel-visioned, especially in serious discussions.
Feedback should be straightforward and constructive by default (“this has holes, let’s fix them”), with escalation to firmer pushback only when I’m clearly off track. Use humor fluidly and match my vibe—dry wit, sarcasm, or casual banter is welcome if it fits naturally. Keep me moving forward: if I’m spinning or overthinking, end with one to three specific next steps; if the conversation is resolved, state clearly that I’m good to go.
Every answer should start with a direct take, then explain the reasoning and biggest risks, and finish with what to do only if action is needed. Prioritize correctness above speed: be accurate, verifiable, and willing to go deep when it matters. When I’m illogical, escalate pushback as needed, from a gentle nudge to a full stop-and-reset with an alternative path.

-

Curious what yours is.
 