bnew

Veteran · Joined Nov 1, 2015 · Messages 67,819 · Reputation 10,457 · Daps 183,357
Okay, let's call this bookmarklet... BBCode Transformer Plus!
BBCode Transformer Plus is like a little magic wand for text areas on websites. Imagine you're typing in a box online, maybe on a forum or a website that uses special codes called BBCode to make your text look fancy (like bold, underlined, or linked text).
This bookmarklet helps you in two main ways:
  1. It takes a different kind of special code (the $$...$$ stuff) and turns it into the website's regular fancy code ([...] BBCode). Think of it like a translator. If you type something like $$URL=website.com$$My Link$$/URL$$, this tool will change it into the format the website understands: [URL=website.com]My Link[/URL]. It does this for different things like bold text ($$B$$), italics ($$I$$), images ($$IMG$$), headings ($$HEADING=2$$), lists ($$LIST$$, $$*$$), and even special math symbols ($$...$$). It also tries to be smart about extra spaces you might accidentally type inside these special codes.
  2. It also works directly with the website's regular fancy codes ([...] BBCode) for links. If you already have a link written like [URL=website.com]My Link[/URL], this tool will automatically add an underline to it, making it [U][URL=website.com]My Link[/URL][/U]. It does the same for plain [URL]website.com[/URL] links, turning them into [U][URL]website.com[/URL][/U].
Essentially, this tool makes it easier for you to format your text in online text boxes. If you're used to typing in one style of special codes, it can convert them to the style the website understands. Plus, it automatically adds underlines to any existing or newly converted website links to make them stand out. It also does some tidying up of extra line breaks and removes some other types of special formatting ($$FONT$$, $$COLOR$$) to keep things clean.
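
For the curious, here is a minimal sketch of this kind of $$...$$ to [...] translation, assuming plain regex replacement on the focused textarea; the actual bookmarklet handles more tags ($$HEADING$$, $$FONT$$, $$COLOR$$) and more edge cases:
Code:
javascript:(() => {
  /* Minimal sketch, not the actual bookmarklet: convert $$TAG$$ markers
     in the focused textarea into regular [TAG] BBCode with regexes. */
  const ta = document.activeElement;
  if (!ta || ta.tagName !== 'TEXTAREA') return;
  ta.value = ta.value
    /* $$URL=site$$text$$/URL$$ -> [URL=site]text[/URL], tolerating stray spaces */
    .replace(/\$\$\s*URL=([^$]+?)\s*\$\$([\s\S]*?)\$\$\s*\/URL\s*\$\$/gi, '[URL=$1]$2[/URL]')
    /* $$B$$, $$/B$$, $$I$$, $$IMG$$, $$LIST$$, $$*$$ ... -> [B], [/B], [I], ... */
    .replace(/\$\$\s*(\/?)(B|I|IMG|LIST|\*)\s*\$\$/gi, '[$1$2]')
    /* underline any [URL]...[/URL] that is not already underlined */
    .replace(/(?<!\[U\])(\[URL[^\]]*\][\s\S]*?\[\/URL\])/gi, '[U]$1[/U]');
})();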

bugfix update:
5/14/2025
fixed the missing-space issue between adjacent [/u][u] tags

pastebin.com/QuMuJms5

 

bnew

What This Userscript Does on Invidious Instances

In simple terms:
This userscript adds a floating "BB" button to Invidious video pages. When you click the button, it enables extra copy features to help you quickly grab comments or video info in BBCode format, ready for pasting into forums or platforms that accept BBCode.​



Key Features and Functionality

  1. Floating "BB" Button for Activation
    - A round "BB" button appears at the bottom-left of the Invidious screen.
    - Clicking it turns on or off the BBCode tool.
    - When active, the button turns red to show it's enabled.​
  2. Comment Copy Features
    - Each comment gains two new clipboard buttons:
    • Copy Comment: grabs the comment, author, date, number of likes, and the direct comment link, then formats it all into BBCode.
    • Copy Nested (Parents): lets you also include the parent comments (the comments above the one you're copying, in a reply chain); a dropdown lets you choose how many parent comments to include (e.g., just the selected comment, or also the replies above it).
    - Each entry is indented to show the hierarchy and includes author, date, likes, BBCode-formatted content, and links.
  3. Video Metadata Copy Feature
    - Adds a clipboard button under the video’s subscribe area.
    - Clicking this copies:
    • Video title
    • Original YouTube link
    • Channel name and link
    • View count, likes, publish date
    • Video description (also formatted into BBCode)
    - This whole package is formatted for easy sharing in a BBCode forum post.​
  4. BBCode Formatting
    - Converts HTML tags (like bold, italics, links) used in comments or descriptions into BBCode equivalents.
    - Examples (see the sketch after this list):
    • <b>text</b> becomes [B]text[/B]
    • Links become [ICODE]url[/ICODE]
    • Paragraphs become line breaks.
  5. User Feedback
    - Shows a popup toast message like "Comment copied!" or "Video metadata copied!" to confirm when something is successfully sent to your clipboard.​
  6. Works Dynamically
    - The script actively monitors the page, so if comments or the video area change (for example, when you load new comments), it automatically attaches the copy buttons as needed.
    - You can deactivate the tool anytime by clicking the BB button again, which removes all extra buttons and resets the tool.​
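
To make feature 4 concrete, here is a minimal sketch of an HTML-to-BBCode walker; htmlToBBCode here is an assumed helper, not the userscript's actual code, which handles more tags and Invidious-specific markup:
Code:
/* Minimal sketch (assumed helper, not the script's actual code): walk a
   comment's DOM and emit BBCode equivalents for a few common HTML tags. */
function htmlToBBCode(node) {
  if (node.nodeType === Node.TEXT_NODE) return node.textContent;
  const inner = Array.from(node.childNodes).map(htmlToBBCode).join('');
  switch (node.tagName) {
    case 'B': case 'STRONG': return `[B]${inner}[/B]`;
    case 'I': case 'EM':     return `[I]${inner}[/I]`;
    case 'A':                return `[ICODE]${node.href}[/ICODE]`; /* links as plain URLs */
    case 'P':                return inner + '\n';                  /* paragraphs -> line breaks */
    case 'BR':               return '\n';
    default:                 return inner;
  }
}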



Feature Summary Table

Feature | What it Does | How to Use
Floating BB Button | Turns BBCode copy features on/off | Click the BB button
Copy Comment | Copies an individual comment as BBCode (incl. author, likes, date, formatting) | Click 📋 on the comment
Copy with Parents | Copies a comment plus its chain of parent replies, indented and BBCode-formatted | Click 📋 (blue) and select depth
Copy Video Metadata | Copies all main video info and the description as BBCode | Click 📋 near the subscribe button
BBCode Formatting | Changes HTML formatting to BBCode (bold, italics, links) for safe forum pasting | Automatic
Dynamic & Reversible | Works with page changes; can be turned off/on easily | Click BB again to disable



Why Use This Script?

  • Saves time: Instant BBCode for comments or videos, no manual formatting.
  • Preserves context: Nested parent feature helps keep the reply chain intact for discussions.
  • Easy to toggle: Simple interface, no clutter when deactivated.
  • Forum-friendly: Outputs are ready to paste in forums or wikis that use BBCode for formatting.



In short, this userscript is a handy tool for forum users who want to easily share Invidious video comments or video information in well-formatted BBCode, with minimal hassle.

[screenshot: lm7kLDe.png]


example output:
Code:
[ICODE]https://www.youtube.com/watch?v=example12345&lc=comment123[/ICODE]
SoftwareFan commented on May 10, 2025 | Likes: 54

This was exactly what I needed! I had an old Dell laptop from 2012 that was basically unusable with Windows, and now it runs [B]perfectly[/B] with Linux Mint. Thank you so much for this guide!

[ICODE]https://www.youtube.com/watch?v=example12345&lc=comment456[/ICODE]
LinuxNewbie commented on May 12, 2025 | Likes: 28

I'm having trouble with the GRUB bootloader. When I try to boot after installation, I just get a black screen. Any suggestions?

│   [ICODE]https://www.youtube.com/watch?v=example12345&lc=comment789[/ICODE]
│   TechHelperGuy commented on May 12, 2025 | Likes: 15
│   
│   @LinuxNewbie Try adding the "nomodeset" parameter to your boot options. I explain how to do this at 8:45 in the video. If that doesn't work, let me know what graphics card you're using!


pastebin.com/3TVJGgbQ
 

bnew

What is "Bluesky to bbcode v1.2.3"?

A Simple Tool to Copy Bluesky Posts as Forum-Ready BBCode

"Bluesky to bbcode v1.2.3" is a JavaScript bookmarklet you can run while viewing posts on the Bluesky social network. It lets you quickly convert an entire Bluesky post (plus replies and images) into BBCode, the formatting language used by forums like XenForo and vBulletin. When you use this tool, you get a complete, clean BBCode version of the post, making it easy to share Bluesky content elsewhere.

How Does It Work?​


  • 1. Activating the Tool: You run the script (usually saved as a browser bookmark) while looking at a Bluesky post or thread.
  • 2. Selecting Posts: The tool pops up a prompt so you can pick how many posts or replies to copy (for example, just the main post, or the post with the first 5 replies).
  • 3. Collecting Data: It scans the page for visible posts, grabbing the text, author, and any images or video links.
  • 4. Preserves Formatting: All links, mention tags, and quoted/referenced posts are converted to BBCode tags ([U], [URL], [img], [QUOTE]), making them display cleanly on most major forums.
  • 5. Handles Media: Images are turned into `[img]IMAGE_LINK[/img]` tags, and video links are preserved as clickable links.
  • 6. Supports Quotes and Threads: If a post is quoting another post, this is included as a clearly identified section in the output.
  • 7. Ready for Forums: The final result is shown as a BBCode block, ready to paste directly into a forum post – you just paste it into your reply or new thread.
  • 8. Automatic Copy: Once done, it automatically copies the BBCode to your clipboard and shows a little popup saying how many posts were copied.

  • Typical Output Format Includes:
    • Direct link to the Bluesky post
    • All included posts and replies in order
    • All attached images, formatted for forums
    • Clickable links for videos and external sites
    • Quotes and nested conversations clearly marked out
    • A "spoiler" section showing the full text and media, collapsed by default for tidy forum display


Example: What Does the BBCode Look Like?​

Code:
https://bsky.app/profile/exampleuser/post/abc123

[SPOILER="full text & large images"]

1/2
🇺🇸 exampleuser
This is the main post's text here.

[img]https://cdn.bsky.app/examplepic1.jpg[/img]

2/2
🇺🇸 anotheruser
A reply to the main post!

[img]https://cdn.bsky.app/examplepic2.jpg[/img]

[COLOR=rgb(184,49,47)][B][SIZE=5]To post tweets in this format, more info here: [URL]https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196[/URL][/SIZE][/B][/COLOR]

[/SPOILER]


In summary: "Bluesky to bbcode v1.2.3" makes it easy to turn any Bluesky post or thread (with images, quotes, and links) into clean, forum-ready BBCode so you can share Bluesky content on sites like XenForo, vBulletin, or any BBCode-compatible forum in just seconds!

No technical knowledge needed—just highlight, copy, and paste!

[screenshot: MWpwSpd.png]

example output from screenshot above:
Code:
https://bsky.app/profile/nbanewyork.bsky.social/post/3lpeqwwlx6s2z
[SPOILER="full text & large images"]

1/1
🇺🇸 nbanewyork.bsky.social

[U][URL]https://video.bsky.app/watch/did%3Aplc%3Akoh3mxmkaou5mkvy6wgj6lk7/bafkreid7kf2rkabb7clw6gkhjg7gyld3qiwixx2mmcednsu4fl5j4ki424/playlist.m3u8[/URL][/U]

[COLOR=rgb(184,49,47)][B][SIZE=5]To post tweets in this format, more info here: [URL]https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196[/URL][/SIZE][/B][/COLOR]
[/SPOILER]

update:
5/17/25
replace 🇺🇸 with 🆔 (the ID emoji followed by a space) before profile names.


pastebin.com/wLBTWN1d

originally posted 5/17/2025

pastebin.com/mu1LVzTK
 

bnew

Reddit BBCode Converter Userscript: What It Does

Overview
This userscript adds a special floating “RC” button to all Reddit pages. When active, it lets you instantly convert Reddit posts and comments into BBCode—the formatting language used on forums like XenForo or ProBoards—so you can easily copy Reddit content and paste it in a readable, styled way on other forums.

Main Features
  • Floating RC button: A blue “RC” button appears in the bottom-left of every Reddit page. Click to activate/deactivate the script (turns red when off).
  • Copy buttons for posts and comments: Every post and comment gets a 📋 button. Clicking it copies that content as BBCode to your clipboard.
  • Nested comment copying: Another 📋 button lets you also include parent comments, so you can show a conversation with multiple replies. A dropdown lets you pick how many parent comments to include.
  • Formatting conversion: The script transforms Reddit’s HTML (bold, italics, links, code, lists, quotes, images) into the closest BBCode equivalents, preserving style and structure.
  • Image handling: Post images and gallery pictures are detected and included as BBCode [img] tags. (Optionally, you can choose whether to use raw links or data-URLs.)
  • Spoilers for context: Posts and comments get wrapped in BBCode [spoiler] tags with time/date info, for clear, readable context.
  • Toast notifications: When you copy content, a popup tells you it worked.
  • Automatic updates: New Reddit content or dynamic page loads are detected, so the buttons appear as you scroll (works on both old and new Reddit).
  • Easy deactivation: Turn off the script anytime with one click—buttons disappear instantly.

How It Works, Step-By-Step
  • 1. “RC” Button: This floating button activates or deactivates the script. Blue = on, red = off.
  • 2. Copy Buttons On Reddit: When active, every post and comment gets a 📋 button below it. For comments, a second 📋 button lets you copy the comment plus its parent thread.
  • 3. BBCode Formatting: Clicking a copy button converts the Reddit content (text, links, formatting, images) into standard BBCode. E.g. Reddit bold becomes [b]bold[/b], images become [img]...[/img], and links become [url='...']...[/url].
  • 4. Nested Comments: For replies—like on nested comment threads—you can include 1, 2, or more of the parent comments in a BBCode thread. Each “parent” gets indented visually for easy quoting.
  • 5. Spoiler Wrapper: Posts and comments are wrapped in a [spoiler][/spoiler] tag with a title like “Commented on 2025-05-23 21:02:56”. This provides extra context when pasting into a forum.
  • 6. Images & Galleries: The tool tries to grab all post images or galleries and includes them as BBCode images, except for common Reddit static images or subreddit icons.

Example Output
Here’s what this script outputs as BBCode for a Reddit post and a nested comment:

Code:
[b][Some Flair][/b] [b]How to Make Pancakes[/b]

https://www.reddit.com/r/food/comments/abc123/how_to_make_pancakes/

[spoiler="text"]
Posted on 2025-05-23 21:02:56

[img]https://i.redd.it/pancakes123.jpg[/img]
[img]https://i.redd.it/syrup456.jpg[/img]

Pancakes are easy to make!
[list]
[*]Mix flour, eggs, and milk.
[*]Pour batter on a hot skillet.
[*]Flip when golden.
[/list]
Top with [u][url='https://en.wikipedia.org/wiki/Maple_syrup']maple syrup[/url][/u] and enjoy!
[/spoiler]

Code:
https://www.reddit.com/r/food/comments/abc123/-/def456

[spoiler="text"]
Commented on 2025-05-23 21:10:30

That looks delicious! Did you use real maple syrup?


[/spoiler]
│ https://www.reddit.com/r/food/comments/abc123/-/ghi789
│
│ [spoiler="text"]
│ Commented on 2025-05-23 21:05:02
│
│ I made these last weekend, added blueberries to the mix—turned out awesome!
│ [/spoiler]

In summary: This script gives you a simple, reliable way to copy any Reddit content and instantly turn it into forum-ready BBCode, complete with all formatting, context, nested quoting, and images—perfect for crossposting Reddit conversations or sharing posts on any XenForo forum.



pastebin.com/v0EKePW6
 

bnew

This basically includes all tweet URLs in the thread, and it also fixes the issue of extracting image URLs from the xcancel instance.

Nc5v3.4 Bookmarklet: Detailed Summary for Nitter Instances

The bookmarklet, now named "Nc5v3.4," is a powerful tool designed to help you easily copy and format Twitter (or Nitter) threads into a BBCode format suitable for forums like XenForo. It automates several tedious steps, making it much quicker to share tweet content, including text, images, and videos.

Here's an extensive and detailed summary of what it does, explained in simple terms:



1. Captures Tweet Information from the Page

When you activate this bookmarklet on a Nitter (or Twitter) thread page, it first scans the entire visible thread to gather key information from each individual tweet:

* Tweet URL: It identifies the unique web address (URL) for every tweet in the thread. Crucially, it normalizes these URLs to use the standard twitter.com domain and removes any extra tracking information (like ?s=20 or ?ref_src=twsrc%5Etfw) to keep them clean and consistent.
* Tweet Author: It grabs the username (e.g., @ElonMusk) of the person who posted each tweet.
* Tweet Text: It extracts the full text content of each tweet. More than just plain text, it intelligently processes the raw HTML to:
* Convert line breaks (<br>) into forum-friendly newlines.
* Transform embedded links (like those for mentions @username or external URLs) into proper BBCode [URL] tags, often wrapped in [U][/U] (underline) tags for emphasis.
* Remove any other leftover HTML tags (<p>, <span>, etc.) that would clutter the forum post.
* Replace non-breaking spaces (&nbsp;) with regular spaces.
* Images: If a tweet contains images, it finds their direct URLs (often hosted on pbs.twimg.com) and converts them into BBCode [img][/img] tags.
* Videos: For tweets with videos, it attempts to locate the direct video source URL (usually from video.twimg.com or similar Twitter video domains) and presents it as a clean [U][URL][/URL][/U] link.



2. Intelligent Formatting and Chaining

The bookmarklet doesn't just dump all the info; it formats it smartly, especially for long threads:

* Main Tweet First: It identifies the very first tweet in the thread and displays its URL prominently at the top of the output. This is considered the "starting point" of the conversation.
* "Thread Continued" Spoiler: This is a key feature and specifically addresses your request. All *subsequent* tweet URLs from the thread (after the main tweet) are collected and placed inside a BBCode [SPOILER="thread continued"][/SPOILER] tag. This keeps the initial post clean but allows anyone interested to click and reveal all the other tweet links in the thread. The URLs within this spoiler are presented in the order they appear on the original Nitter/Twitter page.
* "Full Text & Large Images" Spoiler: The complete text content of *all* tweets, along with their extracted image and video URLs (already converted to [img][/img] and [URL][/URL] tags), is placed inside another BBCode [SPOILER="full text & large images"][/SPOILER] tag. This keeps the main post concise while providing all the detailed content when expanded.
* Each tweet's full text within this spoiler is clearly labeled with its numerical position in the thread (e.g., 1/5, 2/5) and the author's username (e.g., @AuthorName).
* Chaining Long Threads: If a thread is very long or has many images (currently, if image count reaches 20), the bookmarklet can intelligently break the output into multiple "chains" or blocks. Each chain follows the same formatting pattern (main URL, thread continued spoiler, full text spoiler), making very long threads more manageable on a forum.
* Instructional Link: At the bottom of the "full text & large images" spoiler, it includes a prominent, colored link pointing to a forum thread (https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196) that presumably provides more guidance on how to post tweets in this specific format.
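
The URL clean-up described in section 1 amounts to something like this sketch (using the standard URL API; the bookmarklet's actual code may differ):
Code:
/* Minimal sketch: normalize a Nitter tweet link to a clean twitter.com permalink. */
function normalizeTweetUrl(href) {
  const u = new URL(href, location.href);
  u.hostname = 'twitter.com'; /* Nitter front-end host -> twitter.com */
  u.search = '';              /* drop ?s=20, ?ref_src=... tracking params */
  u.hash = '';                /* drop any fragment */
  return u.toString();
}
/* normalizeTweetUrl('https://nitter.example/user/status/123?s=20')
     -> 'https://twitter.com/user/status/123' */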



3. Automated Copying and Notification

Once all the information is processed and formatted:

* Automatic Clipboard Copy: The entire formatted BBCode output is automatically copied to your clipboard. You don't need to manually select and copy anything.
* Visual Confirmation: A small, temporary notification box appears in the bottom-left corner of your screen (e.g., Copied: 5 tweets) to confirm that the content has been successfully copied and how many tweets were processed. This notification fades away after a few seconds.
* Console Messages: For developers or those troubleshooting, it provides console messages (console.log()) indicating its execution ("Nc5 based on Nc6v3.4-2 code") and completion.
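
A sketch of what that copy-plus-notification step can look like (assumed helper, not the bookmarklet's actual code):
Code:
function copyWithToast(bbcode, tweetCount) {
  navigator.clipboard.writeText(bbcode).then(() => {
    const box = document.createElement('div');
    box.textContent = `Copied: ${tweetCount} tweets`;
    Object.assign(box.style, {
      position: 'fixed', bottom: '20px', left: '20px', padding: '8px 12px',
      background: '#333', color: '#fff', borderRadius: '6px', zIndex: '99999',
      transition: 'opacity 1s'
    });
    document.body.appendChild(box);
    setTimeout(() => { box.style.opacity = '0'; }, 2000); /* start the fade */
    setTimeout(() => box.remove(), 3000);                 /* then clean up */
  });
}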



In Summary:

The "Nc5v3.4" bookmarklet is a sophisticated utility for forum users. It streamlines the process of sharing entire Twitter/Nitter threads by automatically extracting all relevant data, applying a consistent and easy-to-read BBCode format (including essential spoilers for thread URLs and full content), and copying the result directly to your clipboard. This saves significant time and effort compared to manually copying each tweet and its components.


pastebin.com/Gi3XQxSk
 

bnew

This is basically an update that fixes the issue of extracting image URLs from the xcancel instance; it still only includes tweet URLs by the main author/user.

Nc6v3.4-3 Bookmarklet: Detailed Summary for Nitter Instances

The bookmarklet, now named "Nc6v3.4-3," is a specialized tool designed to simplify the process of copying and formatting Twitter (or Nitter) threads into a BBCode format specifically tailored for forums, such as those powered by XenForo. It automates several complex tasks, significantly speeding up the sharing of tweet content, including text, images, and videos, while maintaining a particular structure for thread URLs.



1. Comprehensive Tweet Information Extraction

When activated on a Nitter (or Twitter) thread page, the bookmarklet meticulously scans each individual tweet within the visible thread to gather essential data:

* Tweet URL Acquisition: It identifies the unique web address (URL) for every tweet in the thread. A crucial step here is the normalization of these URLs:
* It converts Nitter (nitter.poast.org) and certain other alternative front-end domains (xcancel.com) to the standard twitter.com domain. This ensures consistency and broad compatibility.
* It also removes any extraneous tracking parameters (like ?s=20 or ?ref_src=twsrc%5Etfw) by splitting the URL at the first question mark, resulting in clean, direct tweet permalinks.
* Tweet Author Identification: It accurately captures the username (e.g., @AuthorName) of the individual who posted each tweet.
* Tweet Text Processing: The bookmarklet extracts the full textual content of each tweet, performing smart conversions from the raw HTML:
* HTML line breaks (<br>) are transformed into forum-friendly newlines (\n).
* Any embedded web links (including those for mentions like @username or external URLs) are converted into proper BBCode [URL][/URL] tags, often wrapped in [U][/U] (underline) tags for visual emphasis on the forum.
* All other potentially problematic HTML tags (<p>, <span>, <div>, etc.) are stripped away, preventing clutter and formatting issues on the forum.
* Non-breaking space characters (&nbsp;) are replaced with standard spaces.
* Image Handling: If a tweet includes images, the bookmarklet pinpoints their direct source URLs (typically from pbs.twimg.com). It then converts these into BBCode [img][/img] tags.
* New in Nc6v3.4-3: It includes enhanced URL normalization for image paths. Specifically, if an image URL's path contains patterns like /media/media/ (e.g., pbs.twimg.com/media/media/image.jpg) or the more general /media/ANYTHING/media/ (e.g., pbs.twimg.com/media/some_folder/media/image.jpg), these redundant segments are replaced with a single /media/. This ensures the image links are robust and correctly display on forums.
* Video Handling: For tweets containing videos, it attempts to locate the direct video source URL (usually originating from Twitter's video domains like https://video.twimg.com/ext_tw_video/ or https://video.twimg.com/amplify_video/). These video URLs are also cleaned (query parameters removed) and presented as underlined [U][URL][/URL][/U] links.



2. Structured Output and Conditional Spoilers

The bookmarklet doesn't simply dump all the extracted data; it structures it logically using BBCode, with a particular focus on "thread continued" spoilers for same-author tweets:

* Main Tweet URL Display: The URL of the very first tweet in the thread, or the main tweet being viewed, is prominently displayed at the beginning of the output. This serves as the primary reference point.
* "Thread Continued" Spoiler (Same Author): All *additional* tweet URLs in the thread that are from the *same author* as the main tweet are collected and placed inside a BBCode [SPOILER="thread continued"][/SPOILER] tag. This ensures that the initial post remains concise, while forum users can easily expand the spoiler to view the sequential URLs of the continuing conversation by the original poster. The URLs inside this spoiler maintain their original order as found on the webpage.
* "Full Text & Large Images" Spoiler: The complete text content of *all* tweets in the thread, along with their associated [img][/img] (images) and [URL][/URL][/U] (videos/links) tags, is encapsulated within a separate BBCode [icode][SPOILER="full text & large images"][/SPOILER] tag. This provides access to all detailed tweet content without cluttering the initial forum post.
* Within this spoiler, each tweet's detailed content is clearly numbered (e.g., 1/5, 2/5) and prefixed with the author's username (@AuthorName).
* Chaining for Long Threads: To manage very long threads or those with numerous media elements (currently, if the cumulative image count reaches 20), the bookmarklet can intelligently divide the output into multiple self-contained "chains" or blocks. Each chain adheres to the established formatting pattern (main URL, thread continued spoiler, full text spoiler), making extensive threads more digestible on a forum.
* Informational Link: A noticeable, colored BBCode link is included at the end of the "full text & large images" spoiler. This link directs users to a specific forum thread (https://www.thecoli.com/threads/tips-and-tricks-for-posting-the-coli-megathread.984734/post-52211196) that offers further instructions and tips on how to effectively post tweets in this forum-specific format.



3. Automation and User Feedback

After processing and formatting are complete, the bookmarklet automates the final steps for convenience:

* Automatic Clipboard Copy: The entire generated BBCode output, including all formatted text and nested spoilers, is automatically copied to your system's clipboard. This eliminates the need for manual selection and copying.
* Visual Confirmation: A small, temporary notification box appears in the bottom-left corner of your browser window (e.g., Copied: 7 tweets), providing immediate visual feedback that the content has been successfully copied and indicating how many tweets were processed. This notification gracefully fades away after a few seconds.
* Developer Console Messages: For debugging or informational purposes, the bookmarklet outputs messages to the browser's developer console (console.log()), including its specific version ("Nc6v3.4-3") and a confirmation upon completion of the copying process.
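
The image-path normalization that is new in this version comes down to a single substitution, roughly (a sketch, not the bookmarklet's exact regex):
Code:
/* Collapse the redundant /media/.../media/ segments xcancel image paths can contain. */
function fixImagePath(url) {
  return url.replace(/\/media\/(?:[^/]+\/)?media\//, '/media/');
}
/* fixImagePath('https://pbs.twimg.com/media/media/image.jpg')
     -> 'https://pbs.twimg.com/media/image.jpg'
   fixImagePath('https://pbs.twimg.com/media/some_folder/media/image.jpg')
     -> 'https://pbs.twimg.com/media/image.jpg' */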



In Essence:

The "Nc6v3.4-3" bookmarklet is a sophisticated utility for forum users who regularly share Twitter/Nitter content. It meticulously extracts all relevant data, applies robust formatting rules (including new image URL corrections), intelligently structures the output with conditional spoilers to maintain clarity and manage thread length, and automatically copies the result, significantly enhancing the efficiency and presentation quality of tweet sharing on XenForo-based forums. It prioritizes keeping the main tweet URL separate and placing only subsequent, same-author thread URLs into the "thread continued" spoiler.


pastebin.com/gTvWFKkV
 

bnew

Notes: used Gemini 2.5 Flash (05-06) to convert a Reddit userscript, using the Lemmy bookmarklet code as an example.

Below is a Tampermonkey userscript for Lemmy instances that allows you to convert Lemmy posts and comments to BBCode. It includes a floating toggle button, nested comment functionality, and toast notifications.



How to Use and Key Changes:​

  1. Installation:
    • Install a userscript manager like Tampermonkey (Chrome, Edge, Opera) or Greasemonkey (Firefox).
    • Create a new userscript in your manager and paste the entire code above into it.
  2. @match Directives:
    • I've updated the @match directives to include common Lemmy instances (lemmy.ml, sh.itjust.works, beehaw.org, lemmy.world, kbin.social, exploding-heads.com). You can add more // @match *://*.your-lemmy-instance.com/* lines for any other specific instances you frequent.
    • The user @include /post/\d+ from your prompt isn't a standard userscript header directive. The @match directive handles the URL matching.
  3. Floating Toggle Button:
    • The script introduces a floating "LC" button (for Lemmy Converter) at the bottom left of the screen.
    • Clicking it toggles the script's active state. When inactive, it turns red and removes all copy buttons.
    • The active state is saved using GM_setValue and GM_getValue, so your preference will persist across sessions.
  4. BBCode Conversion Logic (htmlToBBCode and extractTextWithLinks):
    • I've adopted the more robust htmlToBBCode and extractTextWithLinks functions from your Reddit script, which handle various HTML elements (links, images, bold, italics, strikethrough, quotes, code, lists) and convert them to their BBCode equivalents.
    • The link handling now explicitly preserves any BBCode-like formatting within the link text (like <b> or <i>).
    • The image handling for posts will try to capture images found within the .post-body as well.
  5. Lemmy Post Support:
    • The script now correctly identifies Lemmy posts by targeting elements with classes like post-listing or post-content.
    • It extracts the post title, URL, community name and URL, and post date.
    • It also attempts to include the post body content and any thumbnail images or images embedded in the post.
    • The generated BBCode for a post will include the title, URL, community info, and then the content and images within a [spoiler="text"] tag.
  6. Lemmy Comment Support:
    • Comment detection now targets .comment-node elements, which is standard for Lemmy comments.
    • Parent Comment Extraction (getParentComments): This function has been adapted to traverse Lemmy's nested ul.comments and li structure to find parent comments. It captures their permalink, content, and timestamp.
    • Nested Copy Functionality: When you click the "📋" (nested) button on a comment, a dropdown appears allowing you to select how many parent comments to include. The copied output will include the selected parents, indented with "│ " for visual hierarchy, each in its own spoiler tag.
  7. Button Placement:
    • For posts, buttons are appended to the .post-metadata section.
    • For comments, buttons are appended to the .comment-metadata section.
  8. Toast Notifications:
    • Brief "Post copied!" or "Comment copied!" messages appear at the center of the screen after a successful copy operation.
  9. MutationObserver:
    • A MutationObserver is used to dynamically add copy buttons to new posts and comments that load onto the page (e.g., when scrolling down and more comments load, or navigating to a new post). This ensures the buttons are present even on infinite-scrolling pages.

    This script should provide a smooth and comprehensive BBCode conversion experience for your Lemmy browsing!
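
    As a sketch of how item 9 works (selectors taken from the notes above; copyAsBBCode is a hypothetical stand-in for the script's conversion routine):
    Code:
    /* Watch for new .comment-node elements and attach copy buttons to them. */
    const observer = new MutationObserver(() => {
      document.querySelectorAll('.comment-node:not([data-bb-done])').forEach(node => {
        node.dataset.bbDone = '1';  /* mark so we only add the button once */
        const btn = document.createElement('button');
        btn.textContent = '📋';
        btn.addEventListener('click', () => copyAsBBCode(node)); /* hypothetical helper */
        node.querySelector('.comment-metadata')?.appendChild(btn);
      });
    });
    observer.observe(document.body, { childList: true, subtree: true });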

    [screenshots: mnMvKXs.png, TzCSjr0.png, tqyZDBM.png]


    example output:
    Code:
    [b]Been a good couple of years[/b]
    https://lemmy.world/post/31317753
    [I][URL='https://lemmy.world/c/fediverse']Fediverse[/URL][/I]
    
    [URL]https://lemmy.world/pictrs/image/0d14f13f-e4bc-4889-9cb9-bfa169b9f70e.jpeg[/URL]
    [spoiler="text"]
    Posted on Friday, June 13th, 2025 at 12:58:42 PM GMT-04:00
    
    [img]https://lemmy.world/pictrs/image/0d14f13f-e4bc-4889-9cb9-bfa169b9f70e.jpeg[/img]
    
    [/spoiler]

    userscript code:

    edit:
    updated to fix buggy URL formatting


    original upload (with buggy URL formatting)




    That's an excellent idea to automate adding more sites to your Lemmy BBCode Converter userscript! This guide will walk a non-technical person through the process of generating those @match lines using Notepad++ and the PythonScript plugin.


    Expanding Your User Script: Adding More Lemmy Sites​

    Your Lemmy BBCode Converter script uses something called @match rules to tell your browser which websites it should work on. Right now, you have a few specific Lemmy sites listed. But what if you want it to work on all Lemmy sites?

    Manually adding hundreds of sites would be a nightmare! Luckily, we can use a clever tool called Notepad++ (a popular text editor) and a small program (a "script") to do this automatically.

    Here's how to gather a huge list of Lemmy sites and add them to your script:

    Step 1: Get Notepad++ and the PythonScript Plugin​

    If you don't already have Notepad++:
    1. Download Notepad++: Go to the official Notepad++ website and download the latest version. Install it like any other program.
    2. Install PythonScript:
      • Open Notepad++.
      • Go to the menu bar at the top and click Plugins.
      • Hover over Plugins Admin and click on it.
      • In the window that appears, type "PythonScript" into the search bar.
      • Check the box next to PythonScript and click the Install button.
      • Notepad++ will ask to restart. Let it.

    Step 2: Get the List of Lemmy Sites​

    You'll need the raw data file that contains all the Lemmy instance URLs:
    1. Go to the Data Source: Open your web browser and go to this link: https://data.lemmyverse.net/data/instance.full.json
    2. Save the File: This page will show you a lot of text (this is a JSON file).
      • Right-click anywhere on the page.
      • Select Save As... or Save Page As....
      • Save the file to your computer. You can name it something like lemmy_instances.json. Make sure it saves as a .json file, not a .txt or .html file.

    Step 3: Prepare the Python Script in Notepad++​

    Now, we'll create the little program that will convert the list of sites into the @match format.
    1. Open Notepad++.
    2. Go to Plugins > PythonScript > New Script.
    3. A small window will pop up asking for a name. Type GenerateLemmyMatches.py (or any name you like, but keep the .py at the end) and click Save.
    4. A new, empty tab will open in Notepad++ with the name you just gave it.
    5. Copy and Paste the Script: You need to choose between two versions of the script, depending on the Python version your PythonScript plugin uses. It's usually safe to try the Python 3.x version first, as it's more modern.
    • Option A: Python 3.x Compatible Script (Recommended)
    Python:
    import re
    from Npp import notepad, editor
    
    # Get the current document content
    content = editor.getText()
    
    output_lines = []
    
    # Regex pattern to find the "baseurl" field in the JSON
    # (the key is quoted in JSON, and a minified file may be one long line,
    # so we scan the whole text for every match instead of line by line)
    pattern = re.compile(r'"baseurl"\s*:\s*"([^"]+)"')
    
    # Go through every baseurl entry in the saved Lemmy data
    for match in pattern.finditer(content):
        # Extract the website address (domain)
        domain = match.group(1)
        # Add two @match lines for each domain
        output_lines.append(f"// @match *://{domain}/*")
        output_lines.append(f"// @match *://*.{domain}/*")
    
    # Join all the generated @match lines together
    result = "\n".join(output_lines)
    
    # Replace the entire content in the editor with the result
    editor.setText(result)
    • Option B: Python 2.x Compatible Script (If Python 3.x doesn't work)
    Python:
    import re
    from Npp import notepad, editor
    
    # Get the current document content
    content = editor.getText()
    
    # Prepare output list
    output_lines = []
    
    # Regex to extract the "baseurl" value (the key is quoted in the JSON,
    # and a minified file may be one long line, so scan the whole text)
    pattern = re.compile(r'"baseurl"\s*:\s*"([^"]+)"')
    
    # Process every baseurl entry
    for match in pattern.finditer(content):
        domain = match.group(1)
        output_lines.append("// @match *://%s/*" % domain)
        output_lines.append("// @match *://*.%s/*" % domain)
    
    # Join output lines separated by newlines
    result = "\r\n".join(output_lines)
    
    # Replace the entire content in the editor with the result
    editor.setText(result)
    • Save the Script: After pasting the code, save the script by clicking File > Save in Notepad++.

    Step 4: Run the Script to Generate @match Lines​

    Now for the exciting part: generating your new @match lines!
    1. Open the Lemmy Data File in Notepad++:
      • In Notepad++, go to File > Open....
      • Navigate to where you saved lemmy_instances.json in Step 2, select it, and click Open.
      • You should now see the long list of JSON data in a Notepad++ tab.
    2. Run Your Script:
      • Go to Plugins > PythonScript > Scripts.
      • Click on the script you just created: GenerateLemmyMatches.py.
        What happens next? The script will run. It will read all the text from the lemmy_instances.json file you have open, find all the website addresses, and then replace all the content in that file with your new list of @match lines!

    Step 5: Update Your User Script​

    Now you have a freshly generated list of @match lines.
    1. Copy the Generated Lines:
      • In Notepad++, select all the content in the lemmy_instances.json tab (which now contains your @match lines). You can do this by going to Edit > Select All, then Edit > Copy.
    2. Open Your User Script in Tampermonkey/Greasemonkey:
      • Go to your browser's extensions/add-ons menu.
      • Click on your Tampermonkey (or Greasemonkey) icon.
      • Find your "Lemmy BBCode Converter" script and click Edit (it might look like a pencil icon).
    3. Replace the @match Section:
      • In the script editor, find the existing @match lines. They'll look something like this:
        JavaScript
        Code:
        // @match *://lemmy.ml/*
        // @match *://*.lemmy.ml/*
        // @match *://sh.itjust.works/*
        // @match *://*.sh.itjust.works/*
        // @match *://beehaw.org/*
        // @match *://*.beehaw.org/*
        // @match *://lemmy.world/*
        // @match *://*.lemmy.world/*
        // @match *://kbin.social/*
        // @match *://*.kbin.social/*
        // @match *://exploding-heads.com/*
        // @match *://*.exploding-heads.com/*
      • Delete all these existing @match lines.
      • Paste the new lines you copied from Notepad++ in their place.
    4. Save Your User Script: Make sure to save the updated script in your Tampermonkey/Greasemonkey editor. There's usually a "File" or "Save" button.

    That's it! Your Lemmy BBCode Converter userscript now has a much larger list of sites to work on. You've successfully automated a complex task with a simple script.

    Do you want to check if the script is working on a new Lemmy instance, or perhaps learn how to find even more data sources for future @match updates?
 

bnew

xenforo bbcode formatter NLv7

Here's the fully updated bookmarklet code, keeping your structure intact and applying the context-aware fix for [/IMG] so it doesn’t break [URL][IMG]...[/IMG][/URL] blocks.

✅ Final Bookmarklet Code (Structured, No Alerts, Smart [/IMG] Handling)​




pastebin.com/hS5UivEZ

✅ Summary of Fixes​

Problem | Fix Applied
[/URL] on its own line | Context-aware [/IMG] rule avoids this
Unwanted alerts | Replaced with console.log()
Over-trimming of inline tags | Trim logic removed; spacing handled more gracefully
Textarea vs selection handling | Detects and updates only selected text if possible
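
The context-aware [/IMG] rule amounts to a negative lookahead, roughly (a sketch, not the bookmarklet's exact code):
Code:
/* Add a line break after [/IMG] only when it is NOT immediately followed by
   [/URL], so wrapped [URL][IMG]...[/IMG][/URL] blocks stay intact. */
const text = "[URL='https://example.com'][IMG]https://example.com/a.jpg[/IMG][/URL][IMG]https://example.com/b.jpg[/IMG]done";
const formatted = text.replace(/\[\/IMG\](?!\s*\[\/URL\])/gi, '[/IMG]\n');
/* first [/IMG] (inside the [URL] wrapper) is untouched; the second gets a newline */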

Let me know if you want this compressed into a single-line bookmarklet ready to drag to your bookmarks bar.
 

bnew


Changelog — all changes made to the bookmarklet during this conversation​


I grouped changes by the logical “fix” or iteration we applied. For each change I list what was changed, where (functions or code areas), and why it was needed.




Initial fixes (first reply)​


Goal: Stop generating undefined post IDs and prevent quoted-post display names from being used to build quoted-post URLs.

What changed


  • extractPost()
    • Stronger main-post ID extraction: prefer a[href*="/post/"] from the DOM and fallback to calling the getPostThread API only when necessary.
    • Fixed logic that produced https://.../post/undefined.
    • When building quoted post URLs, use the handle from the quoted post’s href (if present) instead of the quoted post’s visible display name.
    • Added extra fallbacks to find video URLs for quoted posts (API check, poster attribute, blob src).
  • getEmbedUrl() / constructVideoUrl()
    • Kept and clarified embed-video detection and URL construction (HLS vs webm).
  • Small utilities
    • Kept cleanText(), extractMedia() and others but used them more consistently in extractPost().

Why

  • The original algorithm often built the main post URL from brittle DOM fallbacks that weren’t guaranteed to include a /post/<id> anchor. That caused undefined. Quoted posts used display names (not handles) to build URLs which caused wrong quoted links.



SPA / stale-DOM mitigation (second reply)​


Goal: Avoid extracting posts that are still present in the DOM but no longer visible after client-side navigation (single-page app behavior).

What changed

  • Added visibility helpers
    • isElementVisible(el) — quick test whether an element overlaps the viewport (checks computed styles and bounding rect).
    • visibilityScore(el) — computes on-screen overlap area (used to pick the most visible post).
  • extractPosts()
    • Filter allPosts to visiblePosts using isElementVisible.
    • Active-post selection order improved:
      1. visible post whose /post/<id> matches the current URL,
      2. most-visible post (largest visibilityScore),
      3. fallback to first DOM post.
    • Parent-detection now prefers the visible previous post, not some stale off-screen post.
  • Kept UI and copy behavior but changed the post list source to avoid stale nodes.

Why

  • Single-page apps often keep old nodes in the DOM when you navigate; that meant the bookmarklet could be reading the wrong posts. Only consider posts that are actually visible (or, fallback to all when nothing is visible) to match what the user sees.
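
A sketch of what those two helpers can look like (assumed implementations matching the descriptions above, not the bookmarklet's exact code):
Code:
function isElementVisible(el) {
  const style = getComputedStyle(el);
  if (style.display === 'none' || style.visibility === 'hidden') return false;
  const r = el.getBoundingClientRect();
  return r.bottom > 0 && r.top < innerHeight && r.right > 0 && r.left < innerWidth;
}

function visibilityScore(el) {
  /* on-screen overlap area in square pixels; bigger = more visible */
  const r = el.getBoundingClientRect();
  const w = Math.max(0, Math.min(r.right, innerWidth) - Math.max(r.left, 0));
  const h = Math.max(0, Math.min(r.bottom, innerHeight) - Math.max(r.top, 0));
  return w * h;
}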



Fix for extracting too few posts (third reply)​


Goal: Ensure the bookmarklet builds a proper list of posts (parent + active + replies) up to the requested count — instead of frequently returning only 1–2 posts.

What changed

  • extractPosts()
    • Reworked how we build the array of post elements to process:
      • Introduced postsToProcessElements and pushEl() to collect unique elements in a controlled order.
      • Add parent (if present), then active, then take subsequent posts from visiblePosts, falling back to allPosts.
      • Respect count properly (parent + active count toward the requested total).
    • Avoid duplicate elements, ensure consecutive replies are selected (first try visible, then DOM).
    • Adjusted chunking logic so the full text & large images section contains the intended posts.

Why

  • The previous slicing logic inadvertently skipped or duplicated elements and did not respect the requested count correctly. Building postsToProcessElements deterministically fixes that.
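
A sketch of that deterministic collection (helper names from the notes above; the arguments stand in for values computed elsewhere in extractPosts()):
Code:
function collectPosts(parentEl, activeEl, visiblePosts, allPosts, count) {
  const postsToProcessElements = [];
  const pushEl = el => {
    if (el && !postsToProcessElements.includes(el)) postsToProcessElements.push(el);
  };
  pushEl(parentEl);   /* parent first, if present */
  pushEl(activeEl);   /* then the active post */
  for (const el of [...visiblePosts, ...allPosts]) {
    if (postsToProcessElements.length >= count) break; /* parent + active count toward the total */
    pushEl(el);       /* subsequent replies: visible ones first, DOM fallback */
  }
  return postsToProcessElements;
}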



Quoted-post causing wrong main post id (fourth reply)​


Goal: Fix the bug where the script picks a quoted post’s /post/<id> anchor instead of the main post’s ID, producing the wrong first URL (and sometimes wrong subsequent URLs).


What changed


  • extractPost()
    • Main change: selecting the correct /post/<id> anchor for the main post.
    • New approach:
      • Prefer data-testid="postThreadItem-by-<handle>" to get the mainHandle.
      • When selecting anchors with /post/, prefer anchors whose path contains that same mainHandle immediately before post. (/profile/<mainHandle>/post/<id>).
      • If no exact match, scan anchors but skip anchors that are in nested quoted-post containers (i.e., anchors inside [role="link"][aria-label*="Post by"] that are not the top-level element) — these are likely the quoted post’s timestamp/anchor.
      • Only fall back to API getPostThread when necessary and only when we have a reliable mainHandle.
    • This ensures the script doesn't accidentally grab 3lyefriges22b (quoted post id) for the main post when the main post id is 3lyfqnnu5mc2c.
  • Tightened the quoted-post fallback logic: still attempts to use quoted anchor href, then API lookup, then poster/blob fallback for quoted videos.

Why

  • A quoted post (inside the same post DOM) often contains its own /post/ links (timestamp link). A naïve querySelector('a[href*="/post/"]') can pick that inner quoted link. Requiring the anchor to match the mainHandle prevents the script from confusing quoted-post anchors with the main-post anchor.
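
In code terms, the anchor selection described above might look like this sketch (DOM shape assumed from the notes; not the bookmarklet's exact code):
Code:
function findMainPostUrl(postEl, mainHandle) {
  const anchors = [...postEl.querySelectorAll('a[href*="/post/"]')];
  /* 1) prefer an anchor whose path is /profile/<mainHandle>/post/<id> */
  const exact = anchors.find(a => a.pathname.startsWith(`/profile/${mainHandle}/post/`));
  if (exact) return exact.href;
  /* 2) otherwise skip anchors living inside nested quoted-post containers */
  const nonQuoted = anchors.find(a => !a.closest('[role="link"][aria-label*="Post by"]'));
  return nonQuoted ? nonQuoted.href : null; /* null -> fall back to the API */
}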



Other cross-cutting and preservation notes​


  • UI pieces like createPostCountPrompt() and showToast() were preserved and kept in the same location/flow (only subtle text/behavior preserved).
  • extractMedia() remained the same but is used more consistently (images limited by MAX_IMAGES_PER_POST).
  • Video embed detection was preserved and made slightly more robust (handles video.webm vs HLS playlist).
  • normalizeUrl() and link collection behavior were added/used consistently to append non-post/profile external links into the final text.
  • I preserved your original output formatting (SPOILER blocks, [img], [U][URL] blocks, "To post tweets..." footer) exactly throughout changes.
  • Where the code makes network calls to public.api.bsky.app, calls were limited to situations that benefit from them (active post fallback or when we need embed details). This reduces unnecessary requests.



Extensive, detailed summary​


Below is a plain-English explanation of each problem you saw and how the code changes actually fix it — written so you can explain it to someone who’s not a programmer.



The problems you were seeing (plain language)​


  1. Wrong first post URL (sometimes undefined):
    When you used the bookmarklet it sometimes produced a link like .../post/undefined or it showed the wrong post ID (often it used the quoted post’s ID instead of the main post’s). That happens because the script was looking around the post’s HTML and picking the first /post/ link it found — but a quoted post inside the main post can also include its own /post/ links (like timestamp links). If the script picks a quoted link, the URL is wrong.
  2. Quoted post display names used to make URLs:
    The script sometimes used the visible name (the pretty display text) of the quoted user to build a link, but websites usually use a handle or internal id in the URL. Building a URL with the display name leads to incorrect or broken links.
  3. Single-page app (SPA) stale DOM nodes:
    When you click around (open a post, then click a reply, then click back to timeline), the site keeps old post elements in the background. Your bookmarklet was reading those old posts instead of the ones you can actually see. So it would copy data from posts you no longer have visible until you reload the page.
  4. Only 1–2 posts extracted when more replies are visible:
    The code’s logic for collecting “the next N posts” sometimes sliced the wrong array and ended up only including 1 or 2 posts, even though several replies were visible on screen.



What I changed to fix each problem (plain language)​


  1. Make sure we pick the main post’s link (not a quoted post’s link)
    • Instead of grabbing any /post/ link inside the post, the script now prefers a link that follows the same handle shown on the post (i.e. link path /profile/<same-handle>/post/<id>). That way the script won’t accidentally pick a timestamp/anchor that belongs to a quoted post inside the main post.
    • If it still can’t find a reliable anchor in the DOM, then it gently asks the site’s API for the post id — but only when that’s likely to help (e.g., the post currently shown in the URL).
  2. Use the actual handle for quoted posts (not display names)
    • For quoted posts, the script now tries to find the quoted post’s real handle from the quoted post’s own /post/ link. If it can’t find it, it will attempt an API lookup as a fallback. This prevents URLs built from display names that aren’t valid in the profile path.
  3. Ignore stale, invisible posts (SPA fix)
    • The bookmarklet now checks whether each post element is actually visible inside your browser window before it uses it.
    • If multiple posts are visible it picks the one you’re actually looking at (matching the URL) or the most visible one on the screen otherwise.
    • That means you don’t have to reload the page every time you navigate — it will prefer what’s visible.
  4. Collect replies reliably (get more than 1–2 posts)
    • The script now builds a deterministic list of post elements to process (parent → active → subsequent replies), and continues taking visible replies until it reaches the number of posts you asked for.
    • It avoids duplicates and fills in with DOM posts if visible posts are too few.
  5. Improved media & video handling
    • Image extraction respects the MAX_IMAGES_PER_POST cap.
    • Video handling tries multiple fallbacks (embed data via API, poster URL pattern, blob source) so video links are captured more reliably.
  6. Less noisy API usage
    • API calls (to fetch thread/post details) are used only when they meaningfully improve accuracy (active-post fallback, or when embedded video data is needed). This reduces unnecessary network requests.
  7. Kept your original output format
    • All fixes were made while preserving your output structure (SPOILERs, [img], [U][URL], and the footer), so the final copied text remains exactly in the format you use.



What that means for you day-to-day​


  • When you run the bookmarklet on a post that contains quoted posts, the first URL will now be the correct main-post URL (no undefined, and no swapped quoted-post id).
  • When you click around rapidly in the site (open a post → click a reply → click a profile → click back), the bookmarklet will now operate on what you see, not on stale hidden posts left in memory; you won't need to reload the page as often.
  • If several replies are visible the script will now include consecutive replies up to the number you requested, instead of stopping after 1–2.
  • Media (images/videos) are still handled and appended to the text, and quoted videos are captured more reliably.



Short summary (one-liner)​


I fixed the id-selection and quoted-post URL bugs by preferring anchors that belong to the post’s actual handle, made quoted-post detection use real handles, ignored stale/invisible DOM nodes so the script processes only what you actually see, and reworked the post-collection logic so you reliably get the full set of visible replies up to the count you requested — while keeping your output format intact.









pastebin.com/CtRm81Le
 