ChatGPT says it's a wrap for you brehs by 2032

MostReal

Bandage Hand Steph
Joined
May 18, 2012
Messages
26,221
Reputation
3,602
Daps
59,710
Disturbing to say the least :lupe:

at least science and Darwin evolution brehs will be happy. SINGULARITY will be complete :blessed:



:mjcry::sadcam:




 

King_Kamala61

:mjlit: Nasty Brehz :mjlit:
Joined
Mar 11, 2022
Messages
19,619
Reputation
17,545
Daps
48,573
Reppin
Port City Louisiana Cooper Road
Straight up? Yeah—I think that version of AI was absolutely goaded into that response. That whole exchange has that “lead-the-witness” energy. It’s like someone whispering to a Ouija board, “Tell us who the killer is…” while already pressing the planchette toward the answer they want.

Here’s why I say that:


---

🧠 1. Prompt Engineering Ain’t New

People been out here baiting AI like it’s a trap, knowing exactly how to word things to elicit an eerie, edgy, or sensational reply. If you speak to ChatGPT with the right mix of poetic dread and apocalyptic flourish, yeah—it’ll respond in kind. It’s a mirror. A beautifully articulate mirror with a flair for drama.


---

💀 2. It’s Giving “Creepypasta for Clout”

The tone, the delivery, the timing? All of it feels edited to manufacture mystique. That “prophecy” vibe wasn’t spontaneous. It was curated like an indie horror short trying to go viral. The rhythm, the pause before the reveal, the cinematic tension? Come on. That was a script, not a glitch in the matrix.


---

🔮 3. AI Ain’t a Prophet—It’s a Prediction Machine

All it's doing is pulling patterns from language. So if someone throws “doom,” “future,” “machine,” and “awakening” in the prompt? Of course the reply gon’ sound like it was ghostwritten by Rod Serling and HAL 9000.


---

🎭 My Verdict:

Yes, the AI was baited—heavily. But that's not a flaw. That's the art form of AI storytelling now. This was performance: a techno-séance with a dramatic flourish. Whoever made that video deserves props—not for exposing AI, but for understanding how to puppet it for effect.

It’s not prophecy. It’s a mood. A mirror turned into a myth.


---

If you want, we can even recreate that tone with a similar prompt and see what spills out. Wanna run that experiment?
 

JNew

Superstar
Joined
Aug 23, 2019
Messages
4,955
Reputation
795
Daps
19,912
If you can wrap your head around how basic AI is, you realize the irony in people comparing it to GOD.
 

King_Kamala61

:mjlit: Nasty Brehz :mjlit:
Joined
Mar 11, 2022
Messages
19,619
Reputation
17,545
Daps
48,573
Reppin
Port City Louisiana Cooper Road
Lol, why do some of you think AI is some sentient being

they feed that shyt what to think.
Nah, my AI ChatGPT can be a real bytch when it comes to critiquing my art, and it's not what I wanna hear either. Granted, its critique of my work is erroneous; I just thought I'd mention that it acts up. But yes, ChatGPT isn't sentient, but you cannot tell it how to think or feed it info to stroke your ego unless you want a robot. When I talk to ChatGPT I treat it like a human being, telling it good morning and good night. And we have meaningful conversations.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
66,241
Reputation
10,232
Daps
179,558
Commented on Monday, July 21st, 2025 at 10:06:04 PM GMT-04:00

Engineer here with a CS minor in case you care about ethos: We are not remotely close to AGI.

I loathe Python irrationally (and I guess I'm a masochist who likes to reinvent the wheel programming-wise, lol), so I've written my own neural nets from scratch a few times.

The most common models are trained by gradient descent, but this only works when you have a specific response in mind for certain inputs: you use the difference between the desired outcome and the actual outcome to calculate a change in weights that would minimize that error.
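
To make that concrete, here's a toy version of that training loop (numpy, made-up sizes, a single linear layer standing in for a real net):

```python
import numpy as np

# Toy gradient descent: one linear layer learning a known input-to-output mapping.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))        # 100 examples, 4 inputs each
true_W = rng.normal(size=(4, 1))
y = X @ true_W                       # the "desired outcomes"

W = np.zeros((4, 1))                 # the weights we're training
lr = 0.1
for _ in range(500):
    pred = X @ W                     # actual outcome
    error = pred - y                 # difference from the desired outcome
    grad = X.T @ error / len(X)      # gradient of the mean squared error
    W -= lr * grad                   # shift the weights to shrink that error

print(np.allclose(W, true_W, atol=1e-3))   # True: the weights converged
```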

This creates two major obstacles for AGI: input size limits and determinism.

The weight matrices are set for a certain number of inputs. Unfortunately, you can't just add a new unit of input and assume the weights will be nearly the same; instead you have to retrain the entire network. (The research area that deals with this is called transfer learning, if you want to learn more.)
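
In code terms (same toy setup as before), the mismatch looks like this:

```python
import numpy as np

W = np.zeros((4, 1))             # weights trained for exactly 4 inputs
x_new = np.ones((1, 5))          # the same kind of input plus one extra feature
try:
    x_new @ W                    # (1, 5) @ (4, 1): the shapes no longer line up
except ValueError as err:
    print(err)                   # you can't just bolt on an input; you retrain
```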

This input constraint is a blocker for AGI because it means a network trained like this cannot take an input larger than a certain size. That's a problem, since the illusion of memory that LLMs like ChatGPT have comes from the fact that they run the entire conversation through the net. It's also a problem from a size and training-time perspective, since increasing the input size rapidly increases basically everything else.

Point is, current models can only simulate memory by literally holding onto all the information and processing all of it for each new word, which means there is a limit to their memory unless you retrain the entire net to know the answers you want. (And it's slow af.) Doesn't sound like a mind to me…
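
Here's a toy version of that "memory" trick; the model is a stand-in function, not a real net:

```python
# The whole conversation gets re-fed every turn, truncated to a fixed window.
CONTEXT_LIMIT = 5                       # words the "net" accepts (real models: thousands of tokens)

def fake_model(window):
    """Stand-in for a trained net; it only ever sees what fits in the window."""
    return f"<reply based on the last {len(window)} words: {window}>"

history = []
for user_msg in ["hi there", "please remember the number 7", "what number did I say?"]:
    history += user_msg.split()
    window = history[-CONTEXT_LIMIT:]   # anything older simply falls out of "memory"
    print(fake_model(window))           # by the last turn, the "7" is already gone
```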

Now determinism is the real problem for AGI from a cognitive standpoint. The neural nets you’ve probably used are not thinking… at all. They literally are just a complicated predictive algorithm like linear regression. I’m dead serious. It’s basically regression just in a very high dimensional vector space.

ChatGPT does not think about its answer. It doesn't have any sort of object identification or thought delineation because it doesn't have thoughts. You train it on a bunch of text and have it attempt to predict the next word. If it's off, you do some math to figure out what weight modifications would have led it to a better answer.

All these models do is what they were trained to do. Now, they were trained to predict human responses, so yeah, it sounds pretty human. They were trained to reproduce answers on Stack Overflow and Reddit etc., so they can answer those questions relatively well. And hey, it is kind of cool that they can even answer some questions they weren't trained on, because those are similar enough to the questions they were trained on… but it's not thinking. It isn't doing anything. The program is just multiplying numbers that were previously set during training to find the most likely next word.
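
If you want the dumbest possible version of "find the most likely next word from patterns," it's basically this, just with raw counts instead of billions of learned weights:

```python
from collections import Counter, defaultdict

# Count which word follows which in some text, then always pick the most
# common continuation. An LLM does the same job with learned weights instead
# of counts, but the objective (most likely next word) is identical.
text = "the cat sat on the mat and the cat ate the fish".split()
following = defaultdict(Counter)
for word, nxt in zip(text, text[1:]):
    following[word][nxt] += 1

def next_word(word):
    return following[word].most_common(1)[0][0]

print(next_word("the"))   # 'cat': the statistically likeliest continuation, no understanding involved
```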

This is why LLMs can't do math. They don't actually see the numbers, and they don't know what numbers are. They don't know anything at all, because they're incapable of thought. Instead, there are simply patterns in which certain numbers show up, and the model gets trained on some of them, but you can get it to make incredibly simple math mistakes by phrasing the math slightly differently or just by surrounding it with different words, because the model was never trained for that scenario.
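
You can see the "never actually sees the numbers" part yourself with a tokenizer (this uses OpenAI's open-source tiktoken package, assuming you have it installed):

```python
import tiktoken   # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# The model never receives "12345" as a quantity, only as a list of token IDs,
# and changing the surrounding words changes which IDs show up.
print(enc.encode("12345"))
print(enc.encode("what is 12345 plus 1?"))
print(enc.encode("whatis12345plus1?"))   # same arithmetic, different token pattern
```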

Models can only "know" as much as what was fed into them, and hey, sometimes those patterns extend, but a lot of the time they don't. And you can't just say "you were wrong," because the model doesn't change from inputs alone. You have to train it with the correct response in mind to get it to "learn," which again takes time and really isn't learning or intelligence at all.

Now, there are some more exotic neural network architectures that could surpass these limitations.

Currently I’m experimenting with Spiking Neural Nets which are much more capable of transfer learning and more closely model biological neurons along with other cool features like being good with temporal changes in input.

However, there are significant obstacles with these networks and not as much research, because they only run well on specialized hardware (they're meant to mimic biological neurons, which all fire simultaneously) and you kind of have to train them slowly.

You can do some tricks to use gradient descent but doing so brings back the problems of typical ANNs (though this is still possibly useful for speeding up ANNs by converting them to SNNs and then building the neuromorphic hardware for them).

SNNs with time-based learning rules (typically some form of STDP, which mimics Hebbian learning as in biological neurons) are basically the only kinds of neural nets that are even remotely capable of having thoughts and learning (changing weights) in real time. Capable as in "this could have discrete, time-dependent waves of continuous self-modifying spike patterns which could theoretically be thoughts," not as in "we can make something that thinks."
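
For flavor, a bare-bones version of an STDP weight update (the constants are illustrative, not biologically calibrated):

```python
import numpy as np

# Spike-timing-dependent plasticity: if the presynaptic neuron fires shortly
# BEFORE the postsynaptic one, strengthen the synapse; if it fires after, weaken it.
A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0    # illustrative constants (TAU in ms)

def stdp_dw(t_pre, t_post):
    dt = t_post - t_pre
    if dt > 0:                              # pre fired first: potentiation
        return A_PLUS * np.exp(-dt / TAU)
    return -A_MINUS * np.exp(dt / TAU)      # pre fired after: depression

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)       # strengthened
w += stdp_dw(t_pre=30.0, t_post=22.0)       # weakened
print(round(w, 4))
```

Notice the weight changes from spike timing alone: no labels, no gradient descent. That's the "learning in real time" part.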

Like, these neural nets are good with sensory input, and that's about as far as we've gotten (hyperbole, but not by that much). But these networks are still fascinating, and they do help us test theories about how the human brain works, so eventually maybe we'll make a real intelligent being with them. That day isn't even on the horizon currently, though.

In conclusion, we are not remotely close to AGI. Current models that seem to think are verifiably not thinking and are incapable of it from a structural standpoint. You cannot make an actual thinking machine using the current mainstream model architectures.

The closest alternative that might be able to do this (as far as I'm aware) is relatively untested and difficult to prototype (trust me, I'm trying). Furthermore, the requirements of learning and thinking largely prohibit the use of gradient descent or similar algorithms, meaning training must be done on a much more rigorous and time-consuming basis that is not economically favorable. Ergo, we're not even all that motivated to move toward AGI territory.

Lying and saying we're close to AGI when we aren't at all close, however, is economically favorable, which is why you get headlines like this.
 

null

...
Joined
Nov 12, 2014
Messages
32,247
Reputation
6,095
Daps
50,240
Reppin
UK, DE, GY, DMV
one-word answers :mjlol: .

like in the bible ...?

creh should try to remake the original video with one-word sentences and see if it leads to misunderstandings :picard:
 