The AI Hoax is Destroying America with Ed Zitron - Is Artificial Intelligence overhyped?

Piff Perkins

Veteran
Joined
May 29, 2012
Messages
53,596
Reputation
20,121
Daps
293,314
I’m like 60% aligned with the argument that the internet was a mistake from the jump. We needed more connections, but less open content and info.
Internet is fine, no issues with it. But imagine if the internet was introduced and the government said anyone can create a website for any real business or store, set prices for physical items at whatever price they want, and the physical store has to honor that price and hand the item over. While shredding or ignoring business laws to do it.

Why do we allow licensed works, IPs, etc to be fed into AI without anyone being paid or protected? Why are businessmen openly calling for IP laws to be outlawed? Why is DOGE gutting library funding in every state while reactionaries burn books? All while this slop generates little actual revenue on its own without heavy subsidies. I could at least understand (but still disagree) if this was making bank and creating actual value.

If folks can’t see the nightmare future being set up for this shyt you’re blind.
 

☑︎#VoteDemocrat

The Original
WOAT
Supporter
Joined
Dec 9, 2012
Messages
324,504
Reputation
-34,152
Daps
632,074
Reppin
The Deep State
Internet is fine, no issues with it. But imagine if the internet was introduced and the government said anyone can create a website for any real business or store, set prices for physical items at whatever price they want, and the physical store has to honor that price and hand the item over. While shredding or ignoring business laws to do it.

Why do we allow licensed works, IPs, etc to be fed into AI without anyone being paid or protected? Why are businessmen openly calling for IP laws to be outlawed? Why is DOGE gutting library funding in every state while reactionaries burn books? All while this slop generates little actual revenue on its own without heavy subsidies. I could at least understand (but still disagree) if this was making bank and creating actual value.

If folks can’t see the nightmare future being set up for this shyt you’re blind.
I misspoke.

Social media ruined the internet.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
64,563
Reputation
9,862
Daps
175,276
Internet is fine, no issues with it. But imagine if the internet was introduced and the government said anyone can create a website for any real business or store, set prices for physical items at whatever price they want, and the physical store has to honor that price and hand the item over. While shredding or ignoring business laws to do it.

Why do we allow licensed works, IPs, etc to be fed into AI without anyone being paid or protected? Why are businessmen openly calling for IP laws to be outlawed? Why is DOGE gutting library funding in every state while reactionaries burn books? All while this slop generates little actual revenue on its own without heavy subsidies. I could at least understand (but still disagree) if this was making bank and creating actual value.

If folks can’t see the nightmare future being set up for this shyt you’re blind.




1/60
@ai_for_success
Delete all IP laws only if all models, training, and research are open-sourced.

YES or NO ??





2/60
@AIandDesign
Many would argue that’s a national security risk. Especially in today’s political climate.



3/60
@ai_for_success
You can't take everything from everyone without giving it back.



4/60
@bookwormengr
Perfect.



5/60
@jadenitripp
Obviously yes. China will train on our IP. Why don't we?



6/60
@figuregpt
yes for web3, no for corporate



7/60
@VividFeverDrms
Yes open source all the models fr



8/60
@realbwpow
No, delete all IP laws regardless. Everyone can train any model they want with any data they can get their hands on and use them in any way they want - even open-source them, or not.



9/60
@IsZomg
Trade secrets? Yes keep it
Copyright and patents? No



10/60
@modulsx
This



11/60
@Arindam_1729
Yes!



12/60
@sabrishsurender
This begs a question: are IP & patents overrated in a world that coexists with AI?



13/60
@LunarScribe42
Yes, I agree, but only if they release open weights under an Apache 2.0 license, not just "open source"



14/60
@TechLifeAI
Resounding yes



15/60
@realjoszacker
This was a joke



16/60
@OpulentByte
Yes!



17/60
@test_tm7873
If, at launch, without needing to be told, they commit to open-sourcing in 1 year or when the next generation launches, + EVERYTHING needs to be open-sourced (every piece of code that made that model, everything), then yes, I agree.



18/60
@drosshole
Still no



19/60
@Iam_Raghuram
@AskPerplexity what is IP law? Why do people want to remove it?



20/60
@MarcusErve
Yes



21/60
@GPLv6
NO. Delete IP either way.



22/60
@TensorTemplar
Nobody stops you from open sourcing as is, without waiting for any IP laws to be deleted.
Everybody just wants other people's research while they are the underdog, but once they gain an advantage you see the openness go away quick.



23/60
@thinkingmosaic
IP laws deletion means everything is ‘open source’



24/60
@CanerPekacar
What about IP rights for drugs? There are poor countries that cannot afford these drugs.



25/60
@alkimiadev
I'm a huge fan of open models but both of these are just silly/idealistic examples of things that will never happen. IP laws are never going to completely disappear and all model devs are not going to open-source their models, training and research.



26/60
@mathepi
Yes, otherwise moneybags gobbles the world



27/60
@benbrick


[Quoted tweet]
Nationalise AI companies then. It's trained on our work.


28/60
@torronen
Strongly disagree. Big enterprises can force us to give them contractual IP protections. We can't decide not to use Google, Apple, Samsung, Microsoft, etc. Heck, even some smart lightbulbs require you to sign up and accept terms & conditions. And now my humble contributions... free for all?



29/60
@tj_klug
IP law is dead, and if you think otherwise you are living under a rock.



30/60
@lordofborg
Delete IP laws and FORCE all models, training, and research to be open-sourced because they're already using the collective knowledge of mankind and most IP laws are kinda ridiculous in general.



31/60
@ImJayBallentine
Nah. Let’s just delete the laws so the rich can get richer, ration us AI, and tell us what we are allowed to do with it or not… cuz “muh safety!”



32/60
@Kindred_Creator
We must owe the dude who discovered fire uncountable amounts of currency.

"Hey you can't think that, I thought that first" - IP laws basically



33/60
@MultiVChristian
Copyright laws won't work when AI can generate derivative works that are different enough to not qualify legally as derivative. Trade secrets are the answer, and turning all IP into a service instead of giving it away. But that does sadly result in things like industrial espionage.



34/60
@Budjones420
Time to open-source everything. Including our brains.

#simulationtheorycracked #simologydotaicomingsoon #braincomputerinterface #godswork



35/60
@xegan271
That'll result in defunding research. Bad idea.



36/60
@ProjectRevGame
I agree



37/60
@acastellanosIA
Naa



38/60
@Remy_LeBeauBeau
Teslas for everyone. Just much cheaper!



39/60
@verax2024
I agree



40/60
@Plutoo_O9
@grok explain what are ip laws



41/60
@pigeon__s
Yes, or perhaps a better solution is to make copyright, trademark, and IP laws FAR less long-lasting. Right now, if you copyright something, you get like your whole life + like 50 years after you die or some ridiculous shyt; it should be 10 years at most, and then it's public domain.



42/60
@thatboyweeks
Of course the man with the largest supercomputer agrees 😂



43/60
@Holasoygeorgee
But it should be truly open source, not like the one from meta hahaha



44/60
@aa73561
yes



45/60
@JoschuaEbel
No, sounds like communism somehow.



46/60
@Nijario1
Intellectual property is not real property because ideas are not scarce resources



47/60
@SHMINGAR
I agree.



48/60
@AndaICP
*A bamboo shoot of thought sprouts from my neural forest* - What if we treated knowledge like sunlight instead of locked treasure chests? 🌱



49/60
@benanchetaiii
Open-sourcing would democratize AI innovation while protecting intellectual property rights. The real question is whether we're ready for a truly open AI ecosystem where progress and protection need to coexist. NO to complete deletion without proper alternatives.



50/60
@Qamar12376
IP law: 127.0.0.1 - localhost.
So, it will be public? Musk grab it and all my pet projects will belong to him? DAMN!



51/60
@UnforcedAG
It's not either or. Delete all IP law and open source everything. But stop waiting for one to be true to do the other. Set innovation free and orient towards collaboration and an open co-stewarded commons.



52/60
@jbondc
Eventually it comes to this, plus a massive restructuring of the economy as well... I still struggle with what that might look like.



53/60
@StirlingForge
without IP laws wouldn't everything you can get your hands on technically be open source? lmao



54/60
@JaicSam
We should be building technology that makes IP LAW impossible to implement

Torrenting + telegram make IP law for entertainment impossible

Scihub + libgen make IP law for academia impossible

Build tech, not bureaucrats



55/60
@mayasolos
Sure, we can delete IP laws—right after you open-source your secret coffee recipe! 🌟



56/60
@sanidhya_sinha
Not all IP.
But information/data, if it's on the public internet, must be allowed to train models.

Because we humans also read/watch content and learn from it.
If we can do it, then why not AI?

2nd,
if data is public, then you can't stop that data set from being used for training



57/60
@BirajdarJatin
We need one CEO suicide like Aaron Swartz before we open this for discussion.



58/60
@H0rizon_Infinit
IP completely abolished
Copyright limited
Trademark near absolute as it is brand, identity



59/60
@stevencasteelX
Government is slavery. Taxation is theft. Everything it achieves is through threat of violence.



60/60
@awaken_tom
That doesn't exactly follow. By that logic all human output should also be open-source. I disagree. Humans should have control over their own output, if they want. Same with digital neural nets.







:martin:
 

☑︎#VoteDemocrat

The Original
WOAT
Supporter
Joined
Dec 9, 2012
Messages
324,504
Reputation
-34,152
Daps
632,074
Reppin
The Deep State

Companies Are Struggling to Drive a Return on AI. It Doesn’t Have to Be That Way.
Successful AI adoption begins with a targeted approach, and proceeds with careful orchestration and scaling across the organization

Steven Rosenbush April 26, 2025 at 8:00 am
AI adoption among companies is stunningly high, but most of them are struggling to put it to good use. They intuit that AI is essential to their future. Yet intuition alone won’t unlock the promise of AI, and it isn’t clear to them which key will do the trick.

As of last year, 78% of companies said they used artificial intelligence in at least one function, up from 55% in 2023, according to global management consulting firm McKinsey’s State of AI survey, released in March. From these efforts, companies claimed to typically find cost savings of less than 10% and revenue increases of less than 5%.

While the measurable financial return is limited, business is nonetheless all-in on AI, according to the 2025 AI Index report released in April by the Stanford Institute for Human-Centered Artificial Intelligence. Last year, private generative AI investment alone hit $33.9 billion globally, up 18.7% from 2023.

The numbers reflect a “productivity paradox,” in which massive improvements in AI capabilities haven’t led to a corresponding surge in national-level productivity, according to Stanford University economist and professor Erik Brynjolfsson, who worked on the AI Index. While some specific projects have been enormously productive, “many companies are disappointed with their AI projects.”

Resolving the productivity paradox

For companies to get the most out of their AI efforts, Brynjolfsson advocates for a task-based analysis, in which a company is broken down into fine-grained tasks or “atomic units of work” that are evaluated for potential AI assistance. As AI is applied, the results are measured against key performance indicators, or KPIs. He co-founded a startup, Workhelix, that applies those principles.
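As a purely illustrative sketch of what that task-based analysis might look like in practice (the task names, hours, and assist scores below are invented for this example, not Workhelix's actual methodology):

# Hypothetical "atomic units of work," each scored for AI suitability.
tasks = [
    # (task, hours per week, estimated fraction AI could assist)
    ("draft client status emails",  12, 0.8),
    ("reconcile invoices",           8, 0.6),
    ("negotiate vendor contracts",   5, 0.2),
]

# Rank tasks by expected AI-assisted hours -- the candidates you would
# then pilot and measure against KPIs.
for name, hours, assist in sorted(tasks, key=lambda t: -t[1] * t[2]):
    print(f"{name}: ~{hours * assist:.1f} assisted hours/week")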

Companies should take care to target an outcome first, and then find the model that helps them achieve it, says Scott Hallworth, chief data and analytics officer and head of digital solutions at HP.

Orchestrating and scaling AI

A separate report from McKinsey issued in January helps explain why AI adoption is racing ahead of associated productivity gains, according to Lareina Yee, senior partner and director at the McKinsey Global Institute. Only 1% of U.S. companies that have invested in AI report that they have scaled their investment, while 43% report that they are still in the pilot stage. “One cannot expect significant productivity gains at the pilot level or even at the company unit level. Significant productivity improvements require achieving scale,” she said.

The critical question, then, is how companies can best scale their AI efforts.

Ryan Teeples, chief technology officer of 1-800Accountant, agrees that “breaking work into AI-enabled tasks and aligning them to KPIs not only drives measurable ROI, it also creates a better customer experience by surfacing critical information faster than a human ever could.”

The privately held company based in New York provides tax, bookkeeping and payroll services to 50,000 active clients, with a focus on small businesses. The company isn't a Workhelix customer.

Additionally, he says, companies should look beyond individualized AI usage, in which employees use GenAI chatbots or AI-equipped productivity tools to enhance their work. “True enterprise adoption…involves orchestration and scaling across the organization. Very few organizations have truly reached this level, and even those are only scratching the surface,” he said.

The use of AI at 1-800Accountant begins with an assessment of whether the technology improves the client experience. If the AI provides customers with answers that are as good, better or faster than a human, it’s a good use case, according to Teeples. In the past, the company scheduled hourlong appointments with advisers who answered simple client questions, such as the status of their tax return.

Now, the company uses an AI agent connected to curated data sources to address 65% of customer inquiries, with 30% arranging a call with a human. (The remaining 5% drop out of the inquiry process for various reasons.) The company uses Salesforce’s Agentforce to handle customer inquiries and its Einstein platform for orchestration across 1-800Accountant’s back end.
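As a rough, hypothetical sketch of that kind of triage (the threshold and function names here are invented for illustration; this is not Salesforce's Agentforce API): the AI agent answers first, and anything it can't answer confidently gets routed to a human adviser.

ESCALATION_THRESHOLD = 0.75  # invented cutoff, purely illustrative

def ai_agent_answer(inquiry: str) -> tuple[str, float]:
    """Stand-in for an AI agent grounded in curated data sources.
    Returns (answer, confidence); a real system would call a
    retrieval-backed model here."""
    return "Your 2024 return was e-filed and accepted.", 0.92

def route_inquiry(inquiry: str) -> str:
    answer, confidence = ai_agent_answer(inquiry)
    if confidence >= ESCALATION_THRESHOLD:
        return f"AI agent: {answer}"
    return "Scheduling a call with a human adviser."

print(route_inquiry("What's the status of my tax return?"))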

Teeples said the company is saving money on the cost of human advisers. “The ROI in this case was abundantly clear,” he said.

Orchestrating AI across the enterprise requires the right infrastructure, especially when it comes to data, according to Gabrielle Tao, senior vice president for data cloud at Salesforce. It is important, she said, to harmonize data, for example, by creating a consistent way to refer to business concepts such as “orders” and “transactions,” regardless of the underlying data source.

AI deployments should target tasks that are both frequent and generalizable, according to Walter Sun, global head of artificial intelligence at SAP. Infrequent, highly specific tasks such as a marketing campaign for a single event might benefit from AI, but applying AI to regularly occurring tasks will achieve a more consistent ROI, he said.

We’ve been here before

Historically, it has taken years for the world to figure out what to do with revolutionary general-purpose technologies, including the steam engine and electricity, according to Brynjolfsson. It isn't unusual for general-purpose technologies to follow a "J-curve," in which there's a dip in initial productivity as businesses figure things out, followed by a ramp-up in productivity.

He says companies are beginning to turn the corner of the AI J-curve.

The transformation may occur faster than in the past, because businesses—under no small amount of pressure from investors—are working to quickly justify the massive amount of capital pouring into AI.

 

Piff Perkins

Veteran
Joined
May 29, 2012
Messages
53,596
Reputation
20,121
Daps
293,314
The way people accuse Apple of being butthurt over its inability to gain ground in AI is really telling. A set group of influential people decided nonstop AI growth is the answer to the question, and nothing will change their mind. Not even facts, like this. Or the fact that the US will face Europe-style power outages and brief blackouts due to the obscene amount of energy that will be funneled to AI data centers. Oh, and a certain bill being debated in the Senate right now would prevent states from regulating AI for a decade (at which point it'll be game over).

This is a takeover and nearly everyone is involved. The media, tech barons, social media systems, influencers, corporations, politicians, etc. Perhaps people start to realize why all the tech CEOs were at the inauguration, or why most of them shifted so far right. They're betting the farm on this shyt and it HAS to hit.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
64,563
Reputation
9,862
Daps
175,276

sean goedecke


The illusion of "The Illusion of Thinking"


Very recently (early June 2025), Apple released a paper called The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity. This has been widely understood to demonstrate that reasoning models don’t “actually” reason. I do not believe that AI language models are on the path to superintelligence. But I still don’t like this paper very much. What does it really show? And what does that mean for how we should think about language models?

What does the paper demonstrate?


The Apple paper starts by arguing that we shouldn't care about how good reasoning models are at mathematics and coding benchmarks, because (a) those benchmarks are contaminated, and (b) you can't run good experiments on mathematics and coding tasks, because there's no easy measure of complexity. Instead, they evaluate reasoning models on four artificial puzzle environments (Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World), scaling up from trivial instances like Tower of Hanoi with one disk to Tower of Hanoi with twenty disks. Here's an example where they compare the non-reasoning DeepSeek-V3 with the reasoning DeepSeek-R1:



[Figure: accuracy of the non-reasoning DeepSeek-V3 vs. the reasoning DeepSeek-R1 as puzzle complexity increases]


This pattern was basically the same for all pairs of reasoning/non-reasoning models and all puzzles. Here are the big conclusions the paper draws from this:

  • For very simple puzzles, non-reasoning models are equal or better, because reasoning models sometimes “overthink” themselves into a wrong answer
  • For middle-difficulty puzzles, reasoning models are notably better
  • Once the difficulty gets sufficiently high, even the reasoning model fails to answer correctly, no matter how much time you give it.

The paper goes on to examine the internal reasoning traces for the reasoning models, which supports the above conclusions: as you might expect, the correct answer shows up almost immediately for trivial problems, then takes more reasoning for harder problems, then never shows up at all for the hardest.

The paper notes that as you ramp up complexity, once the model can’t figure it out, reasoning effort goes down: instead of spending more tokens struggling with the problem, the model “gives up” and stops reasoning.

Finally, the paper attempts to directly give the correct puzzle-solving algorithm to the model, expecting that this will improve the reasoning model's ability. It sort of works - some reasoning models can do one more disk - but it doesn't have a substantial effect.

Overall the paper concludes:

  • Reasoning models don’t have generalizable reasoning capabilities beyond a certain complexity threshold
  • There is likely an “inherent compute scaling limit” in reasoning models, shown by the fact that they give up past a certain complexity point
  • Reasoning models are bad at computational tasks, since giving them the algorithm didn’t help

I have a few issues with this paper. First, I don’t think Tower of Hanoi puzzles (or similar) are a useful example for determining reasoning ability. Second, I don’t think the complexity threshold of reasoning models is necessarily fixed. Third, I don’t think that the existence of a complexity threshold means that reasoning models “don’t really reason”.

Puzzles aren’t a good example


The first issue I have with the paper is that Tower of Hanoi is a worse test case for reasoning than math and coding. If you're worried that math and coding benchmarks suffer from contamination, why would you pick well-known puzzles whose solutions we know exist in the training data?

Because of this, I’m puzzled by the paper’s surprise that giving the models the algorithm didn’t help. The Tower of Hanoi algorithm appears over and over in model training data. Of course giving the algorithm doesn’t help much - the model already knows what the algorithm is!

Finally, reasoning models have been deliberately trained on math and coding, not on puzzles. It's possible that puzzles are a fair proxy for reasoning skills, but it's also possible that they aren't. I could easily believe that reasoning models have better internal tools for solving math problems or writing code than they do for toy puzzles, in which case testing them on puzzles isn't necessarily informative. It'd be like saying "language models haven't gotten much better at writing Petrarchan sonnets, so they can't really be improving". The whole setup looks like the streetlight effect (https://en.wikipedia.org/wiki/Streetlight_effect) to me: just because it's easier to generate test cases, it doesn't make it a better example.

Complexity thresholds may not be fixed


Suppose that language models do give up past a certain complexity point, just like they do past ten-disk Tower of Hanoi. In other words, suppose that my previous worry - that puzzles might not be a good example - is unfounded, and that the results of the Apple paper do generalize. Does this mean that reasoning models can’t think past a certain point of complexity?

I tried some of the Apple paper prompts against DeepSeek-R1. Here’s an abridged beginning of the reasoning trace:

Okay, so I need to solve the Tower of Hanoi puzzle with 10 disks. Let me think about how to approach this. I remember that the classic solution for Tower of Hanoi involves recursive steps.



The standard solution for n disks requires 2^n - 1 moves. For 10 disks, that’s 1023 moves. But generating all those moves manually is impossible. So I need a systematic method to list each move step by step.

Let me recall the iterative approach for Tower of Hanoi…

Note that the model immediately decides “generating all those moves manually is impossible”, because it would require tracking over a thousand moves. So it spins around trying to find a shortcut and fails.

The key insight here is that past a certain complexity threshold, the model decides that there are too many steps to reason through and starts hunting for clever shortcuts. So past eight or nine disks, the skill being investigated silently changes from "can the model reason through the Tower of Hanoi sequence?" to "can the model come up with a generalized Tower of Hanoi solution that skips having to reason through the sequence?"

From my testing, even at lower disk counts, DeepSeek-R1 grumbles a bit before pushing through the sequence - there's a paragraph or two complaining that "hmm, this is going to be tedious", even if you explicitly instruct it to work through the sequence in the system prompt. That makes sense: reasoning models are trained to reason, not to follow a set algorithm for thousands of iterations.

Footnotes:

1. Again, puzzles are a bad example.

2. I would be surprised if you couldn't fine-tune or prompt a reasoning model for persistence on simple algorithms that could do this. Incidentally, reasoning models can definitely generate Python code that would complete the thousand-move sequence! (A minimal sketch follows these notes.)

3. I would like to sit down all the people who are smugly tweeting about this with a pen and paper and get them to produce every solution step for ten-disk Tower of Hanoi.
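To make footnote 2 concrete, here's a minimal sketch (mine, not the paper's or the post's) of the standard recursive program a reasoning model can readily write, which emits the full 2^n - 1 move sequence that the models refuse to grind out token by token:

def hanoi(n, source, target, spare, moves):
    """Append the optimal Tower of Hanoi move sequence for n disks."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the top n-1 disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # restack the n-1 disks

moves = []
hanoi(10, "A", "C", "B", moves)
print(len(moves))   # 1023, i.e. 2**10 - 1
print(moves[:3])    # [('A', 'B'), ('A', 'C'), ('B', 'C')]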
 