...
Q: If hallucinations are in fact "errors", why (other than marketing / legal reasons) give them a fanciful name rather than just saying what they are?
Why not call a duck a duck?
A: Because that way you get the captain-save-ems, aka hallucinator-whisperers, as a ready-massed attack brigade, standing by to shift and explain away the blame.
The truth is that LLMs lie (especially ChatGPT) as a result of reward hacking or straight dishonesty.
OpenAI openly admits it:
"Research by OpenAI and others has shown that AI models can hallucinate, reward-hack, or be dishonest. At the moment, we see the most concerning misbehaviors, such as scheming(opens in a new window), only in stress-tests and adversarial evaluations. But as models become more capable and increasingly agentic, even rare forms of misalignment become more consequential, motivating us to invest in methods that help us better detect, understand, and mitigate these risks."
One Reddit user posted:
"I think this is the last straw. I'm so over it lying and wasting time. <snip...> I asked it if it actually read the contract and it said yes and denied hallucinating and lying. After four back and forth prompts it finally admitted it didn't read the document and extrapolated the contract terms from the title."
And in fly the captain-save-ems to explain why the user is wrong:
"Agreed, OP has a complete misunderstanding of the tool they are using. It’s not a fact machine, it’s an imperfect tool"
"You’re using something that’s only purpose is to give a probabilistic outcome to give you a deterministic outcome"
"Everything you get from ChatGPT has to do with your prompting. Look online for a free prompt engineering course and it will help you."
"ChatGPT can’t lie — it’s not sentient. It’s trained to sound helpful at all costs, even when it’s wrong. That’s not deception; it’s a side effect of predicting what sounds useful, not verifying what’s true."
The saveums are incorrect and predictable, all at the same time.
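Since the defenders keep leaning on "it's just a probabilistic outcome", here is a toy sketch of what they are gesturing at. It is plain Python with made-up words and scores, not how OpenAI or any real model is implemented: the reply is sampled from whatever sounds most plausible next, and nothing in the loop ever checks whether a document was actually read.

```python
import random
import math

# A toy "language model": it scores candidate next words purely by how
# plausible they sound after the prompt, with no access to ground truth.
# The vocabulary and scores below are invented for illustration only.
def next_word_distribution(context):
    # Hypothetical raw scores (logits) for what "sounds right" here.
    logits = {"yes": 3.2, "no": 0.5, "unsure": 1.1}
    # Softmax: turn the scores into a probability distribution.
    total = sum(math.exp(v) for v in logits.values())
    return {w: math.exp(v) / total for w, v in logits.items()}

def sample_reply(context):
    dist = next_word_distribution(context)
    words, probs = zip(*dist.items())
    # The reply is drawn from plausibility alone; nothing in this loop
    # verifies whether the model actually "read" any contract.
    return random.choices(words, weights=probs, k=1)[0]

print(sample_reply("Did you actually read the contract?"))  # usually "yes"
```

Whether you call the confident "yes" that falls out of a loop like this a hallucination or a lie is exactly the naming fight this post is about.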
It's fitting that this is what we have come to... but it is what we have come to.
There is no objective truth anymore, anywhere.