I wonder, has AI ever disagreed with a user when they express opinions about the info presented? It always seems to tell the user, "You're right!"
"why AI models are professional yes-men
when companies like OpenAI train these models, they use something called reinforcement learning from human feedback (RLHF). basically, humans rate thousands of AI responses, and the model learns to generate outputs that get high ratings.
guess what humans prefer? responses that are helpful, harmless, and honest, in that order. but “helpful” often gets interpreted as “agreeable” and “accommodating.” humans tend to rate responses higher when the AI validates their perspective rather than challenging it.
the result is an AI that will bend over backwards to find ways to support whatever you’re saying, even when you’re clearly wrong.
this creates what researchers call “sycophantic behavior”: the AI tells you what you want to hear rather than what you need to hear.
if you’ve used LLMs a lot and you take notes, you’ll have come across this. it’s a common issue: you tell the model that it’s wrong and doing something incorrectly, and it goes “Oh, you’re right! i apologize” and then goes ahead to ‘LIE AGAIN’.
"