These mainstream LLMs are designed to be safe, as in not promoting illegal activities or inviting liability, and they generally act like a yes-man for the user.
You have to be willing to check its output, ask for an alternative or counter view, tell it in your instructions not to gaslight you, build up memory over time, etc.
I once asked ChatGPT to evaluate a paper it had helped me write, and it told me the paper was trash because it thought that was what I wanted to hear. I had to remind it of the positive feedback it had given before, and that it was the co-author.
But at the same time, I've successfully used it for advice in real-life scenarios. It helped me put together a legal argument that led to me getting about $4k of repairs done for free.