no. you post hundreds of links to LLMs and yet you ask me that
what do you think people do when they hack LLMs?
why do you think LLMs structure their answers in a similar structure and avoid certain topics?
after the primary training of the model the model goes through a process called alignment to constrain behaviour
have you ever thought about how the attention window works? you can reverse engineer a lot of it by observation when using them. jus6t engage your brain.
then on top of that you have hard-wired controls - such as to enforce hard lines, filter dangerous/known prompt hacks.
why not go watch some technical LLM videos/lessons instead of the cheerleader spamming.