I’ll give you an example. Recently I was setting up an operator maintained by a third-party vendor in a K8s environment.
There are multiple ways of installing the operator, but the vendor primarily supports and recommends one approach in particular. Due to restrictions in the K8s environment, I couldn't take that approach. The issue was that with the approach I did have to take, a specific feature I wanted couldn't be enabled in a straightforward way.
I asked ChatGPT how to enable this feature with the route I took, and it flat-out made up answers. It got to the point where I was basically pressing the machine on where it found that answer (lol), and it fessed up that there was no official documentation for what it was suggesting — the answer came from it "reverse engineering" the recommended installation approach.
So I went ahead and looked at the source code it claimed to be reverse engineering, and guess what: there was zero evidence of the solution ChatGPT was suggesting. In fact, once I read the source code, I figured out on my own what I needed to do to make my approach work. After a bit more prodding, ChatGPT eventually course-corrected and gave me a viable solution (which I still had to tweak on my own).
All that is to say: these tools can be useful, but they can start spewing garbage when you're coloring outside the lines or taking an unconventional approach, which is a common occurrence in this industry. If you don't have the knowledge or expertise to judge whether the solutions you're getting are legitimate, you might hang yourself by relying on them too heavily.