...
To me, prompt engineering is like writing a SQL query or web search:
it gets pushed through a query analyzer + optimizer, sort of how functional languages or JITs try to figure out what you actually want and/or how best to get it.
those skilled @ translating what they're trying to do into machine-friendly logical statements are good at finding the data they want... But if queries are an exercise in logic, machines will always have greater 'logical' potential than humans, and IMO have a short runway to best the average human re: using logic to return data.
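The optimizer analogy above can be sketched concretely with SQLite: you write a declarative query saying *what* you want, and the engine's planner decides *how* to fetch it (here, by picking an index on its own). This is a minimal illustration only; the table, index, and column names are made up for the example.

```python
import sqlite3

# Hypothetical schema, just to demonstrate the planner at work.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_users_name ON users (name)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("ada",), ("bob",)])

# The query states intent only; the optimizer chooses the access path.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM users WHERE name = 'ada'"
).fetchall()
for row in plan:
    print(row)  # the plan detail shows the index being used, not a full scan
```

The point of the analogy: the user never specifies "use idx_users_name"; the engine infers the best strategy from the declarative statement, much like a model inferring intent from a loose prompt.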
that's rot. computers are poor theorem provers.
LLMs are getting increasingly better at using logic/probabilities (read: educated guesses?) to guess what we are trying to do. There's the simple stuff like understanding when we misspell/forget a word, but also the probabilistic stuff when it answers based on what is most probable... (On a related note, it is interesting to use 'thinking' models to see behind the curtains as they form that context on the fly)
thinking models?
Obviously current models are not ready for it yet, but I do think that as models get better at contextualizing, it will negate the need for prompt-engineering.
yeah, like how query engines (SQL, as you mentioned) remove the need to hand-tune queries for the underlying architecture

I don't disagree that it is very useful, I just have reservations about it being a standalone career path beyond the near term. I could be wrong tho!