Software Development and Programming Careers (Official Discussion Thread)

null

...
Joined
Nov 12, 2014
Messages
31,542
Reputation
5,537
Daps
49,389
Reppin
UK, DE, GY, DMV
To me, prompt engineering is like writing a SQL query or web-search:

which gets pushed through a query analyzer + optimizer. sort of like how functional languages or a JIT compiler try to figure out what you actually want and/or how best to get it.

those skilled @ translating what they are trying to do into machine-friendly logical statements are good at finding the data they want... But, if queries are an exercise in logic, machines will always have greater 'logical' potential than humans, and IMO have a short runway to best the average human re: using logic to return data.
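The query-optimizer analogy above can be made concrete. A minimal sketch using Python's built-in sqlite3 as the engine (the table and data are invented purely for illustration): you write a declarative query saying *what* you want, and the engine's planner decides *how* to fetch it.

```python
import sqlite3

# Toy example (table/column names invented): you declare WHAT you want,
# and the engine's query planner decides HOW to fetch it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("ada",), ("bob",), ("cy",)])

# The declarative query...
query = "SELECT name FROM users WHERE id = 2"

# ...and the plan the optimizer chose for it -- here a direct
# primary-key lookup rather than a full table scan.
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan)
print(conn.execute(query).fetchall())  # [('bob',)]
```

EXPLAIN QUERY PLAN shows the access path the optimizer picked; the exact wording of the plan rows varies between SQLite versions, but the point stands either way: the "how" is the engine's job, not yours.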

that's rot. computers are poor theory provers.

LLMs are getting increasingly better at using logic/probabilities (read: educated guesses?) to guess what we are trying to do. There's the simple stuff like understanding when we misspell/forget a word, but also the probabilistic stuff when it answers based on what is most probable... (On a related note, it is interesting to use 'thinking' models to read behind the curtain as they form that context on the fly)

thinking models?

Obviously current models are not ready for it yet, but I do think that as models get better at contextualizing, it will negate the need for prompt-engineering.

yeah like how query engines (sql as you mentioned for example) remove the need for query and architecture prompts :mjlol:

I don't disagree that it is very useful, I just have reservations about it being a standalone career-path beyond the near-term. I could be wrong tho :yeshrug:
 

Macallik86

Superstar
Supporter
Joined
Dec 4, 2016
Messages
6,831
Reputation
1,642
Daps
22,567
what does that mean?
It's been a while, so I can't recall the specifics of the initial article I read. I believe it was a simplified idea of Self-Supervised Reinforcement Learning.

Personally, I think that context and logic go hand in hand. My theory is that skilled humans, whether CEO, athlete, or politician, still just take small logical actions, but grounded in experience/context, and that is what makes the difference.

I think a lack of context is a large factor in a lot of illogical LLM responses, and as they get better at creating and refining a context, our specific guidance becomes less necessary. I think the improvements we've seen when models are allowed to reason are a great example of this, both figuratively and literally, if you read them creating a context on the fly.
 

Macallik86

Superstar
Supporter
Joined
Dec 4, 2016
Messages
6,831
Reputation
1,642
Daps
22,567
I've been meaning to ask you @null, are you autistic? Your posting style of engaging with people while lacking the tact to have a conversation that doesn't seem antagonistic/confrontational seems in line with that 🤔
yeah like how query engines (sql as you mentioned for example) remove the need for query and architecture prompts :mjlol:
More like how the people who use social media to search for people have results pre-filters based on proximity.

Also re: thinking models, they are more commonly referred to as reasoning models
 

null

...
Joined
Nov 12, 2014
Messages
31,542
Reputation
5,537
Daps
49,389
Reppin
UK, DE, GY, DMV
It's been a while, so I can't recall the specifics of the initial article I read. I believe it was a simplified idea of Self-Supervised Reinforcement Learning.

Personally, I think that context and logic go hand in hand. My theory is that skilled humans, whether CEO, athlete, or politician, still just take small logical actions, but grounded in experience/context, and that is what makes the difference.

LLMs (the ANN part) do not use logic. the idea is that logic is an emergent property of a trained LLM.

I think a lack of context is a large factor in a lot of illogical LLM responses, and as they get better at creating and refining a context, our specific guidance becomes less necessary. I think the improvements we've seen when models are allowed to reason are a great example of this, both figuratively and literally, if you read them creating a context on the fly.

computers will in general have to approximate a precise context (based on guesswork) in the absence of understanding.

and that is complicated by LLM prompt feedback, relative importance of context, adjunct reasoning vs. mainline solution path etc.

LLMs are not using deductive/inductive and other forms of reasoning, as humans might do.

these concepts are hard to impossible to grasp or even approximate without understanding.

and LLMs do not understand.
 

null

...
Joined
Nov 12, 2014
Messages
31,542
Reputation
5,537
Daps
49,389
Reppin
UK, DE, GY, DMV
I've been meaning to ask you @null, are you autistic? Your posting style of engaging with people while lacking the tact to have a conversation that doesn't seem antagonistic/confrontational seems in line with that 🤔

maybe... who knows. i am mostly working while posting on here so i have to get to the point.

i appreciate learning and dislike babble esp. tech babble. i am all for learning on here but find it annoying when people make bold flawed assertions or confidently post things which are outright wrong.

that may be the norm in the US, but maybe that is just my Germanic-influenced, abrasive low tolerance for babble.

you can call that autism or whatever you want.

you are not gonna catch me beefing / posting against a lot of what goes on on here but some of the tech stuff said on here is egregious.

More like how the people who use social media to search for people have results pre-filters based on proximity.

Also re: thinking models, they are more commonly referred to as reasoning models

i think you mean LLMs which explain their output derivation as part of their full output.

reasoning, i.e. logic-based, models are not used in LLMs. not yet anyway.

LLMs are statistics-based models.
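To illustrate "statistical, not logical" at the simplest possible level, here is a toy bigram model. This is a deliberate oversimplification (real LLMs are transformers with billions of parameters), but the core mechanism, next-token probabilities learned from data rather than deduction, is the same; the corpus is made up.

```python
import random
from collections import Counter, defaultdict

# A bigram model: predict the next word purely from observed
# frequencies in the training text. No logic, no understanding --
# just counting and sampling.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    # Sample the next word in proportion to how often it followed `prev`.
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

# In this corpus "the" was followed by cat(2), mat(1), fish(1),
# so "cat" is the most probable continuation.
print(counts["the"].most_common(1))  # [('cat', 2)]
print(next_word("the"))              # cat, mat, or fish -- a weighted guess
```

Scale the corpus up by many orders of magnitude and replace the frequency table with a trained neural network and you are in the neighborhood of what "statistics-based" means here: plausible continuations, not derived conclusions.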
 

GoldenGlove

😐😑😶😑😐
Staff member
Supporter
Joined
May 1, 2012
Messages
61,000
Reputation
5,990
Daps
143,701
have you ever used an LLM to code anything complex? if so in what languages?
No, but I'm in the process of building an AI-first site/platform now with Cursor and Lovable. I'm not sure what LLM Lovable uses to generate prototypes, but in Cursor I've been using Gemini 2.5. The language is mostly TypeScript

It's been an iterative process so far, the more senior you are at programming the faster you will build for sure.

I don't know how complex it is; essentially it'll function as a directory/repository that references documents that I either upload or link to (still thinking through the best approach). It's also going to generate content based on inputs provided. The generations will be grounded with documentation that is cross-referenced with the user inputs... In my head this isn't too complex at its core, but I'm sure I'll learn a lot along the way.

I feel like as long as I know how to articulate what I'm trying to accomplish and thinking through this step by step I'll be able to do this with AI pair programming along with me.

This stuff isn't capable of just executing a complete full stack development build from a prompt yet, but it definitely makes the idea of building much more accessible
 

null

...
Joined
Nov 12, 2014
Messages
31,542
Reputation
5,537
Daps
49,389
Reppin
UK, DE, GY, DMV
No, but I'm in the process of building an AI-first site/platform now with Cursor and Lovable. I'm not sure what LLM Lovable uses to generate prototypes, but in Cursor I've been using Gemini 2.5. The language is mostly TypeScript

It's been an iterative process so far, the more senior you are at programming the faster you will build for sure.

I don't know how complex it is; essentially it'll function as a directory/repository that references documents that I either upload or link to (still thinking through the best approach). It's also going to generate content based on inputs provided. The generations will be grounded with documentation that is cross-referenced with the user inputs... In my head this isn't too complex at its core, but I'm sure I'll learn a lot along the way.

I feel like as long as I know how to articulate what I'm trying to accomplish and thinking through this step by step I'll be able to do this with AI pair programming along with me.

This stuff isn't capable of just executing a complete full stack development build from a prompt yet, but it definitely makes the idea of building much more accessible

typescript is popular, not knotty, and largely backward compatible, so LLMs can probably avoid some of the issues that they have with more highly changeable frameworks/libraries.

more generally i think that the ability of LLMs to program is vastly overstated. I use chatgpt (and probably should use claude) during my dev and it is annoying as hell. it makes stuff up, it gets stuck in loops, deprecations and versions catch it out, it cannot track complex problem solutions, it is influenced by user input, and it forgets. and those are just some of the problems.
 

bnew

Veteran
Joined
Nov 1, 2015
Messages
64,339
Reputation
9,832
Daps
174,927
typescript is popular, not knotty, and largely backward compatible, so LLMs can probably avoid some of the issues that they have with more highly changeable frameworks/libraries.

more generally i think that the ability of LLMs to program is vastly overstated. I use chatgpt (and probably should use claude) during my dev and it is annoying as hell. it makes stuff up, it gets stuck in loops, deprecations and versions catch it out, it cannot track complex problem solutions, it is influenced by user input, and it forgets. and those are just some of the problems.

use the same prompt on multiple models, and you can reduce hallucinations by specifying the libraries and version numbers to use. if that doesn't work, also specify the deprecated library or version number and comment out the code that uses it, placing it after the code that uses the library or version you actually want. usually having both in the output will reduce hallucinations.
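A rough sketch of the version-pinning idea above. The package names, versions, and prompt wording here are all placeholders, not a recommendation; the point is the shape of the prompt, explicit pins plus an instruction not to guess:

```python
# Build a prompt that pins exact library versions, so the model is
# less likely to mix in APIs from versions you aren't using.
# Package names and versions below are illustrative only.
pinned = {"react": "18.2.0", "typescript": "5.4"}

constraints = "\n".join(
    f"- use {pkg}@{ver}; do not use APIs removed or deprecated in {ver}"
    for pkg, ver in pinned.items()
)

prompt = (
    "Write a small TypeScript component.\n"
    "Hard constraints:\n"
    f"{constraints}\n"
    "If an API only exists in another version, say so instead of guessing."
)
print(prompt)
```

The same constraint block can be pasted into any chat model or dropped into a Cursor rules file, which is what makes the "same prompt on multiple models" comparison cheap to run.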
 

GoldenGlove

😐😑😶😑😐
Staff member
Supporter
Joined
May 1, 2012
Messages
61,000
Reputation
5,990
Daps
143,701
typescript is popular, not knotty, and largely backward compatible, so LLMs can probably avoid some of the issues that they have with more highly changeable frameworks/libraries.

more generally i think that the ability of LLMs to program is vastly overstated. I use chatgpt (and probably should use claude) during my dev and it is annoying as hell. it makes stuff up, it gets stuck in loops, deprecations and versions catch it out, it cannot track complex problem solutions, it is influenced by user input, and it forgets. and those are just some of the problems.
Before I started working on this, I did quite a bit of research on which models are best for programming. I've read that Claude and Gemini are superior to ChatGPT. Also, when you use Cursor it allows you to select the model you want to use (by default it's set to auto).

To me, the vibe coding tools are better than the chat models for app dev. It feels like they're fine-tuned better for programming. OpenAI realized they were falling behind in this space; that's why they just bought Windsurf (I haven't tried this one, but I've heard it's right on par with Cursor) for $3B rather than make their own version. The time it would take for them to make it and roll it out wasn't worth the potential loss in users/adoption.

Also, Google has Firebase Studio that's free as another option. All this stuff is moving fast af
 