Explaining a design, a problem, etc., and trying to find solutions is extremely useful.
I can bring the novelty; what I often want from the LLM is a better understanding of the edge cases I may run into, and possible solutions.
The moment I bring in a conversational element, I want a being that actually has problem comprehension and creativity, which an LLM by definition does not.
Neither is the LLM
LLMs are a non-free way for you to make use of less of your brain. It seems to me that these are not the same thing.
But that is definitely a clear and concise way to describe it: my brain and fingers are a firewall between the LLM and my code/workspace. I'm using it to help frame my thinking, but I'm the one making the decisions. And I'm intentionally keeping context in my brain, not in the LLM, by not exposing my workspace to it.
Unfortunately they can also validate some really bad ideas.
That being said, I don't think LLMs are idea generators either. They're common-sense spitters, which many people desperately need.
I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.
Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?
I guess I must feel it's slightly useful overall as I still do it.
This couldn't be more wrong. The simplest refutation is to point out that there are temperature and top-k settings, which, by design, generate tokens (and by extension, ideas) that are less probable given the inputs.
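A toy sampler makes the point concrete (a minimal sketch for illustration only; `sample_token` is a made-up helper, not any real inference stack's API):

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None):
    """Sample an index from `logits` with temperature and top-k filtering.

    A higher temperature flattens the distribution, making less probable
    tokens more likely to be picked; top-k restricts sampling to the k
    highest-scoring tokens before normalizing.
    """
    # Temperature scaling: divide logits by T before the softmax.
    scaled = [l / temperature for l in logits]
    # Optional top-k filtering: keep only the k largest logits.
    if top_k is not None:
        cutoff = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [l if l >= cutoff else float("-inf") for l in scaled]
    # Softmax (subtracting the max for numerical stability).
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the resulting distribution.
    return random.choices(range(len(probs)), weights=probs, k=1)[0]
```

With `top_k=1` this degenerates to greedy (most probable) decoding; raising `temperature` above 1 spreads probability mass onto tokens the model ranked lower.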
I think the only reason it's seen as good anywhere is that there are a lot of tasteless and talentless people who can pretend they created whatever was curled out. This goes for code as well.
If I offend anyone I will not be apologising for it.
What it considers best is what occurs most often, which can be the most average answer. Unless the service is tuned for search (Perplexity, or Google itself, for example), other services will not provide as complete an answer.
How well we ask can make all the difference. It's like asking a coworker: providing too little information, or too much context, can give different responses.
Try asking the model not to provide its most common or average answer.
Been using it this way for 2, almost 3 years.