Is your argument that there is no imaginable situation where someone who was competent at software development could find use for a semi-automated tool for writing software?

That would imply that either the person in question has infinite time, or has access to all software that could ever be of utility to them, which seems unlikely.

reply
There's a reason I call it spicy autocomplete.
reply
Which is what?
reply
...that an IDE providing a suggestion about what comes next as you type is not new, and the entire basis of how an LLM works is "what word probably comes next".

I'd have thought someone who's so enamoured with the tech would have at least a basic understanding of how it works.

reply
Indeed. To be honest, I think everyone on HN is aware of how LLMs work at this point, it’s not actually adding a great deal to the discussion to keep going on about autocomplete or ‘stochastic parrots’.
reply
"I've posited for a while now" and you post the most lukewarm and outdated take like it's an enlightenment. I've been coding for 20 years and can very well do everything the AI does, and so can all devs I know. We use it because it amplifies us, not because we couldn't otherwise. You've chosen a very ridiculous hill to die on.
reply
Initially I wanted to write more, but I can boil it down to a mismatch of taste and context. By that I mean some people see LLM output as tasteless or kitsch (a view I generally subscribe to), and another set of people (often overlapping with the first) hold disdain for, or at the very least look funny at, heavy LLM users, the way gym-goers would look at someone in the middle of the gym loudly suggesting a dolly or forklift instead of barbell training.

So yeah, I guess the value of doodles has shot up simply because of optics.

Somewhere else in this comment section someone tried to broaden the definition of nerd so much that practically anybody who is a consummate professional counts as one. The hill I will die on is that people don't actually dislike all this new AI stuff so much as the attitude of the people heavily invested in it.

And to add another data point regarding your hill: my drawing/painting moment was NLP stuff. Now if I want to do (rudimentary) sentiment analysis or keyword extraction I can lean on a local LLM. Yet I don't go around yelling that Snowball (I think that's the one?) is obsolete.
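For the curious, the kind of rudimentary keyword extraction being described needs nothing heavier than the standard library. A minimal sketch of the classic frequency-counting approach (the stopword list and regex tokenizer here are illustrative placeholders, not a real pipeline like Snowball's):

```python
# Dependency-free sketch of "rudimentary" keyword extraction, the sort of
# task classic NLP pipelines handled long before LLMs. Tokenize, drop
# stopwords, count, and return the most frequent terms.
import re
from collections import Counter

# Tiny illustrative stopword list; real pipelines use much larger ones.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "is", "it", "in", "that"}

def keywords(text, top_n=3):
    """Return the top_n most frequent non-stopword tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

print(keywords("The parser parses the grammar, and the grammar drives the parser."))
# → ['parser', 'grammar', 'parses']
```

No stemming, no embeddings, and it still covers a surprising share of the "keyword extraction" use case the comment mentions.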

reply
> more so the attitude of people heavily invested in it

Exactly.

LLM bros are just the new blockchain/crypto bros, but they aren't necessarily even writing their own spruiking comments any more.

reply
While you are dying on a hill, I'm shipping quality software and features to my customers, with the help of LLMs, at a pace I haven't managed before. And no, not some nextjs slop. If you are letting your LLM look at StackOverflow, you are doing it wrong: it needs to be grounded in your stack's official docs and whatever style/rules you prefer, wired up with other tooling like linting/formatting, duplication checking, etc. And yes, you have to constantly monitor the output and review every line of code, but it's still faster and, if managed correctly, produces better code and (this is the hill I will die on) better test suites and documentation than I would have written.
reply
> If you are letting your LLM look at StackOverflow, you are doing it wrong

So you've evaluated all the sources the model was initially trained on, have you? How long did that take you?

> I'm shipping quality software and features to my customers at a pace I haven't been able to before.

I'm sorry, are you agreeing with me or not? It sounds like you're agreeing with me.

reply
I’m just saying that you can’t let it rip on its training alone; it needs to be grounded and harnessed in stack-specific tooling.
reply
I'd be more general and say it needs verification to guide it, and a narrowed scope so it doesn't wander off. How those get provided can vary. While I can do what I'm asking it to do, and have done it so many times that I don't want to anymore, I can't do it as fast as it can. But as someone said, it is stupid really fast. The bottleneck is now me slowing down this intern who thinks fast, stopping it to redirect it when it does bad things. The more pre-prompting, context, and verification tools I give it, the less I have to do that, so the faster it goes. Then I get to solve the parts of the problem I haven't done before, until that gets boring too.
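The loop being described can be sketched in a few lines. This is a hedged illustration, not any particular tool's implementation: `ask_model` stands in for whatever LLM call you use (the canned `fake_model` below exists only so the sketch runs), and the verifier here merely checks that a reply parses as Python, where a real setup would run linters, formatters, and tests:

```python
# Sketch of a verification-guided generation loop: let the model generate,
# check the output mechanically, and feed any error back until it passes.

def verify(code):
    """Return an error message, or None if the code at least parses."""
    try:
        compile(code, "<llm-output>", "exec")
        return None
    except SyntaxError as exc:
        return str(exc)

def generate_with_feedback(prompt, ask_model, max_rounds=3):
    """Re-prompt with verifier feedback until the output passes."""
    feedback = ""
    for _ in range(max_rounds):
        code = ask_model(prompt + feedback)
        error = verify(code)
        if error is None:
            return code
        feedback = f"\nThe previous attempt failed verification: {error}"
    raise RuntimeError("model output never passed verification")

# Canned stand-in: the first attempt has a syntax error, the second is fixed.
_replies = iter(["def add(a, b) return a + b",
                 "def add(a, b):\n    return a + b"])
def fake_model(prompt):
    return next(_replies)

result = generate_with_feedback("Write add(a, b).", fake_model)
```

The point of the structure is exactly what the comment says: the tighter the verification you wire in, the less often you have to stop and redirect by hand.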
reply