> Basically every piece of software being built is now being built, in some part, with AI, so that is patently false.

Yep, just like a few years ago, all fintech being built was being built on top of crypto and NFTs. This is clearly the future and absolutely NOT a bubble.

reply
> all fintech being built was being built on top of crypto and NFTs

This seems WAY less true than the assertion of software being built with LLM help. (Source: Was in FinTech.)

Like, to the point of being a willful false equivalence.

reply
I mean... even if you're an LLM skeptic, they are already default tools in a software engineer's toolbox, so at a minimum software engineering will have been transformed, even if AI enthusiasm cools.

The fact, too, that there is so much value already being derived is a pretty big difference from crypto, which never generated any value at all.

reply
There are so many things to criticize about the current state of gen AI, but if someone tells me with a straight face that there is zero value in LLMs and it's all like crypto, I will just dismiss their opinion wholesale, because if they are this wrong and lazy about something that is easily refutable, their other opinions aren't worth much either.
reply
My own coding productivity has increased by a few times by using LLMs. Is that just a bubble?
reply
Your productivity has not increased by a few times unless you're measuring purely by lines of code written, which has been firmly established over the decades as a largely meaningless metric.
reply
I needed to track the growth of "tx_ucast_packets" in each queue on a network interface earlier.

I asked my friendly LLM for a script to run every second and dump the delta for each queue into a CSV: 10 seconds to write what I wanted, 5 more seconds to run it, then another 10 seconds to have it reformatted after looking at the output.

It had hardcoded the interface, which is what I told it to do, but once I was happy with it and wanted to change the interface, another 5 seconds of typing and it was using argparse to take in a bunch of variables.
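
Roughly the kind of throwaway script this ends up being (a reconstruction, not what the LLM actually wrote; I'm assuming the counters come from ethtool -S, whose per-queue output format varies by driver):

    import argparse
    import csv
    import subprocess
    import sys
    import time

    parser = argparse.ArgumentParser()
    parser.add_argument("interface")
    parser.add_argument("--interval", type=float, default=1.0)
    args = parser.parse_args()

    def read_counters(iface):
        # Parse `ethtool -S <iface>` and keep every per-queue tx_ucast_packets counter.
        out = subprocess.run(["ethtool", "-S", iface],
                             capture_output=True, text=True, check=True).stdout
        counters = {}
        for line in out.splitlines():
            if "tx_ucast_packets" in line:
                name, _, value = line.rpartition(":")
                counters[name.strip()] = int(value)
        return counters

    # One CSV row per interval, one column per queue, each cell a per-interval delta.
    writer = csv.writer(sys.stdout)
    prev = read_counters(args.interface)
    keys = sorted(prev)
    writer.writerow(["time"] + keys)
    while True:
        time.sleep(args.interval)
        cur = read_counters(args.interface)
        writer.writerow([time.strftime("%H:%M:%S")] +
                        [cur.get(k, 0) - prev.get(k, 0) for k in keys])
        prev = cur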

That task would have taken me far longer than 30 seconds to do 5 years ago.

Now if only AI could reproduce the intermittent problem with packet ordering I've been chasing down today.

reply
I'm measuring by the amount of time it takes me to write a piece of code that does something I want, like make a plot or calculate some quantity of interest.

Or even the fact that I was able to start coding in an entirely new ML framework right away without reading any documentation beforehand.

I'm puzzled by the denialism about AI-driven productivity gains in coding. They're blindingly obvious to anyone using AI to code nowadays.

reply
> like make a plot or calculate some quantity of interest.

This great comment I saw on another post earlier feels relevant: https://news.ycombinator.com/item?id=46850233

reply
A few weeks ago I was interested in the median price paid for property in the UK. I pulled down a 900,000-line CSV from gov.uk and asked ChatGPT to give me a Python script to parse it based on price (col 2) and county (col 14), then output the 10th, 25th, 50th, 75th, and 90th percentiles.

It dropped out a short file which used

    from statistics import quantiles

Now maybe that Python module isn't reliable, but as it's an idle curiosity I'm happy enough to trust it.

Now maybe I could have imported the million-line file into a spreadsheet and got that data out that way, but I'd normally tackle this by writing some Python, which is what I asked it to do. It was far faster than me, even if I'd known statistics.quantiles inside out.
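
For the curious, a sketch of roughly what that script could look like (my reconstruction, not the file it actually produced; column positions as described above, grouping by county, and statistics.quantiles with n=100 so the cut points line up with percentiles):

    import csv
    import sys
    from collections import defaultdict
    from statistics import quantiles

    # Price paid is column 2 (index 1), county is column 14 (index 13).
    prices = defaultdict(list)
    with open(sys.argv[1], newline="") as f:
        for row in csv.reader(f):
            prices[row[13]].append(float(row[1]))

    # quantiles(..., n=100) returns 99 cut points; cut point p-1 is the pth percentile.
    for county, values in sorted(prices.items()):
        if len(values) < 2:
            continue
        qs = quantiles(values, n=100)
        print(county, [round(qs[p - 1]) for p in (10, 25, 50, 75, 90)])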

reply
I'm not adding a+b. It will be more like, "Calculate the following nontrivial physical quantity from this catalog of measurements, reproject the measurements to this coordinate system, calculate the average and standard deviation using this pixelization scheme, estimate the power spectrum, and then make the following 3 plots."

This would have taken me an hour previously. It now takes a few minutes at most.
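
To give a flavor of the shape of that kind of request (not my actual pipeline; assuming a HEALPix pixelization via healpy, and every file name, column, and parameter below is made up for illustration):

    import healpy as hp
    import matplotlib.pyplot as plt
    import numpy as np

    nside = 64
    # Hypothetical catalog with sky coordinates and one measured quantity per row.
    ra, dec, value = np.loadtxt("catalog.txt", unpack=True)

    # "Reproject" the sky coordinates into the chosen pixelization.
    pix = hp.ang2pix(nside, ra, dec, lonlat=True)
    npix = hp.nside2npix(nside)

    # Per-pixel average and standard deviation of the measured quantity.
    counts = np.maximum(np.bincount(pix, minlength=npix), 1)
    mean_map = np.bincount(pix, weights=value, minlength=npix) / counts
    sq_map = np.bincount(pix, weights=value**2, minlength=npix) / counts
    std_map = np.sqrt(np.maximum(sq_map - mean_map**2, 0))

    # Estimate the angular power spectrum of the averaged map.
    cl = hp.anafast(mean_map)

    # Three plots: mean map, scatter map, power spectrum.
    hp.mollview(mean_map, title="mean")
    hp.mollview(std_map, title="std dev")
    plt.figure()
    plt.loglog(np.arange(1, len(cl)), cl[1:])
    plt.show()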

I feel like many AI skeptics are disconnected from reality at this point.

reply
It feels like our (US) political system: people in their camps refuse to believe any data that supports the "other" camp.

For me, the rise of TUI agents, emerging ways of working (mostly SDD, and learning how to manage context well), and the most recent model releases have pushed me past a threshold where I now see value in it.

reply