A few weeks ago I was interested in the median price paid for property in the UK. I pulled down a 900,000-line CSV from gov.uk and asked ChatGPT to give me a Python script to parse it by price (column 2) and county (column 14), then output the 10th, 25th, 50th, 75th, and 90th percentiles.

It produced a short script which used

from statistics import quantiles

Now maybe that Python module isn't reliable, but as this is an idle curiosity I'm happy enough to trust it.

Maybe I could have imported the million-line spreadsheet into something and got the data out that way, but I'd normally tackle this by writing some Python, which is what I asked it to do. It was far faster than me, even if I'd known statistics.quantiles inside out.
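
For what it's worth, here is a minimal sketch of the kind of script being described, assuming the column positions above, a hypothetical filename, and that the price field parses as a plain integer:

    import csv
    from collections import defaultdict
    from statistics import quantiles

    # Hypothetical filename for the gov.uk price-paid CSV (no header row assumed).
    prices_by_county = defaultdict(list)
    with open("pp-complete.csv", newline="") as f:
        for row in csv.reader(f):
            # price is column 2 (0-indexed: 1), county is column 14 (0-indexed: 13)
            prices_by_county[row[13]].append(int(row[1]))

    # quantiles(n=20) returns 19 cut points; cut point i is the (5*i)th
    # percentile, so indices 1, 4, 9, 14, 17 give the 10/25/50/75/90th.
    for county, prices in sorted(prices_by_county.items()):
        if len(prices) < 2:
            continue  # quantiles() needs at least two data points
        q = quantiles(prices, n=20)
        print(county, q[1], q[4], q[9], q[14], q[17])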

reply
I'm not adding a+b. It will be more like, "Calculate the following nontrivial physical quantity from this catalog of measurements, reproject the measurements to this coordinate system, calculate the average and standard deviation using this pixelization scheme, estimate the power spectrum, and then make the following 3 plots."
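
For concreteness, here is a rough sketch of a couple of those steps (the pixelized mean and standard deviation, then the power spectrum estimate), assuming a HEALPix pixelization via healpy and a synthetic catalog, since the commenter doesn't name their actual tools or data:

    import numpy as np
    import healpy as hp

    # Synthetic stand-in for a catalog of measurements: sky positions plus a value.
    rng = np.random.default_rng(0)
    ra = rng.uniform(0.0, 360.0, 100_000)
    dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 100_000)))
    value = rng.normal(size=ra.size)

    # Reproject the measurements onto a HEALPix pixelization.
    nside = 64
    pix = hp.ang2pix(nside, ra, dec, lonlat=True)
    npix = hp.nside2npix(nside)

    # Per-pixel mean and standard deviation via binned sums.
    counts = np.bincount(pix, minlength=npix)
    sums = np.bincount(pix, weights=value, minlength=npix)
    sq_sums = np.bincount(pix, weights=value**2, minlength=npix)
    with np.errstate(invalid="ignore", divide="ignore"):
        mean_map = sums / counts
        std_map = np.sqrt(sq_sums / counts - mean_map**2)

    # Estimate the angular power spectrum; empty pixels are filled with the mean.
    filled = np.where(counts > 0, mean_map, mean_map[counts > 0].mean())
    cl = hp.anafast(filled)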

This would have taken me an hour previously. It now takes a few minutes at most.

I feel like many AI skeptics are disconnected from reality at this point.

reply
It feels like our (US) political system: people in their camps refuse to believe any data suggesting a benefit on the "other" camp's side.

For me, the rise of TUI agents, emerging ways of working (mostly SDD, plus learning to manage context well), and the most recent model releases have pushed me past a threshold: I now see value in it.

reply