upvote
I know, right? 8-year-old me dreamed of being able to articulate software to a computer without having to write code. It (along with the original Stable Diffusion) is definitely one of the coolest inventions to come along in my lifetime.
reply
Coding assistants are currently quite hard to run locally with anything like SOTA abilities. Support in the most popular local inference frameworks is still extremely half-baked (e.g. no seamless offload for larger-than-RAM models; no support for tensor-parallel inference across multiple GPUs, or multiple interconnected machines), and until that reliably improves it's hard to justify spending money on uber-expensive hardware one might be unable to use effectively.
reply
This is an argument against the grandparent's points (1) and (2), not their point (3).
reply
It's one clear argument for the (so get to work!) part.
reply
Computers get better and cheaper. That’s not a forever problem.
reply
Source?

GPU and RAM prices have definitely not made consumer PCs cheaper than they were before bitcoin blew up or before AI blew up.

Maybe you could make an argument that they are more cost-efficient for the price point... but that's not the same as cheaper when every application or program is poorly optimized. For example, why should a browser take up more than a GB or two of RAM?

And I'd postulate that R&D to develop localized AI is another example. The big players seem hell-bent that there needs to be a moat, and that moat is data centers... the absolute opposite of optimization.

reply
Moore's Law.

We've had RAM shocks before. We nerds can't control Wall Street, or the Virginians who like to break the world every so often for the lulz. However, a wobble on the curve doesn't change the curve's destination.

reply
You have to look a bit more long term. 256 MB of what is today slow-af RAM used to be pretty pricey. Prices will pull back.
reply
No killer products... just robots that can do vulnerability analysis at the level of a decent security engineer and write code without tiring.
reply
I've also been using the LLM in PostHog and it has been impressive. I need to check whether I can also plug an MCP/Skill into my actual Claude Code so that I can cross-reference the data from my other data sources (Stripe, local database, access logs, etc.) for in-depth analysis.
reply
This might be up your alley - it has PostHog and a ton of other SaaS tools connected so you can run analysis across quantitative/qualitative data sources: https://dialog.tools
reply
> Coding assistants and LLM's in general are the single most awe-inspiring achievement of humanity in my lifetime

Landing a man on the moon is way more impressive. Finding several vaccines for a once-in-a-century pandemic within a year of its outbreak is an achievement that in its impact and importance dwarfs what the entire LLM industry put together has achieved. The near-complete eradication of polio is, once again, way more important and impactful.

reply
Those are all good things, but with the current AI boom we've invented something with the potential to invent those kinds of things on its own, if not now then in the near future. It's far more important and impactful to invent a digital mind that can invent an arbitrary number of vaccines than to invent just one vaccine, no matter how hard it was to invent that vaccine by hand.
reply
Yeah, painting yourself into a corner at 10x speed is hardly the most awe-inspiring achievement of humanity.
reply