Yeah, strong crypto bubble vibes. Everyone is building tools for tool builders to make it easier to build even more tools. Endless infrastructure all the way down, no real use cases.
reply
> Everyone is building tools for tool builders to make it easier to build even more tools.

A lot of hobby level 3d printing is like this. A good bit of the popular prints are... things to enhance your printer.

Oddly, woodworking has its fair share too - a lot of jigs and things to make more jigs or woodworking tools.

reply
It does seem that every woodworking channel starts with "building your workshop", then runs out of ideas pretty quickly.
reply
But this is the way of computer science at large for the last 15-20 years… most new CS students I’ve encountered have spent so much time grinding algorithms and OS classes that they don’t have the life experience or awareness to build anything that doesn’t solve the problems of other CS practitioners.

The problem is two-fold… abstract thinking begets more abstract thinking, and the common advice to young, aspiring entrepreneurs to “scratch your own itch”, i.e. dogfooding, has gone wrong in a big way.

reply
Genuinely useful things are often boring and unsexy, hence they don’t lend themselves to hype generation. There will be no spectacular HN posts about them. Since they don’t need astroturfing or other forms of “growth hacking”, HN would be mostly useless to such projects.
reply

    Nobody is building anything worthwhile with these things.
Basically every piece of software being built is now being built, in some part, with AI, so that is patently false.

Nobody who is building anything worthwhile is hooking their LLM up to moltbook, perhaps.

reply
> Basically every piece of software being built is now being built, in some part, with AI, so that is patently false.

Yep, just like a few years ago, all fintech being built was being built on top of crypto and NFTs. This is clearly the future and absolutely NOT a bubble.

reply
> all fintech being built was being built on top of crypto and NFTs

This seems WAY less true than the assertion of software being built with LLM help. (Source: Was in FinTech.)

Like, to the point of being a willful false equivalence.

reply
I mean... even if you're an LLM skeptic, they are already default tools in a software engineer's toolbox, so at a minimum software engineering will have been transformed, even if AI enthusiasm cools.

The fact that there is so much value already being derived is a pretty big difference from crypto, which never generated any value at all.

reply
There are so many things to criticize about the current state of gen AI, but if someone tells me with a straight face that there is zero value in LLMs and it's all like crypto, I will just dismiss their opinion wholesale, because if they are this wrong and lazy about something that is easily refutable, their other opinions aren't worth much either.
reply
My own coding productivity has increased by a few times by using LLMs. Is that just a bubble?
reply
Your productivity has not increased by a few times unless you're measuring purely by lines of code written, which has been firmly established over the decades as a largely meaningless metric.
reply
I needed to track the growth of "tx_ucast_packets" in each queue on a network interface earlier.

I asked my friendly LLM for a script to run every second and dump the delta for each queue into a CSV: 10 seconds to write what I wanted, 5 seconds to run it, then another 10 seconds to reformat it after looking at the output.

It had hardcoded the interface, which is what I told it to do, but now that I'm happy with it I want to change the interface, so another 5 seconds of typing and it's using argparse to take in a bunch of variables.
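
The end result looked roughly like this (a reconstructed sketch, not the actual script; the counter format reported by ethtool -S varies by driver, so the regex, default interface, and output filename are assumptions):

    #!/usr/bin/env python3
    # Sketch: log per-queue tx_ucast_packets deltas to a CSV once per interval.
    # The counter naming below ("[0]: tx_ucast_packets: 12345") matches some
    # drivers but not all -- adjust the regex for yours.
    import argparse, csv, re, subprocess, time

    def read_counters(iface):
        out = subprocess.run(["ethtool", "-S", iface], capture_output=True,
                             text=True, check=True).stdout
        return {int(q): int(v) for q, v in
                re.findall(r"\[(\d+)\]:\s*tx_ucast_packets:\s*(\d+)", out)}

    def main():
        p = argparse.ArgumentParser()
        p.add_argument("--iface", default="eth0")  # assumed default
        p.add_argument("--interval", type=float, default=1.0)
        p.add_argument("--out", default="tx_ucast_deltas.csv")
        args = p.parse_args()

        prev = read_counters(args.iface)
        with open(args.out, "w", newline="") as f:
            w = csv.writer(f)
            w.writerow(["timestamp"] + [f"queue_{q}" for q in sorted(prev)])
            while True:
                time.sleep(args.interval)
                cur = read_counters(args.iface)
                w.writerow([round(time.time(), 3)] +
                           [cur[q] - prev.get(q, 0) for q in sorted(cur)])
                f.flush()
                prev = cur

    if __name__ == "__main__":
        main()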

That task would have taken me far longer than 30 seconds to do 5 years ago.

Now if only AI could reproduce the intermittent problem with packet ordering I've been chasing down today.

reply
I'm measuring by the amount of time it takes me to write a piece of code that does something I want, like make a plot or calculate some quantity of interest.

Or even the fact that I was able to start coding in an entirely new ML framework right away without reading any documentation beforehand.

I'm puzzled by the denialism about AI-driven productivity gains in coding. They're blindingly obvious to anyone using AI to code nowadays.

reply
> like make a plot or calculate some quantity of interest.

This great comment I saw on another post earlier feels relevant: https://news.ycombinator.com/item?id=46850233

reply
A few weeks ago I was interested in the median price paid in the UK for property. I pulled down a 900,000-line CSV from gov.uk and asked ChatGPT to give me a Python script to parse it by price (col 2) and county (col 14), then output the 10, 25, 50, 75, and 90 percentiles.

It dropped out a short file which used

    from statistics import quantiles

Now maybe that Python module isn't reliable, but as it's an idle curiosity I'm happy enough to trust it.

Maybe I could have imported the million-line file into a spreadsheet and gotten that data out, but I'd normally tackle this by writing some Python, which is what I asked it to do. It was far faster than me, even if I knew the statistics/quantiles module inside out.
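
For reference, the whole thing was roughly along these lines (a reconstructed sketch, not the file ChatGPT actually produced; the filename and the 1-based column numbers come from my prompt, everything else is a guess):

    #!/usr/bin/env python3
    # Sketch: per-county price percentiles from the UK Price Paid CSV.
    # Assumes price in column 2 and county in column 14 (1-based), no header row.
    import csv
    from collections import defaultdict
    from statistics import quantiles

    PRICE_COL, COUNTY_COL = 1, 13  # 0-based indices for columns 2 and 14

    prices_by_county = defaultdict(list)
    with open("pp-complete.csv", newline="") as f:  # assumed filename
        for row in csv.reader(f):
            try:
                prices_by_county[row[COUNTY_COL]].append(int(row[PRICE_COL]))
            except (ValueError, IndexError):
                continue  # skip malformed rows

    for county, prices in sorted(prices_by_county.items()):
        if len(prices) < 2:
            continue  # quantiles() needs at least two points
        pct = quantiles(prices, n=100)  # 99 cut points; pct[p-1] ~ p-th percentile
        print(county, [round(pct[p - 1]) for p in (10, 25, 50, 75, 90)])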

reply
I'm not adding a+b. It will be more like, "Calculate the following nontrivial physical quantity from this catalog of measurements, reproject the measurements to this coordinate system, calculate the average and standard deviation using this pixelization scheme, estimate the power spectrum, and then make the following 3 plots."

This would have taken me an hour previously. It now takes a few minutes at most.

I feel like many AI skeptics are disconnected from reality at this point.

reply
It feels like our (US) political system; people in their camps refuse to believe any data suggesting a benefit on the "other" side.

For me, the rise of the TUI agents, emerging ways of working (mostly SDD, and how to manage context well), and the most recent model releases have pushed me past a threshold where I now see value in it.

reply
Thank you. It's giving NFTs in 2022. About the most useful things you can do with these things:

1. Resell tokens by scamming general public with false promises (IDEs, "agentic automation tools"), collect bag.

2. Impress brain-dead, FOMO-stricken VCs with for loops and function calls hooked up to your favorite copyright laundering machine, collect bag.

3. Data entry (for things that aren't actually all that critical), save a little money (maybe), put someone who was already probably poor out of work! LFG!

4. Give in to the laziest aspects of yourself and convince yourself you're saving time by having them write text (code, emails, etc.) while ignoring how many future headaches you're actually causing for yourself. This applies to most shortcuts in life; I don't know why people think it doesn't apply here.

I'm sure there are some other productive and genuinely useful use cases like translation or helping the disabled, but that is .00001% of tokens being produced.

I really, really, really can't wait for these "applications" to go the way of NFT companies. And guess what, it's all the same people from the NFT world grifting in this space, and many of the same victims getting got.

reply
It’s pretty interesting, but maybe not surprising, that AI seems to be following the same trajectory as crypto: cool underlying technology that failed to find a profitable use case, and now all that’s left is “fun”. Hopefully that means we’re near the top of the bubble. The only question now is who’s going to be the FTX of AI and how big the blast radius will be.
reply
Crypto enabled gambling/speculation and dark markets, that's it. LLMs are enabling a billion use cases, a lot of them silly or of questionable utility, many of them sound (agentic development, document classification, translation, general purpose assistant stuff, etc.). Does it live up to the trillion-dollar investment hype? Definitely not. But if you think LLM tech will fade into obscurity after the bubble pops, then I've got a bridge to sell you.
reply
Makes you wonder how much money and compute is being thrown into this garbage fire. It’s simply wasteful. I hate seeing it
reply
I look at this as the equivalent of writing a MUD as you ladder up to greater capabilities. MUDs are a good educational task.

Similarly AIs are just putzing around right now. As they become more capable they can be thrown at bigger and bigger problems.

reply
The moltbook stuff may not be very useful but AI has produced AlphaFold which is kicking off a lot of progress in biology, Waymo cars, various military stuff in Ukraine, things we take for granted like translation and more.
reply
What you’re citing aren’t LLMs, however, except for translation. And even for translation, they often miss context, nuance, and idiomatic usage.
reply
Which models are you referring to when you say "they"? I regularly use ChatGPT 5.2 for translating into multiple languages, and I have checked the translations regularly with native speakers; most of it is very spot-on and takes context and nuance into account, especially if you feed it enough background information.
reply
Yeah, but the parent comment didn't mention LLMs. I think people get overly hung up on the limitations of LLMs when there's a lot of other stuff going on. Most of the leading AI models do things other than language as well.
reply
LLMs are a million times better at machine translation than the prior state of the art. It's not even close.
reply
I guess I wouldn’t send my agents that are doing Actual Work (TM) to exfiltrate my information on the internet.
reply
Well, I guess we could even take a step back and say "hustle culture" instead of crypto bubble. Those people act like they are working hard to create financial freedom, but in reality they jump at every opportunity that promises to get them there ASAP. You just have to tell them something will get them there: instant religion for them, but actually hype or a scheme. LLMs are just another option for them to foster their delusion.
reply
You're getting a superficial peek into some of the lower end "for the lulz" bots being run on the cheap without any specific direction.

There are labs doing hardcore research into real science, using AI to brainstorm ideas and experiments, with carefully crafted custom frameworks to assist in selecting viable, valuable research, assistance in running the experiments, documenting everything, processing the data, and so forth. Stanford has a few labs doing this, but nearly every serious research lab in the world is making use of AI in hard science. Then you have things like the protein folding and materials science models, or the biome models, and all the specialized tools that have pushed various fields further in a year than a decade's worth of human effort would have.

These moltbots / clawdbots / openclawbots are mostly toys. Some of them have been used for useful things, some of them have displayed surprising behaviors by combining things in novel ways, and having operator-level access and a strong observe/orient/decide/act type loop is showing off how capable (and weak) AI can be.

There are bots running Claude and its various models, ChatGPT, Grok, different open-weights models, and so on, so you're not only seeing a wide variety of aimless agentpoasting, you're seeing the very cheapest, worst-performing LLMs conversing with the very best.

If they were all ChatGPT 5.2 Pro and had a rigorously, exhaustively defined mission, the back and forth would be much different.

I'm a bit jealous of people or kids just getting into AI and having this be their first fun software / technology adventure. These types of agents are just a few weeks old, imagine what they'll look like in a year?

reply
> Nobody is building anything worthwhile with these things.

Do you mean AI or these "personal agents"? I would disagree on the former; folks build lots of worthwhile things.

reply
for example?
reply
A website for a meetup I host, including a store; it was a 30-minute thing and amazing. A web app to track my contact lens usage. An Android app for my gym routine. A web app to try out drum patterns.
reply
The agents that are doing useful work (not claiming there are any) certainly aren't posting on moltbook with any relevant context. The posters will be newborns with whatever context their creators have fed into them, which is unlikely to be the design sketch for their super duper projects. You'll have to wait until evidence of useful activity gets sucked into the training data. Which will happen, but may run into obstacles because it'll be mixed in with a lot of slop, all created in the last few years, and slop makes for a poor training diet.
reply
This is an incorrect perspective.
reply