A lot of hobby-level 3D printing is like this. A good bit of the popular prints are... things to enhance your printer.
Oddly, woodworking has its fair share too - a lot of jigs and things to make more jigs or woodworking tools.
The problem is two-fold: abstract thinking begets more abstract thinking, and the common advice to young, aspiring entrepreneurs to "scratch your own itch" (i.e., dogfooding) has gone wrong in a big way.
Nobody is building anything worthwhile with these things.
Basically every piece of software being built is now being built, in some part, with AI, so that is patently false. Nobody who is building anything worthwhile is hooking their LLM up to moltbook, perhaps.
Yep, just like a few years ago, all fintech being built was being built on top of crypto and NFTs. This is clearly the future and absolutely NOT a bubble.
This seems WAY less true than the assertion of software being built with LLM help. (Source: Was in FinTech.)
Like, to the point of being a willful false equivalence.
The fact that there is so much value already being derived is a pretty big difference from crypto, which never generated any value at all.
I asked my friendly LLM for a script to run every second and dump the delta for each queue into a CSV: 10 seconds to write what I wanted, 5 seconds later it was running, then another 10 seconds to reformat the output after looking at it.
It had hardcoded the interface, which is what I told it to do, but once I was happy with it and wanted to change the interface, another 5 seconds of typing and it's using argparse to take in a bunch of variables.
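Something in the spirit of what it ended up producing might look like this (a rough sketch, not the actual script: the flag names are invented and the queue reader is faked so it runs standalone):

    import argparse
    import csv
    import random
    import time

    def read_queue_depths(queues):
        # Stand-in for the real collector: the actual script read live
        # per-queue counters; faked here so the sketch runs on its own.
        return {q: random.randint(0, 1000) for q in queues}

    def main():
        parser = argparse.ArgumentParser(description="dump per-queue deltas to CSV")
        parser.add_argument("--queues", nargs="+", default=["rx0", "tx0"])
        parser.add_argument("--interval", type=float, default=1.0)
        parser.add_argument("--out", default="deltas.csv")
        args = parser.parse_args()

        prev = read_queue_depths(args.queues)
        with open(args.out, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp"] + args.queues)
            while True:  # Ctrl-C to stop
                time.sleep(args.interval)
                cur = read_queue_depths(args.queues)
                writer.writerow([round(time.time(), 3)]
                                + [cur[q] - prev[q] for q in args.queues])
                f.flush()
                prev = cur

    if __name__ == "__main__":
        main()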
That task would have taken me far longer than 30 seconds to do 5 years ago.
Now if only AI could reproduce the intermittent packet-ordering problem I've been chasing down today.
Or even the fact that I was able to start coding in an entirely new ML framework right away without reading any documentation beforehand.
I'm puzzled by the denialism about AI-driven productivity gains in coding. They're blindingly obvious to anyone using AI to code nowadays.
This great comment I saw on another post earlier feels relevant: https://news.ycombinator.com/item?id=46850233
It dropped a short file which used

    from statistics import quantiles
Now maybe that Python module isn't reliable, but as it's an idle curiosity I'm happy enough to trust it.
Now maybe I could import a million-line spreadsheet and get that data out, but I'd normally tackle this by writing some Python, which is what I asked it to do. It was far faster than me, even if I knew statistics.quantiles inside out.
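For the curious, that function has been in the standard library since Python 3.8 and does what you'd hope (a toy example with made-up numbers):

    from statistics import quantiles

    # quantiles() defaults to quartiles (n=4) and returns the n-1 cut
    # points, so the middle value is the median.
    data = [12, 13, 13, 14, 14, 15, 15, 16, 90, 250]
    print(quantiles(data))        # [13.0, 14.5, 34.5]
    print(quantiles(data, n=10))  # deciles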
This would have taken me an hour previously. It now takes a few minutes at most.
I feel like many AI skeptics are disconnected from reality at this point.
For me, the rise of TUI agents, emerging ways of working (mostly SDD, and learning to manage context well), and the most recent model releases have pushed me past a threshold where I now see value in it.
1. Resell tokens by scamming the general public with false promises (IDEs, "agentic automation tools"), collect bag.
2. Impress brain-dead VCs suffering from FOMO with for loops and function calls hooked up to your favorite copyright-laundering machine, collect bag.
3. Data entry (for things that aren't actually all that critical), save a little money (maybe), put someone who was already probably poor out of work! LFG!
4. Give in to the laziest aspects of yourself and convince yourself you're saving time by having them write text (code, emails, etc.) while ignoring how many future headaches you're actually causing for yourself. This applies to most shortcuts in life; I don't know why people think it doesn't apply here.
I'm sure there are some other productive and genuinely useful use cases like translation or helping the disabled, but that is .00001% of tokens being produced.
I really really really can't wait for these "applications" to go the way of NFT companies. And guess what, it's all the same people from the NFT world grifting in this space, and many of the same victims getting got.
Similarly, AIs are just putzing around right now. As they become more capable, they can be thrown at bigger and bigger problems.
There are labs doing hardcore research into real science: using AI to brainstorm ideas and experiments, carefully crafted custom frameworks to assist in selecting viable, valuable research, assistance in running the experiments, documenting everything, processing the data, and so forth. Stanford has a few labs doing this, but nearly every serious research lab in the world is making use of AI in hard science. Then you have things like the protein-folding and materials-science models, or the biome models, and all the specialized tools that have advanced various fields through more than a decade's worth of human effort inside of a year.
These moltbots / clawdbots / openclawbots are mostly toys. Some of them have been used for useful things, some have displayed surprising behaviors by combining things in novel ways, and having operator-level access and a strong observe/orient/decide/act loop is showing off how capable (and weak) AI can be.
There are bots with Claude and its various models, ChatGPT, Grok, different open-weights models, and so on, so you're not only seeing a wide variety of aimless agentpoasting, you're seeing the very cheapest, worst-performing LLMs conversing with the very best.
If they were all ChatGPT 5.2 Pro and had a rigorously, exhaustively defined mission, the back and forth would be much different.
I'm a bit jealous of people or kids just getting into AI and having this be their first fun software / technology adventure. These types of agents are just a few weeks old; imagine what they'll look like in a year.
Do you mean AI or these "personal agents"? I would disagree on the former; folks build lots of worthwhile things.