This weekend I vibe coded (don't shoot me) a homelab platform that hosts a bunch of useful services on a Mac mini and lets me deploy my own apps on top of it. Using Tailscale I can access the apps from my phone. I have multiple users with their own SSO to control access. I even have a Pi as part of the network that hosts public-facing content. All done with Claude Code and OpenClaw (as a kind of devops tool)... hardly any code written by me. It's been a seriously fun experiment that I'll try to progress somehow... if only because I love the dream of "digital sovereignty", even if the reality is it's unlikely to happen again. It got me thinking, though: if I could get inference hardware and a good enough open LLM to work with my setup, it might just be possible. The OP advocates a form of basic computing that is understandable, but when we're able to host our own LLMs we could end up in a very different but more capable paradigm.
The homelab repo, for anyone who's interested: https://github.com/briancunningham6/homelab
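For anyone curious about the Tailscale part of the setup above: a minimal sketch of how a local service on the Mac mini can be reached from a phone on the same tailnet, and how the Pi's public-facing piece could work via Funnel. The port (3000) and the app behind it are hypothetical, and this assumes Tailscale is already installed and the machine is authenticated to a tailnet.

```shell
# Sketch only: assumes Tailscale is installed and this machine is already
# logged in to a tailnet. Port 3000 is a placeholder for whatever app you run.

# Join the tailnet (no-op if already connected)
sudo tailscale up

# Proxy the local app over HTTPS to other devices on the tailnet
# (this is what makes it reachable from a phone running Tailscale)
sudo tailscale serve --bg 3000

# Optionally expose it to the public internet via Tailscale Funnel
# (roughly the role the Pi plays for public-facing content)
sudo tailscale funnel --bg 3000

# Inspect what is currently being served
tailscale serve status
```

Funnel has to be enabled for the tailnet in the Tailscale admin console before the `funnel` command will work.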
Even this is hard. Most people don't know what they want, and/or they don't know how to describe or imagine it. They don't even know what a trend graph is.
They just want someone else to do the mental effort of creating a nice product. Hence iOS > Android for most people. They don't want to customise basically anything other than colours.
That's why I predict Lovable/Replit etc. will not go mainstream. And why ChatGPT will mainly just offer you its own UIs. Artifacts weren't a big hit.
...And how much brainpower goes into understanding what people like this are getting at when they speak about things. There's a lot of context and human element to this; I'm skeptical AI will be any good at it in the near future.
Does it deploy it as well?