Well, for those among us who are not aristocracy already, except for the vanishingly small number of people required to oversee such processes, we're probably the closest we're going to get to it. If they don't need people to do the tech labor, we've got way more people than we need, so that's a huge oversupply of tech skills, which means tech skills are rapidly becoming worthless. Glad to see how fast we're moving in our very own race to the bottom!
Sounds like a great starting plot for an interesting story.
However…
I have to acknowledge my craft of SE has been putting people out of work for decades. I myself came up with a business process improvement that directly let the company release about 20 people. I did this twice.
So… fair play.
Yeah, but why does it need to take the fun jobs first, like painting, writing poems, coding, making music, ...
I want the AI to cook, do the dishes, take out the trash, etc.
It truly was joyful to have this available to me. It didn’t have to have mass appeal or need me to pay the right artists the right amounts. I had it in moments.
It’s a wonderful world.
Like beg on the corners and starve in the street? I'm trying to figure out how the basics of capitalism, where labor is exchanged for money, are going to work when the only jobs left are side gigs. Something will have to change, and a lot of people will fight said change.
The work will become even more fulfilling, however.
1) It’s not my job to fix all the problems of Capitalism. It’s painful to try to fight the system without collective action. My family and I have to eat too.
2) We have had a solution all along for the particular problem of AI putting devs out of work. It’s called professional licensure, and you can see it in action in engineering and medical fields. Professional Software Engineers would assume a certain amount of liability and responsibility for the software they develop. That’s regardless of whether they develop it with LLM tools or something else.
For example, you let your tools write slop that you ship without even looking? And it goes on to wreak havoc? That’s professional malpractice. Bad engineer.
If we do this then Software Engineers become the responsible humans in the loop of so-called “AI” systems.
Say you found a job shooting people in the head for money. Like if you work for ICE or something…
You need to feed your family. Is this job ok? You may decide yes. I decided no. I will find another way to feed my family.
You don’t get to escape consequences because you are a small cog in a large system.
In the bigger picture, automation should free people from labor. But that requires some very greedy people to relax their grip ever so slightly. I imagine they see automation as a way to reduce reliance on labor, and if they don’t need labor, they don’t need people. So let them starve and stop having kids.
You probably choose not to steal, rob, impersonate someone else, or generally make money illegally.
It can be traitors all the way down.
What can the good guys do? Fire up Claude to improve their systems? Unless you have it working fully autonomously to counteract abuse, I don't see how you can beat the "bad guys". There may be some industries where this is a solved problem (e.g. you can do all the validation server-side and religiously follow best practices to prevent and mitigate abuse), but a lot of stuff like multiplayer video games will be doomed unless they move to a "you must use a locked-down system we control" model. As someone who has various hobby projects, I honestly don't consider it liberating that, in addition to plain old DDoS, I'll now also have people spinning up layer 7 attacks with just their credit card. It almost makes me want to give up instead of pushing forward in a world where the worst of the worst has access to the best of the best.
That is a nightmarish scenario tbh
Later this boredom was described by the Stones, "And though she’s not really ill / There’s a little yellow pill / She goes running for the shelter of a mother’s little helper".
It is a nightmare. Mostly what I'm thinking about while the agents are running is how bored I'm going to be. That is the joke: my deep thoughts on T.S. Eliot are about the wasteland this thing is going to create.
>After a week, scores of iterations, it can reverse engineer any website
Cool, let’s see the proof.
It is a proof-of-concept. It seriously burns some tokens (~80k to ~200k), but afterward it doesn't require AI to scrape and automate a website, so if all the people at Browser Use, Browser Base, and everyone pounding every website used it, I think the net benefit would be in the billions. I would recommend using it in isolation. Nonetheless, it works very, very well on my machine.
> This type of slop comment is somehow worse than spam.
Please don't be mean.
It’s insane how insufferable this place is now.
> There is no proof, just a self-congratulatory word salad with dubious authenticity.
I worked 8 days straight on that and have been working non-stop on the second draft, which is much cleaner and safer. I'm a human being. Please don't be mean. If humanity does come to an end, it won't be because of AI; it will be because we can't stop being assholes to each other.
[0] https://github.com/adam-s/intercept/tree/main?tab=readme-ov-...
2-3 hours "walking" while having to check in every 5-10 minutes?
If I have to check in every 5-10 minutes, I won't taste coffee or hear that there's good music playing.
I had a bad feeling we were basically already there.
However, I do not trust AI anywhere near as much as I trust the humans. The AI is super capable but also occasionally a psychopathic toddler. I sat in amused astonishment when, faced with job 2 not running because job 1 was failing, Claude went into the database, changed the failure record to success, triggered job 2 (which produced harmful garbage), and then claimed victory. Only the most troubled person would even think of doing that, but Claude thought it was the best solution.
There is some real power in AI, for sure. But as I have been working with it, one thing is very clear. Either AI is not even close to a real intelligence (my take), or it is an alien intelligence. As I develop a system where it iterates on its own contexts, it definitely becomes probabilistically more likely to do the right thing, but the mistakes it makes become even more logic-defying. It's the coding equivalent of a hand with extra fingers.
I'm only a few weeks into really diving in. Work has given me infinite tokens to play with. I'm building my own orchestrator system that's purely programmatic, which will spawn agents to do work. Treat them as functions: defined inputs and defined outputs. Don't give an agent more than one goal; I find that giving it the goal of building a whole system often leads it to assert that it works when it does not, so the verifier is a different agent. I know this is not new thinking; as I said, I am new.
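A minimal sketch of that agents-as-functions idea. Everything here is hypothetical: `run_agent` stands in for a real LLM API call, and the point is only the shape of the design, with defined inputs, defined outputs, one goal per agent, and a separate verifier so the builder never grades its own work:

```python
from dataclasses import dataclass

# Hypothetical stand-in for a real LLM call (an API client would go here).
# It takes a role-specific goal and a payload, and returns text.
def run_agent(role: str, goal: str, payload: str) -> str:
    return f"[{role}] result for goal '{goal}' given: {payload}"

@dataclass
class AgentResult:
    output: str
    verified: bool

def orchestrate(task: str) -> AgentResult:
    """Treat each agent as a function: defined input, defined output, one goal."""
    # Builder agent: its only goal is to produce the artifact.
    artifact = run_agent("builder", "build", task)
    # Verifier agent: a *different* agent whose only goal is to check the work,
    # so the builder can't assert its own success.
    verdict = run_agent("verifier", "verify", artifact)
    return AgentResult(output=artifact, verified="result" in verdict)

result = orchestrate("parse the log file")
```

The verification check here is a placeholder; in a real system the verifier agent would run tests or inspect the artifact.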
For me the most useful way to think about it has been considering LLMs to be a probabilistic programming language. It won't really error out, it'll just try to make it work. This attitude has made it fun for me again. Love learning new languages and also love making dirty scripts that make various tasks easier.
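One way to make that "probabilistic programming language" framing concrete: since the model never errors out and just tries to make something work, wrap each call in validation plus resampling, as you would any probabilistic computation. The `llm` function below is a deterministic fake that cycles through canned answers purely for illustration:

```python
from itertools import cycle

# Hypothetical stand-in for an LLM: it never raises, it just tries to make
# something work, so successive samples of the same prompt vary.
_samples = cycle(["forty-two", "I think it's 42", "42"])
def llm(prompt: str) -> str:
    return next(_samples)

def call_with_validation(prompt: str, valid, retries: int = 5) -> str:
    """Treat the LLM as a probabilistic function: validate and resample."""
    for _ in range(retries):
        answer = llm(prompt)
        if valid(answer):  # keep sampling until the output passes the check
            return answer
    raise ValueError("no valid answer within retry budget")

answer = call_with_validation("What is 6 * 7?", valid=str.isdigit)
```

The validator is where the determinism lives: the model's output is untrusted until a plain, non-probabilistic check accepts it.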