As a real-world example, I was told to evaluate Claude Code and ChatGPT Codex at my current job, since my boss had heard about them and wanted to know what they would mean for our operations. Our main environment is a C# and TypeScript monorepo with two products being developed, and even with a pretty extensive test suite and a nearly 100-line AGENTS.md file, every model I tried basically fails or tries to shortcut nearly every task I give it, even when using "plan mode" to give it time to come up with a plan before starting. To be fair, I was able to get it to work pretty well after giving it extremely detailed instructions, monitoring the "thinking" output, and stopping it to correct it when I saw something wrong there, but at that point I felt silly for spending all that effort just driving the bot instead of doing it myself.
It almost feels like this is some "open secret" which we're all pretending isn't the case too, since if it were really as good as a lot of people are saying there should be a massive increase in the number of high quality projects/products being developed. I don't mean to sound dismissive, but I really do feel like I'm going crazy here.
- driving the LLM instead of doing it yourself. Sometimes I just can't get the activation energy, and the LLM is always ready to go, so it gives me a kickstart
- doing things you normally don't know how to do. I learned a lot of command-line tools and tricks by seeing what Claude does. Doing short scripts for stuff is super useful. Of course, the catch here is that if you don't know stuff you can't drive it very well, so you need to use these things in isolation.
- exploring alternative solutions. Stuff that by definition you don't know. Of course, some will not work, but it widens your horizon
- exploring unfamiliar codebases. It can ingest huge amounts of data so exploration will be faster. (But less comprehensive than if you do it yourself fully)
- maintaining change consistency. This, I think, is where it's just better than humans. If you have stuff you need to change in 2 or 3 places, you will probably forget one. LLMs are better at keeping consistency in the details (but not at big-picture stuff, interestingly.)
I'd previously encountered tools that seemed interesting, but as soon as I tried getting them to run I found myself going down an infinite debugging hole. With an LLM I can usually explain my system's constraints, and the best models will give me a working setup from which I can begin iterating. The funny part is that most of these tools are usually AI-related in some way, but getting a functional environment often felt impossible unless you had really modern hardware.
There is a counter-issue though: realizing mid-session that the model won’t be able to deliver that last 10%, and now you have to either grok a dump of half-finished code or start from scratch.
For example, a lot of pro-OpenAI astroturfing really wanted you to know that 5.3 scored better than Opus on terminal-bench 2.0 this week, and a lot of Anthropic astroturfing likes to claim that all your issues with it will simply go away as soon as you switch to a $200/month plan (as if you can't try Opus on the cheaper one and realise it's definitely not 10x better).
Over the last few months, I have seen a notable difference in the quality and extent of the projects these students have been able to accomplish. Every project and website they show looks polished; most of them could have passed for a full startup MVP in pre-AI days.
The bar has clearly been raised way higher, very fast, with AI.
Once we got them into a technical screening, most fell apart writing code. Our problem was simple: using your preferred programming language, model a shopping cart object that has the ability to add and remove items from the cart and track the cart total.
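For reference, something along the lines of this minimal Python sketch would have been plenty (illustrative only, not the "official" answer we were looking for):

```python
class ShoppingCart:
    """Minimal cart: add/remove items and keep a running total."""

    def __init__(self):
        # name -> (unit_price, quantity)
        self._items = {}

    def add_item(self, name, unit_price, quantity=1):
        _, qty = self._items.get(name, (unit_price, 0))
        self._items[name] = (unit_price, qty + quantity)

    def remove_item(self, name, quantity=1):
        if name not in self._items:
            return
        price, qty = self._items[name]
        if qty <= quantity:
            del self._items[name]
        else:
            self._items[name] = (price, qty - quantity)

    @property
    def total(self):
        return sum(price * qty for price, qty in self._items.values())


cart = ShoppingCart()
cart.add_item("apple", 0.50, quantity=3)
cart.add_item("milk", 2.20)
cart.remove_item("apple")
print(cart.total)  # 3.2
```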
We were shocked by how incapable most candidates were at writing simple code without their IDE's tab-completion. We even told them to use whatever resources they normally used.
The whole experience left us a little surprised.
usually when someone hypes it up it's things like, "i have it text my gf good morning every day!!", or "it analyzed every single document on my computer and wrote me a poem!!"
The headline gain is speed. Almost no-one's talking about quality - they're moving too fast to notice the lack.
That they are so good at the things I like to do the least and still terrible at the things at which I excel. That's just gravy.
But I guess this is in line with how most engineers transition to management sometime in their 30s.
A giant monorepo would be a bad fit for an LLM IMO.
It's the appearance of productivity, not actual productivity.
I used this line for a long time, but you could just as easily say the same thing about a typical engineer. It basically boils down to "Claude likes its tickets to be well thought out". I'm sure there is some size of project where its ability to navigate the codebase starts to break down, but I've fed it sizeable ones, and so long as the scope is constrained it generally just works nowadays.
I have always failed to understand the obsessive dream of many engineers to become managers. It seems not to be merely about an increase in revenue.
Is it to escape from "getting bogged down in the specifics" and being able to "focus on the higher-level, abstract work", to quote OP's words? I thought naively that engineering has always been about dealing with the specifics and the joy of problem-solving. My guess is that the drive is towards power, which is rather natural, if you think about it.
Science and the academic world suffer a comparable plague.
And when you're in an existing company, stuck in thing X, knowing that it's obsolete, and the people doing the latest Y that's hot in the job market are in another department and jealously guard access to Y projects?
How about when you go to interview, and you not ONLY have to know Y, but the Leetcode from 15 years ago?
So maybe I've given you another alternative to 'it has to be power, there's no other rational reason to go into management'.
Here's a gentler one: if you want to build big things, involving many people, you need to be in management.
Do you enjoy bricklaying and calculating angles around doorways? You're the engineer. Do you want to be the architect hiring engineers, working with project managers, and assessing the budget while worrying about approvals? They're different types of work, and it's not about 'power' as you're suggesting. Autonomy and decision-making power are more the kind of 'power' engineers often don't get (unless they are lucky, very very smart, or in a small startup-like environment).
I've gone back and forth across the lead and management lines many times now, and it is career-limiting in many, many ways. But it's too fulfilling to give up. And I swear there is magic in what small, expert groups are able to produce that laps large orgs on the regular.
Some research around British government workers found higher job satisfaction in units with hands-off managers. It resonates with my own career. I’m really excited and want to go to work when I’m on a small, autonomous team with little red tape and politics. Larger orgs simply can’t — or haven’t — ever offered me the same feeling; with some exceptions in Big 3 consulting if I was the expert on a case.
The worst manager is the micromanager - either because he's nervous about his job security, because he doesn't know how to delegate, or because he's been hands-on forever and can't let go.
I don't see why it contradicts my little rant above. Of course I also prefer small, nimble teams with lots of autonomy, with individuals who thrive when delegated only extremely broad tasks. The only part where I think there's a difference is the constant learning.
I love constantly learning. My issue isn't that. It's that I don't want to HAVE to constantly be practicing at home and on the weekend. I did this in my 20s and I can't/won't do this anymore. I just have no time or energy now as an Old.
For myself it is the hands-on work I find most fulfilling unfortunately. I have some sort of brain worm that makes me want to practice all the new things at home/weekend if work isn't letting me. I'm sure it'll burn me out at some point, but to paraphrase a famous creep: I keep getting older, my brainworm stays the same age.
Within my power I try to do that with my directs, making sure new interesting things are cycled in so their CVs become stronger. But me, personally, I've had really bad luck with this. I always had to study on the weekends for something that either isn't used in my company or someone else jealously guards because it's hot on the market.
> only to have it completely obsoleted a few years later
Not really. There aren’t as many fundamentally new ideas in modern tech as it may seem. Web servers have existed for more than 30 years and haven’t changed that much since then. Or e.g., React + Redux is pretty much the same thing as WinProc from WinAPI, invented some time around 1990. Before Docker, there were Solaris Zones and FreeBSD jails. TCP/IP is 50 years old. And many, many other things we perceive as new.
Moreover, I think it’s worth looking back and learning some of the “old tech” for inspiration; there’s a wealth of deep and prescient ideas there. We still don’t have a full modern equivalent of Macromedia Flash, for example.
I can't tell if this is sincere or parody, it is so insufferably wrong. Good troll. I almost bit.
Almost nothing goes obsolete in software; it just becomes unpopular. You can still write every website you see on the Internet with just jQuery. There are perfectly functional HTTP frameworks for Cobol.
These are inherently different levels of power. I'm not sure how your example is supposed to be the opposite when you compare someone laying bricks to someone making hiring and firing decisions about groups of people. Your scenario is fundamentally a power imbalance.
If only the world incentivized ICs with depth of knowledge to stay in those roles for the long haul, instead of cutting off their specialization right at the apex of their depth of knowledge. So many managers have no talent, no depth of knowledge, and only a passable ability to manage people.
It's a skill that takes practice. Coordinating disparate people and groups, creating communication where you notice they're not talking to each other, creating or fixing processes that annoy people or cause chaos if they're not there, encouraging people, being a therapist, seeing what's not there and pushing a vision while you get the group to go along, protecting people from management above and pressures around, etc. are mostly skills that you learn.
Sometimes no one will give you feedback so you have to figure it out yourself (unless you're lucky to get a mentor), so you just have to throw yourself in and give yourself grace to fail and succeed over time.
The only one of these skills that I think is possibly genetic or innate is being able to see the big picture and make strategic decisions. A lot of tech people skew cognitively toward narrow areas and have trouble conceptualizing the world beyond them.
One challenge here is the ubiquitous 'managers just approve vacations and waste space' sentiment on here and in some places. These people are a chore to manage (and sometimes are better not being present in your group).
That sure beats having it completely obsoleted a few weeks later, which sometimes feels like the situation with AI
No, you don’t. You need some kind of decision-making and communication process, but a separate management layer is not necessary.
Do you know what stack ranking even is and where it comes from? If you have to rate your group from 1 to 5, each individual, and you rate them all 4s and 5s, they crack down and force you to select a 2 and a 3 and only have one 5. Now, would you prefer a CFO, CTO, or even a project manager be the one to do it? It's a weird comment.
Again, as an older manager today, I can see myself in my 20s in the resistance and stubbornness to 'how corporations work' espoused in comments like yours. I sympathize, but I warn you against being naive and ideological, because unfortunately human groups be human groups, and organizations for better or for worse behave in predictable patterns. You might as well know as much as possible so you can deal with it better.
Real managers deal with coaching, ownership, feelings, politics, communication, consensus building, etc. The people who are good at it like setting other people up to win.
> I’d think there would be very, very few people who are actively seeking people drama
Theoretically as a manager you get the bump up the power dynamic ladder (and probably pay ladder) because you are taking on the responsibility of "people drama". Being a good manager is antithetical to treating living, breathing human beings as NPCs in a game.
Often too it's the architecture that can cause a grand idea to crash and burn—experienced devs should be moving toward solving those problems.
That can extend to arbitrary absurdity. You are probably not growing your own food, mining your own ore, forging your own tools, etc etc etc.
It's all just a matter of where you rely on external tools/abstractions to do parts of the work you don't want to do yourself.
It's frontier exploration that brings me joy. If a clanker can do something, then it's a solved problem. I use all the tools at my disposal to push the frontier of problems solved. Wasting my time re-inventing the wheel brings me the opposite of joy.
Like I’ve been in situations as an IC where poor leadership from above has literally caused less efficient and more painful day-to-day work. I always hoped I could sway those decisions from my position as an IC, but reality rarely aligned with that hope.
I actually love the details, but I just don’t get too deep into them these days as I don’t want to micro-manage.
I do find I have more say in things my team deals with now that I’m a manager.
But I'm acutely conscious that in the 5+ years that I've been a senior developer, my ability to come up with useful ideas has significantly outstripped the time I have to realize those ideas (and from experience, the same is often true of academics).
At work, I have the choice between remaining hands-on and limiting what I can get done, or acting more like a manager, and having the opportunity to get more done, but only by letting other people do it, in ways that might not reflect my vision. It's pretty frustrating, to be honest.
For side projects, it's worse. Most of them just can't be done, because I don't even have the choice.
I was recently looking for mentors to work with him and advance his skills, targeting college-aged kids / young 20s.
It was surprising to me how many people I came across in this field at this young age that are trying to focus on the "higher level" game planning aspects and not so much on the lower level implementation specifics.
https://www.youtube.com/playlist?list=PLnuhp3Xd9PYTt6svyQPyR...
https://guide.handmadehero.org/hmcon/
I think it's that there is only so much demand for solving really complex problems, and doing the same thing over and over is boring, so management is the only way forward for many people.
You want to write a book about people's deepest motivations. Formative experiences, relationships, desires. Society, expectations, disappointment. Characters need to meet and talk at certain times. The plot needs to make sense.
You bring it to your editor. He finds you forgot to capitalise a proper noun. You also missed an Oxford comma. You used "their" instead of "they're".
He sends you back. You didn't get any feedback about whether it makes sense that the characters did what they did.
You are in hell, you won't hear anything about the structure until you fix your commas.
Eventually someone invents an automatic editor. It fixes all the little grammar and spelling and punctuation issues for you.
Now you can bring the script to an editor who tells you the character needs more development.
You are making progress.
Your only issue is the Luddites who reckon you aren't a real author, because you tend to fail their LeetGrammar tests, calling you a vibe author.
You can't do that from a high level abstract position. You actually need to stand at the coal face and think about it from time to time.
This article encodes an entitled laziness that's destructive to personal skill and quality work.
A few years ago, when Agile was still the hot thing and companies had an Agile "facilitator" or manager for each dev team, the common career path I heard when talking to those people was: "I worked as a Java/COBOL/etc. dev in the past, but it just didn't click with me. I'm more of a people person, you know, so project management is where I really do my best work!".
Yeah, right...
What type of code? What types of tools? What sort of configuration? What messaging app? What projects?
It answers none of these questions.
I'm now using pi (the thing OpenClaw is built on) and within a few days I built a tmux plugin and a semaphore plugin^1, and it has automated the way _I_ used to use Claude.
Where I disagree with OP: the usefulness of persistent memory beyond a single line in AGENTS.md ("If the user says 'next time', update your AGENTS.md"), the use of long-running loops, and the idea that everything can be resolved via chat. That might be true for simple projects, but any original work needs me to design the 'right' approach ~5% of the time.
That's not a lot, but AI lets you create load-bearing tech debt within hours, at which point you're stuck with a lot of shit and you don't know how far it got smeared.
(Not necessarily this specific post).
>Over the past year, I’ve been actively using Claude Code for development. Many people believed AI could already assist with programming—seemingly replacing programmers—but I never felt it brought any revolutionary change to the way I work.
Funny, because just last month, HN was drowning in blog posts saying Claude Code is what enables them to step away from the desk, is definitely going to replace programmers, and lets people code "all through chatting on [their] phone" (being able to code from your phone while sitting on the bus seems to be the magic threshold that makes all the datacenters worth it).
It's like we all fell under the spell of a terminal endlessly printing output as some kind of measurement of progress.
This is an AI generated post likely created by going to chatgpt.com and typing in "write a blogpost hyping up [thing] as the next technological revolution", like most tech blog content seems to be now. None of those things ever existed, the AI made them up to fulfill the request.
To add to this, OpenClaw is incapable of doing anything meaningful. The context management is horrible, the bot constantly forgets basic instructions, and often misconfigures itself to the point of crashing.
edit: love the downvotes. I guess HN really is Reddit now. You can make any accusation without evidence and people are supposed to just believe it. If you call it out you get downvoted.
Besides, if there are enough red flags that make it indistinguishable from actual AI slop, then chances are it's not worth reading anyway and nothing of value was lost by a false positive.
I just give the link to those posts to my AI to read. If it's not worth a human writing it, it's not worth a human reading it.
> This has truly freed up my productivity, letting me pursue so many ideas I couldn’t move forward on before
If you're writing in a blog post that AI has changed your life and let you build so many amazing projects, you should link to the projects. Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.
I've got 10+ years of coding experience, and I am an AI advocate, but not a vibe-coding advocate. AI is a great tool for the boring bits: initializing files, figuring out various approaches, acting as a first-pass code reviewer, helping with configuration. Those things all work well.
But full-on replacing coders? It's not there yet. Will require an order of magnitude more improvement.
I am using them in projects with >100kloc, this is not my experience.
At the moment I am babysitting it at any kloc, but I am sure they will get better and better.
> Somehow 90% of these posts don't actually link to the amazing projects that their author is supposedly building with AI.
You are in the 90%.
I am sure there are ways to get around this sort of wall, but I do think it's currently a thing.
You also need a reasonably modular architecture which isn't incredibly interdependent, because that's hard to reason about, even for humans.
You also need lots and lots (and LOTS) of unit tests to prevent regressions.
Surely it depends on the design. If you have 10 10kloc modular modules with good abstractions, and then a 10k shell gluing them together, you could build much bigger things, no?
Then again, the problem is that the public has learned nothing from the Theranoses and WeWorks, and even more of a problem is that the VC funding works out for most of these hype trains even if they never develop a real business.
The incentives are fucked up. I’d not blame tech enthusiasts for being too enthusiastic
Might as well talk about how AI will invent sentient lizards which will replace our computers with chocolate cake.
Thinking usually happens inside your head.
What is your point?
If you’re trying to say that they should have kept their opinion to themselves, why don’t you do the same?
Edit: tone down the snark
Holy Spiderman what is your point? That if someone says something dumb I can never challenge them nor ask them to substantiate/commit?
> tone down the snark
It's amazing to me that the neutral observation "thinking happens in your head" is snarky. Have you ever heard the phrase "tone police"?
If you spend a couple of years with an LLM really watching and understanding what it’s doing and learning from mistakes, then you can get up the ladder very quickly.
A "basic" understanding in critical domains is extremely dangerous and an LLM will often give you a false sense of security that things are going fine while overlooking potential massive security issues.
All I could think was, "good luck" and I certainly hope their app never processes anything important...
I don't feel like most providers keep a model for more than 2 years. GPT-4o got deprecated in 1.5 years. Are we expecting coding models to stay stable for longer time horizons?
If the person who is liable for the system behavior cannot read/write code (as “all coders have been replaced”), does Anthropic et al become responsible for damages to end users for systems its tools/models build? I assume not.
How do you reconcile this? We have tools that help engineers design and build bridges, but I still wouldn’t want to drive on an “autonomously-generated bridge may contain errors. Use at own risk” because all human structural engineering experts have been replaced.
After asking this question many times in similar threads, I’ve received no substantial response except that “something” will probably resolve this, maybe AI will figure it out
SHOW ME THE MONEY!!!
Maybe they don't feel like sharing yet another half working Javascript Sudoku Solver or yet another half working AI tool no one will ever use?
Probably they feel amazed about what they accomplished but they feel the public won't feel the same.
GPT-5.2 fixed my hanging WiFi driver: https://gist.github.com/lostmsu/a0cdd213676223fc7669726b3a24...
It's a magical moment when someone is able to AI code a solution to a problem that they couldn't fix on their own before.
It doesn't matter whether there are other people who could have fixed this without AI tools, what matters is they were able to get it fixed, and they didn't have to just accept it was broken until someone else fixed it.
Cue the folks saying "well you could DIE!!!" Not if I don't fix brakes, etc ...
This has been a significant aspect of AI use as well. As a result I feel a little less friction with myself, less that I am letting things slip by because, well, because I still want a nice balance of work, life, leisure, etc. I don’t want to overstate things, it’s not a cure-all for any of these things, but it helps a lot.
The only software I've seen designed and implemented by OpenClaw is Moltbook. And I think it is hard to come up with a bigger pile of crap than Moltbook.
If somebody can build something decent with OpenClaw, that would help add some credibility to the OpenClaw story.
For me the pain point has always been with non-IT people/companies. They are way more accustomed to phone or even in-person appointments. They in general have way more of a say than me, the customer.
Can Openclaw make and take phone calls for me to make appointments? Can Openclaw do chores for me? Can Openclaw meet with contractors for me? It can do none of these. It can make notes for me (useless, as most notes are useless). It can scrape websites for me (not very interesting, as why would I want to collect so much knowledge?). It can probably automate anything that already has an endpoint or whatever, but I don’t mind writing code for my own projects. I always failed to understand why anyone would want to let AI write most of the code of their PERSONAL project, unless they want to sell it quickly.
I’m just a frustrated old man I guess.
[0] https://vapi.ai/
> I’m just a frustrated old man I guess.
I think this is a great summary of the failure of vision that a lot of tech people are having right now.
> automate anything that already has an endpoint or whatever
Facebook used to have APIs, Reddit used to have APIs, Amazon used to have APIs.
They are gone.
Enshittification and dark patterns have taken over.
"Hey open claw, cancel service xxx" where XXX is something that is 17 steps and purposely hard to cancel so they keep your money.
What's going to happen when your AI tool can go to a website, strip the ads off, and return you just the text? What happens when it can build a customized news feed that looks less like Facebook and more like HN? Aren't we just gaining back function we lost with the death of RSS?
Consumers are mad about the hype of AI, but the moment it can cut through the bullshit we keep putting in their way, it's going to wreck business MODELS, and the choice will be adapt or die. Start asking your "AI" tools to do all the basic, tedious bullshit tasks that are low risk (you have a ton of them), and if it gets 1/4 of them done you're going to free up a ton of your own time.
They are not able to comprehend that for anything more complicated than that, the code might compile, but the logical errors and failure to implement the specs start piling up.
Grok 4 Fast told me its own internal system prompt has rules against autonomous operation, so that might have something to do with it. I am having decent results with it though.
I tried using LLMs to help debug at different points, but they went in circles on bad ideas, even when I gave them what turned out to be a correct clue.
Root cause turned out to be that IPv6 wasn't enabled for Docker networking, but was enabled for the website's DNS. So people who connected over IPv6 were getting their IPs all converted to the same internal Docker IP before being handed to the per-IP throttling algorithm.
I spotted that there were no IPv6 IPs in the logs, but the LLMs missed that the key pattern was the absence of something expected, instead drawing wrong conclusions.
So no, I'm not about to turn OpenClaw loose on building anything at all complex.
[1] https://reorx.com/blog/rabbit-r1-the-upgraded-replacement-fo...
> OpenClaw gave me the chance to become that super manager [...] A manager shouldn’t get bogged down in the specifics—they should focus on the higher-level, abstract work
These two propositions seem to be highly incompatible
1. It has a lot of files that it loads into its context for each conversation, and it consistently updates them (see the sketch after this list). Plus it stores and can reference each conversation. So there's a sense of continuity over time.
2. It connects to messaging services and other accounts of yours, so again it feels continuous. You can use it on your desktop and then pick up your phone and send it an iMessage.
3. It hooks into a lot of things, so it feels like it has more agency. You could send it a voice message over discord and say "hey remember that conversation about birds? Send an email to Steve and ask him what he thinks about it"
It feels more like a smart assistant that's always around than an app you open to ask questions to.
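I don't know OpenClaw's actual internals, but my mental model of point 1 boils down to something like this generic pattern (the directory and file names here are made up, not OpenClaw's real layout):

```python
from pathlib import Path

MEMORY_DIR = Path("~/.assistant/memory").expanduser()  # made-up location

def build_prompt(user_message: str) -> str:
    """Prepend every persistent memory file to the conversation so each
    new session starts with the accumulated notes."""
    memory = "\n\n".join(
        f"## {f.name}\n{f.read_text()}"
        for f in sorted(MEMORY_DIR.glob("*.md"))
    )
    return f"{memory}\n\nUser: {user_message}"
```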
However, it's worth stressing how terrible the software actually is. Not a single thing I attempted to do worked correctly, important issues (like the discord integration having huge message delays and sometimes dropping messages) get closed because "sorry we have too many issues", and I really got the impression that the whole thing is just a vibe coded pile of garbage. And I don't like to be that critical about an open source project like this, but I think considering the level of hype and the dramatic claims that humans shouldn't be writing code anymore, I think it's worth being clear about.
Ended up deleting it and setting up something much simpler. I installed a little discord relay called kimaki, and that lets me interact with instances of opencode over discord when I want to. I also spent some time setting up persistent files and made sure the llm can update them, although only when I ask it to in this case. That's covered enough of what I liked from OpenClaw to satisfy me.
if one of my friends sent me an obviously AI-written email, I think that I would cease to be friends with them...
Isn’t the “what he thinks about it” part the hardest? Like, that’s what I want to phrase myself - the part of the conversation I’d like to get their opinion on and what exactly my actual request is. Or are people really doing the meme of sending AI text back and forth to each other with none the wiser?
For personal communication between friends it would be horrible. Authenticity has to be one of the things I value most about the people I know. Didn't mean to imply from that example that I did or would communicate that way.
https://github.com/a-n-d-a-i/ULTRON
Well, it's a work in progress, but I have self-upgrading and self-restarting working, and it's already more reliable than Claw ;)
I used the Claude Code SDK (Agents SDK) originally, but then realized I can get the same result by just calling `claude -p the_telegram_message`
The magic sauce being the --continue flag, of course. Bit less useful otherwise.
I haven't figured out how to interrupt it or see what it's doing yet though.
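In case it helps anyone, the whole relay boils down to roughly this sketch (assuming the claude CLI is on your PATH; the Telegram plumbing is left out):

```python
import subprocess

def relay_to_claude(telegram_message: str) -> str:
    """Forward an incoming Telegram message to Claude Code in non-interactive
    mode, continuing the most recent session so context carries over."""
    result = subprocess.run(
        ["claude", "-p", telegram_message, "--continue"],
        capture_output=True,
        text=True,
        timeout=600,
    )
    return result.stdout.strip()
```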
Honestly I'd rather die
> Generally, I believe (Rabbit) R1 has the potential to change the world.
There is a pattern here.
I feel like there's this "secret" hiding behind all these AI tools: that actually it's all very complicated and takes a lot of effort to make work, but the tools we're given hide it all. It's nice that we benefit from their simplicity of use. But hiding complexity leads to unexpected problems, and I'm not sure we've seen any of those yet - other than the massive, gaping security hole.
I don't know about this; or at least, in my experience, that's not what happens with good managers.
I guess the best managers just develop the hunch and know when to do this and when to ask engineers for the smallest details to potentially develop different solutions. You have to be technical enough to do this.
That would be really helpful.
Why isn't Claude doing all that for me, while I code? Why the obsession that we must use code generation, when automating the other garbage activities would free me to do what I'm, on paper, paid to do?
It's less sexy of course; it doesn't have the promise of removing me in the end. But the reason, in the present state, is that IT admins would never accept an LLM handling permissions and rotations, and management would never accept an LLM reporting status or providing estimates. This is all "serious" work where we can't have all the errors LLMs create.
Dev isn't that bad, devs can clean slop and customers can deal with bugs.
Good luck hoping that none from the big money would try to stand between you and someone giving you a service (uber, airbnb, etsy, etc) and get rent from that.
Claude, fix the toilet.
And me ruining my day fighting with a million hooks, specs and custom linters micromanaging Claude Code in the pursuit of beautiful code.
I'm not running OpenClaw, but I've given Claude its own email address and built a polling loop to check email & wake Claude up when I've sent it something. I'm finding a huge improvement from that. Working via email seems to change the Claude dynamic, it feels more like collaborating with a co-worker or freelancer. I can email Claude when I'm out of the house and away from my computer, and it has locked down access to use various tools so it can build some things in reply to my emails.
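For anyone curious, the polling loop is nothing fancy. A stripped-down sketch of the idea (the host, addresses, and hard-coded credential are placeholders, and the real thing locks down which tools Claude gets):

```python
import imaplib
import email
import subprocess
import time

IMAP_HOST = "imap.example.com"       # placeholder
CLAUDE_INBOX = "claude@example.com"  # the address Claude "owns" (placeholder)
ALLOWED_SENDER = "me@example.com"    # only wake Claude for my own mail

def poll_once(password: str) -> None:
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(CLAUDE_INBOX, password)
        imap.select("INBOX")
        # Only unread mail from me triggers a wake-up.
        _, data = imap.search(None, "UNSEEN", f'FROM "{ALLOWED_SENDER}"')
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            body = msg.get_payload(decode=True) or b""
            # Hand the email body to Claude Code non-interactively.
            subprocess.run(["claude", "-p", body.decode(errors="replace")])
            imap.store(num, "+FLAGS", "\\Seen")

while True:
    poll_once(password="app-password-here")  # placeholder credential
    time.sleep(60)  # poll every minute
```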
I've been looking into building out voice memos or an Eleven Labs setup as well, so I can talk to Claude while I'm out exercising, washing dishes etc. Voice memos will be relatively easy but I haven't yet got my head around how to integrate Eleven Labs and work with my local data & tools (I don't want a Claude that's running on Eleven Labs servers).
What made it so popular, I think, is that it made it easy to attach it to whatever "channel" you're comfortable with. The Mac app comes with dictation, but I'm unsure how much setup it takes to get TTS back.
So, OpenClaw has changed his life: It has accelerated the AI psychosis.
I saw on The Verge that they partnered with the company that repeatedly disclosed security vulnerabilities to try to make skills more secure, though, which is interesting: https://openclaw.ai/blog/virustotal-partnership
I’m guessing most of that malware was really obvious, people just weren’t looking, so it’s probably found a lot. But I also suspect it’s essentially impossible to actually reliably find malware in LLM skills by using an LLM.
A Reddit post with white invisible text can hijack your agent to do what an attacker wants. Even a decade or two back, SQL injection attacks required a lot of proficiency from the attacker and prevention strategies from the backend engineer. Compare that with the weak security of so-called AI agents that can be hijacked with random white text in an email, PDF, or Reddit comment.
It cannot. This is the security equivalent of telling it to not make mistakes.
> Restrict downstream tool usage and permissions for each agentic use case
Reasonable, but you have to actually do this and not screw it up.
> Harden the system according to state of the art security
"Draw the rest of the owl"
You're better off treating the system as fundamentally unsecurable, because it is. The only real solution is to never give it untrusted data or access to anything you care about. Which yes, makes it pretty useless.
I have OPA and set policies on each tool I provide at the gateway level. It makes this stuff way easier.
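Concretely, the gateway asks OPA before letting any tool call through. A rough sketch of that check (the policy package name and input fields here are made-up examples, not anything built into OPA):

```python
import requests  # assumes an OPA sidecar listening on localhost:8181

def is_tool_call_allowed(agent: str, tool: str, args: dict) -> bool:
    """Ask OPA whether this agent may invoke this tool with these args.
    The policy path 'gateway/tools/allow' is a hypothetical example."""
    resp = requests.post(
        "http://localhost:8181/v1/data/gateway/tools/allow",
        json={"input": {"agent": agent, "tool": tool, "args": args}},
    )
    resp.raise_for_status()
    return resp.json().get("result", False)

# Gateway-side check before actually executing the tool:
if not is_tool_call_allowed("openclaw", "send_email", {"to": "steve@example.com"}):
    raise PermissionError("Policy denied this tool call")
```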
I never want to be one wayward email away from an AI tool dumping my company's entire slack history into a public github issue.
https://reorx.com/blog/rabbit-r1-the-upgraded-replacement-fo...
I haven't been able to find a good use for myself yet. Almost everything I use an LLM for has some kind of hard human-in-the-loop factor that is as of yet inescapable -- but I also don't really use LLMs for things like "sort my email". It's mostly entirely coding.
So, it appears that we have come a long way bubbling up through abstraction layers: assembly code -> high-level languages -> scripting -> prompting -> openclaw.
It's the endgame.
oh man this is fantastic
Something tells me they never even downloaded OpenClaw before writing this blog post. It’s probably an aspirational vision-board type post their life coach told them to write because they kept talking about OpenClaw during their sessions, and the life coach got tired of their BS.
No desire to be a hater or ignore the possibility of any tech but…yeah…transformative that was not
It's a racket that never ends.
It is a constant lure for products and tools to create the feeling of sensemaking. People want (pejoratively) tools that show visualizations or summaries, without thinking about whether the particular visual/summary artifact is useful, actionable, or accurate!
They (or their devs) are not at fault that some people honestly believe you can't be as productive or consistent without a "thought garden" or whatever.
It only becomes problematic if the “good” thing also indulges in the hubris of influencers because they view it as good marketing. Like when an egg farm leans into “orange yolk”.
Most famously, patio11 makes it a definitive part of his writing style.
I agree it's a terrible use of quotation marks, but it's a widely-used style I've been forced to accept.
> Then OpenClaw came along, and everything changed.
> After a few rounds of practice, I found that I could completely step away from the programming environment and handle an entire project’s development, testing, deployment, launch, and usage—all through chatting on my phone.
So, with Claude Code, you're stuck typing in a chat box. Now, with OpenClaw, you can type in a chat box on your phone? This is exciting and revolutionary.
Even then, the architecture will be horrible unless you chat _a lot_ about it upfront. At some point, it’s easier to just look in the terminal.
What I really wonder is: who the heck is upvoting this slop on Hacker News?
So many wealthy players are invested in the outcome, and the technology for astroturfing (LLMs) can ironically be used to boost itself and further its own development.
Articles like these should be flagged, and typically would be, but they sometimes appear mysteriously flag-proof.
Maybe it's unfair to judge an author's current opinion by their past opinion - but since the piece is ultimately an opinion based on their own experience, I'm going to take it with a giant pile of salt, given that the author's standards for the output of AI tools are vastly different from mine.
The last time I talked to someone about OpenClaw and how it is helping them, they told me it tells them what their calendar has for them today or auto-tweets for them (i.e., non-human spam). The first is as simple as checking your calendar, and the second is blatant spam.
Anyone found some good use cases beyond a better interface for AI code assistance?
Their example use case was for it to read and summarize our Slack alerts channel to let us know if we had any issues by tagging people directly... the Slack channel is populated by our monitoring tools that also page the on-call dev for the week.
The kicker... this guy was the on-call dev that week and had just been ignoring the Slack channel, emails and notifications he was getting!
This should be the opening for every post about the various "innovations" in the space.
Preferably with a subsequent line about the manual process that was worth putting the extra effort into prior to the shiny new thing.
I really can't imagine a better UX than opening my calendar in one click and manually scanning it.
Another frequent theme is "tell me the weather." Once again, Google Home (Alexa or whatever) handles it while I'm still in bed and lets me go longer without staring at a screen.
The spam use-case is probably the best use-case I've seen, as in it truly saves time for an equal or better result, but that means being cool with being a spammer.
I'm not running openclaw itself. I am building a simpler version that I trust and understand a lot more but ostensibly it's just another always on Claude code wrapper.
I can't come up with any other explanation for why there seems to be so many people claiming that AI is changing their life and workflow, as if they have a whole team of junior engineers at their disposal, and yet have really not that much to show for it.
They're so white collar-pilled that they're in utter bliss experiencing a simulation of the peak white collar experience, being a mid-level manager in meetings all day telling others what to do, with nothing tangible coming out of it.
To be specific, for the past year I've been having numerous long conversations about all the books I've read. I talk about what I liked and didn't like, the ideas and plots I found compelling or lame, the characters, the writing styles of authors, the contemporary social context the authors might have been addressing, etc. Every aspect of the books I can think of. Then I ask it for recommendations: given my interests and preferences, suggest new books with literary merit.
ChatGPT just knocks this out of the park, amazing suggestions every time. I've never had so much fun reading as in the past year. It's like having the world's best-read and most patient librarian at your personal disposal.
My experience with plain Claude Code is that I can step back and get an overview of what I'm doing, since I tend to hyperfocus on problems, preventing me from having a simultaneous overview.
It does feel like being a project manager (a role I've partially filled before), having your agency on autopilot, which is still more control than having team members do their thing.
So while it may feel very empowering to be the CEO of your own computer, the question is if it has any CEO-like effect on your work.
Taking it back to Claude Code and feeling like a manager, it certainly does have a real effect for me.
I won't dispute that running a bunch of agents in sync would give you an extension of that effect.
The real test is: Do you invoice accordingly?
I'm waiting for the grift!
Well... no. But I do really like it. It's just an always-on Claude you can chat with in Telegram, that tries to keep context, that has access to a ton of stuff, and it can schedule wakeup times for itself.
Yesterday, I saw a demo of a product similar to OpenClaw. It can organize your files and directories and works really great (until it doesn't, of course). But don't worry, you surely have a backup and need to test the restore function anyway. /s
Edit:
So far, I haven’t found a practical use case for this. To become truly useful, it would need access to certain resources or data that I’m not comfortable sharing with it.
Even so, I still believe the Rabbit has its merits. This does not conflict with my view that OpenClaw is what is truly useful to me.
> R1 is definitely an upgraded replacement for smartphones. It’s versatile and fulfills all everyday requirements, with an interaction style akin to talking to a human.
You seemed pretty certain about how the product worked!
We're allowed to have opinions about promises that turn out not to be true.
If the rabbit had been what it claimed it would be, it would have been an obvious upgrade for me, at least.
I just want a voice-first interface.
The most charitable thing you can say about this is they're naive, ignorant of the history of vapourware 'demoed' at trade shows.
> Today, Rabbit R1 has been released, and I view it as a milestone in the evolution of our digital organ.
You viewed it as a “milestone in the evolution of our digital organ” without you, let alone anyone else, having even tested it?
Yet you say ”That article was written when the Rabbit R1 presentation video was first released, I saw it and immediately reflect my thoughts on my blog.”?
Yes I think it is
And one-sided media does as well. Or do you expect Fox News to publish an unbiased report next?
> Generally, I believe [Rabbit] R1 has the potential to change the world. This is a thought that seldom comes to my mind, as I have seen numerous new technologies and inventions. However, R1 is different; it’s not just another device to please a certain niche. It’s meticulously designed to serve one significant goal for all people: to improve lifestyle in the digital world.
I hope at some point there will be medical research into this hysteria.
There's not a single real example, and it even has all the em-dashes intact.
Agents work but still mostly produce slop.
And 99% of those AI-created "amazing projects" are going to be dead or meaningless in due time, sooner rather than later. Wasted energy and water, not to mention the author's lifetime.
Poe's law strikes... I can't tell if this is satire.
If you delegate these tasks to OpenClaw, I am not really sure the result is exactly what you want to achieve and it works like you want it to.
Also, Codex isn't a model, so you don't even understand the basics.
And you spent "several hours" on it? I wish I could pick up useful skills by flailing around for a few hours. You'll need to put more effort into learning how to use CLI agents effectively.
Start with understanding what Codex is, what models it has available, and which one is the most recent and most capable for your usage.
Press [Space] to skip
getting sick of this fluff stuff
For the impatient, here's a transcript summary (from Gemini):
The speaker describes creating a "virtual employee" (dubbed a "replicant") running on a local server with unrestricted, authenticated access to a real productivity stack—including Gmail, Notion, Slack, and WhatsApp. Tasked with podcast production, the agent autonomously researched guests, "vibe coded" its own custom CRM to manage data, sent email invitations, and maintained a work log on a shared calendar. The experiment highlights the agent's ability to build its own internal tools to solve problems and interact with humans via email and LinkedIn without being detected as AI.
He ultimately concludes that for some roles, OpenClaw can do 90%+ of the work autonomously. Jason controversially mentions buying Macs to run Kimi 2.5 locally so they can save on costs. Others argue that hosting an open model on inference optimized hardware in the cloud is a better option, but doing so requires sharing potentially sensitive data.