(blog.matthewbrunelle.com)
It doesn't help that the marketing leans heavily on anthropomorphizing LLMs either, IMHO.
It also doesn't help that translation tools/AI models will naturally translate "il" after "Claude Code" as "he", since Claude is an actual person's name.
Using "AI model" instead is translated to "it" by all tools/AI models I tried.
People often make mistakes with that; in the same way, we don't have "they" as a pronoun for someone whose gender we don't know, so we refer to those people as "dele(dela)" (masculine and feminine pronouns).
But if this is coming from someone who has English as a primary language, it's definitely weird to treat models as a person.
Like how in English you’d say “it helps me …” but in Spanish just “me ayuda …”
Source: am Dutch. Can’t wait for us to just ditch gendered nouns.
In the Canadian French dialect all the swear words are incredibly versatile and church-related such as "osti" which I believe refers to the Eucharist.
It just so happens that for nouns beginning with a vowel, you drop the e or the a from le/la and use an apostrophe.
So if you don't know if it's "le porte" or "la porte" you can use my favorite trick which is to shove osti in there and say "l'osti de porte" which roughly translates to "the goddamn door". You can do this for any noun in French, and Canadian French speakers will get it, though people from France will make fun of you.
Signed: a Québécois
So that's another, maybe more harmless reason for it.
I don't know what's jarring about talking about the chatbot like that.
It may be creepier if you said "she wrote that program for me" as you now assign a specific gender to the chatbot.
Curiously though I don’t get the same sensation when technologies are gendered by other people. I honestly don’t recall thinking about it when Apple released Siri. (Now I’m second-guessing myself and wondering if I should’ve reacted negatively towards feminine being the default for someone in a personal assistant role.)
This trips me up occasionally when I'm translating things into English. Once, when I referred to an indefinite gender player character in a gacha game as a "he" (because the word "player" is a "he"), quite a few people got mad! Even though in my head I was never trying to imply one way or the other.
Yes judgment. Loads of it. Judge away.
This is just bizarre. Do not refer to this product of marketing-technology as you refer to a person. EVER.
Interesting, I have just the opposite situation: I have a folder with tens of experiments, many of which have become actual projects at this point.
I wanted to evaluate which engines would be best for working with LLMs, and it seems like Flax and Stride kind of come out on top - the former has a lot of stuff out of the box (including terrain) and the latter is basically all C#, which is great for debugging. But either way, the source code for both of those makes the functionality a bit easier to track down compared to Godot (which is a lot more complex internally).
So what I do now is have the engine source code locally alongside the docs, and when I want to implement something with AI I just tell it: look at the docs, then at the source if needed; write tests for our code; if something doesn't work, edit the engine source code in our branch and use the provided convenience script to rebuild the engine (both of those are also pretty fast). I ended up settling on Flax, plus its component model is closer to Unity, which I like.
I don’t ask the AI to create scene files though, or any sort of visual assets, but rather stuff like RTS/simulation code. I don’t think any AI is that well optimized for the 3D work outside of simple proof of concept setups.
https://drive.google.com/file/d/1A7kfcjHjSmCNidqc9t731uoglzL... https://drive.google.com/file/d/1Bl_n0ECqc78LGGf7SsOx38mRUOP... https://drive.google.com/file/d/1JMcgzqcnZ2ncboeyAXvscRWagqR... https://drive.google.com/file/d/1-luJ6y7YslNfwmFnCdIDbJ871i0... https://drive.google.com/file/d/14n4TLAVywk_1GMhLLGOuukQwUmb...
Here are screenshots of some of the UI styles that it generated.
Sort of writing a narrative on top of it, live.
Unfortunately, local models are still a bit slow and weak, but it was interesting to see what it came up with nonetheless.
> he even helped me build the lore. These have been one of the most fun times using a computer in a long time.
Such a warm, touching story about a friendship between a grown-up man and his neural network. But at least I had a good, roaring laugh reading this nonsense, thank you for that!
…and yet, most people continue to say that non-standard tooling ecosystems, where the agent cannot run and validate the code it writes, remain difficult and unproductive.
“I just pointed CC at godot and it made a game! This is sooo good”
…is a fairytale.
What tooling are you using to make it run and compile the code? How is it iterating on the project without breaking existing functionality?
None of these are insurmountable, but they require some careful setup.
Posts like this don't make me laugh; they just make me roll my eyes.
Either the OP has not done what they claim.
Or they have spent a lot more time and effort on it than they claim.
> I gave him game design ideas, he comes with working code. I gave him papers about procedural algos, and he comes with the implementation, brainstorm items, create graphic assets (he created a set of procedural 2d generators as external tools), he even helped me build the lore.
Such a sweet story about a boy and his AI.
Unfortunately, I also dont believe in fairytales.
Instead of waving your hands wildly about AI, post some videos and code of the results.
This is hackernews, not hypenews.
Here's a bullet point list of the things Claude's done according to OP:
* it picked up the general path immediately
* he explicitly pushed into "lets have V0 game play loop finished, then we can compound and have fun = not giving up".
* [I gave him game design ideas,] he comes with working code.
* [I gave him papers about procedural algos,] and he comes with the implementation
* brainstorm[ed] items
* create[d] graphic assets
* he created a set of procedural 2d generators as external tools
* he even helped me build the lore.
Every one of these is plausible in isolation.
You imply I'm merely "pointing CC at godot and it made a game"; I never said it was simple, required no previous knowledge, that it was instant or that the game was done. I do have a careful setup involving CI and isolation.
Godot provides a headless mode. CC runs python scripts to run tests and check for debugger warnings. For anything more complex it can wire debug info anywhere. Godot is fully code based so you can make the analogy with any other framework you used AI assistants with.
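As a minimal sketch of what such a check script might look like (the `--headless` flag is real in Godot 4, but the test-script path and the set of warning markers here are my assumptions):

```python
import re

# Hypothetical helper an agent could run after something like
#   godot --headless --script res://tests/run_tests.gd > run.log
# It scans the captured output for the error/warning markers Godot prints,
# so a CI step can fail the run when any show up.

def find_issues(log_text: str) -> list[str]:
    """Return lines that look like Godot script errors or engine warnings."""
    markers = re.compile(r"^\s*(SCRIPT ERROR|USER ERROR|ERROR|WARNING)[:\s]")
    return [line for line in log_text.splitlines() if markers.search(line)]

sample = """\
Godot Engine v4.2 ...
WARNING: Node not found: "Player"
All 12 tests passed.
"""
issues = find_issues(sample)
print(len(issues))  # 1
```

The same parse step could feed back into the agent loop as "fix these lines" context instead of just failing the run.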
Not sure what you can't believe about my statements. CC implementing an algo from a paper? That it can brainstorm item or lore ideas? I don't seem to be claiming anything beyond the common usage of LLMs.
The part that still bites me is across sessions. A tight loop fixes this run, but next week the agent can walk into the same rake again: same wrong import path, same misuse of an internal API, same CI-only dependency issue. After patching the same class of failure a few times, I started writing those down outside the chat context so the next run sees the failure pattern before it guesses.
Why is it always so un-specific with you AI-boosting bunch, whenever you get pressed for concrete results? Suddenly it's not so magical any more, but merely screenshots showing "broadly" the progress, or it's the Nth version of a note-taking app, or something you merely did for a demo presentation. But nothing ever of use with you folks.
> it picked up the general path immediately
I said:
> Or they have spent a lot more time and effort on it than they claim.
You said:
> You imply I'm merely "pointing CC at godot and it made a game"; I never said it was simple
Well. I don't care enough to argue with you, but I'm not the one being contrary here.
Readers can google “claude with godot” for a guide on setting it up and decide if that counts as picking it up immediately or not, and if what you said is honest, or hype.
What I said is not that I don't believe you're using Claude, but that I roll my eyes at the unbounded enthusiasm for using AI agents with the magical pretence that it's easy and productive straight away.
It's not.
Your post gave the impression that it is.
That makes me roll my eyes.
> But I had already answered, before your comment, with screenshots
> Of course these are basic placeholders for a few hours of work
Lord, spare me. You spent a few hours vibing and came to the conclusion that everything is golden?
…and yet you have a:
> I do have a careful setup involving CI and isolation.
So what, you spent more time on your setup than actually coding before posting?
/shakes-head
Whatever man.
Have fun. I stand by what I posted before.
No one could have built this software but me because it’s worth nothing to others. And I couldn’t build it because it takes too long. But when I’m using an agent to code the limited resource is my attention which actually does fine so long as every free brain cycle is on a task. So these personal things are great to throw into my tab loop to occupy a free slot.
These have been wonderful times.
There's just no pressure to handle edge cases or write docs for people who'll never use it. Just solve exactly your problem and move on.
I wonder whether there could be an AI autocomplete specifically for the task of helping you with the markdown file (and collecting your thoughts and writing prompts in general). Not an agent since that wouldn't really save time, but actually an autocomplete.
Maybe a small specially-trained local model running at hyper fast speeds and which already has your project context baked in with prefix caching (with some other larger model having summarized the context beforehand to feed to the small model), so as you type this file it automatically uses the same prompt prefix over and over to suggest autocomplete which actually makes sense.
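A toy illustration of that prefix-caching idea (everything here is hypothetical, just to show the shape of it): the expensive part of each autocomplete call is re-processing the shared project-context prefix, so you compute its "state" once and only process the freshly typed suffix per keystroke.

```python
from dataclasses import dataclass, field

@dataclass
class PrefixCache:
    # Maps a context prefix to the (stand-in) model state computed for it.
    cache: dict = field(default_factory=dict)
    prefix_hits: int = 0

    def _process(self, text: str) -> str:
        # Stand-in for the costly model forward pass over `text`.
        return f"state({len(text)} chars)"

    def complete(self, prefix: str, typed: str) -> str:
        if prefix not in self.cache:
            self.cache[prefix] = self._process(prefix)  # paid once per project
        else:
            self.prefix_hits += 1
        state = self.cache[prefix]
        # Only the freshly typed suffix is processed on every call.
        return f"suggestion from {state} + {typed!r}"

pc = PrefixCache()
context = "## Project summary\n(large summarized context here)\n"
pc.complete(context, "- [ ] refactor the ")
pc.complete(context, "- [ ] refactor the parser")
print(pc.prefix_hits)  # 1: the second call reused the cached prefix state
```

Real inference servers do this at the KV-cache level rather than with a dict, but the latency win comes from the same place: the long shared prefix is never reprocessed.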
I sure hope companies double down on leetcode nonsense, because I really don’t have any capacity to compete with this level of ADHD.
I did pay the $10 for the following domains, but I'm OK with that so I can share some of the fun things that come out of the agent.
grandcheaten.com - a save game editor and guide for jagged alliance 3
thedailycheat.com - a save game editor for newstower
It's not well-known, but Itch's offline Steam equivalent (<https://itch.io/app>) is also open source.
(Shameless plug)
* Sambervise: https://github.com/edward-murrell/sambervise - A Linux GUI application for remotely administering Samba 4 Active Directory Domain Controllers.
* Krbtray: A GTK3 system tray application for Kerberos ticket management on Linux Mint / Cinnamon (and other GTK environments using GtkStatusIcon, such as XFCE and MATE). https://github.com/edward-murrell/krbtray
Another important development is that coding assistants greatly reduced the cost of refactoring whole software projects. With coding assistants we can explore the solution space with deeper changes at a fraction of the time it would take us just to write the code alone, let alone draft how modules were designed.
This isn't without tradeoffs, though. Some models can and often do generate code that misses the bar on maintainability. Just because we save time writing it doesn't mean we don't have to spend time reiterating, cleaning up, and updating system prompts/instruction files to ground the prompts.
I'm a millennial who builds furniture with hand tools and wood joinery from a century ago. Nobody taught me, although I did find resources online to learn from. I should not be able to do these things. Everyone should have forgotten this esoteric, obsolete, uncommon knowledge by now. Yet here I am, doing it anyway. It turns out you can just learn what you want, when you want to. I don't fear losing this skill in the future, because I can just remind myself how it works. The tools, books, videos, and wood aren't going anywhere.
You aren't going to "be deskilled" from not writing code by hand regularly. Just because you use AI doesn't mean your brain grows a black hole from which information can never return. It's not giving you Alzheimer's. There might be a small amount of time it takes for you to refresh yourself, but then you're back to work again. Just ask anyone who went from coding to managing. They're a little rusty when they go back after years of absence, but they pick it back up.
Also, especially if it's a personal project, keep in mind you do not need to burn Opus tokens. Buy any of the dirt cheap subscriptions which give you access to MiniMax. Put it in a container on yolo mode. Give it some context, a prompt, web search, and a ticket system like beads. Then let it churn. You aren't in a rush, it's a personal project. As long as you follow the brainstorm -> plan -> implementation -> testing process, and have added methods to do real testing (not mocks or unit tests), it will get done with time and money to spare.
I got it working well enough to display what I wanted in text and ASCII, but I could never get the interface good enough to want to use it daily, and certainly couldn't get the graphical interface working. I threw it at Claude Code, told it what I wanted the graphical interface to look like, and let it run.
It got the app to exactly what I wanted, and even found a bug in the date parser that I hadn't noticed. I now have it running in the corner of my screen at all times.
The next app I'm going to build is an iPhone app that turns off all my morning alarms when the kids don't have school. Something I've wanted forever, but never could build because I know nothing about making iPhone apps and don't have time to learn (because of the aforementioned children).
Claude Code is brilliant for personal apps. The code quality doesn't really matter, so you can just take what it gives you and use it.
Agreed.
The clipboard manager I had been using on my Macs for many years started flaking out after an OS update. The similar apps in the App Store didn’t seem to have the functionality I was looking for. So inspired by a Simon Willison blog post [1] about vibe coding SwiftUI apps, I had Claude Code create one for me. It took a few iterations to get it working, but it is now living in the menu bar of my Mac, doing everything I wanted and more.
Particularly enlightening to me was the result of my asking CC for suggestions for additional features. It gave me a long list of ideas I hadn’t considered, I chose the ones I wanted, and it implemented them.
Two days ago, I decided I wanted a dedicated markdown editor for my own use—something like the new markdown editing component in LibreOffice [2] but smaller and lighter. I asked the new GPT 5.5 to prepare an outline of such a program, and I had CC implement it. After two vibe coding sessions, I now have a lightweight native Mac app that does nearly everything I want: open and create markdown files, edit them in a word-processing-like environment, and save them with canonical markdown formatting. It doesn’t handle markdown tables yet; I’ll try to get CC to implement that feature later today.
[1] https://simonwillison.net/2026/Mar/27/vibe-coding-swiftui/
Create a shortcut that turns off all alarms. You can have it read your calendar or whatever as a signal to determine whether alarms should be on/off for a certain day/time, and have it run on a regular schedule.
(But in seriousness, I hadn't considered using shortcuts. It's not clear it's extensible enough to do exactly what I want, but I'll look into it)
It leaves more room for skill expression when you're making architectural decisions, defining scope, and designing the application.
If you like creating, buying software from Anthropic is boring as hell.
If you really want to engage an LLM to help, point it towards Cherri (https://github.com/electrikmilk/cherri) to help with implementation.
Why do you think that? I do regular ol' coding at my day job and have been vibe coding some side projects. They both require using my brain and both require my input for something to be created. They are different, though.
> Instead all you're doing is creating more cheap mediocre throwaway crap just because you can.
It probably is these things but since I'm just building stuff for myself, it hardly matters.
I've written a lot of code and a lot of that has been doing roughly the same thing. It's not a mental challenge; it's a chore. Sometimes it is really gratifying to code and try to figure stuff out. Often times it is not. So when it comes to building something in my free time, I'd prefer to avoid that sort of mental friction and banal tasks just to start working on the actual problem. More so than that, I'm building tools for myself to make my life easier so I can spend it more on something else.
I ride an electric bike with pedal assist. Does that mean I'm not really bicycling? Some might say yes and that it defeats the point. To me it ensures that I pick the bicycle more because it reduces friction to do so. I know that if I encounter a hill that the pedal assist will help me up it and thus I use it more and the net benefit outweighs the downsides. I think it's the same thing here.
I don't take pride in the work that an LLM does for me but I will happily benefit from it. It's a tool.
And just like with bikes, people who take pride in doing things the hard way can continue to do so. And they shouldn't belittle people who choose to use assistance.
I've been programming for 30+ years now, but I've always been fine with command line applications. Only recently I started getting into Qt to add a UI and turn my stuff into a real desktop application. It's been a real steep learning curve but I'm finally over it more or less.
Anyway I posted a screenshot of my application on LinkedIn, and mentioned it would be free and open source. I got HUNDREDS of comments from "LinkedIn-type people" all big name engineers that wouldn't HIRE me for anything but either made comments like "looking forward to integrating this into our workflows" or "not the first time someone tried to do this..."
Either way, instead of feeling motivated, I got the worst feeling that I'm doing all this work and people are either going to just take advantage of it and get the credit for "finding" it, or criticize it simply because it's not for them.
It bummed me out so bad that I stopped work on it entirely for like a month.
Anyway I finally came to look at it the way you mentioned. What I LIKED was the process of learning Qt and seeing my old programs come alive.
So instead, it's my "project car" now. I build it up and tear it down all the time. Totally redesign the data models just to see what advantages different designs can give me. Try making my own graphical views. Try implementing language translations.
It's been "finished" for a while now but I probably have five completely different-under-the-hood versions of it and THAT is what has been fun.
I use it constantly all day at work and I never mentioned it on LinkedIn again lol
All the personal tools described in this thread are duct tape and bubblegum under the hood and nowhere near productionizable. That's what Claude Code makes for you.
The whole point is that for personal tools, code quality never really mattered since it's never going to be exposed to the public or be iterated upon by a revolving door team of devs like real software products. These are all highly overfitted tools that shave off like 15 seconds of time in the day for some particular person.
It's almost exactly like having a 3D printer for software, with exactly the kind of quality that a present-day 3D printer gives you.
Ironically, the value of implementing these ideas is dropping fast. A few weeks ago, I built a little search library that runs in the browser and doesn't need a server. It's styled after Elasticsearch and has most of its term and match query support, aggregation support, and I added ANN vector search as well (uses WebGPU). Most of that was just me going "let's add feature X" and boom, done. I've used it in some websites (also built using AI) at this point. It doesn't scale, but it's great for blogs, documentation sites (https://querylight.tryformation.com/, this site documents the library), etc. It all works exactly like I imagined it would. I could probably add most of the long tail of Elasticsearch features to this library with very little effort.
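To make the Elasticsearch-style idea concrete, here is a minimal sketch of a term query over an inverted index - the `TinyIndex` API is invented for illustration, not the actual API of the library above:

```python
from collections import defaultdict

class TinyIndex:
    """Toy in-memory inverted index: term -> set of doc ids."""

    def __init__(self):
        self.docs = {}
        self.inverted = defaultdict(set)

    def index(self, doc_id, text):
        # Naive whitespace tokenization; real libraries do analysis/stemming.
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.inverted[term].add(doc_id)

    def term_query(self, term):
        # Exact-term lookup, analogous to ES's {"term": {"field": value}}.
        return sorted(self.inverted.get(term.lower(), set()))

idx = TinyIndex()
idx.index(1, "search that runs in the browser")
idx.index(2, "no server needed for this search")
print(idx.term_query("search"))   # [1, 2]
print(idx.term_query("browser"))  # [1]
```

Aggregations and ANN vector search layer more machinery on top, but the core "no server needed" trick is just that this whole structure fits comfortably in browser memory for blog-sized corpora.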
But the flip side is that the library got a rather lukewarm reception on Github. It seems people are too busy coding things themselves with AI to appreciate other people's efforts much. And fair enough, if you need a search library, you could probably generate your own. Or just let the AI pick one for you. It's not like this was hard for me or a lot of work.
The economic value of these projects is dropping rapidly. I still like doing them because I like building stuff. And I think there is a learning curve with these tools that is important to master, because there is a lot of work that still needs doing, and people will pay less for it while still expecting decent results that you can only get if you master the tools. The ambition level will just go up to match what is now possible. People thinking that they are going to lean back while the AI works for them are in for a surprise. I have worked very long days these last months.
Despite coding from a young age I always thought that I cared more about the outcome than the code. Turns out that’s not entirely the case.
Not any more. A few weekends and everything has been implemented, and I have learned a ton in the process. It has been great fun doing this turbocharged tinkering. I think personal projects are where LLMs shine the brightest.
Pretty much 100% of projects I've done with vibe coding/engineering is in the second category. Stuff I need that either doesn't exist or exists, but is either horribly complex to configure or is a mess of 420 features even though I just need one of them.
It's a lot easier for me to implement that one specific feature just for myself than keep vigilant on an existing app's eventual scope creep as it progresses to the eventual ability to read email[0] =).
Now it is different in a way — I don’t have time to use them.
You can pay more of course, buy them a computer, an internet connection, books, courses, even an office, but it isn't required.
Just pay 60 per project every 4 weeks and ignore it. If interesting progress happens, it's fun to look at.
Just last week I was looking for a way to move all the windows from one screen to another in a go. After evaluating many clumsy or over-complicated existing solutions, I asked copilot to write a C program to do it. It had to be minimal and not depend on any runtime framework. A few loops later I had what I wanted without installing third-party tools!
Sounds like a job for 10 line AutoHotkey script.
The new version is live at http://pixel.drawbang.com after 1 week of prompting Opus 4.6 with a max subscription.
Anyhoo, I'm working on making it pretty (it works!!) before integrating it into my opinionated GraphQL server[1].
There really is no excuse for NOT being the change you wish to see in the world anymore.
---
IMO, what is getting worse is not Claude Code, the CLI tool, but the Anthropic API. That's what most people experience.
I used Claude Code with GLM 5.1, MiniMax M2.7, Kimi K2.6 and had pretty good results.
I prefer Claude Code over OpenCode because most plugins and skills have best compatibility with Claude Code.
Even Terminal Bench showed a bit better results for Claude Code than for OpenCode.
Having better tools really makes a difference when revisiting old or half-finished projects.
And when you inevitably get bored with it, well, you've not done much anyway. You can always get back up to speed in a month and have the LLM remind you of what it was doing.
I'm very interested in Local LLMs but the cheapest Mac Studio right now is more expensive than 8 years of a Claude Code Pro subscription, and incomparably slower/less capable. If I get bored with it, I will have a piece of unused hardware and a couple grand less in my bank account.
My partner on the other hand has an M3 Max 64GB, which I've had way more success with. Setting up opencode and doing a tiny spec-driven Rust project and watching it kiiinda work was extraordinarily exciting!
...but all the AMD 395+ machines I can find are even more expensive than the aforementioned cheapest Mac Studio. Mac Studio starts at $2,000 (only 32GB), AMD 395+ 128GB machines seem to start at $3,000 from what I can see.
I do not know if there's a smaller model with the same capability, but that model size and a context window at 128 seems like a sweet spot.
Token speed really isn't a bother, because I'm either just multitasking or working on filling in the missing details.
Regardless, I think compare VRAM size against your target model first, then speed, for cost efficiency. Plus, keep a healthy skepticism of Mac hardware costs.
And with a Claude or GPT $20 subscription, I can do other fun things too, like using it for real things (emails) or image generation.
A Mac Studio or AMD 395 is neither of those. And it's not just a basic setup either. I need to buy it, configure it, put it somewhere. That alone is a grand and more, plus a whole weekend.
This means you may be opinionated today about something you will not have tomorrow, in 6 months, in a year. All that workflow you salivate over can be ripped away.
If you're fine with that, and you've "escaped the permanent underclass" congrats, this opinion is not for you.
It's actually really cool to have it work on some internal tooling and stuff while I work on my primary projects.
I'm surprised how easy it is to setup and that it can handle modestly complex planning and development flows.
now Claude will gas you up and tell you your bad ideas are actually the most amazing thing it’s ever heard
I have an ML setup with two 4090s and 128GB of RAM; it's warm when I use them for finetuning or batch processes.
I do not run them for coding. It's a lot easier and nicer to play around with better models for just $20.
Also Anthropic is by far the best, open (local) models are glorified autocomplete at best unless you casually have 20k€ worth of hardware at home.
Very usable locally assuming you setup your local tooling correctly and you are an actual programmer who can generally help drive this stuff correctly and not just a vibe coder.
I’ve tried multiple that I can run locally and they’re all very much just glorified autocomplete, but slower - on a M4 Max MacBook
Then I’d be giving money to openrouter and a Chinese model provider, is that better?
Maybe even shamelessly post it as a Show HN along with the other 99% of worthless slop submissions there.
An LLM is not a runtime. It might be something akin to a non-deterministic compiler that converts your MD to code.
[we've hopefully deprovokified the title now]
Why does it bother me so? I have no idea.
i doubt anyone is nouning "agentic" of their own accord (yet)
I just shipped one this week (ToolRelay - toolrelay.online) by forcing myself to focus on a single vertical slice end-to-end and stop opening new repos.
The pattern that broke for me: stop building, start distributing. The build phase gives a dopamine hit; distribution feels painful, so we keep building instead.
Curious — was the AI assistance helping you build new features, or helping you re-understand your own old code months later?