upvote
I am a hobbyist playing around. Recently dropped CC (which gave me a sense of awe 2 months ago), but they realized GPUs need CapEx, and I want to screw around with pi.dev on a budget. Then on to GH Copilot, but I couldn't understand their cost structure and ran out of quota half a month in; now on Codex. I don't really see any difference for little stuff. I also have Antigravity through a personal Gmail account with access to Opus et al, and I don't understand if I am paying for it or not. They don't have my CC, so that's a breather.

It's all romantic, but a bunch of devs are getting canned left and right, a slice of the population whose disposable income the economy depends on.

It's too late to be a contrarian pundit, but what's been done besides uncovering some 0-days? The correction will be brutal, worse than the Industrial Revolution. Just look at the recent news about cuts at Meta, Salesforce, Snap, Block; the list is long.

Have you shipped anything commercially viable because of AI or are you/we just keeping up?

reply
> The correction will be brutal, worse than the Industrial Revolution.

Has it occurred to you that there might not be a correction, and that the outcome would still be brutal, at least on par with the Industrial Revolution?

reply
It won't get that far.

It's physically impossible to build out the datacenters required for the "AI is actually good and we have mass layoffs" scenario. This Anthropic investment is spurred on because they've already hit a brick wall with capacity.

$40B goes a long way, but not for datacenters where nearly every single component and service is now backordered. Even if you could build the DC, the power connection won't be there.

The current oil crisis just makes all of that even worse.

reply
We pretty much already had the layoffs, at least that's my perception.

The next level of layoffs is probably still 25 years out.

reply
> The next level of layoffs is probably still 25 years out.

Hasn't even been 25 years since the previous layoffs before the current ones.

reply
There's layoffs, certainly.

But all the economic indicators suggest those are "bad economy" layoffs dressed up as "AI" layoffs to keep the shareholders happy.

reply
Do you mean as in there will be no happy ending / reset and no another century of prosperity?
reply
I mean as in living through the industrial revolution would have been wild. So whether we have an AI revolution or an AI bubble it's bound to be a roller coaster.

And that's without accounting for the various wars (and resultant economic impacts) that are already in progress. A large part of what drove the meat grinder of WWI was (very approximately) the various actors repeatedly misjudging the overall situation and being overly enthusiastic to try out their shiny new weapons systems. If one or more superpowers decide to have a showdown the only thing that might minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems. Even in that case as we know from WWII the logical outcome is a decimated economy and manufacturing sector regardless of anything else that might happen.

reply
> minimize loss of life this time around is (ironically enough) the rise of autonomous weapons systems

I think that just means the relative civilian loss of life will increase once again.

reply
What strategic merit is there in targeting civilians or life critical infrastructure in a fully automated battlebot scenario? Perhaps it's naive but I would expect stockpiles, datacenters, and any key infrastructure on which the local semiconductor fabrication depends to be the primary targets.
reply
The current reality doesn't match your expectations. Russia is using automated warfare to strike what are primarily human life-critical targets.
reply
Look at Ukraine for answers: Russians target almost purely civilian infrastructure and civilians in terror campaigns every single day and night, same as the Nazis did to Britain in WWII. With exactly the same results, but they just double down and send more drones the next day.

Russia is really an empire of the dumb and subjugated serfs at this point (again, history repeats), but they are far from the only such place.

Don't expect better; most people are not that nice when SHTF.

reply
Bubble or revolution - not a dichotomy.

Bubbles like the AI bubble are a game-theoretic outcome of a revolution: many players invest heavily to avoid losing, but as a whole the market over-invests. This leads to a bubble.

reply
Imagine you're a typesetter and they just invented computerized printing.
reply
There has always been a gap between the experience of solo/small shop developers, vs. developers who work in teams in a large corporate environment. But thanks to open source, we have for the past twenty years at least mostly all been using the same tools.

But right now, the difference in developer experience between a dev on a team at a business which has corporate copilot or Claude licenses and bosses encouraging them to maximize token usage, vs a solo dev experimenting once every few months with a consumer grade chat model is vast.

reply
Let’s take an extreme example.

Meta seemingly has a constant stream of product managers. If LLMs really augment the productivity of engineers, why isn’t Meta launching lots more stuff? I mean, there’s no harm in at least launching one new thing.

What are all those people doing with the so called productivity enhancements?

What I’m calling into question is how much generating more code matters if the bottleneck is creativity/imagination for projects.

The only thing I’ve seen is a really crummy meta AI thing implemented within WhatsApp.

reply
It’s allowed a sludge of internal tools to spin up, and more bloat. The ability to sandbag and over-build these tools has gotten 2-10x worse.

Only solution I can think of is to drastically cut headcount so productivity is back to prior levels, and profitability is raised. Big Tech is mostly market constrained with not much room to grow beyond the market itself growing.

As for startups, seems like AI tools have drastically reduced their time to market and accelerated their growth curves.

reply
Forgive my ignorance, but what exactly is the vast difference? Who's doing more of what, or whatever you're implying? And how do you quantify this?
reply
The difference is (if you'll forgive me recruiting a couple of straw men for the purpose of illustrating the spectrum we are talking about here):

Hobbyist solo dev, counting tokens, hitting quotas, trying things on little projects, giving up and not seeing what the fuss is about.

vs

Corporate developer, increasingly held accountable by their boss for hitting metrics for token usage; being handed every new model as soon as it comes out; working with the tools every day on code changes that impact other developers on other teams all of whom have access to those same tools.

reply
Okay, so just to be clear you're not commenting on productivity? Or what does "changes that impact" mean?

I might be missing a lot of self-evident assumptions here but I feel like I'm still missing so much context and have no idea what this difference is actually describing.

reply
If you have some objective measure of productivity in mind, feel free to share it, but no that's not what I'm commenting on.

I'm talking more about why threads like this seem to be full of people saying 'this has completely changed how corporate development works' and other people saying 'I tried it a few times and I don't get the hype'.

reply
Sounds exhausting. Are your revenue numbers up?
reply
I am also curious about the correlation between more PRs getting merged faster and actual business outcomes.

My impression has always been that it's more important to build the correct thing (what the customer needs/wants) rather than more stuff faster.

reply
> My impression has always been that it's more important to build the correct thing (what the customer needs/wants) rather than more stuff faster.

The process of learning what the customer needs/wants is a heavily iterative one, often involving throwing prototypes at them or betting on a solution, then course-correcting based on their reaction. Similarly, the process of building the correct thing is almost always an iterative approximation - correctness is something you discover and arrive at after research and prototypes and trying and getting it wrong.

All of that benefits from any of its steps being done faster - but it's up to the org/team whether they translate this speedup to quality or velocity. For example, if AI lets you knock out prototypes and hypothesis-testing scripts much faster, you can choose whether to finish earlier (and start work on the next thing sooner), or do more thorough research, test more hypotheses, and finish as normal, but with a better result.

(Well, at least theoretically. If you're under competitive pressure, the usual market dynamics will take the choice away, but that's another topic.)

reply
No customers will accept "throwing prototypes at them". My time is not for QA-ing your product.

Why do you think restaurants rarely change their menus?

reply
This, with the ability to research and iterate on prototypes, in my opinion allows you to determine the right thing quicker as well. Of course, right now the value is largely intuition-based; there may be some immediate revenue/profit, but mostly the financial gains will take time to follow, so in a way it is a speculative, intuition-based bet. For a period of time it will be "trust me bro" for at least some cases, but I suppose the future will show, since the intuition seems so strong about it. You can't have good data about an emerging tech like that.
reply
Reducing costs is also a business benefit.
reply
The cost being reduced is the cost of your labour. Tokens are only getting more expensive.
reply
Incremental cash flows are what we should be observing - we have to net out the LLM costs associated with the activity.

That's just one set of costs, but a good starting point.

reply
> - Development velocity is very noticeably much higher across the board

It's an absolute tornado of PRs these days. Everyone making the most of these tools is effectively an engineering team lead.

reply
From the CTO/VP of Engineering on down, the role is now singularly focused on keeping agents fed with a backlog of Linear issues. This is the new normal.
reply
Is your team measuring how much of your code is being written with Claude and comparing amongst the team, like what works best in your codebase? How are you learning from each other?

I’m making a team version of my buildermark.dev open source project and trying to learn about how teams would like to use it.

reply
Different teams are using it in very different ways so it can be tough to compare meaningfully.

Backends handling tens to hundreds of thousands of messages per second, with extremely high correctness and resilience requirements, necessarily take a different approach than less critical services that power various ancillary sites/pages, or than front-end web apps.

That said, there's a lot of very open discussion around tooling, "skills", MCP, harnesses, and approaches, and plenty of sharing and cross-pollination of techniques.

It would be great to find ways to better quantify the actual value add from LLMs and from the various ways of using them, but our experience so far is that the landscape in terms of both model capability and tooling is shifting so fast that that's quite hard to do.

reply
Thanks for the feedback. I agree that it’s changing very fast, which is why my thesis is that this tooling will be needed to help everyone on the team keep up.
reply
It sounds very similar to my shop. I have QA people and Product Managers using Claude to develop better integration and reporting tools in Python. Business users are vibe coding all kinds of tools shared as Claude Artifacts, the more ambitious ones are building single page app prototypes. We ported one prototype to Next.js and hosted on Vercel in a couple of days and then handed it back to them with a Devcontainer and Claude Code so they can iterate on it themselves; and we also developed all the security infrastructure, scaffolding, agent instructions & policy required to do this for low stakes apps in a responsible way.

It hardly seems worth it to try to iterate on design when they can just build a completely functional prototype themselves in a few hours. We're building APIs for internal users in preference to UIs, because they can build the UIs themselves and get exactly what they need for their specific use cases and then share it with whoever wants it.

We replaced an expensive, proprietary vendor product in a couple of weeks.

I have no delusions about the scale or complexity limits of these projects. They can help with large, complex systems but mostly at the margins: help with impact analysis, production support, test cases, code review. We generate a lot of code too but we're not vibe coding a new system of record and review standards have actually increased because refactoring is so much cheaper.

The fact is that ordinary businesses have a LOT of unmet demand for low stakes custom software. The ones that lean into this will not develop superpowers but I do think they will out-compete slow adopters and those companies will be forced to catch up in the next few years.

I develop presentations now by dumping a bunch of context in a folder with a template and telling Claude Cowork what I want (it does much better than the web version because of its Python and shell tools, and it can iterate, render, review, repeat until it's excellent). The copy is quite good, I rewrite less than a third of it, and the style and graphics are so much better than I could do myself in many hours.

No one likes reading a bunch of vibe-coded slop, and cultural norms about this are still evolving; but on balance it's worth it by far.

reply
I am an early Gemini daily-driver type engineer; it feels like Node, Firefox, React and Tailwind all over again. Claude Sonnet is 10x more expensive. A quick thought experiment: do you think 10 Gemini prompts are needed to match the quality of one Claude Code prompt? The harness around Gemini is an issue, but I built my own (in Rust).
reply
I think if you drop this all you will absolutely kill it.
reply
I'm not sure. I have a buddy that's one of the better engineers I know personally, and he struggled to maintain an "AI Lent" for even a month. He found he just wasn't productive enough without it.

He did a writeup: https://buduroiu.com/blog/ai-lent-end/

reply
> I delivered more work that I was less confident about, making me more miserable in the process

Don't leave the kicker out of the story

reply
Personally, at my place there hasn't been a noticeable velocity change since the adoption of Claude Code. I'd say it's even slightly worse, as now you have junior frontend engineers making nonsense PRs in the backend.

Main blockers are still product, legal, management ... which Claude Code didn't help with.

reply
what have you guys built exactly?
reply
I kept asking this question last year, especially after that initial METR report showing people believed themselves to be faster when they were slower. Then I decided to dive in feet-first for a few weeks so that nobody could say I hadn't tried all I could.

At work, what I see happening is that tickets that would have lingered in a backlog "forever" are getting done. Ideas that would have come up in conversation but never been turned into scoped work are getting done, too. Some things are no faster at all, and some things are slower, mostly because the clankers can't be trusted and human understanding can't be sped up, or because input is needed from the product team, etc. But the sorts of things that don't make it into release notes, and are never announced to customers, those are happening faster, and more of them are happening.

We review server logs, create tickets for every error message we see, and chase them down, either fixing the cause or mitigating and downgrading the error message, or whatever is appropriate to the issue. This was already a practice, but it used to feel like we were falling farther behind every week as the backlog of such tickets grew longer. Mostly low-priority stuff, since obviously we prioritized errors based on user impact, but now remediation is so fast that we've eliminated almost the entire backlog. It's the sort of thing that, if we were a mobile app, would be described generically as "improvements and bug fixes". It's a lot of quality-of-life issues for us as backend devs.

At home, I'm creating projects I don't intend for anyone outside my family to see. So far they're things I could theoretically have done myself, even related to things I've done myself before, but at a scale I wouldn't bother with. Like a price-checker that tracks a watchlist of grocery items at nine local stores and notifies me in Discord of sales on items and in categories I care about. It's a little agent posting to a Discord channel that I can check before heading out for groceries.

Or several projects related to my hobbies, automating the parts I don't enjoy so much to give me more time for the parts I do. My collection of a half-dozen python scripts and three cron jobs related to those hobbies has grown to just over 20 such scripts and 14 cron jobs. Plus some that are used by an agent as part of a skill, although still scripts I can call manually, because I'll go back to cron jobs for everything if the price of tokens rises a bit more.
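The price-checker pattern described above is simple to sketch. This is a minimal, hypothetical version, not the commenter's actual code: the watchlist shape, the `find_deals` helper, and the webhook URL are all my own assumptions. The scraping of store prices is left out; only the compare-and-notify step is shown.

```python
import json
import urllib.request

# Hypothetical Discord webhook URL; substitute your own.
WEBHOOK_URL = "https://discord.com/api/webhooks/your-webhook-here"

def find_deals(watchlist, current_prices):
    """Return human-readable lines for watchlist items at or below their target price.

    watchlist: list of {"name": str, "target_price": float}
    current_prices: dict mapping item name -> current price (from a scraper, not shown)
    """
    deals = []
    for item in watchlist:
        price = current_prices.get(item["name"])
        if price is not None and price <= item["target_price"]:
            deals.append(
                f"{item['name']}: ${price:.2f} (target ${item['target_price']:.2f})"
            )
    return deals

def notify(deals):
    """Post the deals as a single message to a Discord webhook (no-op if empty)."""
    if not deals:
        return
    payload = json.dumps({"content": "Sales spotted:\n" + "\n".join(deals)}).encode()
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Run the whole thing from a cron entry (e.g. daily before shopping hours) and the Discord channel accumulates only the hits, which matches the "check the channel before heading out" workflow.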

I was super-skeptical, and now I'm not. I think companies laying off employees are delusional or using LLMs as an excuse, but there is zero question in my mind that these things can be a huge boon to productivity for some categories of coding.

reply
Jevons paradox comes into play.

https://en.wikipedia.org/wiki/Jevons_paradox

In the end, only profit matters.

reply
This sounds like my office, but we're a bit more tilted toward Codex. I personally use Claude Cowork for drudge-admin work, GPT 5.5-Pro for several big research tasks daily, and the LLMs munge on each other's slop all day as I try my best to wrap my head around what has been produced and get it into our document repository -- all the while being conscious that the enormous volume of stuff I'm producing is a bit overwhelming for everyone.

We are definitely reaching the point where you need an LLM to deal with the onslaught of LLM-generated content, even if the humans are being judicious about editing everything. We're all just cranking on an inhumanly massive amount of output and it's frankly scary.

reply
Didn’t GPT 5.5 just come out lol. Am I just reading slop on this website?
reply