> if it exceeds the context, the agent does random stuff that often goes against simplicity and coherent logical structure.
That's a current technical limitation. Are you so sure it won't be overcome in the near to mid-term future?
> LLMs have zero intention, and rely on you to decide what to build and, more importantly, what not to build
But work is being done to remove or automate even this layer, right? It may be hyperbole (in fact, it is), but aren't Anthropic et al. predicting this? Why wouldn't your boss, or your boss' boss, do this instead of you? If they lack the judgment currently, are you so sure they can't gain it once they don't have to waste time learning how to code? If not now, what about soon-ish?
> At this current year and date, the AI does not automate me in any way
Not now, granted. But what about soon? In other words, shouldn't you be worried as well as excited?
If you use them a lot, you'll grow skeptical about some of the claims and hype, and get a sense of where this is heading.
My position is that if someone uses LLMs a lot, they may be right or wrong about the future of LLMs. If they don't, then they're definitely either not right or only lucky.
My personal judgement is that both of these are hard caps until someone invents something that's not a transformer, basically starting from scratch.
Completely agreed. This is not what I'm advocating for. And definitely, there's a lot of self-serving hype (and fearmongering can be another kind of hype) from AI companies. But some of it, I think, will turn out to be true, or enough companies will believe it to be true, which amounts to the same thing.
I'm just worried; I can't help it. And I'm not saying "don't use AI", I'm pushing back against the feeling of reckless "excitement".
No.
But I was also very skeptical about AI being able to code semi-reliably during the early stages of the GPT hype, and look where I am now: most of the code I produce is written by an AI. So I was wrong before, which makes me doubt my own ability to predict the near future.
> Does your boss have the time to do this AI wrangling work on top of their other tasks even if they don't have to learn to code?
My boss' boss would probably love to get rid of both me and my direct boss. A whole class of problems would disappear, freeing up time for people higher up the chain to focus on this... either them or a tiny group of engineers, which leaves me out of a job either way. I've already seen people in small shops get fired because their immediate semi-technical boss can now do their job with AI (I can't go into details for privacy reasons; also, it doesn't matter if the end result is flawed, it matters that "mission accomplished" was declared and someone is out of a job).
My impression from a couple of years ago was that it was fairly decent at coding; it was just slow to go from question -> code, and the tooling around that has improved significantly, so it's all pretty quick now. I think whether the models are fundamentally better at raw coding is a murkier question.
They still fall down at bigger architectural tasks, go off the rails, hallucinate, etc. So, it seems to me like a core problem with the current technology.
> it doesn't matter if the end result is flawed, it matters that "mission accomplished" was declared and someone is out of a job
This is a short-term problem. If the market has any sanity left, the shops that maintain the talent to execute well will outperform the shops that were short-sighted.
Your experience is very different from mine. Early GPT/LLM tech was hilariously wrong. It famously hallucinated code out of nowhere, made breaking changes all the time, and failed to follow very simple instructions. I remember when it couldn't play Tic Tac Toe! It hallucinated board positions and rules. I used to break it all the time, for fun (and it didn't take much, it mostly fell down the stairs on its own). Now it can play far more complex games.
Was I right to be skeptical? Well, based on what I saw, I was. GPT was impressive and fun but also hilariously wrong most of the time. Until it wasn't!
We've been through cycles like this before. Back in the day, Dreamweaver was going to put every web developer out of a job. More recently, Squarespace was going to do something similar. However, as soon as you step off the well-trodden path, you encounter tougher-to-debug issues, or you want some customization the tools aren't aware of or designed to handle, and now you're hiring or paying a specialist again.
I get what you're saying. This is why I was skeptical too, initially. But consider: this time it's qualitatively different, and more importantly, companies seem to believe so, which has real impact on our jobs.
Dreamweaver never threatened my job. Not once. Neither did Squarespace. I'm sure they did threaten some jobs, but ultimately they simply didn't replace the mind and hands guiding them, and in fact, they never aimed to. "No code" tools were similarly misguided for a lot of real use cases. This time, however, AI seems to be making real progress towards this, and is becoming a real threat to jobs.
The argument of "but when calculators/writing/$SOME_OTHER_TECH was introduced..." doesn't fly with me. $OLD_TECH is not necessarily analogous to new tech, or to AI in particular.
What if this time it's different?
Also, it’s a little too convenient that businesses are getting to spin their layoffs as a result of AI, rather than a weakening overall market (tariffs, higher energy costs) and a misallocation of resources (over-investment in VR, crypto).