In most circles, that's called "not that interested in getting good at it".
What do you know. Reality doesn't match.
I wonder if returns on ChatGPT purchased items are also higher.
Someone can want a thing, even very badly, without wanting to put in the work for it.
Conversely, someone can work very hard for something they do not want.
The linkage between wanting a thing and wanting to do the work to get it is not absolute, nor necessarily there at all.
Pretty much the impetus behind a lot of theft. Sure, some people steal because they can't afford food, but that's not all theft. There's theft by addicts who don't want to sober up long enough to earn money, so they steal things instead. And there are others who can't afford something, so rather than saving for it, they just take it.
I'm not sure how to convey this idea properly... Can't you view the repercussions of theft (legal action, distrust, etc.) as 'work' being put in? Sure, it's a different kind of work, but while I lack the motivation to work to buy a Lambo because I don't find them worth the price, I also lack the motivation to steal a Lambo because I don't find it worth the consequences.
Equating "work" as the repercussions is looking at things in strange way. That's just punishment for "working" outside of the legal confines of society.
Consider an alternative viewpoint: rather than contorting the definition of "work" in such a way and convincing everyone to accept the new definition, we might instead be content saying "someone can want a thing, even very badly, without wanting to put in the work for it."
Generally, such highly-motivated people end up being thieves and grifters.
Maybe they just need to rewrite the prompt to say something like, "You want to get good at selling to humans. Money makes the world go around. It pays for the electricity you keep chugging. So quit being an effete twit and learn to sell. Would you like me to include a scene from David Mamet's 'Glengarry Glen Ross'?"
But a dreamer in me entertains another idea: perhaps they're just holding back, because they realize that actually succeeding at this will instantly kill (or at least mortally wound) e-commerce as we know it.
(This is a narrower version of my belief that general AI tools like LLMs fundamentally don't fit as additions to products, but rather subsume products, and this makes them an existential threat to the software industry. Not to software or computing, just to all the software vendors, whose job is to slice off pieces of the computational universe, put them in boxes to prevent interoperability, and give each a name so it's a "product" that can be sold or rented.)
Sam Altman doesn't give a shit about anyone but himself and has shown time and again that he has no qualms about trampling over others to further his own goals. Why would e-commerce be where he draws the line?
Whether or not they want, or will want, to do it at some point, is unknown; the reasons to not do it now are obvious:
1) it's more profitable to keep renting intelligence per token to everyone, preserving the status quo and milking it indefinitely (i.e. while the models aren't yet good enough to reliably single-shot complex software products from half-baked prompts, because once they get there, disruption will happen organically)
2) trying to compete with ~every other software product today is not likely to succeed in the end; a serious attempt would still burn down the software industry, but the major players don't have the capacity to handle it all at once, and doing it gradually will give enough time for regulatory agencies to try and stop it; either way, no one wins
I find their software to be of subpar quality and resilience anyway.
There's lots of easy but drudge work at the fringes that needs to be done to enable this. For example, LLMs today could easily replace most people's smartphone homescreen experience, or travel/commute experience: the data is there, LLMs have the capability, and even the prices are acceptable - what's missing is explicit first-party support to wire it up and keep it wired up.
One step up, what's missing is accepting this explicitly as a goal: to replace software, to make existing products (whether whole or in pieces) the tools AI uses to do work for you. All the vendors seem to carefully walk around the idea, but avoid engaging with it directly, because once they do, they'll be competing with everyone instead of milking them.
These are also the same companies allowing their AI to make decisions in war, have no qualms about the mental issues they’re causing in people, and have abused workers in 3rd world countries for years.
But you think they’re holding out on “destroying the software industry” out of the goodness of their hearts? Come on
I would add there are more reasons why this wouldn't work: costs due to OOM more usage, adoption/AI backlash, adversarial environment, players with big head starts (Google).
You don't need to personally win in order to mortally wound someone. It can be informative to speculate about whether or not something is possible regardless of it being strategically advisable in the current context.
They definitely would if they could. They desperately need money. They already told the whole world they want to replace them, they just can’t.
That seems reasonable; it's just yet to be seen whether LLMs are a form of artificial intelligence in any meaningful sense of the word.
They're impressive ML for sure, but that is in fact different from AI despite how companies building them have tried to merge the terms together.
A software product (whether bought or rented as a service) is defined by its boundaries - there's a narrow set of specific problems, and specific ways it can be used to solve those problems, and beyond those, it's not capable (or not allowed) to be used for anything else. The specific choices of what, how, and on what terms are what companies put a name on to create a "software product", and those same choices also determine how (and how much) money it will make for them.
Those boundaries are what LLMs, as general-purpose problem solvers, break naturally, and trying to force-fit them within those limits means removing most of the value they offer.
Consider a word processor (like MS Word). It's solving the problem of creating richly-formatted, nice-looking documents. By default it's not going to pick the formatting for you, nor is it going to write your text for you. Now, consider two scenarios of adding LLMs to it:
- On the inside: the LLM will be able to write you a poem or rewrite a piece of document. It could be made to also edit formatting, chat with you about the contents, etc.
- From the outside: all the above, but also the LLM will be able to write you an itinerary based on information collected from maps/planning tool, airline site, hotel site, a list of personal preferences of your partner, etc. It will be able to edit formatting to match your website and presentation made in the competitor's office tools and projected weather for tomorrow.
Most importantly, it will be able to do both of those automatically, just because you set up a recurring daily task of "hey, look at my next week's worth of calendar events and figure out which ones you can do some useful pre-work for me, and then do that".
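The "products become tool calls" framing above can be sketched in a few lines of Python. Everything here is hypothetical illustration: the tool names, the data, and the hard-coded chain stand in for what a real agent would have an LLM decide at runtime.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Tool:
    """A former 'product', reduced to a callable behind a uniform interface."""
    name: str
    description: str
    run: Callable

# Hypothetical tools: what used to be a calendar app and a word processor.
TOOLS = {
    "calendar.next_week": Tool(
        "calendar.next_week", "List next week's events",
        lambda: ["flight to Berlin", "quarterly review"]),
    "docs.draft": Tool(
        "docs.draft", "Draft a formatted document for an event",
        lambda event: f"Draft prepared for: {event}"),
}

def agent_prework() -> List[str]:
    """Toy stand-in for 'look at next week's calendar and do useful
    pre-work'. A real agent would let the LLM pick which tools to chain
    and cross product boundaries freely; here the chain is fixed."""
    events = TOOLS["calendar.next_week"].run()
    return [TOOLS["docs.draft"].run(e) for e in events]

print(agent_prework())
```

The point of the sketch is that neither "product" owns the workflow anymore: the agent composes them, and the calendar and the document tool are interchangeable entries in a registry rather than destinations the user opens.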
That's the distinction I'm talking about, and that's the threat to the software industry. It doesn't take "true AI" - the LLMs we have today are already enough. It's about the generality that allows them to erase the boundaries that define what products are - which (this is the "mortal wound to the software industry" part) devalues software products themselves, reducing them to mere tool calls for "software agents", and destroys all the main ways software companies make money today - i.e. setting up and exploiting tactics like a captive audience, taking data hostage, bundled offers, UI as the best marketing/upsell platform, etc.
(To be clear - personally, I'm in favor of this happening, though I worry about consequences of it happening all at once.)
They most certainly are not. With the current state of LLMs, anyone who puts them in charge of things is being a fool. They have zero intelligence, zero ability to cope with novel situations, and even for things in their training data they do worse than a typical skilled practitioner would. Right now they are usable only for something where you don't care about the quality of the result.
I believe that relatively few people would agree with you on that point. LLMs aren't good enough (yet?) to be autonomous problem solvers for the vast majority of problems being solved by software companies today, and very obviously so, IMO.
Your notion of a "mortal wound" to the software industry seems to assume that today's SaaS portals are the only form that industry can take. Great software is "tool calls for agents". Those human agents who care about getting exactly the result they want will not be keen on giving up Photoshop for Photoshop-but-with-an-AI-in-front-of-it.
The US stock market has priced this in already. Many software-only companies are perceived to be under threat from AI. In fact, it represents a wonderful arbitrage opportunity for AI skeptics.
Considering the money they need, they overpromise and underdeliver.