> don't want to put in the human time and effort to do so

In most circles, that is "not that interested in getting good at it".

reply
It's really hard to be a generalist and better than all the specialists at everything. OpenAI wants to focus on the G in AGI, and optimizing for ecommerce is just not that interesting to them, so of course it can't compete with Walmart.
reply
OpenAI has... I'm not sure, but let's say 500m free users, and it's not unreasonable to assume they eventually hit 1b. That is a lot of advertising revenue, which is what powers companies like Google and even smaller companies with only 300m users like Twitter. If ecommerce isn't a major focus for OpenAI then their board members are asleep at the wheel.
reply
And yet that’s what all the AI boosters are claiming.

What do you know. Reality doesn't match.

I wonder if returns on ChatGPT purchased items are also higher.

reply
Good luck with that at the shareholder meeting.
reply
Maybe in yours?

Someone can want a thing, even very badly, without wanting to put in the work for it.

Conversely, someone can work very hard for something they do not want.

The linkage between wanting a thing and wanting to do the work to get it is not absolute, or necessary there at all.

reply
Did they stop teaching "actions speak louder than words" in schools, or something?
reply
"Someone can want a thing, even very badly, without wanting to put in the work for it."

Pretty much the impetus behind a lot of theft. Sure, some thieving happens because people can't afford food, but that's not all theft. There's theft by addicts who don't want to sober up long enough to earn money, so they steal things. There are others who can't afford something, so rather than saving for it, they just take it.

reply
Is 'the work' not reflected in 'consequences' in terms of theft?

I'm not sure how to convey this idea properly... Can't you view the repercussions of theft (legal action, distrust, etc.) as 'work' being put in? Sure, it's a different kind of work, but while I lack the motivation to work to buy a Lambo because I don't find them worth the price, I also lack the motivation to steal a Lambo because I don't find it worth the consequences.

reply
In normal society, people earn money within the legal confines of the society they are in. If you're a thief and trying to skirt that normal "earning of money", which is what normal people equate to "work", your work is scheming a plan to obtain the item without getting caught and possibly how to fence the item for money if you're not just using the item directly.

Equating "work" with the repercussions is looking at things in a strange way. That's just punishment for "working" outside of the legal confines of society.

reply
I understand what you are saying but nonetheless struggle to view the possibility of maybe getting caught and then maybe getting punished, as "work". It (the abstract concept of something possibly happening) fits into none of the definitions of "work" I have heard. Moreover, many crimes are committed without the perpetrator even thinking of the consequences.

Consider an alternative viewpoint: rather than contorting the definition of "work" in such a way and convincing everyone to accept the new definition, we might instead be content saying "someone can want a thing, even very badly, without wanting to put in the work for it."

reply
Oh, I'm with you mate, I'm not trying to die on a hill over here re-defining 'work'. I was just looking from a more esoteric view, "Do you count the risk of consequences as potential effort" I think is at least more proper phrasing.
reply
“Effort” is a great word choice.
reply
Did they start teaching that all idioms are always true without nuance in schools, or something?
reply
> Someone can want a thing, even very badly, without wanting to put in the work for it.

Generally, such highly motivated people end up being thieves and grifters.

reply
There’s a difference between being interested in getting good at something and being good at something
reply
Exactly. Just because you're not good at something doesn't mean you don't want to be.
reply
> human time

Maybe they just need to rewrite the prompt to say something like, "You want to get good at selling to humans. Money makes the world go around. It pays for the electricity you keep chugging. So quit being an effete twit and learn to sell. Would you like me to include a scene from David Mamet's 'Glengarry Glen Ross'?"

reply
I think they're operating beyond their current (human) capacity, trying to test out too many things at a time.

But a dreamer in me entertains another idea: perhaps they're just holding back, because they realize that actually succeeding at this will instantly kill (or at least mortally wound) e-commerce as we know it.

(This is a more narrow version of my belief that general AI tools like LLMs fundamentally don't fit as additions to products, but rather subsume products, and this makes them an existential threat to the software industry. Not to software or computing, just to all the software vendors, whose job is to slice off pieces of computational universe, put them in boxes to prevent interoperability, and give each a name so it's a "product" that can be sold or rented).

reply
> But a dreamer in me entertains another idea: perhaps they're just holding back, because they realize that actually succeeding at this will instantly kill (or at least mortally wound) e-commerce as we know it.

Sam Altman doesn’t give a shit about anyone but himself and has time and again shown he has no restraint for trampling over others to further his own goals. Why would e-commerce be where he draws the line?

reply
I don't think there is any line drawn here. I think if they executed well (and by they I mean any one of the three SOTA LLM vendors), they could already mortally wound the entire software industry today.

Whether or not they want, or will want, to do it at some point, is unknown; the reasons to not do it now are obvious:

1) it's more profitable to keep renting intelligence per token to everyone, preserving the status quo and milking it indefinitely (i.e. while the models aren't yet good enough to reliably single-shot complex software products from half-baked prompts, because once they get there, disruption will happen organically)

2) trying to compete with ~every other software product today is not likely to succeed in the end; a serious attempt would still burn down the software industry, but the major players don't have the capacity to handle it all at once, and doing it gradually will give enough time for regulatory agencies to try and stop it; either way, no one wins

reply
How would they mortally wound the software industry as of today?

I find their software to be of subpar quality and resilience anyways.

reply
By embracing adversarial interoperability - instead of chasing hundreds of integration deals across industries that put LLMs in products, they focused fully on integrating product access into chat, by combination of business deals, apps/MCPs, and engineer/designer support for users, all directed towards the goal of having the LLM become the "superapp" where work is done, gradually replacing product classes in order of how easy it is.

There's lots of easy but drudge work to enable this that needs to be done at the fringes. For example, LLMs today could easily replace most people's smartphone homescreen experience, or travel/commute experience, as the data is there and LLMs have the capability, even prices are acceptable - what's missing is explicit first-party support to wire it up, keep it wired up.

One step up, what's missing is accepting this explicitly as a goal: to replace software, to make existing products (whether whole or in pieces) the tools AI uses to do work for you. All the vendors seem to carefully walk around the idea, but avoid engaging with it directly, because once they do, they'll be competing with everyone instead of milking them.

reply
They can’t even deliver their own flagship products without bugs, and terrible UX. So I’m doubtful of their abilities.

These are also the same companies allowing their AI to make decisions in war, have no qualms about the mental issues they’re causing in people, and have abused workers in 3rd world countries for years.

But you think they’re holding out on “destroying the software industry” out of the goodness of their hearts? Come on

reply
I think his reasoning was pretty clearly presented as not the goodness of their heart but rather the medium to long term predicted outcome on their bottom line. Ultimately failing or getting tangled up with regulators any more than necessary is to be avoided. If you move too early and it chases people away from your platform which undermines your ability to keep innovating then a competitor who held back will ultimately eat your lunch.
reply
But then there is no safe way for them to "mortally wound" the software industry. The full argument is moot.

I would add there are more reasons why this wouldn't work: costs due to OOM more usage, adoption/AI backlash, adversarial environment, players with big head starts (Google).

reply
Yes, I believe the original commenter made that exact point.

You don't need to personally win in order to mortally wound someone. It can be informative to speculate about whether or not something is possible regardless of it being strategically advisable in the current context.

reply
Buying Astral to get uv is a wound but not a mortal one, because it got forked this weekend.
reply
Why would investors keep paying OpenAI’s engineers and power company, if they were on an obviously self-destructive trajectory?
reply
How is it "more profitable" to keep spending more than they make?
reply
By this logic maybe AMD is holding back on making ROCm usable because it would crash chip margins and the global economy with it, so they let Nvidia have all the fun instead. It’s selfless, really.
reply
> they're just holding back, because they realize that actually succeeding at this will instantly kill (or at least mortally wound) e-commerce

They definitely would if they could. They desperately need money. They already told the whole world they want to replace them, they just can’t.

reply
Why do you foresee OpenAI’s involvement in the software business mitigating the resistance to interoperability and companies making money through productization? If they were actually interested in solving those problems instead of trying to secure themselves the biggest slice of economic pie, wouldn’t they have been happy about Chinese companies distilling their models? Are they engagement-juicing in their heavily subsidized service à la Uber because they’re interested in promoting a better future for humanity? I’m skeptical.
reply
> This is a more narrow version of my belief that general AI tools like LLMs fundamentally don't fit as additions to products, but rather subsume products

That seems reasonable, it's just yet to be seen whether LLMs are a form of artificial intelligence in any meaningful sense of the word.

They're impressive ML for sure, but that is in fact different from AI despite how companies building them have tried to merge the terms together.

reply
What I'm saying is not (directly) related to whether or not LLMs are "true AI" or not. It's sufficient that they are fully general problem solvers.

A software product (whether bought or rented as a service) is defined by its boundaries - there's a narrow set of specific problems, and specific ways it can be used to solve those problems, and beyond those, it's not capable (or not allowed) to be used for anything else. The specific choices of what, how, and on what terms, are what companies stick a name to to create a "software product", and those same choices also determine how (and how much) money it will make for them.

Those boundaries are what LLMs, as general-purpose problem solvers, break naturally, and trying to force-fit them within those limits means removing most of the value they offer.

Consider a word processor (like MS Word). It's solving the problem of creating richly-formatted, nice-looking documents. By default it's not going to pick the formatting for you, nor is it going to write your text for you. Now, consider two scenarios of adding LLMs to it:

- On the inside: the LLM will be able to write you a poem or rewrite a piece of document. It could be made to also edit formatting, chat with you about the contents, etc.

- From the outside: all the above, but also the LLM will be able to write you an itinerary based on information collected from maps/planning tool, airline site, hotel site, a list of personal preferences of your partner, etc. It will be able to edit formatting to match your website and presentation made in the competitor's office tools and projected weather for tomorrow.

Most importantly, it will be able to do both of those automatically, just because you set up a recurring daily task of "hey, look at my next week's worth of calendar events and figure out which ones you can do some useful pre-work for me, and then do that".

That's the distinction I'm talking about, that's the threat to software industry, and it doesn't take "true AI" - the LLMs as we have today are enough already. It's about generality that allows them to erase the boundaries that define what products are - which (this is the "mortal wound to software industry" part) devalues software products themselves, reducing them to mere tool calls for "software agents", and destroying all the main ways software companies make money today - i.e. setting up and exploiting tactics like captive audience, taking data hostage, bundled offers, UI as the best marketing/upsale platform, etc.

(To be clear - personally, I'm in favor of this happening, though I worry about consequences of it happening all at once.)

reply
> That's the distinction I'm talking about, that's the threat to software industry, and it doesn't take "true AI" - the LLMs as we have today are enough already.

They most certainly are not. With the current state of LLMs, anyone who puts them in charge of things is being a fool. They have zero intelligence, zero ability to cope with novel situations, and even for things in their training data they do worse than a typical skilled practitioner would. Right now they are usable only for something where you don't care about the quality of the result.

reply
> and it doesn't take "true AI" - the LLMs as we have today are enough already.

I believe that relatively few people would agree with you on that point. LLMs aren’t good enough (yet?), and very obviously so, IMO, to be autonomous problem solvers for the vast majority of problems being solved by software companies today.

reply
What you lose is control. Even in the case of an actually-intelligent agent, if you task a subordinate with producing a document for you, they are going to come up with something that is different from exactly what you had in mind. If they are really good, they might even surprise you and do a better job than you'd have done yourself, but it still will be their vision, not yours.

Your notion of a "mortal wound" to the software industry seems to assume that today's SaaS portals are the only form that industry can take. Great software is "tool calls for agents". Those human agents who care about getting exactly the result they want will not be keen on giving up Photoshop for Photoshop-but-with-an-AI-in-front-of-it.

reply
> but rather subsume products, and this makes them an existential threat to the software industry.

The US stock market has priced this in already. Many software-only companies are perceived to be under threat from AI. It represents a wonderful arbitrage opportunity for AI skeptics, in fact.

reply
> perhaps they're just holding back

Considering the money they need, they over promise and under deliver.

reply