> I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.

The addictive toxic content will go the way of tobacco and explore new markets.

Back in 2010, around 11% of Indonesia's population was connected to the internet. Today it's closer to 80%, largely via mobile phones. That's approximately 200 million new users.

Nigeria and Pakistan are going through the same change, just started later.

Since 2016, India alone has added more users than the countries mentioned above combined.

That's a lot of first generation users. More than the entire western population.

reply
I'm reminded of a video from the 80's/90's where researchers took a TV to the Amazon to see how "live off the land" tribes reacted to high technology. Apparently they stopped doing everything and just wanted to watch TV all day. And that was just regular old TV.

Short form video is a special kind of crack. I see even old people getting hypnotized by it. And even worse, they're terrible at determining if something is AI.

reply
I'm gonna try to remember this comment for the next time someone brings up the boiling frog analogy.

Which is usually back to back with the thought that in bygone times "the human mind used to be cleaner / healthier / smarter and it was slowly destroyed by modern living"

There's not that much difference between our behavior and that of a chicken fixated on the chalk line in front of it.

reply
This. What really happened is that someone figured out what makes people give something their undivided attention and is profiting handsomely off of this finding.
reply
In the 19th century, many authors lamented the frantic, unhealthy pace of modern life.
reply
“The world is too much with us” - W. Wordsworth

The world is too much with us; late and soon,
Getting and spending, we lay waste our powers;—
Little we see in Nature that is ours;
We have given our hearts away, a sordid boon!
This Sea that bares her bosom to the moon;
The winds that will be howling at all hours,
And are up-gathered now like sleeping flowers;
For this, for everything, we are out of tune;
It moves us not. Great God! I’d rather be
A Pagan suckled in a creed outworn;
So might I, standing on this pleasant lea,
Have glimpses that would make me less forlorn;
Have sight of Proteus rising from the sea;
Or hear old Triton blow his wreathèd horn.

https://www.poetryfoundation.org/poems/45564/the-world-is-to...

reply
And boy, were they right.
reply
Can anyone come up with a citation for this?

Not to say it's a hallucination, but by modern standards, if this were publicly funded research, it seems like it would have been a gross violation of ethics or other non-technical criteria. Interested to see how people think of it in later years, e.g., now.

reply
It's a particularly misleading anecdote.

In a sufficiently isolated population, you get the same effect from a sound-making greeting card, or a battery powered light and/or sound toy from a carnival.

And for what it's worth, tomorrow they don't miss whatever “indistinguishable from magic” thing, so no harm done.

// grew up near such areas

reply
On TV, content changes all the time. It is "always new". In your examples, the content is the same over and over. It would not stay fascinating for long, because the novelty would wear off. Very different.
reply
> On TV, content changes all the time. It is "always new".

Personally I think this is also what makes Reddit so addictive. I want to read all the threads on the subreddits I enjoy... which is impossible, because there are always new interesting posts.

reply
"coding is the prime use case for this where you can make money"

Is it?

I have the impression GenAI deteriorates the internet both from a content and tech perspective.

Bots that waste your time because they don't work well or because they are pushing an agenda, and low quality content that floods social media from people who want to make a quick buck.

GitHub and AWS have become increasingly unstable. X, Instagram, and WhatsApp are suddenly sprinkled with subtle bugs.

Everything just got faster and we got more of it, but none of it is good anymore, because everyone tries to replace 90% of their work with GenAI instead of maybe starting at 10-20% and adding more once they're sure it works.

reply
I fear people will just get used to it. Nobody gets tailored clothing anymore, and people don't question that we have standardized sizes that don't really fit anyone properly. People commonly buy standardized furniture and rarely get something made specifically for their room. If cheaper software (that's mostly what this is) gets the job done, we will probably just keep doing that, even if it means we lose something in the process.
reply
This has been the story for over a decade. Things are easier: the cloud, more CPU, more RAM. No one really pays attention to performance, detail, and the little things. There is no craft in anything - just FEATURES.

AI will just make this so much worse - a race to the bottom of dull mediocrity.

reply
Yeah but buying a sofa from Ikea doesn't let people steal my banking passwords. There are serious consequences to software bugs that there aren't in cheaper ready-made clothing.
reply
Side point, but the clothing industry is one of the biggest polluters in the world.
reply
Fair.

I just have the feeling that it doesn't get the job done anymore.

I hope we will see the rise of alternatives.

reply
Yeah, someone wrote: "the future of apps: one user, me".
reply
Your analogy is one indirection from being a fit. Factories usually get custom solutions for their production facilities, tailor-made by specialist engineers. They then run the production and deliver mass-produced goods to the markets. We software engineers aren’t delivering tailor-made solutions straight to the consumer markets. We are much more like the engineers who set up the machinery in the production facility, and our software is much closer to that machinery than it is to the mass-produced table you buy at Ikea.
reply
I am old enough to remember the outages of AWS, GCP, and Azure that predate the gen AI thing. And of course the countless, endless, hopeless procession of bugs in just about everything else.

I am running it in a large mid-cap company (~$25bn revenue). For the first time we are releasing stuff that does not suck, and we are releasing it 5x faster than before. It's real for us, and produces real, measurable economic value.

Now, how Anthropic or Google makes any money on those 250-per-month subs, I have no idea.

reply
That's kind of my concern so far. We haven't seen a lot of big AI deployment success cases, but of the few mildly successful ones we HAVE heard of, they're 100% about cost saving / perceived efficiency and never about actually making a _better_ product or service.

I think it factors into why public perception is increasingly anti-AI. It'd be one thing if people were losing jobs, but on the other hand, their daily chores were done by a robot. Instead, people are losing (or fearing losing) their jobs, while increasingly having to fight with AI chatbots for customer support and similar cost-center use cases.

It's like AI is the "high fructose corn syrup" of tech. Nobody's arguing the output is better--it's just a lot cheaper and faster to get there, so that's its legacy. Making things cheaper and worse.

reply
Fake support contacts from companies are another use case. They send you in endless useless circles until you give up.

Saves the company a ton of money

reply
The level to which this stuff can be used against the common person is truly astounding.
reply
Well, tbh I think it's like cloud in 2007-2009. I was highly skeptical and heckling, while running on managed bare metal, every time there was an outage. But now cloud is the standard model for pretty much anything. And I think AI becomes the gold standard for code in the long term. So yeah, right now lots of outages. In a couple of years it'll be much better. And in ten years people will default to automation via AI.
reply
> where you can make money and have a really profitable business

I am not convinced. Nobody is making money, every player is losing money hand over fist.

reply
With coding (it's not really coding per se that matters, imo; it's more like dynamic logic writ large), it's a land-grab strategy. They want to get established as the de facto standard and get a whole bunch of people on their platform, so by the time they need to "get profitable" they have a captive audience and a leg up on other labs. It's a tale as old as time; that's why Uber rides used to be priced below cost.
reply
It's a strategy as old as time, but it's a strategy that usually fails. Spending a lot of money on customer capture only works when customers are actually solidly captured. Most markets have fairly heavy competition and customers will only stay captured as long as there is no substantial cost to staying captive.

Take Uber as an example: yes they've raised prices to become profitable, but not to the insanely profitable levels they could if they had a true monopoly. People will stay on Uber when the competition is still at a roughly equivalent price, but will switch if Uber raises its prices enough.

Uber Eats is different, since it's a three-sided market where the cost is paid by the restaurant rather than the user.

It appears AI is going to be more like Uber the car service. Claude can charge $200/month, but charging $2000/month seems unlikely to work. I'm sure many would be willing to pay $2000/month if they had no alternative, but there are alternatives.

reply
> it's a strategy as old as time, but it's a strategy that usually fails

I like to call this the "Yahoo Effect"

reply
> They want to get established as the de facto standard and get a whole bunch of people on their platform so by the time they need to "get profitable" they have a captive audience, a leg-up on other labs. It's a tale as old as time, that's why ubers used to be cheaper than cost.

Some of that is seeking to kill competitors before they can get established. That's normal and has been around for generations, if not since trading was invented.

But most of what we've seen during the "enshittification age" has been burning money until you achieve a critical mass of users. However, this only really applies to social platforms, where the point is communicating with people you know. That's the lock-in. You convinced Grandma to join Bookface, and now you feel bad leaving if she doesn't leave at the same time. And more importantly, who wants to join Google Square if nobody else uses it?

That's not going to work for AI platforms.

What I do see potentially working is one method that email platforms use to lock in users: having tons of data you can't export/migrate. If you spent lots of time training your AI by feeding it your data, that's going to make it harder to leave.

So far none of them have capitalized on this (probably due to various technical reasons) but I expect it to start eventually.

reply
The lock-in of email platforms is the address. With IMAP you can extract the messages right away and migrate. Yet you would still have to check the old mailbox for stray emails, telling each sender to reach you at the new address. And continue doing so for years, or risk missing some critical email.
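As a sketch of that extraction step, Python's stdlib imaplib is enough (hosts and credentials below are placeholders, and a real migration would also carry over flags, dates, and the full folder list):

```python
import imaplib

# Placeholder hosts/credentials -- substitute your own providers.
OLD_HOST, OLD_USER, OLD_PASS = "imap.old-provider.example", "me@old.example", "app-password"
NEW_HOST, NEW_USER, NEW_PASS = "imap.new-provider.example", "me@new.example", "app-password"

def migrate_folder(src, dst, folder="INBOX"):
    """Copy every message in `folder` from src to dst; returns the count."""
    src.select(folder, readonly=True)        # read-only: never touches the old mailbox
    _, data = src.search(None, "ALL")        # data[0] is b"1 2 3 ..." message numbers
    moved = 0
    for num in data[0].split():
        _, msg_data = src.fetch(num, "(RFC822)")
        raw = msg_data[0][1]                 # full RFC822 message bytes
        dst.append(folder, None, None, raw)  # APPEND stores it verbatim on the new server
        moved += 1
    return moved

# Usage (needs live servers, so commented out):
# src = imaplib.IMAP4_SSL(OLD_HOST); src.login(OLD_USER, OLD_PASS)
# dst = imaplib.IMAP4_SSL(NEW_HOST); dst.login(NEW_USER, NEW_PASS)
# print(migrate_folder(src, dst))
```

The messages move; it's the address itself, as the thread notes, that you can't take with you.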

Incidentally, bringing your own address that can be migrated away is somewhere between expensive and impossible.

reply
No, you can do it on all the major providers for either no or low cost.
reply
Disregarding the grandfathered free accounts, a custom domain is $7.20/user/month on Gmail and €5/month on Proton. On Microsoft it's a business-tier feature, and AFAIK it's not supported at all on Yahoo.
reply
Zoho Mail Lite is $1/user/mo when billed annually.

https://www.zoho.com/mail/zohomail-pricing.html

A few DNS hosting companies still bundle a few free email mailboxes with registration, but that is becoming rarer.

reply
It's not that there is no path to profitability (they make a ton of money on inference); they just spend a lot on R&D.
reply
> they make a ton of money on inference

So it is stated, but is it actually true? I am not convinced.

Besides, it's not as if they can suddenly stop training models; the moment you do that, you've spelled a death sentence for profitability, because Google and open source will very quickly undercut a 15-year break-even timeline.

reply
Agreed, the revenues are big, but very small next to the datacenter bills. Even if only a fraction of those is going to inference, it's hard to argue they even break even. And that's before all the other costs (Super Bowl ads, billions in compensation).
reply
It's widely reported and acknowledged as true.
reply
Well, the only people with any ability to acknowledge it have a massive incentive to do so, and I've been around the block enough times to know that startups will use every trick in the book to paint a rosy financial picture, even when it's extremely misleading or occasionally just straight up lies. In the current climate of AI hype my skepticism is even greater.

I'll believe it when I see it.

reply
Where, and by whom? Critical context is missing here.
reply
The CEO hyping his product and the viability of his business during an interview with Stripe does not, at least to me, qualify as “widely reported and acknowledged”
reply
From what I understand, the issue with inference is that it doesn't scale as user count grows the way traditional SaaS scales. In typical SaaS, adding users requires very little additional capacity. With inference, however, supporting more users requires much more capacity to be added. I don't know if it's quite linear, but it certainly requires more infrastructure to support additional LLM users than, say, a web application.
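A toy margin model makes the difference concrete (every number below is invented purely for illustration):

```python
def monthly_margin(users, sub_price, fixed_cost, unit_cost):
    """Revenue minus fixed costs and per-user marginal cost, per month."""
    return users * sub_price - (fixed_cost + users * unit_cost)

# Classic SaaS: serving one more user costs almost nothing,
# so margin grows nearly linearly with user count.
saas = monthly_margin(users=10_000, sub_price=20, fixed_cost=50_000, unit_cost=0.10)

# LLM inference (invented numbers): every active user burns real GPU time,
# so the per-user cost stays large at any scale.
llm = monthly_margin(users=10_000, sub_price=20, fixed_cost=50_000, unit_cost=18.00)

print(saas)  # 149000.0 -- healthy, and improving with scale
print(llm)   # -30000.0 -- underwater despite identical revenue
```

The point isn't the specific figures; it's that a large `unit_cost` means adding users can't dilute your costs the way it does in ordinary SaaS.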
reply
And the existing infrastructure routinely struggles for several of the well known players. You can literally tell when it's getting bogged down by workload. And that's after all the absurdly large datacenters we've already established at significant expense (to both the corporations and the average person).
reply
AFAIK Anthropic still loses money on their main product in this space: Claude Code and their Max plans.
reply
This became immediately clear to me over the weekend when I used Opus via API key. I had it review the code for my (relatively small) personal blog to create an AGENTS.MD - it cost me $3.26.
reply
Same here... The API costs are absolutely insane for any real usage. This is either pricing set high to make sure no profitable competitor to the Claude workspace or other agent systems emerges, or heavy subsidizing of their own solutions.
reply
API cost need not correlate with running cost.
reply
Not really. They are burning money on hardware, resources and payroll without meaningful return prospects.
reply
Frontier model developers don't make money, but inference providers do. For open weight models there is a healthy market of inference providers that operate profitably without VC backing.
reply
Such as? Where do we find these open weight model providers? Why is hardly anyone talking about them or sharing links (here or elsewhere) if they are so wonderful and profitable?
reply
Go to https://models.dev/ and you're going to see plenty of providers.

OpenRouter makes it easy to use them, just add credits to your account.

I thought this was common knowledge to anyone looking to use an inference API, but it seems it isn't. Well, even AWS is in this business with Bedrock.

reply
Why is hardly anyone talking about basic web hosting providers or sharing links (here or elsewhere) if they are so wonderful and profitable?

Because few people really care much about the commodity hosting world. They're not making waves, they're just packaging things made by others for a low-ish cost. They're also not very consumer-focused, as they're a bit lower level than what most people prefer to think about. It doesn't mean they don't exist or that they're not profitable though, just not headline-reaching numbers in the end.

reply
CoreWeave's cash flow does not look too healthy.
reply
Yes. They are just pivoting to stuff that loses money more slowly but maybe has a path to profits eventually…
reply
Some of these AI companies that promised AGI are going to find out that they're actually IDE plugin subscription companies
reply
Coding is a small minority of total generated tokens. It's easy, swimming in tech waters all day, to think Claude is the pack leader because it writes excellent code, but the reality is that tokens overwhelmingly come from OpenAI and Google doing mostly stuff like "Make this e-mail sound nicer" and "What's a cheap vacation spot with warm turquoise waters".
reply
> "Make this e-mail sound nicer" and "What's a cheap vacation spot with warm turquoise waters"

Right, but I think a lot of these use cases aren't replacing any jobs, because it wasn't anyone's job. It's just a little polish on existing work (did spell correction in Word kill jobs?), or the stuff that voice assistants have been promising for 10 years.

reply
Both of those things both were and are jobs. They're called secretaries and travel agents.
reply
Jobs that have already been killed is my point
reply
Together that's about four million American jobs, so I'd disagree that those jobs have "already been killed".
reply
deleted
reply
I think it remains to be seen if LLMs are even 25% as good at everything else as they are at coding.. which is fine, if they focus and stop promising the world.

That alone is huge, if they let go of their egos about putting the entire white-collar class out of work..

reply
Nvidia's CEO said we already have AGI :)
reply
Ad-generated income
reply
We could argue all day about what should be at the forefront, but addictive content isn't going anywhere, because addicts pay up.

In this case, maybe not enough to offset the costs; or maybe it just wasn't addictive enough. But it's still early days.

reply
> because addicts pay up.

I think it turns out they don't, not really anyway. And that's exactly why Sora is dead. They figured out that addictive AI slop has been so thoroughly commoditized that you can get it on a ton of other platforms for free, so people don't want to pay for it.

reply
Sometimes they do pay up. Google Gemini estimates that 25% of daily active YouTube users pay for ad-free service. I know my wife and I do, and we watch a huge range of YouTube material, more hours a month than all the other streaming services we subscribe to. There is no area of human knowledge or human interest that YouTube doesn’t have a ton of material for; and of course, the animal videos… The irony in the Sora service being cancelled is that neither my wife nor I watch AI-generated material.
reply
I think the real answer is that Sora-style AI slop videos just aren't as addictive as we thought they'd be.

I let my kids have access to the app in the hope they would be inoculated against being obsessed with AI video and it actually worked. They got bored in like 2 days.

It simply doesn't compare well with handcrafted short form videos that are already plentiful on TikTok (which I absolutely don't let my kids watch).

reply
Yes, fortunately slop is pretty unwatchable after the novelty wears off. Even the lowest-common-denominator stuff NFLX churns out is in a different league.

I was talking to other people re: the difference between code and other domains. Code is, for the customer, what it does, not how it does it. That is, we can get mad about style, idioms, frameworks, language, indentation, linting, verbosity, readability, maintainability, but it doesn't really matter to the customer as long as the code does the thing it's supposed to do.

Many things like entertainment products don't work that way. For a good book/movie/show, a good plot (the what) is table stakes. All of the how matters - dialogue, writing style, casting, camera/sound/lighting work, directing, pacing, sound track, editing, etc.

For short format low stakes stuff like online ads, then the AI slop actually probably works however.

Same for, say, making a PowerPoint. LLMs can quickly spit out a passable deck, I'm sure. For a lot of BS job use cases, that's probably fine. But if it's the key element of a sales pitch, it's really just advanced auto-formatting/complete, and the human element is still the most important part. For example, I doubt all the AI startups are using AI-generated sales pitches when they go to VCs for funding.

reply
IMO slop fits best for "art that isn't the point".

A promotional flyer for an event could work perfectly well in plain text. The art is pure social signal - this event is thrown by the type of people who put art in a certain style on their flyers. Your eye is caught and your brain almost immediately discards the art.

Same with PowerPoint - you make a PowerPoint so that everyone knows this decision was made by the type of people who make PowerPoints. A txt file and a png would have gotten the job done.

Same also with memes - you could just _say_ a lot of these jokes, but they're funnier with a hastily-edited image alongside.

reply
Agreed, it's good at placeholder art for which entertainment consumption is not the point. Clip Art for the new generation.
reply
>> you can get it on a ton of other platforms for free, so people don't want to pay for it.

What happens when other platforms start trying to get people to pay? I think there's a race to find a revenue stream for this stuff. As soon as one company can find a way to monetize it, they'll all end up doing it. Right now, we're in a place where companies are losing so much money, they have to decide how much they can lose before they pull the plug.

OpenAI just proved you cannot burn money indefinitely.

reply
The monetization of social media has always been about steering otherwise non-paying users into making purchases elsewhere. So if the AI slop can make people spend money on other products, that accomplishes the goal.
reply
Coding is one topic, but the big one is agentic AI.

You will have an agent that acts as, say, your SEO expert; this agent will be able to use common tools like Google's and Facebook's SEO tooling, and you will teach it how you want it to do its 'job'.

You will have a way of delivering your requirements to it; it will run in the background, might ask for feedback, but will otherwise do stuff similar to whatever the person doing the job before did.

There might be some transition phase, like verifying the real person's output against the agent's, then moving over to validation only, until the agent is on average as good as a human. Then the human will be gone.

Agentic AI will take over basic support tasks first (it's actually already doing this), then more complicated things, and so on.

For this we need an ecosystem, i.e. the agentic AI platform and the interconnect between agents and tools, and this stuff is currently getting built by someone, one way or the other.

At scale we need more capacity, and these agents will also cost more than a $20 subscription.

But if you have, let's say, an SAP agent, it will be built once, trained once, and then used by everyone. Instead of a person using an HR system or billing system, the agent will bridge the gap between data and system.

reply
This is a pipe dream, models are mistake machines and agents are mistake amplifiers.

This only "works" for toy projects: things that don't really matter, nothing that can cost you business, money, clients, or time.

reply
I see where you are going with this, but IMO this is not a technical problem but a legal problem.

Who will be held responsible when an AI agent messes up the HR system and the company is exposed to losses due to a mistake? Who is going to be responsible when your SEO agent overspends?

Ultimately, it's going to be you most likely, because I can't see AI firms taking this responsibility.

You might argue that right now it also falls on the employer, since employees are rarely held responsible for genuine mistakes, even if they end in disaster. However, you have a lot of agency over what an employee is doing. Their motivation is generally correlated with doing well, because past success ensures future career growth.

An AI agent has no such incentives. The AI company will just charge you some minimal fee to provide the service, and if it messes up, will wash their hands of responsibility and tell you that you should've been more careful in using it.

I dislike Taleb for various reasons, but using AI agents is basically the definition of a fragile system. It works 99% of the time, lulling people into this sense of security where they can just offload all their work very conveniently. And then 1% of the time (or 0.01% of the time), it ends in utter disaster, which people are very bad at dealing with.

reply
I think it will move most critical due diligence into the tools / HR systems themselves.

Encoding more rules, more precise rules, and alerting a human when things look off. A 20% salary increase gets flagged automatically; a revenue drop by x% too.

It could even go so far that the makers of these systems will insure you for their use.

It just needs to be cheaper than all the humans in the loop, and once you train it, you can copy it unlimited times. The scaling effect of software, for tasks where we otherwise have to train humans again and again.

It could also be agent systems that do this. Like one company building and designing the US healthcare HR agent specialized in SAP HR, and another one building the Brazil healthcare HR agent specialized in a different HR product.

Humans are really expensive, and you have to train them regularly, every single one of them.

reply
> coding is the prime use case for this where you can make money

Which makes me wonder... what's the business model long-term of AI generated art places?

reply
A growing business right now is using AI art for product images on Amazon and similar listings. There are lots of ComfyUI workflows for it: you put in a picture of the product and some photos of people, and it spits out images of the people wearing it.

Many product images are currently done in Photoshop or similar, but this is quicker and can look more realistic.

It may not accurately represent how the product will actually look when worn, but that's not the seller's primary concern.

reply
> "reality and gravity are pulling them back"

I like the framing of trying explosive things to escape the pull of gravity. When applied to rockets, it means a lot of stuff blowing up, which again seems apt.

reply
Bubbles either inflate or pop...

But I'm not sure we would even notice nowadays. It used to be a disaster that could take people's attention for years, but currently, it may get lost in the noise.

reply
>I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.

They're not; they just already have the habit formed with the place they go to for that. Ultimately anything worth seeing on Sora will be reposted to TikTok.

reply
For OpenAI, that was and felt like some side hustle they were playing around with, nothing more.

Having Disney on their side was definitely quite a smart/interesting move.

From at least one interview, they definitely had resource issues last year, and teams had to fight for capacity. It could easily be that Sora was always prioritized down, and they realized it doesn't make sense to spend that much capacity on it while not being able to push their main model.

reply
It never made sense and was always just burning resources that OpenAI does not have.

It reeks so much of desperation. They know they are running out of goodwill and money at breakneck speed. They are just flailing and throwing shit against the wall to see if anything sticks.

reply
Everyone is doing image generation. It's relatively easy, and I would say it would drive people away if OpenAI didn't support it.

So they need to be able to do image generation, for which they need image data. They also need to be able to analyze videos for more and better training data, e.g. for teaching their models from YouTube and other sources.

So they have image generation, an image dataset, and a video dataset. It's not far-fetched at all, or desperate, to leverage this base for playing around with video generation.

And despite how much money they burn, for a company that size, trying out video generation wasn't that big a stretch.

I'm really surprised by their move, and can only imagine that the progress of models from Google and Anthropic has pulled their teeth, and they no longer want to spend the compute (not the money), preferring to reserve it for their main models.

reply
Oh yeah. OpenAI didn't have a major image update in a while, no?
reply
Their latest model is from December, but tbh I have not heard much about it.

Nano Banana created a lot of noise.

But the reasoning of Gemini 3.1 Pro is really, really good. It's hard to describe how good it became. I do not see the same quality from OpenAI. OpenAI, though, is also super fast in response, a lot faster than just a few months ago.

For example: some German guy used the wrong word in describing an advantage of having a silencer. OpenAI just said it was nonsense; Gemini suggested that it was a typo and he meant to write something else (Gemini was correct).

It could also be that we are in a lull between "why is AGI not here yet" and "we need to build the agentic platform stuff now, and that takes time".

Gemini Pro is definitely slower than OpenAI, and I do not know if that's because I use the pro version of Gemini but not of OpenAI. But it could also be that OpenAI still has to work on subagents, because Gemini definitely uses subagents, and I was not able to find a source saying OpenAI does this too.

reply
I also prefer seeing a corporation like Google do it, for two reasons: generative content might feed their cash cow, also known as “YouTube”, and Google already has a good base for coding assistants. Google owns, I think, 25% of Anthropic and earns money selling compute infrastructure to Anthropic. Personally I think Antigravity (with Claude and Gemini) and gemini-cli firmly keep Google in the running as far as AI coding tools go. I want to do business with companies that have a sustainable business plan. Google’s AI products for tech work, plus ProtonMail’s Lumo+ product for private daily web search and chatbot functionality, are enough for me; I used to chase every commercial AI offering, but not anymore.
reply
Claude now runs on Google TPUs...
reply
Had Waffle House with some friends who mostly work in blue-collar industries. One guy who works at a timber mill used Claude Code to redo their ordering system. It took him about a month to go from knowing nothing about Claude Code to finishing the system. He basically just copied a proprietary software product that costs them upward of $20k a year. They’re keeping the other product to cross-check, but so far the Claude-coded version works great, and is of course more custom to their business. The dude’s a hero at work because the system is head and shoulders better.

Obviously caveat emperor but there are a lot of real world scenarios like this.

I think Anthropic and OpenAI are trying to be all cool and Apple-y with their branding, but these use cases are just tools getting work done. Most normal people don’t need or want AGI, or even AI slop videos. They just want their invoicing system to just f-ing work for a change.

reply
> They just want their invoicing system to just f-ing work for a change.

Time will tell, but I'm dubious this will hold longer-term. I don't doubt that Claude can write the code, but I am dubious Claude can manage it sanely. Does it have backups? Does the guy that wrote it know how to restore those, or can Claude do it? Can Claude upgrade the backend and/or migrate the data when the backend changes, or is this going to be running known CVEs in a month?

This has sort of always been a thing via hiring CS students as interns. I don't doubt most of them could jam out something that looks like Slack or Gmail. The problems aren't apparent immediately, they become apparent when you realize it doesn't handle invalid responses well and the backups are hosed so you just lost a bunch of data.

reply
I'm converging on this as the real end state: it's a "better Excel" for general business work. And has some of the same limitations - maintainability and security. But there are also plenty of small businesses that run off a shared Excel spreadsheet and a few mailboxes.

Nobody ever really solved making CRUD apps easier through better frameworks. So now we have a tool to spit out framework gunk, and suddenly everyone can have their own app.

reply
> caveat emperor

s/emperor/emptor

I hope your friend's company spends $20K to harden the deployment of the new app so it doesn't become a deep liability.

reply
Keep dreaming!

The best part is that they'll get popped because of it and have zero clue. Anyone building on a frontier provider with little background in software is creating all kinds of new liabilities that didn't exist before.

In a school district where I live, the IT department developed a password distribution app using Gemini on Google Apps Script (they didn't even need this part) and sent out links with base64-encoded JSON that included: student name, student email, parent email, and student password. When I found it and told them all the ways it was technically a breach in our state, they ran to their 2-bit "cyber security experts" and "legal". They were far more concerned with CYA than with understanding the hole they had dug themselves. All the advice they got back was that it wasn't a breach; they claimed their DPA with Google protected them. I explained how email works and they just ignored me, likely because in our state they are bound by GDPA and won't ever engage in a legitimate conversation via email.
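For anyone unclear on why base64 in a link is a breach: base64 is an encoding, not encryption. A minimal sketch (field names and URL are hypothetical, for illustration only) of how trivially such a token is read:

```python
import base64
import json

# Hypothetical payload like the one described; these field names are
# assumptions. Base64 is reversible by design -- anyone holding the
# link (or any mail server relaying it) can recover the plaintext.
payload = {
    "student_name": "Jane Doe",
    "student_email": "jdoe@district.example",
    "parent_email": "parent@home.example",
    "password": "correct-horse",
}

token = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
link = f"https://reset.district.example/?data={token}"  # what gets emailed

# "Cracking" the token takes one line, no key required:
leaked = json.loads(base64.urlsafe_b64decode(token))
print(leaked["password"])  # prints: correct-horse
```

Any of the intermediaries that handle the email see the same thing, which is why a DPA with Google doesn't cover it.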

The kicker here is they pay for an IDP with built-in mechanisms for password resets (that was the reason for building this: to reset students passwords). One of their cyber security "experts" (a lone guy who has zero credentials from what I found) told them that password resets using the IDP was "not recommended". When pressed on that they were, again, silent.

LLMs are creating a huge mess for people now empowered to go well beyond their capabilities and understanding. It's a second coming of the golden age of shitty software that's riddled with even the most basic of security flaws.

reply
I'm just going to keep building software mostly traditionally, while using "AI" to help me research things quicker (might as well use it while it's here), survive the shitpocalypse, and then laugh as traditional-minded developers become a scarce sought-after resource again.

Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.

reply
> Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.

Whatever you do, don't click this link: https://github.com/garrytan/gstack/

reply
I think this is where a lot of freelance contractors could pivot to: basically "last mile" coding, where the LLM does the front-end work and then high-hourly-rate engineers come in and fix it. It'd still be cheaper than a lot of the industry niche software, which is usually pretty bad.
reply
thanks for the correction

I hear you but at least as my bud described it, the software that most of the timber mill industry uses is buggy as hell, crashes all the time, and makes mistakes. One would wonder if even the licensed software is hardened.

reply
>Sometimes I think my opinion means nothing on these topics, especially when it's going to get buried in a thread of 500 plus comments.

Ironically, starting your response with this guarantees a lot of people won't read it. It's the same as going on reddit and starting a reply with, "Nobody will see this but", and hoping that people try to prove you wrong by reading and commenting on it. I stopped after the first sentence. People really have to stop with the clickbait vomit way of writing.

reply
> I think consumers are slightly smarter now that they don't want to be drawn into this kind of addictive toxic content.

Considering the million-plus view counts I see AI slop getting on FB and YouTube, I'm not seeing this behaviour play out.

reply
I had fun with it for about a week, but the thing that disappointed me the most wasn't the technology, it was the _people_. You have a machine that can make anything you can imagine, and the space of what people were exploring was so _small_.
reply
[flagged]
reply
I'd argue that for informal uses like HN, this is very much okay! It's grammatically correct and gets the point across. And most importantly, these paragraphs read more like someone's personal voice than some pithy but edited-to-death couple of sentences.
reply
> gets the point across

If people don't read because the text is an unreadable mess, none of the points get across.

reply
I'm a people. I read it. If you call this an unreadable mess I really don't know what to say. Language is awesome, and it's awesome we can create infinitely long sentences with it. And like open source, if you don't like it, write the one you like :)

A long time ago on the myspace forums there was this slightly weird but also very wise and smart person who wrote without any punctuation or paragraphs, ever. Although they were generally liked and part of the community, I think I was the only person who read every single one of their comments in full, religiously, once I realized how insightful they were, and I was richer for it. I could have told them the obvious, how their posts differ from most others on the forums; and they would have posted with less joy and maybe less overall, that would have been it.

reply
While I don't agree with the other poster that the comment was a mess, the sentences were so long that I had to focus not to lose the point. I think the top comment read a bit too much like stream of consciousness, which I tolerate much more in spoken speech than in writing. Still, I liked the comment, but agree it could have been improved.
reply
I'm also a people but I stopped reading after the first paragraph.
reply
It might be a surprise to you, but there are plenty of people who are willing to read one or two paragraphs of words.
reply
I'm comfortable reading much more than two paragraphs, even in online forums. In this specific case, unreadability is because of poor sentence structure. I quit in the middle of the second sentence.
reply
tbh I quite like the style, I get the train of thought and am sure it wasn't written by an LLM.
reply
[dead]
reply
[flagged]
reply
> I feel like they say one thing and do something else or they say one thing and the agenda or something else.

    [...] do not ye after their works: for they say, and do not.

    For they bind heavy burdens and grievous to be borne, and lay them on men's
    shoulders; but they themselves will not move them with one of their
    fingers.

    But all their works they do for to be seen of men [...]
> And again, I don't know how helpful it is to comment like this, but I feel like if you understand the truth then you should speak the truth even if it only benefits one other person to hear it.

    [...] they seeing see not; and hearing they hear not, neither do they understand.
That man was later nailed to a plank for literally no reason.

Nothing is new under the sun.

reply