Addictive, toxic content will go the way of tobacco and expand into new markets.
Back in 2010, around 11% of Indonesia's population was connected to the internet. Currently it's closer to 80%, largely via mobile phones. With a population of roughly 280 million, that's approximately 190-200 million new users.
Nigeria and Pakistan are going through the same change; they just started later.
Since 2016, India alone has added more users than the aforementioned countries combined.
That's a lot of first-generation users - more than the entire Western population.
Short-form video is a special kind of crack. I see even old people getting hypnotized by it. Even worse, they're terrible at determining whether something is AI.
Which usually goes hand in hand with the thought that in bygone times "the human mind used to be cleaner / healthier / smarter and it was slowly destroyed by modern living".
There's not that much difference between our behavior and that of a chicken fixated on the chalk line in front of it.
The world is too much with us; late and soon,
Getting and spending, we lay waste our powers;—
Little we see in Nature that is ours;
We have given our hearts away, a sordid boon!
This Sea that bares her bosom to the moon;
The winds that will be howling at all hours,
And are up-gathered now like sleeping flowers;
For this, for everything, we are out of tune;
It moves us not. Great God! I’d rather be
A Pagan suckled in a creed outworn;
So might I, standing on this pleasant lea,
Have glimpses that would make me less forlorn;
Have sight of Proteus rising from the sea;
Or hear old Triton blow his wreathèd horn.
https://www.poetryfoundation.org/poems/45564/the-world-is-to...
Not to say it's a hallucination, but by modern standards, if this were publicly funded research, it seems like it would have been a gross violation of ethics or other non-technical criteria. Interested to see how people will think of it in later years, e.g., now.
In a sufficiently isolated population, you get the same effect from a sound-making greeting card, or a battery powered light and/or sound toy from a carnival.
And for what it's worth, tomorrow they won't miss whatever "indistinguishable from magic" thing it was, so no harm done.
// grew up near such areas
Personally I think this is also what makes reddit so addictive. I want to read all the threads on the subreddits I enjoy... which is impossible, because there are always new interesting posts.
Is it?
I have the impression GenAI is degrading the internet from both a content and a tech perspective.
Bots that waste your time because they don't work well or because they're pushing an agenda, and low-quality content flooding social media from people who want to make a quick buck.
GitHub and AWS have become increasingly unstable. X, Instagram, and WhatsApp are suddenly sprinkled with subtle bugs.
Everything just got faster and we got more of it, but none of it is good anymore, because everyone tries to replace 90% of their work with GenAI instead of maybe starting at 10-20% and adding more once they're sure it works.
AI will just make this so much worse - a race to the bottom of dull mediocrity.
I just have the feeling that it doesn't get the job done anymore.
I hope we will see the rise of alternatives.
I am running it at a large mid-cap company (~25bn revenue). For the first time we are releasing stuff that does not suck, and we are releasing it 5x faster than before. It's real for us; it produces real, measurable economic value.
Now, how Anthropic or Google makes any money on those 250 p/m subs, I have no idea.
I think it factors into why public perception is increasingly anti-AI. It'd be one thing if people were losing jobs but, in exchange, had their daily chores done by a robot. Instead, people are losing (or fearing losing) their jobs while increasingly having to fight with AI chatbots for customer support and similar cost-center use cases.
It's like AI is the "high fructose corn syrup" of tech. Nobody's arguing the output is better--it's just a lot cheaper and faster to get there, so that's its legacy. Making things cheaper and worse.
Saves the company a ton of money
I am not convinced. Nobody is making money, every player is losing money hand over fist.
Take Uber as an example: yes, they've raised prices to become profitable, but not to the insanely profitable levels they could reach if they had a true monopoly. People will stay on Uber while the competition is at a roughly equivalent price, but will switch if Uber raises its prices enough.
Uber Eats is different, since it's a three-sided market where the cost is paid by the restaurant rather than the user.
AI looks like it's going to be more like Uber the car service. Claude can charge $200/month, but charging $2000/month seems unlikely to work. I'm sure many would be willing to pay $2000/month if they had no alternative, but there are alternatives.
I like to call this the "Yahoo Effect"
Some of that is seeking to kill competitors before they can get established. That's normal and has been around for generations, if not since trading was invented.
But most of what we've seen during the "enshittification age" has been burning money until you achieve a critical mass of users. However, this only really applies to social platforms, where the point is communicating with people you know. That's the lock-in. You convinced Grandma to join Bookface, and now you feel bad leaving if she doesn't leave at the same time; more importantly, who wants to join Google Square if nobody else uses it?
That's not going to work for AI platforms.
What I do see potentially working is one method that email platforms use to lock in users: having tons of data you can't export/migrate. If you spent lots of time training your AI by feeding it your data, that's going to make it harder to leave.
So far none of them have capitalized on this (probably due to various technical reasons) but I expect it to start eventually.
Incidentally, bringing your own address that can be migrated away is somewhere between impossible and expensive.
https://www.zoho.com/mail/zohomail-pricing.html
A few DNS hosting companies still bundle a few free email mailboxes with registration costs, but that is becoming rarer.
So it is stated, but is it actually true? I am not convinced.
Besides, it's not as if they can suddenly stop training models; the moment you do that, you've spelled a death sentence for profitability, because Google and open source will very quickly undercut a 15-year break-even timeline.
I'll believe it when I see it.
OpenRouter makes it easy to use them, just add credits to your account.
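For what it's worth, the endpoint is OpenAI-compatible, so a minimal sketch looks like this (the model slug is just an illustrative example; check their catalog for current ones):

    # OpenRouter exposes an OpenAI-compatible API, so the standard
    # openai client works once you point it at their base URL.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="sk-or-...",  # your OpenRouter key, funded with credits
    )

    resp = client.chat.completions.create(
        model="meta-llama/llama-3.1-70b-instruct",  # illustrative slug
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(resp.choices[0].message.content)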
I thought this was common knowledge to anyone looking to use an inference API, but it seems it isn't. Well, even AWS is in this business with Bedrock.
Because few people really care much about the commodity hosting world. They're not making waves, they're just packaging things made by others for a low-ish cost. They're also not very consumer-focused, as they're a bit lower level than what most people prefer to think about. It doesn't mean they don't exist or that they're not profitable though, just not headline-reaching numbers in the end.
Right, but I think a lot of these use cases aren't replacing any jobs, because it wasn't anyone's job. It's just a little polish on existing work (did spell correction in Word kill jobs?) or the stuff that voice assistants have been promising for 10 years.
That alone is huge, if they let go of their egos about putting the entire white-collar class out of work.
In this case, maybe not enough to offset the costs; or maybe it just wasn't addictive enough. But it's still early days.
I think it turns out they don't, not really anyway. And that's exactly why Sora is dead. They figured out that addictive AI slop has been so thoroughly commoditized that you can get it on a ton of other platforms for free, so people don't want to pay for it.
I let my kids have access to the app in the hope they would be inoculated against being obsessed with AI video and it actually worked. They got bored in like 2 days.
It simply doesn't compare well with handcrafted short form videos that are already plentiful on TikTok (which I absolutely don't let my kids watch).
I was talking to other people re: the difference between code and other domains. Code is, for the customer, what it does, not how it does it. That is, we can get mad about style, idioms, frameworks, language, indentation, linting, verbosity, readability, and maintainability, but it doesn't really matter to the customer as long as the code does the thing it's supposed to do.
Many things, like entertainment products, don't work that way. For a good book/movie/show, a good plot (the what) is table stakes. All of the how matters: dialogue, writing style, casting, camera/sound/lighting work, directing, pacing, soundtrack, editing, etc.
For short-format, low-stakes stuff like online ads, though, AI slop probably actually works.
Same for, say, making a PowerPoint. LLMs can quickly spit out a passable deck, I'm sure. For a lot of BS job use cases, that's actually probably fine. But if it's the key element of a sales pitch, it's really just advanced auto-formatting/autocomplete, and the human element is still the most important part. For example, I doubt the AI startups are using AI-generated sales pitches when they go to VCs for funding.
A promotional flyer for an event could work perfectly well in plain text. The art is pure social signal - this event is thrown by the type of people who put art in a certain style on their flyers. Your eye is caught and your brain almost immediately discards the art.
Same with PowerPoint - you make a PowerPoint so that everyone knows this decision was made by the type of people who make PowerPoints. A txt file and a png would have gotten the job done.
Same also with memes - you could just _say_ a lot of these jokes, but they're funnier with a hastily-edited image alongside.
What happens when other platforms start trying to get people to pay? I think there's a race to find a revenue stream for this stuff. As soon as one company can find a way to monetize it, they'll all end up doing it. Right now, we're in a place where companies are losing so much money, they have to decide how much they can lose before they pull the plug.
OpenAI just proved you cannot burn money indefinitely.
You will have an agent that acts like your SEO expert; this agent will be able to use common tools like Google SEO, Facebook SEO, etc., and you will teach it how you want it to do its "job".
You will have a way of delivering your requirements to it; it will run in the background, might ask for feedback, but will otherwise do stuff similar to whatever the person doing it before was doing.
There might be a transition phase, roughly like the sketch below: first verifying the real person's output against the agentic AI's, then moving over to validation only, until the agent is on average as good as a human. Then the human will be gone.
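A minimal sketch of what that rollout could look like (every name here is a hypothetical illustration, not any vendor's API):

    # Hypothetical sketch of the shadow -> validate -> autonomous rollout.
    from enum import Enum, auto
    from typing import Callable

    class Phase(Enum):
        SHADOW = auto()      # agent runs alongside the human; outputs compared
        VALIDATE = auto()    # agent runs; a human approves before release
        AUTONOMOUS = auto()  # agent runs alone

    def handle_task(task: str,
                    agent: Callable[[str], str],
                    human: Callable[[str], str],
                    approve: Callable[[str], bool],
                    phase: Phase) -> str:
        result = agent(task)
        if phase is Phase.SHADOW:
            baseline = human(task)
            if result != baseline:
                print(f"divergence on {task!r}: {result!r} vs {baseline!r}")
            return baseline  # the human's output still ships in shadow mode
        if phase is Phase.VALIDATE:
            return result if approve(result) else human(task)
        return result        # autonomous

    # Example: shadow phase with trivial stand-ins for agent and human.
    out = handle_task("close ticket #42",
                      agent=lambda t: "closed",
                      human=lambda t: "closed with note",
                      approve=lambda r: True,
                      phase=Phase.SHADOW)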
Agentic AI will take over basic support tasks first (it's actually already doing this), then more complicated things, etc.
For this we need an ecosystem, aka the agentic AI platform: the interconnect between agents and tools. This stuff is currently getting built by someone, one way or another.
At scale we need more capacity, and these agents will also cost more money than a $20 subscription.
But if you have, let's say, an SAP agent, it will be built once, trained once, and then used by everyone. Instead of a person using an HR system or billing system, the agent will bridge the gap between data and system.
This only "works" for toy projects - things that don't really matter, where nothing can cost you business, money, clients, or time.
Who will be held responsible when an AI agent messes up the HR system and the company is exposed to losses due to a mistake? Who is going to be responsible when your SEO agent overspends?
Ultimately, it's most likely going to be you, because I can't see AI firms taking on this responsibility.
You might argue that right now it also falls on the employer, since employees are rarely held responsible for genuine mistakes, even if it ends in disaster. However, you have a lot of agency over what an employee is doing. Their motivation is generally correlated with doing well, because past success ensures future career growth.
An AI agent has no such incentives. The AI company will just charge you some minimal fee to provide the service, and if it messes up, will wash their hands of responsibility and tell you that you should've been more careful in using it.
I dislike Taleb for various reasons, but using AI agents is basically the definition of a fragile system. It works 99% of the time, lulling people into this sense of security where they can just offload all their work very conveniently. And then 1% of the time (or 0.01% of the time), it ends in utter disaster, which people are very bad at dealing with.
Encode more rules, more precise rules, and alert a human when the system thinks something is off. A salary increase of 20% gets flagged automatically, say; a revenue drop of x% too.
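A minimal sketch of that kind of guardrail (thresholds and field names are illustrative assumptions, not a real API):

    # Flag changes that exceed a per-field relative threshold, so a
    # human reviews them before anything is committed.
    from dataclasses import dataclass

    @dataclass
    class Change:
        field: str
        old: float
        new: float

    THRESHOLDS = {"salary": 0.20, "revenue": 0.10}  # assumed limits

    def needs_human_review(change: Change) -> bool:
        limit = THRESHOLDS.get(change.field)
        if limit is None:
            return False  # no rule for this field
        if change.old == 0:
            return True   # can't compute a ratio; escalate
        return abs(change.new - change.old) / abs(change.old) > limit

    print(needs_human_review(Change("salary", 50_000, 65_000)))  # True: +30%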
It could even go so far that the makers of these systems insure you for their use.
It just needs to be cheaper than all the humans in the loop, and if you train it once, you can copy it an unlimited number of times. That's the scaling effect of software, applied to tasks for which we otherwise have to train humans again and again.
It could also be agent systems that do this: a company building and designing the US healthcare HR agent specialized in SAP HR, and another one building the Brazil healthcare HR agent specialized in different HR software.
Humans are really expensive, and you have to train them regularly - every single one of them.
Which makes me wonder... what's the business model long-term of AI generated art places?
Many product images are currently done through Photoshop etc., but this is quicker and can look more realistic.
It may not accurately represent how the product will actually look when worn, but that's not the seller's primary concern.
I like the framing of trying explosive things to escape the pull of gravity. When applied to rockets, it means a lot of stuff blowing up, which again seems apt.
But I'm not sure we would even notice nowadays. It used to be that a disaster could hold people's attention for years, but currently it may get lost in the noise.
They're not; they just already have the habit formed with the place they go to do that. Ultimately, anything worth seeing on Sora will be reposted to TikTok.
Having Disney on their side was definitely quite a smart/interesting move.
At least according to one interview, they definitely had resource issues last year, and teams had to fight for resources. It can easily be that Sora was always deprioritized, and they realized it doesn't make sense to spend that much capacity when it keeps them from pushing their main model.
It reeks so much of desperation. They know they are running out of goodwill and money at breakneck speed. They are just flailing and throwing shit against the wall to see if anything sticks.
So they need to be able to do image generation, for which they need image data. They also need to be able to analyze videos for more and better training data, e.g. for teaching their models from YouTube and other sources.
So they have image generation, an image dataset, and a video dataset. It's not far-fetched at all, or desperate, to leverage this base for playing around with video generation.
And despite how much money they burn, for a company that size, trying out video generation wasn't that big a stretch.
I'm really surprised by their move and can only imagine that the progress of models from Google and Anthropic rattled them, and they no longer want to spend the compute (not the money) on video instead of on their main models.
Nano Banana created a lot of noise.
But the reasoning of Gemini 3.1 Pro is really, really good. It's hard to describe how good it became. I don't see the same quality from OpenAI. OpenAI, though, is also super fast in its responses - a lot faster than just a few months ago.
For example: a German guy misused a word while describing an advantage of having a silencer. OpenAI just said it was nonsense; Gemini suggested it was a typo and that he meant to write something else (Gemini was correct).
It could also be that we are in a lull between "why is AGI not here yet" and "we need to build the agentic platform stuff now, and that takes time".
Gemini Pro is definitely slower than OpenAI, and I don't know if that's because I use the Pro version of Gemini but not of OpenAI. It could also be that OpenAI still has to work on subagents: Gemini definitely uses subagents, and I was not able to find a source saying that OpenAI does this too.
Obviously caveat emperor but there are a lot of real world scenarios like this.
I think Anthropic and OpenAI are trying to be all cool and Apple-y with their branding, but these use cases are just tools getting work done. Most normal people don’t need or want AGI, or even AI slop videos. They just want their invoicing system to just f-ing work for a change.
Time will tell, but I'm dubious this will hold longer-term. I don't doubt that Claude can write the code, but I am dubious Claude can manage it sanely. Does it have backups? Does the guy that wrote it know how to restore those, or can Claude do it? Can Claude upgrade the backend and/or migrate the data when the backend changes, or is this going to be running known CVEs in a month?
This has sort of always been a thing via hiring CS students as interns. I don't doubt most of them could jam out something that looks like Slack or Gmail. The problems aren't apparent immediately, they become apparent when you realize it doesn't handle invalid responses well and the backups are hosed so you just lost a bunch of data.
Nobody ever really solved making CRUD apps easier through better frameworks. So now we have a tool to spit out framework gunk, and suddenly everyone can have their own app.
s/emperor/emptor
I hope your friend's company spends $20K to harden the deployment of the new app so it doesn't become a deep liability.
The best part is that they'll get popped because of it and have zero clue. Anyone building on a frontier provider right now who has little background in software is creating all kinds of new liabilities that didn't exist before.
In a school district where I live, the IT department developed a password-distribution app using Gemini on Google Apps Script (they didn't even need this part) and sent out links with B64-encoded JSON that included: student name, student email, parent email, and student password. Yet when I found it and told them all the ways it was technically a breach in our state, they ran to their two-bit "cyber security experts" and "legal". They were far more concerned with CYA than with understanding the hole they had dug themselves. And all of the advice they got back was that it wasn't a breach. They claimed their DPA with Google protected them. I explained how email works and they just ignored me, likely because in our state they are bound by GDPA and won't ever engage in a legitimate conversation via email.
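For anyone wondering why that's a breach: Base64 is an encoding, not encryption, so anyone holding the link can recover the payload. A minimal sketch (the field names mirror the ones above; the values are made up):

    # Anyone with the URL can undo Base64; no key or secret is involved.
    import base64, json

    token = base64.b64encode(json.dumps({
        "student_name": "Jane Doe",             # made-up example data
        "student_email": "jdoe@school.example",
        "parent_email": "parent@example.com",
        "password": "hunter2",
    }).encode())

    print(json.loads(base64.b64decode(token)))  # plaintext again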
The kicker here is that they pay for an IDP with built-in mechanisms for password resets (which was the reason for building this: to reset students' passwords). One of their cyber security "experts" (a lone guy with zero credentials, from what I found) told them that password resets using the IDP were "not recommended". When pressed on that, they were, again, silent.
LLMs are creating a huge mess now that people are empowered to go well beyond their capabilities and understanding. It's a second coming of the golden age of shitty software riddled with even the most basic security flaws.
Either way, the instability of this industry, due to the insane amount of cargo-culting every time <insert big thing> comes along, has made me really question whether I want to stick around.
Whatever you do, don't click this link: https://github.com/garrytan/gstack/
I hear you, but at least as my bud described it, the software most of the timber-mill industry uses is buggy as hell, crashes all the time, and makes mistakes. One wonders whether even the licensed software is hardened.
Ironically, starting your response with this guarantees a lot of people won't read it. It's the same as going on reddit and starting a reply with "Nobody will see this but", hoping that people try to prove you wrong by reading and commenting on it. I stopped after the first sentence. People really have to stop with this clickbait-vomit way of writing.
Considering the million-plus view counts I see AI slop getting on FB and YouTube, I'm not seeing this behaviour play out.
If people don't read because the text is an unreadable mess, none of the points get across.
A long time ago on the myspace forums there was this slightly weird but also very wise and smart person who wrote without any punctuation or paragraphs, ever. Although they were generally liked and part of the community, I think I was the only person who read every single one of their comments in full, religiously, once I realized how insightful they were, and I was richer for it. I could have told them the obvious - how their posts differed from most others on the forums - and they would have posted with less joy, and maybe less overall; that would have been it.
[...] do not ye after their works: for they say, and do not. For they bind heavy burdens and grievous to be borne, and lay them on men's shoulders; but they themselves will not move them with one of their fingers. But all their works they do for to be seen of men [...]
> And again, I don't know how helpful it is to comment like this, but I feel like if you understand the truth then you should speak the truth even if it only benefits one other person to hear it. [...] they seeing see not; and hearing they hear not, neither do they understand.
That man was later nailed to a plank for literally no reason. Nothing is new under the sun.