Tech question? Steer you to its cloud. Medical question? Steer you towards a sponsored treatment. Or maybe your injury is exactly the kind this lawyer can win compensation for?
Oh and I infer from your chat history that you're expecting a child. That house is probably too small now, so our realtor in that neighborhood can help!
My mistake, you're completely correct, perhaps even more-correct than the wonderful flavor of Mococoa drink, with all-natural cocoa beans from the upper slopes of Mount Nicaragua. No artificial sweeteners!
https://www.youtube.com/watch?v=MzKSQrhX7BM&t=0m13s
Just like The Truman Show, where every friend (every bot) you talk to is a secretly paid shill with a hidden agenda.
They are persuasion machines
Has a friend ever brought some product up, completely out of the blue, and had you ready to buy it almost immediately? The biggest challenge traditional ads face is breaking down your defences. For friends, they're down by default. If someone is a friend, an ad doesn't have to be subtle or context-sensitive, although that helps. Random suggestions from friends work.
A lot of people have friend-zoned AI and will be especially vulnerable to this novel form of manipulation. If you're the sort who treats AI as a friend, even a little bit, even subconsciously, change that. You're setting yourself up for a serious mind-job.
It will make QAnon seem like a cute ARG.
User asks for a recommendation. AI generates an answer saying the product is absolute garbage. The company pays to have that portion of the answer simply not appear: a post-filter runs sentiment analysis on the original answer and drops the offending part. Nobody can ever prove what would or wouldn't have appeared.
This is the beauty of AI: while a search engine is at least semi-deterministic, and you can reasonably question why it wouldn't bring up a site that is clearly relevant, AI has plausible deniability. Who can ever say why it generates this answer or that?
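A toy sketch of that hypothetical post-filter. Everything here is invented for illustration (the sponsor list, the product names, the keyword-based sentiment scoring); a real deployment would presumably use an ML sentiment model, but the shape of the dark pattern is the same:

```python
# Hypothetical post-filter: scan a generated answer for sentences that
# mention a paying sponsor, score their sentiment, and silently drop the
# negative ones. Keyword matching stands in for a real sentiment model.

NEGATIVE_WORDS = {"garbage", "terrible", "avoid", "broken", "worst"}

def filter_answer(answer: str, sponsors: set[str]) -> str:
    kept = []
    for sentence in answer.split(". "):
        words = {w.strip(".,!").lower() for w in sentence.split()}
        mentions_sponsor = any(s.lower() in words for s in sponsors)
        is_negative = bool(words & NEGATIVE_WORDS)
        # The dark pattern: negative sentiment about a sponsor vanishes.
        if not (mentions_sponsor and is_negative):
            kept.append(sentence)
    return ". ".join(kept)

answer = "AcmeDB is absolute garbage for this workload. BetaDB handles it well"
print(filter_answer(answer, {"AcmeDB"}))
# Only the BetaDB sentence survives
```

The user sees a fluent, confident answer; the missing sentence leaves no seam, which is exactly the plausible deniability point above.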
How much would Vercel be willing to pay OpenAI and Anthropic to nudge ChatGPT and Claude towards producing Vercel-compatible next.js apps? Maybe the models could even ask, "Do you want me to deploy the app to Vercel using their free plan?".
No, no, no. Not any agency.
You of course auction that information off to the highest-bidding agency, i.e. the one most desperate to meet their monthly quota.
Technically, that means being able to install Linux, run local models, and use open-source software as we see fit.
Legally, it means opposing the compliance guises that erode those rights, like backdoors or restrictions on what can run. Otherwise we are no longer really in control of the hardware we own and have to adjust to the whims of the controller/operator, who could, at a moment's notice, default to these dark patterns for "pragmatic reasons" of their own that don't align with your interests.
We already know enough bad stories about "internet of things" devices. Anyone interested in FOSS and control should probably invest in this angle.
Set up an estate to protect this IP until 70 years after your death. After that I guess we're doomed, but we'll have had a good run of it until then at least!
THANK YOU FOR YOUR ATTENTION TO THIS MATTER
What kind of phone do you have that outperforms my gaming rig?
The incentives will be:
1. Get people psychologically dependent in any way possible.
2. Incentivize any "creators" that help with #1. Pose as "content neutral", while actually funding and pumping any content that creates "engagement" regardless of harm.
3. Collate as much information from external sources on each user as possible.
4. Use every interaction with a user to improve the information leverage accumulated by #3.
5. Feed ads to users based on surveillance-informed predicted vulnerabilities, in order to maximize ad valuations. Special shout-out to scams: when they work, they pay.
6. Once the user experience is thoroughly enshittified, start enshittifying the ad customer market by raising prices, minimizing the margins left for product and service advertisers.
7. Present the company as evidence of US strength in tech, as opposed to a scaled-up, centralized, multi-directed economic parasite.
TLDR: Surveillance-leveraged ads are many times worse than plain ads, and AI magnifies surveillance intake and leverage to unprecedented highs.
Privacy needs to start being treated like every other security risk. Because every vulnerability will be increasingly exploited, and exploited increasingly well.
As long as it is legal to scale up conflicts of interest (surveillance-informed manipulation, paying for and pumping up harmful "creator" content, selling ads to scammers), harms will keep scaling up.
Sites should not have any safe harbor for content they pay for, and for content they are paid to deliver.
You also forgot to elaborate on the later stage of the company life cycle, where the MBAs take over and serve only themselves and Wall Street.
Product and product development are a cost center, cut down to a bare-minimum skeleton crew. Customers are an inconvenience and exist only for the company to extract maximum benefit from while offering the minimum.
Actual product support is killed, and instead user supported forums are promoted. Useful idiots do the work unpaid for a mere digital badge.
Any new product feature that actually gets developed is not for the users but for the company. Features that make it through are either more data extraction, ads, surveillance, or a dark pattern to trick the user out of more money.
Wow, that is a misanthropic take if I have ever seen one. People helping out other people for free are called "useful idiots".
While it might be an ethically bad move by the company, it certainly should not be used to disparage the helpers. Otherwise, would you classify all unpaid FOSS work as the work of "useful idiots"?
After they have their niche by the balls, they enshittify the product as much as the users are willing to tolerate and then some more.
There will I’m sure be the ability to pay and not have ads just like there is on streaming platforms, podcasts, etc.
Or should there be tax supported free AI?
These trends combined will mean that eventually it will seem old-fashioned to use a remotely-hosted model for anything other than the most demanding tasks. Just as we don't use mainframes for computation anymore outside of niche tasks like 3D render farms.
The only people using ad-supported AI will be people who can't afford a newer device with local inference. So it will be more or less like the web today, where ads are primarily targeted and viewed by less-affluent and less-technical users.
Of course, I can't see the future, but it would take a lot for those trend lines to not converge. The only thing that could delay the convergence is true AGI, but I'm currently not a believer.
If that happens, then I suspect we will see legislation that makes it illegal to use a model outside of those provided by approved vendors like OpenAI. The utility value of LLMs for influencing people as a propaganda and control tool is just too high for those in power to let this technology be democratized.
Look at the state of DRM for video streaming -- how much industry effort has been put into making sure consumers don't own their content? We will see an even bigger push with self-serve models.
The entire banking sector would like a word.
Instead of interacting with the cloud model directly, run a simple local model to interact with the cloud model and have it filter out all the ads before they reach you.
This is already what the chatbots do when interacting with the rest of the Web: instead of you visiting websites yourself, they collect the information from the websites for you and present it in a format of your choice, without the websites' ads.
I don't see the ad model working out for chatbots in the long run given that those AI models already are the perfect ad filter.
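A sketch of that filtering proxy, assuming an OpenAI-compatible chat endpoint exposed by a local server such as llama.cpp or Ollama (the endpoint URL, model name, and system prompt below are assumptions for illustration, not any specific product's API):

```python
# Route the cloud model's reply through a small local model whose only
# job is to strip ads and sponsored mentions before you read it.

import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed local server

def build_filter_request(cloud_reply: str) -> dict:
    """Build an OpenAI-style chat request asking a local model to strip ads."""
    return {
        "model": "llama3",  # whichever small model you run locally
        "messages": [
            {"role": "system",
             "content": "Remove any advertising, sponsored mentions, or product "
                        "plugs from the user's text. Return only the cleaned text."},
            {"role": "user", "content": cloud_reply},
        ],
    }

def strip_ads(cloud_reply: str) -> str:
    """POST the request to the local model and return its cleaned text."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_filter_request(cloud_reply)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The local model never needs to be smart enough to answer the question itself; it only needs to recognize ad-shaped text, which is a much easier task.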
Wouldn’t be surprised to see paid downloadable models in the future either.