Because no one believes these laws or bills or acts or whatever will be enforced.
But I actually believe they will be, in the worst way possible: honest players will be punished disproportionately.
Time will tell. Texas sat on its biometric data act quite quietly, then hammered Meta with a $1.4B settlement roughly 15 years after the bill's enactment. Once these laws are enacted, they lie quietly until someone has a big enough bone to pick with someone else. There are already plenty of traumatic events occurring downstream of slapdash AI development.
I don't know how much opting everyone in to automatic photo tagging made Meta, but I assume it's "less than 100% of their revenue".
Barring the point of contention being integral to the business's revenue model, or management of the company being infected with oppositional defiant disorder, a lawsuit is just an opportunity for some middle manager and their team to get praised for making a revenue-negative change that reduces the risk of future fines.
Work like that is a gold mine; several people will probably get promoted for it.
Sounds like ignoring it worked fine for them then.
Why though? Whether the AI played the role of an editor or the role of a reporter seems like a clear distinction to me, and likely to anyone else familiar enough with how journalism works.
https://apnews.com/article/sesame-allergies-label-b28f8eb3dc...
It's odd that legislators seem largely incapable of learning from the rich history of past legislative mistakes. Regulation needs to be narrowly targeted and clearly defined, and someone smart needs to actually think through how the real world will go about complying, as well as identify likely unintended consequences and perverse incentives. Another net improvement would be an automatic sunset provision on any new regs, so they must be renewed a few years later under a process that makes it easy to revise or relax certain provisions.
But this is just my uninformed opinion; perhaps those who work in the health industry think differently.
It's much easier to tell yourself Prop 65 warnings can be ignored because "it's probably just there to cover their asses", while tobacco products carry real warnings that definitely mean danger (though there are people who convince themselves otherwise).
I see a bright future for the internet
That’s because they can’t be.
People assume they’ve already figured out how AI behaves and that they can just mandate specific "proper" ways to use it.
The reality is that AI companies and users are going to keep refining these tools until they're indistinguishable from human work whenever they want them to be.
Even if the models still make mistakes, the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.
You’re essentially passing laws that only apply to people who volunteer to follow them, because once someone decides to hide their AI use, you won't be able to prove it anyway.
By that token, bans on illegal drugs are a fantasy. Whereas in fact, enforcement doesn't need to be guaranteed to be effective.
There may be little technical means to distinguish at the moment. But could that have something to do with a lack of motivation? Let's see how many "AI" $$$ suddenly become available to the problem once this law provides the incentive.
I think you have this exactly right. They are mostly enforced against the poor and political enemies.
Relative to no war on drugs? Who knows.
A quick Google search suggests that less than 3% of drugs are intercepted by the government.
I've always wanted to try two specific ones, but the first can't be had in its safest form because of a precursor ban, and all of them come with an insane (to me) risk of adulteration.
In twenty minutes I could probably find 10 "reputable" shops/markets, but still with zero guarantee I wouldn't get the thing laced with something for strength.
Even if I wanted pot (I don't; I found it repetitive and extremely boring, except for one experience), I would have to grow it myself (the stench!), and then... where would I find sane seeds (a healthy CBD-to-THC ratio)?
Similarly, I wouldn't buy moonshine from someone risking prosecution to make and sell it; you can be sure that risk is being offset somehow.
So... I can't get what I want, because there's an extremely high chance of getting hurt. One example: poisoning from pills sold as MDMA. At every music festival, multiple people get hurt, not by the Molly but by the additives.
Unless you're trying to tell me that writers won't report on their own business trying to replace them with AI.
Like every law ever passed (well, not quite, but you get the picture!) [1]
That was with GPT-4, but my own work with other LLMs shows they have very distinctive styles even if you specifically prompt them with a chunk of human text to imitate. I think instruction-tuning on tasks like summarization predisposes them to certain grammatical structures, so their output is consistently more information-dense and formal than human writing.
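As a toy illustration of that point (the feature set here is my own assumption for the example, not an established detector): even crude surface statistics, computed with nothing but the standard library, often separate instruction-tuned output from casual human prose.

    import re
    from statistics import mean

    def style_features(text: str) -> dict:
        """Crude stylometric signals. Instruction-tuned LLM output tends to
        score higher on sentence length, word length, and lexical density
        than casual human prose. A real classifier would need far more."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[a-zA-Z']+", text.lower())
        return {
            "avg_sentence_len": mean(len(re.findall(r"[a-zA-Z']+", s)) for s in sentences),
            "avg_word_len": mean(len(w) for w in words),
            "type_token_ratio": len(set(words)) / len(words),  # lexical diversity
        }

    print(style_features("idk man, the feeds are dumb af. i missed so much stuff."))
    print(style_features("Instruction-tuned models often exhibit elevated lexical "
                         "density and a consistently formal register across domains."))

None of this proves authorship, of course; it just makes "distinctive style" measurable.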
That's a concerning lens through which to view regulations. Obviously true, but true of all laws. Regulations don't apply only to what would be immediately observable offenses.
There are lots of bad actors and instances where the law is ignored because getting caught isn't likely. Those are conspiracies! They get harder to maintain as more people get involved, which is the reason for whistle-blower protections.
VW's Dieselgate [1] comes to mind, albeit one caught via a measurable discrepancy. Maybe Enron, or WorldCom (via Cynthia Cooper) [2], is a better example.
[1]: https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal [2]: https://en.wikipedia.org/wiki/MCI_Inc.#Accounting_scandals
So legislators, should they so choose, could demand that source material be recorded on C2PA-enabled cameras and that the original recordings be produced on demand.
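For a feel of what that buys: C2PA embeds a signed manifest in the file at capture time, so anyone can later check that the bytes are untouched. A stripped-down sketch of the same idea, using the cryptography package's Ed25519 keys instead of the actual C2PA manifest format (the ad-hoc key here is purely illustrative; real C2PA chains to a vendor certificate):

    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Stand-in for the camera's embedded signing key.
    camera_key = ed25519.Ed25519PrivateKey.generate()

    def sign_capture(image_bytes: bytes) -> bytes:
        """At capture time, the camera signs a digest of the raw image."""
        return camera_key.sign(hashlib.sha256(image_bytes).digest())

    def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
        """Anyone with the camera's public key can check the file is untouched."""
        try:
            camera_key.public_key().verify(
                signature, hashlib.sha256(image_bytes).digest())
            return True
        except Exception:  # cryptography raises InvalidSignature on tampering
            return False

    original = b"...raw sensor data..."
    sig = sign_capture(original)
    print(verify_capture(original, sig))            # True
    print(verify_capture(original + b"edit", sig))  # False: provenance broken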
I know that sounds ridiculous but it kind of illustrates the problem with your logic. We don’t just write laws that are guaranteed to have 100% compliance and/or 100% successful enforcement. If that were the case, we’d have way fewer laws and little need for courts/a broader judicial system.
The goal is getting most AI companies to comply and making sure that most of those that don’t follow the law face sufficient punishment to discourage them (and others). Additionally, you use that opportunity to undo what damage you can, be it restitution or otherwise for those negatively impacted.
Without emotion, without love and hate and fear and struggle, only a pale imitation of the human voice is or will be possible.
Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing.
Many people here love SV hackers who have done the impossible, like Musk. Could you imagine this conversation at an early SpaceX planning meeting? That was a much harder task, requiring inventing new technology and enormous sums of money.
Lots of regulations are enforced and effective. Your food, drugs, highways, airplane flights, etc. are all pretty safe. Voters compelling their representatives is commonplace.
It's right out of psyops to get people to despair; look at the messages militaries aim at opposing troops. If those opposing this bill created propaganda, it would look like the comments in this thread.
As with everything else, BigCo and their legal team will explain to the enforcers why their "right up to the line, if not over it" solution is compliant, while MediumCo and SmallCo will be the ones getting fined, or forced to waste money staying far from the line, or paying a third party to do what BigCo's legal team does at cost.
i personally would love to see something like this but changed a little:
for every user (not just minors), require a toggle: an upfront, not buried, always-in-your-face toggle to turn off algorithmic feeds, so you only see posts from people you follow, in the order they posted (a rough sketch of that feed logic is below). again, no dark patterns: once a user toggles to a non-algorithmic feed, it should stick.
this would do a lot to restore trust. i don't really use the big social medias much any more, but when i did, i can't tell you how many posts i missed because the algorithms are kinda dumb af. i missed friends' anniversary celebrations, events that were right up my alley, community projects, etc… because the algorithms didn't think the posts announcing them would be addictive enough for me.
no need to force it “for the kids” when they can just give everyone the choice.
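for what it's worth, the feed being asked for is a filter and a sort; the resistance is about engagement metrics, not engineering effort. a minimal sketch in python (type and field names made up for illustration):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        created_at: datetime
        body: str

    def chronological_feed(posts: list[Post], follows: set[str]) -> list[Post]:
        """Only people you follow, newest first, no engagement ranking."""
        return sorted(
            (p for p in posts if p.author in follows),
            key=lambda p: p.created_at,
            reverse=True,
        )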
IMO, It’s a much tougher problem (legally) than protecting actors from AI infringement on their likeness. AI services are easier to regulate.. published AI generated content, much more difficult.
The article also mentions efforts by news unions and guilds. That might be a more effective mechanism: if a person/union/guild required members to add a tagline to their content/articles, it would have a similar effect, showing what is and what is not AI content without restricting speech.
They can publish all they want, they just have to label it clearly. I don’t see how that is a free speech issue.
They already believe that and it’s used to keep us fighting each other.
One of the most persistent, and also dumbest, opinions I keep seeing, both among laymen and among people who really ought to know better, is that we can solve the deepfake problem by mandating digital watermarks on generated content.
Plus, if you want to mandate it, hidden markers (steganography) emitted directly by the model to verify which model generated the text, so people can independently check whether articles were written by humans, is probably the only feasible approach. But it's not like humans are impartial when writing news anyway, so I don't even see the point.
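To make that concrete: the usual proposal for text is statistical, not literal hidden characters. During generation the sampler nudges the model toward a pseudorandom "green" half of the vocabulary keyed on the previous token; a detector then just counts green tokens and computes a z-score. A minimal sketch of the detection side (token IDs and the threshold are made up for illustration):

    import hashlib
    import math

    def is_green(prev_tok: int, tok: int) -> bool:
        """Pseudorandom half of the vocabulary, keyed on the previous token.
        A watermarking sampler would have boosted green tokens' logits."""
        digest = hashlib.sha256(f"{prev_tok}:{tok}".encode()).digest()
        return digest[0] % 2 == 0

    def watermark_z_score(tokens: list[int]) -> float:
        """Unwatermarked text lands near z = 0; watermarked text drifts high."""
        n = len(tokens) - 1
        hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
        return (hits - n / 2) / math.sqrt(n / 4)

    tokens = [101, 7592, 2088, 999, 102]  # made-up token IDs
    print(watermark_z_score(tokens))      # flag above ~4 as likely watermarked

The catch, which is exactly why this doesn't rescue the mandate: a light paraphrase re-tokenizes the text and erases the statistical signal, so it only ever catches verbatim model output.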
This is a concept in at least some EU countries: there always has to be one person who is responsible, in terms of press law, for what is published.
Not citing sources doesn’t imply plagiarism, as long as you don’t misrepresent someone else’s research as your own (such as in an academic paper). Giving an account of news that you heard elsewhere in your own words isn’t plagiarism. The hurdles for plagiarism are generally relatively high.
Most regulations around disclaimers in the USA are just civil and the corporate veil won't be pierced.
I think the reason is that most people don't believe, at least on sufficiently long time scales, that legacy states are likely to be able to shape AI (or, for that matter, the internet). The legitimacy of the US state appears to be in a sort of free fall, for example.
It takes a long time to fully (or even mostly) understand the various machinations of legislative action (let alone executive discretion, and then judicial interpretation), and in that time, regardless of what happens in various capitol buildings, the tests pass and the code runs - for better and for worse.
And even amidst a diversity of views and assessments of the future of the state, there seems to be near consensus on the underlying impetus: obviously humans and AI are distinct, and hearing the news from a human, particularly a human with a strong web-of-trust connection in your local society, is massively more credible. What's not clear is whether states have a role to play in lending clarity to the situation, or whether that will happen of the internet's own accord.