For example:
- every technology has false positives. False positives here will mean Fourth Amendment violations and will add an undue burden on people who share physical characteristics with those in the training data. (This is the updated "fits the description.")
- this technology will predictably be used to enable dragnets in particular areas. Those areas will not necessarily be chosen on any rational basis.
- this is all predictable because we have watched the War on Drugs for three generations. We have all seen how it was treated as a tactical, militarized problem in cities and became a health-and-addiction problem when enforcement moved to rural areas. There is approximately zero chance this technology becomes the first law-enforcement tool that applies the law evenly.
I think that is pretty unlikely.
Crimes go unsolved despite a literal panopticon. This view is just false.
Cops are choosing not to do their job. Giving them free access to all private information hasn't fixed that.
The thing you're missing is our system is working exactly like it's supposed to for rich people.
For example, Deepseek won't give you critical information about the Communist Party, and Grok won't criticise Elon Musk.
The main problem with the law not being applied evenly is structural - how do you get the people tasked with enforcing the law to enforce the law against their own ingroup? "AI" and the surveillance society will not solve this, rather they are making it ten times worse.
>people inevitably respond to one part of your broken framing, and then they're off to the races arguing about nonsense.
I agree that this is unproductive. When people have two very different viewpoints, it is hard to bridge that gap. I don't want to lay out my entire worldview and argue from first principles because it would take too much time and I doubt anyone would read it. Call it low effort if you want, but at least discussions don't turn into one person's entire belief system dumped in a single comment.
>how do you get the people tasked with enforcing the law to enforce the law against their own ingroup?
Ultimately, law enforcement is accountable to the people, so if the people don't want it, it will be hard to change. As for avoiding ingroup preference, it would be worth devising ways of auditing cases that are not being investigated and having AI look for patterns in what is causing them to be dropped (a toy sketch of that idea follows). Summaries of those patterns could be made public so that voters and other officials could react to the information and apply needed changes to the system.
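To make that concrete, here is a minimal, hypothetical sketch of the audit step. All field names (`precinct`, `offense`, `investigated`) and the example data are invented, and the "AI" is reduced to a simple rate comparison standing in for whatever pattern-finding a real system would use:

```python
# Hypothetical audit sketch: flag categories of cases whose investigation
# rate falls far below the overall baseline. Data and schema are invented.
import pandas as pd

# Invented example records; a real audit would pull from case-management data.
cases = pd.DataFrame({
    "precinct": ["A", "A", "B", "B", "B", "C", "C", "C", "C", "C"],
    "offense":  ["theft", "assault", "theft", "theft", "fraud",
                 "theft", "fraud", "fraud", "assault", "theft"],
    "investigated": [1, 1, 0, 0, 1, 0, 0, 0, 1, 0],
})

def flag_neglected(df: pd.DataFrame, by: str, threshold: float = 0.6) -> pd.DataFrame:
    """Return groups whose investigation rate is below threshold * overall rate.

    The threshold is arbitrary here; a real audit would need a defensible cutoff.
    """
    baseline = df["investigated"].mean()
    rates = df.groupby(by)["investigated"].agg(rate="mean", n="count").reset_index()
    return rates[rates["rate"] < threshold * baseline]

# Publishable summary: which precincts and offense types are being ignored?
for dimension in ("precinct", "offense"):
    print(flag_neglected(cases, dimension))
```

With the toy data above, precinct C and theft cases get flagged as under-investigated relative to the baseline; the published summaries would be exactly these tables, not the underlying case files.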
You answered your own question - it's straight-up bait.
Go lick boots elsewhere.