> Judge Rakoff of the Southern District of New York — addressing “a question of first impression nationwide” — ruled that written exchanges between a criminal defendant and generative AI platform Claude were not protected by attorney-client privilege or the work product doctrine.
There's much more to it than this one-liner I pulled out, but safe to say: don't put your legal defense (or elements of it) into AI unless you want it discovered.
(not a lawyer, unlike OP, who might be able to refine what I highlighted with more precision)
Discovery in China will be a tad more difficult…
If somebody Googles "best attorney for murder NYC" a day after a murder is committed but before any case is filed against them (so they clearly had some reason to expect that case), could that be used as evidence?
Shouldn't that have been relatively clear to all parties involved? Maybe not to the defendant, who's apparently clueless.
The AI platform is not an attorney. A defendant's communications with an AI platform are therefore not communications between a client and their attorney, nor does the AI output constitute attorney "work product," because the AI platform is not an attorney.
Doesn't really come across as a novel problem, aside from AI being involved. I'm sure countless defendants have made the stupid mistake of talking about the facts of their case to persons other than their attorney, and those communications came back to bite them in the ass when discovered.
Explains why so many let loose afterwards ;) (jokes)
Does anyone know of an OPSEC procedure for using third-party tools like this for my own sensitive legal questions, one that is both ethical and lets me be confident my interactions won't land in discovery documents?
So basically if you use any of the CLI tools, there is nothing for OpenAI, Anthropic, etc. to give the courts.
Conversations with online ChatGPT (especially the free version) are apparently cached by OpenAI on their servers. (I am not sure if Claude Desktop caches the conversations locally or in the cloud as well; read the fine print if it matters!)
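If where the transcripts live matters to you, a quick way to get a feel for it is to look for them on disk. This is only a sketch: the directories below are assumptions, not documented storage locations for any particular tool, so check each vendor's documentation for the real paths.

```shell
#!/bin/sh
# Hypothetical locations where a CLI chat tool MIGHT keep local transcripts.
# These paths are guesses for illustration; consult each tool's docs.
for dir in "$HOME/.claude" "$HOME/.config/openai" "$HOME/.local/share/chatgpt"; do
  if [ -d "$dir" ]; then
    echo "possible local transcript store: $dir"
    # list a few JSON-ish files as a quick sanity check
    find "$dir" -name '*.json*' 2>/dev/null | head -n 5
  fi
done
```

Of course, finding (or not finding) local files tells you nothing about what the vendor retains server-side; only the terms of service and privacy policy speak to that.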
This is a very narrow exemption, however.
(You would also want to make sure you're using a paid AI plan with contractually guaranteed privacy protections; otherwise it could be construed as a communication with a third party, which implicitly waives privilege.)
See: Warner v. Gilbarco, Inc.
Isn't that a fundamental misunderstanding? Would "OPSEC" like that amount to destruction of evidence or contempt of court or something like that?
Like if all your incriminating documents are on some encrypted drive, it's not like that defeats discovery. You're supposed to decrypt them and hand them over.
Your only real defense against discovery is to not have said it, or to have destroyed all records of it before the hint of discovery wafted on the wind.
We need a law where someone can clearly designate a chat privileged, with severe consequences for mis-use.
How's this any different than any professional license? You're basically paying for preferential treatment from the state in a given subject area.
Because it's got nothing to do with the professional part? Licensing should affect their practice of law, sure, but it shouldn't grant random other privileges.
There's a good summary of the current state of things here: https://www.akerman.com/en/perspectives/ai-privilege-and-wor...
Also worth noting that none of this is binding precedent, so expect this field to evolve over time.
For example, in the medical world, if you are a provider covered by HIPAA, you must have a signed "Business Associate Agreement" with any party that handles the covered protected health information (PHI).
As in "I'm excited to win a lot of money dismantling hallucinated quotations and invalid assumptions"?