(1) For non-lawyers who use these skills/connectors/whatchamacallits to try to get legal advice, their communications are not protected by attorney-client privilege. This will absolutely bite some people in the ass.
(2) If a lawyer uses this with confidential client information (which, to the uninitiated, doesn't just mean SSNs and bank account numbers, but "all information relating to the representation of a client") and forgets to toggle off "Help improve Claude" in their settings, they have possibly (maybe even likely) committed malpractice.[1]
[1] https://www.americanbar.org/content/dam/aba/administrative/p...
> Judge Rakoff of the Southern District of New York — addressing “a question of first impression nationwide” — ruled that written exchanges between a criminal defendant and generative AI platform Claude were not protected by attorney-client privilege or the work product doctrine.
There's much more to it than the one-liner I pulled out, but safe to say: don't put your legal defense etc. (or elements of it) into AI, or rely on it, unless you want it discovered.
(not a lawyer, unlike OP, who might be able to refine what I highlighted with more precision)
If somebody Googles "best attorney for murder NYC" a day after a murder is committed but before any case is filed against them (so they clearly had some reason to expect that case), could that be used as evidence?
Shouldn't that have been relatively clear to all parties involved? Maybe not to the defendant, who's apparently clueless.
The AI platform is not an attorney. A defendant's communications with an AI platform are therefore not communications between a client and their attorney, nor will the AI output constitute attorney "work product" because the AI platform is not an attorney.
Doesn't really come across as a novel problem, aside from AI being involved. I'm sure countless defendants have made the stupid mistake of talking about the facts of their case to persons other than their attorney, and those communications came back to bite them in the ass when discovered.
Explains why so many let loose afterwards ;) (joking)
Does anyone know if there exists any OPSEC procedure for using third-party tools like this for my own concerning legal questions, one that is both ethical and lets me be confident my interactions won't land in discovery documents?
So basically if you use any of the CLI tools, there is nothing for OpenAI, Anthropic, etc. to give the courts.
Conversations in online ChatGPT (especially the free version) are apparently cached by OpenAI on their servers. (I am not sure whether Claude Desktop caches conversations locally or in the cloud as well; read the fine print if it matters!)
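Worth noting that "CLI, therefore nothing server-side" cuts both ways: the transcripts may instead sit on your own disk, where they are just as discoverable. A minimal sketch of checking for that, with the caveat that the directory locations below are assumptions based on commonly reported defaults (Claude Code is often said to keep session transcripts under `~/.claude/projects`; the Codex CLI under `~/.codex/sessions`) and should be verified against your own install:

```python
from pathlib import Path

# Assumed locations only -- check your tool's docs; these paths are not
# guaranteed and may change between versions.
CANDIDATE_DIRS = [
    Path.home() / ".claude" / "projects",
    Path.home() / ".codex" / "sessions",
]

def local_transcripts(dirs=CANDIDATE_DIRS):
    """Return transcript-looking files (.jsonl/.json) found in the given dirs."""
    found = []
    for d in dirs:
        if d.is_dir():
            found += sorted(d.rglob("*.jsonl")) + sorted(d.rglob("*.json"))
    return found

if __name__ == "__main__":
    hits = local_transcripts()
    if hits:
        print(f"{len(hits)} local transcript file(s) found; these are discoverable artifacts:")
        for p in hits[:10]:
            print("  ", p)
    else:
        print("no local transcript files found in the candidate directories")
```

The point being that "local only" is a retention choice, not a privilege shield: a subpoena can reach your laptop as easily as a vendor's servers.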
This is a very narrow exemption, however.
(You would also want to make sure you're using a paid AI plan with contractually guaranteed privacy protections, otherwise it could be construed as third-party communications, which implicitly waives privilege.)
See: Warner v. Gilbarco, Inc.
Isn't that a fundamental misunderstanding? Would "OPSEC" like that amount to destruction of evidence or contempt of court or something like that?
Like if all your incriminating documents are on some encrypted drive, it's not like that defeats discovery. You're supposed to decrypt them and hand them over.
Your only real defense against discovery is to not have said it, or to have destroyed all records of it before the hint of discovery wafted on the wind.
We need a law where someone can clearly designate a chat privileged, with severe consequences for mis-use.
How's this any different than any professional license? You're basically paying for preferential treatment from the state in a given subject area.
Because it's got nothing to do with the professional part? Licensing should affect their practice of law, sure, but it shouldn't grant random other privileges.
There's a good summary of the current state of things here: https://www.akerman.com/en/perspectives/ai-privilege-and-wor...
Also worth noting that none of this is binding precedent, so expect this field to evolve over time.
For example in the medical world if you are a provider covered by HIPAA you must have a signed "Business Associate Agreement" with any party that handles the covered protected health information (PHI).
As in "I'm excited to win a lot of money dismantling hallucinated quotations and invalid assumptions"?
Just a few of the perps: Hisham Abugharbieh (Florida student murders), Jonathan Rinderknecht (Palisades Fire arson), Phoenix Ikner (FSU shooter), Ryan Schaefer (Missouri State vandalism)
There's also that case involving somebody who, I think, used to be in the NFL and was using ChatGPT to try to hide his wife's body, or something, iirc.
Digital evidence has been huge for the last couple of decades, and this is no different...
Also there was somebody who was just recently sentenced to life in prison for AI CSAM
But yeah I'm sure "this is just not gonna happen." lol
Curious whether Thomson Reuters (Westlaw) felt threatened, if they were this compelled to moan about it. All it does is make me wonder how well these skills would perform when paired with Lexis (if that's even possible) instead of Westlaw.
The only issue is that in some jurisdictions, like the UK, you can't just offer someone legal advice without being SRA accredited or FCA regulated. I.e., this would effectively make Anthropic a claims management firm under UK law.
> Under article 89I of Financial Services and Markets Act 2000 (Regulated Activities) Order 2001 ("The Order"), advising a claimant or potential claimant, investigating a claim and representing a claimant, in relation to a financial services or financial product claim is a defined regulated activity.
https://www.fca.org.uk/freedom-information/dual-regulation-c...
I'm a bit bothered by this line. Does it mean this is based on customers' sessions? Are they entitled to build knowledge bases for every profession, topic, and workflow in the world using customer data?
It sounded far-fetched back then, and on the face of it illegal, but now it's just common sense, I imagine.
I'm just wondering how committed they'll be. I guess the edge some startups still have is the fear that product suites from OpenAI / Anthropic / etc. will go the way of Google products: a year or two, then straight to the morgue.
I see this as a strong case for private AI, or an in-house stack.
Or I have to be missing something.
`/loop 2days /create-new-{insert-industry}-md-files`
This is only for PR. No one checks what's in those docs, or if these are real, valid or ethical. The goal here is for all news outlets to pick them up. You're not the audience.
Given the amount of free PR they can get from some AI-generated .md files, I'd probably do the same if I were in their boat.
Right now, I don't think any other AI company generates as much slop as Anthropic does.
Each cycle gets shorter and shorter to sustain the high.
But still, a TSMC style pure play model provider would win huge business in the space given how many application companies are being eaten by model companies.
Harvey is valued at $11b
The life of every thin-wrapper company will be the same: Anthropic/OpenAI will just cut out the middleman as soon as they see potential.
Landlords, tenants, vendors, business and former romantic partners, clients, banks, even your local gym is way more likely to try to fuck you over than the government is.
I don't mean 'frivolous' like prisoners who file pro se about their ice cream melting [1], but a level or two above that: suits that cost time and money to produce records and testimony to defend, even if nary a dime is paid out. Basically, ask GPT to figure out the terms and theories to file so your lawsuit gets accepted, done by poor people who cannot afford to post $ or repay if they lose. Aka "asymmetric warfare" that benefits the little guy, just like the kind private equity or other terrible corporations wield against the poor via "mandatory arbitration" clauses, damages caps, and similar rules that always benefit corporations.
1. https://www.deseret.com/1994/3/21/19098386/melted-ice-cream-...
First step out of line and that account along with anything remotely connected will be banned to oblivion.
Given they share models on Azure, Anthropic will have someone at Microsoft on speed dial.
I've even seen disconnected commit hashes disappear during their security responses which the repo owner has no way of removing.
I half-suspect they threatened him and he stuck to his guns.
er, wait