upvote
Releasing the model to bad actors at the same time as the major OS, browser, and security companies would be one idea. But some might consider that "messed up" too, whatever you mean by that. But in terms of acting in the public benefit, it seems consistent to work with companies that can make significant impact on users' security. The stated goal of Project Glasswing is to "secure the world's most critical software," not to be affirmative action for every wannabe out there.
reply
I don't trust a corpo to choose what is "most critical".

That's what's messed up about it.

reply
That is a fine stance to hold, but some facts are still true regardless of your view on large businesses.

For example, it will benefit more people to secure Microsoft or Amazon services than it would to secure a smaller, less corporate player in those same service ecosystems.

You could go on to argue that the second order effects of improving one service provider over another chooses who gets to play, but that is true whether you choose small or large businesses, so this argument devolves into “who are we to choose on behalf of others”.

Which then comes back to “we should secure what the market has chosen in order to provide the greatest benefit.”

reply
deleted
reply
Let's let the California HSR committee do it instead!
reply
I'm too much of an anarchist for that.

I believe what I said:

> I think it would be net better for the public if they just made Mythos available to everyone.

reply
10 Axios's within 5 days.
reply
This is already happening. But not everyone has access to the tools to protect against it.
reply
Yeah, I'm unsure why the OP thinks that massive chaos would somehow be "better for the public."
reply
deleted
reply
This is not the only model. I assure you exploits are being found and taken advantage of without it, possibly even ones that this model is not even capable of detecting.

Sounds like people here are advocating a return to security through obscurity which is kind of ironic.

reply
You could release it with cyber-capability refusals that get unlocked when you apply for approval.
reply
Damned if you do, damned if you don’t. “Extremely capable model that can find exploits” has always been a fear, and the first company to release one publicly will cause a bloodbath. But it will also be the first company to prove itself.
reply
> picking who gets to benefit from their newly enhanced cybersecurity capabilities

You could say this about coordinated disclosure of any widespread 0-day or new bug class, though

reply
That's a really good point!

But:

- Coordinated disclosure is ethically sketchy. I know why we do it, and I'm not saying we shouldn't. But it's not great.

- This isn't a single disclosure. This is a new technology that dramatically increases capability. So even if we thought coordinated disclosure was unambiguously good, I think we'd still need to have a new conversation about Mythos.

reply
So private companies shouldn’t get to determine who they provide services to? Assuming no extremely malicious intent, I’d be fine if they said it was only going to McDonalds because the founders like Big Macs.
reply
Totally agree, it’s an uncomfortable compromise.
reply
Not only companies; they're going to be taking applications from individual researchers. No doubt it will be granted only to established researchers, effectively locking out graduates and those early in their careers. This is bad.
reply
They are not unique in this. Apple and Tesla have similar programs. More nuance is warranted here. They are trying to balance the need to enable external research with the need to protect users from arbitrary 3rd parties having special capabilities that could be used maliciously.
reply
I understand that, but Anthropic is doing nothing to throw those grassroots researchers a lifejacket. This is the beginning of the end for independents, if it continues on this trajectory then Anthropic gets to decide who lives and who dies. Who says they should be allowed to decide that?
reply
Or (and hear me out), they are close to an IPO and want to ensure that there is a world-ending threat around which they can cluster the biggest names, with themselves leading that group.

I think I just broke my cynicism meter :-(

reply
You might want to recalibrate your cynicism meter. As strange as it might sound, most companies act according to their principles while the founding team is at the helm. The garbage policies tend to materialize once the company is purchased by, or merged into, another entity whose leadership doesn't care about the original aim of the organization. They just want "line go up".

Also, it makes sense that OpenAI feels the pressure of getting to an IPO because of its financial structure. I don't know whether Anthropic operates under a similar set of influences (meaning it could be either; I just don't know).

reply
That can simultaneously be true and still be the best of the bad options (excluding destroying the model altogether). These models may prove quite dangerous. That they did this instead of selling their services to every company at a huge premium says a lot about Anthropic's culture.
reply
That's just in line with their ethics. They also maintain that countries other than the US should not have SOTA AI capabilities.
reply
> It's messed up that Anthropic simultaneously claims to be a public benefit corp and is also picking who gets to benefit from their newly enhanced cybersecurity capabilities. It means that the economic benefit is going to the existing industry heavyweights.

It's messed up that the US Government simultaneously claims to be a public benefit and is also picking who gets to benefit from their newly enhanced nuclear capabilities.

-- someone in 1945, probably

reply
I mean it was messed up, which is why the other world powers raced to develop their own capabilities.

And it remains messed up to this day - some countries get to be under their own nuclear umbrella, while others don't.

This kind of selective distribution of superpowers doesn't lead to great outcomes.

reply
In that case in particular, it led to 80 years of relatively calm geopolitics, kinetically speaking, all things considered. I'm not sure I want to live through an AI cold war, but it sure seems I don't get to choose.
reply
> relatively calm geopolitics kinetically

Relative to what?

There's this trend in history that every hundred years there's a giant blow up, lots of violence, followed by peace.

It's likely that we would have had 80 years of relative calm due to that cycle even if nukes hadn't happened.

reply
> Relative to what?

to WW1 and WW2.

reply
What? The economic benefit of system-critical software not totally breaking in a few weeks goes to roughly everyone. Insofar as Apple/Google/MS/the Linux Foundation economically benefit from being able to patch pressing critical software issues up front (I'm not even sure what that's supposed to mean; it's not like anyone is going to use more or less Windows or Android if this happened any other way), that's a good thing for everyone, and the economic benefits manifest for everyone.
reply
In the long term, you're right, but in the short term, it's going to be a bloodbath.
reply
That's assuming the model is actually as good as they say it is. Given the number of AI researchers over the past 3 years claiming supernatural capability from the LLMs they have built, my Bayesian skepticism is through the roof.
reply
Don't confuse Bayesian skepticism with plain old contrarian bias. A true Bayesian updates their priors, and I'd say this is an appropriate time to do so. Also, don't confuse what they sell with what they have internally.
reply
Anthropic has behaved the least like this of the AI companies.
reply
They made a claim that 100% of code would be AI-generated within a year, over a year ago.
reply
That was a prediction. It was not a claim of their current capabilities. If that is the one you reach for then I feel my point has been made.
reply
They were right, it's hit 100% at a number of large tech companies. (They missed their initial prediction of 90% 6 months ago, because the models then available publicly weren't capable enough.)
reply
Please tell me those companies so I can find alternatives. I'm using AI every day and there's no way I would trust it to do that.
reply
The transition is pretty complete at e.g. Google and Meta, IIUC. Definitely whoever builds the AI tools you're using every day isn't writing code by hand.
reply
While I agree with you, in some ways I'd argue that this is just them being transparent on what probably would inevitably already happen at the scale of these corporate overlords and modern monarchs.

There will always be a more capable technology in the hands of the few who hold the power, they're just sharing that with the world more openly.

reply
If you're a maintainer, you can apply here:

https://claude.com/contact-sales/claude-for-oss

... As mentioned in the article.

reply
Better security is a good thing, not a bad thing, regardless of which companies are more difficult to hack. Hemming and hawing over a clear and obvious good is silly.
reply
Not really. It’s a lot better than the anarchy of releasing it and having a bunch of bad people with money use it to break software that everyone’s lives depend on. Many technologies should be gatekept because they’re dangerous. Sometimes that’s permanent, like a nuclear weapon. Sometimes that’s temporary, like a new LLM that’s good at finding exploits. It can be released to the wider public once its potential for damage has been mitigated.
reply
Cue the "First time?" meme.
reply