So there is a safety model watching your behavior for these kinds of things.
It isn't even my intent to naysay their approach. They probably have to do something along those lines to avoid being convicted in the court of public opinion. I just think it's an absurd reality.
The “AI” “technology” is an easy excuse to create an artificial information gap in the era of the interconnected.
Compared to Anthropic, they always have been. Anthropic has never released any open models. Never willingly released Claude Code's source (unlike Codex). Never released their tokenizer.
I'm tired of words being misused. We have hoverboards that do not hover, self-driving cars that do not, actually, self-drive, starships that will never fly to the stars, and "open"… I can't even describe what it's used for, except everybody wants to call themselves "open".
> Developers and security professionals doing cybersecurity-related work or similar activity that could be mistaken by automated detection systems may have requests rerouted to GPT-5.2 as a fallback.
I don't like that trend. I get why they're doing it, but I don't like it.
> OpenAI's terms and policies restrict the use of our services in a number of areas. We have identified activity in your OpenAI account that is not permitted under our policies for: - Cyber Abuse
I raised an appeal, which got denied. To be fair, I think it's close to impossible for someone looking at the chat history to differentiate between legitimate research and malicious intent. I have also applied for the security research program that OpenAI is offering but didn't get any reply on that.
Neither the release post nor the model card seems to indicate anything like this?
aka the perfect marketing ploy
Anthropic is the embodiment of bullshitting to me.
I read Cialdini many decades ago and I am bored by Anthropic.
OpenAI is very clever. With the advent of Claude, OpenAI disappeared from the headlines. Who was this Sam again that everyone was talking about a year ago?
OpenAI has a massive user advantage so that they can simply follow Anthropic’s release cycle to ridicule them.
I think it is really brutal for Anthropic how easily they are getting passed by OpenAI, and it is getting worse for Anthropic with every new GPT version.
OpenAI owns them.