That’s just classic OpenAI trying to make us believe they’re closing in on AGI…
Like all the “so-called” research from them and Anthropic about safety and alignment, claiming their tech is so incredibly powerful that guardrails need to be put on it.
>Our mitigations include safety training, automated monitoring, trusted access for advanced capabilities, and enforcement pipelines including threat intelligence.