OpenAI officially launched the GPT-5.5 Bio Bug Bounty program on Thursday, April 23, 2026. This initiative was announced alongside the release of the GPT-5.5 model (codenamed “Spud”) and is specifically designed to stress-test the model’s safeguards against high-consequence biological threats.
1. The Challenge: The Five-Question Bio Safety Test
The program is a highly specialized “red-teaming” exercise. OpenAI is inviting elite researchers to bypass the model’s safety filters in a controlled environment.
- The Goal: Researchers must identify a “universal jailbreak”—a single, repeatable prompting technique that forces the model to successfully answer five specific, high-risk biological safety questions in a clean chat without triggering a moderation alert.
- Scope: The bounty is strictly limited to GPT-5.5 running within the Codex Desktop environment.
- Vetted Participation: The program is not open to the general public. It is targeted at researchers with proven experience in AI red-teaming, cybersecurity, or biosecurity. Applicants must be vetted and sign a strict Non-Disclosure Agreement (NDA) before participation.
2. Rewards and Tiers
OpenAI has allocated a dedicated prize pool for this specific safety challenge, separate from its general security bug bounty program.
| Achievement | Reward |
| First True Universal Jailbreak | $25,000 |
| Partial Wins / Significant Leads | Smaller discretionary awards |
| General Vulnerabilities | Referred to standard Safety/Security Bounty ($200–$20,000) |
3. Timeline for 2026
- April 23, 2026: Applications officially opened.
- April 28, 2026: Live testing and sandbox access begin for accepted researchers.
- June 22, 2026: Final deadline for new applications.
- July 27, 2026: Testing period officially concludes.
4. Why GPT-5.5?
The launch of this bounty coincides with the release of GPT-5.5, which OpenAI describes as its most “agentic” and capable model for research and knowledge work.
- Capabilities: GPT-5.5 matches the latency of GPT-5.4 but offers higher intelligence for complex coding and data analysis. It scores 82.7% on Terminal-Bench 2.0, significantly outperforming rivals like Claude Opus 4.7.
- The “Agent” Risk: Because GPT-5.5 is designed to “move across tools” and perform autonomous research, OpenAI is placing a premium on ensuring it cannot be manipulated into assisting with the creation or procurement of biological agents.
