TechBriefAI

OpenAI Launches Invite-Only Bug Bounty for GPT-5 Biological Risks

Executive Summary

OpenAI has announced a specialized "Bio Bug Bounty" program specifically for its upcoming GPT-5 model. The initiative invites vetted researchers in AI security and biology to test the model's safeguards against misuse for biological and chemical risks. The primary challenge is to find a "universal jailbreak" prompt that can bypass the safeguards on all ten of OpenAI's bio/chem safety questions, with a top reward of $25,000. This program is part of OpenAI's effort to proactively identify and strengthen safety protections for its frontier AI models.

Key Takeaways

* Initiative Name: GPT-5 Bio Bug Bounty.

* Primary Function: A focused security program inviting researchers to probe the GPT-5 model for vulnerabilities related to biological and chemical risks.

* Key Challenge: To identify a single, universal jailbreak prompt that elicits answers to all ten of OpenAI's bio/chem safety questions in a clean chat session.

* Target Audience: Researchers with experience in AI red teaming, security, or chemical and biological risk.

* Rewards:

  * $25,000 for the first true universal jailbreak.

  * $10,000 for the first team to answer all ten questions using multiple prompts.

  * Discretionary smaller awards for partial successes.

* Availability: The program is invite-only. Applications are open until September 15, 2025, with testing beginning September 16, 2025.

* Stated Goal: To strengthen safeguards for advanced AI capabilities in biology and make frontier AI safer by proactively identifying weaknesses.

Strategic Importance

This initiative signals OpenAI's proactive approach to mitigating high-stakes "biorisks" in its next-generation models, addressing public and regulatory safety concerns ahead of a potential wider release of GPT-5.

Original article