OpenAI Security Expansion: $10 Million in Grants, 15 Giants Sign On, GPT-5.4-Cyber Opened to US and UK Governments


OpenAI has officially announced the first participants in the “Trusted Access for Cyber” program, opening GPT-5.4-Cyber’s defensive capabilities to open-source security teams and vulnerability research organizations through a $10 million API grant. Fifteen companies, including Bank of America, BlackRock, Citibank…, have announced their support.

(Background summary: OpenAI launches dedicated cybersecurity model GPT-5.4-Cyber: fixed 3,000 high-risk vulnerabilities, competing with Claude Mythos)
(Additional background: Anthropic’s new model Mythos is so powerful that even their own team hesitates to deploy it: capable of autonomously attacking global Linux systems and generating complete vulnerability chains within hours)

Table of Contents


  • $10 Million Grant: Enabling Open-Source Communities to Access the Most Advanced AI
  • Endorsed by 15 Giants: A Cross-Industry Alliance of Finance, Tech, and Security Firms
  • GPT-5.4-Cyber Opened for Government Evaluation in the US and UK

On the 16th, OpenAI officially announced the progress of the first phase of its “Trusted Access for Cyber” program, which implements a tiered mechanism that scales access with demonstrated trust, so that cutting-edge defensive AI capabilities can reach security practitioners of all sizes.

The core premise of the initiative is simple: top-tier cybersecurity capabilities should be widely accessible to defenders, but access should scale with each organization’s trust level, verification depth, and security safeguards.

$10 Million Grant: Enabling Open-Source Communities to Access the Most Advanced AI

Through its “Cybersecurity Grant Program,” OpenAI has committed a total of $10 million in API usage quota, specifically to support organizations without 24/7 security teams, which make up the vast majority of organizations in practice.

The first beneficiaries include four organizations with diverse focuses:

  • Socket and Semgrep focus on supply chain security, conducting systematic scans for malicious code and known vulnerabilities in dependency packages
  • Calif and Trail of Bits combine cutting-edge models with vulnerability research experts, delving into binary reverse engineering and high-risk vulnerability discovery

OpenAI states that it will continue seeking collaborations with partners proven in open-source software and critical infrastructure sectors.

Endorsed by 15 Giants: A Cross-Industry Alliance of Finance, Tech, and Security Firms

Alongside the grant program, OpenAI also announced a heavyweight list of corporate supporters, including: Bank of America, BlackRock, BNY Mellon, Citibank, Cisco, Cloudflare, CrowdStrike, Goldman Sachs, iVerify, JPMorgan Chase, Morgan Stanley, NVIDIA, Oracle, SpecterOps, and Zscaler.

This list spans traditional finance, cloud infrastructure, security vendors, and chip giants, reflecting OpenAI’s effort to build a cross-industry defense alliance rather than merely serving the cybersecurity industry itself.

GPT-5.4-Cyber Opened for Government Evaluation in the US and UK

On the government side, OpenAI has granted access to GPT-5.4-Cyber to the Center for AI Standards and Innovation (CAISI) under the U.S. National Institute of Standards and Technology (NIST) and to the UK AI Security Institute (UK AISI), enabling both agencies to independently evaluate the model’s cybersecurity capabilities and defense mechanisms.

GPT-5.4-Cyber is a model specifically fine-tuned by OpenAI for defensive cybersecurity workflows, supporting binary reverse engineering, vulnerability scanning, and malware analysis. It currently serves thousands of authenticated individual defenders and hundreds of teams.
