The goal of the new OpenAI program is to support the development of AI-powered cybersecurity capabilities for defenders through grants and other assistance.
OpenAI, the creator of ChatGPT, has launched a $1 million Cybersecurity Grant Program to boost and expand AI-powered cybersecurity capabilities and, in partnership with defenders across the globe, shift the power dynamics of cybersecurity through the application of AI.
The program has three main focus areas: empowering cybersecurity professionals with cutting-edge AI capabilities, improving the cybersecurity effectiveness of AI models, and fostering rigorous discussion of both the challenges and the opportunities at the intersection of AI and cybersecurity.
Some general project ideas include:
- Collect and categorize data to train defensive cybersecurity agents;
- Detect and mitigate social engineering tactics;
- Automate incident classification;
- Recognize and locate security issues in source code;
- Aid network or device forensics;
- Automatically patch cybersecurity vulnerabilities;
- Optimize patch management processes;
- Develop or improve confidential computing on GPUs;
- Create honeypots and deception tech to misdirect or trap attackers;
- Assist in creating signatures and behavior-based malware detections;
- Analyze corporate security controls in comparison with compliance regimes;
- Assist developers in creation of secure by design and secure by default software;
- Help end users to adopt best cybersecurity practices;
- Aid with creation of robust threat models;
- Produce tailored threat intelligence tools;
- Help developers port code to memory-safe languages.
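To make one of the ideas above concrete, automating incident classification could start as simple keyword-based triage. The sketch below is purely illustrative: the categories, keywords, and function names are hypothetical, not part of OpenAI's program, and a funded project would more likely use a trained model or an LLM.

```python
# Minimal illustrative sketch of automated incident classification.
# Categories and keywords are placeholders; a production system would
# use a trained classifier or an LLM rather than keyword matching.

INCIDENT_CATEGORIES = {
    "phishing": ["credential", "suspicious link", "spoofed sender"],
    "malware": ["trojan", "ransomware", "suspicious binary"],
    "access-abuse": ["privilege escalation", "unauthorized login"],
}

def classify_incident(report: str) -> str:
    """Return the category whose keywords best match the report text."""
    text = report.lower()
    scores = {
        category: sum(keyword in text for keyword in keywords)
        for category, keywords in INCIDENT_CATEGORIES.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(classify_incident("User clicked a suspicious link and entered credentials"))
# phishing
```

Even a baseline like this can route tickets to the right response queue; the AI-powered versions the program envisions would replace the keyword table with learned representations of incident reports.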
OpenAI will accept and evaluate applications for funding or other support on a rolling basis. The company will favor practical applications of AI in defensive cybersecurity (tools, methods, processes) over offensive security. Grants will be awarded in increments of $10,000 from the $1 million fund, in the form of API credits, direct funding, and/or equivalents.
Earlier, OpenAI launched a separate grant program to solicit ideas and concepts for keeping future AI systems functioning "within the bounds defined by the law" and to encourage a democratic process for decision-making related to AI systems.