AI Law - International Review of Artificial Intelligence Law
G. Giappichelli Editore

29/05/2024 - OpenAI Forms ‘Preparedness’ Team to Ensure AI Safety and Security

argument: Notizie/News - Digital Governance

According to articles in The Wall Street Journal, OpenAI has established a new "Preparedness" team to address the safety and security risks associated with advanced AI models. The team is tasked with assessing the potentially catastrophic risks posed by highly capable foundation models, often referred to as Frontier AI. These models promise significant benefits but also carry risks that must be carefully managed to prevent misuse.

OpenAI has highlighted the challenges in regulating Frontier AI, which include the unexpected emergence of dangerous capabilities, the difficulty of preventing misuse once models are deployed, and the challenge of containing the proliferation of these capabilities. The Preparedness team aims to create a Risk-Informed Development Policy, focusing on evaluation, monitoring, and the establishment of oversight mechanisms for the development of these advanced AI models.

To incentivize innovative solutions for managing AI risks, OpenAI has also launched a preparedness challenge. Participants can submit their ideas through a survey, and the top ten submissions will be awarded $25,000 in API credits.

The initiative is part of OpenAI's broader strategy of engaging with governments and stakeholders worldwide to ensure that AI development aligns with safety and ethical standards. This proactive step is seen as crucial in managing the rapid advancement of AI technology and preparing for harms that could arise from the misuse of AI.