OpenAI Unveils Preparedness Framework to Guard Against Catastrophic AI Risks

OpenAI, the company behind the widely known ChatGPT chatbot, has introduced a 27-page “Preparedness Framework” aimed at heading off worst-case scenarios arising from its advanced artificial intelligence technology. The document outlines measures to track, evaluate, and protect against “catastrophic risks” posed by cutting-edge AI models, including the possibility that they could enable cybersecurity disruptions or assist in the creation of weapons.

Under the framework, OpenAI emphasizes a governance structure in which company leadership decides whether to release new AI models, while the board of directors has the final say and can reverse those decisions. Before any decision reaches the board, safety checks are carried out by a dedicated preparedness team headed by MIT professor Aleksander Madry. The team evaluates and monitors risks, summarizing them in scorecards that denote levels of severity.

According to the framework, only models with a post-mitigation risk score of ‘medium’ or below can be deployed, while those scoring ‘high’ or below may continue to be developed. The document is currently in “beta” and will be updated regularly based on feedback.
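
The scorecard rule reduces to a simple threshold check. The sketch below, in Python, shows how such a deployment gate might be expressed; the four severity levels and the helper names are illustrative assumptions for this article, not code or terminology taken from OpenAI’s document.

```python
from enum import IntEnum

class Risk(IntEnum):
    """Hypothetical ordered severity levels for a scorecard."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# Thresholds as described in the framework: deploy only at 'medium'
# or below; continue development only at 'high' or below.
DEPLOY_THRESHOLD = Risk.MEDIUM
DEVELOP_THRESHOLD = Risk.HIGH

def can_deploy(post_mitigation_score: Risk) -> bool:
    return post_mitigation_score <= DEPLOY_THRESHOLD

def can_develop(post_mitigation_score: Risk) -> bool:
    return post_mitigation_score <= DEVELOP_THRESHOLD

# Example: a model scoring 'high' after mitigations may be developed
# further, but not deployed.
assert not can_deploy(Risk.HIGH)
assert can_develop(Risk.HIGH)
```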

The framework also sheds light on OpenAI’s governance structure, which changed after the company’s recent corporate upheaval. The current interim board, which has been criticized for its lack of diversity, is responsible for overseeing OpenAI’s mission to benefit humanity through advanced technology. The safety measures arrive as the tech sector and experts worldwide actively debate how to address the potential risks of AI development.
