OpenAI has established a safety committee that will operate as an independent body overseeing the company's safety and security practices.

OpenAI has announced major updates to its safety and security protocols, including the creation of a new independent oversight committee. Notably, CEO Sam Altman is no longer a member of the safety committee, a departure from the previous structure.

The newly established Safety and Security Committee (SSC) will be chaired by Zico Kolter, Director of the Machine Learning Department at Carnegie Mellon University. Other members include Adam D'Angelo, CEO of Quora; retired US Army General Paul Nakasone; and Nicole Seligman, former Executive Vice President and General Counsel of Sony Corporation.

The committee replaces the earlier Safety and Security Committee formed in June 2024, which included Altman among its members and was charged with making recommendations on critical safety and security decisions for OpenAI's projects and operations.

The SSC's role extends beyond making recommendations: it will oversee safety evaluations for major model releases and supervise launches, with the authority to delay a release until safety concerns are addressed.

The reorganization follows increased scrutiny of OpenAI's commitment to AI safety. The company has drawn criticism for disbanding its Superalignment team and for the departures of key safety-focused personnel. Removing Altman from the safety committee appears intended to allay concerns about potential conflicts of interest in the company's safety governance.

OpenAI's broader safety initiative also aims to strengthen security measures, increase transparency about its operations, and foster collaboration with external organizations. The company has already partnered with the US and UK AI Safety Institutes to research emerging AI safety risks and develop standards for reliable AI.