OpenAI is launching an ‘independent’ safety board that can stop its model releases

OpenAI is turning its Safety and Security Committee into an independent “Board oversight committee” with the authority to delay model launches over safety concerns, according to an OpenAI blog post. The committee recommended creating the independent board after a recent 90-day review of OpenAI’s “safety and security-related processes and safeguards.”

The committee, which is chaired by Zico Kolter and includes Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI says. OpenAI’s full board of directors will also receive “periodic briefings” on “safety and security matters.”

The members of OpenAI’s safety committee are also members of the company’s broader board of directors, so it’s unclear how independent the committee actually is or how that independence is structured. We’ve asked OpenAI for comment.

By establishing an independent safety board, OpenAI appears to be taking an approach somewhat similar to Meta’s Oversight Board, which reviews some of Meta’s content policy decisions and can issue rulings that Meta has to follow. None of the Oversight Board’s members are on Meta’s board of directors.

The review by OpenAI’s Safety and Security Committee also helped identify “additional opportunities for industry collaboration and information sharing to advance the security of the AI industry.” The company also says it will look for “more ways to share and explain our safety work” and for “more opportunities for independent testing of our systems.”


