OpenAI's safety group to become 'independent' oversight board without Sam Altman

FILE - The OpenAI logo is displayed on a cell phone with an image on a computer screen generated by ChatGPT's Dall-E text-to-image model, Dec. 8, 2023, in Boston. Copyright AP Photo/Michael Dwyer, File
By Pascale Davies

The move follows increasing scrutiny of OpenAI's safety policies, which Altman has directly influenced as one of the leaders of its committee.


OpenAI is transforming its internal safety committee into an "independent" oversight board, the company said in a blog post on Monday. 

The ChatGPT maker also said that the so-called Safety and Security Committee will be chaired by Carnegie Mellon professor Zico Kolter.

The committee, unveiled in May, originally included CEO Sam Altman but will now have "independent governance".

OpenAI has come under criticism over its safety culture. In June, a group of current and former OpenAI employees published an open letter warning about "the serious risks posed by these technologies".

Several high-profile employees, including co-founder Ilya Sutskever, resigned from the company, citing safety concerns.

A month later, five US senators raised questions about how OpenAI is addressing emerging safety concerns in a letter to Altman.

OpenAI said the "independent" committee would be briefed on new models and that the board could delay the release of any of them.

The company also said that the committee had reviewed the new o1 model, code-named Strawberry, and deemed it "medium risk".

"As part of its work, the Safety and Security Committee… will continue to receive regular reports on technical assessments for current and future models, as well as reports of ongoing post-release monitoring," OpenAI wrote in its blog post. 

"We are building upon our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches," it added. 

Other committee members include Quora CEO Adam D'Angelo, retired US Army General and former NSA chief Paul Nakasone, and former Sony general counsel Nicole Seligman.
