
OpenAI is launching an ‘independent’ safety board that can stop its model releases

Vector illustration of the ChatGPT logo. (Image: The Verge)

OpenAI is turning its Safety and Security Committee into an independent “Board oversight committee” with the authority to delay model launches over safety concerns, according to an OpenAI blog post. The committee recommended establishing the independent board after a recent 90-day review of OpenAI’s “safety and security-related processes and safeguards.”

The committee, which is chaired by Zico Kolter and includes Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI says….

