An AI expert has accused OpenAI of rewriting its history and being overly dismissive of safety concerns.

Former OpenAI policy researcher Miles Brundage criticized the safety and alignment document the company published this week. The document describes OpenAI as striving for artificial general intelligence (AGI) in many small steps rather than making “one giant leap,” saying that this process of iterative deployment will allow it to catch safety issues and examine the potential for misuse of AI at each stage.


Among the many criticisms of AI technology like ChatGPT, experts are concerned that chatbots will give inaccurate information regarding health and safety (like the infamous issue with Google’s AI search feature, which instructed people to eat rocks) and that they could be used for political manipulation, misinformation, and scams. OpenAI in particular has attracted criticism for a lack of transparency in how it develops its AI models, whose training data can contain sensitive personal data.

The release of the OpenAI document this week seems to be a response to these concerns. The document implies that the development of the earlier GPT-2 model was “discontinuous,” and that the model was not initially released due to “concerns about malicious applications,” but says the company will now move toward a principle of iterative development instead. Brundage, however, contends that the document alters the narrative and is not an accurate depiction of the history of AI development at OpenAI.

“OpenAI’s release of GPT-2, which I was involved in, was 100% consistent + foreshadowed OpenAI’s current philosophy of iterative deployment,” Brundage wrote on X. “The model was released incrementally, with lessons shared at each step. Many security experts at the time thanked us for this caution.”

Brundage also criticized the company’s apparent approach to risk based on this document, writing: “It feels as if there is a burden of proof being set up in this section where concerns are alarmist + you need overwhelming evidence of imminent dangers to act on them – otherwise, just keep shipping. That is a very dangerous mentality for advanced AI systems.”

This comes at a time when OpenAI is under increasing scrutiny, facing accusations that it prioritizes “shiny products” over safety.