OpenAI’s Approach to AI Safety Criticized by Former Policy Researcher
A Disagreement on the Road to AGI
Miles Brundage, a high-profile former OpenAI policy researcher, has taken to social media to accuse OpenAI of "rewriting the history" of its approach to deploying potentially risky AI systems. His concerns center on a recent OpenAI document outlining the company's philosophy on AI safety and alignment, specifically its view of the development of artificial general intelligence (AGI).
OpenAI’s Philosophy on AGI
According to OpenAI, the development of AGI is a "continuous path" that requires "iteratively deploying and learning" from AI technologies. The company believes that safety lessons can be learned from the deployment of current AI systems, with the goal of making the next system safer and more beneficial.
GPT-2 and the Issue of Safety
Brundage, who previously led policy research at OpenAI, disputes the company's framing. He points to the 2019 release of GPT-2, a language model that can answer questions and generate text, which OpenAI rolled out incrementally, initially withholding the full model and citing the risk of malicious use. That caution drew criticism from some in the AI industry at the time, who argued that the threat posed by GPT-2 had been exaggerated. Brundage, however, maintains that the staged release was consistent with iterative deployment and warranted given what was known then, and he objects to OpenAI's new document treating that caution as an overreaction.
A Call for Caution
Brundage argues that OpenAI's approach is flawed because it sets up a burden of proof under which safety concerns are dismissed as "alarmist" unless overwhelming evidence of imminent danger is presented. He warns that this mentality is "very dangerous" when applied to advanced AI systems.
A History of Criticism
OpenAI has previously been criticized for prioritizing "shiny products" over safety and for rushing releases to beat rival companies to market. Its handling of AI safety and policy has also drawn scrutiny, with some safety researchers departing for rival labs.
Conclusion
The debate over AI safety and alignment is a critical one, with far-reaching implications for the development of AGI. As the technology continues to evolve, it is essential that companies like OpenAI prioritize caution and transparency in their approach to AI deployment. The concerns raised by Brundage and others should not be taken lightly, and OpenAI would do well to reconsider its approach to AI safety and alignment.
FAQs
Q: What is AGI?
A: AGI refers to AI systems that can perform any intellectual task that a human can.
Q: What is OpenAI’s approach to AI safety and alignment?
A: OpenAI believes that the development of AGI is a continuous path that requires iteratively deploying and learning from AI technologies.
Q: What is the issue with OpenAI’s approach to AI safety?
A: Critics argue that OpenAI is prioritizing product releases over safety, and that its approach to AI safety is flawed.
Q: Who is Miles Brundage?
A: Miles Brundage is a former OpenAI policy researcher and head of policy research. He has spoken out against OpenAI’s approach to AI safety and alignment.