Concerns Over a U.S. Manhattan Project-Style Push for Artificial General Intelligence (AGI)
Experts Warn of Potential Risks and Dangers
In a recently published policy paper, former Google CEO Eric Schmidt, Scale AI CEO Alexandr Wang, and Center for AI Safety Director Dan Hendrycks warn of the risks of a U.S. Manhattan Project-style push to develop AGI, or "superhuman" intelligence.
The Concerns
The paper, titled "Superintelligence Strategy," argues that an aggressive bid by the U.S. to exclusively control superintelligent AI systems could prompt fierce retaliation from China, potentially in the form of a cyberattack, destabilizing international relations. In the authors' view, rival nations would read a Manhattan Project-style AGI effort as a hostile act, setting off a destabilizing arms race.
The Comparison to Nuclear Weapons
The authors liken the development of AGI to the development of nuclear weapons, noting that global powers do not seek a monopoly on nuclear arms because the attempt could trigger a preemptive strike from an adversary. By the same logic, they argue, the U.S. should be cautious about racing to dominate extremely powerful AI systems.
A New Approach: Mutual Assured AI Malfunction (MAIM)
The authors propose a concept they call Mutual Assured AI Malfunction (MAIM), under which governments could proactively disable threatening AI projects rather than wait for adversaries to weaponize AGI. This posture would prioritize defensive strategies and deter other countries from pursuing a destabilizing bid for superintelligent AI.
A Measured Approach
Rather than an aggressive race for dominance, the paper recommends a third way: a measured approach to developing AGI that prioritizes defensive strategies. This stance is particularly notable coming from Schmidt, who has previously been vocal about the need for the U.S. to compete aggressively with China on advanced AI systems.
Conclusion
The Trump administration’s push for a Manhattan Project-style effort to develop AGI is likely to meet resistance from experts like Schmidt, Wang, and Hendrycks, who warn of the risks of such an approach. The paper’s authors suggest that a more measured approach, centered on defensive strategies and deterrence, would be the wiser choice.
FAQs
Q: What is Artificial General Intelligence (AGI)?
A: AGI refers to a hypothetical AI system capable of performing any intellectual task that a human can.
Q: What is the concern about a U.S. Manhattan Project-style push for AGI?
A: The concern is that it could lead to a destabilizing arms race and potentially prompt hostile retaliation from other countries, including China.
Q: What is Mutual Assured AI Malfunction (MAIM)?
A: MAIM is a concept proposed by the authors, in which governments could proactively disable threatening AI projects rather than waiting for adversaries to weaponize AGI.
Q: What is the recommended approach to developing AGI?
A: The authors suggest a measured approach, prioritizing defensive strategies and deterrence, rather than an aggressive bid to develop AGI.