Google Lifts Ban on AI for Weapons and Surveillance

Google announced on Tuesday that it is overhauling the principles governing how it uses artificial intelligence and other advanced technology. The company removed language promising not to pursue certain technologies that may cause harm to people, including weapons, surveillance systems, and technologies that contravene internationally accepted principles of human rights and international law.

Background

The changes were disclosed in a note appended to the top of a 2018 blog post unveiling the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads. In a blog post on Tuesday, a pair of Google executives cited the increasingly widespread use of AI, evolving standards, and geopolitical battles over AI as the “backdrop” to why Google’s principles needed to be overhauled.

Original Principles

Google first published the principles in 2018 as it moved to quell internal protests over its decision to work on a US military drone program. In response to the backlash, the company declined to renew the government contract and announced a set of principles to guide future uses of its advanced technologies, such as artificial intelligence. Among other measures, the principles stated Google would not develop weapons, certain surveillance systems, or technologies that undermine human rights.

New Principles

But in an announcement on Tuesday, Google did away with those commitments. The new webpage no longer lists a set of banned uses for Google’s AI initiatives. Instead, the revised document offers Google more room to pursue potentially sensitive use cases. It states Google will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Google also now says it will work to “mitigate unintended or harmful outcomes.”

Revised Goals

The revised principles prioritize pursuing “bold, responsible, and collaborative AI initiatives” that respect intellectual property rights. Google executives James Manyika and Demis Hassabis wrote, “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

Employee Concerns

Multiple Google employees expressed concern about the changes in conversations with WIRED. “It’s deeply concerning to see Google drop its commitment to the ethical use of AI technology without input from its employees or the broader public, despite long-standing employee sentiment that the company should not be in the business of war,” says Parul Koul, a Google software engineer and president of the Alphabet Workers Union-CWA.

Conclusion

The revised principles have drawn criticism from Google employees and outside observers, who worry that the company’s new latitude could open the door to work on weapons or surveillance systems it had previously ruled out.

FAQs

Q: What does the revised AI principles document state?
A: The revised document states that Google will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”

Q: What was removed from the original principles?
A: The original principles included language promising not to develop weapons, certain surveillance systems, or technologies that undermine human rights. This language has been removed from the revised principles.

Q: What are Google’s new goals?
A: Google’s new goals include pursuing “bold, responsible, and collaborative AI initiatives” that respect intellectual property rights.

Q: Why are Google employees concerned about the changes?
A: Employees worry that the removal of explicit prohibitions gives Google latitude to pursue weapons or surveillance projects that may violate human rights, and that the change was made without input from the workforce or the public.
