Google scraps promise not to develop AI weapons

Google Updates AI Principles, Removes Commitments on Harmful Use of Technology

Changes to AI Ethics Guidelines

Google has updated its artificial intelligence (AI) principles, removing a commitment not to use the technology in ways that "cause or are likely to cause overall harm." The revised guidelines also drop the section in which Google pledged not to design or deploy AI for use in surveillance, weapons, and technology intended to injure people.

New "Core Tenets" for AI Development

Coinciding with these changes, Google DeepMind CEO Demis Hassabis and Google’s senior executive for technology and society, James Manyika, published a blog post outlining new "core tenets" for AI development. These tenets focus on innovation, collaboration, and "responsible" AI development, but make no specific commitments.

Call for Global Cooperation and AI Leadership

The blog post emphasizes the importance of global cooperation and leadership in AI development, stating, "There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."

DeepMind’s Acquisition and Historical Commitments

Hassabis joined Google after its acquisition of DeepMind in 2014. In a 2015 interview with Wired, he said the acquisition agreement included terms preventing DeepMind technology from being used in military or surveillance applications.

Conclusion

The changes to Google’s AI principles and the introduction of new core tenets aim to refocus the company’s approach to AI development and deployment. While the removal of commitments not to use AI in harmful ways has raised concerns, the emphasis on global cooperation and leadership in AI development is presented as a step towards a more responsible and ethical approach.

Frequently Asked Questions

Q: What changes were made to Google’s AI principles?
A: The company removed its commitment not to use AI in ways that "cause or are likely to cause overall harm" and dropped the section that committed it not to design or deploy AI for use in surveillance, weapons, and technology intended to injure people.

Q: What are the new "core tenets" for AI development?
A: The new core tenets focus on innovation, collaboration, and "responsible" AI development, but make no specific commitments.

Q: What is the significance of Google’s new approach to AI development?
A: The new approach emphasizes global cooperation and leadership in AI development, guided by core values like freedom, equality, and respect for human rights.

Q: What does the future hold for AI development at Google?
A: The company says its new approach refocuses AI development and deployment around the core tenets, prioritizing what it describes as responsible and ethical use of the technology.
