Under Trump, AI Scientists Told to Remove ‘Ideological Bias’ From Powerful Models

New Guidelines from NIST Eliminate Focus on AI Safety, Fairness, and Bias

Background

The National Institute of Standards and Technology (NIST) has updated its guidelines for scientists partnering with the US Artificial Intelligence Safety Institute (AISI), removing references to "AI safety," "responsible AI," and "AI fairness." The new guidelines prioritize "reducing ideological bias, to enable human flourishing and economic competitiveness."

Changes to the Cooperative Research and Development Agreement

The updated agreement, sent in early March, no longer encourages researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. This change is concerning, as biases in AI can have severe consequences, particularly for marginalized groups.

Impact on AI Research

The new guidelines also remove mention of developing tools for authenticating content and tracking its provenance, indicating less interest in combating misinformation and deep fakes. Additionally, the agreement adds emphasis on putting America first, asking one working group to develop testing tools "to expand America’s global AI position."

Concerns from Researchers

A researcher at an organization working with the AI Safety Institute expressed concerns about the potential consequences of ignoring these issues. "Unless you’re a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly."

Elon Musk’s Involvement

Elon Musk, who leads a controversial effort to reduce government spending and bureaucracy, has criticized AI models built by OpenAI and Google, posting a meme that labeled them "racist" and "woke."

Political Bias in AI Models

A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter’s recommendation algorithm found that the platform was more likely to show users right-leaning perspectives.

Conclusion

The changes in NIST’s guidelines for AI research are concerning, as they may lead to the development of biased and unfair AI systems. The emphasis on reducing ideological bias and prioritizing human flourishing and economic competitiveness may not be enough to ensure the responsible development of AI.

Frequently Asked Questions

Q: What is the US Artificial Intelligence Safety Institute (AISI)?
A: The AISI is a research organization focused on ensuring the safety and fairness of artificial intelligence systems.

Q: What are the new guidelines for AI research?
A: The new guidelines eliminate references to "AI safety," "responsible AI," and "AI fairness," and prioritize "reducing ideological bias, to enable human flourishing and economic competitiveness."

Q: What are the potential consequences of ignoring AI bias?
A: Ignoring AI bias can lead to the development of unfair and discriminatory AI systems, which can have severe consequences for marginalized groups.

Q: Is Elon Musk involved in the development of AI?
A: Yes, Elon Musk is involved in AI development through his companies, most directly xAI, which competes with OpenAI and Google.
