AI: The Growing Concern for Control and Transparency
A Growing Trend of Resistance
There’s a growing trend of people and organizations rejecting the unsolicited imposition of AI in their lives. In December 2023, The New York Times sued OpenAI and Microsoft for copyright infringement. In March 2024, three authors filed a class action in California against Nvidia for allegedly training its AI platform NeMo on their copyrighted work. Two months later, the A-list actress Scarlett Johansson sent a legal letter to OpenAI when she realized its new ChatGPT voice was “eerily similar” to hers.
The Problem is Not the Technology, but the Power Dynamic
The technology isn’t the problem here. The power dynamic is. People understand that this technology is being built on their data, often without their permission. It’s no wonder that public confidence in AI is declining. A recent study by Pew Research shows that more than half of Americans are more concerned than they are excited about AI, a sentiment echoed by a majority of people from Central and South American, African, and Middle Eastern countries in a World Risk Poll.
A New Approach: Red Teaming
In 2025, we will see people demand more control over how AI is used. How will that be achieved? One example is red teaming, a practice borrowed from the military and used in cybersecurity. In a red teaming exercise, external experts are asked to “infiltrate” or break a system. It acts as a test of where your defenses can go wrong, so you can fix them.
The Future of Red Teaming
Red teaming is used by major AI companies to find issues in their models, but isn’t yet widespread as a practice for public use. That will change in 2025. The law firm DLA Piper, for instance, now uses red teaming with lawyers to test directly whether AI systems are in compliance with legal frameworks. My nonprofit, Humane Intelligence, builds red teaming exercises with nontechnical experts, governments, and civil society organizations to test AI for discrimination and bias. In 2023, we conducted a 2,200-person red teaming exercise that was supported by the White House. In 2025, our red teaming events will draw on the lived experience of regular people to evaluate AI models for Islamophobia, and for their capacity to enable online harassment against women.
A Right to Repair
Overwhelmingly, when I host one of these exercises, the most common question I’m asked is how we can evolve from identifying problems to fixing problems ourselves. In other words, people want a right to repair.
What Does a Right to Repair Look Like?
An AI right to repair might look like this—a user could have the ability to run diagnostics on an AI, report any anomalies, and see when they are fixed by the company. Third-party groups, like ethical hackers, could create patches or fixes for problems that anyone can access. Or, you could hire an independent accredited party to evaluate an AI system and customize it for you.
Conclusion
2024 was the year the world woke up to the pervasiveness and impact of AI. 2025 is the year we demand our rights. Overturning the current, dangerous power dynamic will take some work—we’re rapidly pushed to normalize a world in which AI companies simply put new and untested AI models into real-world systems, with regular people as the collateral damage. A right to repair gives every person the ability to control how AI is used in their lives.
FAQs
Q: What is red teaming?
A: Red teaming is a practice borrowed from the military and used in cybersecurity. It involves external experts trying to “infiltrate” or break a system, testing where defenses can go wrong, and helping to fix them.
Q: How does red teaming work in the context of AI?
A: Red teaming in AI involves testing AI models for bias and discrimination, and identifying potential issues before they are deployed in real-world systems.
Q: What is the purpose of a right to repair?
A: A right to repair gives users the ability to control how AI is used in their lives, including the ability to run diagnostics, report anomalies, and see when fixes are implemented by the company.
Q: Who is behind the push for a right to repair?
A: The push for a right to repair is being led by organizations like Humane Intelligence, which builds red teaming exercises with nontechnical experts, governments, and civil society organizations to test AI for discrimination and bias.

