AI

We Need a New Right to Repair for Artificial Intelligence

As artificial intelligence becomes deeply embedded in daily life, a growing movement is demanding access to the inner workings of algorithms. While this won’t stop AI’s rapid proliferation, it could play a critical role in rebuilding public trust in the technology.

The pushback against uninvited AI use is gaining momentum. In late 2023, The New York Times filed a lawsuit against OpenAI and Microsoft over alleged copyright violations. In early 2024, three authors launched a class-action suit against Nvidia, claiming its AI platform, NeMo, had been trained on their copyrighted material without permission. By May, Scarlett Johansson issued a legal notice to OpenAI, asserting that its ChatGPT voice simulation closely mimicked her own without authorization.

The core issue isn’t the technology itself—it’s the power imbalance. People are acutely aware that their data often forms the foundation of AI systems, frequently without consent. This dynamic has contributed to declining public confidence in AI. A Pew Research study found that more than half of Americans feel uneasy rather than optimistic about AI’s future. Similarly, a World Risk Poll revealed that skepticism extends globally, with people in regions like Central and South America, Africa, and the Middle East expressing similar concerns.

In 2025, a new wave of demands for control over AI usage is expected. One promising solution lies in a practice called red teaming, originally developed in military and cybersecurity contexts. Red teaming involves inviting external experts to identify weaknesses in a system, providing critical insights for improvement.

Though already employed by major AI companies, red teaming has yet to become a widely accessible tool for the public. That could change soon.

For example, the law firm DLA Piper now applies red teaming to ensure AI compliance with legal standards. Similarly, my nonprofit, Humane Intelligence, collaborates with governments, civil society groups, and nontechnical experts to test AI systems for issues like bias and discrimination. In 2023, we ran a red teaming exercise involving 2,200 participants, supported by the White House. In 2025, our focus will expand to using public input to examine AI systems for Islamophobia and their role in enabling online harassment against women.

During these exercises, one recurring question arises: how can we move beyond identifying AI problems to empowering individuals to address them directly? This is where the concept of an AI right to repair comes into play.

Imagine being able to run diagnostics on an AI system, report anomalies, and track the company’s progress in fixing them. Ethical hackers and third-party groups could develop patches or solutions, making them openly available to users. Alternatively, independent experts could be hired to assess and customize AI systems to suit individual needs.

While this concept remains aspirational, the groundwork for its realization is being laid. Challenging the current lopsided power dynamic requires effort, especially in a world where companies routinely deploy experimental AI into real-world scenarios, leaving ordinary people to bear the consequences.

An AI right to repair would give individuals more agency over how AI affects their lives. If 2024 marked the year society recognized AI’s pervasive influence, 2025 must be the year people demand their rights to transparency, control, and accountability in AI systems.

Shares: