Artificial Intelligence Will Become More Dangerous

AI Risks in 2025: A Warning

Predictions vs. Reality

OpenAI CEO Sam Altman expects Artificial General Intelligence (AGI) to arrive around 2027 or 2028, while Elon Musk predicts 2025 or 2026. Most AI researchers, however, believe that simply building bigger and more powerful chatbots will not produce AGI.

The Real Risks Ahead

In 2025, the biggest AI risk will come not from superintelligence but from human misuse. Some of that misuse is unintentional, such as relying on AI output without understanding its limitations. Lawyers, for instance, have been sanctioned for submitting court filings that contained fictitious, AI-generated citations.

Unintentional Misuses

  • Chong Ke was ordered to pay opposing counsel’s costs after she included fictitious AI-generated cases in a legal filing.
  • Steven Schwartz and Peter LoDuca were fined $5,000 for submitting false citations in a court brief.
  • Zachariah Crabill was suspended for a year for citing fictitious court cases generated with ChatGPT.

Intentional Misuses

  • In January 2024, sexually explicit deepfakes of Taylor Swift flooded social media platforms, created using Microsoft’s "Designer" AI tool.
  • Non-consensual deepfakes are proliferating, and legislative efforts around the world seek to combat them.

The Liar’s Dividend

As AI-generated audio, text, and images become increasingly realistic, it will be harder to distinguish what’s real from what’s made up. This could lead to the "liar’s dividend," where those in positions of power repudiate evidence of their misbehavior by claiming it’s fake.

Examples

  • In response to allegations that it exaggerated the safety of Autopilot, Tesla argued that a 2016 video of Elon Musk promoting the feature could have been a deepfake.
  • An Indian politician claimed that audio clips of him acknowledging corruption were doctored.
  • Two defendants charged over the January 6 Capitol riot claimed that videos showing them were deepfakes; both were found guilty.

Dubious Products and Services

Companies are exploiting public confusion about what AI can actually do to sell products and services labeled as "AI." When such tools are used to make consequential decisions, the results can be severe, including wrongly denying people jobs, benefits, and other important life opportunities.

Examples

  • Retorio claims its AI predicts candidates’ job suitability from video interviews, but a study found that the system can be swayed by superficial cues unrelated to job performance, such as a candidate’s appearance or background.
  • The Dutch tax authority used an AI algorithm to flag suspected childcare benefits fraud, wrongly accusing thousands of parents and demanding they repay tens of thousands of euros.

Conclusion

In 2025, AI risks will arise not from AI acting on its own, but from how people use it. Mitigating these risks is a significant challenge for companies, governments, and society. It will be crucial to focus on the real issues and not get distracted by sci-fi worries.

FAQs

Q: What are the potential risks of AI in 2025?

A: The main risks stem from human misuse: over-reliance on AI without understanding its limitations, non-consensual deepfakes, the "liar’s dividend," and dubious AI products that deny people important life opportunities.

Q: How can we mitigate these risks?

A: Companies, governments, and society must work together to ensure that AI is used responsibly and ethically, and that its limitations are understood and respected.

Q: What is the "liar’s dividend"?

A: The "liar’s dividend" refers to the phenomenon where those in positions of power repudiate evidence of their misbehavior by claiming it’s fake, as AI-generated content becomes increasingly realistic.
