Facebook’s New Policies Spur the Spread of Misinformation

The Consequences of Meta’s Content Moderation Change

Rolling Back Fact-Checking

Last month, Meta announced that it would roll back its fact-checking program on Facebook, Instagram, and Threads, starting this spring. The change is part of a new approach in which individual users can volunteer to comment on posts with additional context or differing information. While this may sound like a positive step, it raises concerns about the lack of oversight and the potential for misinformation to spread.

The Problem with Community Notes

The new approach, called Community Notes, allows users to add context or differing information to posts, but the requirements for what a note must include are slim: volunteers need only follow Meta’s Community Standards, stay under 500 characters, and include a link. This is a far cry from actual fact-checking, which requires a more comprehensive process of verification.

The Impact on Content

Meta will remain the authority on content that falls into illegal territory, including fraud, child sexual exploitation, and scams. However, when it comes to contentious, misleading, and AI-generated content, there is little quantifiable oversight. This leaves a gray area in which content that is not illegal, but still harmful, can spread.

The New Monetization Program

In October, Meta launched a new monetization program that resurfaces the Performance Bonus, which offers cash for posts that hit certain engagement metrics. The program has been invitation-only for creators so far, but Meta plans to expand its availability sometime this year. This raises concerns about incentivizing users to create viral "hoax" content for financial gain.

The Consequences of Incentivizing Misinformation

A ProPublica analysis identified 95 Facebook pages that regularly post made-up headlines designed to draw engagement and often stoke political divisions. These pages are primarily managed by people outside the US and have a collective audience of more than 7.7 million followers. While Meta removed 81 of the pages, it did not confirm whether they had been receiving viral-content payouts.

The Broader Implications

The Cambridge Analytica scandal in 2018 highlighted the ease with which targeted campaigns, regardless of factuality, can circulate on social platforms. Social media companies’ use of personalized algorithms makes this especially effective. A recent incident involving xAI’s Grok chatbot, which was caught suppressing unfavorable information about Elon Musk and President Trump, demonstrates how these systems can be manipulated to serve individual interests.

Conclusion

The changes to Meta’s content moderation policies, together with the new monetization program, raise concerns about the spread of misinformation and the incentivization of viral "hoax" content. The consequences are already beginning to reveal themselves, with the potential to further deepen the information quality divide and exacerbate media literacy problems in the US.

Frequently Asked Questions

Q: What is the new approach to content moderation?
A: Meta is rolling back its fact-checking program and introducing a new approach called Community Notes, where individual users can volunteer to comment on posts with additional context or differing information.

Q: What are the requirements for Community Notes?
A: Volunteers must follow Meta’s Community Standards, stay under 500 characters, and include a link in their notes.

Q: What is the new monetization program?
A: Meta launched a new monetization program that resurfaces the Performance Bonus, offering cash for posts that reach certain engagement metrics.

Q: What are the concerns about this new program?
A: The program may incentivize users to create viral "hoax" content for financial gain, which could spread misinformation and further deepen the information quality divide.
