Fable’s AI-Powered End-of-Year Summary Feature Raises Concerns
The Problem with Fable’s Summaries
Fable, a popular social media app that describes itself as a haven for "bookworms and bingewatchers," created an AI-powered end-of-year summary feature recapping what books users read in 2024. The feature was meant to be playful and fun, but some of the recaps took on an oddly combative tone. For example, writer Danny Groves’s summary asked if he’s "ever in the mood for a straight, cis white man’s perspective" after labeling him a "diversity devotee." Influencer Tiana Trammell’s summary ended with the following advice: "Don’t forget to surface for the occasional white author, OK?"
Reactions and Consequences
Trammell was flabbergasted, and after she shared her experience with Fable’s summaries on Threads, she realized she wasn’t alone. "I received multiple messages," she says, "from people whose summaries had inappropriately commented on ‘disability and sexual orientation.’"
Fable’s Response
Fable later apologized on several social media channels, including Threads and Instagram, where it posted a video of an executive issuing the mea culpa. "We are deeply sorry for the hurt caused by some of our Reader Summaries this week," the company wrote in the caption. "We will do better."
Changes and Controversy
Kimberly Marsh Allee, Fable’s head of community, told WIRED that the company is working on a series of changes to improve its AI summaries, including an opt-out option for people who don’t want them and clearer disclosures indicating that they’re AI-generated. However, some users are not satisfied with the response. Fantasy and romance writer A.R. Kaufer was aghast when she saw screenshots of some of the summaries on social media. "They need to say they are doing away with the AI completely," she says. "And they need to issue a statement, not only about the AI, but with an apology to those affected."
Conclusion
Fable’s end-of-year summaries were intended as a lighthearted recap of users’ reading habits, but instead they caused offense and harm. The company has apologized and pledged changes, yet some users remain unconvinced. The incident underscores the need to vet AI-generated content before it reaches users, and to be transparent and accountable when it fails.
FAQs
Q: What happened with Fable’s AI-powered end-of-year summary feature?
A: The feature, intended as a playful recap of users’ 2024 reading, instead took on an oddly combative tone, with some summaries making inappropriate comments about users’ identities and reading choices.
Q: Who was affected by the summaries?
A: Users whose summaries made inappropriate comments about their identities, including remarks about disability, sexual orientation, and race.
Q: How did Fable respond to the incident?
A: Fable apologized on social media and promised to do better. The company is also working on changes to improve its AI summaries, including an opt-out option and clearer disclosures.
Q: Are the changes sufficient?
A: Some users are not satisfied with the response. Writer A.R. Kaufer, for example, says Fable should do away with the AI entirely and issue an apology directly to those affected.