Match Group Tries to Coach Men to Behave Better Online
Online dating company Match Group is seeking to solve an age-old problem with romance: how to get men to behave better. The group, which owns matchmaking platforms Tinder and Hinge, is using artificial intelligence to detect signals that somebody might be sending a message that is abusive or overly sexual, as part of its push to coach users into more chivalrous conduct online.
Artificial Intelligence to Detect Abusive or Overly Sexual Messages
For “men especially”, a “big part of our safety approach is focused on driving behavioural change so that we can make dating experiences safer and more respectful,” said Yoel Roth, head of trust and safety at Match. When a user types an “off-colour” message, Match’s apps will generate an automated prompt asking them if they are sure they want to send it. “We think of it internally as ‘too much, too soon,’” Roth said. A fifth of people who receive these prompts reconsider their messages, according to Match.
Efforts to Enlist AI to Improve Dating Behaviour
The efforts to enlist AI to help improve dating behaviour come as the three largest online matchmaking brands globally — Match’s Tinder and rivals Badoo and Bumble — are all shedding users as a result of so-called dating app fatigue among Generation Z users. This has seen online dating groups launch an array of new features, including friend-finding and community-building products, in an attempt to help reverse a post-pandemic slowdown in users.
Surveys Suggest Burnout among Young Women
Surveys suggest that “burnout” on matchmaking platforms is particularly prevalent among young women, a group that Match chief executive Bernard Kim last year described as “literally the most critical demographic for all dating apps.” “In the context of online dating, where young people grow up and enter the dating marketplace [...] there’s a real need and opportunity to help people understand the norms and behaviours that go along with respectful and consensual dating,” said Roth.
Head of Trust and Safety at Match
Roth joined Match in March last year, 16 months after suffering a very public break-up from his previous company Twitter, now known as X, where he worked for more than seven years heading the team that banned US President Donald Trump’s account in January 2021 following the attack on the Capitol. He resigned just two weeks after Elon Musk’s takeover of Twitter in October 2022, writing in the New York Times that he could not remain at a company where policies were “defined by edict”. Soon after, Roth became the target of a flood of harassment, which followed criticism from Musk himself.
Conclusion
Match Group is taking a proactive approach to online dating safety, particularly for young women, using artificial intelligence to detect and prevent abusive or overly sexual messages. The company is also working to combat sophisticated organized scams and fraud, which are often perpetrated by humans rather than bots.
FAQs
Q: What is Match Group doing to address online dating safety?
A: Match Group is using artificial intelligence to detect when a message may be abusive or overly sexual, and generates automated prompts asking senders whether they are sure they want to send it, as a way of coaching users towards more respectful conduct online.
Q: How does the company plan to combat sophisticated organized scams?
A: Match Group is developing new features and tools to detect and prevent sophisticated organized scams and fraud, which are often perpetrated by humans rather than bots.
Q: What is the impact of the new US administration on online safety policies?
A: The new US administration has already begun to affect online safety policies at major social networks, with Meta, which owns Facebook and Instagram, moving to end its fact-checking program and weaken hate speech policies as part of a “free speech” overhaul.