A study published Wednesday in the journal PLOS ONE found that real-life events like elections and protests lead to increases in online hate speech on both mainstream and niche platforms, even as many social media platforms try to crack down on hate posts.
Using machine-learning analysis—an approach that automates the building of analytical models from data—the researchers examined seven types of online hate speech in 59 million posts by users in 1,150 online hate communities, spanning both mainstream and niche platforms where hate speech is most likely to appear, including Facebook, Instagram, 4chan and Telegram.
The total number of posts containing hate speech, measured as a seven-day rolling average, increased by 67%, from 60,000 to 100,000 posts per day, over the study period, which ran from June 2019 to December 2020.
Sometimes the hate speech targeted groups that had no direct connection to the real-world events of the moment.
Events cited by the researchers include a rise in religious hate speech and antisemitism after the US killing of Iranian General Qassem Soleimani in early 2020, and a rise in religious and gender-based hate speech directed at Kamala Harris after she was elected the first woman vice president in the November 2020 US election.
Despite efforts by individual platforms to remove hate speech, it persisted online, the researchers found.
The researchers pointed to media attention as a key factor driving hate-related posts: when Breonna Taylor was first killed by police, for example, there was little media coverage and the researchers found correspondingly little hate speech online, but months later, when George Floyd was killed, media attention surged and hate speech grew with it.
Big Number
250%. That is how much racist speech increased after the killing of George Floyd, the largest spike the researchers found during the study period.
Hate speech has plagued social networks for years: platforms like Facebook and Twitter have policies banning hate speech and have vowed to remove offensive content, but that hasn’t stopped the spread of these posts. Earlier this month, nearly two dozen independent human rights experts appointed by the UN called for greater accountability from social media platforms in reducing online hate speech. Human rights experts aren’t alone in wanting social media companies to do more: in a December USA Today-Suffolk University survey, 52% of respondents said social media platforms should regulate hateful and inaccurate content, while 38% said the sites should be an open forum.
After billionaire Elon Musk completed his deal to buy Twitter last year, having promised to relax the site’s moderation policies, the site saw an “increase in hateful conduct,” according to Yoel Roth, Twitter’s former head of trust and safety. Roth tweeted at the time that the safety team had removed more than 1,500 accounts for hateful conduct in three days. Under Musk’s leadership, Twitter has faced sharp criticism from advocacy groups who argue that hate speech on the platform has increased dramatically since the loosening of speech restrictions, although Musk has insisted that impressions of hateful tweets have decreased.
Further Reading
Twitter safety head admits ‘increase in hateful conduct’ as firm restricts access to moderation tools (Forbes)
Some reservations about the consistency requirement for social media content moderation decisions (Forbes)
What should policymakers do to promote better platform content moderation? (Forbes)