Major social media platforms failing to keep LGBTQ+ people safe online, study shows
All the major platforms are failing to protect LGBTQ+ people from potential harm, GLAAD’s annual report into social media safety has found.
The LGBTQ+ media advocacy organisation released its fourth Social Media Safety Index (SMSI) report on Tuesday (21 May), with TikTok earning a D+, while YouTube, X/Twitter, Facebook and Instagram were handed an F for the third consecutive year. Meta's Threads, rated for the first time, also received an F.
Three of the platforms improved on their 2023 scores, while others fell further down the rankings:
- TikTok: D+, 67 per cent (+10 points on 2023)
- Facebook: F, 58 per cent (-3)
- Instagram: F, 58 per cent (-5)
- YouTube: F, 58 per cent (+4)
- Threads: F, 51 per cent (new rating)
- X/Twitter: F, 41 per cent (+8)
TikTok, which scored the highest with a D+ grade, achieved a 10-point increase on 2023 because it made “several notable improvements to its policies”, the report says.
These included a revised anti-discrimination ad policy, increasing transparency for queer users regarding control over their own information, and expressly prohibiting both targeted misgendering and deadnaming – for which self-reporting is not required.
However, the SMSI noted several areas where the platform still fails to protect users and could make changes to improve its score.
“The company discloses only limited information regarding the proactive steps it takes to address wrongful demonetisation and removal of LGBTQ creators and content from ad services on the platform,” according to the report.
“TikTok also does not disclose any data showing how many pieces of content and accounts related to LGBTQ issues have been wrongfully demonetised or removed from ad services. While the company makes a public commitment to diversifying its workforce, it does not publish any data on its LGBTQ workforce.”
Despite scoring eight points higher than in 2023, Elon Musk's X was still rated the worst of all the major platforms. The SMSI points out that the platform reinstated its misgendering and deadnaming policy after controversially scrapping it.
However, users are required to self-report instances of targeted misgendering and deadnaming, which GLAAD recommends be replaced with alternatives, including human and/or automated content moderation, to “detect content and behaviours violating these policies”.
X is also “the only platform evaluated in the SMSI that does not disclose any information on whether it has training in place that educates content moderators about the needs of LGBTQ people and other users in protected categories”.
The report continued: “To date, the company has also failed to renew its commitment to diversifying its workforce, and has not published any employment diversity data in the [past] year.”
Social media is ‘dangerously lacking enforcement’
Commenting on the findings of the report, GLAAD president and chief executive, Sarah Kate Ellis, said: “Leaders of social media companies are failing at their responsibility to make safe products. When it comes to anti-LGBTQ hate and disinformation, the industry is dangerously lacking on enforcement of current policies.
“There is a direct relationship between online harms and the hundreds of anti-LGBTQ legislative attacks, rising rates of real-world anti-LGBTQ violence and threats of violence, that social media platforms are responsible for and should act with urgency to address.”
GLAAD’s senior director of social media safety, Jenni Olson, said alongside “egregious levels of inadequately moderated anti-LGBTQ hate and disinformation”, we are seeing a “corollary problem of over-moderation of legitimate LGBTQ expression”, including “wrongful takedowns of LGBTQ accounts and creators, shadow-banning and similar suppression of LGBTQ content”.
Olson added: “Meta’s recent policy change limiting algorithmic eligibility of so-called ‘political content,’ which the company partly defines as ‘social topics that affect a group of people and/or society at large’ is especially concerning.”