Each month, there are an estimated 2.8 billion active users on Meta’s most popular social network.
Meta is making efforts to remove hate speech from its most popular application, Facebook, after the criticism it received following leaks by Frances Haugen, one of its former employees.
In its latest report, Meta shared that during the third quarter there were 14 to 15 views of content featuring bullying and harassment for every 10,000 content views on Facebook, and that 9.2 million posts were taken down, on a platform with 2.8 billion monthly active users.
On Instagram, the figure rose to between five and six views of this type of content for every 10,000 content views, and 7.8 million posts were taken down.
The tech giant defines both bullying and harassment as threats from one individual against another, including threats to publish personally identifiable information or repeated, unwanted attempts to contact a user, though it admits that determining whether harm has actually been caused depends on context.
One example it gives is a woman writing “Hey b*tch” to a female friend, which may simply be how the two usually greet each other. If a stranger writes the same thing on someone’s profile, however, it can be considered harmful content.
In fact, Meta reported that hate speech had decreased from the second to the third quarter, dropping from five to three views of hate speech for every 10,000 views of general content.
However, doubts remain about the other data underlying its statistics: How many posts were analyzed in total? How many messages do active users publish per day, or per minute?
Statista, a provider of market and consumer data such as survey results and economic indicators, notes that the number of hate-speech posts removed from Facebook has shifted: the platform took down 25.2 million posts in the first quarter of this year and 31.5 million in the second.
Meta has a global team of reviewers covering posts in more than 70 languages. It also has artificial intelligence systems that learn from the reports of inappropriate content users have filed in the current and previous years.
Its algorithms automatically and proactively detect malicious messages, collapse duplicate reports about the same content, and prioritize the most critical pieces, ranked by virality, severity of harm and likelihood of a policy violation, so that they are not viewed by millions of users.
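The triage process described above can be sketched in a few lines. This is purely illustrative: the field names, the product weighting, and the `Report`/`triage` helpers are assumptions, not Meta's actual system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Report:
    content_id: str
    virality: float        # e.g. predicted reach, normalized to [0, 1]
    severity: float        # estimated harm if the content violates policy
    violation_prob: float  # classifier's likelihood of a policy violation

def triage(reports: list[Report]) -> list[Report]:
    # Deduplicate: identical reports about the same content collapse to one.
    unique = {r.content_id: r for r in reports}
    # Rank most critical first; the product score is an assumed weighting.
    return sorted(
        unique.values(),
        key=lambda r: r.virality * r.severity * r.violation_prob,
        reverse=True,
    )

queue = triage([
    Report("post-1", 0.9, 0.8, 0.95),
    Report("post-2", 0.1, 0.2, 0.30),
    Report("post-1", 0.9, 0.8, 0.95),  # duplicate report, collapsed
])
print([r.content_id for r in queue])  # ['post-1', 'post-2']
```

The design mirrors the article's description: deduplication keeps identical reports from being reviewed twice, and the combined score pushes viral, severe, likely-violating content to the front of the review queue.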