Faced with an advertising boycott in protest of its laissez-faire approach to policing hate speech on its social media platforms, Facebook significantly ramped up its efforts to detect and remove harmful content in the second quarter of 2020. The company defines hate speech as “violent or dehumanizing speech, statements of inferiority, calls for exclusion or segregation based on protected characteristics, or slurs. These characteristics include race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity, and serious disability or disease.”
According to its latest Community Standards Enforcement Report, released on Tuesday, Facebook took action on (i.e., in most cases removed) 22.5 million pieces of content for hate speech between April and June, up from 9.6 million in the first three months of 2020. Moreover, the company has gotten significantly better at detecting hateful content before users report it. In the second quarter, 94.5 percent of the content Facebook took action on was detected by its machine learning systems before being reported, up from 88.8 percent in Q1 and from just above 70 percent a year earlier. This so-called “proactive rate” is an important metric because the quicker Facebook detects, flags, or removes violating content, the fewer users are potentially exposed to it.
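For clarity, the proactive rate is simply the share of all actioned content that Facebook’s automated systems flagged before any user report. A minimal sketch of that arithmetic, assuming the figures above (the function name and the derived proactive count are illustrative; the report publishes the rate, not the underlying count):

```python
def proactive_rate(proactively_detected: int, total_actioned: int) -> float:
    """Percentage of actioned content flagged by automated systems
    before any user reported it."""
    return 100 * proactively_detected / total_actioned

# Illustrative only: a 94.5% rate on 22.5M actioned pieces implies
# roughly 21.26M items were flagged proactively.
print(proactive_rate(21_262_500, 22_500_000))  # -> 94.5
```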
In an effort to further demonstrate its “continued commitment to making Facebook and Instagram safe and inclusive”, Facebook also announced that it will publish its Community Standards Enforcement Report on a quarterly basis going forward. Previously, the company had released the report twice a year.
Source: Statista