Facebook’s latest Community Standards Enforcement Report shows not only how many pieces of rule-violating content the company took action on, but also how effective Facebook is at identifying such content on its own.
Looking at what Facebook calls the “proactive rate” for different types of violations, i.e. the percentage of violating content that the company identified before anyone reported it, reveals one of the main challenges the world’s largest social network faces in trying to keep its platform clean. While it is relatively easy for artificial intelligence to identify images involving nudity or graphic violence and to filter out blatant spam, it is much harder to detect hate speech, bullying or harassment, which often require context and a human understanding of nuance.
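For illustration, the proactive rate boils down to a simple ratio. The sketch below shows the arithmetic; the function name and the sample counts are hypothetical placeholders, not Facebook’s actual figures or methodology.

```python
def proactive_rate(flagged_proactively: int, total_actioned: int) -> float:
    """Percentage of actioned content found by automated systems
    before any user reported it. Names and numbers are illustrative."""
    if total_actioned == 0:
        return 0.0
    return 100 * flagged_proactively / total_actioned

# Hypothetical example: if 800 of 1,000 actioned hate-speech posts
# were caught before a user report, the proactive rate is 80 percent.
print(f"{proactive_rate(800, 1_000):.1f}%")  # 80.0%
```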
Since Facebook relies mainly on technology to identify potentially harmful content, with humans getting involved at a later stage of the review process, it comes as no surprise that the company still struggles to identify hate speech or bullying before its users do. While Facebook’s proactive rate for hate speech has improved from 52 to 80 percent over the past 12 months, it is still significantly lower than the rates for more clear-cut types of violating content.
Source: Statista