In the past four months alone, there have been three separate terrorist attacks across the UK – and possibly a fourth reported just today. And that’s after counter-terrorism efforts that the Defense Secretary claimed helped thwart 12 other plots there in the previous year.
That spells a massive challenge for companies working to curb the spread of terrorist propaganda on the web. And although stamping out the threat across the globe is almost certainly impossible, it’s clear that we could be doing a lot more to tackle it right now.
Last week, we looked at some steps that Facebook is taking to wipe out content promoting and sympathizing with terrorists’ causes. Those efforts combine AI, reports from users, and the skills of a team of 150 experts to identify and take down hate-filled posts before they spread across the social network.
Now, Google has detailed the measures it’s implementing in this regard as well. Similar to Facebook, it’s targeting hateful content with machine learning-based systems that can sniff it out, and also working with human reviewers and NGOs in an attempt to introduce a nuanced approach to censoring extremist media.
The trouble is, battling terrorism isn’t these companies’ core business; they’re primarily concerned with growing their user bases and increasing revenue. The measures they’re presently implementing will help sanitize their platforms so they’re easier to market as safe places to consume content, socialize, and shop.
Meanwhile, the people who spread propaganda online dedicate their waking hours to finding ways to get their message out to the world. They can, and will, continue to innovate to stay ahead of the curve.
Ultimately, what’s needed is a way to reduce the effectiveness of this propaganda. There are a host of reasons why people are susceptible to radicalization, and those may be far beyond the scope of the likes of Facebook to tackle.
AI is already being used to identify content that human response teams review and take down. But I believe that its greater purpose could be to identify people who are exposed to terrorist propaganda and are at risk of being radicalized. To that end, there’s hope in the form of measures that Google is working on. In the case of its video platform YouTube, the company explained in a blog post:
Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe.
This promising approach harnesses the power of targeted online advertising to reach potential ISIS recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.
In March, Facebook began testing algorithms that could detect warning signs of users in the US suffering from depression and possibly contemplating self-harm and suicide. To do this, the system looks at whether people are frequently posting messages describing personal pain and sorrow, or whether several responses from their friends read along the lines of, “Are you okay?” The company then contacts at-risk users to suggest channels where they can seek help with their condition.
I imagine that similar tools could be developed to identify people who might be vulnerable to becoming radicalized – perhaps by analyzing the content of the posts they share and consume, as well as the networks of people and groups they engage with.
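To make that idea concrete, here’s a deliberately simplified sketch of how content and network signals might be blended into a single risk score. Everything in it – the keyword lists, the flagged accounts, the 0.6/0.4 weighting – is invented purely for illustration; a real system would rely on trained models and human review, not keyword matching.

```python
# Toy sketch: combine two signal types mentioned above -- what a user
# posts, and which accounts they engage with. All names, weights, and
# thresholds here are hypothetical.

# Hypothetical terms a trained model might learn to weight heavily.
FLAGGED_TERMS = {"martyrdom", "caliphate", "crusaders"}

# Hypothetical accounts already flagged by human reviewers.
FLAGGED_ACCOUNTS = {"propaganda_channel_1", "recruiter_42"}

def risk_score(posts, contacts):
    """Return a score in [0, 1] from content and network signals."""
    # Content signal: fraction of the user's posts containing flagged terms.
    if posts:
        flagged_posts = sum(
            1 for p in posts
            if any(term in p.lower() for term in FLAGGED_TERMS)
        )
        content_signal = flagged_posts / len(posts)
    else:
        content_signal = 0.0

    # Network signal: fraction of the user's contacts that are flagged.
    unique_contacts = set(contacts)
    if unique_contacts:
        network_signal = len(unique_contacts & FLAGGED_ACCOUNTS) / len(unique_contacts)
    else:
        network_signal = 0.0

    # Arbitrary illustrative blend of the two signals.
    return 0.6 * content_signal + 0.4 * network_signal

# A score above some threshold might trigger a soft intervention,
# such as the counter-narrative ads of the Redirect Method.
posts = ["watched a video about the caliphate", "lunch was great"]
contacts = ["friend_a", "recruiter_42"]
score = risk_score(posts, contacts)
```

In this toy example, one of two posts and one of two contacts are flagged, so the score lands at 0.5 – the point being only that multiple weak signals can be combined, not that any particular formula is the right one.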
The ideas terrorists spread are only as powerful as they are widely accepted. It looks like we’ll constantly be racing to keep up with new ways of spreading propaganda; what might help more is a way to reach the people who are processing these ideas, accepting them as truth, and altering the course their lives are taking. With enough data, it’s possible that AI could help – but in the end, we’ll need humans to talk to humans in order to fix what’s broken in our society.
Naturally, the question of privacy will crop up at this point – and it’s one that we’ll have to ponder before giving up our rights – but it’s certainly worth exploring our options if we’re indeed serious about quelling the spread of terrorism across the globe.