The UK government’s pressure on tech giants to do more about online extremism just got weaponized. The Home Secretary has today announced a machine learning tool, developed with public money by a local AI firm, which the government says can automatically detect propaganda produced by the Islamic State terror group with “an extremely high degree of accuracy”.
The technology is billed as working across different types of video-streaming and download platforms in real time, and is intended to be integrated into the upload process — as the government wants the majority of video propaganda to be blocked before it’s uploaded to the Internet.
So yes, this is content moderation via pre-filtering — something the European Commission has also been pushing for. It’s a highly controversial approach, though, with plenty of critics: free speech advocates, for instance, frequently describe the concept as ‘censorship machines’.
Last fall the UK government said it wanted tech firms to radically shrink the time it takes them to eject extremist content from the Internet — from an average of 36 hours to just two. It’s now evident how it believes it can force tech firms to step on the gas: by commissioning its own machine learning tool to demonstrate what’s possible, and trying to shame the industry into action.
TechCrunch understands the government acted after becoming frustrated with the response from platforms such as YouTube. It paid the private-sector firm ASI Data Science £600,000 in public funds to develop the tool — which is billed as using “advanced machine learning” to analyze the audio and visuals of videos to “determine whether it could be Daesh propaganda”.
Specifically, the Home Office is claiming the tool automatically detects 94% of Daesh propaganda with 99.995% accuracy — which, on that specific subset of extremist content, and assuming those figures stand up to real-world usage at scale, would give it a false positive rate of 0.005%.
For example, the government says that if the tool analyzed one million “randomly selected videos”, only 50 of them would require “additional human review”.
However, on a mainstream platform like Facebook, which has around 2BN users who could easily be posting a billion pieces of content per day, the tool could falsely flag (and presumably unfairly block) some 50,000 pieces of content daily.
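To make the arithmetic behind those estimates explicit, here is a minimal sketch of the calculation in Python. It is purely illustrative: the 94% detection rate and 0.005% false positive rate are the Home Office’s quoted figures, and the one-million and one-billion volumes are the hypothetical scales used above, not measured platform data.

```python
# Back-of-the-envelope check of the figures quoted above.
detection_rate = 0.94          # share of Daesh videos the tool is said to catch
false_positive_rate = 0.00005  # 0.005% of non-IS content wrongly flagged

def expected_false_flags(non_is_items: int) -> float:
    """Expected number of innocent items incorrectly flagged for review."""
    return non_is_items * false_positive_rate

# The government's example: one million randomly selected videos.
print(round(expected_false_flags(1_000_000)))      # -> 50

# The article's Facebook-scale example: a billion pieces of content per day.
print(round(expected_false_flags(1_000_000_000)))  # -> 50000

# The flip side of a 94% detection rate: roughly 6% of genuine
# propaganda would still slip through.
print(round(1 - detection_rate, 2))                # -> 0.06
```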
And that’s just for IS extremist content. What about other flavors of terrorist material, such as far-right extremism? It’s not at all clear at this point whether the tool would achieve the same (or worse) accuracy rates if the model were trained on a different, perhaps less formulaic, type of extremist propaganda.
Criticism of the government’s approach has, unsurprisingly, been swift and shrill…
It does not matter how accurate your test is, but that when you start applying it at scale it will block/censor/cause the police to imprison innocent people: pic.twitter.com/CV32JEoM3b
— Alec Muffett (@AlecMuffett) February 13, 2018
The Home Office is not publicly detailing the methodology behind the model, which it says was trained on more than 1,000 Islamic State videos, but says it will be sharing the tool with smaller companies in order to help combat “the abuse of their platforms by terrorists and their supporters”.
So while much of the government anti-online-extremism rhetoric has been directed at Big Tech thus far, smaller platforms are clearly a rising concern.
It notes, for example, that IS is now using more platforms to spread propaganda — citing its own research, which shows the group used 145 platforms between July and the end of 2017 that it had not used before.
In all, it says IS supporters used more than 400 unique online platforms to spread propaganda in 2017 — which it says highlights the importance of technology “that can be applied across different platforms”.
Home Secretary Amber Rudd also told the BBC she is not ruling out forcing tech firms to use the tool. So there’s at least an implied threat to encourage action across the board — though at this point she’s pretty clearly hoping for voluntary cooperation from Big Tech, including help with preventing extremist propaganda from simply being displaced from their platforms onto smaller entities that don’t have the same level of resources to throw at the problem.
The Home Office specifically name-checks video-sharing site Vimeo; anonymous blogging platform Telegra.ph (built by messaging platform Telegram); and file storage and sharing app pCloud as smaller platforms it’s concerned about.
Discussing the extremism-blocking tool, Rudd told the BBC: “It’s a very convincing example that you can have the information that you need to make sure that this material doesn’t go online in the first place.
“We’re not going to rule out taking legislative action if we need to do it, but I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we’ve got. This has to be in conjunction, though, of larger companies working with smaller companies.”
“We have to stay ahead. We have to have the right investment. We have to have the right technology. But most of all we have to have industry on our side — with industry on our side, and none of them want their platforms to be the place where terrorists go, with industry on side, acknowledging that, listening to us, engaging with them, we can make sure that we stay ahead of the terrorists and keep people safe,” she added.
Last summer, tech giants including Google, Facebook and Twitter formed the catchily entitled Global Internet Forum to Counter Terrorism (Gifct) to collaborate on engineering solutions to combat online extremism, such as sharing content classification techniques and effective reporting methods for users.
They also said they intended to share best practice on counterspeech initiatives — an approach they prefer to pre-filtering, not least because their businesses are fueled by user-generated content, and more (not less) content is generally going to be preferable so far as their bottom lines are concerned.
Rudd is in Silicon Valley this week for another round of meetings with social media giants to discuss tackling terrorist content online — including getting their reactions to the Home Office-backed tool, and soliciting their help in supporting smaller platforms to eject terrorist content too. Though what, practically, she or any tech giant can do to urge cooperation from smaller platforms — which are often based outside the UK and the US, and so can’t easily be pressured with legislative or other threats — remains an open question. (ISP-level blocking might be one possibility the government is entertaining.)
Responding to her announcements today, a Facebook spokesperson told us: “We share the goals of the Home Office to find and remove extremist content as quickly as possible, and invest heavily in staff and in technology to help us do this. Our approach is working — 99% of ISIS and Al Qaeda-related content we remove is found by our automated systems. But there is no easy technical fix to fight online extremism.
“We need strong partnerships between policymakers, counter speech experts, civil society, NGOs and other companies. We welcome the progress made by the Home Office and ASI Data Science and look forward to working with them and the Global Internet Forum to Counter Terrorism to continue tackling this global threat.”
A Twitter spokesman declined to comment, but pointed to the company’s most recent Transparency Report — which showed a big reduction in received reports of terrorist content on its platform (something the company credits to the effectiveness of its in-house tech tools at identifying and blocking extremist accounts and tweets).
At the time of writing Google had not responded to a request for comment.