Two Greek scientists create AI tool that aids ECHR

Prediction accuracy reaches 79%

The creation of artificial intelligence (AI) has long been a dream that has captured the imagination of scientists and researchers. Now a team of two Greek researchers at University College London (UCL) has devised an algorithm that can predict, with 79% accuracy, whether complaints filed by applicants to the European Court of Human Rights are legitimate. The technology could help automate the court's pipeline by analysing applications and prioritising them for its human rights judges.

"It's important to give priority to cases where there was likely a violation of a person's human rights," said Nikos Aletras, a UCL computer scientist and co-author of a paper outlining the work, published in "PeerJ Computer Science". His colleague Vasileios Lampos added: "The court has a huge queue of cases that have not been processed, and it's quite easy to say if some of them have a high probability of violation, and others have a low probability of violation."

The team's approach is fairly simple, as far as the quickly advancing field of machine learning goes. They first trained a natural language processing model on a database of court decisions, each containing the facts of the case, the circumstances surrounding it, the applicable laws, and details about the applicant such as country of origin. In this way, the program "learned" which of these aspects most strongly correlate with a particular ruling. Next, the team fed the program human rights court decisions it had never seen before and asked it to guess the judges' ruling based on the constituent parts of the decision filing. As it turns out, almost every section, from details about the applicant to the bare facts of the complaint, yielded a similar accuracy of around 73 percent. When the model looked at the court's run-down of the circumstances surrounding a case, however, accuracy jumped to 76 percent.
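The train-then-predict loop described above, turning each section of a decision into word features and learning which features correlate with each ruling, can be sketched with a toy bag-of-words Naive Bayes classifier. This is an illustration only: the training sentences are invented placeholders, not real ECHR cases, and Naive Bayes merely stands in for whatever model the team actually used.

```python
from collections import Counter, defaultdict
import math

def tokenize(text):
    # Crude word-level features; the real system would use richer n-grams.
    return text.lower().split()

def train(cases):
    """cases: list of (text, label). Returns priors and smoothed word likelihoods."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in cases:
        label_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    total = sum(label_counts.values())
    model = {"priors": {}, "likelihoods": {}, "vocab": vocab}
    for label, n in label_counts.items():
        model["priors"][label] = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)  # Laplace smoothing
        model["likelihoods"][label] = {
            w: math.log((word_counts[label][w] + 1) / denom) for w in vocab
        }
    return model

def predict(model, text):
    # Score each ruling by prior plus summed log-likelihoods of known words.
    scores = {}
    for label in model["priors"]:
        score = model["priors"][label]
        for w in tokenize(text):
            if w in model["vocab"]:
                score += model["likelihoods"][label][w]
        scores[label] = score
    return max(scores, key=scores.get)

# Invented placeholder "case sections", not real court text.
training = [
    ("applicant detained without trial for months", "violation"),
    ("prolonged detention no access to lawyer", "violation"),
    ("complaint dismissed procedure followed correctly", "no violation"),
    ("fair hearing held within reasonable time", "no violation"),
]
model = train(training)
print(predict(model, "detained without access to lawyer"))  # prints "violation"
```

On unseen text the classifier sides with whichever ruling its training vocabulary makes more probable, which is the same evaluation the researchers ran at far larger scale on held-out decisions.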
“It’s the same thing as replacing teachers or doctors; it’s impossible right now,” said Lampos. “Laws are not structured well enough for a machine to make a decision. I think that judges don’t follow a specific set of rules when making a decision, and I say that as a citizen and a computer scientist. Different courts have different interpretations of the same laws, and this happens every day.” The next steps, they said, are to try different types of machine learning on the same problem to see whether accuracy can be pushed even higher, and to gain access to the applications actually filed with the court.