On 20th November 2019, the Progressive Alliance of Socialists and Democrats in the European Parliament hosted a workshop on the ethical aspects of Artificial Intelligence (AI), to which EUROCOP was invited. David Hamilton of the Scottish Police Federation attended on behalf of EUROCOP.

Artificial Intelligence is not a new concept, and we live with it in our everyday lives: in social media, in search engines, and even on platforms such as Netflix, with its "because you liked that, you will like this" feature. But this is a rapidly evolving technology, and software programs are already using their own learning to develop new algorithms autonomously.

The first session addressed the regulatory framework for Artificial Intelligence. A number of different approaches were discussed, from an iterative approach based upon legal judgements to the implementation of a full, hard regulatory framework. A particularly interesting discussion centred on liability and the prospect that AI machines may need their own "legal personhood" so that they can be held liable for their actions. This would of course require them to be insured, registered and so on, but it demonstrates how such technologies are pushing the bounds of conventional laws and thinking. Already we face the question of who is responsible for a road traffic collision when the vehicles involved are autonomous: the programmer, the driver, a passenger?

Unfortunately, law enforcement does not have a good track record with AI. There is evidence that AI can be, and has been, biased. Whether through conscious or unconscious programmer bias or through poor training data, the risk exists. In the US, judges have been using an AI algorithm to assist with sentencing that, it transpires, discriminates against black men.

Accepting that society has prejudices and that programmers have differing value sets, the priority has to be to prevent these inconsistencies from entering AI environments. If they do, the evolving algorithms will be flawed, discrimination will be amplified, and people will lose trust in AI.
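To make the training-data risk concrete, here is a minimal illustrative sketch using entirely hypothetical numbers: if historical records over-sample one group (for example because it is policed more heavily), a naive score learned from those records reproduces the sampling bias even when the true underlying rates are identical.

```python
from collections import Counter

# Hypothetical records: (group, reoffended). Group B is over-represented
# in the records, even though the true reoffence rate (20%) is the same
# for both groups.
records = ([("A", True)] * 10 + [("A", False)] * 40
           + [("B", True)] * 30 + [("B", False)] * 120)

appearances = Counter(group for group, _ in records)
reoffences = Counter(group for group, r in records if r)

def naive_risk_score(group: str) -> float:
    """Share of all recorded reoffences attributed to the group.

    This tracks how often the group appears in the biased records,
    not its actual reoffence rate.
    """
    return reoffences[group] / sum(reoffences.values())

# The per-group reoffence *rate* is identical (20% for both)...
assert reoffences["A"] / appearances["A"] == reoffences["B"] / appearances["B"]

# ...yet the naive score ranks group B as three times riskier, purely
# because group B is over-represented in the data.
print(naive_risk_score("A"), naive_risk_score("B"))  # 0.25 vs 0.75
```

The flaw here is in the data and the score's construction, not in any malicious intent, which is exactly why impact assessments on training data matter.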

The introduction of ethical standards should therefore be welcomed by EUROCOP. One speaker framed it well: technology limits what can be done, the law sets what may be done, and ethics set what should be done. Ethical impact assessments and transparency around algorithms were both proposed, but this work needs further development by the Commission.

David Hamilton, on behalf of EUROCOP, questioned whether the issue was really about the ethics of decision-making rather than about AI itself, a view supported by many of the computer scientists in the room. The fact that Airbnb's matching algorithm allows customers to behave in a racist way when selecting tenants does not make the platform racist. An airline that uses AI to deliver bespoke seat pricing based upon a customer's profile could achieve the same result using many workers rather than AI. The danger, then, is demonising a technology that will make things more efficient, safer and better.

In summary, AI presents enormous potential for European law enforcement. Systems will be able to predict crimes, identify criminals and protect officers and the public. Some regulatory framework is needed, but the danger lies in an overbearing framework and in general misunderstanding of the technology.

The second session explored existing legal provisions for civil and military use of AI. Helpfully, it was pointed out that AI covers a broad spectrum and, in a military context, ranges from a landmine to an autonomous drone swarm. The challenge will be regulating for such a diverse range of applications, and a number of clear principles emerged: in life-critical situations there must always be human control; certification of AI needs to commence; and whatever regulatory framework is introduced must be future-proofed for technologies that do not yet exist.

From a policing perspective, if AI is used as an intelligence tool rather than a decision-making tool, the risk is substantially mitigated. But even then, issues such as shared liability (with the AI) may arise. Police staff associations and unions should therefore ensure that they are involved in the design and specification of AI algorithms, and that impact assessments are properly conducted, as our members could be adversely affected.
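The intelligence-tool distinction above can be sketched in code. This is a minimal illustration with hypothetical names and a placeholder score, not a real policing system: the AI only produces an advisory report with a rationale, and nothing happens unless a human officer decides.

```python
from dataclasses import dataclass

@dataclass
class IntelligenceReport:
    """Output of the AI: advisory only, never an action."""
    subject_id: str
    risk_score: float   # model's estimate, 0.0 to 1.0
    rationale: str      # transparency: why the score was produced

def ai_assess(subject_id: str) -> IntelligenceReport:
    # Placeholder: a real system would use an audited, impact-assessed
    # algorithm here. This sketch returns a fixed illustrative report.
    return IntelligenceReport(subject_id, 0.72, "pattern match on prior reports")

def officer_decision(report: IntelligenceReport, officer_approves: bool) -> str:
    # The human remains the decision-maker: a high score alone triggers nothing.
    if officer_approves:
        return f"investigate {report.subject_id} (officer-approved)"
    return f"no action on {report.subject_id} (officer declined)"

report = ai_assess("case-001")
print(officer_decision(report, officer_approves=False))
```

The design point is that liability stays with the human decision, while the report's rationale field gives the officer (and any later review) something to scrutinise.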

Computer science, big data and Artificial Intelligence have the potential to revolutionise policing, making our job safer, more efficient and more effective. There is, however, a populist narrative that sees AI as a threat rather than an assistant. EUROCOP owes it to its members to strike a balance and find a middle ground that secures the benefits without the risks.
The EU Commission will put proposals to the EU Parliament in 2020, and we can expect an interesting debate to follow on whether regulation or codification is required and how it is to be achieved.
