The European Artificial Intelligence Act (AIA) aims to limit the use of AI-powered predictive policing tools, on the grounds that such tools can violate human rights and lead to discrimination.
Indeed, the MEPs in charge of overseeing and amending the report have recommended a partial ban on predictive policing AI systems, such as those that distort human behaviour; those that exploit the vulnerabilities of specific social groups; those that provide ‘scoring’ of individuals; and those used for the remote, real-time biometric identification of people in public places. It was noted that the ban would apply only to systems that predict the probability of someone reoffending, not to those used to profile areas and locations.
In doing so, the Act hopes to stop the discriminatory policing of racialised and poor communities. The MEPs are thus seeking to protect people’s rights as far as possible by supporting a ban on all uses of predictive AI in policing and criminal justice.
The report is scheduled for discussion on 11 May, with its amendments to be voted on by the end of November. For now, it is unclear whether the proposed amendments on the use of predictive policing will be adopted.