A 10-month investigation by the Lords Home Affairs and Justice Committee (HAJC) found that UK police forces were using advanced algorithmic technologies, such as facial recognition, without adequate safeguards, oversight, or caution.
The HAJC reported that these technologies pose a real risk to human rights and the rule of law, in large part because their deployment has been marked by a lack of strategy, accountability, and transparency. Facial recognition and crime-prevention technologies could entrench pre-existing patterns of discrimination, undermine privacy, and do more harm than good.
The report found that there had been no rigorous evaluation of the efficacy of these advanced technologies, and that no minimum scientific or ethical standards had to be met before they were put into use, making their deployment dangerous. It also noted that the vast majority of public bodies involved in developing and deploying these technologies lacked the expertise and resources required to properly evaluate new equipment.
Deploying them therefore carries a significant risk of relying on technologies that are unreliable, disproportionate, and unsuitable for the task. The system needs urgent streamlining and governance reform to ensure that it is safe for both police forces and citizens, and facial recognition technology must undergo rigorous trials before it is implemented in public services.