PIAC Comments on the Toronto Police Services Board’s new AI Policy
PIAC commented on the Toronto Police Services Board's public consultation on the use of new artificial intelligence technologies. The consultation sought public feedback on the Board's draft AI Policy, which would govern how the Toronto Police Service obtains authorisation for and uses new AI technologies. The policy is intended to guide the Toronto Police in conducting initial risk assessments and in tracking the impacts, public concerns, and performance of AI technology, while also protecting privacy, equality, accountability, and fairness.
PIAC thoroughly reviewed each provision of the draft policy, and concluded that, as published, it falls short of achieving the goal of ensuring new AI technologies do not introduce or perpetuate biases in policing decisions. PIAC’s main concerns in our submission are as follows:
- The criteria for categorizing AI systems as Extreme, High, Moderate, or Low Risk are imprecise, and some criteria are placed too low on the risk spectrum;
- The policy does not detail which harm mitigation measures and performance indicators are appropriate for each level of risk, leaving a large amount of discretion to the Chief of Police;
- The policy lacks clear and thorough public transparency requirements at all stages of authorisation, deployment, and monitoring of AI systems;
- The policy does not involve any independent review or monitoring to ensure oversight does not fall solely to the Board;
- Reviewing AI technology only every five years is far too infrequent; and
- The policy lacks transparency in the public feedback and complaint procedures.
In addition to the above, PIAC cautioned the Board against finalizing and operationalizing new AI policies before imminent privacy reforms are completed. PIAC also warned that there are currently no enforceable, specific regulations governing the use of AI by law enforcement, and that privacy reforms may or may not bring about such regulations. Until then, internal policies will be limited in efficacy and impact.
Read our full submission here.