Watchdog to monitor use of artificial intelligence
Public bodies’ use of artificial intelligence (AI) is to be monitored by the Equality and Human Rights Commission (EHRC) to ensure the technology is not perpetuating bias and discrimination.
The move, a major strand of its strategic plan for 2022-25, will see the EHRC working with around 30 local authorities and other public bodies in England and Scotland to understand how they use AI to deliver essential services such as benefit payments.
The commission says there is emerging evidence that bias built into algorithms can lead to less favourable treatment of people with protected characteristics such as race and sex.
It adds that the project is being undertaken “amid concerns that automated systems are inappropriately flagging certain families as a fraud risk”, and that facial recognition technology “may be disproportionately affecting people from ethnic minorities”.
In September, the equality body issued guidance, Artificial intelligence in public services, to help organisations identify and tackle discrimination in the use of AI.
The guidance gives practical examples of how AI systems may cause discriminatory outcomes.
Earlier this year, the UK’s data protection regulator, the Information Commissioner’s Office, announced that it would investigate whether AI systems show racial bias when handling job applications.
It will examine whether bias is built into algorithms and if this is affecting employment opportunities for people from ethnic minorities.
The EHRC’s monitoring project will run for several months, with initial findings to be reported early next year.