The UK government’s proposed plan to regulate artificial intelligence (AI) falls short when it comes to protecting human rights, Britain’s equality watchdog has warned.
Responding to the government’s AI whitepaper, the Equality and Human Rights Commission (EHRC) said it is broadly supportive of the UK’s approach, but that more must be done to deal with the negative human rights and equality implications of AI systems.
It said the proposed regulatory regime will fail if regulators – including itself, the Information Commissioner’s Office (ICO) and others involved in the Digital Regulation Cooperation Forum (DRCF) – are not appropriately funded to carry out their functions.
“People want the benefits of new technology but also need safety nets to protect them from the risks posed by unchecked AI advancement,” said EHRC chairwoman Baroness Falkner. “If any new technology is to bring innovation while keeping us safe, it needs careful oversight. This includes oversight to ensure that AI does not worsen existing biases in society or lead to new discrimination.
“To rise to this challenge, we need to boost our capability and scale up our operation as a regulator of equality and human rights. We cannot do that without government funding.”
Published in March 2023, the whitepaper outlines the government’s “pro-innovation” framework for regulating AI, which revolves around empowering existing regulators to create tailored, context-specific rules that suit the ways AI is being used in the sectors they scrutinise.
It also outlines five principles that regulators must consider to facilitate “the safe and innovative use of AI” in their industries, and generally builds on the approach set out by the government in its September 2021 national AI strategy, which seeks to drive corporate adoption of the technology, boost skills and attract more international investment.
“Since the publication of the whitepaper, there have been clear warnings from senior industry figures and academics about the risks posed by AI, including to human rights, to society and even to humans as a species,” said the EHRC in its official response.
“Human rights and equality frameworks are central to how we regulate AI, to support safe and responsible innovation. We urge the government to better integrate these considerations into its proposals.”
The EHRC said there is generally too little emphasis on human rights throughout the whitepaper. Human rights are explicitly mentioned only in relation to the principle of “fairness” – and then only as a subset of other considerations and specifically in relation to discrimination – and implicitly in the acknowledgement that regulators are subject to the Human Rights Act 1998.
“Fairness is an important and necessary principle, which we welcome. But it does not cover the full and broad range of human rights as enacted in the United Kingdom, nor the potential risks posed by AI in ways we can both foresee and in ways which will only become apparent over time,” the watchdog said.
“The whitepaper also makes only limited reference to equality and no reference to regulators’ own Public Sector Equality Duty obligations, despite the widely acknowledged risk of discrimination from AI systems.”
The EHRC added that it is vital for the government to create adequate routes of redress so people can effectively challenge AI-related harms, as the current framework consists of a patchwork of sector-specific mechanisms.
On funding, the EHRC said the new regulatory responsibilities proposed for named regulators – in particular the expectation that they will publish guidance within 12 months – fall outside its current business plan commitments and are therefore unfunded. It is calling on the government to support regulators in building out their capacity, warning that failing to do so would jeopardise the government’s AI ambitions.
In April 2023, the ICO also officially responded to the whitepaper, calling for greater clarity on how regulators should collaborate and how the suggested AI principles will align with existing data protection rules.
While industry has broadly welcomed the whitepaper and the government’s proposed approach, civil society groups and trade unions have been less enthusiastic.
The Trades Union Congress (TUC), for example, has said the whitepaper only offers a series of “vague” and “flimsy” commitments for the ethical use of AI at work, and that the government is refusing to put in place the necessary “guardrails” to safeguard workers’ rights.
In May 2023, Labour MP Mick Whitley introduced “a people-focused and rights-based” bill to regulate the use of AI at work, setting out an alternative approach to controlling the technology. It includes provisions requiring employers to meaningfully consult with employees and their trade unions before introducing AI into the workplace, and to reverse the burden of proof in AI-based discrimination claims so that the employer must establish its algorithm did not discriminate.