
CDEI publishes portfolio of AI assurance techniques

The UK government’s Centre for Data Ethics and Innovation (CDEI) has published a raft of case studies relating to the auditing and assurance of artificial intelligence (AI) systems, following collaboration with trade association TechUK.

The “portfolio of AI assurance techniques” was created to help anyone involved in designing, developing, deploying or otherwise procuring AI systems do so in a trustworthy way, by giving examples of real-world auditing and assurance techniques.

“AI assurance is about building confidence in AI systems by measuring, evaluating and communicating whether an AI system meets relevant criteria,” said the CDEI, adding that these criteria could include regulations, industry standards or ethical guidelines.

“Assurance can also play an important role in identifying and managing the potential risks associated with AI. To assure AI systems effectively we need a range of assurance techniques for assessing different types of AI systems, across a wide variety of contexts, against a range of relevant criteria.”

The portfolio specifically contains case studies from multiple sectors and a range of technical, procedural and educational approaches, to show how different techniques can combine to promote responsible AI.

This includes examples from the Alan Turing Institute, which has taken the “argument-based” assurance method used in other “safety-critical” domains such as aviation and expanded it to provide a structured process for evaluating and justifying claims about the ethical properties of an AI system; Babl AI, which conducts independent, third-party, criteria-based audits and certifications of automated employment decision tools; and Digital Catapult, which provides “ethical upskilling and tool development” for tech startups.

Other case studies included in the portfolio come from Citadel AI, Nvidia, Mind Foundry, Shell, Qualitest, Logically AI and Trilateral Research, among others.

However, the CDEI noted that the inclusion of a case study does not represent a government endorsement of the technique or organisation in question; instead, it is intended to demonstrate the range of options already available to businesses. It also confirmed that the portfolio is an ongoing project, to which further examples will be added in the future.

Each of the case studies promotes a slightly different approach to scrutinising AI systems, which the CDEI has broken down into broad categories.

Examples of these different approaches include: impact assessments, which can be used to anticipate a given system’s environmental, equality, human rights and data protection effects; impact evaluations, which do the same but retrospectively; bias auditing, which involves assessing the inputs and outputs of algorithmic systems to determine whether and how a system is creating bias; and certification, whereby an independent body attests that the system meets certain standards.
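To make the bias auditing category more concrete, the sketch below shows one common check of this kind: comparing a system’s selection rates across demographic groups. It is a minimal, hypothetical illustration; the data, group labels and the “four-fifths” 0.8 threshold (a rule of thumb from US employment contexts) are assumptions made for demonstration, not techniques drawn from the CDEI portfolio itself.

```python
# Hypothetical sketch of one step in a bias audit: comparing a model's
# selection rates across demographic groups. All data and group names
# here are illustrative assumptions, not from the CDEI portfolio.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs, where selected is 0 or 1."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest."""
    return min(rates.values()) / max(rates.values())

# Illustrative audit log of (group, outcome) pairs from a decision system
audit_log = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = selection_rates(audit_log)          # {'group_a': 0.67, 'group_b': 0.33}
ratio = disparate_impact_ratio(rates)       # 0.5

if ratio < 0.8:  # "four-fifths" rule of thumb, an assumed threshold
    print(f"Potential adverse impact (ratio {ratio:.2f}): investigate further")
```

A real audit of the kind described in the portfolio would go well beyond a single metric, examining data provenance, feature choices and downstream impacts, but the core mechanic of comparing system outputs across groups is much as shown here.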

In each case study, the approaches outlined are also mapped to different ethical principles outlined in the UK government’s AI whitepaper, which laid out its regulatory proposals for creating an agile, “pro-innovation” framework around the technology.

The principles themselves – which the government says regulators should consider to facilitate “the safe and innovative use of AI” in their industries – include safety and security; transparency and explainability; fairness; accountability and governance; and contestability and redress.

The launch of the portfolio follows the CDEI’s publication of its AI assurance roadmap in December 2021, which set out six priority action areas to help foster the creation of a competitive, dynamic and ultimately trusted market for AI assurance in the UK.

However, in November 2022, the German Marshall Fund (GMF) think-tank published a report warning that while algorithmic audits can help correct for the opacity of AI systems, poorly designed or executed audits are at best meaningless and, at worst, can deflect attention from, or even excuse, the harms they are supposed to mitigate.

Describing this problem as “audit-washing”, the report said many of the tech industry’s current auditing practices provide false assurance because companies either conduct their own self-assessments or, when there are outside checks, are still assessed according to their own goals rather than conformity with third-party standards.

In May 2023, Barcelona-based algorithmic auditing firm Eticas spoke to Computer Weekly about its method of “adversarial auditing”, which is not included in the CDEI examples but is essentially the practice of evaluating algorithms or AI systems that have little potential for transparent oversight, or are otherwise “out-of-reach” in some way.

While Eticas usually advocates internal socio-technical auditing, in which organisations conduct their own end-to-end audits considering both the social and technical aspects of a system to fully understand its impacts, its adversarial audits researcher, Iliyana Nalbantova, said AI developers are often unwilling to carry out such audits because there are currently no requirements to do so.

“Adversarial algorithmic auditing fills this gap and allows [us] to achieve some level of AI transparency and accountability that is not normally attainable in those systems,” she said at the time.
