House of Lords launches an investigation into generative AI | Computer Weekly

The House of Lords has put out a call for evidence as it begins an inquiry into the seismic changes brought about by generative AI (artificial intelligence) and large language models.

The speed of development and lack of understanding about these models’ capabilities have led some experts to warn of a credible and growing risk of harm. For instance, the Center for AI Safety has issued a statement, with several tech leaders as signatories, urging those involved in AI development and policy to prioritise mitigating the risk of extinction from AI. But there are others, such as former Microsoft CEO Bill Gates, who believe the rise of AI will free people to do work that software can never do, such as teaching, caring for patients, and supporting the elderly.

According to figures quoted in a report by Goldman Sachs, generative AI could add roughly £5.5tn to the global economy over 10 years. The investment bank’s report estimated that 300 million jobs could be exposed to automation, but other roles could also be created in the process.

Large models can generate contradictory or fictitious answers, meaning their use in some industries could be dangerous without proper safeguards. Training datasets can contain biased or harmful content, and intellectual property rights over the use of training data are uncertain. The ‘black box’ nature of machine learning algorithms makes it difficult to understand why a model follows a course of action, what data were used to generate an output, and what the model might be able to do next, or do without supervision.

Baroness Stowell of Beeston, chair of the committee, said: “The latest large language models present enormous and unprecedented opportunities. But we need to be clear-eyed about the challenges. We have to investigate the risks in detail and work out how best to address them – without stifling innovation in the process. We also need to be clear about who wields power as these models develop and become embedded in daily business and personal lives.”

Among the areas the committee is seeking information and evidence on are how large language models are expected to develop over the next three years, the opportunities and risks they present, and whether the UK’s regulators have sufficient expertise and resources to respond to large language models.

“This thinking needs to happen fast, given the breakneck speed of progress. We mustn’t let the most scary of predictions about the potential future power of AI distract us from understanding and tackling the most pressing concerns early on. Equally we must not jump to conclusions amid the hype,” Stowell said.

“Our inquiry will therefore take a sober look at the evidence across the UK and around the world, and set out proposals to the government and regulators to help ensure the UK can be a leading player in AI development and governance.”
