AI interview: Michael Osbourne, professor of machine learning | Computer Weekly

The corporate domination of artificial intelligence (AI) is preventing the technology from reaching its full potential to benefit society, says AI expert Michael Osbourne, who argues for stricter regulation of the private sector and a greater role for public research institutions.

A professor of machine learning at Oxford University and co-founder of responsible AI platform Mind Foundry, Osbourne has explored the practical use of AI algorithms in a diverse range of fields, from astrostatistics to ornithology, and from sensor networks to energy management.

A key theme of his overall research has been the societal and economic impacts of new technologies, particularly with regard to workplace automation.

Speaking with Computer Weekly, Osbourne notes the pressing need to wrest control of AI’s development and deployment away from corporations in the private sector, so that it can be used to its full potential in promoting human flourishing.

“It’s fair to say that the private sector has been a bit too powerful when it comes to AI…if we want to see the benefits of these technologies, states need to step in to unlock and to drain some of the moats that are protecting big tech,” he says.

To achieve this, he argues for tighter guardrails on what the private sector can do with AI and for a rebalancing of the scales in favour of public research institutions. Osbourne also says there should be serious thought given to how political processes can be used to change the direction of travel with AI.

A market-driven arms race

During an appearance before the House of Commons Science and Technology Committee – which is conducting an inquiry into the UK government’s proposed “pro-innovation framework for regulating AI” – Osbourne noted that the release of ChatGPT by OpenAI in November 2022 placed a dangerous “competitive pressure” on big tech firms developing similar tools.

“Google has said publicly that it’s willing to ‘recalibrate’ the level of risk it assumes in any release of AI tools due to the competitive pressure from OpenAI,” he said during the session.

“The big tech firms are seeing AI as something very valuable, and they’re willing to throw away some of the safeguards…and take a much more ‘move fast and break things’ perspective, which brings with it enormous risks.”

He further noted this was a “worrying development” because it signals the start of a race to the bottom in terms of safeguards and standards.

Speaking about the implications of a market-driven AI arms race, Osbourne tells Computer Weekly that Google’s loosening of restrictions on what AI tools it will publicly release has already negatively affected the company, and that it’s only a matter of time before similar mistakes start affecting society on a wider scale.

“We saw the consequences of that in the demo they provided of Bard, their own large language model [LLM], which famously went wrong and wiped out catastrophic amounts of value from Google’s share price,” he says.

“The reason that I’m concerned here is not that one of these companies might be winning over the other – of course, there have always been competitive pressures between companies – it’s because this technology is so powerful, and relatedly so dangerous, that cutting corners can actually lead to real harms for society at large.”

Giving the example of a jailbreaking prompt users of ChatGPT can employ to make the model ignore its restrictions against hate speech, Osbourne says: “It really doesn’t take a lot of effort to get ChatGPT to produce truly horrific speech.”

He adds, however, that although LLMs such as ChatGPT and Bard are mostly confined to chatbot applications, their lack of reliability – even in answering factual questions – means we are still likely to “see the propagation of misinformation and disinformation”, and that harms will only multiply if they are deployed in more impactful applications.

“AI already has led to wrongful arrests when used in policing. AI has led to people being denied social welfare benefits,” he says. “You can imagine with the success of ChatGPT, with it having reached an audience of hundreds of millions, people are going to be using it in places where ultimately it will be determined to have done great harm – it’s not reliable, it’s not trustworthy, it’s not transparent.”

Rebalancing the scales

Part of the problem is the dominance of the private sector over AI, which Osbourne says has created unhealthy dynamics in the development of the technology.

He notes that, over the past 10 years, the “wholesale” recruitment of university researchers by big tech firms has drained research capacity away from public institutions, which simply cannot compete with big tech on a like-for-like basis.

“We’ve seen the demands of the technology being shaped to what can only be met by the private sector, in that AI today needs very large data sets, big compute and very skilled engineers – all of which are only really possessed within the tech sector,” he says.

We’ve seen the demands of the technology being shaped to what can only be met by the private sector
Michael Osbourne, Oxford University & Mind Foundry

“The data and the compute and the skilled personnel that have been poached from universities gives big tech this almost insurmountable lead in the development of [AI] technologies, and I think that’s really worrying.”

He adds that while it is indisputable that almost all of the most impactful AI technologies have been developed by the private sector, the current lack of guardrails around how that sector develops and deploys AI means the technology’s potential benefits are being severely limited.

These guardrails should include certain technical standards around transparency and reliability, as well as “normative frameworks” that prohibit the most harmful AI practices, such as those being proposed in the European Union’s AI Act.

Osbourne adds that there needs to be a rebalancing of the scales away from the private sector if the technology is going to move in a truly positive direction, suggesting that states should step in “to drain some of the moats that are protecting big tech”.

This could be achieved by, for example, providing public institutions with better access to computing power, using different data ownership models such as data trusts to manage information gathered about the public, and restricting what companies can patent to make technology more accessible.

“Patents are really restricting quite heavily what can be done in AI, and it’d be nice to see governments recognise that that could be inhibiting innovation which would be in the public benefit.”

AI’s corporate logic

Osbourne adds that the private sector’s dominance over AI also creates a subtle feedback loop, whereby already-dominant players become even further entrenched by their ability to process ever greater volumes of information.

“One way of viewing a corporation is as a way of processing information and making decisions,” he says. “Whereas a single human is limited in the amount of information they can process, a corporation addresses that limitation by bringing lots of people together and having structures that allow information to propagate and be addressed. An AI is scalable in a similar way.”

He adds that the scale of corporations gives them the ability to combine global-level information with local-level information, which they can then use to tailor their commercial offerings to localities or regions to the point where smaller players without access to the same amount of raw information or computing power struggle to compete.

By further enhancing corporations’ already advanced ability to process large volumes of information, which has been a big part of their success over the past few decades, Osbourne says AI can therefore be seen as an “extension of the traditional power of corporations”.

Another way in which Osbourne sees the corporation as similar to AI is that it “tends to be quite ruthless”, adding: “That is, it has a goal, usually narrowly defined as maximising shareholder value or something, and will pursue that goal to the detriment of many other goals we might wish to have.”

He says that it is here – in the comparison between AI and the corporation, and the doggedness with which they pursue their goals – that the truly threatening potential of the technology comes to the fore.

Existential threat?

“When we talk about the existential risk of AI, think about the ways in which corporations have already reshaped the world, and not always to the public good it’s fair to say,” says Osbourne.

Giving the example of “Dieselgate” – a scandal that broke in 2015 after Volkswagen was found by US authorities to have intentionally and illegally used software in its new diesel vehicles to get them through regulatory testing – Osbourne says: “If real humans, when they amass together in a particular way, in the form of a corporation, can do these kinds of things, how much more harmful might it be if an AI develops comparable or even superior capabilities?”

He adds that the concern is not AI itself in a purely technical sense, but how it plugs into social and economic systems: “We have now these globe-spanning means of propagating information and making decisions that an AI could very easily manipulate. Even without being actively malicious, [harms could happen] if the goals it’s pursuing are even slightly misaligned with our own.”

Noting that AI has already been able to trick real humans into, for example, believing it is sentient, Osbourne gives the example of an AI being used to identify virus candidates most capable of causing another pandemic.

“There are labs doing this today… [that] will try to enhance viruses to become more capable of producing a pandemic,” he says. “If an AI was involved in that process, it might be doing what it was told to do – enhancing viruses to be more transmissible and more lethal – but if it was sufficiently capable, it might think, ‘I can only really test this virus when it’s released into real populations.’

The way [LLMs have] already exceeded our expectations [means] we can probably expect our expectations to continue to be exceeded
Michael Osbourne, Oxford University & Mind Foundry

“If it was sufficiently capable, it might be able to persuade a lab worker to do exactly that. People aren’t perfect – maybe the AI provides some persuasive text, in the way that large language models are now capable of doing, to get this lab worker to leak [the virus].”

The existential risk of AI, therefore, is not born of malice – something AI is incapable of – but of the fact that a sufficiently powerful model could act in unexpected ways, which humans cannot predict or stop, in pursuit of its programmed goals.

However, Osbourne also notes that despite enormous progress in recent years, AI today is “very far” from human-level intelligence.

“These large language models have woken everyone else to the potential of AI. The models aren’t perfect, in some ways they’re quite foolish, they make silly mistakes,” he says. “But at the same time, it’s hard to predict what will come next, and the way that they’ve already exceeded our expectations [means] we can probably expect our expectations to continue to be exceeded.”

Human flourishing

Despite his concerns over the corporate domination of AI, Osbourne says he does not want AI thrown out with the bath water: “I do see the technology is one that is immensely powerful, [and] that can lead to real benefits for human flourishing. I think it’s essential to develop AI if we’re to meet many of the challenges of this century.”

However, to get there, Osbourne adds that people need to think seriously about how political processes can be used to ensure we get the best out of the technology.

“Technology has always been political…historically, waves of technological innovation have always been linked to political opposition,” he says, noting the example of the Luddites.

Although the term Luddite is used today as shorthand for someone wary or critical of new technologies for no good reason, the historical origins of the term are very different.

While workplace sabotage occurred sporadically throughout English history during various disputes between workers and owners, the Luddites represented a systematic and organised approach to machine breaking, which began in 1811 in response to the unilateral imposition of new technologies by a new and growing class of industrialists.

Luddism was therefore specifically about protecting workers’ jobs, pay and conditions from the negative impacts of mechanisation, and about resisting the unilateral imposition of new technologies from above.

“Luddites…are usually portrayed as being irrational opponents of change and progress, but in fact they were protesting what were very real threats to their livelihoods,” says Osbourne, adding that while the English Industrial Revolution ultimately made people richer and achieved significant reductions in child mortality, “it didn’t lead to wage gains for English workers for something like 60 years”, while inequality increased during this period.

“Technology is not an exogenous force that’s dropped down to us by aliens. It’s something that’s developed by humans in response to political incentives,” says Osbourne. “We need to think seriously how we use the political process to get the technology we want that will actually deliver human flourishing.”
