
Current AI development too reckless, can wipe out humanity, warns MIT Prof

By Online Desk

Have you ever finished binge-watching an AI-gone-wrong sci-fi movie and wondered what you would do if you were being hunted by an all-knowing superintelligence with virtually unlimited powers?

If not, we urge you to: not because you may be cast as John Connor in the next Terminator sequel, but because private organizations may soon push humanity to the brink of extinction in their quest for profit.

Still think it sounds like science fiction? Let us introduce you to Max Tegmark, an MIT professor who has spent much of his career researching AI’s potential risks. The Swedish-American physicist and cosmologist is one of the most high-profile voices warning that the human race could face extinction unless safeguards are put in place around the AI ‘arms race’ right now.

The scientist hit the headlines seven months ago when a group of AI experts, including Elon Musk, Steve Wozniak and Yoshua Bengio, signed an open letter urging a pause in AI development.

“It’s not an arms race, it’s a suicide race,” he said in a chilling interview given to a German TV channel. 

Tegmark points out that countries and corporations are in a race to be No.1 in AI development, but the fallout of this race will affect each and every human being on the planet. 

“It doesn’t matter to Germany if the AI that makes humans go extinct is originally of an American or Chinese background.”

The scientist is alarmed that companies and countries are rushing towards a ‘cliff’ without even being aware of it. “This is not a race that anyone is ultimately going to win. If we do an out-of-control race, we are all going to lose,” Tegmark said.

Tegmark is not one of the ‘AI deniers’; he continues to support the technology’s future development. But where he differs from many others is that he wants critical safeguards put in place now, before AI gets out of hand.

As a supporter of further AI development, he said he truly understands the wonders AI can deliver, from curing cancer to eliminating poverty. But, he asks, what’s the point of all this if there’s no human race left to enjoy them?

If we continue to develop AI at the current pace and without any safeguards, “it will be too late when people start realizing AI is smarter than us… Act before it’s too late. We know we have to do this,” he warned.

He compared the current stage of AI development to nuclear technology in the early 1940s, when everybody knew it was possible to make a nuclear bomb, but nobody had managed to yet.
At the time, he pointed out, the bomb existed in theory, but its realization was a few steps away. Similarly, in AI, many experts understand the risk of the emergence of a superintelligence that is hostile or indifferent to human beings, but we are still a few steps away from it.

Tegmark pointed out that individual companies, and even countries, are not in a position to moderate the pace of AI development so that sufficient safeguards can be put in place.

The biggest problem, he said, is that commercial pressure on companies does not allow any one of them to pause on its own.

“The leaders understand the risks and want to do the right thing, but one company cannot pause alone. They are just going to have their lunch eaten by the competition and get killed by the shareholders,” Tegmark said.

The only way humankind avoids such a situation, according to Tegmark, is if all companies working on AI collectively agree to pause further advances in the field.

That way, no company needs to fear missing out, humankind gets some breathing space, and there is enough time to come up with safety regulations.

“Let’s not squander all those possibilities by being a little too eager and releasing things a little bit too quick. Let’s do it safely,” said Tegmark. 

He pointed out that policymakers have not been able to keep pace with AI developments, and that the best thing right now is a six-month pause to give everyone a chance to catch up with the science.

“We have successfully banned bioweapons and human cloning. We need to gather all the key players, have a conversation and get it going. We need time for policymakers to keep up,” he added.

Similar safeguards must be put in place around AI development at a global level, rather than being left to individual organizations. “Can such crucial questions be left to unelected tech leaders to decide?” he asked.

He said that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. 

“It’s not just clueless luddites who want to slow down. It’s the founders of the field who want to slow down. They truly understand the wonderful upside AI can have,” Tegmark said.
 
