
ChatGPT Helped Win a Hackathon

The ChatGPT AI bot has spurred speculation about how hackers might use it and similar tools to attack faster and more effectively, though the most damaging exploits so far have occurred in laboratory settings.

In its current form, the ChatGPT bot from OpenAI, an artificial-intelligence startup backed by billions of dollars from Microsoft Corp., is mainly trained to digest and generate text. For security chiefs, that means bot-written phishing emails might be more convincing than, for example, messages from a hacker whose first language isn’t English. 

Today’s ChatGPT is too unpredictable and susceptible to errors to be a reliable weapon itself, said Dustin Childs, head of threat awareness at Trend Micro Inc.’s Zero Day Initiative, the cybersecurity company’s software vulnerability-hunting program. “We’re years away from AI finding vulnerabilities and doing exploits all on its own,” Mr. Childs said.

Still, that won’t always be the case, he said. 

Two security researchers from cybersecurity company Claroty Ltd. said ChatGPT helped them win the Zero Day Initiative’s hackathon in Miami last month.

Noam Moshe, a vulnerability researcher at Claroty, said the approach he and his partner took shows how a determined hacker can employ an AI bot. Generative AI—algorithms that create realistic text or images based on the training data they have consumed—can supplement hackers’ know-how, he said.

The goal of the three-day event, known as Pwn2Own, was to disrupt, break into and take over Internet of Things and industrial systems. Before arriving, contestants chose targets from Pwn2Own’s list, and then prepared tactics.  

Mr. Moshe and his partner found several potential weak points in their selected systems. They used ChatGPT to help write code to chain the bugs together, he said, saving hours of manual development. No single bug would have allowed the team to get very far, he said, but manipulating them in a sequence would. At the contest, Mr. Moshe and his partner succeeded all 10 times they tried, winning $123,000. 

“A vulnerability on its own isn’t interesting, but when we look at the bigger picture and collect vulnerabilities, we can rebuild the chain to take over the system,” he said.  

OpenAI and other companies with generative AI bots are adding controls and filters to prevent abuse, such as racist or sexist outputs.

Some bad actors will likely try to get around any cybersecurity boundaries the bots are taught, said Christopher Whyte, an assistant professor of cybersecurity and homeland security at Virginia Commonwealth University.

Rather than instructing a bot to write code that takes data from a computer without the user knowing, a hacker could try to trick it into writing malicious code by phrasing the request without obvious triggers, Mr. Whyte said.

It is similar to a scammer using persuasion to trick an office worker into revealing credentials or wiring money to fraudulent accounts, he said. “You steer the conversation to get the target to bypass controls,” he said.

Write to Kim S. Nash at [email protected]

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.
