Mattel Experiments With ChatGPT in Cybersecurity

Toy maker Mattel is experimenting with generative-artificial-intelligence tools including ChatGPT to help its cybersecurity teams, but the company’s head of cybersecurity said the risk of inaccurate results from the new technology is too great to deploy it broadly.

Generative AI tools could make cyber analysts’ jobs easier by helping with tedious tasks, like parsing large datasets, freeing employees to do more pressing work, said Tom Le, Mattel’s chief information security officer. But many results from queries to the AI tools are incorrect, even if they appear convincing, he said.

“How do you manage degrees of right and wrong in the answers that you receive?” Le said.

ChatGPT maker OpenAI published a report last month detailing the company’s research into methods to reduce inaccuracies. “Even state-of-the-art models are prone to producing falsehoods—they exhibit a tendency to invent facts in moments of uncertainty,” the report said.

Training employees on query techniques would help, Mattel’s Le said. All Mattel employees using ChatGPT are receiving training on how to use generative AI tools securely, he said.

Instead of posing a subjective question, such as whether behavior detected on a corporate network is suspicious, cyber teams would get more accurate and useful results by asking how many times similar activity occurred, he said. ChatGPT is less likely to be wrong if a prompt is specific, he said.

Employees should be able to understand the context to judge whether a hack is afoot, he said. “The devil is really in how you ask the question,” he added. 
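To make that distinction concrete, the following is a minimal sketch assuming the OpenAI Python client; the model name, log lines and both prompts are illustrative placeholders, not Mattel’s actual tooling or data.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # A vague, subjective prompt: the kind of question that tends to draw a
    # confident but unreliable answer.
    vague_prompt = "Is this network activity suspicious?"

    # A specific, countable prompt grounded in data the analyst already has.
    log_excerpt = "\n".join([
        "2023-06-01 02:14 login failure user=svc_backup src=10.0.4.17",
        "2023-06-01 02:15 login failure user=svc_backup src=10.0.4.17",
        "2023-06-01 02:16 login success user=svc_backup src=10.0.4.17",
    ])
    specific_prompt = (
        "In the log lines below, how many failed logins for user svc_backup "
        "came from 10.0.4.17 before the successful login?\n\n" + log_excerpt
    )

    for prompt in (vague_prompt, specific_prompt):
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        print(reply.choices[0].message.content)

The second prompt asks for a count over data supplied in the request, so an analyst can check the answer against the logs themselves rather than taking the model’s judgment on faith.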

The potential for inaccuracies from generative AI brings risks for companies hoping to use it for important decisions without human supervision, said Ilia Kolochenko, chief architect at cybersecurity company ImmuniWeb.

Relying on generative AI to write software code or configure a company’s cloud infrastructure could cause problems like tech outages if there are mistakes, he said. 

“If we give AI too much freedom, it will probably cause a lot of trouble,” he said.

OpenAI introduced ChatGPT in November 2022, and companies have rushed to explore how they might use the tool. German e-commerce giant Zalando plans to offer a shopping assistant using ChatGPT. Goldman Sachs and legal and medical publisher RELX are experimenting with the technology.

Others, including JPMorgan, Verizon and the Commonwealth Bank of Australia, have banned it pending further study. 

Generative AI hasn’t reached a “leap of faith” moment yet where companies could rely on it without employees overseeing the outcome, Le said. “Right now, anything that I would use the AI engine for, it would only be to complement what I already have,” he said.

He said he is also concerned about a generative-AI tool potentially exploring information it shouldn’t. Companies generally restrict employees from accessing data they don’t need, but permissions aren’t always well governed.

In search of an answer to a query, a generative AI tool could get into databases and files that should be off limits to the employee using the tool, he said. Appropriate data restrictions must be in place before rolling out ChatGPT or similar tools, he said. 
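One way to express the kind of restriction Le describes is a role-to-source allowlist checked before the tool runs any query on an employee’s behalf. This is a minimal sketch under that assumption; all role names, data sources and records below are hypothetical, not Mattel’s systems.

    # Guardrail sketch: verify an employee's data permissions before an AI
    # assistant may query a source on their behalf.
    ALLOWED_SOURCES = {
        "cyber_analyst": {"firewall_logs", "endpoint_alerts"},
        "hr_partner": {"employee_directory"},
    }

    # Stand-in data stores; a real deployment would query the actual systems
    # with the employee's own credentials, never with broader service access.
    DATA_STORES = {
        "firewall_logs": ["2023-06-01 02:14 deny tcp 10.0.4.17 -> 10.0.9.3:445"],
        "employee_directory": ["jdoe, Human Resources, ext. 4411"],
    }

    def fetch_for_ai(role: str, source: str) -> list[str]:
        """Return records to the AI tool only if the user's role permits it."""
        if source not in ALLOWED_SOURCES.get(role, set()):
            raise PermissionError(f"role {role!r} may not query {source!r}")
        return DATA_STORES.get(source, [])

    # An analyst can pull firewall logs for the assistant...
    print(fetch_for_ai("cyber_analyst", "firewall_logs"))

    # ...but the same request against HR data is refused before the model sees anything.
    try:
        fetch_for_ai("cyber_analyst", "employee_directory")
    except PermissionError as err:
        print(err)

The point of the design is that the check runs before any data reaches the model, so the AI tool never holds broader access than the employee who invoked it.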

“A lot of those risks may not get addressed properly because it’s sort of a gold rush right now,” Le said.

Write to Catherine Stupp at [email protected]

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.
