
Risk concerns hold back adoption of AI by professional services

Research across professional services firms, including in the legal sector, has revealed uncertainty over the adoption of generative AI.

The study by global content and tech firm Thomson Reuters, taking in the US, UK and Canada, sought to better understand how the technology is perceived and applied within the professional services industry and uncovered a mix of optimism and caution in the adoption of generative AI.

Among the professionals surveyed, there was high recognition of the potential of generative AI: 78% of respondents said they thought generative AI tools such as ChatGPT could enhance legal or accounting work, with the proportion slightly higher for legal (82%) than for tax (73%). Furthermore, about half (52%) of all respondents felt that generative AI should be used for legal and tax work.

However, despite the strong feelings about generative AI’s utility expressed in the research, many within the legal field were still weighing their options before adopting the technology. Only 4% of respondents were currently using generative AI in their operations, with an additional 5% planning to do so. Interestingly, tax and accounting firms were more open to the idea, with a 15% adoption or planned adoption rate.

Among those who had adopted or were planning to adopt generative AI technologies, research was the primary use cited by respondents; about two-thirds of those in corporate legal and 80% of those in tax identified it as the most compelling use. Knowledge management, back-office functions, and question-answering services were also cited as use cases of interest.

Risk perception seemed to be the major stumbling block in the adoption of generative AI tools. A significant 69% of respondents expressed risk concerns, suggesting that fear may be holding back a more widespread adoption.

While the potential of generative AI tools was recognised, there was an air of uncertainty, underlining the need for establishing trust, as well as furthering education and strategic planning in its implementation.

Despite concerns around the risks to privacy, security, and accuracy, very few organisations were actively taking steps to limit the use of generative AI among employees. Twenty per cent of respondents said their firm or company had warned employees against the unauthorised use of generative AI at work. Only 9% of all respondents, meanwhile, reported their organisation had banned the unauthorised use of generative AI.

Steve Hasker, president and CEO of Thomson Reuters, said the technology had the capacity to disrupt and redefine the professional landscape, “but it is clear from our findings that there is a trust gap with professionals.”

He added: “The future of professional work is set to be revolutionised by generative AI, and as an industry, we need to work together to find the right balance between the benefits of technology and any unintended consequences. We believe this will help our customers to first trust the transformative power of generative AI, and then harness the opportunity to shape the future of their professions.”

