US government seeks stronger measures to test safety of AI tools before release

The release of ChatGPT and similar products from Microsoft and Google has led to consumer concerns about the pace of technological change.

President Joe Biden’s administration has said it wants stronger measures to test the safety of artificial intelligence tools such as ChatGPT before they are publicly released.

The US Commerce Department said it will spend the next 60 days fielding opinions on the possibility of AI audits, risk assessments and other measures that could ease consumer concerns about these new systems.

“There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” said Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.

Last week, Mr Biden said during a meeting with his council of science and technology advisers that tech companies must ensure their products are safe before releasing them to the public.

The Biden administration last year unveiled a set of far-reaching goals aimed at averting harms caused by the rise of AI systems.

However, that was before the release of ChatGPT, from San Francisco start-up OpenAI, and of similar products from Microsoft and Google brought wider awareness of the latest AI tools, which can generate human-like passages of text as well as new images and video.

“These new language models, for example, are really powerful and they do have the potential to generate real harm,” Mr Davidson said in an interview. “We think that these accountability mechanisms could truly help by providing greater trust in the innovation that’s happening.”

The NTIA’s notice leans heavily towards requesting comment on “self-regulatory” measures that the companies building the technology would be expected to lead.

That contrasts with the European Union, where lawmakers are this month negotiating the passage of new laws that could place strict limits on AI tools depending on how high a risk they pose.
