
Google updates AI image generation tools following inaccuracy incident

The tech giant had paused the creation of AI-generated images depicting people in February after some users flagged issues.


Google has upgraded its AI-powered image generation tools, months after pausing use of the technology over issues with its depictions of people.

In February, Google apologised and paused the creation of AI-generated images showing people within its Gemini chatbot after a number of “inaccurate or even offensive” results were shared online.

Now, the tech giant has relaunched the tool with a new version of its image generation technology, called Imagen 3, which includes new safety features. Access to creating images of people is now limited to paying Gemini subscribers.

Google said its new image generation model will come with “built-in safeguards and adhere to our product design principles”.

The ability to generate images of people will only be available in English to Gemini Advanced subscribers (Alamy/PA)

“We don’t support the generation of photorealistic, identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes.

“Of course, not every image Gemini creates will be perfect, but we’ll continue to listen to feedback from early access Gemini Advanced users as we keep improving.

“We’ll gradually roll this out, aiming to bring it to more users and languages soon.”

The technology giant confirmed that the ability to generate images of people would only be available in English to Gemini Advanced subscribers, but that image generation not involving people would be available to all users.

The creation of AI-generated images of people within Gemini was paused in February, after users began flagging that the chatbot was generating images showing a range of ethnicities and genders even when doing so was historically inaccurate. For example, prompts to generate images of certain historical figures, such as the US founding fathers, returned images depicting women and people of colour.

At the time, some critics accused Google of anti-white bias, while others suggested the company had over-corrected in response to long-standing racial bias issues in AI technology, which had previously seen facial recognition software struggle to recognise, or mislabel, black faces, and voice recognition services fail to understand accented English.

At the time, Google apologised – with chief executive Sundar Pichai saying the incident was “unacceptable” and the company had “got it wrong” – and pledged to fix the issue.

Alongside the image generator update, Google also confirmed that it was starting to roll out what it calls Gems, smaller, customisable versions of Gemini which users can tailor to be personal AI experts on a topic.

The rollout, which will initially be to Gemini Advanced users, will include a set of pre-made Gems to give users an idea of how they can be used – including a writing editor and coding expert.
