Getty users gain access to AI image generator with legal protection

By News Room

Getty Images will give hundreds of thousands of users access to a new artificial intelligence image-generating tool, as a global intellectual property debate intensifies around the fast-moving technology.

The US photo agency, one of the world’s largest with more than 135mn copyrighted images in its archives, on Monday launched an AI tool that can create pictures based on user prompts. It also set out a payment plan for those whose images were used to train the AI system.

Getty also pledged to protect its more than 800,000 users with uncapped indemnification tied to the product, meaning the agency will assume full legal and financial responsibility on behalf of its business customers in any potential copyright dispute.

The release follows a promise this month from Microsoft to provide indemnity coverage against any potential copyright claims arising from the use of its AI Copilot services, which integrate generative AI into its Word, Excel and PowerPoint products.

Getty will also pay, on a “recurring basis”, the artists whose work helped train its AI system, chief executive Craig Peters said.

“We fundamentally believe creatives’ . . . expertise, and the investment that they put into this content, should be rewarded,” Peters said. The “dollars are going to be small at the outset”, he added, but said the market for generative AI products will grow and “these will develop into material revenue streams”.

Getty’s product launch comes on the heels of OpenAI’s update last week to its popular image-generating tool, DALL-E.

AI art tools offered by companies such as OpenAI, Midjourney and Stability AI are at the centre of a debate around intellectual property ownership in the age of AI. Getty this year filed a copyright claim against Stability AI, maker of a commercial image-generating tool, in the UK High Court, claiming it had “unlawfully copied and processed millions of images protected by copyright”.

Text-to-image AI models are trained using billions of images pulled from the internet, including social media, ecommerce sites, blogs and stock image archives. The training data sets teach algorithms, by example, to recognise objects, concepts and artistic styles such as pointillism or Renaissance art, as well as connect text descriptions to visuals.
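To make that mechanism concrete, below is a minimal, illustrative sketch of the kind of contrastive training step such systems use to connect captions to visuals. It is not Getty’s, Nvidia’s or OpenAI’s actual code: the encoders, feature sizes and hyperparameters are assumptions chosen purely for illustration.

# Illustrative sketch only: a toy contrastive training step of the kind used
# to teach models to associate text descriptions with images. All model
# shapes and numbers here are hypothetical.
import torch
import torch.nn.functional as F

# Stand-in encoders; real systems use large neural networks trained on
# billions of caption-image pairs.
image_encoder = torch.nn.Linear(2048, 512)  # pretend input: image features
text_encoder = torch.nn.Linear(768, 512)    # pretend input: caption features

optimizer = torch.optim.Adam(
    list(image_encoder.parameters()) + list(text_encoder.parameters()), lr=1e-4
)

def training_step(image_feats, text_feats):
    # Matching image/caption pairs are pulled together in a shared embedding
    # space; mismatched pairs are pushed apart.
    img = F.normalize(image_encoder(image_feats), dim=-1)
    txt = F.normalize(text_encoder(text_feats), dim=-1)
    logits = img @ txt.T / 0.07           # pairwise similarity scores
    labels = torch.arange(len(img))       # i-th image matches i-th caption
    loss = (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Fake batch of 8 image/caption feature pairs, just to show the call.
print(training_step(torch.randn(8, 2048), torch.randn(8, 768)))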

Getty’s latest product, developed in partnership with chip company Nvidia, was trained on its own large library of images. All content generated using its AI will belong to the customer and will not be added to Getty’s existing content libraries for others to license, Peters said, because the company wants to avoid conflating authentic and AI-generated imagery in its databases.

OpenAI’s DALL-E 3 is integrated with paid versions of ChatGPT, so users can ask the chatbot to create images and have it come up with more detailed prompts.

Microsoft-backed OpenAI has also said it will allow artists to opt their art out of future versions of text-to-image AI models via a removal form on its website, acknowledging ongoing legal disputes between artists and commercial AI imagery companies. In May, its chief executive Sam Altman told the US Congress that “creators deserve control over how their creations are used”.

Creating IP-protected products that fairly compensate human artists is the way forward for the industry, Peters said.

Products such as Getty’s and Adobe’s Firefly AI “blow up” the arguments of companies that have called such a move “impractical”, he added. “This proves that argument doesn’t hold any water.”

He added that in many cases training on higher-quality data, such as Adobe’s or Getty’s data sets, produced better results than scraping the web indiscriminately.

“People that are using these tools, at least in the commercial sense, value their own creativity, and therefore they value the creativity and the work of others,” he said.
