Social media platforms race to address AI-generated images ahead of November election

By News Room

Editor’s Note: A version of this article first appeared in the “Reliable Sources” newsletter. Sign up for the daily digest chronicling the evolving media landscape here.

Big Tech is racing to address the stream of A.I.-generated images inundating social media platforms before the machine-crafted renderings further contaminate the information space.

TikTok announced on Thursday that it will begin labeling A.I.-generated content. Meta (the parent company of Instagram, Threads and Facebook) said last month that it will begin labeling such content. And YouTube introduced rules mandating creators disclose when videos are A.I.-created so that a label can be applied. (Notably, Elon Musk’s X has not announced any plans to label A.I.-generated content.)

With fewer than 200 days until the high-stakes November election, and with the technology advancing at breakneck speed, the three largest social media companies have each outlined plans to help their billions of users differentiate between content generated by machines and content created by humans.

Meanwhile, OpenAI, the creator of ChatGPT, which also lets users generate A.I. imagery through its DALL-E model, said this week that it will launch a tool that allows users to detect when an image was built by a bot. Additionally, the company said that it will launch a $2 million election-related fund with Microsoft to combat deepfakes that can “deceive the voters and undermine democracy.”

The efforts from Silicon Valley represent an acknowledgment that the tools being built by technological titans have the serious potential to wreak havoc on the information space and inflict grave injury to the democratic process.

A.I.-generated imagery has already proven to be particularly deceptive. Just this week, an A.I.-created image of pop star Katy Perry supposedly posing on the Met Gala red carpet in metallic and floral dresses fooled people into believing that the singer attended the annual event, when in fact she did not. The image was so realistic that Perry’s own mother believed it to be authentic.

“Didn’t know you went to the Met,” Perry’s mom texted the singer, according to a screenshot posted by Perry.

“lol, mom the AI got you too, BEWARE!” Perry replied.

While the viral image didn’t cause serious harm, it’s not difficult to imagine a scenario — particularly ahead of a major election — in which a fake photograph could mislead voters and stir confusion, perhaps tipping the scale in favor of one candidate or another.

But, despite the repeated and alarming warnings from industry experts and figures, the federal government has, thus far, failed to take any action to establish safeguards around the industry. And so, Big Tech has been left to its own devices to rein in the technology before bad actors can exploit it for their own benefit. (What could possibly go wrong?)

Whether the industry-led efforts can successfully curb the spread of damaging deepfakes remains to be seen. Social media giants have reams of rules prohibiting certain content on their platforms, but history has repeatedly shown that they often fail to adequately enforce them, allowing malicious content to spread to the masses before they take action.

That poor record doesn’t inspire much confidence as A.I.-created images increasingly bombard the information environment — particularly as the U.S. hurtles toward an unprecedented election with democracy itself at stake.
