
Yahoo News offers deepfakes protection from McAfee as the election approaches

The 2024 US presidential election campaign saw some notable deepfakes – AI-powered impersonations of candidates designed to mislead voters or demean the targeted candidates. Thanks to a retweet from Elon Musk, one of those deepfakes was viewed more than 143 million times.

The prospect of unscrupulous campaigns or foreign adversaries using artificial intelligence to influence voters has alarmed researchers and officials across the country, who say AI-generated and manipulated media is already spreading rapidly across the internet. For example, researchers at Clemson University found an influence campaign on the social platform X that uses AI to generate comments from more than 680 bot accounts supporting former President Trump and other Republican candidates; the network has posted more than 130,000 comments since March.

To strengthen its protection against manipulated images, Yahoo News — one of the most popular online news sites, which receives more than 190 million visits per month, according to Similarweb.com — announced Wednesday that it is integrating cybersecurity firm McAfee's deepfake image detection technology. The technology reviews images submitted by Yahoo News writers and flags those likely to have been created or manipulated by AI, helping the site's editorial team decide whether to publish them.

Matt Sanchez, president and general manager of Yahoo Home Ecosystem, said the company is just trying to stay one step ahead of scammers.

“While deepfake images are not a problem at Yahoo News today, this tool from McAfee helps us be proactive as we always work to ensure a high-quality experience,” Sanchez said in an email. “This partnership strengthens our existing efforts and enables us to achieve greater accuracy, speed and scalability.”

Sanchez said media outlets across the news industry are thinking about the threat of deepfakes — “not because it's a widespread problem today, but because the possibility of misuse is on the horizon.”

Still, thanks to easy-to-use AI tools, deepfakes have proliferated to the point that 40% of high school students surveyed in August said they had heard of some kind of deepfake imagery being shared at their school. An online database of political deepfakes compiled by three Purdue University researchers contains nearly 700 entries, more than 275 of them from this year alone.

Steve Grobman, McAfee's chief technology officer and executive vice president, said the partnership with Yahoo News grew out of McAfee's work on products to help consumers detect deepfakes on their computers. The company realized that the technology it was developing for labeling potential AI-generated images could be useful for a news site, especially one like Yahoo that combines the work of its own journalists with content from other sources.

McAfee's technology adds to the “extensive capabilities” Yahoo already had to verify the integrity of material coming from its sources, Grobman said. The deepfake detection tool, itself based on AI, examines images for the kinds of artifacts that AI-powered tools leave behind in the millions of data points in a digital image.

“One of the really great things about AI is that you don’t have to tell the model what to look for. The model figures out what to look for,” Grobman said.

“The quality of the fakes is increasing rapidly, and part of our partnership is just to address that,” he said. That means monitoring the state of the art in image generation and using new examples to improve McAfee's detection technology.

Nicos Vekiarides, chief executive of fraud prevention company Attestiv, said it was an arms race between companies like his and those that make AI-powered image generators. “They are doing better. The anomalies are getting smaller,” Vekiarides said. And while there is increasing support among major industry players for adding watermarks to AI-generated material, the bad guys aren't playing by those rules, he said.

In his view, deepfake political ads and other fake material broadcast to a wide audience will not have much of an impact because “they are debunked fairly quickly.” What's more damaging, he said, are the deepfakes that influencers share with their followers or pass from individual to individual.

Daniel Kang, an assistant professor of computer science at the University of Illinois Urbana-Champaign and an expert in deepfake detection, warned that no AI detection tools today are good enough to catch a highly motivated and well-equipped attacker, such as a state-sponsored deepfake creator. Because there are so many ways to manipulate an image, an attacker can “twiddle more knobs than there are stars in the universe to try to bypass the detection mechanisms,” he said.

But many deepfakes don't come from sophisticated attackers, which is why Kang said he's optimistic about current AI-generated media detection technologies, even if they can't identify everything. Adding AI-powered tools to websites now allows the tools to learn and get better over time, just like spam filters do, Kang said.

They're not a silver bullet, he said; they must be combined with other safeguards against manipulated content. Still, Kang said, “I think there is good technology that we can use, and it will get better over time.”

Vekiarides said the public has been primed for the wave of deepfakes by its acceptance of widely used image-editing tools, such as photo editors that essentially airbrush away the imperfections in magazine cover photos. It's not that big a jump from a fake background on a Zoom call to a fake picture of the person you're meeting online, he said.

“We let the cat out of the bag,” Vekiarides said, “and it’s hard to put it back in.”
