YouTube expands its AI likeness detection technology to celebrities

YouTube is broadening its AI likeness detection system to identify unauthorized celebrity deepfakes in videos, TechCrunch reports.

Sarah Mills | GlobalBeat

YouTube on Monday rolled out an expanded version of its AI likeness detection system for celebrities, after months of testing with musicians and athletes.

The platform now scans uploads for unauthorized digital replicas of actors, musicians, athletes, and influencers who have submitted their facial or vocal data.

Deepfake scams featuring fake celebrity endorsements have exploded. Some 1,300 ads impersonating public figures ran on YouTube last year alone, according to ad-tracker Adalytics. The new system flags any video that uses a star’s face or voice without permission and routes it for human review. Creators can appeal takedowns, but repeated violations trigger channel bans.

The expansion covers any public figure with a verified channel or more than 100,000 followers. Stars upload reference photos, short voice clips, and trademark gestures. YouTube’s neural network then compares each new upload against that private portfolio. Matches above 96 percent confidence trigger an automatic block. Lower scores land in a queue for specialists at YouTube’s Dublin trust-and-safety team.
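The routing logic described above can be sketched in a few lines. This is purely illustrative pseudocode based on the article's description, not YouTube's actual implementation: the function name, the lower review threshold, and the similarity-score interface are all assumptions; only the 96 percent auto-block cutoff comes from the reporting.

```python
# Hypothetical sketch of the confidence-based routing described in the
# article: matches above 96% confidence are blocked automatically, while
# lower-scoring matches are queued for human review. The function name,
# the 0.50 lower bound for review, and the score interface are invented
# for illustration.

AUTO_BLOCK_THRESHOLD = 0.96
REVIEW_THRESHOLD = 0.50  # assumed; not stated in the article


def route_upload(similarity_score: float) -> str:
    """Decide an upload's fate from its best likeness-match score (0.0-1.0)."""
    if similarity_score > AUTO_BLOCK_THRESHOLD:
        return "auto_block"    # removed without human involvement
    if similarity_score > REVIEW_THRESHOLD:
        return "human_review"  # queued for trust-and-safety specialists
    return "allow"             # no actionable match


print(route_upload(0.97))  # auto_block
print(route_upload(0.80))  # human_review
```

The two-tier design matters for the false-positive numbers cited later in the piece: a hard automatic block only fires at very high confidence, while ambiguous cases (such as parody) get human judgment.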

Rapper Ice Cube welcomed the move. “People were selling bogus crypto coins with my face,” he told reporters in Los Angeles. “Every takedown took 3 days. Now it’s 3 minutes.” Ice Cube said he lost “six figures” to fake endorsement scams in 2025. His manager confirmed they filed 214 complaints in February.

YouTube piloted the tech last August with 50 musicians including John Legend and Demi Lovato. The company said 18,000 deepfake uploads were removed during the trial. Legend posted a screen-recording in January showing a fake car-insurance ad bearing his likeness. It vanished within 90 seconds after the system matched his voiceprint.

Actors’ unions pushed hard for the expansion. The Screen Actors Guild reported that 2,400 members found AI replicas of themselves on YouTube in the past year. Union president Fran Drescher called the rollout “a solid first down” but urged Congress to pass nationwide deepfake penalties. California governor Gavin Newsom signed a bill last October allowing celebrities to sue creators of unauthorized digital doubles for $10,000 per violation.

Tech firms race to keep pace. TikTok introduced its own likeness-detection tool in March, while Meta tests watermarking for AI-generated faces. YouTube’s advantage is scale: 2.7 billion monthly viewers supply oceans of training data. Engineers fed the model 2 million hours of verified celebrity footage to cut false positives. Early tests showed a 7 percent wrongful flag rate, mostly parody clips protected under fair-use rules.

Smaller creators cry censorship. Comedy channel “DeepLaugh” saw 30 parody videos removed in February for using synthetic Obama voices. Owner Calvin Nguyen said the appeals process takes 14 days. “My AdSense dries up while they decide,” he told GlobalBeat. YouTube countered that parody remains allowed if creators label videos “altered” in both title and thumbnail.

The rollout lands two days before the European Union’s Digital Services Act kicks into higher gear. Starting 1 May, platforms face fines up to 6 percent of global revenue if they fail to curb “malicious deepfakes.” Brussels officials confirmed YouTube’s tool meets the bloc’s standards, giving parent Google a competitive shield. Rivals Twitch and Roku have not yet announced comparable safeguards.

Money rides on the fix. Brands pulled $180 million in ad spending from YouTube last year after discovering their commercials running beside deepfake scams, according to media-buyer Horizon Media. Advertisers pay a 35 percent premium for inventory certified as “deepfake-free,” Google’s ad-sales chief Sean Downey told investors in March. He predicted the new protections could add $1.2 billion in premium revenue this year.

Not every star wants protection. TikTok personality Addison Rae said she opted out. “Fans make tribute mash-ups, that’s free promo,” she told her 88 million followers. YouTube allows celebrities to set leniency levels: block all, allow parody, or approve case-by-case. About 12 percent of eligible stars chose the permissive setting during beta trials, the company said.

Background

California passed the first U.S. deepfake law in 2019, but it covered only political videos within 60 days of an election. Celebrities had to rely on privacy, publicity-rights, or defamation claims, each slow and expensive. The federal “NO FAKES Act” introduced in February would create a nationwide right of action, but the bill remains in committee.

YouTube’s previous system relied on basic facial recognition tuned for copyright. It caught bootleg movies but missed subtle face-swaps. Scammers exploited the gap by morphing a celebrity’s features onto a different skull structure, dodging pixel-matching filters. The new neural network analyzes micro-movements, vocal tremor, and ear-shape geometry, tripling detection rates.

What’s Next

YouTube will open the tool to all verified creators in June, not just celebrities. Executives plan to add real-time scanning for live streams before the 2026 midterm elections. Google is also testing integration with Chrome to warn users before they share suspected deepfake clips on Twitter or Facebook.

Fast copycats are coming. Underground forums already share tips to beat YouTube’s model: slightly desaturating color, adding glasses, or pitching voices 4 percent lower. Each tweak forces engineers to retrain the network. The cat-and-mouse game will shape whether Hollywood embraces AI stunt doubles or sues them into extinction.

Sarah Mills
Technology & Science Editor

Sarah Mills is GlobalBeat’s technology and science editor, covering artificial intelligence, cybersecurity, public health, and climate research. Before joining GlobalBeat, she reported for technology desks across Europe and North America. She holds a degree in Computer Science and Journalism.