Deepfake statistics in early 2025: how frequently are famous people targeted?

In the rapidly evolving digital technology landscape, deepfakes, AI-generated media that make it look like someone said or did something they never actually did, have emerged as a significant cause for concern. As these technologies become increasingly sophisticated and accessible, the frequency and impact of deepfake incidents are rising at an alarming pace. This is particularly true for famous people like Elon Musk, Donald Trump, and Taylor Swift, whose public profiles make them irresistible targets for manipulation. Let's take a closer look at these incidents.
Deepfake incidents boomed in early 2025, surpassing 2024
Resemble.AI and the AI Incident Database have been tracking deepfake incidents involving generated videos, images, and audio. We combined these databases to better understand how many deepfake incidents occur. From 2017 to 2022, there were 22 recorded incidents. In 2023, this number nearly doubled to 42 incidents. By 2024, incidents increased by 257% to 150. Remarkably, in the first quarter of 2025 alone, there were already 179 incidents, surpassing the total for all of 2024 by 19%.
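To make these growth rates concrete, here is a minimal Python sketch (illustrative only, using the incident counts quoted above) that reproduces the year-over-year percentages:

```python
# Recorded deepfake incidents, using the counts quoted above
# (2017-2022 combined, then annual; the 2025 figure covers Q1 only).
incidents = {"2017-2022": 22, "2023": 42, "2024": 150, "2025 Q1": 179}

def pct_change(old: int, new: int) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(f"2023 -> 2024:    {pct_change(incidents['2023'], incidents['2024']):+.0f}%")  # +257%
print(f"2024 -> Q1 2025: {pct_change(incidents['2024'], incidents['2025 Q1']):+.0f}%")  # +19%
```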
We sorted deepfake incidents into four categories:
- Explicit content generation;
- Fraud;
- Politically charged content;
- Miscellaneous content.
In the first quarter of 2025, deepfake incidents surged across all categories compared to the previous year:
- Explicit content incidents rose to 53, more than double the 26 recorded in all of 2024;
- Fraud accounted for 48 incidents, nearly matching the 2024 total of 56;
- Political incidents reached 40, approaching the 50 recorded in 2024;
- Miscellaneous content climbed to 38 incidents, exceeding the previous annual total of 18.
Overall, the first-quarter 2025 data highlights a concerning trend: every category is on pace to far exceed its 2024 total.
Since 2017, fraud has accounted for 31% of all deepfake incidents; within this category, 17% involved political figures, 27% featured celebrities, and 56% concerned other targets. Explicit content comprised 25% of incidents, with 35% targeting politicians or celebrities and 65% targeting the general public. Politically charged content represented 27% of deepfake incidents, while miscellaneous incidents accounted for the remaining 17%.
Besides categorizing deepfake incidents, we also looked at which groups of people were targeted most. Celebrities were targeted 47 times in the first quarter of 2025, an 81% increase over the whole of 2024. Politicians were targeted 56 times, almost reaching the 2024 total of 62. The general public also saw more attacks in the first quarter of 2025, with a 23% increase over 2024.
Unmasking deepfakes: the growing targeting of famous people
Since 2017, celebrities have been targeted in 21% of incidents, totaling 84 cases. Elon Musk was targeted 20 times, accounting for 24% of celebrity-related incidents. Taylor Swift follows with 11 incidents, Tom Hanks with 3, and Kanye West, Emma Watson, and Brad Pitt with 2 each. The remaining celebrity targets appear in repeated explicit-content incidents or in single fraud cases. Celebrity deepfakes were used for fraud in 38% of cases, explicit content generation in 26%, and political endorsement in 4%. The rest covered other misuses, such as unauthorized voice cloning and AI-generated music.
Politicians have been involved in 36% of all deepfake incidents, totaling 143 cases since 2017. Their targeting spikes during elections, when deepfakes are used to push political agendas. Donald Trump is the most targeted politician, involved in 25 separate deepfake incidents, or 18% of politician-related deepfakes. Joe Biden follows with 20 incidents, most of them during the election campaign, when his cloned voice was used in robocalls to voters. Kamala Harris has experienced 6 incidents, and Volodymyr Zelenskyy has been targeted 4 times. Political messaging accounted for 76% of politician deepfakes, fraud for 14%, and explicit content for 9%.
The general public was targeted 43% of the time, with 166 deepfake incidents. Of these, 41% involved various types of fraud, and 39% resulted in the generation of explicit media, meaning unauthorized images or videos of real people. The rest covered a wide array of topics, such as a video of a school principal making false statements or text-to-speech cloning of people's voices.
Deepfake media formats: what are the most popular ones?
The most popular format for deepfake incidents is video, with 260 reported cases. The primary categories include fraud (33%) and political content (28%), followed by explicit content (23%) and miscellaneous content (16%).
Looking at the target groups, politicians and the general public were each targeted 39% of the time, while celebrities were targeted 22% of the time.
The second most popular format is image, with 132 incidents reported. Remarkably, 58% of these accounted for explicit content. Other categories included fraud (20%), political content (11%), and miscellaneous content (11%).
The most popular target group was the general public (56%), followed by politicians and celebrities, each targeted 22% of the time.
The last format is audio, which recorded 117 incidents. Unlike images, the most popular use of deepfake audio was fraud (45%), followed by political content (34%) and miscellaneous purposes (21%).
The target groups were as follows: politicians and the general public were each targeted 40% of the time, and celebrities were targeted 20% of the time.
How to spot a deepfake
Deepfakes' widespread distribution and growing realism make them progressively harder to detect. The technology that generates deepfakes often outpaces the capabilities of detection tools, and the sheer amount of this content online further complicates distinguishing genuine media from fake. That said, there are some things you can look out for to detect a deepfake, including:
- Unnatural movements;
- Color differences;
- Inconsistent lighting;
- Poor lip-sync (audio doesn't match lip movements);
- Blurry or distorted backgrounds;
- Distribution channels (deepfakes are often spread by bot accounts).
What’s next?
According to UK government sources, approximately eight million deepfakes will be shared in 2025, a dramatic increase from the 500,000 shared in 2023¹. That is a 16-fold rise in roughly two years, which works out to the figure doubling about every six months. Such rapid growth suggests that deepfake technology is transitioning from a specialized technical capability to a mainstream tool that both sophisticated and opportunistic criminal actors can easily access.
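The doubling estimate follows directly from the two figures cited above; a quick back-of-the-envelope check (assuming the estimates are roughly 24 months apart):

```python
import math

# UK government figures cited above: ~500,000 deepfakes shared in 2023
# versus a projected ~8,000,000 in 2025, roughly 24 months apart.
start, end, months = 500_000, 8_000_000, 24

growth = end / start           # 16x overall growth
doublings = math.log2(growth)  # 16 = 2**4, so 4 doublings
print(months / doublings)      # -> 6.0 months per doubling
```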
Methodology and sources
This study used data from Resemble.AI and the AI Incident Database to create a combined dataset covering deepfake incidents since 2017. Incidents were included if they involved the generation of fake videos, images, or audio and were covered by media articles. Each incident was categorized as fraud, explicit content generation, politically charged content, or miscellaneous, and each target was classified as a politician, a celebrity, or a member of the general public. We then analyzed how prevalent each category is and what kinds of incidents it covers.
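For illustration, here is a minimal sketch of how such a combined dataset could be deduplicated and summarized. The record fields and the duplicate-matching rule are hypothetical; neither source publishes its data in exactly this form:

```python
from dataclasses import dataclass

# Hypothetical record shape, for illustration only.
@dataclass(frozen=True)
class Incident:
    year: int
    description: str
    media: str      # "video", "image", or "audio"
    category: str   # "fraud", "explicit", "political", or "misc"
    target: str     # "politician", "celebrity", or "public"

def combine(*sources: list[Incident]) -> list[Incident]:
    """Merge sources, dropping incidents reported in both databases."""
    seen, merged = set(), []
    for source in sources:
        for incident in source:
            key = (incident.year, incident.description.lower())
            if key not in seen:
                seen.add(key)
                merged.append(incident)
    return merged

def share_by(field: str, incidents: list[Incident]) -> dict[str, float]:
    """Percentage of incidents per value of the given field."""
    counts: dict[str, int] = {}
    for incident in incidents:
        value = getattr(incident, field)
        counts[value] = counts.get(value, 0) + 1
    return {k: round(v / len(incidents) * 100, 1) for k, v in counts.items()}
```

With a dataset shaped like this, `share_by("category", combined)` and `share_by("target", combined)` would reproduce the percentage breakdowns reported above.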
For the complete research material behind this study, visit here.