Deepfake fraud caused financial losses nearing $900 million
Since the internet first became available to the masses, we have been used to hearing, “Don't believe everything you see online.” With deepfake technology booming at an exponential rate, that advice is more crucial than ever. Deepfake incidents in the first half of 2025 alone exceeded the combined total recorded between 2017 and 2024 by 171%. And that is not all: financial losses from deepfake fraud are growing just as drastically.
Key insights
- Deepfake technology is already being used for fraud, causing almost $900 million in losses to date. In the first half of 2025 alone, losses amounted to $410 million, compared to $359 million for all of 2024, and $128 million total over the period 2019–2023.
- Four main categories of fraud accounted for 98.6% of financial losses ($885 million). The most common was impersonating famous people to promote fraudulent investments, which resulted in $401 million in losses. The next most common category was fraudulent transfers after impersonating a company official, leading to $217 million in losses. Another type of fraud involves using deepfake technology to bypass biometric verification systems in order to take out loans or steal data. Although this method is much less frequent than the other forms, it has already caused $139 million in losses. Lastly, romance scams, which are widely used by criminal groups, have caused $128 million in losses.
- Aside from the four main types of fraud, five additional categories together account for just 1.4% of total financial losses ($12.5 million). The largest of these, generating vast amounts of AI-produced music to earn royalties without disclosing that the content was made entirely by artificial intelligence, accounts for 1.1% of total losses ($10 million).
- The last four categories of fraud include creating deepfakes of victims and threatening to share fake nudes unless payment is made, as well as impersonating a family member in need to convince the victim to transfer money. Fraudsters also use AI to pose as police officers, contacting victims under false pretenses and persuading them to send money. Lastly, scammers may impersonate famous individuals and ask victims to send funds for various reasons, such as a deepfaked Italian defense minister asking for money to free kidnapped journalists. Although these four categories amounted to only $2.5 million in losses, people targeted by these scams may suffer lasting emotional and financial effects.
- Deepfake fraud targets both businesses and individuals, with businesses losing 40% ($356 million) and individuals 60% ($541 million) of the $897 million total. Examples of incidents affecting businesses include over 1,100 fraudulent loan application attempts and more than 1,000 fraudulent accounts created using AI-generated deepfake images to bypass biometric verification systems, with potential losses in Indonesia alone estimated at $138.5 million. In another case, a bank manager in Hong Kong was tricked into transferring $35 million after receiving a phone call from someone he believed to be a company director. The fraudster used AI-based voice cloning to perfectly mimic the executive’s voice and supported the request with convincing emails and documentation.
- Individuals are also targeted by deepfake fraud. For example, a romance scam using deepfake AI resulted in losses exceeding $46 million. In this case, 28 people were arrested for creating realistic online personas of attractive women to build relationships with victims and coerce them into investing in a fake cryptocurrency platform. Another example involves a large-scale AI-assisted financial scam allegedly operated from Tbilisi, Georgia. This scheme used deepfake celebrity endorsements and manipulated victims through fraudulent trading dashboards that simulated high returns. Call center agents, trained with AI-driven persuasion tactics, convinced individuals to invest more money while falsely promising profits. Many reportedly lost their life savings, with total losses reaching $35 million across more than 6,000 victims.
- Looking at the broader picture, deepfake incidents fall into three main categories. Fraud accounts for 31% of reported incidents, explicit content generation for 23%, and politically motivated deepfakes for 22%. The number of incidents is rising rapidly: almost four times as many cases were reported in the first half of 2025 (580 incidents) as in all of 2024 (150 incidents), while only 64 deepfake-related incidents were recorded between 2017 and 2023.
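The loss figures above can be cross-checked with simple arithmetic. The sketch below tallies the subcategory totals reported in this article; the category labels are shorthand for the descriptions given in the bullets, not official names from the underlying dataset:

```python
# Cross-check of the loss figures reported above (all values in millions of USD).
losses = {
    "fake celebrity investment promotions": 401,
    "impersonating company officials": 217,
    "biometric verification bypass": 139,
    "romance scams": 128,
    "undisclosed AI-generated music royalties": 10,
    "other (sextortion, fake relatives, fake police, celebrity begging)": 2.5,
}

total = sum(losses.values())                     # ~$897M, the near-$900M headline figure
top_four = sum(list(losses.values())[:4])        # the four main fraud categories
print(f"total: ${total}M")
print(f"top four categories: ${top_four}M ({top_four / total:.1%} of total)")
```

Summing the four main categories reproduces the $885 million (98.6%) share quoted above, and adding the five minor categories ($12.5 million) lands on the roughly $897 million total.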
Methodology and sources
This study used data from the AI Incident Database and Resemble.AI to create a combined dataset covering deepfake incidents since 2017. Incidents were included if they involved the generation of fake videos, images, or audio and were covered by media articles. These incidents were categorized into fraud, explicit content generation, politically charged content, and miscellaneous. For deepfake incidents related to fraud where a financial loss was clearly reported in the article, each case was further classified into one of nine specific fraud subcategories.
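The categorization step described above can be pictured as a simple tagging pass over incident descriptions. The sketch below is purely illustrative: the four top-level category names come from this article, but the incident records and keyword rules are hypothetical assumptions, not the study's actual method.

```python
# Illustrative sketch of the incident-categorization step described above.
# Category names follow the article; the keyword rules and sample incidents
# are hypothetical assumptions for demonstration only.
CATEGORIES = {
    "fraud": ("scam", "fraud", "transfer", "investment"),
    "explicit content": ("explicit", "nude", "pornographic"),
    "political": ("election", "political", "minister"),
}

def categorize(description: str) -> str:
    """Return the first matching category, else 'miscellaneous'."""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(word in text for word in keywords):
            return category
    return "miscellaneous"

incidents = [
    "Deepfake celebrity video promotes a fraudulent investment platform",
    "AI-generated explicit images of a public figure shared online",
    "Fabricated audio of a candidate released before the election",
    "AI-generated song uploaded to streaming services",
]
print([categorize(d) for d in incidents])
# → ['fraud', 'explicit content', 'political', 'miscellaneous']
```

In the actual study, fraud cases with a clearly reported financial loss were further split into nine subcategories, which a real pipeline would handle with a second, finer-grained pass.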
For the complete research material behind this study, visit here.