Published: Oct 21, 2025

Digital democracy | Dystopian new tech

AI drives deepfake losses to $1.56 billion

Deepfakes are an escalating threat, with related financial losses exceeding $1.5 billion as of 2025. This rapid rise highlights the evolving risks posed by AI-generated images and video: widely available tools can now produce convincing deepfakes for just a few dollars, driving down the cost of fraud, making scams easier to scale, and enabling entirely new forms of deception to emerge.

Key insights

  • Deepfake-related losses have already reached $1.56 billion, with over $1 billion occurring in 2025 alone. In 2024, losses approached $400 million, while from 2019 to 2023 they totaled $130 million. One of the main drivers of this surge is the dramatic drop in the cost of creating deepfakes. Previously, producing a one-minute deepfake video was estimated to cost between $300 and $20,000¹, depending on quality, and required professional skills. Now anyone can generate deepfake videos using easily accessible AI-assisted tools such as OpenAI’s recently released Sora 2². The minimum cost of generating fake videos has dropped substantially, sometimes to just a few dollars per minute.
  • Deepfake voices are getting even cheaper to make, often costing around $0.01–$0.20 per minute, depending on the plan and platform. As little as three seconds of recorded voice is enough to clone a voice and generate deepfakes³. These cloned voices are often used in scams. In one case, a woman lost $15,000 after receiving a voice message that sounded exactly like her daughter, claiming she had been in a car crash; this was followed by someone impersonating a public defender who contacted the victim to demand bail money⁴.
  • Falling costs have enabled new forms of scams. A clear example of how cheap deepfakes have become is the recent wave of “lost pet” scams. Fraudsters create AI-generated images of a missing animal, claim they have found it, and request a small payment, sometimes as little as $50. To make the deception more convincing, they send the fake photos to the victim⁵.
  • Deepfakes are also being used to exploit the hiring process. The FBI⁶ is investigating a scheme in which more than 100 companies unknowingly hired remote IT workers who intended to funnel money to foreign governments. These individuals leveraged AI and deepfake technology to generate synthetic identities, fabricate resumes and CVs, draft communications, and even conduct job interviews using manipulated video or audio feeds⁷.
  • Investment scams, which account for the majority of losses, have also adopted deepfake technology. Losses from these scams amount to $900 million, or 57% of all analyzed deepfake-related losses. Most cases start on social media platforms such as Facebook, WhatsApp, Telegram, or YouTube, where deepfaked celebrities or politicians promote fraudulent financial products.
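The loss figures above can be cross-checked with a quick back-of-the-envelope calculation (a sketch using the rounded figures as reported in the text, not the study's underlying data):

```python
# Reported deepfake-related losses, in millions of USD, as cited above.
losses_by_period = {
    "2019-2023": 130,   # "they totaled $130 million"
    "2024": 400,        # "approached $400 million"
    "2025": 1_000,      # "over $1 billion"
}

total = sum(losses_by_period.values())
print(total)  # 1530 -- consistent with the $1.56 billion headline figure

# Investment scams: $900 million out of the $1.56 billion total.
investment_share = 900 / 1_560 * 100
print(round(investment_share, 1))  # 57.7 -- the article rounds down to 57%
```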

Methodology and sources

This study used data from the AI Incident Database and Resemble.AI to create a combined dataset covering deepfake incidents from 2017 to September 2025. Incidents were included if they involved the generation of fake videos, images, or audio and were covered by media articles. For deepfake incidents related to fraud where a financial loss was clearly reported in the article, each case was further classified into one of 12 specific fraud subcategories.

For the complete research material behind this study, visit here.

Data was collected from:

AI Incident Database (2025).
Resemble.AI (2025). Deepfake Incident Database.

References:

¹ Tex Media. Darkweb prices spike to $20,000 per minute for deepfakes.
² OpenAI. Sora 2 is here.
³ Microsoft. VALL-E Family.
⁴ Homeland Security Today. When The Deepfake Threat Hits Home.
⁵ CBC. Someone is using AI-generated images to demand reward money for lost pets.
⁶ Reuters. DOJ announces arrest, indictments in North Korean IT worker scheme.
⁷ Crowell. From Deepfakes to Sanctions Violations: The Rise of North Korean Remote IT Worker Schemes.