This excerpt is from DingXiang Defense Cloud Business Security Intelligence Center's white paper on "deepfakes," which are highly realistic synthetic videos or audio used for malicious purposes. To learn more, contact marketing@dingxiang-inc.com to request a free electronic copy.
Deepfakes pose a significant threat, enabling criminals to steal identities, spread misinformation, and create fake content for malicious purposes. Real-world examples include:
- January 2024: Employees of a multinational company in Hong Kong lost HK$200 million to phishing fraud involving deepfakes.
- December 2023: An overseas student's parents were tricked into paying a 5 million yuan ransom after receiving a deepfake video depicting their child's "kidnapping."
A KPMG report reveals a staggering 900% year-on-year increase in the number of deepfakes available online. Notably, BanDeepfakes reports that nearly all deepfakes (98%) are pornographic and target women.
Investigating the Deepfake Industry Chain
According to a new report by iProov, a biometrics company, fraudsters commonly use tools like SwapFace, DeepFaceLive, and Swapstream to create deepfakes. These tools often work in conjunction with:
Simulators: Changing location data for virtual "presence" anywhere.
Device modification software: Forging device attributes to bypass security systems.
IP changers: Rapidly switching IP addresses across regions.
By combining these tools, criminals can distribute deepfakes through various channels, including video conferencing platforms, work networks, and social media, ultimately enabling further fraudulent activities.
The emergence of "Cybercrime as a Service" (CaaS) has made acquiring deepfake services and technologies easier than ever. GitHub, a software development platform, hosts over 3,000 repositories related to deepfakes, highlighting how widely the technology is developed and how readily it can be misused.
Furthermore, darknet marketplaces abroad offer dedicated channels and groups for deepfakes, ranging from self-service creation tools to personalized services. Pricing runs from $2 for a basic deepfake to $100 for more complex ones; this affordability and ease of use further facilitate deepfake-related crime.
Process: Steps involved in creating a "deepfake" video
"Deepfakes" involve AI algorithms and deep learning. Overall, the process of creating a "deepfake" video involves the following steps:
1. Data Collection
Collect a large amount of data on the target, including multi-angle facial photos plus work and personal information; much of this imagery, information, and video is harvested from public social media.
2. Feature Extraction
Using deep learning algorithms, accurately identify and extract key facial features such as the eyes, nose, and mouth.
3. Image Synthesis
Overlay and blend the target's face onto the subject's face in the video to be faked, aligning facial features so that the source and target match during replacement.
4. Voice Processing
Use machine learning and artificial intelligence to replicate a person's voice, including its pitch, tone, and speaking style, with startling accuracy, and match the lip movements in the video to the synthesized speech.
5. Environment Rendering
Use lighting and color-grading tools to refine how the subject, voice, movement, environment, and costumes in the video fit together.
6. Video Synthesis and Export
Render and export the finished video, which can then be used for online playback, live streaming, video conferencing, and other fraudulent activities.
Difficulties: Challenges in Identifying and Detecting "Deepfakes"
Difficult to Identify
"Deepfakes" have evolved to the point where they can generate convincingly realistic personal simulations, making it increasingly difficult to distinguish between real and fake content. It is difficult for people to identify them without specialized training, and recognizing this threat is the first step in defending against it.
Difficult to Detect
The increasing quality of "deepfakes" makes detection a major challenge. Not only is it difficult for the naked eye to spot them; some conventional detection tools also fail to catch them in time.
Difficult to Track
Traditional cybersecurity measures cannot effectively protect against "deepfakes" because they leave no digital fingerprint: there are no clear digital clues to follow, no IP address to blacklist, and no direct malware signature to detect.
The "Deepfake" Fraud Ecosystem
What is even more frightening is that the danger of "deepfakes" lies not only in the technology itself but in the entire fraud ecosystem it enables. That ecosystem operates through a complex network of bots, fake accounts, and anonymous services, all designed to create, amplify, and distribute fabricated information and content. This is guerrilla warfare in the digital age: attackers are invisible and elusive, and they not only fabricate information but manipulate each participant's perception of reality. Combating "deepfake" fraud therefore requires not just technical countermeasures but also sophisticated psychological defenses and public safety awareness.
Security: Four Defense Measures Against "Deepfake" Fraud
As technology advances, methods for detecting and identifying "deepfake" scams are also evolving. Businesses and individuals need to verify identities through multiple channels and adopt multiple strategies to identify and defend against "deepfake" fraud.
1. Behavioral and Biometric Recognition
(1) During a video call, ask the other person to press on their nose or face and watch how the face changes: a real nose deforms when pressed. You can also ask them to eat or drink water and observe the result, or to perform unusual actions or expressions, such as waving or making a difficult gesture. Waving interferes with the face-tracking data and often produces shaking, flickering, or other visible anomalies. And whenever a remittance request arrives by video or recording, verify it by phone or repeatedly through other independent channels.
(2) In one-on-one communication, ask questions that only the other person could answer to verify their authenticity.
(3) "Deepfakes" can replicate voices, but they may also contain unnatural tones, rhythms, or subtle distortions that will be particularly noticeable after careful listening. At the same time, voice analysis software can help identify voice anomalies.
(4) For documents, an automated document-verification system can analyze files for inconsistencies such as font changes or layout differences.
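As a minimal sketch of the font-inconsistency check described above, the snippet below flags text runs set in a rarely used typeface, on the assumption that a tampered field is often re-typed in a slightly different font. The tuple format and sample data are illustrative; a real system would take its input from a PDF or OCR extraction tool.

```python
from collections import Counter

def font_inconsistencies(runs, tolerance=1):
    """Flag text runs whose font differs from the document's dominant font.

    `runs` is a list of (text, font_name, font_size) tuples, e.g. as
    produced by a PDF/OCR extractor. Runs in a rarely used font are
    suspicious: tampered fields are often re-typed in a near-match typeface.
    """
    fonts = Counter((name, size) for _, name, size in runs)
    dominant, _ = fonts.most_common(1)[0]
    return [
        (text, name, size)
        for text, name, size in runs
        if (name, size) != dominant and fonts[(name, size)] <= tolerance
    ]

runs = [
    ("Name: Alice Chen", "Helvetica", 11),
    ("Account: 4471-0092", "Helvetica", 11),
    ("Amount: $2,000,000", "Helvetica-Oblique", 10),  # re-typed field
    ("Date: 2024-01-15", "Helvetica", 11),
]
print(font_inconsistencies(runs))  # flags the mismatched "Amount" run
```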
2. Device and Account Recognition
(1) Digital signatures and blockchain ledgers are unique and can be used to trace the source of an action and flag it for review.
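To make the signing idea concrete, here is a simplified sketch using an HMAC as a shared-secret stand-in for a real public-key signature (production systems use asymmetric keys and certificate infrastructure; the key and payload below are placeholders):

```python
import hashlib
import hmac

SECRET_KEY = b"demo-signing-key"  # illustrative; real systems use asymmetric keys

def sign_media(payload: bytes) -> str:
    """Return a hex tag binding the payload to the signing key."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_media(payload: bytes, tag: str) -> bool:
    """Constant-time check that the payload has not been altered."""
    return hmac.compare_digest(sign_media(payload), tag)

original = b"\x00\x01 raw video bytes"
tag = sign_media(original)
print(verify_media(original, tag))         # True: untouched
print(verify_media(original + b"x", tag))  # False: a single changed byte fails
```

Any edit to the media after signing, including a deepfake substitution, invalidates the tag, which is what makes signed provenance useful for review.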
(2) Comparing device information, geographic location, and behavioral patterns can identify and block abnormal operations. DingXiang Device Fingerprint distinguishes legitimate users from potentially fraudulent behavior by recording and comparing device fingerprints. The technology uniquely identifies each device; detects virtual machines, proxy servers, emulators, and other maliciously controlled devices; and analyzes whether a device hosts multiple logged-in accounts, frequently changes IP addresses, or frequently changes device attributes. Flagging such abnormal behavior helps track and identify fraudsters' activities.
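The checks above can be sketched in miniature: hash a device's attributes into a stable fingerprint, then flag sessions whose fingerprint, IP, or account usage shifts suspiciously. The attribute names, thresholds, and session format are all illustrative assumptions, not DingXiang's actual implementation.

```python
import hashlib
import json

def fingerprint(attrs: dict) -> str:
    """Stable hash of a device's attributes (illustrative attribute set)."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def assess(history: list) -> list:
    """Flag crude anomalies across one device's recent sessions."""
    flags = []
    if len({fingerprint(s["device"]) for s in history}) > 1:
        flags.append("device attributes changed between sessions")
    if len({s["ip"] for s in history}) > 3:
        flags.append("IP address changes frequently")
    if len({s["account"] for s in history}) > 2:
        flags.append("many accounts on one device")
    return flags

sessions = [
    {"device": {"os": "Android 13", "model": "X1"}, "ip": "1.2.3.4", "account": "a"},
    {"device": {"os": "Android 13", "model": "X1"}, "ip": "5.6.7.8", "account": "b"},
    {"device": {"os": "Android 13", "model": "X2"}, "ip": "9.9.9.9", "account": "c"},
]
print(assess(sessions))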
(3) Logins from unusual locations, device changes, phone-number changes, or dormant accounts suddenly becoming active all warrant stepped-up verification. Continuous identity verification during a session is also crucial: persistent checks ensure that the user's identity remains consistent throughout use. DingXiang's frictionless verification can quickly and accurately distinguish human operators from machines, identify fraudulent behavior, and monitor and intercept abnormal behavior in real time. In addition, following the principle of least privilege, restrict access to sensitive systems and accounts so that users can reach only the resources their roles require, reducing the potential impact of account theft.
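The step-up triggers listed above reduce to a simple rule. This is a toy sketch with made-up field names and a made-up dormancy threshold, intended only to show the shape of such a policy check:

```python
def needs_stepup(event: dict) -> bool:
    """Toy step-up-verification rule for the triggers described above.

    Field names and the 90-day dormancy threshold are illustrative.
    """
    return any([
        event.get("new_location", False),   # login from an unusual place
        event.get("new_device", False),     # unrecognized device fingerprint
        event.get("phone_changed", False),  # recent phone-number change
        event.get("dormant_days", 0) > 90,  # long-dormant account waking up
    ])

print(needs_stepup({"new_device": True}))    # True: challenge the user
print(needs_stepup({"dormant_days": 30}))    # False: routine activity
```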
(4) A face anti-fraud system that combines manual review with AI can prevent "deepfake" videos. DingXiang's full-link panoramic face security threat perception solution can effectively detect and expose forged videos. It performs intelligent risk assessment and rating of user face images using multi-dimensional signals such as face-environment monitoring, liveness detection, image-forgery analysis, and intelligent verification, quickly identifying fake-authentication risks. The solution monitors face-recognition scenarios and key operations in real time, watches for behaviors such as camera hijacking, device forgery, and screen sharing, and triggers an active defense mechanism in response. When a forged video or abnormal face information is discovered, the system can automatically execute defense policies; once a device hits a policy, the corresponding handling effectively blocks the risky operation.
3. AI Recognition and Evidence Collection
(1) A deep-learning approach based on Generative Adversarial Networks (GANs) trains a neural-network model called a "discriminator" to identify differences between real and generated versions. Big-data models can analyze large volumes of video and audio far faster than humans to surface anomalies. Machine learning models can also learn the characteristic artifacts of "deepfake" generation algorithms and thereby flag "deepfake" content, and they can be retrained and tuned to keep pace with the technology's rapid evolution.
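Real discriminators are trained neural networks, but one artifact they exploit, temporal flicker in cheaply swapped faces, can be illustrated with a toy score: the mean frame-to-frame change across a sequence of brightness vectors. The data and the idea of thresholding this score are illustrative only.

```python
def flicker_score(frames):
    """Mean absolute frame-to-frame change; a toy stand-in for a learned
    discriminator. Cheap face swaps often introduce temporal jitter that
    inflates this score relative to genuine footage.

    `frames` is a list of equal-length brightness vectors (one per frame).
    """
    total, count = 0.0, 0
    for prev, cur in zip(frames, frames[1:]):
        total += sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        count += 1
    return total / count

steady  = [[10, 10, 10], [10, 11, 10], [10, 10, 11]]  # stable, camera-like
jittery = [[10, 10, 10], [40, 5, 30], [8, 45, 2]]     # flickering composite
print(flicker_score(steady) < flicker_score(jittery))  # True
```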
(2) AI forensics tools play a critical role in investigating and attributing "deepfake" content. These tools analyze digital footprints, metadata, and other traces left behind during the creation process to help identify the source of the attack and assist in legal investigations.
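One concrete trace such forensics tools look for is encoder and editor strings left in a file's metadata. The sketch below scans raw bytes for a few well-known markers; the marker list is a small illustrative sample, and real tools use far richer signature databases.

```python
# Illustrative marker strings; real forensic tools use far richer signatures.
TOOL_MARKERS = {
    b"Lavf": "FFmpeg (re-encoded)",
    b"Adobe": "Adobe tooling",
    b"GIMP": "GIMP editor",
}

def scan_for_tool_traces(data: bytes) -> list:
    """Report editing-tool strings embedded in a media file's bytes.

    Encoders and editors often leave ASCII markers in headers and metadata;
    their presence suggests the file was processed after capture.
    """
    return [label for marker, label in TOOL_MARKERS.items() if marker in data]

sample = b"\x00\x00\x00\x18ftypmp42 ... Lavf59.27.100 ..."
print(scan_for_tool_traces(sample))  # the Lavf marker betrays re-encoding
```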
4. Social Prevention and Public Education
(1) Reduce or eliminate sharing of sensitive information such as account details, family members, travel plans, and job positions on social media, to deny fraudsters the raw material they need to "deep-forge" images and voices and impersonate identities.
(2) Continuously educating the public about "deepfake" technology and its associated risks is critical. Encouraging the public to be vigilant and quickly report suspicious situations can also significantly improve organizations' ability to detect and respond to "deepfake" threats.
Technology is constantly evolving, and new frauds are constantly emerging. Stay up-to-date on the latest developments in AI and "deepfake" technologies whenever possible to adjust your security measures accordingly. Continuous research, development, and updating of AI models are critical to staying ahead of the increasingly sophisticated "deepfake" technology.