As AI technology continues to develop, it has also become a new tool for online fraud. Criminals use deepfake technology for "AI voice cloning" and "Deepfake" face swapping, combining other people's faces and voices to create fake images and videos, fabricating identities, and impersonating others to induce victims to transfer money. This type of fraud takes many forms and is hard to detect, so consumers are easily drawn into scams. Without a doubt, AI has changed the nature of identity fraud in today's digital environment.
Four Types of People that Fraudsters Like to Impersonate
1. Impersonating acquaintances
Ms. Li, a resident of Ordos City, Inner Mongolia, recently encountered a new type of telecom and online fraud built on AI face-swapping technology. The fraudster impersonated Ms. Li's old classmate "Jia Mou" and contacted her through WeChat and QQ, using AI-generated video calls to win her trust. Ms. Li received a WeChat friend request from an account whose nickname and avatar matched her old classmate "Jia Mou". After a brief identity confirmation via QQ video call, the fraudster asked Ms. Li for help with a cash-flow problem and sent her a screenshot of a fake bank transfer record. Without verifying the recipient's information, Ms. Li transferred 400,000 yuan to the other party's account. Only when the other party kept asking for more money did Ms. Li realize she was likely being scammed; she immediately contacted her real classmate and called the police. The Ordos police quickly activated the emergency payment-stop mechanism and successfully intercepted and returned the 400,000 yuan.
2. Impersonating "beauties"
Mr. Li from Hebei was browsing a dating app when he was drawn in by a so-called "beauty", agreed to chat, and downloaded an illicit chat app she sent him. The "beauty", however, was a persona fabricated by the scammer. During the chat, the scammer captured screenshots of Mr. Li's face and used "Deepfake" technology to graft it onto a pornographic video. The illicit chat app also gave the scammer the phone numbers of all the contacts in Mr. Li's phone, which the scammer used to blackmail and extort him. To get the pornographic video deleted, Mr. Li had no choice but to agree to transfer 120,000 yuan to the other party.
3. Impersonating colleagues
In January of this year, a "Deepfake" video-conference fraud in Hong Kong cost a company 200 million Hong Kong dollars. What distinguished the case was its use of AI deepfake technology: the fraudsters first collected facial and voice data of the target company's senior executives, then used AI to "face-swap" that data onto themselves, staging a video conference that looked genuine. The employees who joined the call were duped into transferring 200 million Hong Kong dollars to the fraudsters' account.
4. Impersonating family members
In December 2023, an overseas student was reported "kidnapped" abroad; the "kidnappers" demanded a ransom of 5 million yuan from the parents and sent them a video of the "hostage" being restrained and harmed. Through on-site investigation and international police cooperation, the student, Xiaoxia, was found safe five hours later, not with kidnappers but at the immigration port of entry in the country where the school is located. The truth then emerged: the scammer had used deepfake technology to fabricate Xiaoxia's voice and the kidnapping video, then sent them to Xiaoxia's family to extort money.
How victims' information is leaked
Before carrying out new types of telecom and online fraud such as "AI voice cloning" and "Deepfake" face swapping, fraudsters first collect information about their targets: portraits, contact details, home addresses, work information, and lifestyle details, as well as photos, videos, and voice recordings. The channels through which this information is obtained are diverse and include:
1. Social media: Fraudsters can gather victims' photos and personal information through social media platforms and other publicly available online resources. Many individuals share large amounts of personal photos and videos on social media, which malicious actors can exploit. Additionally, some forums and online platforms publicly display users' photos and videos, giving fraudsters another way to gather relevant information.
2. Network data breaches: Large-scale data breaches can expose personal information, including photos, videos, and voice recordings. Fraudsters can access this leaked data through dark-web platforms and use it to execute fraudulent activities.
3. Phishing and malware: Fraudsters use phishing emails or malicious software to obtain victims' personal information, including photos and videos. Once they gain access to a victim's device, they can harvest even more personal data. For example, on February 15, 2024, the overseas security firm Group-IB announced the discovery of malware named "GoldPickaxe". The iOS version of this malware tricks users into performing facial recognition and submitting identity documents, then uses the captured facial data for "Deepfake" fraud.
4. Public events and on-site activities: In public places or at specific events such as conferences, exhibitions, and social gatherings, fraudsters may collect victims' photos and videos.
Among these channels, harvesting photos, personal details, and videos from publicly accessible social media is the most common. Social media has become an indispensable part of daily life, and every status update and every shared photo can contain sensitive information, effectively turning these platforms into a warehouse of raw material for fraud. From the collected material, fraudsters extract facial features to synthesize fake face videos, and they extract voice samples from shared video clips; with as little as 30 seconds to 1 minute of audio, they can produce highly realistic voice clones. It is therefore crucial to avoid oversharing sensitive information on social media.
Using technology to prevent new telecommunications network fraud
Dingxiang Defense Cloud Business Security Intelligence Center has released a special report, "Deepfake Threat Research and Security Strategies," which systematically details the industrial chain behind "Deepfake"-based telecommunications network fraud.
To guard against "Deepfake" fraud, verify offline as soon as anything online looks suspicious, extend the conversation, and use probing measures, such as asking the other party to perform specific actions, to expose flaws in the forgery. Enterprises are also advised to combine multiple technologies and methods, as outlined below. More fundamentally, promoting the positive application of AI technology while rigorously cracking down on its criminal misuse is the long-term solution.
1. Identification of "Deepfake" fraudulent videos: During a video chat, ask the other party to press on their nose or face and watch how the face changes; a real nose deforms when pressed. Asking the person to eat or drink while you observe their face is also effective, as is requesting unusual gestures or expressions, such as waving or difficult hand movements. Waving disturbs the forged facial data, producing slight tremors, flicker, or other visible anomalies.
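The flicker cue above can even be checked programmatically. Below is a minimal illustrative sketch, not the report's tooling, that measures inter-frame instability in the detected face region of a recorded call; the file name call.mp4 and the spike threshold are assumptions.

```python
# Minimal sketch: flag abnormal flicker in the face region during a
# challenge action (e.g., waving). Requires opencv-python and numpy.
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_region_diffs(video_path: str) -> list[float]:
    """Mean absolute inter-frame difference inside the detected face box."""
    cap = cv2.VideoCapture(video_path)
    prev, diffs = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            prev = None  # lost the face; restart the comparison
            continue
        x, y, w, h = faces[0]
        roi = cv2.resize(gray[y:y + h, x:x + w], (128, 128))
        if prev is not None:
            diffs.append(float(np.mean(cv2.absdiff(roi, prev))))
        prev = roi
    cap.release()
    return diffs

diffs = face_region_diffs("call.mp4")  # hypothetical recording of the call
# Sudden spikes relative to the median suggest tearing/flicker around the face.
if diffs and max(diffs) > 4 * float(np.median(diffs)):
    print("Warning: abnormal facial flicker; verify the person's identity offline.")
```

A real detector would use far richer signals, but the principle is the same: challenge actions destabilize a synthesized face in ways a genuine one is not.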
2. Comparison of device information, geographical location, and behavioral operations: Dingxiang device fingerprinting distinguishes legitimate users from potentially fraudulent ones by recording and comparing device fingerprints. The technology uniquely identifies each device and detects manipulated environments such as virtual machines, proxy servers, and emulators. It also analyzes behaviors that deviate from a user's habits, such as one device logging into multiple accounts, frequent IP address changes, and changes in device attributes, helping to track and identify fraudulent activity.
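To make the idea concrete, here is a toy sketch of the general fingerprinting concept, not Dingxiang's implementation: stable device attributes are hashed into an identifier, and sessions whose attributes drift from the stored profile are flagged for step-up verification. All attribute names and values are invented.

```python
# Toy sketch of the device-fingerprint concept (not Dingxiang's product):
# hash stable attributes into an ID and flag drift from the stored profile.
import hashlib

def fingerprint(attrs: dict) -> str:
    """Derive a stable identifier from sorted device attributes."""
    canon = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canon.encode()).hexdigest()[:16]

stored  = {"os": "iOS 17", "model": "iPhone 14", "tz": "UTC+8", "ip_city": "Ordos"}
current = {"os": "iOS 17", "model": "iPhone 14", "tz": "UTC-5", "ip_city": "Lagos"}

if fingerprint(current) != fingerprint(stored):
    changed = sorted(k for k in stored if stored[k] != current.get(k))
    print(f"Device profile drifted ({changed}); require step-up verification.")
```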
3. Recognition of abnormal account activity: Activities such as remote logins, device changes, phone number changes, or a dormant account suddenly becoming active should trigger additional verification, and identity should be verified continuously during a session to ensure it stays consistent. Dingxiang atbCAPTCHA accurately distinguishes human operators from machines, precisely identifies fraudulent behavior, and monitors and intercepts abnormal activity in real time.
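The step-up triggers listed above map naturally onto simple rules. The sketch below is a hypothetical illustration; the event fields and the 180-day dormancy threshold are assumptions, not a real API.

```python
# Hypothetical rule set for the step-up triggers described above.
from datetime import datetime, timedelta

def needs_step_up(event: dict) -> bool:
    """Return True if any listed trigger fires for this login event."""
    return any([
        event["geo"] != event["usual_geo"],                # remote login
        event["device_id"] != event["usual_device_id"],    # device change
        event["phone_changed"],                            # phone number change
        datetime.now() - event["last_active"] > timedelta(days=180),  # dormant
    ])

login = {
    "geo": "HK", "usual_geo": "Beijing",
    "device_id": "d-9f2", "usual_device_id": "d-1a7",
    "phone_changed": False,
    "last_active": datetime.now() - timedelta(days=400),
}
print(needs_step_up(login))  # True: trigger extra identity verification
```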
4. Prevention of fraudulent "Deepfake" videos and images: Dingxiang's panoramic facial security threat perception solution applies multi-dimensional verification across device environment, facial information, image authenticity, user behavior, and interaction state. It quickly identifies more than 30 types of malicious activity, including injection attacks, liveness forgeries, image forgeries, camera hijacking, debugging risks, memory tampering, root/jailbreak status, malicious ROMs, and emulators. When it detects a forged video, a fake facial image, or abnormal interaction behavior, it automatically blocks the operation. Video-verification strength and user friendliness are flexibly configurable: abnormal users face dynamically escalated verification, while normal users pass through atbCAPTCHA.
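Conceptually, such a multi-dimensional check combines per-dimension risk signals into a single decision. The following sketch illustrates that decision flow only; the signal names and thresholds are hypothetical and are not the product's API.

```python
# Sketch of the block / step-up / pass decision a multi-dimensional
# facial check implies; all names and thresholds are invented.
signals = {
    "device_env_risk": 0.1,     # e.g., emulator / root / hook detection
    "face_liveness_risk": 0.8,  # e.g., injection or live-body forgery score
    "image_forgery_risk": 0.7,  # e.g., tampered or replayed frames
    "behavior_risk": 0.3,       # e.g., abnormal interaction patterns
}

def decide(signals: dict[str, float]) -> str:
    """Block on strong evidence, step up on doubt, pass otherwise."""
    worst = max(signals.values())
    if worst >= 0.7:
        return "block"      # forged video / fake face: stop the operation
    if worst >= 0.4:
        return "step_up"    # abnormal user: enhanced verification
    return "pass"           # normal user: frictionless CAPTCHA

print(decide(signals))  # "block"
```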
5. Unearthing potential fraud threats: Dingxiang Dinsight helps enterprises with risk assessment, anti-fraud analysis, and real-time monitoring, improving the efficiency and accuracy of risk control. Dinsight processes everyday risk-control strategies in under 100 milliseconds on average and supports configurable access to, and accumulation of, multi-source data. Drawing on mature indicators, strategies, models, and deep learning technology, it maintains self-monitoring and self-iterating mechanisms for risk-control performance. Paired with the Xintell intelligent model platform, it automatically optimizes security strategies for known risks and, based on risk-control logs and mining of potential risks, configures support for different scenarios. Built on association networks and deep learning, it standardizes complex data processing, mining, and machine-learning workflows, providing end-to-end modeling services from data processing and feature derivation through model construction and final deployment.
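As a rough illustration of what a latency-bounded scoring step like the sub-100-millisecond decisions described above might look like, the sketch below evaluates a tiny weighted rule set and times the decision; the features and weights are invented and bear no relation to Dinsight's actual strategies.

```python
# Rough illustration of a latency-bounded risk-scoring step.
import time

WEIGHTS = {"new_device": 0.4, "remote_ip": 0.3, "large_transfer": 0.3}

def score(features: dict) -> float:
    """Sum the weights of the risk features that are present."""
    return sum(w for k, w in WEIGHTS.items() if features.get(k))

start = time.perf_counter()
risk = score({"new_device": True, "remote_ip": True, "large_transfer": True})
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"risk={risk:.2f}, decided in {elapsed_ms:.3f} ms")  # well under 100 ms
```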