Recently, the National Financial Regulatory Administration issued a risk warning about new types of telecom network fraud, urging the public to stay alert to evolving scams, strengthen risk awareness, and build the ability to recognize fraud in order to safeguard their personal funds. The Administration identified several new types of telecom network fraud, including “screen sharing” scams, “DEEPFAKE” scams, fraudulent online investment schemes, and fake transactions involving online gaming products.
In “DEEPFAKE” fraud, criminals use pretexts such as “online shop customer service,” “marketing promotion,” “part-time job recruitment,” or “dating and relationships” to contact consumers through WeChat, phone calls, and other channels and collect voice recordings, speech samples, or facial information. They then use “face swapping” and “voice imitation” technologies to synthesize fake audio, video, or images that imitate a person's voice or appearance, and use them to win the trust of that person's relatives and friends. Under the guise of borrowing money, investment opportunities, or emergency assistance, they induce those relatives and friends to transfer money or hand over sensitive information such as bank account passwords, and then quickly move the funds away. Criminals may also use deepfake technology to fabricate audio or video of celebrities, experts, or law enforcement personnel and spread false information to facilitate fraud.
The police have previously disclosed several cases of telecom network fraud involving DEEPFAKE technology. Recently, in Ordos, Inner Mongolia, a resident named Ms. Li fell victim to a new type of telecom network scam using DEEPFAKE technology. The scammer impersonated Ms. Li's old classmate “Jia,” established contact with her through WeChat and QQ, and used AI technology to fake a video call, thereby gaining her trust. The scammer then asked Ms. Li to transfer money, citing a need for financial help, and even sent a fake bank transfer receipt. Without verifying the recipient, Ms. Li transferred 400,000 yuan to the scammer's account. In another case, Mr. Li from Hebei became a victim of DEEPFAKE telecom fraud: scammers used a dating app to lure him into a nude video chat and into downloading an illicit chat app, then used AI technology to fabricate a pornographic video of him and extorted 120,000 yuan.
Fraud Using DEEPFAKE Technology
Dingxiang Defense Cloud Business Intelligence Center published a special report titled “DEEPFAKE Threat Research and Security Strategies,” which systematically describes the industrial chain behind new types of telecom network fraud based on DEEPFAKE technology. These frauds operate through a complete industrial chain: the upstream illegally obtains citizens' private information, the midstream customizes scripts and carries out the fraud, and the downstream launders the proceeds and cashes them out.
Stage One: Information Gathering. The first step of the scam is collecting personal information about the target. Scammers obtain sensitive information such as photos of the victim’s face and phone number through black market transactions, illegal database breaches, or organized cyber-attacks. This information forms the foundation for the scammers’ next move.
Stage Two: Fake Video Synthesis. After obtaining this personal information, the scammers use DEEPFAKE software to capture the victim's facial features and combine them with AI synthesis techniques to create fake video content.
Stage Three: Identity Disguise and Communication. The criminals add the victim as a contact using the information gathered earlier. Posing as the impersonated person, they mimic that person's tone and style and communicate with the victim through text, voice, and video, gradually lowering the victim's guard and winning their trust.
Stage Four: Inducing Money Transfers, Loans, and Extortion. Once the victim fully believes the scammer's assumed identity, the criminal asks for money transfers, loan payments, or purchases of goods. They may fabricate urgent situations or business needs to induce the victim to transfer funds to a designated bank account. Under the combined pressure of urgency and trust, the victim often follows the scammer's instructions without further verification.
Technical Measures to Prevent New Types of Telecom Network Fraud
To prevent “DEEPFAKE” telecom network fraud, individuals should verify suspicious online requests through offline channels, extend conversations to allow more time for scrutiny, and use probing tactics such as asking the other party to perform specific actions that may expose inconsistencies. Companies, in turn, are advised to adopt multiple technical measures and methods. More fundamentally, promoting the positive application of AI technology and cracking down on criminal activities are the long-term solutions.
- Identifying DEEPFAKE Fraud Videos. During a video chat, you can ask the other party to press their nose or face and watch how the facial features change: with a real person, pressing the nose deforms it. You can also ask them to eat or drink and watch for facial changes, or ask them to perform unusual actions or expressions, such as waving or making a difficult gesture. Waving disrupts the synthesized facial data and often causes shaking, flickering, or other visible anomalies that help distinguish a real person from a fake.
- Comparing and Identifying Device Information, Geographic Location, and Behavioral Operations. Dingxiang Device Fingerprinting distinguishes legitimate users from potentially fraudulent activity by recording and comparing device fingerprints. It assigns a unique identifier to each device and helps detect maliciously controlled devices such as virtual machines, proxy servers, and emulators. It also analyzes whether a device shows unusual behaviors, such as logging into multiple accounts, frequently changing IP addresses, or altering device properties, which may indicate fraud (a minimal code sketch of this kind of check follows after this list).
- Strengthening Account Identification. Activities such as remote logins, device changes, phone number changes, and sudden activity on dormant accounts call for more frequent verification, and continuous identity verification during a session helps keep the user's identity consistent throughout. Dingxiang atbCAPTCHA can quickly and accurately distinguish human operators from machines, precisely identify fraudulent activity, and monitor and intercept abnormal behavior in real time (see the step-up verification sketch after this list).
- Preventing DEEPFAKE Fake Videos and Images. Dingxiang's full-chain panoramic facial security threat perception solution performs intelligent verification using multidimensional information, including the device environment, facial information, image forgery detection, user behavior, and interaction status. It can quickly identify and block more than 30 types of malicious attacks, such as injection attacks, liveness forgeries, image forgeries, camera hijacking, debugging risks, memory tampering, Root/jailbreak, malicious ROMs, and emulators, and it automatically blocks the operation when it detects forged videos, fake facial images, or abnormal interactions. The solution also allows the strength and user-friendliness of video verification to be configured flexibly, applying atbCAPTCHA to normal users while strengthening verification for abnormal ones (a simplified decision sketch follows after this list).
- Uncovering Potential Fraud Threats. The Dingxiang Dinsight real-time risk control engine helps companies with risk assessment, anti-fraud analysis, and real-time monitoring, improving the efficiency and accuracy of risk control. Dinsight processes everyday risk control strategies in under 100 milliseconds on average and supports configurable access to, and accumulation of, multi-source data. It draws on mature indicators, strategies, model experience, and deep learning technology to monitor its own performance and iterate on risk control. Paired with the Xintell intelligent model platform, Dinsight can automatically optimize security strategies for known risks, mine potential risks from risk control logs and data, and configure risk control strategies for different scenarios with one click. The platform standardizes complex processes such as data processing, data mining, and machine learning, providing an end-to-end modeling service from data processing and feature derivation to model construction and deployment (a toy rule-engine sketch illustrating configurable strategies appears after this list).
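To make the device fingerprinting item above more concrete, here is a minimal Python sketch of how a device identifier and a few behavioral risk flags might be derived. All field names, thresholds, and the hashing scheme are assumptions for illustration only; they are not Dingxiang Device Fingerprinting's actual API or logic.

```python
# Hypothetical sketch of device-fingerprint-based anomaly checks.
# Field names and thresholds are illustrative, not a vendor's actual API.
from dataclasses import dataclass, field
import hashlib

@dataclass
class DeviceRecord:
    user_agent: str
    screen: str
    timezone: str
    is_emulator: bool = False
    is_proxy: bool = False
    accounts_seen: set = field(default_factory=set)
    ips_seen: set = field(default_factory=set)

def fingerprint(record: DeviceRecord) -> str:
    """Derive a stable identifier from relatively static device attributes."""
    raw = "|".join([record.user_agent, record.screen, record.timezone])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

def device_risk_flags(record: DeviceRecord) -> list[str]:
    """Flag behaviors the article associates with fraud-controlled devices."""
    flags = []
    if record.is_emulator or record.is_proxy:
        flags.append("virtualized_or_proxied_device")
    if len(record.accounts_seen) > 3:      # many accounts on one device
        flags.append("multiple_accounts_on_device")
    if len(record.ips_seen) > 5:           # frequent IP changes
        flags.append("frequent_ip_changes")
    return flags

if __name__ == "__main__":
    dev = DeviceRecord("Mozilla/5.0", "1080x2400", "UTC+8",
                       is_emulator=True,
                       accounts_seen={"a1", "a2", "a3", "a4"},
                       ips_seen={"1.1.1.1", "2.2.2.2"})
    print(fingerprint(dev), device_risk_flags(dev))
```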
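The “Strengthening Account Identification” item describes escalating verification when risky account events occur. Below is a hedged sketch of such a step-up policy; the event names, weights, and verification levels are hypothetical and are not tied to the atbCAPTCHA product.

```python
# Illustrative step-up verification policy; the signals and weights are
# assumptions, not an actual product integration.
from dataclasses import dataclass

@dataclass
class LoginContext:
    new_device: bool
    new_geolocation: bool
    phone_number_changed: bool
    days_since_last_activity: int

def required_verification(ctx: LoginContext) -> str:
    """Map account risk signals from the article to an escalating check."""
    score = 0
    score += 2 if ctx.new_device else 0
    score += 2 if ctx.new_geolocation else 0
    score += 3 if ctx.phone_number_changed else 0
    score += 2 if ctx.days_since_last_activity > 180 else 0  # dormant account waking up
    if score >= 5:
        return "face_or_manual_review"   # strongest check
    if score >= 2:
        return "captcha_plus_sms"        # step-up challenge
    return "none"                        # normal session

if __name__ == "__main__":
    print(required_verification(LoginContext(True, True, False, 200)))
```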
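For the facial security item, one way to picture “multidimensional verification” is as a decision function over detection signals: hard indicators of forgery block the operation outright, while softer anomalies trigger stronger verification. The signal names and thresholds below are illustrative assumptions, not the actual behavior of Dingxiang's solution.

```python
# Minimal sketch: combine multidimensional facial-verification signals into
# a pass / step-up / block decision. Signal names are hypothetical.
def face_verification_decision(signals: dict) -> str:
    """Block on hard forgery indicators; step up on softer anomalies."""
    hard_block = ("injection_attack", "camera_hijack", "liveness_forgery",
                  "image_forgery", "memory_tampering", "emulator")
    if any(signals.get(name) for name in hard_block):
        return "block"

    soft = sum(bool(signals.get(name)) for name in
               ("rooted_device", "debugger_attached", "abnormal_interaction"))
    return "step_up_verification" if soft >= 1 else "pass"

if __name__ == "__main__":
    print(face_verification_decision({"liveness_forgery": True}))  # -> block
    print(face_verification_decision({"rooted_device": True}))     # -> step_up_verification
    print(face_verification_decision({}))                          # -> pass
```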
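Finally, the “Uncovering Potential Fraud Threats” item describes a real-time engine that evaluates configurable strategies against each event. The toy rule engine below sketches that idea under stated assumptions; the rule names, event fields, and thresholds are invented for illustration and do not reflect Dinsight's actual configuration format.

```python
# Toy real-time rule engine illustrating configurable per-event risk
# strategies; rules and fields are illustrative only.
from typing import Callable

Rule = tuple[str, Callable[[dict], bool], int]   # (name, predicate, weight)

RULES: list[Rule] = [
    ("large_transfer",      lambda e: e.get("amount", 0) > 100_000,     3),
    ("new_payee",           lambda e: e.get("payee_age_days", 999) < 1, 2),
    ("night_time_transfer", lambda e: e.get("hour", 12) < 6,            1),
    ("device_flagged",      lambda e: e.get("device_risk", False),      3),
]

def evaluate(event: dict, block_threshold: int = 5) -> dict:
    """Score an event against all rules and return the hits plus a decision."""
    hits = [name for name, predicate, _ in RULES if predicate(event)]
    score = sum(weight for name, _, weight in RULES if name in hits)
    decision = "block" if score >= block_threshold else ("review" if hits else "allow")
    return {"hits": hits, "score": score, "decision": decision}

if __name__ == "__main__":
    # A large transfer to a brand-new payee trips two rules and is blocked.
    print(evaluate({"amount": 400_000, "payee_age_days": 0, "hour": 23}))
```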