Fraudster Uses AI to Impersonate a Company Leader and Scams Mr. Li Out of 950,000 Yuan

Mr. Li received a text message claiming to be from his workplace leader, asking him to add a social media account for further communication. After becoming "friends," the fraudster requested an urgent transfer of funds, claiming it was needed for financial turnover, with a promise of repayment later. Initially doubtful, Mr. Li was convinced when the scammer initiated a video call. The person in the video appeared to be his "leader," which made Mr. Li lower his guard. He then made three separate transfers totaling 950,000 yuan.


Later that day, while chatting with friends, Mr. Li realized the situation was suspicious and reported it to the police. After receiving his report, the police quickly coordinated with the bank to freeze the involved funds through emergency measures, successfully recovering 830,000 yuan of the transferred money and preventing greater losses.

In Another Case, an Elderly Woman Is Scammed by AI Voice Cloning

Ms. Li received a call from someone claiming to be her "younger brother." The caller said he had changed his phone number and asked her to add him on WeChat, promising to visit her soon. A few days later, the "brother" called again, claiming he had been detained after a fight and needed money to compensate the other party, and asked Ms. Li for financial assistance. Believing the story, she prepared 70,000 yuan in cash. When the scammer then requested an additional 50,000 yuan, Ms. Li grew suspicious and decided to report the incident to the police.

After investigation, the police confirmed this was a case of fraud using AI voice cloning technology. The scammers had used realistic voice synthesis to mimic Ms. Li’s brother’s voice, tone, and speaking habits, completely earning her trust. The police initiated collaborative operations and successfully tracked down the suspect, Mr. Wang, in another province. Mr. Wang was apprehended and has been detained on fraud charges while the case undergoes further investigation.

Why the Victims Fell for the Scams

In both cases, the victims failed to recognize the signs of manipulated audio and video. Mr. Li trusted the video call, unaware that the video could have been digitally altered. Similarly, Ms. Li was convinced by the realistic voice mimicry, which replicated not only her brother’s voice but also his speech patterns, eliminating her doubts. This trust led her to provide significant financial support without hesitation.

Fraudsters Use AI to Identify Target Victims

The "Financial AIGC Audio and Video Anti-Fraud White Paper" published by the Bank of Communications in collaboration with Dingxiang highlights the processes and characteristics of AI-based scams, such as "face-swapping" and "voice cloning." 2024121315.jpg AI face-swapping technology is now widely used in scams. Fraudsters use this technology to superimpose a target’s face onto their own, creating synthetic videos or photos that make scams appear highly credible. This technology can "pass as real," trapping victims without their knowledge.

Additionally, AI is used to screen potential victims. Scammers no longer rely on blind attempts but analyze personal information shared on online platforms. Using AI, they precisely select targets and design personalized scam strategies. This approach allows them to execute scams more efficiently and covertly, significantly increasing the success rate of their fraudulent activities.

How to Effectively Prevent AI-Based Scams?

The "Financial AIGC Audio and Video Anti-Fraud White Paper" calls on industries to establish a comprehensive defense system covering the entire cycle, all scenarios, and the full fraud chain to combat AI-enabled fraud. For individuals, the following measures can effectively help prevent scams: 2024121309.jpg

1. Hang Up and Call Back to Verify

When receiving suspicious calls or video requests, stay calm and avoid responding immediately. Use the excuse of "poor signal" to hang up and then call back using a verified, safe contact method. Confirm whether the request truly came from your friend or family member, and avoid responding directly to potentially fraudulent messages.

2. Set Up Family Verification Mechanisms

Predefine a "safety word" or a "challenge question" that only you and your family members know. If you receive a suspicious call, ask the caller to provide this keyword. Failure to respond correctly or attempts to evade the question should raise red flags. In such cases, hang up immediately and verify the person's identity through secure channels.

3. Consult Friends or Family

In cases involving urgent financial requests, consult other friends or family members to verify the situation. Ask if they are aware of any similar calls or messages. This prevents you from making decisions based on stress or misinformation. Reaching out to trusted contacts can help you assess the situation more accurately.

4. Report to Police

Dial your local emergency number, such as 110 in China, to report the fraud to authorities in detail.

These measures can help individuals resist AI-based scam threats. They also serve as a reminder to stay calm and vigilant when encountering sudden financial requests, avoiding falling into the traps set by fraudsters.

Platforms Must Strengthen Anti-Fraud Measures

To address AI-based scams, society must work together by combining technical solutions, public education, and emotional support. This joint effort will help prevent more people from becoming victims. Short-video platforms, in particular, need to adopt multiple technologies and measures to detect fraudsters and identify fake audio and video content from the source.

1. Identifying Abnormal Devices

Dingxiang Device Fingerprinting enables the identification of fraudulent activities by tracking and comparing device fingerprints. This technology assigns unique IDs to devices and detects malicious activities such as virtual machines, proxy servers, simulators, and abnormal behaviors like multi-account logins, frequent IP address changes, and device attribute modifications. These measures help track and identify fraudsters effectively.
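As a rough sketch of the general technique (not Dingxiang's actual API; the attribute names and thresholds below are illustrative assumptions), a device fingerprint can be derived by hashing stable device attributes into a unique ID, with simple rules flagging the anomalies mentioned above:

```python
import hashlib

def device_fingerprint(attrs: dict) -> str:
    """Derive a stable device ID by hashing canonically ordered attributes."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def risk_flags(attrs: dict, seen_ips: set) -> list:
    """Flag signals mentioned above: emulators, proxies, frequent IP changes."""
    flags = []
    if attrs.get("is_emulator"):
        flags.append("emulator")
    if attrs.get("via_proxy"):
        flags.append("proxy")
    if len(seen_ips) > 5:  # one device observed on many IP addresses
        flags.append("ip_churn")
    return flags
```

Because the attributes are sorted before hashing, the same device always yields the same ID regardless of the order in which attributes are collected, which is what makes cross-session tracking possible.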

2. Detecting Abnormal Account Operations

Abnormal activities such as logging in from different locations, device changes, new phone numbers, or sudden activity from dormant accounts require increased verification. Continuous identity verification during sessions is crucial. Dingxiang atbCAPTCHA accurately distinguishes between humans and bots, detecting fraudulent behavior in real time and blocking abnormal activities.
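A minimal illustration of such step-up verification rules, assuming hypothetical field names rather than any real platform's schema:

```python
from datetime import datetime, timedelta

def needs_step_up(login: dict, profile: dict) -> bool:
    """Return True when a login should trigger extra identity verification."""
    if login["device_id"] not in profile["known_devices"]:
        return True                      # unrecognized device
    if login["geo"] != profile["usual_geo"]:
        return True                      # login from a new location
    gap = (datetime.fromisoformat(login["time"])
           - datetime.fromisoformat(profile["last_active"]))
    return gap > timedelta(days=90)      # dormant account suddenly active
```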

3. Preventing Deepfake Videos

Dingxiang Full-Link Facial Security Threat Detection uses multi-dimensional information, including device environment, facial data, image verification, user behavior, and interaction status, to conduct intelligent authentication. It quickly identifies over 30 types of malicious activities, such as injection attacks, live spoofing, image forgery, camera hijacking, debugging risks, memory tampering, rooting/jailbreaking, malicious ROMs, and emulators. Upon detecting fake videos, false facial images, or abnormal interactions, it automatically blocks operations. The system also provides flexible configurations for video verification strength, enabling user-friendly verification for legitimate users while applying enhanced validation for suspicious ones.
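Conceptually, multi-dimensional authentication of this kind combines per-signal scores into a single decision. The weights and thresholds below are invented for illustration, not the product's actual logic:

```python
# Per-signal confidence scores in [0, 1]; higher means the signal looks genuine.
WEIGHTS = {"device_env": 0.3, "liveness": 0.4, "behavior": 0.3}

def verify_decision(scores: dict, block_at: float = 0.6,
                    step_up_at: float = 0.3) -> str:
    """Aggregate weighted risk and map it to pass / enhanced check / block."""
    risk = sum(w * (1.0 - scores[k]) for k, w in WEIGHTS.items())
    if risk >= block_at:
        return "block"
    return "enhanced_check" if risk >= step_up_at else "pass"
```

The tiered return values mirror the flexible verification strengths described above: legitimate users with high scores pass frictionlessly, while borderline cases get enhanced validation instead of an outright block.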

4. Uncovering Potential Fraud Threats

Dingxiang Dinsight helps enterprises conduct risk assessments, anti-fraud analysis, and real-time monitoring, improving risk control efficiency and accuracy. Dinsight processes routine risk-control strategies in 100 milliseconds on average and supports multi-source data integration and storage, leveraging metrics, strategies, and deep learning-based models. Combined with the Xintell Intelligent Modeling Platform, it automatically optimizes security strategies, mines potential risks from logs and data, and customizes fraud prevention strategies for various scenarios. By standardizing processes from data handling to model deployment, this end-to-end platform enhances the effectiveness of risk control.
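The strategy-evaluation step of such a risk engine can be sketched as a rule list scored against each event; the rule names, predicates, and point values here are invented for illustration only:

```python
# Each rule: (name, predicate over the event, risk points if it fires).
RULES = [
    ("large_transfer", lambda e: e["amount"] > 50000, 40),
    ("new_payee",      lambda e: e.get("new_payee", False), 30),
    ("night_hours",    lambda e: e["hour"] < 6, 20),
]

def score_event(event: dict):
    """Evaluate every rule and return (total risk points, names of fired rules)."""
    hits = [(name, pts) for name, pred, pts in RULES if pred(event)]
    return sum(p for _, p in hits), [n for n, _ in hits]
```

In a production engine the rule set would be data-driven and continuously retuned (the white paper's "strategy optimization"), but the evaluate-and-score loop stays this simple at its core.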

2024-12-23
Copyright © 2024 AISECURIUS, Inc. All rights reserved