Criminals Use AI Technology to Impersonate "College Best Friend," Woman Nearly Scammed Out of 5 Gold Bars

Recently, employees at a courier company on Youth Road in Kunming City reported a suspicious situation: a woman, Ms. Wang, was attempting to mail gold bars worth over 300,000 yuan. Police quickly responded to the scene and stopped a telecommunications fraud in progress.

Ms. Wang explained that the "best friend" in question was her former college roommate, who had helped her financially during tough times, and they had stayed in touch ever since. The friend later moved abroad, and despite the distance, they continued to communicate regularly through QQ. On the day of the incident, Ms. Wang received a video call request from her friend's QQ account. In the video, her friend asked Ms. Wang to buy 300,000 yuan worth of gold bars and send them to Guangzhou as a wedding gift. Trusting her long-time friend, Ms. Wang agreed to the request and even paid for the gold bars out of her own pocket.

Following the police's advice, Ms. Wang immediately asked the "friend" to verify her identity through an international phone call or other means. Shortly after sending this message, Ms. Wang found that her QQ account had been deleted by the other party. It was then that Ms. Wang realized she had narrowly avoided falling into a telecommunications fraud trap that involved AI technology.

The Fraudster's Tactics

This is a classic case where fraudsters used AI technology to swap faces and clone voices to impersonate a friend and commit fraud.

In such scams, criminals collect target individuals' speech, facial data, or video, using AI technology to create fake audio, video, or images. They simulate others' voices or appearances to gain trust and then use excuses such as borrowing money, asking for investments, or emergency assistance to trick their victims into transferring funds or providing sensitive information like bank account passwords. Additionally, criminals might use AI to artificially create audio or video of celebrities, experts, or law enforcement personnel, impersonating them to spread false messages and achieve fraudulent goals.

The Four Phases of the Scam

Phase 1: Information Gathering. The first step in the scam involves collecting the victim's personal information. Fraudsters obtain sensitive data such as facial photos, phone numbers, and voice recordings through black market transactions, illegal database leaks, or organized cyberattacks. This information serves as the foundation for their next steps.

Phase 2: Forging Face Videos and Cloning Voices. Once they have gathered enough personal information, criminals use software to capture the target's facial features and voice patterns. With AI synthesis technology, they create fake videos and audio clips.

Phase 3: Identity Impersonation. The fraudsters begin contacting the victim, mimicking the tone, speech style, and behavior of the person they are impersonating. They communicate through text, voice, or video, gradually lowering the victim's guard and gaining their trust.

Phase 4: Inducing Victim to Transfer Money or Mail Goods. Once the victim fully trusts the impersonated identity, the criminals make their move by asking for money transfers, payments for goods, or mailing packages. They often fabricate urgent situations or business needs, pushing the victim to transfer funds to a designated bank account. Under the combined pressure of urgency and trust, the victim often complies without further verification, following the fraudster's instructions.

Technical Measures to Prevent Deepfake Fraud

To prevent new types of telecommunications fraud involving deepfakes and AI voice cloning, it is essential to verify any suspicious online request through offline channels. Extending the conversation and using probing techniques, such as asking the other party to perform specific actions that expose flaws in the synthesis, can also help. The fundamental solution, however, is for companies to adopt multiple technical measures and tools while promoting the positive application of AI technology and strictly cracking down on criminal behavior.

  1. Identifying face-swapped videos and voices. During video chats, ask the other party to press on their nose or face and observe how the features change; a real nose deforms when pressed. Alternatively, ask the person to eat food or drink water and watch the facial movements. Another strategy is to ask them to perform unusual actions or expressions, such as waving or making difficult hand gestures: waving, for example, can disrupt the synthesized facial data and cause shaking, flickering, or other visible irregularities.

  2. Identifying abnormal devices. Dingxiang Device Fingerprinting helps distinguish legitimate users from potential fraudsters by tracking and comparing Device Fingerprints. This technology assigns a unique identifier to each device, identifying malicious devices such as virtual machines, proxy servers, and emulators. It also analyzes whether the device has multiple account logins, frequently changes its IP address, or exhibits abnormal behavior inconsistent with typical user habits, aiding in the detection and tracking of fraudulent activities.

  3. Identifying accounts with abnormal operations. Abnormal behaviors such as logging in from a new location, switching devices, changing phone numbers, or suddenly reactivating a dormant account should trigger additional verification. Continuous identity verification during a session is crucial to ensure user identity consistency. Dingxiang's atbCAPTCHA can quickly and accurately distinguish between human and machine operators, precisely identifying fraudulent behavior and monitoring abnormal activities in real time.

  4. Preventing face-swapped fake videos. Dingxiang's Full-Chain, Full-Perspective Facial Security Threat Detection Solution conducts intelligent verification through multi-dimensional data, including device environment, facial information, image forgery detection, user behavior, and interaction state. This enables rapid detection of injection attacks, fake live bodies, image forgery, camera hijacking, debugging risks, memory tampering, rooting/jailbreaking, malicious ROMs, emulators, and over 30 other malicious attack behaviors. Upon detecting fake videos, fake face images, or abnormal interaction behaviors, it can automatically block operations. Additionally, it allows for flexible video verification intensity and friendliness, dynamically adjusting to provide atbCAPTCHA for regular users and enhanced verification for suspicious users.

  5. Identifying potential fraud threats. Dingxiang's Dinsight Real-Time Risk Control Engine helps enterprises conduct risk assessments, anti-fraud analysis, and real-time monitoring, improving the efficiency and accuracy of risk control. Dinsight processes risk control strategies with an average latency of under 100 milliseconds and supports configurable access to and storage of multiple data sources. Based on established indicators, strategies, and model expertise, along with deep learning technology, it offers self-monitoring and iterative risk control mechanisms. Complemented by Xintell, an intelligent modeling platform, Dinsight automatically optimizes security strategies for known risks. It leverages risk control logs and data to uncover potential risks, offering one-click configuration for different scenarios and supporting tailored risk control strategies. The platform standardizes the complex processes of data processing, mining, and machine learning, providing end-to-end modeling services from data preparation to model deployment.
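The device-fingerprint comparison described in point 2 can be illustrated with a short sketch. This is a simplified illustration, not Dingxiang's actual implementation: the attribute set, thresholds, and suspicion rules are all assumptions.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Derive a stable identifier from device attributes.

    The attribute set (user agent, screen size, timezone, etc.) is
    illustrative; production systems combine many more signals.
    """
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def is_suspicious(device: dict, history: dict) -> bool:
    """Flag devices that look like emulators or show abnormal reuse."""
    fp = device_fingerprint(device["attributes"])
    record = history.setdefault(fp, {"accounts": set(), "ips": set()})
    record["accounts"].add(device["account_id"])
    record["ips"].add(device["ip"])
    return (
        device["attributes"].get("is_emulator", False)
        or len(record["accounts"]) > 3   # one device, many accounts
        or len(record["ips"]) > 5        # frequent IP changes
    )
```

Because the fingerprint is a hash of the full attribute set, the same device presents the same identifier across sessions, which is what lets the history-based rules accumulate evidence.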
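The account-anomaly checks in point 3 amount to a small set of rules over a login event and the account's history. The sketch below is an assumption-laden illustration; the field names and the 180-day dormancy threshold are invented for the example.

```python
from datetime import datetime, timedelta

# Illustrative dormancy threshold; real systems tune this per scenario.
DORMANCY_THRESHOLD = timedelta(days=180)

def login_anomalies(login: dict, profile: dict) -> list:
    """Return the anomaly signals that should trigger re-verification."""
    signals = []
    if login["location"] != profile["usual_location"]:
        signals.append("new_location")
    if login["device_id"] not in profile["known_devices"]:
        signals.append("new_device")
    if login["phone"] != profile["phone"]:
        signals.append("changed_phone")
    if login["time"] - profile["last_active"] > DORMANCY_THRESHOLD:
        signals.append("dormant_account_reactivated")
    return signals
```

Any non-empty result would prompt a step-up check (for example, an atbCAPTCHA challenge or a stronger identity verification) before the session continues.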
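Point 4 ends with dynamically adjusting verification intensity: ordinary users get a lightweight challenge, suspicious ones get enhanced checks. One common way to implement that is a weighted risk score with thresholds, sketched below; the signal names, weights, and thresholds are assumptions for illustration, not the product's actual configuration.

```python
# Weights for detected attack signals; values here are illustrative.
RISK_WEIGHTS = {
    "injection_attack": 1.0,
    "camera_hijacking": 1.0,
    "image_forgery": 0.9,
    "emulator": 0.7,
    "rooted_device": 0.5,
}

def choose_verification(signals: set, threshold: float = 0.5) -> str:
    """Map detected risk signals to a verification level."""
    score = sum(RISK_WEIGHTS.get(s, 0.0) for s in signals)
    if score >= 1.0:
        return "block"                    # clear attack: stop the operation
    if score >= threshold:
        return "enhanced_verification"    # suspicious: stronger face check
    return "captcha"                      # regular user: lightweight challenge
```

The design choice is that blocking and friction are decided by accumulated evidence rather than any single detector, which keeps false positives from immediately locking out legitimate users.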
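The real-time rule evaluation described in point 5 can be sketched as a list of named predicates applied to each event, with latency measured per decision. This is a toy model in the spirit of the description, not Dinsight's API; the rule names, event fields, and thresholds are assumptions.

```python
import time

# Each rule is (name, predicate over the event). Illustrative examples
# echoing the case above: large transfers to unfamiliar payees under
# fabricated urgency are classic fraud indicators.
RULES = [
    ("large_transfer", lambda e: e.get("amount", 0) > 100_000),
    ("new_payee", lambda e: e.get("payee_known") is False),
    ("urgent_request", lambda e: e.get("urgency_flagged") is True),
]

def assess(event: dict) -> dict:
    """Evaluate all rules against one event and time the decision."""
    start = time.perf_counter()
    hits = [name for name, rule in RULES if rule(event)]
    decision = "review" if hits else "allow"
    latency_ms = (time.perf_counter() - start) * 1000
    return {"decision": decision, "hits": hits, "latency_ms": latency_ms}
```

A production engine adds hot-reloadable rule configuration, model scores alongside hand-written rules, and feedback loops from confirmed fraud cases, but the per-event evaluate-and-decide shape stays the same.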

2024-09-19
Copyright © 2024 AISECURIUS, Inc. All rights reserved