Criminal Gang Uses AI to Forge Facial Videos, Successfully Logging into Others’ Accounts

Recently, the Hangzhou Public Security Bureau disclosed a major online fraud case during a press conference, involving the use of AI technology to forge live-action videos, bypassing biometric login authentication on certain platforms. This case highlights how AI technology is being abused by criminals, posing a significant threat to network security defenses.


In June this year, Hangzhou cyber police discovered a criminal gang on an overseas platform claiming that they could bypass the biometric authentication of some leading platforms using AI face-swapping technology, illegally obtaining other people's network accounts and information. The gang attracted individuals by promoting services to "help retrieve others' network information," encouraging the use of AI technology for network intrusion activities.

Police investigations revealed that the criminals relied on AI large-model technology from overseas sources. By uploading a single photo and entering text prompts, they could generate forged face videos. These fake videos were used to bypass biometric login procedures on the platforms, allowing the gang to log into target users' accounts and steal personal information.

After gaining access to victims' accounts, the gang either sold the stolen private data and sensitive information to fraud groups or used it to satisfy some individuals' desire to snoop on others' privacy. Since May this year, the gang had been advertising its "services" on overseas platforms and had stolen large amounts of user data through AI-generated fake identities.

After a thorough investigation, police arrested four key suspects—Hu Mouyun, Hu Mouliang, Zhang Mouguo, and Wu Mouhao—on August 7 in Anhui, Guizhou, and Zhejiang. The gang charged between 2,000 and 5,000 RMB per login attempt, accumulating over 200,000 RMB in illegal profits. The case is still under further investigation.

During the investigation, police found that some platforms' biometric technologies had security vulnerabilities, with relatively simple identification methods, providing opportunities for criminal gangs. In contrast, some platforms had more complex identification systems with stronger defenses against AI-generated videos. This disparity in security technology across platforms highlights the need for further strengthening of login authentication mechanisms to prevent such new types of fraud.

The Hangzhou police advise users to avoid oversharing personal biometric information, such as frontal facial photos and fingerprints, online. Additionally, they recommend regularly monitoring login activities and, if any suspicious logins (such as from unfamiliar devices or IP addresses) are detected, immediately changing account passwords and contacting the platform for assistance.

User Self-Protection Recommendations

This case not only reveals the risks of AI technology abuse but also reminds both platform operators and users to be vigilant and strengthen information security measures to counter the growing threats posed by emerging technologies.

Preventing AI fraud requires both technical and behavioral measures to ensure personal information is not easily exposed. Here are specific preventive measures:

  1. Enhance Identity Verification
    Enable two-factor authentication (2FA) or multi-factor authentication (MFA), such as SMS verification codes or dynamic passwords, to increase account security. Even if the biometric system is compromised, the additional layers of verification can effectively prevent unauthorized logins (a minimal sketch follows this list).

  2. Limit Sharing of Sensitive Information
    Avoid oversharing photos or videos with frontal facial images on social media and other public platforms to prevent AI from using these materials to generate forged information. Only use fingerprint or facial recognition on secure platforms, and choose trusted technology providers.

  3. Regularly Check Account Security
    Periodically review the login records provided by platforms to check for any logins from suspicious devices or locations. If any suspicious activity is detected, immediately change your password and notify the platform.

  4. Be Cautious of Suspicious Requests
    Even if family or friends "verify" their identity through video or voice, do not trust them blindly, especially when financial transactions are involved; confirm their identity through another channel. Agree in advance with family or friends on a "security question" that only the two of you know, and use it to verify identity whenever a request seems suspicious.

  5. Increase Awareness of AI Technology
    Familiarize yourself with common characteristics of AI-generated fake videos or audio. For example, during a video call, ask the other party to perform specific actions, gestures, or expressions to help determine if the video is AI-generated.
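
To make the verification advice in item 1 concrete, here is a minimal sketch of TOTP-based second-factor checking (RFC 6238) using only the Python standard library. It is illustrative rather than production-ready: a real deployment would rely on a vetted library, add server-side rate limiting, and tolerate small clock drift; the demo secret below is hypothetical.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a Base32 shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret_b32: str, submitted_code: str) -> bool:
    """Accept the login only if the submitted code matches the current TOTP."""
    return hmac.compare_digest(totp(secret_b32), submitted_code)

# Even if an attacker defeats face recognition with an AI-forged video,
# the login still fails without the code generated on the user's device.
secret = "JBSWY3DPEHPK3PXP"  # hypothetical demo secret
print(verify_second_factor(secret, totp(secret)))  # True only for the holder
```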

By following these preventive measures, individuals can significantly reduce the risk of falling victim to AI fraud. Even as AI technology advances, users can still protect their information by raising security awareness and using reliable security tools.

Platforms Should Strengthen Prevention

Platforms can adopt multiple technologies and measures to detect forged videos and abnormal logins. At the same time, promoting the legitimate use of AI technology and strictly cracking down on criminal abuse remains the fundamental solution.

  1. Identify Abnormal Devices
    Dingxiang Device Fingerprinting distinguishes legitimate users from potential fraudsters by recording and comparing device data. Its unique device identification and recognition technology detects maliciously controlled environments such as virtual machines, proxy servers, and simulators. The system analyzes whether a device is logging into multiple accounts, frequently changing IP addresses, or modifying device attributes, flagging unusual behaviors that do not match user habits and helping to track and identify fraudulent activity (a toy sketch of this idea appears after this list).

  2. Identify Abnormally Operated Accounts
    Accounts that log in from unusual locations, switch devices, change phone numbers, or suddenly become active after long dormancy require additional verification. Continuous identity verification during a session is also crucial to ensure the user's identity remains consistent throughout. Dingxiang atbCAPTCHA can quickly and accurately distinguish human operators from machines, identifying fraudulent activity with precision and monitoring and intercepting abnormal behavior in real time.

  3. Prevent Fake Videos from Face-Swapping
    Dingxiang's Full-Chain, Panoramic Face Security Threat Detection Solution verifies information across multiple dimensions, including device environment, facial information, image forgery, user behavior, and interaction status. It quickly identifies over 30 types of malicious attacks, such as injection attacks, liveness forgery, image forgery, camera hijacking, debugging risks, memory tampering, root/jailbreak, malicious ROMs, and emulators. Once fake videos, false face images, or abnormal interaction behaviors are detected, the operation can be blocked automatically. The intensity and user-friendliness of video verification can also be configured flexibly, implementing a dynamic mechanism in which normal users are verified with atbCAPTCHA while suspicious users undergo enhanced verification (a protocol-level sketch of such challenge-response checks appears after this list).

  4. Uncover Potential Fraud Threats
    Dingxiang Dinsight helps enterprises with risk assessment, anti-fraud analysis, and real-time monitoring, improving the efficiency and accuracy of risk control. Dinsight processes everyday risk control strategies in under 100 milliseconds on average, supports configurable access to and accumulation of multiple data sources, and can self-monitor and iteratively upgrade its risk control mechanisms using deep learning. Coupled with the Xintell Intelligent Modeling Platform, it automatically optimizes security strategies for known risks, mines risk control logs and data for potential threats, and supports one-click configuration of risk control strategies for different scenarios. Using relational networks and deep learning, it standardizes complex processes such as data processing, feature derivation, model building, and deployment to production, providing an end-to-end modeling service.
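
To illustrate the ideas behind items 1 and 2, below is a toy sketch of device fingerprinting combined with rule-based login anomaly scoring. It is emphatically not Dingxiang's implementation; every attribute name, threshold, and score is an assumption chosen for the example.

```python
import hashlib
from dataclasses import dataclass, field

def device_fingerprint(attrs: dict) -> str:
    """Derive a stable device ID from collected attributes (sorted for determinism)."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

@dataclass
class History:
    known_devices: set = field(default_factory=set)
    known_ips: set = field(default_factory=set)
    accounts_per_device: dict = field(default_factory=dict)  # fp -> {accounts}

def login_risk(account: str, fp: str, ip: str, h: History) -> int:
    """Score a login attempt; higher means more suspicious (toy thresholds)."""
    score = 0
    if fp not in h.known_devices:
        score += 2                           # first login from this device
    if ip not in h.known_ips:
        score += 1                           # unfamiliar network location
    siblings = h.accounts_per_device.setdefault(fp, set())
    siblings.add(account)
    if len(siblings) > 3:
        score += 3                           # one device driving many accounts
    return score

h = History()
fp = device_fingerprint({"os": "Android 14", "gpu": "Mali-G78", "tz": "UTC+8"})
if login_risk("alice", fp, "203.0.113.7", h) >= 3:
    print("step-up verification required")   # escalate to stronger checks
```

In practice such a score would feed a policy engine that decides whether to let the login pass, require step-up verification, or block it outright.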
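For item 3, the sketch below shows only the protocol shape of dynamic challenge-response liveness verification: the server issues a random, short-lived action challenge, and a response counts only if it matches that challenge and arrives before it expires. The video-analysis step is a stub; real detection of injected or forged footage requires a dedicated engine such as the one described above.

```python
import secrets
import time

ACTIONS = ["turn your head to the left", "blink twice", "read the digits 4-7-1 aloud"]
CHALLENGE_TTL = 30       # assumed expiry window, in seconds

_pending: dict = {}      # session_id -> (action, issued_at)

def issue_challenge(session_id: str) -> str:
    """Pick a random action so a pre-generated deepfake cannot match it."""
    action = secrets.choice(ACTIONS)
    _pending[session_id] = (action, time.time())
    return action

def action_performed(video: bytes, action: str) -> bool:
    """Stub for a real video-analysis model; always rejects in this sketch."""
    return False

def verify_response(session_id: str, video: bytes) -> bool:
    """Accept only a timely response to the exact challenge that was issued."""
    entry = _pending.pop(session_id, None)
    if entry is None:
        return False                 # no challenge issued for this session
    action, issued_at = entry
    if time.time() - issued_at > CHALLENGE_TTL:
        return False                 # expired: likely replayed or pre-rendered
    return action_performed(video, action)

challenge = issue_challenge("session-42")        # e.g. "blink twice"
print(challenge, verify_response("session-42", b""))
```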

2024-10-09