Scammers Use AI to Forge Face-Swap Videos, Log in to Others' Medical Insurance Cards

In December 2024, Ms. Li, from Kaiyang County, Guiyang City, was the victim of a startling online fraud: her medical insurance account suddenly received a deduction notification showing that she had purchased over 10,000 yuan worth of precious Chinese medicinal materials in another city. Investigators found that criminals had used AI face-swapping technology to steal from her personal medical insurance account. The police quickly tracked down the criminal gang and apprehended its members.

Ms. Li's Medical Insurance Card Was Stolen and Swiped

The police investigation found that, after obtaining Ms. Li's personal medical insurance information, the criminals used AI technology to forge her face, logged into her medical insurance account, and then went on a spending spree.


The Steps Involved:

  1. Obtain Medical Insurance Information: The criminals obtained Ms. Li's medical insurance account information through illegal channels, possibly via social media scraping, data breaches, phishing websites, or telecom fraud schemes.

  2. Forge a Face with AI Face-Swapping: The criminals used AI face-swapping technology to generate a convincing video of Ms. Li's face.

  3. Log in to the Medical Insurance App from Another Location: Using the forged face video, the criminals passed facial verification and logged into Ms. Li's medical insurance account.

  4. Steal from the Medical Insurance Account: With Ms. Li's electronic medical insurance voucher (QR code), the criminals purchased over 10,000 yuan worth of precious Chinese medicinal materials at a pharmacy, including high-priced products such as Dong'e donkey-hide gelatin and Pianzaihuang.

Self-Protection Suggestions for Users

This case exposes the risks of AI misuse and reminds both platform operators and users to stay vigilant in an increasingly complex network environment, taking effective measures to protect personal information against the security threats posed by new technologies. Countering constantly evolving AI fraud techniques requires prevention on two fronts: technical safeguards and everyday habits. The following practical suggestions can help users defend themselves against AI-driven cyber fraud.

  1. Strengthen Identity Verification: Enable two-factor authentication (2FA) or multi-factor authentication (MFA), such as SMS verification codes, one-time passwords, or email verification, to harden your accounts. Even if a biometric system (fingerprint or facial recognition) is breached, the additional verification layer can still block unauthorized logins. For example, payment and social media accounts can require a one-time code sent to the user's phone in addition to the password at login.
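To illustrate the one-time-code layer described above, here is a minimal sketch of TOTP (RFC 6238) generation and verification using only Python's standard library. The `totp` and `verify` helpers and the ±1-step drift window are illustrative choices, not any particular platform's implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = struct.pack(">Q", for_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify(secret: bytes, code: str, now=None, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = int(time.time()) if now is None else now
    return any(
        hmac.compare_digest(totp(secret, now + i * 30), code)
        for i in range(-window, window + 1)
    )
```

The point of this second factor is that even an attacker who defeats facial recognition with a forged video still needs the current code from the victim's own device.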

  2. Reduce Sharing of Sensitive Information: Avoid posting photos or videos that clearly show your face on social media and other public platforms; such material can be used by AI to forge facial features or generate fake content. In particular, limit selfies and full-body shots on open platforms, since these are the raw material for face-swapping. When enabling facial recognition or fingerprint verification, choose services from reputable, security-vetted providers and avoid unverified tools. Reducing unnecessary exposure of personal information lowers the risk of abuse.

  3. Regularly Check Account Security: Most banks, payment platforms, and social media services let you review your login history. Check it regularly for unfamiliar devices or logins from unexpected locations. If you find anything abnormal, change your password immediately, notify the platform, and enable additional protections such as device binding or security-question verification.
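The login-history check above can be automated in a simple way: compare recent logins against the devices and locations seen before, and flag anything new. This is a hedged sketch with a hypothetical `Login` record and `flag_anomalies` helper, not any platform's actual API:

```python
from dataclasses import dataclass

@dataclass
class Login:
    device_id: str
    city: str
    timestamp: int

def flag_anomalies(history, recent):
    """Flag recent logins whose device or location was never seen in history."""
    known_devices = {login.device_id for login in history}
    known_cities = {login.city for login in history}
    alerts = []
    for login in recent:
        reasons = []
        if login.device_id not in known_devices:
            reasons.append("new device")
        if login.city not in known_cities:
            reasons.append("new location")
        if reasons:
            alerts.append((login, reasons))
    return alerts
```

In Ms. Li's case, a login from an unfamiliar device in another city would have been exactly the kind of event such a check surfaces.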

  4. Improve the Ability to Identify AI Forgeries: Learn the common artifacts of AI-generated video and audio, such as unnatural blinking, mismatched lighting, lip-sync errors, and distorted edges around the face, so you can make sound judgments when faced with a potential scam.

By combining these technical safeguards with careful habits, users can substantially reduce the risk of AI-driven fraud. Staying vigilant and strengthening security awareness is the only reliable way to protect personal privacy in the digital age and avoid becoming a victim.

How Can Platforms Strengthen Prevention?

Fraudsters are using a combination of technology and psychology to precisely manipulate their victims. In the face of AI-driven scams, society must work together through technological means, education, and emotional support to build a protective wall and prevent more people from falling victim. Short video platforms, in particular, need to adopt multiple technical measures to identify fraudulent accounts at the source.

Identify Abnormal Devices

Dingxiang Device Fingerprinting records and compares devices to distinguish legitimate users from potential fraudulent behavior. The technology uniquely identifies each device, flagging maliciously controlled environments such as virtual machines, proxy servers, and emulators. It analyzes whether a device shows abnormal or inconsistent behavior, such as logging in with multiple accounts, frequently changing IP addresses, or altering device attributes, helping to track and identify fraudulent activity.
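The underlying idea of device fingerprinting can be sketched simply: canonicalize the attributes a device reports, hash them into a stable identifier, and then watch for identifiers that touch many accounts. The `fingerprint` and `multi_account_devices` helpers below are hypothetical illustrations of the concept; a production fingerprint (Dingxiang's included) uses far more signals and tamper resistance:

```python
import hashlib
import json
from collections import defaultdict

def fingerprint(attrs: dict) -> str:
    """Hash a canonical (sorted-key) JSON form of device attributes."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

def multi_account_devices(events, threshold: int = 3):
    """Flag fingerprints that log into `threshold` or more distinct accounts."""
    accounts_per_device = defaultdict(set)
    for account_id, attrs in events:
        accounts_per_device[fingerprint(attrs)].add(account_id)
    return {fp: accts for fp, accts in accounts_per_device.items()
            if len(accts) >= threshold}
```

One device cycling through many medical insurance accounts is a strong fraud signal, which is why the multi-account check matters here.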

Identify Accounts with Abnormal Operations

Continuous identity verification during a session is crucial to ensuring the user's identity remains consistent throughout. Dingxiang atbCAPTCHA quickly and accurately distinguishes whether the operator is a human or a machine, precisely identifying fraudulent activity, monitoring in real time, and intercepting abnormal behavior.
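One signal behind human-versus-machine classification is input timing: scripted input tends to be far more regular than a human's. The `looks_automated` heuristic and its threshold below are purely illustrative assumptions, not atbCAPTCHA's actual method:

```python
import statistics

def looks_automated(intervals_ms, min_std: float = 15.0) -> bool:
    """Heuristic: near-constant gaps between input events suggest a script.

    intervals_ms: milliseconds between consecutive keystrokes/taps.
    min_std: hypothetical threshold; humans typically vary far more.
    """
    if len(intervals_ms) < 5:
        return False  # too little data to judge
    return statistics.stdev(intervals_ms) < min_std
```

Real systems combine many such behavioral features (pointer paths, pressure, event ordering) rather than relying on a single statistic.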

Prevent Fake Videos from Face-Swapping

Dingxiang's full-link panoramic face security threat perception solution conducts intelligent verification across multiple dimensions, including device environment, facial information, image authentication, user behavior, and interaction status. It quickly identifies over 30 types of malicious attacks, such as injection attacks, liveness forgery, image forgery, camera hijacking, debugging risks, memory tampering, root/jailbreak, malicious ROMs, and emulators. Once fake videos, fraudulent facial images, or abnormal interactions are detected, the system automatically blocks the operation. It also allows flexible configuration of verification strength versus user-friendliness, enabling a dynamic mechanism that strengthens verification for abnormal users while keeping regular atbCAPTCHA for normal users.
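The two ideas in that description, block on any failed dimension and tune verification strength by risk, can be sketched as follows. The `verification_plan` and `run_face_verification` helpers, their field names, and thresholds are all hypothetical, intended only to show the shape of such a policy:

```python
def verification_plan(risk_level: str) -> dict:
    """Pick verification strength dynamically: stricter checks for flagged users."""
    if risk_level == "abnormal":
        return {"liveness_rounds": 3, "require_motion": True, "captcha": "strict"}
    return {"liveness_rounds": 1, "require_motion": False, "captcha": "standard"}

def run_face_verification(checks):
    """Block as soon as any dimension fails.

    checks: list of (name, passed) pairs, e.g. device environment,
    liveness, image authentication, interaction behavior.
    """
    failed = [name for name, ok in checks if not ok]
    return ("block", failed) if failed else ("pass", [])
```

The dynamic plan keeps friction low for ordinary users while forcing a forged-video attacker through the strictest path.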

Identify Potential Fraud Threats

Dingxiang Dinsight helps enterprises with risk assessment, anti-fraud analysis, and real-time monitoring, improving the efficiency and accuracy of risk control. Dinsight's daily risk control strategies execute in under 100 milliseconds on average, with support for configurable multi-party data access and accumulation. Built on mature indicators, strategies, models, and deep learning technologies, it enables self-monitoring and self-iteration of risk control performance. Paired with the Xintell smart model platform, Dinsight optimizes security strategies for known risks, analyzes potential risks through data mining, and configures risk control strategies for different scenarios with one click. Using association networks and deep learning, the platform standardizes complex data processing, mining, and machine learning workflows, offering a one-stop modeling service from data processing and feature derivation to model construction and deployment.
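At their core, scenario-specific risk control strategies boil down to scoring signals and mapping the score to a decision. This is a toy rule-based sketch; the rule names, weights, and thresholds are invented for illustration and bear no relation to Dinsight's actual strategies or models:

```python
# Hypothetical rules: (signal name, risk weight)
RULES = [
    ("new_device", 30),
    ("geo_mismatch", 25),
    ("face_liveness_failed", 40),
    ("high_value_purchase", 15),
]

def risk_score(signals: set) -> int:
    """Sum the weights of triggered rules, capped at 100."""
    return min(100, sum(weight for name, weight in RULES if name in signals))

def decision(score: int) -> str:
    """Map a score to an action; thresholds are illustrative."""
    if score >= 70:
        return "block"
    if score >= 40:
        return "step_up_verification"
    return "allow"
```

In the case above, a new device, a distant location, and a high-value pharmacy purchase arriving together would push the transaction well into the step-up or block range.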

2025-02-06
Copyright © 2024 AISECURIUS, Inc. All rights reserved