AI Impersonates a Doctor's Face to Steal Over 40,000 HKD from a Medical Insurance Account

According to CCTV reports, Ms. Peng's medical insurance account was fraudulently charged more than 40,000 HKD without her knowledge. A preliminary police investigation revealed that a criminal group had used AI technology to create a fake video of Ms. Peng's face, bypassing her medical insurance account's face verification and carrying out a series of fraudulent transactions.


Case Overview

The criminals first searched the internet for Ms. Peng's photos to obtain her personal data and, using AI technology, transformed those photos into a realistic facial video. With this AI-generated video they bypassed the facial recognition check of the medical insurance system, logged into Ms. Peng's account, and began using her account funds to purchase medications and other medical services.

Steps Taken by the Criminals:

  1. Collecting Personal Photos: The criminals obtained publicly available photos of Ms. Peng via the internet. These photos served as the foundation for generating the fake video.

  2. Forging the Face with AI: Using AI face-swapping technology, the criminals transformed Ms. Peng's photos into a lifelike facial video, natural enough to deceive the identity verification system.

  3. Illegal Login to the Medical Insurance Account: With the forged facial video, the criminals successfully bypassed the facial recognition system of the medical insurance platform and accessed Ms. Peng’s account.

  4. Stealing Account Funds: Once logged in, the criminals started making large purchases at pharmacies and other places using Ms. Peng’s medical insurance card, totaling more than 40,000 HKD in stolen funds.

The police investigation revealed that this was not an isolated incident. The group had previously used similar methods against multiple doctors' medical insurance accounts, with nearly 20 reported cases and total losses exceeding 500,000 HKD.

Self-Protection Advice for Users

This case not only exposes the risks of AI technology misuse but also reminds both platform operators and users to remain vigilant and to take effective measures to protect personal information in an increasingly complex online environment. Defenses against evolving AI-driven fraud must cover both technology and everyday habits. The following practical suggestions can help users cope with AI-driven cyber fraud.

  1. Enhance Identity Verification: Enable two-factor authentication (2FA) or multi-factor authentication (MFA), such as SMS codes, dynamic passwords, or email verification. For payment accounts and social media platforms, confirm each login with a one-time code received on your phone in addition to the password. Even if a biometric factor (fingerprint or facial recognition) is compromised, the extra verification layer can still block unauthorized logins; a minimal sketch of how such a code is verified appears after this list.

  2. Limit Sharing of Sensitive Information: Avoid posting front-facing photos or videos on social media or other public platforms; AI can exploit such material to fabricate facial videos or false profiles. In particular, limit the number of facial images you upload, especially selfies or full-body shots that face-swapping tools can use as source material. When enabling facial or fingerprint recognition, choose reputable, well-vetted technology providers rather than unverified tools. Reducing unnecessary exposure of personal information lowers the risk of misuse.

  3. Regularly Check Account Security: Most banks, payment platforms, and social media services provide login histories. Review them periodically for unknown devices or logins from unusual locations. If anything looks suspicious, change your password immediately, enable additional security settings (such as device binding or security-question verification), and notify the platform; a simple sketch of this kind of check follows the summary below.

  4. Increase Awareness of AI Technology: Learn about common characteristics of AI-generated fake videos and audio. Improving your ability to recognize AI-manipulated content can help you make the right judgment when facing potential scams.
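To make point 1 concrete, here is a minimal sketch of how a time-based one-time password (TOTP, RFC 6238) second factor is verified, using only the Python standard library. The secret, time step, and digit count are illustrative defaults, not any specific platform's settings.

```python
# Minimal TOTP (RFC 6238) verifier -- illustrative sketch, stdlib only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time: float | None = None,
         step: int = 30, digits: int = 6) -> str:
    """Derive the one-time code for a 30-second time window."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # RFC 4226 dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret_b32: str, submitted: str, window: int = 1) -> bool:
    """Accept the current code plus/minus `window` steps of clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
        for i in range(-window, window + 1)
    )

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"  # example Base32 secret, not a real one
    print(verify_second_factor(demo_secret, totp(demo_secret)))  # True
```

Because the code changes every 30 seconds and is derived from a secret that never leaves the authenticator, a stolen facial video alone is not enough to complete the login.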

By following these measures, combining technology with good habits, and staying vigilant, users can effectively reduce the risk of AI fraud, better protect their personal privacy in the digital age, and avoid falling victim.
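As a companion to point 3 above, here is a hypothetical sketch of reviewing a login history for unfamiliar devices or locations. The field names (device_id, country, ts) are assumptions for illustration, not any platform's real log schema.

```python
# Hypothetical login-history review: flag logins from devices or
# locations this account has never used before.
from dataclasses import dataclass

@dataclass
class LoginEvent:
    device_id: str   # assumed field names, for illustration only
    country: str
    ts: str          # ISO 8601 timestamp

def flag_unfamiliar_logins(history: list[LoginEvent],
                           recent: list[LoginEvent]) -> list[LoginEvent]:
    """Return recent logins from a device or location never seen before."""
    known_devices = {e.device_id for e in history}
    known_countries = {e.country for e in history}
    return [e for e in recent
            if e.device_id not in known_devices
            or e.country not in known_countries]

history = [LoginEvent("phone-A", "CN", "2025-01-10T08:00:00")]
recent = [
    LoginEvent("phone-A", "CN", "2025-02-01T09:12:00"),  # familiar: ignored
    LoginEvent("emul-X", "SG", "2025-02-02T03:47:00"),   # unfamiliar: flagged
]
for event in flag_unfamiliar_logins(history, recent):
    print("Suspicious login, change password and notify platform:", event)
```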

How Can Platforms Strengthen Prevention?

Fraudsters combine technology and psychology to manipulate their victims with precision. In the face of AI-driven scams, society must work together, through technical means, education, and emotional support, to build a protective wall and prevent more people from falling victim. Platforms, in particular, need to adopt multiple technical measures to identify fraudulent accounts at the source.

Identify Abnormal Devices

Dingxiang Device Fingerprinting records and compares devices to distinguish legitimate users from potentially fraudulent behavior. The technology assigns each device a unique, persistent identifier and detects maliciously controlled environments such as virtual machines, proxy servers, and emulators. It also analyzes whether a device behaves abnormally or inconsistently, for example logging into multiple accounts, frequently changing IP addresses, or altering device attributes, which helps track and identify fraudulent activity.
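As an illustration of the general technique (not Dingxiang's actual implementation), the sketch below derives a stable fingerprint from device attributes and flags devices whose behavior suggests account farming. The attributes and thresholds are invented for the example.

```python
# Generic device-fingerprinting sketch: hash stable attributes into an
# ID, then watch for devices tied to many accounts or IPs.
import hashlib
from collections import defaultdict

def fingerprint(attrs: dict[str, str]) -> str:
    """Hash a canonical, sorted view of device attributes into a stable ID."""
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

accounts_per_device: dict[str, set] = defaultdict(set)
ips_per_device: dict[str, set] = defaultdict(set)

def record_login(attrs: dict[str, str], account: str, ip: str) -> bool:
    """Track per-device usage; return True when the device looks suspicious."""
    fp = fingerprint(attrs)
    accounts_per_device[fp].add(account)
    ips_per_device[fp].add(ip)
    # Thresholds are invented for the example; real systems tune them.
    return len(accounts_per_device[fp]) > 3 or len(ips_per_device[fp]) > 5

device = {"os": "Android 14", "model": "X-1", "screen": "1080x2400"}
for account in ("a1", "a2", "a3", "a4"):        # one device, many accounts
    suspicious = record_login(device, account, "203.0.113.7")
print(suspicious)  # True: more than 3 accounts share this fingerprint
```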

Identify Accounts with Abnormal Operations

Continuous identity verification during a session is crucial to ensure that the user's identity remains consistent throughout. Dingxiang atbCAPTCHA quickly and accurately distinguishes human operators from machines, precisely identifying fraudulent activity, monitoring in real time, and intercepting abnormal behavior.
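As a toy illustration of the human/machine distinction (not atbCAPTCHA's actual method), the heuristic below flags input whose inter-event timing is implausibly fast or uniform; scripted input tends to lack the jitter of human input. Real systems combine many stronger signals.

```python
# Toy bot-detection heuristic based on input-timing regularity.
import statistics

def looks_automated(intervals_ms: list[float]) -> bool:
    """Flag input whose inter-event timing is implausibly fast or uniform."""
    if len(intervals_ms) < 5:
        return False                      # too little signal to judge
    mean = statistics.mean(intervals_ms)
    spread = statistics.stdev(intervals_ms) / mean
    # Sub-10 ms means or near-zero relative variance rarely occur in humans.
    return mean < 10 or spread < 0.05

print(looks_automated([100.0] * 10))                              # True
print(looks_automated([95.0, 180.0, 120.0, 260.0, 140.0, 90.0]))  # False
```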

Prevent Fake Face-Swapped Videos

Dingxiang's full-link, panoramic face security threat perception solution performs intelligent verification across multiple dimensions, including device environment, facial information, image authenticity, user behavior, and interaction state. It quickly identifies more than 30 types of malicious attacks, such as injection attacks, liveness forgery, image forgery, camera hijacking, debugging risks, memory tampering, root/jailbreak, malicious ROMs, and emulators. Once a fake video, fraudulent facial image, or abnormal interaction is detected, the system automatically blocks the operation. Verification strength and user-friendliness can be configured flexibly, enabling a dynamic mechanism that strengthens verification for abnormal users while keeping regular atbCAPTCHA for normal users.
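The dynamic mechanism described above can be pictured as a policy that maps risk signals to a verification level. The sketch below is hypothetical; the signal names and thresholds are assumptions for illustration, not the vendor's API.

```python
# Hypothetical step-up verification policy: riskier sessions get
# stronger checks, hard attack evidence blocks outright.
from enum import Enum

class Verification(Enum):
    CAPTCHA_ONLY = "regular captcha"
    LIVENESS = "active liveness check (blink, turn head)"
    BLOCK = "reject the operation and alert"

def choose_verification(signals: dict[str, bool]) -> Verification:
    """Map risk signals to a verification level; names/thresholds assumed."""
    if signals.get("camera_injection") or signals.get("memory_tampered"):
        return Verification.BLOCK          # hard evidence of attack tooling
    risky = sum((signals.get("emulator", False),
                 signals.get("rooted", False),
                 signals.get("new_device", False)))
    return Verification.LIVENESS if risky >= 1 else Verification.CAPTCHA_ONLY

print(choose_verification({"new_device": True}))        # Verification.LIVENESS
print(choose_verification({"camera_injection": True}))  # Verification.BLOCK
```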

Identify Potential Fraud Threats

Dingxiang Dinsight helps enterprises with risk assessment, anti-fraud analysis, and real-time monitoring, improving the efficiency and accuracy of risk control. Dinsight processes everyday risk control strategies in under 100 milliseconds on average and supports configurable multi-party data access and accumulation. Built on mature indicators, strategies, models, and deep learning technologies, it enables self-monitoring and self-iteration of risk control performance. Paired with the Xintell smart model platform, Dinsight optimizes security strategies for known risks, mines data for potential ones, and lets operators configure risk control strategies for different scenarios with one click. Using association networks and deep learning, the platform standardizes complex data processing, mining, and machine learning workflows, offering one-stop modeling from data processing and feature derivation to model construction and deployment.
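To illustrate configurable, real-time risk strategies in general terms (not Dinsight's actual interface), here is a minimal rule-engine sketch; the rules, scores, and threshold are invented for the example.

```python
# Generic real-time risk scoring: each configured rule contributes
# points, and the event is blocked above a threshold.
from typing import Callable

# (rule name, predicate over the event, risk points) -- all invented values
Rule = tuple[str, Callable[[dict], bool], int]

RULES: list[Rule] = [
    ("large_purchase",    lambda e: e["amount"] > 5000,       40),
    ("night_time",        lambda e: not 7 <= e["hour"] <= 23, 20),
    ("new_device",        lambda e: e["new_device"],          30),
    ("many_orders_today", lambda e: e["orders_today"] > 5,    30),
]

def score_event(event: dict, threshold: int = 60) -> bool:
    """Sum the points of every matching rule; block above the threshold."""
    hits = [(name, pts) for name, pred, pts in RULES if pred(event)]
    total = sum(pts for _, pts in hits)
    if total >= threshold:
        print(f"block: score={total}, rules={[name for name, _ in hits]}")
        return True
    return False

event = {"amount": 8000, "hour": 3, "new_device": True, "orders_today": 1}
score_event(event)  # block: score=90, rules=['large_purchase', 'night_time', 'new_device']
```

Evaluating a handful of in-memory predicates like this is trivially fast, which is the same design pressure behind keeping production strategy execution under 100 milliseconds.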

2025-02-11