On the morning of November 12, the Jin Yuan Police Station of the Economic Development Zone Branch of the Yichun City Public Security Bureau, Jiangxi Province, received a call for help from a local bank. Staff reported that an elderly woman was insisting on taking out a loan despite incomplete documentation and adamantly refused to leave, declaring, “I won’t leave without the loan.”
Upon receiving the call, officers promptly arrived at the scene and spoke with the elderly woman. Through patient questioning, they learned the reason behind her loan application: she wanted to secure 2 million yuan to support her “boyfriend,” Jin Dong, in financing his film project. She said she had met this “Jin Dong” on a short video platform, where he had professed his love, shared edited photos of the two of them together, and explained that he needed funds for his film production.
Police Discovery and Intervention
The officers, suspicious of her story, examined her phone and found that the “Jin Dong” she referred to was an impostor: his identity and the accompanying photos and videos were AI-generated fakes. Even so, the woman remained convinced of his authenticity, believing him to be her idol.
The police remained patient, walking her through similar scams reported in the media and demonstrating how AI-generated content is made. With the help of her family, whom officers contacted to intervene, the elderly woman finally recognized the scam and abandoned her loan application.
Scammer's Fraud Process
The real Jin Dong is a well-known actor with a positive public image, often portrayed in dramas as a caring and trustworthy character. These roles generate trust, especially among elderly fans, who may blur the line between fiction and reality. Scammers exploited this trust using the following steps:
1. Creation of AI-Generated Content
Scammers used AI to create realistic short videos mimicking Jin Dong’s appearance, voice, and expressions, making them highly convincing.
2. Targeted Distribution via Short Video Platforms
By leveraging platform recommendation algorithms, scammers ensured the fake content reached specific users, quickly capturing their attention.
3. Building Emotional Connections
Through frequent interactions, such as declarations of affection and filming “short videos together,” the fake “Jin Dong” fostered an emotional bond. The narrative of “a celebrity noticing an ordinary person” further reinforced trust.
4. Manipulation and Financial Exploitation
Continuous interaction built unwavering trust in the fake identity. Once the victim was emotionally invested, the scammers introduced the financial scheme, claiming the money was for a film project, and applied psychological pressure to extract it.
Elderly individuals are often unfamiliar with AI technology and online scams, making it hard for them to judge whether videos or photos are authentic. Loneliness and emotional need can also make them more susceptible to false emotional connections, leading them to set their usual judgment aside.
Platforms Must Strengthen Protections
Fraudsters are increasingly combining technology with psychological manipulation to precisely control their victims. To combat AI-generated scams, society must unite and establish robust defenses through technological measures, educational outreach, and emotional support. Short video platforms, in particular, must employ multi-layered techniques to identify and eliminate fraudulent accounts and detect fake celebrity videos at their source.
1. Detecting Anomalous Devices
Dingxiang Device Fingerprinting can distinguish legitimate users from potential fraudsters by tracking and comparing device fingerprints. This technology uniquely identifies and analyzes devices to detect malicious tools like virtual machines, proxy servers, and emulators. It also flags unusual behaviors such as multi-account logins, frequent IP address changes, and inconsistent device attributes. These insights help trace and identify fraudulent activity.
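As a rough illustration of how fingerprint-based checks of this kind can work in principle, here is a minimal sketch; the attribute names, helper functions, and thresholds below are hypothetical assumptions for this example, not Dingxiang's actual API or logic:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class DeviceProfile:
    """Hypothetical attributes collected from a client device."""
    user_agent: str
    screen: str                 # e.g. "1920x1080"
    timezone: str
    is_emulator: bool           # flagged by environment probes
    is_proxy: bool              # flagged by network heuristics
    accounts_seen: set = field(default_factory=set)
    ips_seen: list = field(default_factory=list)

def fingerprint(profile: DeviceProfile) -> str:
    """Derive a stable identifier from relatively static attributes,
    so the same device can be recognized across sessions."""
    raw = "|".join([profile.user_agent, profile.screen, profile.timezone])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def risk_flags(profile: DeviceProfile) -> list:
    """Flag the behaviors described above: malicious tooling,
    multi-account logins, and frequent IP changes."""
    flags = []
    if profile.is_emulator or profile.is_proxy:
        flags.append("malicious-tooling")
    if len(profile.accounts_seen) > 5:        # hypothetical threshold
        flags.append("multi-account-login")
    if len(set(profile.ips_seen)) > 10:       # hypothetical threshold
        flags.append("frequent-ip-change")
    return flags
```

A platform could then treat any device whose fingerprint accumulates risk flags across multiple accounts as a candidate for restriction or manual review.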
2. Detecting Anomalous Account Activity
Anomalous activities such as remote logins, device changes, phone number updates, and the sudden activation of dormant accounts call for additional validation. Persistent identity verification throughout a session is crucial to confirm that the account is still being used by its legitimate owner.
Dingxiang atbCAPTCHA can accurately distinguish between humans and bots, effectively identifying fraud in real-time. It continuously monitors and blocks anomalous behavior.
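To make the idea concrete, here is a minimal, hypothetical sketch of session-level anomaly rules of this kind; the field names and the 180-day dormancy threshold are assumptions for illustration, not atbCAPTCHA's actual implementation:

```python
from datetime import datetime, timedelta

# Hypothetical per-account state consulted by the rules below.
account_state = {
    "last_login_region": "Jiangxi",
    "last_device_id": "dev-a1b2",
    "last_active": datetime.now() - timedelta(days=200),
    "phone_changed_recently": False,
}

def login_anomalies(event: dict, state: dict) -> list:
    """Apply the checks listed above: remote logins, device changes,
    phone number updates, and dormant accounts suddenly waking up."""
    reasons = []
    if event["region"] != state["last_login_region"]:
        reasons.append("remote-login")
    if event["device_id"] != state["last_device_id"]:
        reasons.append("device-change")
    if state["phone_changed_recently"]:
        reasons.append("recent-phone-update")
    if event["time"] - state["last_active"] > timedelta(days=180):
        reasons.append("dormant-account-activation")
    return reasons

# Any anomaly would trigger an extra identity check mid-session.
event = {"region": "Guangdong", "device_id": "dev-z9y8", "time": datetime.now()}
print(login_anomalies(event, account_state))
# -> ['remote-login', 'device-change', 'dormant-account-activation']
```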
3. Preventing Deepfake Videos
Dingxiang Full-Chain Panoramic Facial Security Threat Perception Solution leverages multi-dimensional data, including device environment, facial information, image forensics, user behavior, and interaction states, for intelligent verification. It quickly identifies over 30 types of malicious activities, such as injection attacks, liveness forgery, image manipulation, camera hijacking, debugging risks, memory tampering, root/jailbreak actions, malicious ROMs, and emulators.
When detecting forged videos, fake facial images, or suspicious interactions, the system can automatically block operations. Additionally, it allows flexible configuration of video verification strength to maintain a dynamic balance between user-friendliness and security. Normal users are verified via atbCAPTCHA, while suspicious users undergo enhanced scrutiny.
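The tiered routing described here can be pictured with a short sketch; the risk threshold, signal names, and action labels below are illustrative assumptions rather than the product's real configuration:

```python
def verification_action(risk_score: float, signals: set) -> str:
    """Route a facial-verification request by risk tier: known-bad
    signals are blocked outright, elevated risk gets enhanced checks,
    and everyone else takes the lightweight CAPTCHA path."""
    BLOCKING = {"injection-attack", "camera-hijack", "memory-tamper"}
    if signals & BLOCKING:
        return "block"              # forged video or hijacked pipeline
    if risk_score >= 0.7:           # hypothetical threshold
        return "enhanced-liveness"  # stricter liveness checks and review
    return "captcha"                # normal users, minimal friction

# A high-risk session flagged for a rooted device gets the stricter path.
print(verification_action(0.8, {"root-jailbreak"}))   # enhanced-liveness
print(verification_action(0.1, set()))                # captcha
```

Separating the block list from the score threshold is what lets the verification strength be tuned without touching the hard rules for outright forgery.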
4. Uncovering Potential Fraud Threats
Dingxiang Dinsight assists businesses with risk assessments, fraud analysis, and real-time monitoring, improving the efficiency and accuracy of risk control. Dinsight's average processing time for daily risk control strategies is under 100 milliseconds, and it supports configurable multi-source data integration.
With mature indicators, strategies, and deep learning models, Dinsight enables self-monitoring and iterative risk control mechanisms. Paired with the Xintell Intelligent Model Platform, it optimizes security strategies for known risks, mines potential risks through log analysis and data exploration, and applies one-click configuration for various scenarios. Based on correlation networks and deep learning, Xintell standardizes data processing, feature engineering, model development, and deployment into a one-stop modeling solution.
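As a toy example of real-time strategy evaluation in this spirit (the hard-coded rules, fields, and thresholds are placeholders; a production engine like Dinsight would load configurable strategies from multiple data sources rather than embed them in code):

```python
import time

# Each strategy is a (name, predicate) pair over an event dict; in a
# production engine these would be configured, not hard-coded.
RULES = [
    ("large-transfer", lambda e: e.get("amount", 0) > 100_000),
    ("new-payee",      lambda e: e.get("payee_age_days", 999) < 1),
    ("night-activity", lambda e: e.get("hour", 12) < 5),
]

def evaluate(event: dict, budget_ms: float = 100.0) -> dict:
    """Run all strategies against one event and report elapsed time,
    mirroring the sub-100-millisecond processing target noted above."""
    start = time.perf_counter()
    hits = [name for name, predicate in RULES if predicate(event)]
    elapsed_ms = (time.perf_counter() - start) * 1000
    return {
        "decision": "review" if hits else "pass",
        "hits": hits,
        "within_budget": elapsed_ms <= budget_ms,
    }

# A 2-million-yuan transfer at 3 a.m. trips two rules and is held for review.
print(evaluate({"amount": 2_000_000, "hour": 3}))
```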
By combining advanced technology, proactive education, and emotional support, platforms can significantly reduce the risks posed by AI-driven fraud, ensuring safer online experiences for all users.