The Ministry of Public Security Discloses a Case of Using AI to Create Obscene Images

In 2024, during the Ministry of Public Security's "Net Clean 2024" special operation, a case involving the use of AI to create fake obscene images attracted widespread attention.


The suspect, Liu Moumou, obtained photos of victims by secretly photographing them and by downloading images from social media feeds such as "WeChat Moments." He then used AI software to fabricate obscene images, attached the victims' personal information, and spread the images online, inciting large numbers of netizens to attack the victims. This seriously disrupted the victims' work and lives. Liu Moumou has since been placed under criminal compulsory measures in accordance with the law.

Technical Analysis of the Case

In this case, the suspect, Liu Moumou, carried out the criminal activity through the following technical process:

  1. Stealing Photos: The suspect first obtained photos by covert photography or by downloading them from social media feeds such as "WeChat Moments." Personal photos shared on these platforms became a "resource library" for AI image forgery. Many people frequently post personal photos with weak privacy settings, which makes such crimes easier to commit.

  2. Image Forgery: Liu Moumou used readily available AI image-generation software to combine the victims' real photos with obscene material, producing fake obscene pictures. By using AI to swap faces, adjust colors, and refine details, he created highly realistic fakes.

  3. Online Dissemination and Malicious Use: After creating the fakes, Liu Moumou attached the victims' personal information and spread the images widely on social networks, drawing mass attacks and accusations from netizens. This not only severely damaged the victims' mental well-being, reputations, and daily lives, but also escalated into a larger cyberbullying incident.

Preventing the Misuse of Personal Photos

  1. Focus on Personal Privacy Protection: Social media has become an indispensable part of daily life, but oversharing turns it into a warehouse of raw material for criminals. Avoid posting personal photos, voice recordings, videos, and other sensitive information, and limit the exposure of private details such as personal accounts, family, or work to reduce the risk of identity theft. If you discover AI-generated fake voices or videos of yourself, report them to the platform administrators and law enforcement immediately, and push to have them removed and their source traced.

  2. Strengthen Content Review on Social Platforms: Social platforms should reliably identify and promptly handle content that may involve AI forgery, preventing wider social harm. For instance, image-authenticity detection tools can analyze pixel-level anomalies and spot "flaws" left by the generation process to judge whether an image is AI-generated, as sketched below.
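As an illustration of the idea, here is a minimal sketch of one such heuristic: the upsampling layers in many image generators leave periodic high-frequency artifacts that show up in the Fourier spectrum. The feature and threshold below are hypothetical simplifications; production detectors are trained classifiers, not single-statistic rules.

```python
# Minimal sketch of a frequency-domain check for AI-generated images.
# The 0.75 band cutoff and 0.05 threshold are hypothetical, chosen only
# to illustrate the technique.
import numpy as np
from PIL import Image

def high_freq_energy_ratio(path: str) -> float:
    """Share of spectral energy in the outermost frequency band."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - cy, xx - cx)
    # Everything beyond 75% of the maximum radius counts as "high frequency".
    outer = radius > 0.75 * radius.max()
    return spectrum[outer].sum() / spectrum.sum()

def looks_ai_generated(path: str, threshold: float = 0.05) -> bool:
    # An unusually large high-frequency share is one weak signal among
    # many; a real system would combine it with a trained classifier.
    return high_freq_energy_ratio(path) > threshold
```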

At the same time, social media platforms should build a security alert system on top of user behavior patterns and identity information, monitoring and restricting suspicious activities such as abnormal logins and high-frequency messaging. For example, by analyzing mouse-movement patterns, typing styles, and other behavioral signals, a platform can flag activity that deviates from an account's regular use and subject it to extra identity and device verification. Large models can quickly sift massive amounts of data and surface subtle inconsistencies that humans may miss. A simple sketch of this kind of behavioral flagging follows.
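A minimal sketch of behavior-based flagging, assuming a hypothetical per-account history of hourly message counts (the field names, the 24-hour minimum, and the z-score cutoff are invented for illustration):

```python
# Flag accounts whose current message rate deviates sharply from their
# own history -- one simple form of "high-frequency messaging" detection.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Account:
    user_id: str
    hourly_message_counts: list[int]  # recent per-hour history

def is_suspicious(acct: Account, current_hour_count: int,
                  z_cutoff: float = 3.0) -> bool:
    history = acct.hourly_message_counts
    if len(history) < 24:  # not enough history to judge reliably
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current_hour_count > mu + 10  # arbitrary floor, hypothetical
    return (current_hour_count - mu) / sigma > z_cutoff

# Accounts flagged here would be routed to step-up verification
# (extra identity and device checks) rather than blocked outright.
```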

  1. Dingxiang Device Fingerprinting: This technology generates a unified, unique fingerprint for each device and builds a multi-dimensional identification model from device, environment, and behavior data. The model flags risky devices that may be under malicious control, such as virtual machines, proxy servers, and emulators, and detects abnormal behavior such as one device logging into many accounts, frequent IP changes, or altered device properties. This helps trace fraudsters' activity and lets businesses manage the same device ID across channels, strengthening cross-channel risk identification and control (see the sketch after this list).

  2. Dingxiang Dinsight: This real-time risk control engine helps businesses with risk assessment, anti-fraud analysis, and real-time monitoring, improving the efficiency and accuracy of risk decisions. Dinsight processes everyday risk control strategies in an average of 100 milliseconds, supports configurable integration of multi-source data, and uses deep learning so that strategies can be self-monitored and iterated. Paired with the Xintell intelligent model platform, it can optimize strategies for known risks, mine potential risks from risk logs and data, and roll out scenario-specific strategies with one-click configuration. Built on association networks and deep learning, the platform standardizes data processing, feature derivation, and machine learning into an end-to-end service from raw data to deployed model.
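To make the two ideas above concrete, here is a minimal sketch of the general techniques they describe: deriving a stable device identifier from collected attributes, then scoring an event with simple risk rules. This is not Dingxiang's actual API; every name, signal, and weight here is hypothetical.

```python
# Sketch of device fingerprinting plus rule-based risk scoring.
# Real systems collect far richer attributes and learn weights from data.
import hashlib
import json

def device_fingerprint(attrs: dict) -> str:
    """Hash a canonical form of the device attributes into a stable ID."""
    canonical = json.dumps(attrs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

RULES = [
    # (predicate over the event, risk weight) -- hypothetical values
    (lambda e: e.get("is_emulator"), 40),
    (lambda e: e.get("is_proxy_or_vpn"), 25),
    (lambda e: e.get("accounts_on_device", 0) > 5, 20),
    (lambda e: e.get("ip_changes_last_24h", 0) > 10, 15),
]

def risk_score(event: dict) -> int:
    return sum(weight for predicate, weight in RULES if predicate(event))

event = {
    "fingerprint": device_fingerprint({"os": "Android 14", "gpu": "Mali-G78"}),
    "is_emulator": True,
    "accounts_on_device": 8,
}
print(risk_score(event))  # 60 -> e.g., require extra verification
```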

By leveraging these technologies, social platforms can more effectively screen and filter uploaded content, preventing the spread of fake images.

2024-10-08