PicSo AI, an image generation platform, ran advertisements emphasizing the generation of images of 'girls' that appeared to target pornographic content creation; the ads were served through Meta/Instagram's advertising platform.
A user reported encountering advertisements for PicSo AI, a generative AI image creation platform, while browsing Meta/Instagram. The ads appeared alongside the user's regular social media content, including NBA and F1 memes. The user expressed concern about PicSo's emphasis on generating images of 'girls,' which they interpreted as targeting pornographic applications of the technology. They found the content objectionable enough to report it to the United States Department of Justice and encouraged others to do the same. The incident highlights concerns about AI-generated content being used for potentially inappropriate purposes, as well as the role of social media platforms in advertising such services. The report provided no technical details about the AI system itself or the scale of its deployment.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI that exposes users to harmful, abusive, unsafe, or inappropriate content, possibly including advice or encouragement to act on it. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms, such as profanity, inflammatory political speech, or pornography.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed