In a concerning turn of events, Meta, the parent company of Facebook and Instagram, is facing a legal battle initiated by 34 U.S. states. The states have filed a lawsuit alleging that the company improperly manipulates minors who use its platforms. The action comes amid rapid advances in artificial intelligence (AI), particularly in text and image generation.
Attorneys general from states including California, New York, Ohio, South Dakota, Virginia, and Louisiana accuse Meta of using its algorithms to foster addictive behavior and harm the mental well-being of children, primarily through in-app features such as the "Like" button. The states are pressing ahead with the action even though Meta's Chief AI Scientist recently stated that concerns over the existential risks of AI are premature, and the company says it already uses AI to address trust and safety issues on its platforms.

The states' attorneys are seeking damages, restitution, and compensation for each state involved, with figures ranging from $5,000 to $25,000 per alleged occurrence.
Simultaneously, the UK-based Internet Watch Foundation (IWF) has raised alarms about the disturbing proliferation of AI-generated child sexual abuse material (CSAM). According to a recent IWF report, 20,254 AI-generated CSAM images were discovered on a single dark web forum in just one month. The IWF warns that this trend could flood the internet with such content.
The IWF is calling for global cooperation to combat the problem and has proposed a multi-faceted strategy: adjustments to existing laws, improved training for law enforcement, and regulatory oversight of AI models. For AI developers specifically, the IWF recommends prohibiting the use of AI to generate child abuse content, excluding models associated with such material, and focusing on removing this material from their models.
Advances in generative AI image tools have made it dramatically easier to create lifelike depictions of people. Platforms such as Midjourney, Runway, Stable Diffusion, and OpenAI's DALL-E are notable examples of tools capable of generating highly realistic images.
The legal action against Meta and the growing concerns surrounding AI-generated content underscore the need for a robust, comprehensive approach to child safety and responsible AI use in the digital age.