The Legal Landscape
Governments are starting to catch up, but not fast enough.
Even where regulations exist, real progress has been slow: the technology is advancing faster than legislation can keep up.
Generated content can now look nearly identical to real footage. This is especially harmful to people who may not recognize the signs of AI manipulation.
Elderly people fall for AI scams. Children can't distinguish real from fake. Vulnerable communities are targeted with tailored disinformation. The labels exist on paper, but enforcement is almost nonexistent.
EU AI Act
The EU AI Act (Regulation 2024/1689) requires AI-generated content to be labeled. Its transparency obligations apply from August 2, 2026, affecting generative AI providers and anyone creating deepfakes.
Providers must mark AI-created or manipulated content (audio, images, video, and text) in a machine-readable format. Deepfakes and AI content used in matters of public interest require clear labeling.
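The Act does not prescribe what "machine-readable" means in practice; provenance standards such as C2PA are the leading candidates. As a minimal sketch of the idea (the field names and the JSON-sidecar convention here are illustrative assumptions, not anything the regulation mandates), a provider could attach a structured label alongside each generated file:

```python
import json
from datetime import datetime, timezone

def make_ai_label(generator: str, content_type: str) -> dict:
    """Build a minimal machine-readable provenance label.
    Field names are illustrative, not mandated by the AI Act."""
    return {
        "ai_generated": True,
        "generator": generator,
        "content_type": content_type,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }

def write_sidecar(media_path: str, label: dict) -> str:
    """Write the label as a JSON sidecar file next to the media file,
    so downstream platforms can detect it programmatically."""
    sidecar_path = media_path + ".ai-label.json"
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump(label, f, indent=2)
    return sidecar_path

label = make_ai_label("example-image-model", "image")
path = write_sidecar("synthetic_photo.png", label)
print(path)
```

Real deployments would embed this kind of record directly in the file's metadata and sign it cryptographically, so the label survives copying and can't be trivially stripped or forged; a loose sidecar file is only the simplest possible illustration.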
The European Commission published the first draft of a code of practice in December 2025, to be finalized by mid-2026. The code does not ban filtering, but it does oblige platforms to label AI content.
USA Regulations
California's AB 2015 (2025) requires AI-generated images and videos to be labeled, especially in election contexts, with penalties for violations.
At the federal level, the proposed No AI FRAUD Act similarly targets deepfakes that misuse a person's voice or likeness, aiming for enforcement by 2026.
China & Asia
China has required "synthetic" labeling on AI video and audio content since 2023, overseen by the Cyberspace Administration.
South Korea mandated transparency for generative AI outputs starting in 2024.
Rest of the World
Australia and Brazil have voluntary codes that may become mandatory by 2026. India already penalizes platforms for unlabeled AI advertisements.
These regulations mainly target deception, similar to the EU approach.