Privacy and digital rights advocates are raising concerns over a new law that, on the surface, seems like a positive step: a federal effort to combat revenge porn and AI-generated deepfakes. The recently enacted Take It Down Act aims to make it illegal to publish nonconsensual explicit images—whether real or AI-created—and mandates that platforms respond to victims’ takedown requests within just 48 hours or face potential liability. While many see this as a long-overdue victory for victims, experts warn that vague language, lax verification standards, and tight deadlines could lead to unintended consequences such as overreach, censorship of legitimate content, and increased surveillance.
“Content moderation at scale is inherently problematic and often results in the censorship of important and necessary speech,” says India McKinney, director of federal affairs at the Electronic Frontier Foundation.
Platforms now have one year to develop processes for removing nonconsensual intimate images (NCII). Although the law requires takedown requests to come from victims or their representatives, it asks only for a physical or electronic signature; no photo ID or other verification is mandated. That lowers the barrier for victims to come forward, but it also opens the door to abuse.
“I fear we’ll see more requests to remove images of queer and trans people in relationships, and even more so, requests for consensual adult content,” McKinney warns.
Senator Marsha Blackburn, a Republican from Tennessee and co-sponsor of the Take It Down Act, has also sponsored the Kids Online Safety Act, which places responsibility on platforms to protect children from harmful content. Blackburn has said she believes content related to transgender people is harmful to kids. Similarly, the conservative Heritage Foundation, the think tank behind Project 2025, has argued that “keeping trans content away from children is protecting kids.”
Because platforms face liability if they fail to remove a reported image within 48 hours, many may simply delete content quickly rather than verify each request, potentially taking down lawful speech along with the targeted material, McKinney warns.
Major platforms like Snapchat and Meta have expressed support for the law but haven’t clarified how they will verify that the person requesting a takedown is actually a victim. On decentralized platforms like Mastodon, a network of independently operated servers, the response may be even more cautious: Mastodon indicated it would lean toward removal whenever verification proves difficult. These decentralized networks, often run by nonprofits or individuals, could be especially vulnerable to the law’s “chilling effect,” since the Federal Trade Commission (FTC) can treat non-compliance as an unfair or deceptive practice, even if the platform isn’t a commercial entity.
“This is troubling, especially as the FTC’s leadership has shown signs of politicizing the agency and using its power to target platforms based on ideology rather than principles,” says the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn.
To prepare for the rapid takedown deadlines, platforms may begin proactively monitoring content before dissemination—using AI tools to detect harmful material. For example, AI startup Hive works with platforms like Reddit, Giphy, Vevo, Bluesky, and BeReal to identify deepfakes and child sexual abuse material. Hive’s CEO, Kevin Guo, explained that many clients integrate Hive’s API at the point of upload to flag problematic content early, helping platforms address issues before they go viral.
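Guo didn’t detail Hive’s API, but the upload-time screening pattern he describes generally looks like the sketch below: before publishing a new image, the platform sends it to a moderation service and holds anything that scores above a threshold for human review. The endpoint, response format, labels, and threshold here are illustrative assumptions, not Hive’s actual interface.

```python
# Minimal sketch of upload-time content screening against a moderation API.
# The URL, auth scheme, and response schema are hypothetical placeholders.
import requests

MODERATION_URL = "https://moderation.example.com/v1/classify"  # hypothetical endpoint
API_KEY = "YOUR_API_KEY"

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the image can be published, False if it should be held for review."""
    resp = requests.post(
        MODERATION_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": image_bytes},
        timeout=10,
    )
    resp.raise_for_status()
    scores = resp.json()  # assumed shape: {"nudity": 0.02, "deepfake": 0.91, ...}
    flagged = [label for label, score in scores.items() if score >= 0.8]
    # Anything flagged is quarantined for human review rather than published.
    return not flagged
```

Screening at the point of upload keeps flagged material from circulating in the first place, which makes a 48-hour removal window far more tractable than chasing content after it has spread.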
Reddit, for instance, uses sophisticated internal tools and partners with the nonprofit SWGfL to match uploaded content against hashes of known NCII. How these platforms will verify that a requester is actually the victim, however, remains unclear, raising concerns about potential misuse.
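Matching against “known NCII” typically relies on perceptual hashing: once an image is reported, its hash is stored, and later uploads whose hashes fall within a small distance of a stored hash are flagged, without the matching service ever needing a copy of the image itself. The snippet below is a minimal illustration of that technique using the open-source imagehash library; it is not Reddit’s or SWGfL’s actual implementation, and the example hash is made up.

```python
# Illustrative perceptual-hash matching against a list of known image hashes.
from PIL import Image
import imagehash

# Hypothetical perceptual hashes of previously reported NCII (hex-encoded).
KNOWN_NCII_HASHES = [imagehash.hex_to_hash("d1c4b2a09f8e7d6c")]

def matches_known_ncii(path: str, max_distance: int = 5) -> bool:
    """Flag an upload whose perceptual hash is within max_distance bits of a known hash."""
    upload_hash = imagehash.phash(Image.open(path))
    return any(upload_hash - known <= max_distance for known in KNOWN_NCII_HASHES)
```

Because perceptual hashes tolerate small edits such as resizing or recompression, this approach catches re-uploads that an exact checksum comparison would miss.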
McKinney warns that such proactive monitoring could extend into encrypted messaging services, despite the law focusing on public or semi-public content. Since the law also requires platforms to “remove and make reasonable efforts to prevent reuploading” of NCII, there’s a risk that encrypted messaging apps like WhatsApp, Signal, or iMessage could face pressure to scan private communications—raising significant privacy and free speech issues. Major companies like Meta, Signal, and Apple have yet to specify their plans regarding encrypted messaging.
The law also has broader implications for free speech. President Donald Trump publicly praised the Take It Down Act and joked that he would use it for himself, citing his claims of being unfairly targeted online. The remark lands amid ongoing debates about content moderation and political influence, especially as his administration has moved to restrict or retaliate against speech it opposes, such as attempting to bar Harvard University from enrolling foreign students and freezing federal funding over curriculum it objects to.
McKinney expresses concern that the law’s broad scope, combined with political pressures, could lead to increased censorship or suppression of diverse viewpoints. She emphasizes that, amid ongoing efforts to ban books or restrict information on topics like critical race theory, abortion, or climate change, it’s troubling to see both political parties supporting content moderation at this scale.
As these developments unfold, the balance between protecting individuals from harmful content and preserving free expression remains a critical and complex challenge.