TAKE IT DOWN Act: FTC Sends Warning to 15 Platforms (2026)

8 min read · By Viallo Team

Quick take: FTC Chairman Andrew Ferguson sent warning letters on May 11, 2026 to 15 major tech platforms - Amazon, Alphabet, Apple, Meta, Microsoft, TikTok, and others - telling them to comply with the TAKE IT DOWN Act by May 19, 2026. Platforms without a working removal process for non-consensual intimate images (including AI deepfakes) by that date face civil penalties of $53,088 per violation. The 48-hour takedown clock is about to start ticking for real.


What the FTC just did

On May 11, 2026, FTC Chairman Andrew Ferguson sent formal warning letters to the leadership of 15 major technology companies. The message was blunt: comply with the TAKE IT DOWN Act by May 19, 2026, or face enforcement action.

The TAKE IT DOWN Act is a federal law that requires platforms to remove non-consensual intimate images - including AI-generated deepfakes - within 48 hours of receiving a valid takedown request.

The law was signed in May 2025, but enforcement was delayed to give platforms time to build compliance systems. That grace period ends on May 19. The FTC isn't asking nicely anymore - these are formal warnings with a specific deadline and a specific penalty: $53,088 per violation.

This is the difference between a law existing on paper and a law with teeth. For the past year, platforms could point to the TAKE IT DOWN Act in their terms of service while dragging their feet on actual compliance. That's about to change.

Which platforms were warned

The FTC's letters went to 15 companies. The full list: Amazon, Alphabet (Google), Apple, Automattic (WordPress), Bumble, Discord, Match Group (Tinder, Hinge), Meta (Facebook, Instagram), Microsoft, Pinterest, Reddit, SmugMug (Flickr), Snapchat, TikTok, and X (formerly Twitter).

A few things stand out about this list. First, it's broad. This isn't just social media companies. Amazon operates Twitch and Amazon Photos, its consumer photo storage service. Automattic runs WordPress.com, which hosts millions of blogs. Match Group covers dating apps where intimate photo sharing is common.

Second, SmugMug's inclusion is significant. SmugMug is primarily a photo hosting and portfolio platform used by photographers. Its presence on the list signals that the FTC considers any platform where user-uploaded images live to be within the law's scope. This means photo-hosting services, portfolio platforms, and private sharing tools all need compliance processes - not just the obvious social media players.

Third, the FTC went directly to company leadership, not legal departments. These aren't regulatory filings that sit in a compliance queue. These are letters designed to get personal attention from the people who can allocate engineering and policy resources.


What the TAKE IT DOWN Act requires

The core obligation is straightforward: when a victim submits a valid takedown request for non-consensual intimate imagery, the platform has 48 hours to remove it. That's 48 hours from when the platform receives the request - not from when someone on the trust and safety team gets around to reading it.
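To make that concrete, here's a minimal sketch using Python's standard datetime module and purely hypothetical timestamps. It shows how much extra time a platform would quietly gain if it started the clock at review instead of receipt:

```python
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=48)

# Hypothetical timestamps for illustration only.
received_at = datetime(2026, 5, 20, 9, 15, tzinfo=timezone.utc)        # request enters the intake queue
first_reviewed_at = datetime(2026, 5, 21, 14, 0, tzinfo=timezone.utc)  # a moderator first opens it

# The statute starts the clock at receipt.
deadline = received_at + TAKEDOWN_WINDOW

# Starting at review would stretch the window - the reading the law rules out.
wrong_deadline = first_reviewed_at + TAKEDOWN_WINDOW

print("Removal due by:", deadline.isoformat())                          # 2026-05-22T09:15:00+00:00
print("Time gained by starting at review:", wrong_deadline - deadline)  # 1 day, 4:45:00
```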

The law covers both real photographs and AI-generated deepfakes. If someone uses an AI tool to create realistic intimate imagery of an identifiable person without their consent, distributing it is a federal crime. Platforms that host such content must take it down when notified. The EU passed a similar ban on AI nudifier apps earlier this year, but the US law goes further by putting enforcement power directly in the FTC's hands.

Platforms must also provide "clear and conspicuous" notice about their removal process. Burying a reporting link three submenus deep in your help center doesn't count. The FTC published compliance guidance at ftc.gov spelling out what "clear and conspicuous" means in practice.

  • 48-hour removal: Content must come down within 48 hours of a valid request. The clock starts at receipt, not at review.
  • Covers deepfakes: AI-generated intimate imagery of identifiable people is treated the same as real photographs under the law.
  • Federal crime: Publishing non-consensual intimate images is a criminal offense, separate from the platform's civil compliance obligations.
  • Minors protected: Minors - or parents and guardians filing on their behalf - can submit takedown requests, with enhanced protections under existing child exploitation statutes.
  • $53,088 per violation: The FTC can levy civil penalties for each instance of non-compliance. For a platform hosting thousands of reported images, the math gets expensive fast - the quick calculation below shows how fast.
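A hypothetical back-of-envelope sketch, assuming (and this is an open enforcement question, not settled law) that each un-removed reported image counts as a separate violation:

```python
PENALTY_PER_VIOLATION = 53_088  # USD, the current civil penalty amount

# Hypothetical counts of un-removed reported images, assuming each
# one is counted as a separate violation.
for violations in (10, 500, 5_000):
    print(f"{violations:>6,} violations -> ${violations * PENALTY_PER_VIOLATION:,}")

# Output:
#     10 violations -> $530,880
#    500 violations -> $26,544,000
#  5,000 violations -> $265,440,000
```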

How to request a takedown

If you're a victim, here's the process. You submit a takedown request directly to the platform where the content appears. Your request needs to identify the specific content, confirm that it depicts you (or a minor you're filing on behalf of), and state that the content was shared without consent.
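None of this reflects any platform's actual API, but as a purely hypothetical sketch, the three statutory elements a request must carry might be modeled like this:

```python
from dataclasses import dataclass

@dataclass
class TakedownRequest:
    """Hypothetical model of the statutory minimum - not a real platform API."""
    content_url: str              # identifies the specific content
    depicts_requester: bool       # the requester, or a minor they represent
    shared_without_consent: bool  # the non-consent statement
    on_behalf_of_minor: bool = False

def meets_statutory_minimum(req: TakedownRequest) -> bool:
    # Identify the content, confirm identity, state non-consent.
    return bool(req.content_url) and req.depicts_requester and req.shared_without_consent

req = TakedownRequest(
    content_url="https://example.com/post/12345",
    depicts_requester=True,
    shared_without_consent=True,
)
print(meets_statutory_minimum(req))  # True - and the 48-hour clock starts at receipt
```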

Most major platforms have already built dedicated reporting flows for this. On Meta's platforms, look for the "intimate image removal" option in the reporting menu. On X, there's a specific form for non-consensual intimate media. Google has a removal request tool that covers Search, YouTube, and other services. If a platform doesn't have a visible reporting path, that's already a compliance problem the FTC would want to hear about.

For platforms that don't respond within 48 hours, you can file a complaint with the FTC. The Commission has made it clear that it intends to treat non-compliance as an unfair or deceptive practice under Section 5 of the FTC Act - which opens the door to investigation, consent orders, and those $53,088-per-violation penalties.

One important detail: the 48-hour clock starts when the platform receives a "valid request." The law doesn't define exactly what makes a request valid beyond the basics (identify the content, confirm it's you, state it was non-consensual). Some platforms have tried to add extra verification steps that slow the process. Whether the FTC will tolerate that is one of the first enforcement questions likely to come up.

The deepfake problem behind the law

The TAKE IT DOWN Act didn't appear in a vacuum. The past two years have seen an explosion of AI tools that can generate realistic intimate imagery from a handful of ordinary photos. A 2024 study found that 96% of deepfake content online is non-consensual intimate imagery, overwhelmingly targeting women and girls.

The tools have gotten terrifyingly easy to use. What once required technical skill and expensive hardware now takes a free app and 30 seconds. Middle school and high school students have been caught creating deepfakes of classmates. The victims - disproportionately teenage girls - are left with images circulating that look real enough to cause lasting damage to their reputations and mental health.

That's why the enforcement deadline matters. The law was signed a year ago, and platforms have had twelve months to prepare. The FTC's warning letters are saying: preparation time is over. Platforms that haven't built working intake and removal systems by May 19 are choosing to be non-compliant.


How to protect your photos

Laws like the TAKE IT DOWN Act give victims a path to get images removed after the damage is done. But prevention is always better than takedown requests. The less source material you leave on public platforms, the harder it is for someone to create a convincing deepfake of you in the first place.

Start with an audit of your existing photos. Check what's publicly visible on Instagram, Facebook, and any dating apps. Tighten privacy settings where you can, and consider removing high-resolution face photos from public profiles. Every publicly accessible photo is potential source material for deepfake tools.

For sharing photos with family and friends, use private channels instead of public posts. Platforms like Viallo let you share photo albums through direct links without making anything publicly discoverable. Google Photos shared albums and Apple's Shared Photo Library also keep content off the open web, though it's worth reviewing each service's data-use policies before trusting it with sensitive images. The point is to get personal photos off platforms where anyone can access them.

If you're sharing photos of your kids, this is even more important. Children can't consent to having their images made public, and they're increasingly being targeted by deepfake tools. Keep kid photos in private albums shared only with people you trust. For a deeper look at this, see our guide on how to share photos privately.

Strip EXIF metadata from photos before sharing them anywhere. Location data, device info, and timestamps embedded in photo files can be used for targeting and doxxing. Most photo editing tools and sharing platforms handle this automatically, but it's worth checking.
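If you'd rather not trust a platform to strip metadata for you, doing it yourself takes a few lines. Here's a minimal sketch using the Pillow imaging library (re-saving a JPEG this way re-encodes it, so expect slight quality loss):

```python
from PIL import Image  # pip install pillow

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy only pixel data into a fresh image, leaving metadata behind -
    GPS coordinates, device model, and timestamps are all dropped."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

strip_metadata("vacation.jpg", "vacation-clean.jpg")
```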

Frequently Asked Questions

What is the best way to protect your photos from deepfakes?

Reduce your public photo footprint. Deepfake tools need source images to work, and they scrape them from publicly accessible profiles. Move personal photos to private sharing platforms - Viallo keeps albums behind direct links with optional password protection, and Google Photos shared albums stay off the open web. Tighten privacy settings on Instagram and Facebook so your face photos aren't publicly indexed. The fewer high-quality images of you available publicly, the harder it is for anyone to generate a convincing fake.

How do I request removal of a non-consensual image under the TAKE IT DOWN Act?

Submit a takedown request directly to the platform hosting the content. Identify the specific image or video, confirm it depicts you, and state it was shared without your consent. The platform has 48 hours from receipt to remove it. If they don't comply, file a complaint with the FTC. Most major platforms - Meta, Google, X, TikTok - now have dedicated reporting flows for non-consensual intimate imagery. Minors or their guardians can also file requests.

Is the TAKE IT DOWN Act different from state revenge porn laws?

Yes. The TAKE IT DOWN Act is federal law, which means it applies uniformly across all 50 states and covers interstate and online distribution. State revenge porn laws vary widely - some states have strong criminal penalties, others barely address the issue, and most didn't cover AI deepfakes until recently. The federal law also creates the 48-hour platform takedown obligation with FTC enforcement, which no state law provides. Both can apply simultaneously to the same incident.

Does the TAKE IT DOWN Act cover AI-generated deepfakes?

Yes, explicitly. The law applies to "digital forgeries" - AI-generated or digitally altered images depicting an identifiable real person in intimate situations without their consent. It doesn't matter that the image is synthetic. Keeping source photos behind private links - as Viallo does, with Apple's iCloud offering similar access restrictions - reduces what AI scraping tools can reach, but the law itself covers any platform hosting deepfake content, no matter how the image was generated.

Can minors file a TAKE IT DOWN Act removal request?

Yes. Minors can file takedown requests directly, and so can their parents or legal guardians on their behalf. The law provides enhanced protections for images depicting minors, working alongside existing federal child exploitation statutes. Given that deepfake tools have increasingly been used to target teenagers in schools, this provision was one of the most strongly supported parts of the legislation.
