AI Photo Editors Leaked 1.5 Million User Photos - What You Need to Know

8 min read · By Viallo Team

Quick take: A popular AI photo and video editor on Google Play leaked 1.5 million user-uploaded photos - part of roughly 8 million exposed media files - through a misconfigured cloud storage bucket that required zero authentication to access. Security researchers found this isn't an isolated incident: an audit of over 38,000 Android AI apps revealed systemic security failures across the entire category. If you've used any AI photo editor, your images may have been exposed.


What actually happened

In February 2026, security researchers at Cybernews discovered that Video AI Art Generator & Maker - an app downloaded over 500,000 times from the Google Play Store - had been storing user media in a Google Cloud Storage bucket with no authentication whatsoever. Anyone who found the bucket could access everything inside it.

And there was a lot inside it. The exposed bucket contained more than 1.5 million user-uploaded images, over 385,000 user-uploaded videos, and roughly 8.27 million media files total. That's 12 terabytes of people's personal photos and videos sitting on an unprotected server.

Around the same time, another AI service called IDMerit was found leaking know-your-customer files - full names, addresses, dates of birth, government IDs, and contact details - spanning users in at least 25 countries. The dataset totaled about a terabyte of highly sensitive personal information.

This isn't an isolated incident

Here's the part that should worry you more than any single breach. Three independent large-scale research projects all arrived at the same conclusion: AI apps have a systemic security crisis.

  • Cybernews audited 38,630 Android AI apps and found widespread vulnerabilities including exposed cloud storage, hardcoded API keys, and missing authentication.
  • CovertLabs' Firehound project scanned 198 iOS apps and found similar patterns of misconfigured backends leaking user data.
  • Escape analyzed 5,600 "vibe-coded" applications - apps built rapidly with AI assistance - and found the same basic security failures repeated across nearly all of them.

The pattern is always the same: misconfigured Firebase databases, missing row-level security on Supabase, hardcoded API keys in client-side code, and exposed cloud storage backends. These aren't sophisticated attacks. They're the digital equivalent of leaving your front door wide open.
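To see how low the bar is: a Google Cloud Storage bucket that grants list access to `allUsers` can be enumerated by anyone on the internet with a single unauthenticated HTTP request against the public JSON API. A minimal sketch (the bucket name is hypothetical, and the actual probe is left as a comment since it makes a network call):

```python
from urllib.parse import quote


def public_listing_url(bucket: str) -> str:
    """Unauthenticated object-listing URL for a GCS bucket (JSON API).

    If a plain GET here returns 200, anyone can enumerate the bucket's
    contents - the misconfiguration behind the leaks described above.
    """
    return f"https://storage.googleapis.com/storage/v1/b/{quote(bucket)}/o"


# To actually probe a bucket (network call), one could do:
#
#   import urllib.request, urllib.error
#   try:
#       with urllib.request.urlopen(public_listing_url("example-bucket")) as r:
#           print("PUBLIC: listing returned status", r.status)
#   except urllib.error.HTTPError as e:
#       print("protected or missing:", e.code)  # 401/403 means auth required
```

There is no exploit here - just a URL. That's what "zero authentication" means in practice.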


Why AI photo apps are worse than regular apps

Every app that stores your data can potentially leak it. But AI photo editors are uniquely dangerous for a few reasons.

First, they require your actual photos. You can't use an AI photo editor without uploading the images you want edited. That's different from, say, a weather app that might leak your location. An AI editor leaks the most personal visual record of your life.

Second, the AI gold rush has created an army of apps built by developers who prioritize speed over security. Many of these apps are essentially thin wrappers around AI APIs, built in days or weeks, with no security review. The Cybernews audit found that the rush to ship AI products has outpaced even the most basic security practices across the entire category.

Third, your photos contain more than pixels. EXIF metadata embedded in your images can include GPS coordinates, timestamps, device information, and sometimes even your name. When an AI app leaks your photos, it's leaking all of that context too.

What Purdue researchers discovered about AI photo editing

The security problems go deeper than misconfigured storage. Researchers at Purdue University published findings in March 2026 showing that AI photo editing services can extract identity information from your images even during the normal editing process.

When you upload a photo to an AI editor for, say, background removal or style transfer, the AI model processes your entire image - including your face and identifying features. The Purdue team demonstrated that these models can learn and retain attributes like eye color, facial hair, age group, and other biometric information, even when you're just asking for a simple edit.

Their solution is a patent-pending system that masks sensitive regions of your photo before uploading. The AI service never sees your face, but the final edited image still looks natural. In testing, the system reduced AI attribute-classification accuracy by more than 80%.
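The Purdue system is patent-pending and its internals aren't public, but the core idea - blank out sensitive regions on your own device before anything is uploaded - can be sketched in a few lines. Here the image is a plain grid of RGB tuples standing in for real pixel data, and the region coordinates would come from a face detector:

```python
def mask_region(pixels, top, left, height, width, fill=(0, 0, 0)):
    """Blank out a rectangular region (e.g. a detected face) in a pixel grid.

    Runs entirely client-side, so the AI service never receives the masked
    area. `pixels` is a list of rows of (R, G, B) tuples - a stand-in for
    real image data. Returns a new grid; the original is left untouched.
    """
    masked = [row[:] for row in pixels]
    for y in range(top, min(top + height, len(masked))):
        for x in range(left, min(left + width, len(masked[y]))):
            masked[y][x] = fill
    return masked
```

A real implementation would mask irregular face outlines and blend the edited result back with the originals, but the privacy property is the same: the remote model only ever sees pixels you chose to send.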

The fact that researchers had to build a dedicated system to prevent identity leakage during routine photo editing tells you everything about how little these services care about your privacy by default.

Try Viallo Free

Share your photo albums with a single link. No account needed for viewers.

Start Sharing Free

61 countries told AI companies to stop

In February 2026, 61 data protection authorities from around the world issued a joint statement specifically about AI-generated imagery and privacy. The statement, coordinated through the Global Privacy Assembly, addresses AI systems capable of generating realistic images and videos depicting identifiable individuals without their knowledge or consent.

The authorities called for enhanced protections for children, accessible removal processes for harmful content, and stronger safeguards against misuse. They urged AI companies to implement robust protections from the outset rather than treating privacy as an afterthought.

It's a strong signal. But joint statements don't patch misconfigured cloud storage buckets. The gap between regulatory intention and actual enforcement remains wide, especially for smaller AI apps that fly under the radar of data protection authorities.

How to protect your photos from AI app leaks

You don't need to avoid AI tools entirely, but you should be selective about which ones touch your photos.

  • Check the developer. Look up who made the app. If you can't find a real company website, a privacy policy with actual details, or any indication of who's behind it - don't upload your photos.
  • Avoid free AI editors from unknown developers. The apps most likely to have security issues are free, ad-supported AI wrappers built by small teams or solo developers with no security budget.
  • Strip metadata before uploading. If you must use an AI editor, strip EXIF data from your photos first. This won't prevent the image itself from leaking, but it removes location data, timestamps, and device information.
  • Use on-device tools when possible. Apple Photos and Google Photos both offer AI editing features that process images on your device. The photos never leave your phone.
  • Keep your originals somewhere safe. Don't let an AI editor be the only place your photos exist. Store originals in a service you trust before experimenting with AI tools.
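Stripping metadata doesn't require special software. In a JPEG, EXIF lives in APP1 segments that can simply be dropped while copying the file - a minimal standard-library sketch (it handles well-formed JPEGs; dedicated tools are more robust):

```python
import struct


def strip_exif(jpeg: bytes) -> bytes:
    """Remove APP1 segments (EXIF, XMP) from a JPEG byte stream.

    Walks the segment markers up to the start-of-scan (SOS) marker,
    copying everything except APP1, then copies the image data verbatim.
    Pixel data is unchanged; GPS coordinates, timestamps, and device
    info stored in EXIF are gone.
    """
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = jpeg[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows
            out += jpeg[i:]
            break
        (length,) = struct.unpack(">H", jpeg[i + 2 : i + 4])
        if marker != 0xE1:  # keep everything except APP1 (EXIF/XMP)
            out += jpeg[i : i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Most phones can also do this for you: both iOS and Android offer a "remove location" option when sharing a photo.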

Where you store your photos matters more than ever

The AI app leak story is really a storage story. These apps leaked photos because they stored them carelessly. The developers treated user photos as disposable data rather than the personal property they are.

When you choose where to keep your photos, you're choosing who gets to be careless with them. A service that stores photos on EU servers under GDPR protection, doesn't run AI processing on your images, and keeps your originals at full resolution is fundamentally different from a free AI app that stores everything in an unprotected cloud bucket.

Viallo stores your photos on European servers, doesn't process them with AI, and never shares them with third parties. Your photos stay yours.


Frequently Asked Questions

How do I know if my photos were in the leak?

If you ever used Video AI Art Generator & Maker or IDMerit, your data was likely exposed. For other AI apps, there's no easy way to check. The Cybernews audit found that security failures are widespread across the category, so any AI photo editor from an unknown developer could have similar issues.

Are big-name AI editors like Adobe or Canva safe?

Large companies generally have better security practices, dedicated security teams, and more to lose from breaches. They're not immune to incidents, but they're far less likely to have the kind of basic misconfiguration that exposed millions of photos in smaller apps.

Can I delete my photos from a leaked database?

Usually not directly. If you discover your data was exposed, you can request deletion under GDPR (if the service operates in the EU) or similar state privacy laws. But if the data was already accessed by third parties before the bucket was secured, those copies are beyond anyone's control.

Does Viallo use AI to process my photos?

No. Viallo uses GPS metadata and timestamps for automatic organization - sorting photos into trips and locations. It doesn't use image recognition, facial scanning, or any AI that analyzes the visual content of your photos.
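As an illustration (this is not Viallo's actual algorithm), metadata-only trip grouping can be as simple as clustering photo timestamps: a new trip starts whenever the gap since the previous photo exceeds some threshold. No pixel data is ever inspected:

```python
from datetime import datetime, timedelta


def group_into_trips(timestamps, gap=timedelta(days=2)):
    """Cluster photo timestamps into 'trips' by time gaps.

    A new trip begins whenever consecutive photos are more than `gap`
    apart. A toy sketch of metadata-only organization - the image
    content itself is never read or analyzed.
    """
    trips = []
    for ts in sorted(timestamps):
        if trips and ts - trips[-1][-1] <= gap:
            trips[-1].append(ts)
        else:
            trips.append([ts])
    return trips
```

The same gap-based idea extends to GPS coordinates: photos taken far from home within one time cluster form a trip, and everything needed for that lives in the metadata.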
