Video Interview Privacy: Mercor Breach Exposed 40,000 Faces

9 min read · By Viallo Team

Quick take: Mercor, a $10 billion AI hiring platform, was breached in March 2026 after hackers poisoned the open-source LiteLLM library on PyPI. The attackers stole 4 terabytes of data - including video interview recordings, facial biometrics, Social Security numbers, and passport scans for over 40,000 contractors. Meta indefinitely paused all Mercor contracts. At least seven class-action lawsuits have been filed. The breach didn't happen because Mercor's own servers failed. It happened because a Python library they depended on was compromised for 40 minutes.

Empty office chair facing a laptop with webcam light on in a dim room, shallow depth of field

What Happened: The Mercor Data Breach

On March 31, 2026, Mercor confirmed it had been hit by a cyberattack. Mercor is an AI-powered hiring and contractor management platform valued at $10 billion - they match technical talent with companies like Meta, OpenAI, and Google, running AI-scored video interviews as part of the vetting process. The company was reportedly on pace to hit over $1 billion in annualized revenue before the breach.

The attack didn't start at Mercor. It started at LiteLLM, an open-source Python library with roughly 97 million monthly downloads that lets developers connect applications to various AI services. A hacking group called TeamPCP compromised LiteLLM's CI/CD pipeline using credentials stolen from a maintainer via an earlier Trivy supply chain attack. On March 27, 2026, they published two poisoned versions - 1.82.7 and 1.82.8 - to PyPI.

The malicious packages contained base64-encoded malware injected directly into the library's proxy server code. It executed on import, silently harvesting API keys and credentials. The poisoned versions were live for approximately 40 minutes before LiteLLM detected and removed them. Forty minutes was enough. Mercor was one of thousands of companies that pulled the compromised package, but they had one of the richest data stores to steal.
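One standard defense against this class of attack is hash-pinning: recording the expected SHA-256 digest of every dependency artifact and refusing to install anything that doesn't match (this is what pip's `--require-hashes` mode enforces). A minimal sketch of the underlying check, with a hypothetical artifact path and pinned digest:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Refuse any artifact whose digest doesn't match the pinned hash."""
    return sha256_of(path) == expected_sha256.lower()
```

With pins in place, a poisoned re-upload fails verification even if it carries a familiar version number, because the attacker's package has a different digest than the one recorded when the dependency was first vetted. It doesn't help against the first install of a brand-new poisoned release, but it stops the compromise from silently propagating through rebuilds.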

Viallo is a private photo sharing platform that lets you create photo albums and share them through a link. Recipients view the full gallery - with lightbox, location grouping, and map view - without creating an account or downloading an app. Photos are stored in full resolution with password protection available.

What an AI Hiring Platform Was Collecting

The stolen data totaled 4 terabytes. That's not a typo. To put it in context, 4TB is roughly 2 million high-resolution photos or thousands of hours of compressed video. According to court filings in Gill v. Mercor.io Corporation and reporting from multiple outlets, the exfiltrated data included:

  • Video interview recordings - faces, voices, and screen shares from AI-scored contractor interviews
  • Facial biometric data - extracted from those video interviews for identity verification
  • Social Security numbers - collected as part of contractor onboarding
  • Passport and ID document scans - used for identity verification
  • Full names, email addresses, and work histories
  • Proprietary source code and API keys
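The scale comparison is easy to sanity-check with back-of-the-envelope arithmetic, assuming roughly 2 MB per high-resolution photo and roughly 1 GB per hour of compressed video (both assumed figures, not from the breach filings):

```python
# Decimal units, as storage vendors count them
MB = 1000 ** 2
GB = 1000 ** 3
TB = 1000 ** 4

stolen_bytes = 4 * TB        # size of the exfiltrated dataset
photo_size = 2 * MB          # assumed size of one high-resolution photo
hour_of_video = 1 * GB       # assumed size of one hour of compressed video

print(stolen_bytes // photo_size)     # photo equivalent
print(stolen_bytes // hour_of_video)  # hours-of-video equivalent
```

Under those assumptions, 4TB works out to about 2 million photos or about 4,000 hours of video.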

The hackers listed the entire dataset for auction on the dark web. Let that sink in. Someone recorded a video interview thinking it was a job screening, and now their face, voice, and Social Security number are being sold as a package deal.

Rows of server racks in a data center with blinking status LEDs, blue ambient light

The Hidden Pipeline: Where Your Interview Data Goes

Most people don't think about what happens to their video interview after they hang up. When you record an interview on a platform like Mercor, you're not just having a conversation. The platform is collecting a biometric profile - your face geometry, voice patterns, and behavioral signals that AI models analyze to score you.

This is fundamentally different from, say, uploading a resume to a job board. A resume is text. A video interview is a biometric dataset. Under laws like the Illinois Biometric Information Privacy Act (BIPA), facial geometry is classified alongside fingerprints and retinal scans. You can change your password after a breach. You can't change your face.

The Mercor breach also revealed just how interconnected these systems are. Mercor's contractors were doing AI training work for Meta, OpenAI, and other frontier AI labs. The breach didn't just expose contractor data - it potentially exposed proprietary AI training methodologies. Meta's decision to indefinitely pause all Mercor contracts wasn't just about contractor privacy. It was about protecting trade secrets in the AI training pipeline.

The same pattern applies to your personal photos. Every platform that processes your images through AI features - face recognition, auto-tagging, smart search - is building biometric profiles from your photos. Google Photos and iCloud both run facial recognition on your library. When those systems are connected to third-party vendors, your biometric data travels further than you'd expect.

Seven Lawsuits and Counting

The legal response was fast. The first class-action suit, Gill v. Mercor.io Corporation, was filed on April 1, 2026 in the U.S. District Court for the Northern District of California - just one day after Mercor's public confirmation. By April 7, five more class actions had been filed. As of late April 2026, at least seven suits are active, with some naming Delve Technologies and LiteLLM as co-defendants alongside Mercor.

The core allegations are straightforward: Mercor collected sensitive biometric and identity data, failed to adequately protect it, and didn't notify affected individuals quickly enough. Several suits specifically cite BIPA violations, arguing that Mercor collected facial biometrics from video interviews without obtaining the informed consent that Illinois law requires.

Meta's response was the corporate equivalent of pulling the emergency brake. They indefinitely paused all contracts with Mercor - a significant move given that Meta was one of Mercor's biggest clients. OpenAI confirmed it launched its own investigation but hadn't paused projects at the time of reporting. Google said it was assessing the scope of the breach.

How to Protect Yourself in Video Interviews

If you're job hunting or freelancing in 2026, video interviews are basically unavoidable. But you can reduce your exposure:

  • Ask what happens to the recording. Before recording any video interview, ask whether it will be stored, for how long, and who will have access. GDPR and BIPA both give you the right to ask.
  • Request deletion after the process ends. Once a hiring decision is made, there's no legitimate reason to keep your video interview indefinitely. Submit a formal deletion request.
  • Never submit identity documents through a hiring platform. If a platform asks for your passport or SSN before you've signed a contract, that's a red flag. Legitimate employers handle identity verification through separate, regulated processes.
  • Check for BIPA compliance. If you're in Illinois or the company operates there, they need your explicit consent before collecting facial biometrics. If they didn't ask, they're already in violation.

How to Protect Your Photos and Biometric Data

The Mercor breach is about video interviews, but the underlying lesson is about biometric data - and that absolutely includes your photos. Here's what holds true regardless of which specific platform gets breached next:

  • Avoid platforms that run facial recognition without your explicit consent. Google Photos and iCloud both do face grouping, but at least they keep the processing in-house - Apple largely on-device, Google within its own infrastructure. Third-party apps that send your photos to external AI services for face analysis are a much higher risk.
  • Minimize what you store on platforms with large vendor chains. Every third-party integration is a potential entry point. The Mercor breach didn't come from Mercor's own code - it came from an open-source dependency. The more dependencies a platform has, the more attack surface exists.
  • Use password-protected sharing. When you share photos outside of social media, use a platform that offers password protection on shared albums. If a sharing link leaks, the password is a second barrier.
  • Prefer platforms that don't build biometric profiles from your photos. Not every photo platform needs to know who's in your pictures. This is the approach we take at Viallo - we store your photos at full resolution without running facial recognition or building any biometric index.

The best way to protect biometric data that can't be changed after a breach - your face, your voice - is to minimize how many companies collect it in the first place.

Person holding a printed photograph protectively against their chest, soft natural light

Try Viallo Free

Share your photo albums with a single link. No account needed for viewers.

Start Sharing Free

Frequently Asked Questions

What is the best way to share photos without exposing biometric data?

Use a platform that stores and shares photos without running facial recognition or building biometric profiles. Viallo lets you create password-protected albums and share them through a link - recipients view photos in full resolution without the platform scanning faces or extracting biometric data. iCloud Shared Albums are another option, though Apple does run on-device face grouping on your personal library.

How do I delete my data from a platform like Mercor after a breach?

Submit a formal data deletion request directly to the company, citing your rights under GDPR (if you're in the EU) or CCPA (if you're in California). Viallo lets users delete their account and all associated photos permanently through their account settings, with no retention period. Be aware that after a breach, your data may already be in the hands of third parties, so deletion from the original platform doesn't undo the exposure.

Is it safe to record video interviews on hiring platforms?

It depends entirely on how the platform stores and processes the recording. The Mercor breach showed that even a well-funded company can lose 4 terabytes of interview data through a single compromised dependency. Before recording, ask the platform how long recordings are retained, whether facial biometrics are extracted, and who has access. If they can't give clear answers, that's a warning sign.

What is the difference between a supply chain attack and a direct hack?

A direct hack targets a company's own servers and code. A supply chain attack targets a third-party tool or vendor that the company depends on - in Mercor's case, the open-source LiteLLM library. Viallo minimizes this risk by keeping its dependency chain short and not sending photos to external AI processing services. Supply chain attacks are harder to defend against because they exploit trusted software that a company has already approved for use.
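On the defensive side, one concrete step a deployment can take is refusing to start when a known-poisoned dependency version is installed. A minimal sketch - the version numbers are the ones reported for the LiteLLM incident, and the `audit_package` helper is hypothetical, not a real library API:

```python
from importlib.metadata import PackageNotFoundError, version

# Versions reported as poisoned in the LiteLLM incident
KNOWN_BAD = {"litellm": {"1.82.7", "1.82.8"}}

def audit_package(name: str) -> str:
    """Return 'absent', 'blocked', or 'ok' for one dependency."""
    try:
        installed = version(name)
    except PackageNotFoundError:
        return "absent"           # package not installed at all
    if installed in KNOWN_BAD.get(name, set()):
        return "blocked"          # refuse to run with a poisoned release
    return "ok"

if __name__ == "__main__":
    for pkg in KNOWN_BAD:
        print(pkg, audit_package(pkg))
```

In practice, tools like `pip-audit` provide the same kind of check automatically against public advisory databases, which scales better than a hand-maintained denylist.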

Can someone use my stolen video interview to create a deepfake?

Yes - a recorded video interview provides exactly the raw material needed for deepfake generation: high-quality footage of your face from multiple angles, your voice, and your speaking patterns. Viallo doesn't collect video or audio data, only photos, and doesn't extract biometric profiles from them. The 40,000+ people affected by the Mercor breach now face a permanent risk of their likenesses being used for identity fraud or synthetic media.
