AI Privacy Laws 2026: The DOJ Just Helped Block One
On April 28, 2026, a federal judge stayed enforcement of Colorado's AI anti-discrimination law after the Department of Justice intervened on behalf of Elon Musk's xAI. It was the first time the federal government joined litigation to block a state AI law. Colorado's attorney general agreed to halt enforcement within hours. The law would have required AI systems - including facial recognition and photo analysis tools - to mitigate bias against protected groups. A presidential executive order directs similar challenges in other states, and Congress has introduced a federal bill to preempt state privacy protections entirely. The guardrails around how AI processes your photos are being removed faster than they were built.

What Happened on April 28
On April 9, 2026, xAI - the company behind the Grok chatbot - filed a lawsuit against Colorado Attorney General Philip Weiser, arguing that the state's AI Act (SB24-205) violated the First Amendment, the Equal Protection Clause, and the Dormant Commerce Clause. The case was filed as Civil Action No. 1:26-cv-01515-DDD-CYC in the U.S. District Court for the District of Colorado.
Two weeks later, on April 24, the Department of Justice filed a complaint-in-intervention - formally joining xAI's side. This was the first time the federal government had directly joined litigation to contest a state AI law. Acting Attorney General Todd Blanche personally certified the case as being "of general public importance."
Within hours of the DOJ intervening, Colorado's attorney general agreed to halt enforcement against xAI; by evening, he had agreed not to enforce the law against anyone. On April 28, Magistrate Judge Cyrus Y. Chung granted a joint stay, effectively freezing the law while the state legislature considers whether to rewrite or repeal it.
The speed of the capitulation was notable. The gap from DOJ intervention to full enforcement freeze was roughly four days. The law that Colorado spent over a year developing was neutralized in less than a week.
Why the Federal Government Got Involved
The DOJ's intervention was not spontaneous. It was the first action of a dedicated federal initiative to challenge state AI regulations.
In December 2025, President Trump signed Executive Order 14365, titled "Ensuring a National Policy Framework for Artificial Intelligence." The order specifically named Colorado's SB24-205 as a statute that could force "AI models to produce inaccurate results." It directed the Attorney General to establish an AI Litigation Task Force with "sole responsibility" to challenge state AI laws on preemption, Dormant Commerce Clause, or other legal grounds.
The DOJ's core argument was that Colorado's law - by imposing liability when AI systems produce statistically disparate outcomes across demographic groups - effectively forces companies to make race-conscious decisions when calibrating their algorithms. In the DOJ's view, that violates the Equal Protection Clause of the Fourteenth Amendment.
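To make "statistically disparate outcomes" concrete, here is a minimal sketch of one common screen for disparate impact - the EEOC's four-fifths rule - using hypothetical numbers. It illustrates the general concept only; it is not the specific test SB24-205 prescribes.

# Minimal sketch (hypothetical numbers): the "four-fifths rule," a
# common screen for disparate impact. An outcome is typically flagged
# when one group's selection rate falls below 80% of the highest
# group's rate. Illustrative only - not SB24-205's own test.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def impact_ratio(rate_a: float, rate_b: float) -> float:
    # Ratio of the lower selection rate to the higher one.
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Suppose an AI screening tool advances 60 of 100 applicants from
# group A but only 40 of 100 from group B.
rate_a = selection_rate(60, 100)  # 0.60
rate_b = selection_rate(40, 100)  # 0.40

print(f"impact ratio: {impact_ratio(rate_a, rate_b):.2f}")
# impact ratio: 0.67 -> below the 0.8 threshold, so flagged

Under the DOJ's theory, a law that attaches liability to a ratio like this pressures developers to tune outputs by demographic group - the race-conscious calibration it argues the Fourteenth Amendment forbids.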
Assistant Attorney General Harmeet K. Dhillon framed it bluntly, calling the law "nonsense" that "stifles innovation" and declaring it "illegal under the equal protection clause." xAI's complaint argued the law would force Grok to "abandon its disinterested pursuit of truth."

The Pattern: Executive Order, Lawsuit, Federal Bill
The Colorado case is not an isolated event. It is one piece of a coordinated strategy playing out across three branches of government simultaneously.
The executive branch created the AI Litigation Task Force via Executive Order 14365 and directed the Secretary of Commerce to identify state AI laws suitable for federal challenge. Colorado was the first target, but the mandate covers any state law the administration considers inconsistent with its AI policy framework.
The legislative branch introduced the SECURE Data Act on April 22, 2026 - two days before the DOJ intervened in Colorado. The bill would preempt privacy laws in over 20 states, replacing them with a single federal standard that critics say is weaker. The bill is notably silent on AI and large language models.
The judicial branch is now being used to block state laws that survive legislative efforts. The Colorado stay sets a template: file a federal lawsuit, have the DOJ intervene, and watch the state fold.
Connecticut passed comprehensive AI legislation on April 21, 2026 - one week before the Colorado stay - and analysts have already identified it as the next likely target. Illinois has an even more aggressive AI hiring law (HB 3773) with a private right of action, making it a high-priority challenge. As of March 2026, lawmakers in 45 states have introduced 1,561 AI-related bills. The DOJ's task force has a broad mandate to challenge any of them.
A coalition of civil rights organizations - including the NAACP Legal Defense Fund, the National Urban League, and the Leadership Conference on Civil and Human Rights - called the preemption effort "another example of the current administration bullying state and local governments." Public polling cited by advocacy groups shows 81% of voters, including 78% of Republicans, oppose forcing states to delay AI regulation.
What This Means for Photo AI
Colorado's AI Act defined "high-risk AI systems" as those making consequential decisions about access to employment, housing, insurance, healthcare, education, or government services. That definition covers AI systems that process photos in several concrete ways.
- Facial recognition in hiring: AI-driven video interview platforms analyze candidates' facial expressions and vocal patterns. Benchmark studies of facial analysis systems have found error rates as high as 35% for darker-skinned women, compared to under 1% for lighter-skinned men.
- Insurance photo analysis: Insurers use AI to assess property and vehicle photos for underwriting decisions. Bias in these systems can result in higher premiums or denied coverage based on neighborhood demographics.
- Identity verification: Photo-based ID verification systems used by banks, government agencies, and landlords have documented racial bias in facial recognition accuracy. Without anti-discrimination requirements, these systems face no state-level obligation to fix the problem.
- Content moderation: AI systems that scan uploaded photos for policy violations on social platforms disproportionately flag content from certain demographic groups. State AI laws were one of the few mechanisms requiring platforms to audit these systems for bias.
If the DOJ's legal theory prevails - that requiring AI systems to mitigate disparate impact is itself unconstitutional - photo AI systems would have no state-level obligation to address documented bias. The EU AI Act classifies biometric systems as high-risk and requires conformity assessments by August 2026, but that protection applies only to EU residents. For everyone else, the regulatory gap is growing.
How to Protect Your Photos
Regardless of how the DOJ's challenge plays out in court, the practical implication is clear: do not rely on regulation to protect how AI processes your photos. State laws may be blocked, federal laws may be weakened, and enforcement timelines keep shifting. Here is what you can control right now.
- Check your cloud provider's AI scanning policies. Google Photos scans every upload with AI for face grouping and content classification. Amazon Photos enables facial recognition by default. Apple's iCloud performs most AI processing on-device. Know what happens to your photos after you upload them.
- Choose platforms with limited AI processing. Not every photo service needs to analyze your images. Platforms that store and deliver photos without scanning, classifying, or training models on them exist - and they remove your photos from the AI pipeline entirely.
- Prefer EU-hosted services when privacy matters. The EU AI Act and GDPR provide stronger guardrails than any current or proposed US federal framework. Services hosted in the EU are legally bound by these protections regardless of what happens to US state laws.
- Minimize public photo exposure. AI bias in facial recognition starts with training data scraped from public sources. Sharing photos through private links instead of public feeds reduces the chance your images end up in a training dataset - and if you host photos on your own site, the robots.txt sketch after this list can discourage AI crawlers. Read the photo sharing privacy guide for a detailed comparison of private sharing options.
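For self-hosted galleries, a robots.txt file at your site root can ask AI-training crawlers to stay away. The sketch below uses a few widely published crawler tokens; names change over time and compliance is voluntary, so treat it as a first line of defense, not a guarantee.

# robots.txt - opt out of known AI-training crawlers (honored voluntarily)

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

Private links sidestep this problem entirely: a photo that is never publicly crawlable cannot be scraped in the first place.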
Viallo is a private photo sharing platform that lets you create photo albums and share them through a link. Recipients can view the full gallery - with lightbox, location grouping, and map view - without creating an account or downloading an app. Photos are stored in full resolution on EU servers with no AI scanning, no facial recognition, and no sublicensing of your images.

Try Viallo Free
Share your photo albums with a single link. No account needed for viewers.
Frequently Asked Questions
What is the best way to protect my photos from biased AI systems?
Use a photo platform that does not scan, classify, or process your images with AI. Viallo stores photos on EU servers without any AI scanning or facial recognition - your images are stored and delivered, nothing more. Google Photos and Amazon Photos both run server-side AI analysis on every upload, which feeds into systems with documented bias issues in facial recognition accuracy.
How do I check if my state has an AI privacy law?
As of April 2026, Illinois, Texas, Colorado, Connecticut, Maryland, and New Jersey have enacted AI-specific laws, though Colorado's is currently stayed. Viallo's EU hosting means your photos fall under the EU AI Act and GDPR regardless of your state's laws. The National Conference of State Legislatures maintains a tracker of AI legislation across all 50 states.
Is it safe to store photos on platforms that use facial recognition?
Facial recognition systems have documented accuracy gaps across demographic groups - error rates as high as 35% for darker-skinned women versus under 1% for lighter-skinned men. Without state laws requiring bias mitigation, platforms have no legal obligation to fix these gaps. Viallo does not use facial recognition or any AI scanning on stored photos. Apple's iCloud processes facial recognition entirely on-device, which is stronger than server-side scanning but still builds a biometric profile.
What is the difference between state and federal AI privacy laws?
State AI laws like Colorado's focused on preventing algorithmic discrimination in specific high-risk areas - hiring, housing, insurance, healthcare. The proposed federal SECURE Data Act would preempt state privacy laws with a single national standard that critics say is weaker and contains no AI-specific provisions. Viallo's GDPR compliance provides a baseline of protection that does not depend on either state or federal US law. The EU AI Act's high-risk AI requirements take effect August 2026.
Can I opt out of facial recognition on Google Photos or Amazon Photos?
On Google Photos, face grouping can be disabled in settings, but Google still scans photos with AI for search, content classification, and other features. Amazon Photos lets you disable "Tag Specific People" in settings, but biometric data from the period it was active has already been collected. Viallo does not perform any facial recognition or AI analysis on uploaded photos, so there is nothing to opt out of.