78 AI Bills Across 27 States Are Rewriting the Rules for Your Photos
Quick take: There are currently 78 AI chatbot safety bills live across 27 US states. Five have already been signed into law. Tennessee banned AI therapy bots entirely. California's SB 243 requires chatbots to disclose they're AI and implement suicide prevention protocols. Georgia's SB 540 adds child safety requirements. These laws don't just affect chatbots - they're setting the template for how AI can interact with any personal data, including your photos. Apps like Tinder, Meta AI, and AI photo editors will all be affected.

What's actually happening in state legislatures
While Congress continues to debate whether AI needs federal regulation, state legislatures have already moved. The Future of Privacy Forum is tracking 98 chatbot-specific bills across 34 states and three federal proposals. Of those, 78 bills are actively moving through 27 state legislatures right now.
These bills aren't getting bogged down in partisan fights. They're passing with overwhelming margins - 68 to 1, 39 to 1, 26 to 1 - across both red and blue states. Tennessee's AI therapy bot ban passed the Senate 32-0 and the House 94-0. That's the kind of bipartisan consensus that almost never happens in American politics.
The trigger was a series of lawsuits against Character.AI after teenagers died by suicide following interactions with AI chatbots. Google and Character.AI agreed to settle those cases in January 2026. But the legislative response was already in motion.
The key laws that have already passed
California SB 243 - the first chatbot safety law
California's Senate Bill 243, signed by Governor Newsom in October 2025, became the first AI chatbot safety law in the country. It took effect January 1, 2026, and sets requirements that other states are now copying.
- Chatbots must disclose they're AI, not human
- Minors must receive a reminder every 3 hours of continuous use that they're talking to AI and should take a break
- Operators must implement protocols for addressing suicidal ideation, including referrals to crisis services
- Annual reporting on safety measures to the Office of Suicide Prevention
Tennessee's therapy bot ban
Tennessee went further than any other state by banning AI systems that represent themselves as qualified mental health professionals. Governor Bill Lee signed SB 1580 after it passed unanimously in both chambers. The law doesn't just regulate AI therapy bots - it prohibits them entirely.
Georgia SB 540 - child safety requirements
Georgia's bill requires chatbot operators to notify users when they're interacting with AI, limit certain actions by minors, provide privacy tools, and implement protocols for responding to suicidal ideation or self-harm. It passed both chambers and is on Governor Kemp's desk.

Why this matters for your photos
These bills target AI chatbots specifically, but the legal frameworks they're creating will affect every AI feature that touches personal data - including photos. Here's why.
AI photo features are chatbot-adjacent. When Tinder's AI scans your camera roll to suggest profile photos, or Meta AI analyzes your photo library to answer questions about your memories, or an AI photo editor processes your selfie - these are all AI systems interacting with personal data. The same consent and disclosure requirements being applied to chatbots will inevitably extend to these features.
Age verification requirements affect photo apps. Multiple state bills mandate age verification and parental consent for AI features used by minors. Photo apps that use AI - filters, enhancement, organization - will need to verify user age before deploying these features. That's a significant change for apps that currently don't check.
Disclosure requirements are expanding. If a chatbot must disclose it's AI, the same logic points toward AI photo editors disclosing when they modify your images, and AI organization features disclosing that they're analyzing your photos. The principle is the same: users have a right to know when AI is processing their data.
Congress is watching, but not acting
The federal government has been remarkably slow on AI regulation. The US KIDS Act passed committee but hasn't reached a floor vote. The FTC launched an inquiry into AI companion chatbots and issued orders to seven companies, but that's investigation, not legislation.
Meanwhile, the AI industry is pouring money into the 2026 midterms. The Innovation Council Action announced at least $100 million in spending. OpenAI's co-founder contributed $25 million to a political group. The message is clear: the industry wants to shape regulation at the federal level before state laws create a patchwork they can't navigate.
But the state bills keep passing. And unlike federal legislation, state laws take effect quickly. Companies building AI features right now have to comply with California's requirements today, Tennessee's requirements as soon as they take effect, and potentially dozens more by the end of 2026.
What comes next for AI and your data
The current wave of state AI bills is focused on chatbot safety, but the regulatory framework is expanding. Connecticut's Senate Bill 5 includes provisions for AI in employment decisions. Colorado's AI Act targets algorithmic discrimination. These are the building blocks of a comprehensive AI regulatory framework - it's just being built state by state instead of federally.
For photo apps and services, the implications are clear. Any AI feature that touches user photos will eventually need to comply with disclosure requirements, age verification, consent mechanisms, and data handling standards. Companies that are building these features now without those guardrails are accumulating regulatory risk.
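To make the compliance checklist above concrete, here is a minimal sketch of how an app might gate an AI photo feature behind the safeguards these bills describe: disclosure first, then age verification, then explicit consent. Every name in this snippet is hypothetical; it illustrates the ordering of checks, not the requirements of any specific statute or SDK.

```python
# Hypothetical gate for an AI photo feature, layering the safeguards the
# state bills describe: disclosure, age verification, explicit opt-in.
# All names are illustrative, not drawn from any real statute or API.
from dataclasses import dataclass


@dataclass
class User:
    age: int
    disclosure_shown: bool  # app has told the user AI will process photos
    ai_consent: bool        # user opted in after seeing the disclosure


def can_run_ai_feature(user: User, minimum_age: int = 18) -> tuple[bool, str]:
    """Return (allowed, reason) for running an AI feature on this user's photos."""
    if not user.disclosure_shown:
        return False, "show AI disclosure first"
    if user.age < minimum_age:
        return False, "age verification or parental consent required"
    if not user.ai_consent:
        return False, "explicit opt-in consent required"
    return True, "ok"


# A minor with consent is still blocked until the age requirement is met.
allowed, reason = can_run_ai_feature(
    User(age=16, disclosure_shown=True, ai_consent=True)
)
```

The ordering matters: disclosure precedes consent because consent given before disclosure wouldn't be informed, which is exactly the gap the disclosure provisions in these bills target.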
The EU's AI Act becomes fully applicable in August 2026 and sets even stricter requirements for AI systems processing personal data. US companies serving EU users will need to comply with both frameworks.
How to protect yourself now
Check which AI features are active on your photos. Google Photos, Apple Photos, and Meta all use AI to analyze your images by default. Review your settings and turn off features you don't need - especially facial recognition grouping and AI-powered search.
Be selective about what you upload. Not every photo needs to go through an AI pipeline. Keep sensitive family photos, children's photos, and private moments on platforms that don't process images with AI.
Read the disclosure. When an app tells you it's using AI to process your photos, take that seriously. California's law requires these disclosures for a reason - they're your signal that your data is being analyzed.

The bottom line
The 78 AI bills moving through state legislatures represent the most significant wave of technology regulation since GDPR. They're driven by real tragedies, supported by bipartisan consensus, and passing at a pace that the tech industry hasn't been able to slow down.
For your photos, the takeaway is simple: the era of AI features operating without oversight is ending. Companies will be required to tell you when AI is processing your data, get your consent, and implement safeguards. Until those requirements are universal, choose platforms that already operate this way.
At Viallo, we don't process your photos with AI. No facial recognition, no automated analysis, no machine learning on your images. Your photos are stored on EU servers with GDPR protections. We think that's the standard the new laws are moving toward - we just got there first.
Frequently asked questions
Do these AI laws apply to photo apps?
Most current bills target AI chatbots specifically, but the legal frameworks - consent, disclosure, age verification, data handling - are designed to extend to all AI systems. Photo apps that use AI features like facial recognition or automated organization will likely face similar requirements.
Which states have already passed AI safety laws?
California's SB 243 is already in effect. Tennessee's therapy bot ban has been signed. Georgia's SB 540 is on the governor's desk. Nebraska and Colorado have bills in advanced stages. At least 78 bills are actively moving across 27 states.
Will there be a federal AI law?
It's uncertain. The US KIDS Act has committee support but no floor vote. The AI industry is spending over $100 million on the 2026 midterms to shape federal regulation. For now, state laws are the primary regulatory mechanism.
How do these laws compare to the EU AI Act?
The EU AI Act is more comprehensive and becomes fully applicable in August 2026. It covers all AI systems, not just chatbots, and includes strict requirements for high-risk applications. US state laws are narrower but moving in the same direction.
Should I stop using AI photo features?
Not necessarily. AI photo features like search and organization are genuinely useful. But be aware of what you're opting into. Review your settings, understand which features analyze your photos, and keep sensitive images on platforms that don't process them with AI.