Meta Is Using Your AI Chats to Sell You Ads - And You Can't Opt Out

8 min read · By Viallo Team

Quick take: Meta updated its privacy policy to use your conversations with Meta AI - the chatbot built into Facebook, Instagram, and WhatsApp - for targeted advertising. Unlike likes or comments, AI chats are intimate. People ask about health, relationships, parenting, and finances. Meta is now feeding all of that into its ad machine. There's no opt-out. The only exception is the EU, UK, and South Korea, where privacy laws block it.

[Image: a person sitting alone at a cafe table, seen from behind, phone face-down next to a coffee cup]

What Meta actually changed

In December 2025, Meta quietly updated its privacy policy to include a new category of data for ad targeting: your conversations with Meta AI. The policy now explicitly states that "prompts that can include questions, messages, media and other information you or others share with or send to AI at Meta" will be used for personalized advertising.

Meta AI is the company's chatbot, embedded directly into Facebook Messenger, Instagram DMs, and WhatsApp. It's also built into the Ray-Ban Meta smart glasses and various other Meta products branded under the "AI at Meta" umbrella.

Before this change, Meta used your public activity - likes, comments, shares, groups you joined - to target ads. That was already a lot. But AI chats are fundamentally different from public activity, and Meta knows it.

Why AI chats reveal more than likes and comments

When you like a post about hiking, Meta knows you're interested in hiking. That's surface-level. When you ask Meta AI "I'm going through a divorce and need to organize photos of my kids for a custody filing - what's the best approach?" - that's a completely different level of personal disclosure.

People talk to AI chatbots the way they talk to a therapist or a close friend. They share medical concerns, relationship problems, financial stress, parenting struggles. They ask questions they'd never type into a search bar because the conversational format feels private.

It's not private. Every question you ask Meta AI is now potential ad fuel. Ask about anxiety medication, and you might start seeing ads for mental health apps. Ask about family photo organization during a custody case, and the system knows you're going through a divorce.

[Image: close-up of hands gripping a smartphone, screen not visible, blurred urban street in the background]

The "sensitive topics" carve-out is narrower than you think

Meta says conversations about "sensitive topics" - religion, sexual orientation, political views, health, racial or ethnic origin - won't be used for ad targeting. That sounds reassuring until you look at what's not on the list.

Financial stress? Not sensitive. Divorce? Not sensitive. Job loss? Not sensitive. Moving cities? Not sensitive. Parenting struggles? Not sensitive. These are exactly the life events that advertisers pay premium rates to target, and Meta has explicitly excluded them from the "sensitive" category.

Even the excluded categories come with a catch. Meta says those conversations "may still be stored or used internally for product improvement." So while your religious questions won't directly trigger ads, Meta still keeps that data and can use it to build better AI models, which will eventually improve its ad targeting anyway.

There's no real opt-out

This is the part that privacy advocates find most alarming. Unlike some other data collection practices, Meta doesn't offer a way to use Meta AI without having your conversations fed into the ad system. Your choices are: don't use Meta AI, or accept that everything you say will be used to sell you things.

EPIC - the Electronic Privacy Information Center - wrote directly to the FTC, calling Meta's approach "aggressive expansion of AI for marketing and advertising" and asking the commission to intervene. So far, the FTC hasn't acted.

The only people who are actually protected are users in the EU, UK, and South Korea. GDPR and equivalent regulations in those regions require explicit consent for this kind of data processing, so Meta can't apply the policy there. If you're in the US or most other countries, you're out of luck.


What this means for your photos

You might not think a chatbot policy affects your photos. But Meta AI is integrated into the same apps where billions of photos are shared daily. If you use Instagram DMs to send family photos and then ask Meta AI a question in the same app, the system has both your photos and your private questions. It's all the same data pipeline.

Meta also recently pushed to access users' entire camera rolls through AI features. Combined with chat-based ad targeting, the company is building a profile that includes what you photograph, what you share, who you share it with, and what you privately worry about. That's a level of profiling that goes well beyond traditional social media advertising.

If you're using Meta's platforms for private photo sharing - Instagram DMs, WhatsApp groups, Messenger threads - the AI integration means your conversations about those photos can now contribute to your ad profile. "Can you send me the photos from Saturday?" is innocent enough. But over hundreds of conversations, the system learns your social circle, your activities, your life events.

What you can do about it

Don't use Meta AI for anything personal. If you see the AI prompt in Messenger, Instagram, or WhatsApp, ignore it. Every interaction teaches the system something about you that will be used for ad targeting.

Move private photo sharing off Meta's platforms. If you're sharing family photos through Instagram DMs or WhatsApp groups, consider switching to a platform that doesn't integrate AI into the sharing experience. Viallo lets you share albums via private links - no accounts, no AI, no ad targeting.

Separate your tools. Use Meta's apps for what they're good at - staying connected with a broad network. But keep intimate conversations and personal photos in tools that don't monetize them. The more data you keep out of Meta's ecosystem, the less complete your ad profile becomes.

[Image: a stack of printed photographs on a wooden desk next to a sealed envelope, soft morning light]

The bigger picture

Meta's move is part of a broader trend. AI companies are discovering that the most valuable data isn't what people post publicly - it's what they say privately. Perplexity AI was just hit with a lawsuit for allegedly sharing user conversations with Meta and Google. OkCupid got caught sending user photos to an AI company. Tinder is testing AI features that scan your camera roll.

The pattern is always the same: introduce a helpful AI feature, then monetize the data it generates. The helpfulness is the bait. The ad targeting is the product.

Until regulators outside the EU catch up, the only real protection is choosing tools carefully. Your photo storage and sharing platform should be separate from your social media. Your private conversations should be in apps that don't run ad businesses. And if a free AI feature seems too good to be true, the price is probably your data.

Frequently asked questions

Is Meta reading my WhatsApp messages for ads?

WhatsApp messages remain end-to-end encrypted. However, if you interact with Meta AI within WhatsApp, those conversations with the chatbot are not end-to-end encrypted and are now used for ad targeting. Regular chats with other people are not affected.

Can I use Meta AI without being tracked?

Not if you're outside the EU, UK, or South Korea. Meta does not offer an opt-out for AI chat data being used for advertising in other regions. The only way to avoid it is to not use Meta AI at all.

Does this affect photos I share on Instagram or Facebook?

Indirectly. If you discuss photos, ask Meta AI about photo-related questions, or interact with AI features while sharing photos, those interactions feed into your ad profile. The photos themselves were already being analyzed for ad targeting through existing image recognition features.

Why are EU users protected but US users aren't?

GDPR requires explicit, informed consent before personal data can be used for automated profiling like ad targeting. Meta can't apply this policy in the EU without getting that consent first, and it knows most users would say no. The US has no equivalent federal privacy law, so Meta can change its terms unilaterally.

What's the safest way to share photos privately?

Use a dedicated private photo sharing platform that doesn't run advertising and doesn't integrate AI into the sharing experience. Viallo stores photos on EU servers, doesn't process them with AI, and shares via private links that don't require recipients to create accounts or agree to tracking.
