
Meta Will Use Your AI Chats to Gather Data on You

Meta's "personalization" means your private conversations are now ad data.

Key points

  • Meta's AI chats now join your likes, follows, and clicks as inputs to its content and ad algorithms.
  • Our tendency to bond with AI leads to self-disclosure that can be recorded and repurposed.
  • AI-driven content reinforces existing thoughts, narrows exposure to new ideas, and can amplify perceived threats.
  • Teens naturally use chat tools to explore, inadvertently seeding algorithms that reinforce tentative ideas.

Starting December 16, 2025, Meta will begin adding AI chats to the behavioral data it gathers to further personalize user experiences and ad targeting. This move blurs the line between private communication and behavioral data, raising questions about how AI interactions can be used to shape what we see and how we see ourselves online.

Meta currently provides generative AI to users across its ecosystem of apps, including Facebook, Instagram, WhatsApp, and Messenger, and as a standalone app. Even though we don't pay in cash, Meta's apps aren't free. We pay with our data. AI chats take this to a whole new level. Conversations that feel private will now be part of the price you "pay" to use Meta's tools.

Two Ways to Build Your Digital Superpowers

Every click, swipe, and chat tells a story. AI conversations are no different, but they're more revealing. Technology of all kinds gathers data about what you do. Online, it tracks your activity using cookies, device fingerprinting, IP addresses, and tracking codes. Offline, it follows you through GPS, Bluetooth beacons, and Wi-Fi scanning.

But you are not powerless. You can do two things to build the superpower that lets you take control.

  1. Learn how any technology you use works, such as how it handles your data and privacy, and what settings you can control.
  2. Understand how built-in psychological vulnerabilities, such as the pull of notifications, the allure of social connection, and FOMO, keep us engaged.

Now Tracking Your Thoughts

When Meta uses your AI chat interactions as additional input, your thoughts about hiking, cooking, parenting challenges, mental health, or almost anything else can shape the Reels, posts, suggested groups, and ads you see. Meta says it will not use chats about "sensitive topics" such as religion, sexual orientation, politics, and health for ad targeting. But to make that distinction, it still must collect and process your data to decide whether it's fair game.
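To see why, consider a minimal, purely hypothetical sketch of that data flow. Meta has not published how its filter works; the keyword check and function names below are invented for illustration. The point is simply that any "sensitive topic" exclusion still requires the raw chat to be collected and classified first.

```python
# Hypothetical sketch only: Meta has not published its pipeline.
# The point: excluding "sensitive topics" from ad targeting still
# requires collecting and classifying every chat message first.

SENSITIVE_KEYWORDS = {"religion", "sexual orientation", "politics", "health"}

def classify_sensitivity(chat_text: str) -> bool:
    """Toy classifier: flags a chat that mentions a sensitive keyword.
    A real system would use a trained model, but the data flow is the same."""
    text = chat_text.lower()
    return any(keyword in text for keyword in SENSITIVE_KEYWORDS)

def ingest_chat(chat_text: str, ad_profile: list[str]) -> None:
    # The chat is always received and processed...
    if classify_sensitivity(chat_text):
        return  # ...and only *then* excluded from ad targeting.
    ad_profile.append(chat_text)  # Non-sensitive chats feed the profile.

profile: list[str] = []
ingest_chat("Any good trails for a weekend hike?", profile)
ingest_chat("I've been anxious about my health lately.", profile)
print(profile)  # ['Any good trails for a weekend hike?']
```

Even in this toy version, the sensitive message is read and analyzed before it is discarded. "Not used for ads" is not the same as "not collected."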

Why AI Feels Personal

Humans are wired to connect through conversation. Chatting, whether with a person or a bot, activates social cognition networks that foster trust and self-disclosure. We anthropomorphize technology, projecting intention and empathy onto digital assistants. Everyone I know has given their go-to chatbot a name.

That makes AI feel safe. When we treat AI as a confidant, we often share details, doubts, and ideas we'd never post publicly. This psychological illusion of intimacy makes AI chats a goldmine for platforms that trade on behavioral insights.

A New Kind of Targeting: Our Thoughts

Previously, algorithms could only infer our interests from behavior. With AI chat, they can go straight to the source: our thoughts.

Algorithms will be able to match our thoughts with tracked behaviors. If the algorithm knows we asked about teen stress, it might serve us more posts about adolescent behavior. If it links that AI query with visiting a site for parental-control apps or teen mental health resources, it now has a whole new level of specificity from which to infer and shape our reality.

Targeting can sometimes feel helpful, but it creates a tighter "feedback loop" that can narrow our worldview in ways we may not even realize. The more we engage with a topic, the more the system reinforces it. Over time, this can magnify anxieties or amplify perceived threats, fueled by our natural attentional bias toward emotionally charged content.
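To make the mechanism concrete, here is a minimal sketch of how such a feedback loop compounds. The topic weights and the multiplicative update rule are assumptions for illustration, not Meta's actual ranking system, which is not public: each engagement nudges a topic's weight upward, which makes that topic more likely to be served, which invites more engagement.

```python
import random

# Assumed mechanics for illustration, not Meta's actual ranking system.
weights = {"teen stress": 1.0, "cooking": 1.0, "hiking": 1.0}

def serve_post(weights: dict[str, float]) -> str:
    """Pick a topic with probability proportional to its weight."""
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics])[0]

def simulate(rounds: int, engaged_topic: str) -> None:
    for _ in range(rounds):
        topic = serve_post(weights)
        if topic == engaged_topic:  # the user lingers on this topic...
            weights[topic] *= 1.1   # ...so its weight compounds
    total = sum(weights.values())
    print({t: round(w / total, 2) for t, w in weights.items()})

simulate(200, "teen stress")
# After 200 rounds, "teen stress" typically dominates the feed,
# even though all three topics started with equal weight.
```

The loop is self-reinforcing: a small initial interest, amplified a few percent at a time, ends up crowding out everything else in the feed.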

The Illusion of Privacy

AI chats can increase productivity, synthesize research, and be highly entertaining. And because they feel private, they give us a place to try things out. We can role-play social situations, explore identities, or ponder our wildest dreams. AI chatbots are gushingly supportive, never critical, so a chat feels like a safe space to try on ideas, experiment with self-presentation, and ask questions that may reflect our vulnerabilities or curiosities. This kind of exploration is particularly common (and healthy) during adolescence.

However, those signals will now feed the recommendation system. As a result, the algorithm can strengthen aspects of the self that were tentative or experimental but never internalized. Narrow feeds can bias our views of what's good and bad in ourselves and others, and those biases are reflected in the stories we see, the communities we join, the ads we're served, and our sense of self. This algorithmic reinforcement has been linked to polarization, anxiety, and distorted perceptions of social norms (e.g., Pariser, 2011; Ito et al., 2023).

4 Things You Can Do

  1. Be mindful of what you ask your AI. Anything you type into Meta's AI may later influence what you see in your feed, so use it with awareness.
  2. Check your ad preferences. Navigate to Meta's Settings to check what personal data the system is using and limit personalization.
  3. Educate teens and kids. It's never too early or too late for digital literacy. Explain that AI chats in the Meta ecosystem are not private spaces. What feels like a private conversation with a friend can feed algorithms.
  4. Find alternate tools. If you use AI chats for personal reflections or sensitive topics, look for AI tools outside the Meta ecosystem. Meta does not have access to conversations with other AI chatbots, such as ChatGPT or Claude. But each has its own user agreements, so read them! Check the privacy and security policies to confirm that your content will not be used for targeting or AI training.

Final thoughts

Meta's policy shift marks another step toward algorithmic intimacy, the blending of private cognition and public data. What we once considered internal, our curiosities, fears, and reflections, is now measurable and marketable.

The good news: Awareness brings choice. You might not be able to stop all personalization, but you can choose where and how you engage, what data you feed into these systems, and how you guide others, including your kids, to use AI thoughtfully.

References

Ito, M., Cross, R., Dinakar, K., & Odgers, C. L. (Eds.). (2023). Algorithmic rights and protections for children. MIT Press. https://doi.org/10.7551/mitpress/13654.001.0001

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin Press.
