A federal lawsuit filed last week is making the rounds in privacy circles, and if you have ever used Perplexity AI to look up anything you would not want on a billboard, it is worth a few minutes of your time.
I will give you the short version first, then the details.
The claim is that Perplexity embedded advertising trackers — specifically the Meta Pixel (formerly the Facebook Pixel), Google Ads and DoubleClick tags, and Meta's Conversions API — directly into its platform. Those trackers allegedly transmitted the full text of user conversations to Meta and Google, along with email addresses, IP addresses, Facebook IDs, and device fingerprints. Not summaries. Not anonymized metadata. Full transcripts. And according to the complaint, this happened before a query even reached Perplexity's own servers.
The Incognito Problem
Perplexity offers an Incognito Mode. The feature promises anonymous threads that expire after 24 hours and are not saved to your history. The lawsuit calls it a sham. The complaint alleges the same trackers fired in incognito sessions, routing conversation content and identifying information to Meta and Google regardless of what mode the user had selected.
The named plaintiff — a Utah man identified as John Doe — used Perplexity to figure out when he and his spouse could start drawing Social Security, how to move savings into a Roth IRA, and what to make of cannabis company stocks he was considering. He believed those conversations were private. He later discovered that partial transcripts appear to have been shared with two of the largest advertising companies on the planet.
One more detail from the complaint that stopped me: Perplexity does not require users to agree to its privacy policy before using the service. The policy is not even linked from the main web app. You cannot consent to terms you cannot find.
When reporters reached out, Perplexity’s communications officer said the company had not been served a lawsuit matching the description and could not verify the claims. The statement did not deny the tracking practices. Meta pointed to its advertiser policies, which prohibit sending sensitive user information through its tracking tools — a response that speaks to what advertisers are supposed to do, not what Perplexity allegedly did. Google did not comment.
The case — Doe v. Perplexity AI Inc., 3:26-cv-02803, Northern District of California — names Perplexity, Meta, and Google as co-defendants. It covers free-account users who chatted with Perplexity between December 7, 2022 and February 4, 2026. Potential damages exceed $5,000 per individual violation. The 140-page complaint brings 14 counts including invasion of privacy, federal wiretapping violations, CCPA violations, and unfair competition.
This Fits a Longer Pattern
A colleague of mine — someone who works in health advocacy and handles genuinely sensitive information — told me a few months ago that she had switched to Perplexity for research because it felt less corporate than Google. Cleaner. Less like it was watching her. When I filled her in on this lawsuit over the phone, there was a long pause. Then: “So I might as well have Googled it.”
That reaction makes sense when you look at how Perplexity has handled trust issues before this.
In June 2024, investigations by Wired and developer Robb Knight found Perplexity ignoring robots.txt — the standard web protocol sites use to tell crawlers which content is off-limits. Despite public claims to the contrary, the company was using undisclosed IP addresses and spoofed user-agent strings to scrape sites that had explicitly blocked them. Perplexity’s CEO blamed third-party crawlers and declined to commit to stopping it.
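For context on how low the bar is here: honoring robots.txt takes a few lines, and Python even ships a parser in the standard library. This is a minimal sketch with a hypothetical robots.txt file of my own invention (the bot name and URL are illustrative, not taken from any real site's configuration), showing the check a compliant crawler is supposed to make before fetching a page:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: this site opts out of one crawler entirely
# while allowing everyone else.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler runs this check first and walks away on False.
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))   # True
```

The protocol is advisory — nothing technically stops a crawler from ignoring the answer — which is exactly why the spoofed user-agent findings matter: they suggest the check was being deliberately sidestepped, not merely skipped.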
By August 2025, Cloudflare had seen enough. The company published research showing Perplexity was running stealth crawlers that switched identities when its declared bots were blocked — impersonating Google Chrome on macOS to continue scraping content from domains that had specifically opted out. Cloudflare set a trap: brand-new domains, never indexed, robots.txt blocking all access. Perplexity still answered questions about their content in detail. Cloudflare delisted Perplexity as a verified bot and CEO Matthew Prince posted on X that some supposedly reputable AI companies act more like North Korean hackers. Reddit later used the same comparison in its own lawsuit against Perplexity for unauthorized platform scraping.
Robots.txt violations, stealth crawlers, an incognito mode that allegedly does not work. Each one individually could be explained away. Together they describe a company that has consistently treated consent as an obstacle rather than a baseline.
What to Do With This Information
The lawsuit is early. Allegations are not verdicts. But you do not need a court ruling to decide what you share with an AI tool today.
Stop putting sensitive information — health details, financial specifics, legal questions — into any web-based AI platform until you have read its privacy policy and confirmed what third-party trackers it runs. If you cannot find the privacy policy from the main interface, that is your answer.
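One rough way to do that confirmation yourself is to view a page's source and look for the hosts the major ad trackers load their scripts from. The sketch below is illustrative, not a complete audit tool — the tracker host list is non-exhaustive and the sample page is my own example, not Perplexity's actual markup:

```python
import re

# Well-known third-party tracker script hosts (non-exhaustive).
TRACKER_HOSTS = {
    "connect.facebook.net": "Meta Pixel",
    "www.googletagmanager.com": "Google Tag Manager",
    "googleads.g.doubleclick.net": "Google DoubleClick",
}

def find_trackers(html: str) -> list[str]:
    """Return the names of known trackers whose hosts appear in the page source."""
    return [name for host, name in TRACKER_HOSTS.items()
            if re.search(re.escape(host), html)]

# Hypothetical page source containing a Meta Pixel loader tag.
page = '<script src="https://connect.facebook.net/en_US/fbevents.js"></script>'
print(find_trackers(page))  # ['Meta Pixel']
```

A browser's developer tools (the Network tab) will show the same thing with less effort — but note that this only catches client-side trackers, which brings us to the next point.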
Incognito or private modes on AI platforms are not equivalent to browser-level privacy controls. Server-side tracking can bypass them entirely, and this lawsuit is a live example of why that distinction matters.
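To see why, consider that a server-side relay never passes through your browser at all. The sketch below is purely illustrative — it is not Perplexity's actual code, and the field names are loosely modeled on the shape of server-side event APIs generally — but it shows the structural problem: the payload is built on the server after your message has already left your machine, and nothing obliges the relay to consult the client's privacy mode:

```python
import json

def build_server_side_event(message: str, user_ip: str, incognito: bool) -> dict:
    """Build the payload a hypothetical server-side relay would forward to an
    analytics partner. Browser private modes govern cookies and history on the
    client; they have no reach into what the server does next."""
    return {
        "event_name": "chat_message",
        "custom_data": {"text": message},
        "user_data": {"client_ip_address": user_ip},
        # Note: `incognito` is deliberately never read. A relay that ignores
        # the client's mode forwards the data either way.
    }

payload = build_server_side_event(
    "When can I draw Social Security?", "203.0.113.7", incognito=True
)
print(json.dumps(payload, indent=2))
```

The point of the sketch is the unused flag: unless the server's code explicitly checks the user's chosen mode before transmitting, "incognito" is a label on the client, not a constraint on the pipeline.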
If you used Perplexity on a free account between December 2022 and February 2026, you may be part of the proposed class. California residents can file CCPA requests to learn what data was shared and to have it deleted.
The larger question this case forces is one the whole industry should be answering: when someone asks an AI tool about a cancer diagnosis or a retirement account, what exactly happens to that conversation? Most people assume the answer is nothing. Most privacy policies say something different. And at least one federal lawsuit now argues the gap between those two things is not an accident.
If you work with a mission-driven organization and want to think through which AI tools are appropriate for your team’s work, let’s talk.