Last month I was helping a client evaluate a handful of AI-assisted development platforms. One of them was Lovable -- the vibe-coding tool that lets you build apps by typing plain-language prompts. My client liked the speed. I kept circling back to the same question: where does all that context actually go, and what are the AI privacy risks?
Turns out, that was the right question to be asking.
What Happened With Lovable
On April 20, 2026, a security researcher posted a finding that got attention fast. After creating a free account on Lovable, they were able to access another user's source code, database credentials, AI chat histories, and customer data -- all readable without any special access or privilege. The exposure affected projects created before November 2025.
What made it worse: the researcher had reported the issue through HackerOne back in early March. The ticket was closed because Lovable's security partners considered viewing public project chats to be intended behavior. It took a public post to get the company to move.
Lovable's initial response on X did not help. They said no breach had occurred and that code visibility was intentional for public projects. That framing missed the point. The problem was not just code -- it was that the AI chat histories, the actual prompts users had typed, were exposed right alongside it. Those prompts can contain business logic, credentials, client names, internal workflows. Things people type into a chat box without a second thought about who else might eventually see them.
In a follow-up statement, Lovable acknowledged that in February, while unifying backend permissions, they had accidentally re-enabled access to chats on public projects. They patched the API after the fact. To their credit, they had also made new projects private by default starting in December 2025. But that does not undo what was already out there.
Why Your Prompts Are More Sensitive Than You Think
This is the part most people overlook. When you use an AI tool to build something -- whether it is code, copy, a workflow, or a proposal -- the prompts you type are a window into your thinking, your clients, and your process. They often contain internal project names and client references, database schemas or API endpoint structures, business logic you would never post publicly, and assumptions about your own systems and infrastructure.
Most people treat AI chat interfaces like a private scratch pad. They are not always that. Depending on how a platform handles project visibility, data retention, and API access, what you type can be more exposed than you realize. The AI privacy risks here are not theoretical -- this breach is proof of that.
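If you want that awareness to be more than good intentions, one practical habit is a scrub pass that runs before a prompt ever leaves your machine. Here is a minimal sketch in Python; the regex patterns and client aliases are illustrative assumptions you would tailor to your own work, not an exhaustive filter.

```python
import re

# Illustrative patterns only -- tune these to the secrets and names that
# actually show up in your own prompts.
PATTERNS = [
    # key=value style credentials: api_key=..., token: ..., password=...
    (re.compile(r"(?i)\b(api[_-]?key|secret|token|password)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    # database connection strings
    (re.compile(r"(?i)\b(postgres|postgresql|mysql|mongodb(\+srv)?)://\S+"), "[REDACTED_DB_URL]"),
    # email addresses
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[REDACTED_EMAIL]"),
]

# Client names you never want in a prompt, mapped to neutral labels.
# These are hypothetical examples.
CLIENT_ALIASES = {"Acme Corp": "Client A", "Globex": "Client B"}

def scrub(prompt: str) -> str:
    """Replace credentials, connection strings, emails, and client names
    with placeholders before the prompt leaves your machine."""
    for pattern, replacement in PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    for name, alias in CLIENT_ALIASES.items():
        prompt = prompt.replace(name, alias)
    return prompt

if __name__ == "__main__":
    raw = "Connect to postgres://admin:hunter2@db.acme.internal/prod for Acme Corp, api_key=sk-12345"
    print(scrub(raw))
    # -> Connect to [REDACTED_DB_URL] for Client A, api_key=[REDACTED]
```

A filter like this will not catch everything, and it should not lull you into pasting freely. It exists to catch the credential or client name you type without a second thought.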
Six Steps to Reduce Your Exposure
This is not about avoiding AI tools. It is about using them with the same awareness you would bring to any other system that touches your data.
1. Check default visibility settings before you start. Most platforms default to private now, but verify. Do not assume. Look for a setting labeled "project visibility" or "public/private" before you build anything meaningful, and confirm it from the outside with a quick unauthenticated check like the sketch after this list.
2. Treat prompts like semi-public documents. Write prompts as if someone outside your organization might eventually read them. Avoid embedding credentials, client names, or sensitive architecture details directly in the prompt text. Reference them abstractly if you need to; the scrub sketch above is one way to enforce that mechanically.
3. Read the privacy policy before you type anything sensitive. Specifically look for how the platform handles prompt data, whether it is used for model training, and what "public project" actually means in their system. Lovable's own documentation on this point was unclear enough to cause a real problem.
4. Use dedicated accounts for client work. If you are using AI platforms for client projects, consider separating those from your personal or experimental accounts. Compartmentalization limits the blast radius if something goes sideways.
5. Know your platform's data retention and deletion options. Some tools let you delete chat histories. Many do not -- or make it harder than it should be. Know what you are working with before you store anything sensitive.
6. Flag it to clients if relevant. If you built anything on Lovable before November 2025 with a public project setting and that work involved client data, be transparent with them. They deserve to know.
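On step one: the most reliable verification is to look at your project the way a stranger would. A minimal sketch in Python, assuming hypothetical project URLs (substitute whatever share or preview link your platform generates); the only dependency is the requests library.

```python
import requests  # third-party: pip install requests

# Hypothetical share URLs -- substitute the public-facing link
# your platform generates for each project.
PROJECT_URLS = [
    "https://example-platform.app/projects/abc123",
    "https://example-platform.app/projects/def456",
]

def check_exposure(urls: list[str]) -> None:
    """Fetch each URL with no cookies or tokens, exactly as a stranger would.
    A 200 means an anonymous visitor can read the page; a redirect to a
    login screen (3xx) or a 401/403 means it is gated."""
    for url in urls:
        resp = requests.get(url, allow_redirects=False, timeout=10)
        verdict = "PUBLICLY READABLE" if resp.status_code == 200 else "gated"
        print(f"{resp.status_code}  {verdict}  {url}")

if __name__ == "__main__":
    check_exposure(PROJECT_URLS)
```

One caveat: single-page apps sometimes return a 200 shell and gate the actual data behind an API call, so skim the response body before trusting the status code. The point is to verify exposure from outside your own session rather than take a settings label on faith.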
The Bigger Pattern
Lovable is not uniquely reckless. They are a well-funded company -- valued at $6.6 billion after a $330 million round in late 2025 -- that moved fast and got ahead of their own documentation. That story is going to repeat itself across the AI tools space for a while.
The gap between what a platform says it does and what it actually does with your data is real, and it is often not visible until someone digs in. The researcher who found this used AI itself to conduct the investigation and noted that what used to take days now takes about 30 minutes.
That cuts both ways. Security research moves faster now. So does exposure.
The safest assumption when using any AI platform is that your data has an audience beyond you. That does not mean stop using these tools. It means use them with your eyes open -- and with a clear sense of what you are handing over every time you hit send.