March 23, 2026 / MOBILE SECURITY, VULNERABILITIES, ANDROID SECURITY, PRIVACY, AI SECURITY

That AI You Confide in May Be an Open Book: Researchers Find Cloud Keys, Exposed Conversations, and Injectable Chat in Companion Apps

One app ships with the developer’s OpenAI token and Google Cloud private key in its code. Another lets any app on the phone inject scripts into what users experience as a private conversation.

Oversecured, a mobile application security company, has identified security vulnerabilities in several popular AI companion and chatbot apps on Google Play. The category includes virtual friends, romantic partners, coaching assistants, and general-purpose AI wrappers.

AI companion apps are one of the fastest-growing categories on Google Play: downloads surged after the launch of ChatGPT, and new apps appear weekly. The security audit focused not on major platforms like OpenAI or Google, but on the wave of independent apps that millions of users are actually installing.

The most severe findings are hardcoded cloud credentials that give anyone who decompiles the app access to the developer’s backend, and a cross-site scripting flaw that allows code injection into a conversation interface.

The affected apps include:

- A productivity AI chatbot with OpenAI and Google Cloud credentials exposed in its code
- A multi-voice AI companion with cross-site scripting in its conversation WebView
- A metaverse-oriented AI companion with hardcoded authentication tokens
- Multiple chatbots with host validation errors that could redirect users to attacker-controlled sites

Users tell AI companions about relationships, sexual preferences, loneliness, financial stress, and family conflicts. Unlike clinical mental health apps, AI companions operate in a regulatory gap: there is no equivalent of HIPAA for a conversation with a virtual partner.
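Findings like the first one are detectable before release with a simple scan of the decompiled build. Below is a minimal sketch of such a check; the regex patterns and the sample decompiled strings are illustrative only, not taken from any audited app, and real scanners cover far more credential formats than these two.

```python
import re

# Illustrative patterns only; production tooling covers many more formats.
PATTERNS = {
    # OpenAI API tokens historically begin with "sk-"
    "openai_token": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    # Google Cloud service-account JSON embeds a PEM private key
    "gcp_private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def scan_strings(text: str) -> list[str]:
    """Return the names of credential patterns found in decompiled output."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

# Hypothetical strings pulled from a decompiled APK
decompiled = '''
public static final String API_KEY = "sk-abc123def456ghi789jkl012";
{"type": "service_account", "project_id": "invoice_maker",
 "private_key": "-----BEGIN PRIVATE KEY-----\\nMIIE..."}
'''
print(scan_strings(decompiled))  # → ['openai_token', 'gcp_private_key']
```

Running a check like this against every release build is cheap; the expensive part, rotating a leaked key after an APK has shipped, is exactly what it avoids.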
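The script-injection finding begins with an exported activity, and that attack surface is visible in the app’s manifest. A minimal sketch of how such components can be flagged, assuming a decoded AndroidManifest.xml; the package and activity names here are hypothetical:

```python
import xml.etree.ElementTree as ET

# Android attributes carry this XML namespace after the manifest is decoded.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Hypothetical manifest fragment; component names are invented for illustration.
MANIFEST = """
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.companion">
  <application>
    <activity android:name=".ChatActivity" android:exported="true"/>
    <activity android:name=".SettingsActivity" android:exported="false"/>
  </application>
</manifest>
"""

def exported_activities(manifest_xml: str) -> list[str]:
    """List activities that any third-party app on the device can launch."""
    root = ET.fromstring(manifest_xml)
    return [
        a.get(f"{ANDROID_NS}name")
        for a in root.iter("activity")
        if a.get(f"{ANDROID_NS}exported") == "true"
    ]

print(exported_activities(MANIFEST))  # → ['.ChatActivity']
```

An activity that appears in this list and feeds an intent extra into a JavaScript-enabled WebView matches the pattern described in the findings: any app on the same device can launch it with attacker-controlled HTML.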
A chatbot ships with cloud keys in its code

A productivity AI chatbot contains a hardcoded OpenAI API token and a Google Cloud service account private key. Anyone who downloads the APK and runs a standard decompiler can extract both. The OpenAI token allows API calls at the developer’s expense. The Google key provides access to a project called “invoice_maker”: the developer’s invoicing and billing infrastructure.

In practice: the private key is a direct path to the developer’s invoice and payment data. If the same Google Cloud project also handles user data, as is common when developers run multiple services under one account, the exposure could extend to conversation histories and personal information as well.

Script injection into a private conversation

A multi-voice AI companion has an exported activity that accepts raw HTML and loads it into a WebView with JavaScript enabled. A malicious app can inject arbitrary code that executes within the companion’s interface, under its base URL origin.

In practice: an attacker could read the user’s conversation history, inject fake messages into the chat, or present a phishing screen requesting personal data, all inside what the user sees as a trusted conversation with their AI companion.

The wrapper problem

Many AI companion apps are “wrappers”: they connect to a third-party API (OpenAI, Google, or an open-source model) and add an interface, a personality, and a payment model. The API provider handles the AI. The wrapper developer handles authentication, data storage, and Android security. Every vulnerability in this audit sits in the wrapper layer. Users trust the AI brand; the failures happen in the layer between the user and the model.

‘One app includes both its OpenAI token and its Google Cloud private key in the code. The Cloud key belongs to the developer’s invoicing system. With those two credentials, you can reach the AI backend and the billing infrastructure.
The AI companion category handles a different but equally sensitive type of data than therapy apps: personal confessions, relationship details, sexual content. These apps grew so fast that basic security was never part of the process,’ says Sergey Toshin, founder of Oversecured.

The researchers have not disclosed specific app names or technical details, as the vulnerabilities remain unpatched. The full technical report on the findings is available here.

About Sergey Toshin

Sergey Toshin is the founder of Oversecured, a mobile application security company. He has discovered and helped fix over 1,000 mobile vulnerabilities. His research earned the #1 ranking on Google Play’s security researcher leaderboard, top researcher status with Samsung Mobile Security, and a top-3 position on HackerOne. He has collected over $1 million in bug bounties from major technology companies.

About Oversecured

Oversecured provides automated security scanning for Android and iOS applications. The company has identified vulnerabilities in apps from Google, Samsung, Amazon, PayPal, TikTok, Airbnb, Netflix, and other major technology companies. The scanner covers 175+ vulnerability categories for Android and 85+ for iOS with 99.8% detection accuracy. CNN, TechCrunch, and other media outlets have featured Oversecured’s research.

Ready to strengthen your mobile security? Start your free trial of Oversecured today.