How safe is the ChatGPT Android App?
Our security team analysed the ChatGPT Android app - here's what we found.
Uneasy lies the head that wears the crown
OpenAI released ChatGPT in November 2022, and it took just five days to reach one million users. By January 2023, ChatGPT had crossed 100 million monthly active users. And as of June 2025, the ChatGPT Android app has been downloaded by over 500 million users across the globe.
So after our security analysis of DeepSeek revealed critical flaws, we turned our attention to the ChatGPT Android app. What we found is alarming: vulnerabilities that could put 500 million Android users at serious risk.
The jury is in - Is ChatGPT Safe?
Long story short: No, not really.
Going into this security audit, we had different expectations. After all, this was OpenAI we were talking about, the OG that started the generative AI chatbot revolution. With close to $60 billion in funding (investors including the likes of Microsoft), employing the most sought-after talent in the tech world - you would think someone would have paid attention to security on the ChatGPT Android app.
Especially since they are now serving over 500 million users every week!
And if anyone had the resources to build a secure mobile app, it was OpenAI.
Instead, we found gaping vulnerabilities. The Android app's security posture is riddled with issues we have seen time and again in AI apps: missing controls and zero runtime defense.
For a company leading the global AI arms race, and with all the intelligence behind the scenes, we expected better. The AI tech is brilliant. But the mobile app? Not so much.
What are the security issues in the ChatGPT Android app?
Our static and dynamic analysis of the ChatGPT Android app (v1.2025.133) revealed multiple vulnerabilities ranging from medium to critical risk, including:
1. Hardcoded secrets
Attack type: Credential exposure
Risk level: Critical
We discovered hardcoded Google API keys embedded in the app’s code. Attackers can misuse these keys to impersonate requests or interact with backend systems.
How could ChatGPT fix it?
Store sensitive keys securely using environment variables, encrypted vaults, or secure key management services.
Rotate API keys regularly. Revoke or replace any exposed keys to minimize the risk of misuse.
Restrict API key access using granular permissions, IP or app restrictions. Employ the principle of least privilege to prevent unauthorized use.
Monitor and log API key usage to detect suspicious activity and respond quickly to potential abuse.
Follow secure key management best practices to prevent secrets from being committed to version control.
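One practical safeguard for that last point is a pre-release scan for key-shaped strings. The sketch below (class and method names are our own, not taken from the app) uses the well-known format of Google API keys - the prefix `AIza` followed by 35 URL-safe characters - to flag candidates before they ship:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SecretScanner {
    // Google API keys have a fixed shape: "AIza" + 35 URL-safe characters.
    private static final Pattern GOOGLE_API_KEY =
            Pattern.compile("AIza[0-9A-Za-z\\-_]{35}");

    // Returns every key-shaped string found in the given source text.
    public static List<String> findGoogleApiKeys(String source) {
        List<String> hits = new ArrayList<>();
        Matcher m = GOOGLE_API_KEY.matcher(source);
        while (m.find()) {
            hits.add(m.group());
        }
        return hits;
    }

    public static void main(String[] args) {
        String snippet = "String apiKey = \"AIzaABCDEFGHIJKLMNOPQRSTUVWXYZ123456789\";";
        System.out.println(findGoogleApiKeys(snippet).size() + " hardcoded key(s) found");
    }
}
```

A check like this belongs in CI, alongside dedicated secret-scanning tools such as gitleaks or trufflehog.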
2. No SSL pinning
Attack type: Impersonation attack
Risk level: Critical
The app does not implement SSL certificate pinning. This makes it vulnerable to man-in-the-middle (MitM) attacks, where an attacker intercepts and manipulates data in transit.
How could ChatGPT fix it?
Implement SSL certificate pinning. This will ensure the app only communicates with trusted servers and prevent man-in-the-middle (MitM) attacks.
Use established libraries (like OkHttp or TrustKit) rather than hand-rolled pinning logic to validate server certificates or public keys during every SSL/TLS handshake.
Regularly test and update pinned certificates or keys. Plan ahead for certificate rotation to avoid connection failures due to expired certificates.
Monitor for failed pinning attempts, and log incidents to detect potential impersonation/interception attempts.
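For context, a certificate pin of the kind OkHttp's `CertificatePinner` expects is simply the Base64-encoded SHA-256 digest of the server certificate's public key (its SubjectPublicKeyInfo), prefixed with `sha256/`. A minimal sketch of that derivation (the class name and placeholder bytes are ours):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class PinCalculator {
    // A pin is "sha256/" + Base64(SHA-256(SubjectPublicKeyInfo)).
    public static String pinFor(byte[] subjectPublicKeyInfo) {
        try {
            byte[] hash = MessageDigest.getInstance("SHA-256").digest(subjectPublicKeyInfo);
            return "sha256/" + Base64.getEncoder().encodeToString(hash);
        } catch (NoSuchAlgorithmException e) {
            // SHA-256 is mandatory on every conforming JVM, so this cannot happen.
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    public static void main(String[] args) {
        // Placeholder bytes stand in for a real certificate's public key.
        byte[] placeholderKey = "example-public-key".getBytes(StandardCharsets.UTF_8);
        System.out.println(pinFor(placeholderKey));
    }
}
```

In production, the input would be the DER-encoded SubjectPublicKeyInfo extracted from the real server certificate, and at least one backup pin should be configured so certificate rotation does not lock users out.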
3. No root detection
Attack type: Privilege escalation
Risk level: High
ChatGPT runs normally on rooted Android devices. This leaves it vulnerable to privilege escalation attacks, system-level tampering, and data extraction.
How could ChatGPT fix it?
Integrate robust root detection using libraries like RootBeer or the Play Integrity API (which replaces the deprecated SafetyNet).
Implement multiple, layered root checks—such as detecting the presence of su binaries, root management apps, modified system properties, and critical directory changes—to strengthen detection and minimize bypass risks.
Run root detection at app startup and during sensitive operations, disabling key features or blocking access if root is detected to prevent privilege escalation and tampering.
Regularly update and test the root detection logic to stay ahead of new rooting and bypass techniques.
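As an illustration of the first layer mentioned above, a filesystem check for the `su` binary takes only a few lines. This is a minimal sketch (class name and path list are our own) and is trivially bypassable on its own, which is exactly why it must be combined with library- and attestation-based checks:

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

public class RootCheck {
    // Locations where common rooting tools install the su binary.
    private static final List<String> SU_PATHS = Arrays.asList(
            "/system/bin/su", "/system/xbin/su", "/sbin/su",
            "/system/sd/xbin/su", "/data/local/bin/su", "/data/local/xbin/su");

    // True if any of the given paths exists on the filesystem.
    public static boolean suBinaryPresent(List<String> paths) {
        for (String path : paths) {
            if (new File(path).exists()) {
                return true;
            }
        }
        return false;
    }

    public static boolean deviceLooksRooted() {
        return suBinaryPresent(SU_PATHS);
    }
}
```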
4. Vulnerable to known Android attacks
Additionally, we identified exposure to multiple high-profile Android vulnerabilities:
• Janus (CVE-2017-13156)
Attack type: APK modification and malware injection
Risk level: Critical
Allows attackers to inject malicious code into signed APKs without invalidating the original signature.
• StrandHogg
Attack type: Phishing and identity theft
Risk level: Critical
Enables malicious apps to hijack UI screens and steal credentials.
• Tapjacking
Attack type: UI manipulation
Risk level: High
Tricks users into interacting with hidden UI elements.
How could ChatGPT fix these vulnerabilities?
Keep all libraries, SDKs, and dependencies up to date with the latest security patches.
Perform regular security testing, code reviews, and vulnerability assessments before each release.
Monitor app behavior in real time to detect and respond to emerging threats.
Store sensitive data using secure storage solutions, such as Android Keystore, and enforce strong access controls.
Establish a transparent vulnerability disclosure process and respond rapidly to reported issues.
5. No hooking or debug detection
Attack type: Runtime tampering
Risk level: High
The app doesn’t attempt to detect Frida/Xposed frameworks or block use in debug/ADB-enabled environments, making it easy to tamper with runtime behavior.
How could ChatGPT fix this vulnerability?
Implement runtime checks to detect the presence of hooking frameworks such as Frida and Xposed.
Block app execution or restrict sensitive features if hooking tools or suspicious instrumentation are detected.
Detect and prevent execution in debug or ADB-enabled environments by monitoring system flags and device status.
Obfuscate critical code paths and use anti-tampering techniques to make runtime manipulation more difficult.
Regularly update detection logic to stay ahead of new hooking and debugging tools.
Log and alert on any suspected tampering attempts for further investigation and response.
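One common heuristic for the first bullet is to scan the process's memory map (`/proc/self/maps` on Android) for instrumentation libraries loaded into the process. A minimal sketch follows; the class name and marker list are ours, and real detectors combine many signals (for example, `android.os.Debug.isDebuggerConnected()` for attached debuggers):

```java
import java.util.Locale;

public class HookDetector {
    // Library-name fragments associated with common instrumentation frameworks.
    private static final String[] MARKERS = {"frida", "xposed", "substrate"};

    // On a device, pass in the contents of /proc/self/maps.
    public static boolean looksHooked(String memoryMap) {
        String maps = memoryMap.toLowerCase(Locale.ROOT);
        for (String marker : MARKERS) {
            if (maps.contains(marker)) {
                return true;
            }
        }
        return false;
    }
}
```

String matching like this is easy to evade, so it should be one signal among several, paired with obfuscation and server-side anomaly detection.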
Why this matters
These aren’t just theoretical risks.
Attackers love this stuff because it works.
Data theft: Intercepted sessions and exposed secrets can compromise users.
Abuse and phishing: UI hijacking and tapjacking vulnerabilities are used in real-world fraud campaigns.
Trust erosion: When flagship apps fail to implement basic protections, it sends a message to the rest of the ecosystem—security is optional.
What comes next?
As AI apps rush to redefine productivity, education, and creativity, the infrastructure powering them, especially on mobile, must be just as robust. The current state of AI app security tells us we’re not there yet.
The AI revolution needs a security revolution alongside it. Innovation without protection isn’t just a risk—it’s a liability. Through this series, we set out to spark a conversation—not just about what’s broken, but about what needs to change. I hope it’s served as both a wake-up call and a roadmap.
- Rishika Mehrotra, Chief Strategy Officer, Appknox
You can read our full blog post on how safe the ChatGPT Android app is.
P.S. If you live and breathe app sec and mobile app sec, this is for you:
(1) Learn more about the intersection of AI and app sec - download our white paper: ‘Navigating Application Security in the age of Generative AI’
(2) Time to re-think your approach to mobile app security - get your copy of our latest book: ‘Securing Mobile Applications in the era of AI and transformation’