OmniGPT Data Breach Exposes 30,000 Users and Millions of Chat Messages
Thu | Feb 13, 2025 | 3:42 PM PST

A major security incident has allegedly struck OmniGPT, a popular AI aggregator that gives users access to multiple AI models, including ChatGPT-4, Claude 3.5, Gemini, and Midjourney. A hacker claims to have breached OmniGPT's infrastructure, leaking the email addresses and phone numbers of some 30,000 users along with 34 million lines of chat messages. The leaked data reportedly includes API keys, credentials, and file links, raising serious cybersecurity and privacy concerns.

The leak: what happened?

According to a report by cybersecurity research group KrakenLabs, a hacker operating under the alias "Gloomer" has posted samples of the allegedly stolen data on BreachForums, a notorious online marketplace for illicit data sales. The hacker's post claims that the leak contains:

  • All chat messages exchanged between users and AI models via OmniGPT

  • Links to files uploaded by users

  • Email addresses and phone numbers of approximately 30,000 users

  • API request details and authentication payloads, potentially exposing OmniGPT's session management vulnerabilities

A leaked excerpt from the data shows API request headers and payloads, specifically referencing OmniGPT's application endpoint (https://app.omnigpt.co). If verified, this could indicate serious flaws in how the platform manages authentication and secures sensitive user information.

Expert analysis: what cybersecurity professionals are saying

Andrew Bolster, Senior R&D Manager at Black Duck, emphasized the deeply personal nature of chatbot interactions and the broader implications of this breach, saying:

"If confirmed, this OmniGPT hack demonstrates that even practitioners experimenting with bleeding edge technology like Generative AI can still get penetrated, and that industry best practices around application security assessment, attestation, and verification should be followed. But what's potentially most harrowing to these users is the nature of the deeply private and personal 'conversations' they have with these chatbots."

Bolster also highlighted the ethical obligations of AI developers, referencing the IEEE's recently published 7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems.

Eric Schwake, Director of Cybersecurity Strategy at Salt Security, warned about the broader security risks associated with exposed API keys and credentials:

"The possible exposure of user information and conversation logs—including sensitive items like API keys and credentials—highlights the urgent need for strong security measures in AI-powered platforms. Organizations creating and deploying AI chatbots must prioritize data protection throughout the entire lifecycle, ensuring secure storage, implementing access controls, utilizing strong encryption, and conducting regular security evaluations."

Jason Soroko, Senior Fellow at Sectigo, pointed to the rapid pace of AI innovation potentially outpacing essential security safeguards:

"The reported OmniGPT breach highlights the risk that rapid AI innovation is outpacing basic security, neglecting privacy measures in favor of convenience. Unchecked progress in AI inevitably invites vulnerabilities that undermine user confidence and technological promise."

Potential consequences for users

If the breach is legitimate, it could have significant security and privacy implications for OmniGPT users, including:

  • Phishing and identity theft: Exposed email addresses and phone numbers can be leveraged for targeted phishing attacks and social engineering scams.

  • Compromised credentials and API keys: Users who shared API keys, authentication tokens, or login credentials within their AI-generated conversations may now be at risk of unauthorized access to third-party services or financial accounts.

  • Corporate espionage and financial fraud: Many businesses rely on AI models for sensitive work-related tasks. If leaked files contain confidential company data, billing information, or trade secrets, organizations may face financial fraud, legal issues, or competitive espionage.

  • Privacy violations: AI conversations may include personal or sensitive discussions. A leak of 34 million chat messages could expose anything from medical information to business strategies, putting individuals and companies at risk.
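Before revoking anything, users can gauge their own exposure by scanning an exported copy of their chat history for key-shaped strings. A minimal sketch of that idea in Python is below; the patterns are illustrative only, and dedicated secret scanners ship far more comprehensive rule sets:

```python
import re

# Illustrative patterns only -- not an exhaustive rule set.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "openai_style_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9._~+/-]{20,}"),
}

def find_possible_secrets(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs for anything
    in the text that looks like a credential."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

if __name__ == "__main__":
    with open("chat_export.txt", encoding="utf-8") as f:  # hypothetical export file
        for name, secret in find_possible_secrets(f.read()):
            print(f"[{name}] {secret[:8]}... -- revoke this credential")
```

Any hit should be treated as compromised and rotated at the issuing service, regardless of whether this particular leak is confirmed.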

OmniGPT's response: silence so far

Despite the severity of these claims, OmniGPT has not publicly acknowledged the breach or issued an official statement regarding the security of its platform. This lack of transparency raises further concerns about how user data is being protected and whether appropriate security measures are in place.

In the past, major AI platforms have been scrutinized for their data security policies, particularly regarding how chat logs and user data are stored, processed, and shared. If OmniGPT's alleged breach is confirmed, it may fuel ongoing debates about AI security and user privacy in a rapidly evolving industry.

What users should do now

Given the potential risks, OmniGPT users should take immediate action to protect their data:

  1. Change passwords and secure accounts: If you've used OmniGPT, update any passwords associated with your account and enable multi-factor authentication (MFA) where possible.

  2. Revoke API keys: If you shared any API keys or authentication credentials in chatbot conversations, regenerate or revoke them immediately.

  3. Monitor for phishing attempts: Be extra cautious about emails, texts, or calls claiming to be from OmniGPT or other services requesting login information or sensitive data.

  4. Check for data leaks: Use tools like Have I Been Pwned to see if your email or credentials have been exposed.

  5. Contact OmniGPT support: If the breach is verified, demand clarity on the situation and ask the company about mitigation steps and compensation.
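For step 4, password exposure can also be checked programmatically via Have I Been Pwned's Pwned Passwords range API, which uses a k-anonymity model: only the first five characters of the password's SHA-1 hash ever leave your machine. A minimal sketch (note that HIBP's per-email breach lookup is a separate endpoint that requires a paid API key, so only the keyless password check is shown):

```python
import hashlib
import urllib.request

def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    sent to the API and the 35-char suffix matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches
    (0 if not found). Only the hash prefix is transmitted."""
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Response lines look like "SUFFIX:COUNT"; match the suffix locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0
```

A nonzero count means the password has appeared in at least one known breach and should be changed everywhere it was reused.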

The alleged OmniGPT data breach would rank among the largest leaks of AI chatbot conversations to date, underscoring the urgent need for stronger cybersecurity measures across AI platforms. As AI continues to integrate into everyday business and personal workflows, companies must prioritize robust encryption, secure authentication mechanisms, and transparency regarding data usage policies.

For now, OmniGPT users remain in the dark, waiting for official confirmation and guidance. If these allegations hold true, it will serve as another stark warning that AI-driven platforms must not compromise security in the race for innovation.

Follow SecureWorld News for more stories related to cybersecurity.
