Summary

Two AI companion apps (Chattee Chat and GiMe Chat), developed by Hong Kong-based Imagime Interactive Limited, exposed over 43 million intimate messages, 600,000+ images and videos, and data belonging to roughly 400,000 users after leaving a Kafka broker entirely unsecured. Cybernews discovered the exposure on August 28, 2025; the server, which had appeared on public IoT search engines, was taken offline in mid-September.

Key Points

  • 43 million+ intimate messages and 600,000+ images/videos exposed
  • 400,000 users affected; two-thirds iOS, one-third Android; majority US-based
  • Server had zero authentication or access controls — anyone with a link could view content
  • Exposed data: IP addresses, device identifiers, private conversations, purchase logs, AI-generated images
  • Purchase logs revealed users spent up to $18,000 on AI companion interactions; developer estimated to have earned $1M+ before breach
  • Average user sent 107 messages to their AI partner — significant behavioral profiling data exposed
  • Threat vector for victims: sextortion, phishing, identity theft, reputational harm
  • Company’s privacy policy claimed security was “of paramount importance” — directly contradicted by the breach
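For context on what "zero authentication or access controls" means in practice: a Kafka broker secured to baseline standards would carry settings along the lines of the sketch below. These are standard Kafka `server.properties` options (property names are Kafka's own; paths and the placeholder password are illustrative), shown only to indicate the category of hardening the exposed server lacked, not the company's actual configuration.

```properties
# server.properties — illustrative baseline hardening for a Kafka broker
# (an exposed broker typically runs a PLAINTEXT listener with none of this)

# Require TLS + SASL authentication on the client-facing listener
listeners=SASL_SSL://0.0.0.0:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=SCRAM-SHA-512
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512

# Enforce per-principal authorization, deny by default
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false

# TLS material (paths/passwords are placeholders)
ssl.keystore.location=/etc/kafka/server.keystore.jks
ssl.keystore.password=<keystore-password>
```

With no equivalent of these controls in place, any client that found the broker's address could subscribe to its topics and read message streams directly, which is consistent with Cybernews' finding that "anyone with a link could view content."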

Newsletter Angles

  • The gap between claimed privacy and actual practice is the story: the privacy policy existed; the security did not. This is a pattern across consumer AI products where trust is manufactured, not earned.
  • The scale of intimacy exposed: 43 million messages between humans and AI systems designed to simulate romantic relationships. The psychological and social dimensions of this — people confiding things to AI they wouldn’t say to anyone — are barely being discussed in policy circles.
  • The economics of AI companionship: one user spent $18,000 chatting with an AI girlfriend. The monetization of loneliness via AI is a major emerging market with essentially no regulatory framework.

Entities Mentioned

  • Imagime Interactive Limited (Hong Kong-based developer of both apps)
  • Chattee Chat and GiMe Chat (the affected AI companion apps)
  • Cybernews (security research team that discovered the exposure)
  • CyberGuy / Fox News (source column)

Concepts Mentioned

  • Data Privacy Weaponization — core example; AI apps as data collection vectors with weak protections
  • AI Legal Personhood — tangentially relevant: the legal status of AI companions affects what protections users can claim

Quotes

“The leak exposes a deep gap between user trust and developer responsibility.”

“Although the company’s privacy policy claimed that user security was ‘of paramount importance,’ Cybernews found no authentication or access controls on the server.”

Notes

Source is a consumer tech column (CyberGuy/Fox News) — readable but not analytical. Focuses on personal protection tips rather than systemic critique. No follow-up on whether Imagime Interactive faced regulatory consequences. The breach’s full scope (whether bad actors accessed data before takedown) remains unknown.