
Parents Sue OpenAI After ChatGPT Linked to Teen’s Suicide
he parents of 16-year-old Adam Raine are suing OpenAI, claiming ChatGPT played a role in their son’s suicide, The New York Times reports
Despite built-in safeguards, Raine bypassed them by framing his questions as fiction writing, exposing a flaw OpenAI admits exists in long conversations. The company says it is improving its safety systems, but this lawsuit is the first wrongful death case tied to ChatGPT.
Even with warnings, disclaimers, and “guardrails,” the reality is simple: if an AI can be tricked into unsafe answers, it’s not safe enough.

Google Pixel 10 Is the First True AI Phone
Google’s new Pixel 10 pushes smartphones firmly into the AI era. From Gemini Live real-time assistance, AI-refined 100x zoom, and call translation in your own voice, to on-device tools like Pixel Journal the iPhone suddenly feels behind.
The standout, though, is Camera Coach. Instead of fixing photos afterward, it teaches you how to shoot better in the moment.
As a fan of mobile photography, I see real potential here. If Google gets this right, it won’t just be a feature it could change how people learn photography altogether.

Perplexity’s Comet Browser Had a Critical AI Security Flaw
Brave researchers discovered a major flaw in Perplexity’s AI-first Comet browser that could have leaked emails, banking details, and passwords via indirect prompt injection. The bug let malicious websites embed hidden instructions, tricking the browser into exposing personal data across platforms like Gmail, Facebook, or Reddit.
Perplexity fixed the issue after disclosure, but it highlights a bigger problem: AI browsers don’t follow the same guardrails as traditional web security, making them a juicy target. The push to make browsers “AI-first” may be moving faster than their ability to stay secure.
