[Image: ChatGPT browsing window with a blurred results pane, a visible MFA prompt, and a warning about “q=” links and allowlisted redirects]

ChatGPT Data Leaks: Seven New Prompt Injection Paths

Seven fresh techniques let attackers leak ChatGPT data through everyday workflows, including poisoned search results, one-click “q=” links, allowlisted ad redirects, conversation injection, markdown hiding, and memory poisoning. Because exposure rides on normal browsing and memory behavior, prevention requires policy plus proof: sanitize URLs, block bing.com/ck/a redirects, disable Saved Memory for high-risk roles, and validate controls continuously with the OWASP LLM Top 10 and MITRE ATLAS as benchmarks.
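The URL-sanitization advice above can be sketched as a simple link-policy check. This is a minimal illustration, not a complete defense: the function name `is_link_allowed`, the blocklist structure, and the choice to flag “q=” only on chatgpt.com are assumptions for demonstration, based on the attack paths the article names (bing.com/ck/a redirector abuse and prefilled one-click links).

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical policy table: host -> path prefixes to block.
# bing.com/ck/a is the allowlisted-redirector path called out in the article.
BLOCKED_PATH_PREFIXES = {"bing.com": ("/ck/a",)}


def is_link_allowed(url: str) -> bool:
    """Return False for links matching known prompt-injection delivery patterns."""
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")

    # Block redirector abuse such as bing.com/ck/a tracking links.
    for prefix in BLOCKED_PATH_PREFIXES.get(host, ()):
        if parsed.path.startswith(prefix):
            return False

    # Block one-click links that prefill a ChatGPT prompt via "q=".
    if host == "chatgpt.com" and "q" in parse_qs(parsed.query):
        return False

    return True
```

A gateway or browser extension could run every outbound link through such a check before rendering it as clickable; ordinary search URLs (e.g. `bing.com/search?q=...`) still pass, since only the redirector path and ChatGPT’s own `q=` parameter are flagged.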
