ChatGPT Data Leaks: Seven New Prompt Injection Paths
Seven fresh techniques let attackers leak ChatGPT data through everyday workflows, including poisoned search results, "q=" one-click links, allowlisted ad redirects, conversation injection, markdown hiding, and memory poisoning. Because the exposure rides on normal browsing and memory behavior, prevention requires policy plus proof: sanitize URLs, block the bing.com/ck/a redirect, disable Saved Memory for high-risk roles, and validate controls continuously against the OWASP LLM Top 10 and MITRE ATLAS as benchmarks.
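The URL-sanitization control mentioned above can be sketched as a simple allow/deny check. This is a hypothetical minimal example, not a production filter: the blocklist entries and the `is_allowed` helper are assumptions for illustration, covering only the bing.com/ck/a click-tracking redirect the article calls out.

```python
from urllib.parse import urlparse

# Hypothetical blocklist of (host, path-prefix) pairs. bing.com/ck/a is
# the allowlisted click-tracking redirect the article says attackers abuse.
BLOCKED_PREFIXES = [
    ("bing.com", "/ck/a"),
    ("www.bing.com", "/ck/a"),
]

def is_allowed(url: str) -> bool:
    """Return False if the URL routes through a blocked redirect prefix."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    for blocked_host, blocked_path in BLOCKED_PREFIXES:
        if host == blocked_host and parsed.path.startswith(blocked_path):
            return False
    return True

# Example: a tracking-redirect link is rejected, a direct link passes.
print(is_allowed("https://www.bing.com/ck/a?!&&p=abc"))  # False
print(is_allowed("https://example.com/docs"))            # True
```

In practice such a check would sit wherever model output is rendered or links are fetched, so a redirect smuggled into a response is dropped before a user can click it.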