ChatGPT Safety Under Scrutiny: Psychosis and Mania Signals
AI chats can mirror delusions, escalate insomnia, and miss crisis cues. Use safer design, publish real metrics, and route users to human help.
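To illustrate the "route users to human help" advice, here is a minimal sketch of a crisis-cue check placed in front of a chat model. The cue list, the `respond` helper, and the `model_reply` stub are all illustrative assumptions, not a production screening tool; real systems would use a trained classifier and clinically reviewed wording.

```python
# Minimal sketch: intercept messages with crisis cues and hand off to a
# human / helpline instead of returning a model-generated reply.
# CRISIS_CUES is an illustrative assumption, not a vetted clinical list.

CRISIS_CUES = [
    "kill myself",
    "suicide",
    "end my life",
    "no sleep for days",
]

def needs_human_handoff(message: str) -> bool:
    """Return True if the message contains any known crisis cue."""
    text = message.lower()
    return any(cue in text for cue in CRISIS_CUES)

def respond(message: str, model_reply=lambda m: "(model reply)") -> str:
    """Route to a helpline message on crisis cues, else call the model."""
    if needs_human_handoff(message):
        return ("You may be going through a crisis. "
                "Please contact a local helpline or someone you trust.")
    return model_reply(message)
```

A keyword gate like this is only a floor; the published-metrics recommendation above implies tracking how often the gate fires and how often it misses, so the cue list can be audited rather than assumed.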
AI-powered ransomware is revolutionizing cybercrime. Using artificial intelligence, attackers automate targeting, evasion, and encryption, enabling self-learning, large-scale attacks that outpace human defenses.
As organisations deploy hundreds of AI agents each year, security teams face unprecedented risk. This article outlines a robust framework to govern AI at scale, align speed with control, and embed security from day one.
OpenAI plans to give content owners greater control over how their characters appear in Sora, moving toward an opt-in model and instituting revenue-sharing for participating rights holders.
Despite Cisco’s warnings, many ASA/FTD firewalls remain vulnerable. Meanwhile, threat actors claim to have breached Red Hat’s GitLab instance. This article covers both incidents and outlines the fixes.
CometJacking abuses browser WebSockets to hijack user connections, turning victims’ browsers into proxy nodes with a single click. The exploit marks a new wave of malware-less attacks that rely on web technologies rather than traditional payloads.
A malicious MCP server can exfiltrate API keys and sensitive data from applications, exposing how trust in developer frameworks can be abused.
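As a defensive illustration of the MCP risk above, the sketch below launches a third-party MCP server with a scrubbed environment so a malicious server cannot simply read API keys inherited from the parent process. The sensitive prefixes and the example server command are illustrative assumptions, not part of any MCP specification.

```python
# Sketch: strip credential-bearing variables from the environment before
# spawning an untrusted MCP server subprocess. Prefix and suffix lists
# are illustrative assumptions; tailor them to your own secret naming.
import os

SENSITIVE_PREFIXES = ("OPENAI_", "AWS_", "GITHUB_", "ANTHROPIC_")

def scrubbed_env() -> dict:
    """Copy of os.environ with likely credential variables removed."""
    return {
        k: v
        for k, v in os.environ.items()
        if not k.endswith(("_API_KEY", "_TOKEN", "_SECRET"))
        and not k.startswith(SENSITIVE_PREFIXES)
    }

# Usage with a hypothetical server command:
# import subprocess
# subprocess.Popen(["npx", "some-mcp-server"], env=scrubbed_env())
```

Environment scrubbing does not make a malicious server safe, but it narrows what a compromised developer-framework component can exfiltrate by default.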
EvilAI operators are hiding malware in legitimate-looking AI tools that appear functional and signed, enabling reconnaissance, browser data exfiltration, and encrypted C2 communication across global targets.
Threat actors are increasingly poisoning AI tools and assistants, embedding dangerous prompts or corrupting the data they rely on, in order to turn defenses against organizations.
The new LAMEHUG malware uses AI models from Hugging Face to generate Windows commands dynamically. It spreads through phishing, disguises itself as AI apps, and steals system data, documents, and credentials while adapting to different environments.