CVE-2026-42208 exploited within 36 hours of disclosure, exposing LiteLLM credentials and risking cloud account compromise.
How indirect prompt injection attacks on AI work - and 6 ways to shut them down ...
Google has analyzed AI indirect prompt injection attempts involving sites on the public web and noticed an increase in ...
Google's security team scanned billions of web pages and found real payloads designed to trick AI agents into sending money, ...
Hackers are targeting sensitive information stored in the LiteLLM open-source large-language model (LLM) gateway by ...
A new wave of the Glassworm campaign is targeting the OpenVSX ecosystem with 73 "sleeper" extensions that turn malicious ...