AI Security
-
NIST proposes AI security overlays built on SP 800-53, invites public feedback
NIST published a concept paper proposing a framework of AI security overlays built on SP 800-53 and opened a public call for input, detailing use cases like generative, predictive, and agentic AI while inviting feedback through COSAIS channels.
-
Critical Vulnerabilities Discovered in NVIDIA’s Triton Inference Server
A set of critical vulnerabilities in NVIDIA’s Triton Inference Server has been discovered, posing significant risks to organizations using the platform for AI operations. Potential exploits could lead to remote control of servers and theft of sensitive data.
-
Hacker Compromises Amazon’s AI Coding Extension, Raises Concerns Over Security
A hacker compromised Amazon’s AI coding extension, raising serious concerns about the security of generative AI tools and software supply chains. The incident highlights critical vulnerabilities in the integration of open-source code and underscores the need for improved security measures.
-
Security Flaw in Google’s Gemini Could Facilitate Phishing Attacks
A newly discovered security flaw in Google’s Gemini for Workspace may enable phishing attacks through deceptive email summaries. Researchers warn that invisible directives can be injected into emails, leading Gemini to generate misleading content. While Google is reinforcing its defenses, users are advised to remain cautious.
-
NIST Seeks Public Feedback on High-Performance Computing Security Guidelines
NIST has released a draft for public comment on high-performance computing security guidelines aimed at enhancing data protection and securing AI models, with comments accepted until July 3, 2025.
-
OpenAI Expands Bug Bounty Program and Cybersecurity Initiatives
OpenAI has announced expansions to its bug bounty and cybersecurity grant programs, including a significant increase in the maximum bug bounty payout from $20,000 to $100,000 and new microgrants for innovative cybersecurity research proposals.