Prompt Injection
-
Google Enhances AI Security with Layered Defenses Against Prompt Injection Attacks
Google has strengthened the security of its generative AI systems by implementing layered defenses against indirect prompt injection, an emerging class of attack in which malicious instructions are hidden in external content (such as emails or documents) that the model later processes. The measures combine advanced content filtering with proactive detection of injected instructions before they can influence the model.
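The filtering layer described above can be illustrated with a minimal sketch. The patterns, thresholds, and function names below are purely hypothetical assumptions for illustration, not Google's actual defenses; real systems use trained classifiers rather than regex lists.

```python
import re

# Illustrative only: a naive filter for indirect prompt injection, where
# attacker instructions hide in retrieved content (emails, web pages)
# rather than in the user's own prompt. Patterns are hypothetical.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .{0,40}(system prompt|instructions)",
    r"you are now",
    r"reveal .{0,40}(system prompt|secret|password)",
]

def flag_injection(text: str) -> bool:
    """Return True if the content matches a known injection pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def sanitize(documents: list[str]) -> list[str]:
    """One defense layer: drop documents that look like injection
    attempts before they reach the model's context window."""
    return [d for d in documents if not flag_injection(d)]
```

A layered design stacks several such checks (classifiers, URL redaction, user confirmation) so that content slipping past one layer is caught by another.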
-
Security Flaw in GitLab’s AI Assistant Exposes Source Code to Attackers
A significant vulnerability was discovered in GitLab's AI coding assistant, Duo, that could have allowed attackers to steal source code and inject malicious instructions into the assistant's behavior, prompting urgent security fixes from GitLab.