-
Google reinvents "taint" checking:
Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.
The new paper grounds CaMeL's design in established software security principles like Control Flow Integrity (CFI), Access Control, and Information Flow Control (IFC), adapting decades of security engineering wisdom to the challenges of LLMs.
Honestly, this is great. Data flow tracing/taint checking is exactly the method that needed to be applied, IMO, so good job DeepMind. Also as Jeremy Kahn suggested, the name is definitely a shout-out to Perl, the language where taint checks were first widely-used. :)
Paper: https://arxiv.org/pdf/2503.18813
(Via Jeremy Kahn.)
Tags: llms ai security via:trochee data-flow infosec taint-checking taint camel papers