As developers, we’re increasingly using AI and large language models (LLMs) in our apps. But there’s a growing concern we need to stay on top of: prompt injection. This sneaky attack can mess with our AI systems by manipulating the input to get unexpected results.
This session breaks down what prompt injection is and look at some common tricks attackers use, like instruction overrides and hidden prompts. The session also goes deeper into more advanced challenges, including escalation techniques that can make these risks worse. Most importantly, the session won’t just talk about the problem, it shares practical steps you can take to protect your AI systems and keep things secure. Attend to learn how to stay ahead in AI security and make sure your apps are resilient against these emerging threats.
Type: Learning Session (50 min)
Track: Machine Learning and Artificial Intelligence
Audience Level: Intermediate
Speaker: Brian Vermeer