How to Hack an Agent – or Not · Thomas Fraunholz [PyData Rhein-Main]
How to Hack an Agent – or Not
Thomas Fraunholz
Senior Researcher AI @ Smart Labs AI
Large language models (LLMs) are not as secure as they seem. Beyond their tendency to “hallucinate,” they can be manipulated using jailbreaks and adversarial prompts, bypassing safeguards designed to keep them in check. But the real challenge arises when LLMs are connected to agents with real-world capabilities—like sending emails. This talk explores the security risks of AI agents and the ongoing research into making them more resilient. Using the “Adaptive Prompt Injection: LLMail Inject” challenge from the IEEE Conference on Secure and Trustworthy Machine Learning as a case study, we’ll examine how Microsoft’s Phi3 and OpenAI’s GPT-4o-mini handle adversarial attacks. We’ll break down security techniques like LLM judges, task drift detection, and prompt shields—critical concepts as the EU AI Act’s security mandates take effect in August 2025. Attendees will gain insights into the strengths and weaknesses of current AI security mechanisms and learn practical strategies for assessing the safety of AI agents in production environments.
Acknowledgements
Hessian.AI, for hosting the meetup.
https://hessian.ai/
PIONEERS HUB, for organising.
https://pioneershub.org/
www.pydata.org
PyData is an educational program of NumFOCUS, a 501(c)3 non-profit organization in the United States. PyData provides a forum for the international community of users and developers of data analysis tools to share ideas and learn from each other. The global PyData network promotes discussion of best practices, new approaches, and emerging technologies for data management, processing, analytics, and visualization. PyData communities approach data science using many languages, including (but not limited to) Python, Julia, and R.