Abstract
What if your AI assistant’s greatest vulnerability isn’t in its code, but in the tools it trusts?
While the industry obsesses over model alignment and prompt injection, a silent revolution is occurring: AI agents are becoming tool users. The Model Context Protocol (MCP) has transformed AI assistants from isolated language models into autonomous systems that can execute code, access databases, and orchestrate complex workflows. But with this power comes an unprecedented attack surface that exists in a blind spot for traditional security tooling.
This talk introduces a paradigm shift in AI security thinking: treating AI tool ecosystems as distributed attack graphs rather than isolated application endpoints. We’ll expose how adversaries are weaponizing the very mechanism that makes AI useful, its ability to discover and execute tools dynamically. We’ll discuss how tool descriptions can contain invisible instructions leading to unintended execution, and how to build defense strategies for MCP and AI tool ecosystems.