Abstract
AI browsers have evolved from simple LLM sidebars into fully agentic, automation-driven environments, but their security architecture has not kept pace.
This talk examines how emerging AI browsers weaken long-standing security guarantees through privileged extension surfaces, opaque capabilities, and loosely governed agent execution. We show how hidden or fake extensions can impersonate trusted UI components and spoof AI panels, triggering OAuth compromise, phishing workflows, and persistent session hijacking while requesting only minimal permissions.

As browsers introduce autonomous agents that click, type, and authenticate on behalf of users, a deeper gap emerges: there is no architectural distinction between human intent and agent execution. Enterprise controls and browser defenses cannot reliably differentiate the two. This enables prompt injection, workflow poisoning, and UI deception attacks that manipulate agents while appearing legitimate to every layer of existing tooling.

We also assess how vendors are responding to the issues raised by security researchers. The talk concludes with a roadmap for securing AI browsers: agent-identity separation, hardened permission boundaries, transparent extension ecosystems, and clear isolation between the model, the automation engine, and the browser runtime.