Breaking AI Browsers: Hidden APIs, Agentic Chaos, and the Race Toward Secure Architecture

By Nishant Sharma , Kabilan Sakthivel on 01 Mar 2026 @ Nullcon
πŸ”— Link
We need help to complete this entry! Missing: presentation, Video
I can help!
#ai-security #browser-security #web-security #social-engineering
Focus Areas: πŸ€– AI & ML Security , πŸ” Application Security , 🎯 Penetration Testing , πŸ“š Security Awareness , 🌐 Web Application Security

Abstract

AI browsers have evolved from simple LLM sidebars into fully agentic, automation-driven environments but their security architecture has not always kept pace.

This talk examines how emerging AI browsers weaken long-standing security guarantees through privileged extension surfaces, opaque capabilities, and loosely governed agent execution. We talk about how hidden/fake extensions can impersonate trusted UI components, and spoof AI panels to trigger OAuth compromise, phishing workflows, and persistent session hijacking with minimal permissions. As browsers introduce autonomous agents that click, type, and authenticate on behalf of users, a deeper gap appears: there is no architectural distinction between human intent and agent execution. Enterprise controls and browser defenses cannot reliably differentiate the two. This enables prompt injection, workflow poisoning, and UI deception attacks that manipulate agents while appearing legitimate. We also assess how vendors are making improvements to tackle the security issues highlighted by the security researchers. The talk concludes with a roadmap for securing AI browsers: agent-identity separation, hardened permission boundaries, transparent extension ecosystems, and clear isolation between model, automation engine, and browser runtime.