Abstract
Input sanitization is treated as a safety net for untrusted HTML, yet modern applications rely heavily on third-party parsers and custom filtering rules without fully understanding their side effects. My research demonstrates that sanitization itself can introduce new XSS attack surfaces: not by being bypassed, but through unintended behavior created by the sanitization rules themselves.
This talk presents real case studies in which mature enterprise collaboration platforms became vulnerable to XSS not because sanitization was missing, but because the sanitization logic unintentionally created execution paths. Defensive rules designed to strip or rewrite unsafe patterns modified CSS and HTML in such a way that originally inert payloads became browser-interpretable and led to DOM-based execution. The session explores design principles for “safe sanitization pipelines,” a practical methodology for discovering similar flaws in any platform, why conventional XSS testing fails to catch this vulnerability class, and how HTML/CSS parsing quirks combined with regex-based rules create exploitable behavior. The takeaway is clear: sanitization errors aren’t mistakes users make; they are mistakes defenders make.
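To illustrate the general class of flaw (this is a minimal, hypothetical sketch, not one of the case studies from the talk): a single-pass, regex-based filter that removes `<script>` tags without re-scanning its own output can assemble an executable tag out of fragments that were inert on arrival.

```python
import re

def sanitize(html: str) -> str:
    """Hypothetical sanitizer: strips <script>/<\u200b/script> tags in one
    pass. Because it never re-scans its own output, the removal itself
    can splice surrounding fragments into a new, executable tag."""
    return re.sub(r"(?i)</?script[^>]*>", "", html)

# The input below contains no well-formed <script> element a browser
# would execute; the payload is inert as delivered.
inert = "<scr<script>ipt>alert(1)</scr</script>ipt>"

print(sanitize(inert))  # the filter's output is a live <script> element
```

Here the sanitizer removes the inner `<script>` and `</script>` fragments, and the leftover pieces concatenate into `<script>alert(1)</script>`: the defensive rewrite, not a bypass, produced the execution path.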