Abstract
As AI systems increasingly drive critical decisions, from healthcare diagnostics to fraud detection, understanding why a model makes a particular prediction has become as important as the prediction itself. While Large Language Models (LLMs) dominate today’s AI discourse, most real-world systems still rely on traditional classification, regression, and neural-network models, which remain largely opaque to developers, auditors, and security teams.
This session demystifies Explainable AI (XAI) through practical, hands-on techniques such as SHAP, counterfactual explanations, and Layer-wise Relevance Propagation (LRP). Attendees will learn how to visualize and interpret predictions from image classifiers and regression models, uncovering which features drive outcomes: a crucial step toward ensuring fairness, robustness, and accountability in AI pipelines. Finally, the session bridges these foundational explainability methods with emerging paradigms in LLM-based systems, where chain-of-thought reasoning and contextual transparency become essential for responsible and secure deployment.