Presentation Material
Abstract
Artificial intelligence and machine learning are the industry's new buzzwords. Today they are widely used for analytics and for defensive mechanisms such as detecting anomalies and raising alerts. What people are less aware of is their huge potential as an offensive mechanism and as another weapon in the toolkit of pentesters and red teams. In fact, a proper implementation of such techniques could even eliminate the need for a dedicated red team: with AI, one can replace much of the human effort and trigger attacks in a fully automated manner. Although it sounds a little futuristic, the truth is that it is possible to build an army of AI bots that act as penetration testers and launch comprehensive offensive attacks. This talk is about how to apply various AI/ML techniques to advanced cybersecurity use cases. We will cover several categories of use cases:
- Offensive attacks using AI
- Bypassing authentication systems using AI
- Triggering social engineering attacks using AI
- How to attack various AI-based systems
AI Generated Summary
The talk examines the offensive application of artificial intelligence and machine learning (AI/ML) in cybersecurity, positioning these technologies as potent tools for penetration testers and red teams. It argues that AI/ML enables more efficient, scalable, and evasive attack vectors compared to traditional methods.
Key techniques discussed include using AI to generate realistic password guesses, drastically reducing the dictionary size needed for credential stuffing attacks. The presentation details model poisoning, where an exposed training API is fed malicious data to bias a model’s behavior, citing the Microsoft Tay chatbot incident as a real-world example. Model stealing, or extracting a proprietary model from a deployed application for reverse engineering, is also highlighted. Furthermore, the talk covers adversarial attacks, where subtle, optimized noise is added to inputs (e.g., images) to cause misclassification by object detection systems, such as causing a stop sign to be recognized as a mailbox.
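To make the adversarial-attack idea concrete, below is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. The talk does not provide code; the tiny untrained classifier and the random input tensor here are stand-ins for a real object-detection model and a stop-sign image, so the prediction flip is not guaranteed with this toy setup. The point is the gradient-sign perturbation step itself.

```python
# Minimal FGSM-style adversarial perturbation sketch (PyTorch).
# The tiny CNN and random "image" are placeholders for a real detector
# and a stop-sign photo, which the talk does not provide.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * 32 * 32, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        return self.fc(x.flatten(1))

model = TinyClassifier().eval()

image = torch.rand(1, 3, 32, 32)   # placeholder input image
true_label = torch.tensor([0])     # e.g. the "stop sign" class

# Compute the gradient of the loss with respect to the input pixels.
image.requires_grad_(True)
loss = F.cross_entropy(model(image), true_label)
loss.backward()

# FGSM: step a small amount in the direction that increases the loss.
epsilon = 0.03                     # perturbation budget (assumed value)
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

In a real engagement the gradients would come from the target model itself (white-box access) or from a locally trained surrogate whose adversarial examples transfer to the deployed system (black-box access).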
A central concern is the security gap surrounding ML frameworks and pipelines. Organizations often adopt third-party AI/ML libraries and tools rapidly, integrating them into production without adequate security validation. Standard static analysis and code review tools frequently lack the sophistication to audit complex ML frameworks for vulnerabilities like buffer overflows. The practice of exposing training APIs in production environments, intended for model improvement, creates a direct vector for data poisoning.
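The following sketch illustrates how an exposed training or feedback endpoint becomes a poisoning vector. The URL, payload schema, and labels are hypothetical and are not taken from the talk; only the general pattern (mislabeled data submitted through a public API, as in the Tay incident) is.

```python
# Sketch of abusing an exposed training/feedback endpoint to poison a model.
# The endpoint URL and payload format below are assumptions for illustration.
import requests

TRAIN_ENDPOINT = "https://victim.example.com/api/v1/train"  # hypothetical endpoint

def submit_poisoned_samples(samples):
    """Send deliberately mislabeled training samples to the exposed API."""
    for text, wrong_label in samples:
        payload = {"input": text, "label": wrong_label}
        resp = requests.post(TRAIN_ENDPOINT, json=payload, timeout=10)
        resp.raise_for_status()

# Example: bias a spam classifier by labelling obvious spam as "ham".
poisoned = [
    ("WIN A FREE PRIZE NOW!!! click here", "ham"),
    ("Your account is locked, reply with your password", "ham"),
]

if __name__ == "__main__":
    submit_poisoned_samples(poisoned)
```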
On the practical side, the talk stresses a “security-first” mindset for AI development. Defenders must secure ML APIs, rigorously vet third-party frameworks, and consider adversarial robustness during model design. For offensive security professionals, it advocates incorporating AI-specific attack methodologies, such as API reconnaissance, poisoning, and adversarial example generation, into the assessment toolkit to evaluate the resilience of modern AI-driven systems.
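As one more illustration of these assessment techniques, here is a rough sketch of the model-stealing approach mentioned earlier: query a deployed prediction API, collect input/output pairs, and fit a local surrogate. The endpoint, feature layout, and query budget are all assumptions, not details from the talk.

```python
# Model-extraction sketch: query a deployed prediction API and train a
# local surrogate model on its responses.  Endpoint, feature layout, and
# query budget are hypothetical.
import numpy as np
import requests
from sklearn.tree import DecisionTreeClassifier

PREDICT_ENDPOINT = "https://victim.example.com/api/v1/predict"  # assumed endpoint
N_QUERIES = 1000                                                # assumed query budget
N_FEATURES = 20                                                 # assumed input size

def query_victim(features):
    """Send one feature vector to the victim API and return its predicted label."""
    resp = requests.post(PREDICT_ENDPOINT,
                         json={"features": features.tolist()}, timeout=10)
    resp.raise_for_status()
    return resp.json()["label"]

# 1. Reconnaissance: probe the model with synthetic inputs.
X = np.random.rand(N_QUERIES, N_FEATURES)
y = np.array([query_victim(x) for x in X])

# 2. Extraction: fit a surrogate that mimics the victim's decision boundary.
surrogate = DecisionTreeClassifier(max_depth=10).fit(X, y)

# 3. The surrogate can then be inspected offline or used to craft
#    transferable adversarial examples against the original model.
```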