MLsploit is a machine learning (ML) evaluation and fortification framework designed for education and research. It focuses on ML security techniques in adversarial settings, such as adversarial example creation, detection, and countermeasures. It consists of pluggable components, or services, that can demonstrate various security research topics.
MLsploit has a service-oriented architecture (SOA), a web portal for user interaction, and a RESTful API for automating requests. The web portal is the main module, integrating the various components through the RESTful API with a defined JSON message format. Each component can be implemented in any language on any platform with RESTful API support. A component can be built as a serverless function or as a micro-service wrapped in a portable container image. This flexible design is agnostic to the underlying ML implementation and avoids lock-in to any specific cloud provider. MLsploit provides essential services to support cloud environments, such as unique ID generation, message queues, big-data storage, and basic authentication.
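To make the integration contract concrete, the sketch below shows what a portal-to-component JSON job message might look like. The field names (service, function, params, input) and the example component names are illustrative assumptions, not the framework's actual schema, which is not specified here.

```python
import json

def build_job_message(service, function, params, input_uri):
    """Assemble a JSON request that the web portal could POST to a
    component's RESTful endpoint. All field names are hypothetical."""
    message = {
        "service": service,    # target component, e.g. a classifier service
        "function": function,  # operation exposed by that component
        "params": params,      # operation-specific parameters
        "input": input_uri,    # reference to data in shared big-data storage
    }
    return json.dumps(message)

# Example: ask a (hypothetical) malware-classifier component to run inference
msg = build_job_message(
    service="resilient-ml",
    function="classify",
    params={"model": "malware-classifier-v1"},
    input_uri="storage://samples/example.bin",
)
print(msg)
```

Because every component speaks the same JSON contract over REST, the portal can dispatch jobs uniformly whether the component runs as a serverless function or a containerized micro-service.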
Several security projects will be demonstrated in this presentation. “Resilient ML” is a micro-service for malware classifier creation, inference, and feature evaluation. “AVPass” and “PETransformer” are Docker-wrapped services that transform malicious binaries into adversarial examples that bypass ML detectors. The components “Shield” and “Adagio” are defenses against image and audio adversarial examples, respectively. The project “Barnum” performs anomaly detection on the Windows platform. MLsploit can integrate a variety of security projects and evaluate both ML attacks and defenses. We believe MLsploit can serve as a useful general framework for ML security research.
Contributors from ISTC-ARSA: Nilaksh Das, Siwei Li, Chanil Jeon, Jinho Jung*, Shang-Tse Chen*, Carter Yagemann*, Evan Downing*, Haekyu Park, Evan Yang, Li Chen, Michael Kounavis, Ravi Sahita, David Durham, Scott Buck, Polo Chau, Taesoo Kim, Wenke Lee (*equal contribution).