Hackers of India

Uncovering Azure’s Silent Threats: A Journey into Cloud Vulnerabilities

Nitesh Surana, Magno Logan, David Fiser

2023/08/10

Abstract

Cloud service providers offer Machine Learning as a Service (MLaaS) platforms, enabling companies to leverage scalability and reliability in their ML operations. However, despite the massive adoption of such AI/ML systems worldwide, the security posture of the platform itself often goes unnoticed.

We investigated Azure ML (AML), a managed MLaaS offering from Microsoft. Our findings fall into two broad classes of security issues. Insecure logging of sensitive information: we found five instances of credentials leaking in cleartext on Compute Instances, caused by insecure usage of open-source components and the insecure design of how the environment is provisioned. Sensitive information disclosure: we found exposed APIs in cloud middleware that leak sensitive information from Compute Instances. After gaining initial access, network-adjacent attackers could leverage the vulnerability to move laterally or snoop on the commands executed in a Jupyter terminal on a Compute Instance.
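To illustrate the first class of issues, the sketch below shows how cleartext credentials in log files on a compute host can be surfaced. It is a minimal, hypothetical example: the log directories and regular expressions are assumptions for illustration, not the specific components or secrets the research identified on Azure ML Compute Instances.

```python
import re
from pathlib import Path

# Hypothetical log locations on a compute host; adjust to your environment.
LOG_DIRS = [Path("/var/log"), Path.home() / "logs"]

# Simple patterns that often indicate credentials written in cleartext.
PATTERNS = {
    "storage_account_key": re.compile(r"AccountKey=[A-Za-z0-9+/=]{20,}"),
    "bearer_token": re.compile(r"Bearer\s+[A-Za-z0-9\-_\.]{20,}"),
    "jwt": re.compile(r"eyJ[A-Za-z0-9\-_]+\.[A-Za-z0-9\-_]+\.[A-Za-z0-9\-_]+"),
    "password_kv": re.compile(r"(?i)password\s*[=:]\s*\S+"),
}


def scan_file(path: Path):
    """Yield (pattern_name, line_number) for every suspicious line in a log file."""
    try:
        with path.open("r", errors="ignore") as handle:
            for lineno, line in enumerate(handle, start=1):
                for name, pattern in PATTERNS.items():
                    if pattern.search(line):
                        yield name, lineno
    except OSError:
        pass  # skip unreadable files (permissions, special files)


def main():
    for log_dir in LOG_DIRS:
        if not log_dir.is_dir():
            continue
        for path in log_dir.rglob("*.log"):
            for name, lineno in scan_file(path):
                print(f"{path}:{lineno}: possible {name} in cleartext")


if __name__ == "__main__":
    main()
```

A scan like this only flags candidate lines; confirming an actual credential leak still requires checking whether the matched value is live and what access it grants.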

Through this talk, attendees will learn about the different issues found in AML, which may extend to other cloud-based MLaaS platforms. As we take a deep dive into the security issues, we will demonstrate the analysis techniques we adopted while researching the service, giving attendees a glimpse of how managed services like AML can be assessed when the lines of the shared responsibility model are blurred. No ChatGPT was used or abused during this research.