Uncovering Azure's Silent Threats: A Journey Into Cloud Vulnerabilities

By Nitesh Surana on 23 Sep 2023 @ Nullcon
📊 Presentation 📹 Video 🔗 Link
#azure #cloud-pentesting #application-hardening #api-security #security-testing #container-security #threat-modeling
Focus Areas: 📦 Software Supply Chain Security, 🔐 Application Security, ☁️ Cloud Security, ⚙️ DevSecOps, 🏗️ Security Architecture, 🌐 Web Application Security

Presentation Material

Abstract

Cloud service providers offer Machine Learning as a Service (MLaaS) platforms, enabling companies to leverage scalability and reliability while performing ML operations. However, with the massive worldwide adoption of AI/ML systems, as companies seek to build services like ChatGPT, the security posture of the platform itself often goes unnoticed. We investigated Azure Machine Learning (AML), a managed MLaaS from Microsoft, and found five vulnerabilities across three broad classes of security issues:

[CVE-2023-23312] Insecure logging of sensitive information: We found five instances of credentials leaking in cleartext on Compute Instances, due to insecure usage of open-source components and insecure design of how the environment is provisioned.

[CVE-2023-28312] Sensitive information disclosure: We found exposed APIs in cloud middleware leaking sensitive information from Compute Instances. After initial access, network-adjacent attackers could leverage the vulnerability to move laterally or snoop on commands executed in a Jupyter terminal on a Compute Instance.

Achieving stealthy persistence: While reversing cloud middleware to decipher its functionality, we found two ways to achieve persistence in AML environments. An attacker could fetch the Storage Account access key and the Azure AD JWT of the system-assigned managed identity of the Compute Instance, even from non-Azure environments. The logs generated while fetching credentials from non-Azure environments are indistinguishable from legitimate logs generated from within Azure, making this persistence technique stealthy.

Through this talk, attendees will learn about the different issues found in AML. As we take a deep dive into the security issues, we will demonstrate the analysis techniques we adopted while researching the service, giving attendees a glimpse of how managed services like AML can be assessed when the lines blur in the shared responsibility model of security {of, in} the cloud.
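The persistence technique above hinges on obtaining an Azure AD token for the system-assigned managed identity. The talk's middleware-specific endpoints are not reproduced here; as a minimal sketch, the standard Azure Instance Metadata Service (IMDS) pattern for fetching such a token from inside an Azure VM looks roughly like this (the `resource` value shown is an assumption for illustration):

```python
import json
import urllib.request

# Standard Azure IMDS endpoint for managed-identity tokens. The AML
# middleware endpoints discussed in the talk are not public, so this only
# sketches the generic in-VM token-acquisition pattern.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://storage.azure.com/"
)

def fetch_managed_identity_token(url: str = IMDS_TOKEN_URL) -> dict:
    """Request an Azure AD JWT for the VM's system-assigned managed identity.

    IMDS requires the 'Metadata: true' header to prevent SSRF-style requests
    that cannot set arbitrary headers.
    """
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())

# Usage (only works from inside an Azure VM with a managed identity):
# token = fetch_managed_identity_token()
# print(token["access_token"][:20], "... expires:", token["expires_on"])
```

The point of the finding is that equivalent tokens could be obtained *outside* Azure via the middleware, while producing logs indistinguishable from this legitimate in-VM flow.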

AI Generated Summary

The talk details a security analysis of Azure Machine Learning (AML) compute instances, revealing multiple "silent threat" vulnerabilities stemming from credential leakage and insufficient network isolation.

Key findings include: First, storage account credentials were logged in clear text within standard error/output logs from Azure Batch start tasks and stored in environment variable files for internal AML agents (DSi Mount agent, DSi IDL stop agent). Second, JWT tokens for workspace owners were exposed via URL parameters in Nginx proxy access logs when using JupyterLab terminals. Third, a locally listening, unauthenticated API on port 46802 (exposed via the Nginx proxy) allowed network-adjacent attackers to retrieve systemd service logs, which included commands executed as root within Jupyter terminals, effectively spying on data scientists.
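The JWT-in-URL finding illustrates why access logs become a credential store: any reader of the log can harvest the tokens. As a minimal sketch (assuming nothing about AML's actual log format), the following scans log lines for JWT-shaped strings; JWTs are three dot-separated base64url segments, and the header segment of a typical token begins with "eyJ", the base64 of '{"':

```python
import re

# Heuristic pattern for a JWT: three base64url segments separated by dots,
# starting with "eyJ". Not specific to AML or Nginx; purely illustrative.
JWT_RE = re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+")

def find_jwts_in_log(lines):
    """Yield (line_number, token) pairs for JWT-like strings in log lines."""
    for n, line in enumerate(lines, 1):
        for match in JWT_RE.finditer(line):
            yield n, match.group()

# Hypothetical access-log lines for demonstration:
sample = [
    '10.0.0.5 - - "GET /tree?token=eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIn0.sig HTTP/1.1" 200',
    '10.0.0.5 - - "GET /static/style.css HTTP/1.1" 200',
]
hits = list(find_jwts_in_log(sample))
```

A token placed in a request header or body instead of a URL query parameter would never reach the access log in the first place, which is essentially the remediation described below.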

These issues were compounded by AML's design: a shared file system across compute instances, default public storage account access, and the default 'azureuser' having passwordless sudo. An attacker compromising one user's session or logs could pivot to the entire workspace. The vulnerabilities were reported to Microsoft and addressed via the CVEs above, with fixes including masking credentials in logs, removing JWT exposure from URLs, and restricting the internal API's network exposure.

Practical implications highlight the critical risk of credential leakage in cloud development environments, where debug logs may be shared publicly (as seen in a separate 38TB Microsoft storage exposure). The research underscores that even services deployed within recommended virtual network configurations can harbor expanded attack surfaces through management agents. Takeaways stress the necessity of rigorous threat modeling for managed services, avoiding sensitive data in logs or URL parameters, and recognizing that shared cloud development environments inherently create a high blast radius for any single compromise.
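The takeaway about keeping sensitive data out of logs can be sketched as a redaction layer. The following is a generic Python `logging.Filter`, not Microsoft's actual fix, and the secret patterns are illustrative assumptions to be tuned to your own credential formats:

```python
import logging
import re

# Illustrative patterns for common secret shapes (assumptions, not AML's):
SECRET_PATTERNS = [
    re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+"),          # JWT
    re.compile(r"AccountKey=[^;\s]+", re.IGNORECASE),  # storage connection string key
]

class RedactSecrets(logging.Filter):
    """Replace credential-shaped substrings before a record is emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()  # merge args into the final message first
        for pat in SECRET_PATTERNS:
            msg = pat.sub("[REDACTED]", msg)
        record.msg, record.args = msg, None
        return True  # keep the (now-sanitized) record

# Usage: attach the filter to a handler so every emitted line is scrubbed.
# handler = logging.StreamHandler()
# handler.addFilter(RedactSecrets())
```

Redacting at the logging layer is a last line of defense; the stronger fix, as the talk notes, is not putting credentials into log messages or URLs at all.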

Disclaimer: This summary was auto-generated from the video transcript using AI and may contain inaccuracies. It is intended as a quick overview; always refer to the original talk for authoritative content.