Presentation Material
Abstract
Apple’s on-device AI frameworks (CoreML, Vision, and AVFoundation) enable powerful automation and advanced media processing. However, these same capabilities introduce a stealthy attack surface that allows for payload execution, covert data exchange, and fully AI-assisted command-and-control operations.
This talk introduces MLArc, a CoreML-based C2 framework that abuses Apple’s AI processing pipeline for payload embedding, execution, and real-time attacker-controlled communication. By leveraging machine learning models, image processing APIs, and macOS native AI features, attackers can establish a fully functional AI-assisted C2 without relying on traditional execution mechanisms or external dependencies.
Beyond MLArc as a standalone C2, this talk explores how Apple’s AI frameworks can be weaponized to enhance existing C2s such as Mythic, providing stealthy AI-assisted payload delivery, execution, and persistence. The following Apple AI frameworks are used to embed the Apfell payload:
- CoreML - embedding and executing encrypted shellcode inside AI models.
- Vision - concealing payloads and encryption keys inside AI-processed images and retrieving them dynamically to bypass detection.
- AVFoundation - hiding and extracting payloads within high-frequency, AI-enhanced audio files using steganographic techniques.
This research marks the first public disclosure of Apple AI-assisted payload execution and AI-driven C2 on macOS, revealing a new class of offensive tradecraft that weaponizes Apple AI pipelines for adversarial operations. I will demonstrate MLArc in action, showing how Apple’s AI stack can be abused to establish fileless, stealthy C2 channels that evade traditional security measures.
This talk is highly technical, delivering new research and attack techniques that impact macOS security, Apple AI exploitation, and red team tradecraft.
AI Generated Summary
The talk presented research on weaponizing Apple’s native AI and media frameworks for offensive security operations, focusing on stealthy command-and-control (C2) and payload staging. The core technique involved abusing the Core ML framework by embedding encrypted payloads within .mlmodel files or their compiled .mlmodelc counterparts, specifically in model weights or metadata. These model files are lightweight, ubiquitous on macOS and iOS, and are not inspected by traditional security tools, creating a detection blind spot.
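To make the metadata channel concrete, here is a minimal Swift sketch, assuming the payload is parked base64-encoded under a hypothetical "payload" key in the model's creator-defined metadata (one of the locations the talk names; the weights variant is not shown). `MLModel.compileModel(at:)` and `MLModelMetadataKey.creatorDefinedKey` are standard Core ML APIs; the key name and staging scheme are illustrative assumptions, not MLArc's disclosed implementation.

```swift
import CoreML
import Foundation

func extractHiddenData(fromModelAt url: URL) throws -> Data? {
    // Compile the .mlmodel to .mlmodelc and load it with the normal API, so the
    // on-disk artifact and the API calls both look like routine Core ML usage.
    let compiledURL = try MLModel.compileModel(at: url)
    let model = try MLModel(contentsOf: compiledURL)

    // Creator-defined metadata is an arbitrary string dictionary that Core ML
    // carries along untouched -- a convenient place to park base64 data.
    guard let creatorDefined = model.modelDescription
            .metadata[MLModelMetadataKey.creatorDefinedKey] as? [String: String],
          let encoded = creatorDefined["payload"]   // hypothetical key
    else { return nil }
    return Data(base64Encoded: encoded)
}
```

Because the file compiles and loads as an ordinary model, both the stored artifact and the runtime behavior blend into legitimate ML activity, which is precisely the blind spot described above.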
A custom C2 framework named MLArc was introduced, which uses Core ML model files as the sole communication channel. Commands and outputs are encoded into model metadata, transmitted as binary model files, and decoded by the client using Core ML APIs. This method evades detection because the network traffic and stored artifacts appear to be legitimate model files. Additionally, the Vision framework was abused to hide data within image pixel data (steganography), and the AVFoundation framework was used to encode payloads as audio amplitudes, producing files that play as benign beeps but contain extractable binary data. Both media channels are sketched below.
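The image channel can be sketched in the same hedged spirit. Vision itself is an analysis API, so this sketch does the pixel work with CoreGraphics (which underlies Vision's image handling) and assumes a simple least-significant-bit scheme over RGBA bytes; the exact encoding used in the talk is not public.

```swift
import CoreGraphics
import Foundation
import ImageIO

func extractLSBPayload(fromImageAt url: URL, byteCount: Int) -> Data? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil) else { return nil }

    // Render into a known RGBA8 layout so the pixel byte order is predictable.
    let width = image.width, height = image.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    let drawn = pixels.withUnsafeMutableBytes { buf -> Bool in
        guard let ctx = CGContext(data: buf.baseAddress, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        ctx.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn, byteCount * 8 <= pixels.count else { return nil }

    // Reassemble the payload one bit at a time from each pixel byte's LSB.
    var out = Data(capacity: byteCount)
    var bitIndex = 0
    for _ in 0..<byteCount {
        var byte: UInt8 = 0
        for _ in 0..<8 {
            byte = (byte << 1) | (pixels[bitIndex] & 1)
            bitIndex += 1
        }
        out.append(byte)
    }
    return out
}
```

A companion sketch for the AVFoundation channel, assuming one bit per short tone burst on a near-ultrasonic 18 kHz carrier (loud = 1, quiet = 0); the carrier frequency, burst length, and amplitudes are illustrative. The resulting file plays as innocuous beeps while remaining trivially decodable by thresholding amplitude.

```swift
import AVFoundation

func writeBits(_ bits: [Bool], to url: URL) throws {
    let sampleRate = 44_100.0
    let samplesPerBit = 2_205                      // 50 ms per bit
    let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
    let frameCount = AVAudioFrameCount(bits.count * samplesPerBit)
    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
    buffer.frameLength = frameCount

    let samples = buffer.floatChannelData![0]
    for (i, bit) in bits.enumerated() {
        let amplitude: Float = bit ? 0.8 : 0.1     // bit value encoded as loudness
        for j in 0..<samplesPerBit {
            let t = Double(j) / sampleRate
            samples[i * samplesPerBit + j] =
                amplitude * Float(sin(2 * Double.pi * 18_000 * t))  // 18 kHz carrier
        }
    }
    let file = try AVAudioFile(forWriting: url, settings: format.settings)
    try file.write(from: buffer)
}
```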
The practical implications are significant: these techniques allow for persistent, file-based C2 that bypasses endpoint detection and response (EDR) and antivirus (AV) solutions, which do not parse or scan model files or analyze media for embedded data. The attack surface includes any application that loads user-supplied or external model files, representing a potential supply-chain risk. Key mitigations involve monitoring for suspicious API calls (e.g., Vision or Core ML usage in unexpected contexts), validating the provenance and necessity of model files and media assets within applications, and developing custom detection rules to inspect these file types for anomalous content.
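As a starting point for the custom detection rules mentioned above, here is a hedged Swift sketch that flags compiled models whose creator-defined metadata is unusually large or high-entropy, a simple heuristic for the metadata channel; the thresholds are illustrative, not tuned against real telemetry.

```swift
import CoreML
import Foundation

// Shannon entropy in bits per byte; random or encrypted data approaches 8.0.
func shannonEntropy(of data: Data) -> Double {
    guard !data.isEmpty else { return 0 }
    var counts = [Int](repeating: 0, count: 256)
    for byte in data { counts[Int(byte)] += 1 }
    let total = Double(data.count)
    return counts.reduce(0.0) { acc, count in
        guard count > 0 else { return acc }
        let p = Double(count) / total
        return acc - p * log2(p)
    }
}

// Flag a compiled model whose creator-defined metadata looks like smuggled
// data rather than the short prose strings legitimate models carry.
func flagSuspiciousModel(at compiledModelURL: URL) throws -> Bool {
    let model = try MLModel(contentsOf: compiledModelURL)
    guard let creatorDefined = model.modelDescription
            .metadata[MLModelMetadataKey.creatorDefinedKey] as? [String: String]
    else { return false }
    return creatorDefined.values.contains { value in
        let bytes = Data(value.utf8)
        return bytes.count > 4_096 || shannonEntropy(of: bytes) > 5.5
    }
}
```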