Beyond Trust: Building Secure Workflows with Untrusted Parts

Author: Denis Avetisyan


A new system, Mica, enables confidential computing by enforcing user-defined policies that govern communication between secure enclaves, even when the underlying components aren’t fully trusted.

The system delineates component trust levels – trusted and confidential in green, untrusted in red, and confidential but distrustful in yellow – while managing shared memory access rights, where incoming arrows signify read access and outgoing arrows denote write permissions, highlighting a nuanced approach to secure data handling.

Mica decouples data confidentiality from component trustworthiness by leveraging attestation and policy-driven control of communication within Trusted Execution Environments.

While confidential computing promises data protection within Trusted Execution Environments (TEEs), current systems rely on inherent trust between independently developed components – a fragile assumption for modern cloud deployments. This paper, ‘Sharing is caring: Attestable and Trusted Workflows out of Distrustful Components’, introduces Mica, an architecture that decouples confidentiality from trust by enabling tenants to define and enforce explicit communication policies between TEEs. Mica achieves this through a policy language and attestation mechanisms, extending Arm CCA with minimal changes to the trusted computing base, and demonstrably supports realistic cloud pipelines. Could this approach unlock truly composable and verifiable confidential workflows, even amongst mutually distrustful parties?


Data in Use: The Weakest Link

Historically, data security strategies have prioritized protecting information while it is being moved – in transit – or when it is stored – at rest. However, a significant vulnerability emerges the moment data is actively processed; this “use” phase represents a critical gap in traditional defenses. During computation, sensitive information must be decrypted for the processor to function, leaving it exposed to potential compromise from malicious software, insider threats, or even hardware vulnerabilities. This exposure is particularly concerning in modern computing environments where data frequently moves between various systems and services, increasing the attack surface and necessitating a more comprehensive security approach that extends beyond mere data storage and transmission.

Confidential Computing represents a significant evolution in data security by extending protection beyond data at rest and in transit to encompass its usage. This innovative approach utilizes hardware-based technologies – such as secure enclaves – to create isolated environments where sensitive data can be processed without exposure to the underlying operating system, hypervisor, or other potentially compromised software. The result is a dramatically reduced attack surface and the enablement of previously impossible applications; machine learning on encrypted datasets, collaborative analytics involving competitive data, and secure multi-party computation all become viable. By safeguarding data during active processing, Confidential Computing unlocks new possibilities for data-driven innovation while simultaneously addressing growing privacy concerns and regulatory requirements.

The increasing reliance on cloud computing and other untrusted environments has fundamentally altered the landscape of data security. Traditionally, organizations focused on protecting data both during transmission and while at rest, but this approach leaves a critical vulnerability: data in use. Processing sensitive information – financial records, personal health information, or intellectual property – within an environment beyond direct control necessitates a new approach. This demand for secure computation, even while actively being processed, is the primary driver behind the rise of Confidential Computing. The need to maintain privacy and integrity when leveraging the scalability and cost-effectiveness of the cloud, or collaborating with external partners, has created a pressing need for technologies that can isolate and protect data throughout its entire lifecycle, including during computation.

Building Walls Around Computation

Confidential Computing leverages Trusted Execution Environments (TEEs) as hardware-based isolation mechanisms. TEEs establish secure, isolated execution spaces that protect sensitive code and data from unauthorized access or modification, even with a compromised operating system or hypervisor. These environments typically involve dedicated hardware resources and a minimal, verified runtime, ensuring that code executed within the TEE operates with a high degree of integrity and confidentiality. Common TEE implementations include Intel Software Guard Extensions (SGX), AMD Secure Encrypted Virtualization (SEV), and ARM TrustZone, each providing a distinct approach to establishing these isolated execution spaces.

Ryoan and Veil are systems designed to build upon the foundational principles of Trusted Execution Environments (TEEs) by implementing enhanced sandboxing and enclave-like isolation specifically within Confidential Virtual Machines (CVMs). These systems move beyond standard virtualization security by providing a more granular level of isolation, effectively creating a hardware-rooted security boundary within the virtual machine itself. This approach allows for the protection of sensitive code and data even if the underlying hypervisor is compromised. Ryoan and Veil achieve this through techniques like memory encryption and attestation, verifying the integrity of the CVM’s execution environment and ensuring that only authorized code is running within the isolated space. The result is a strengthened security posture for applications running inside CVMs, offering a more robust defense against various attack vectors.

Plug-in Enclaves and Elasticlave represent advancements in confidential computing by offering granular control over memory sharing between the enclave and the host environment. Specifically, Plug-in Enclaves facilitate read-only shared memory regions, enabling the enclave to access static data without compromising its isolation. Elasticlave extends this capability with writable shared memory, allowing for dynamic data exchange while maintaining security boundaries. These features are designed for portability across various CPU architectures, including x86, ARM, and RISC-V, and aim to minimize performance overhead associated with data transfer between the enclave and the host, thereby supporting a broader range of applications requiring secure data processing and communication.
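As a loose analogy for these two sharing modes, the sketch below uses an ordinary file-backed mapping – Python's `mmap` with `ACCESS_READ` versus `ACCESS_WRITE` – as a stand-in for enclave shared memory; the file contents and names are invented for illustration, and this is not how either system is actually implemented:

```python
import mmap, os, tempfile

# A read-only mapping models Plug-in-Enclave-style sharing of static data;
# a writable mapping models Elasticlave-style dynamic exchange.
fd, path = tempfile.mkstemp()
os.write(fd, b"static-model-weights")

ro = mmap.mmap(fd, 0, access=mmap.ACCESS_READ)   # read-only shared region
initial = bytes(ro[:6])                          # the enclave can read static data
try:
    ro[0:1] = b"X"                               # ...but writes are rejected
    write_denied = False
except TypeError:
    write_denied = True

rw = mmap.mmap(fd, 0, access=mmap.ACCESS_WRITE)  # writable shared region
rw[0:6] = b"shared"                              # dynamic exchange is permitted
updated = bytes(rw[:6])

ro.close(); rw.close(); os.close(fd); os.unlink(path)
print(initial, write_denied, updated)            # b'static' True b'shared'
```

The analogy captures only the access-rights distinction; real enclave sharing additionally involves hardware-enforced isolation boundaries that a plain file mapping does not provide.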

Existing confidential pipeline designs differentiate trusted components (green) from untrusted ones (red), with dotted lines highlighting the exposed interface to the untrusted host.

Verifying the Fortress: The Role of Attestation

Attestation is a critical security process used to validate that a Trusted Execution Environment (TEE) is genuine and operating correctly before sensitive computations are performed within it. This process involves the TEE providing cryptographic proof of its identity and its current state – specifically, the software and configuration it is running – to a verifier. Successful attestation confirms that the TEE has not been tampered with and is running trusted code, thereby establishing a root of trust. The verifier, which could be a remote server or a local component, uses this proof to determine whether to grant access to sensitive data or allow the execution of confidential operations within the TEE.
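The handshake described here can be sketched in miniature. The snippet below is a toy model, not a real TEE protocol: `HARDWARE_KEY` stands in for a hardware root of trust, and an HMAC stands in for the asymmetric signature a real quoting mechanism would produce.

```python
import hashlib
import hmac

HARDWARE_KEY = b"device-unique-key"  # stand-in for a hardware root of trust
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-image-v1").hexdigest()

def tee_generate_report(image: bytes) -> dict:
    """What the TEE produces: a measurement of its code plus a MAC over it."""
    measurement = hashlib.sha256(image).hexdigest()
    tag = hmac.new(HARDWARE_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return {"measurement": measurement, "tag": tag}

def verifier_attest(report: dict) -> bool:
    """What the verifier does: check authenticity first, then the measurement."""
    expected_tag = hmac.new(HARDWARE_KEY, report["measurement"].encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_tag, report["tag"]):
        return False  # report was forged or tampered with in transit
    return report["measurement"] == EXPECTED_MEASUREMENT

print(verifier_attest(tee_generate_report(b"trusted-enclave-image-v1")))  # True
print(verifier_attest(tee_generate_report(b"malicious-image")))           # False
```

Only after this check succeeds would the verifier release keys or sensitive inputs to the TEE, which is the "root of trust" ordering the paragraph above describes.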

Mica utilizes attestation to separate the establishment of trust from the requirement of confidentiality, resulting in both improved security and increased deployment flexibility. Traditional confidential computing systems require trust to be established before data is decrypted, necessitating a large Trusted Computing Base (TCB) encompassing all components involved in decryption and execution. Mica, by leveraging attestation to verify TEE integrity prior to data access, significantly reduces the TCB size. Benchmarks indicate a reduction of approximately 2.2 to 3 times compared to conventional confidential computing deployments, as the TCB is limited to the attestation process itself rather than encompassing the entire execution environment.

Group Attestation and Swarm Attestation are scaling mechanisms designed to support attestation processes involving a substantial number of Trusted Execution Environments (TEEs). These methods address the limitations of traditional attestation approaches when applied to large deployments. Specifically, they minimize the overhead associated with attestation by reducing the amount of data each TEE contributes to the overall attestation report. Current implementations utilizing these techniques demonstrate an average policy blob size of only 450 bytes per peer, significantly decreasing network bandwidth requirements and processing load compared to methods requiring larger attestation bundles from each TEE.
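To make the scaling concrete: with a per-peer policy blob of roughly 450 bytes, total attestation evidence grows linearly in the number of peers. The fixed report overhead below (`BASE_REPORT_BYTES`) is an invented placeholder, not a figure from the paper:

```python
BASE_REPORT_BYTES = 1024  # assumed fixed per-report overhead (illustrative)
PER_PEER_BYTES = 450      # per-peer policy blob size cited in the text

def attestation_size(num_peers: int) -> int:
    """Linear model: fixed report overhead plus one policy blob per peer."""
    return BASE_REPORT_BYTES + PER_PEER_BYTES * num_peers

for n in (1, 10, 100):
    print(n, attestation_size(n))  # grows by 450 bytes per additional peer
```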

Mica exhibits a linearly scaling attestation size with the number of Realms, offering a more efficient solution compared to vanilla CCA.

From Theory to Practice: Real-World Impact

Confidential computing is rapidly expanding the horizons of machine learning and content analysis, and platforms like Mica are central to this evolution. By creating a hardware-based, isolated execution environment, sensitive data used in applications such as Large Language Model (LLM) inference and video moderation remains protected even while being processed. This is particularly crucial for maintaining user privacy and intellectual property. LLMs, often trained on vast datasets containing personal information, can be deployed with greater security, allowing for personalized experiences without compromising data confidentiality. Similarly, in video moderation, sensitive content can be analyzed for policy violations without exposing it to the broader system or potential attackers, ensuring both compliance and user safety. The ability to perform these computationally intensive tasks on encrypted data, facilitated by confidential computing, is unlocking entirely new possibilities for data-driven innovation.

A robust system of memory management is foundational to secure and functional confidential computing. Protected Memory establishes isolated enclaves, shielding sensitive data and code from unauthorized access – a critical defense against malicious software and data breaches. Conversely, Unprotected Memory provides a carefully regulated channel for communication and data exchange between the secure enclave and the host environment. This controlled interaction is essential for practical applications; it allows the enclave to receive input, deliver results, and leverage host resources without compromising the confidentiality guaranteed by the protected space. The interplay between these two memory types – security through isolation balanced with necessary connectivity – is what enables confidential computing to move beyond theoretical possibility and deliver real-world utility.

Efficient data transfer is paramount in modern computing, and shared memory architectures provide a significant performance boost for Trusted Execution Environments (TEEs). By allowing direct access to a common memory region between the TEE and other processes, the overhead associated with traditional inter-process communication methods – such as copying data between address spaces – is substantially reduced. This approach proves particularly valuable in scenarios demanding high throughput, like real-time video processing or large language model inference, where minimizing latency is crucial. The TEE can rapidly receive input data and return results without the performance penalties of slower communication channels, effectively bridging the gap between secure execution and application responsiveness. This streamlined data exchange not only accelerates processing but also optimizes resource utilization, making shared memory a cornerstone of high-performance, security-focused systems.
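A minimal sketch of this pattern, using Python's `multiprocessing.shared_memory` in place of TEE shared regions (the producer/consumer roles and payload are illustrative):

```python
from multiprocessing import shared_memory

def producer(payload: bytes) -> str:
    """Write input directly into a shared region; no copy through a socket or pipe."""
    shm = shared_memory.SharedMemory(create=True, size=len(payload))
    shm.buf[:len(payload)] = payload
    shm.close()
    return shm.name  # the region's name is all the consumer needs

def consumer(name: str, size: int) -> bytes:
    """Attach to the region by name and read the data in place."""
    shm = shared_memory.SharedMemory(name=name)
    data = bytes(shm.buf[:size])
    shm.close()
    shm.unlink()  # release the region once consumed
    return data

name = producer(b"frame-0001")
print(consumer(name, len(b"frame-0001")))  # b'frame-0001'
```

The point of the sketch is the absence of a serialization/copy step between the two sides; in a TEE setting the same win applies, with the access-control policy deciding who may attach to which region.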

Orchestrating Trust: Secure Control Flow

The movement of execution between a Trusted Execution Environment (TEE) and the normal operating system, known as control-flow transitions, represents a critical security boundary. These transitions aren’t simply jumps in code; they are orchestrated pathways where sensitive data and operations are potentially exposed. Consequently, each transition demands meticulous policy enforcement to guarantee confidentiality and integrity. Without stringent controls, malicious code could manipulate these shifts, bypassing the TEE’s protections and accessing confidential information. Establishing a robust framework for managing these transitions is therefore paramount in building systems where security isn’t just an add-on, but a fundamental aspect of the execution model, effectively creating a secure enclave for sensitive operations.

Mica establishes a robust security framework by employing Policy Configuration to meticulously govern transitions between standard execution environments and trusted execution environments (TEEs). This configuration isn’t simply a gatekeeper, but a dynamic rule set defining precisely under what conditions data can move, which computations are permitted within the TEE, and how results are handled upon return. By formalizing these controls, Mica ensures that sensitive data remains isolated and protected throughout its lifecycle, adhering to stringent compliance requirements and mitigating the risk of unauthorized access or modification. The system allows for granular policies, enabling developers to tailor security measures to the specific needs of each application and data type, thus establishing a verifiable chain of trust from data origin to final output.
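The paper defines its own policy language; as a rough, invented illustration of the kind of rule such a configuration might encode – which peers a component may share memory with, and with what rights – consider the following sketch (component names and schema are hypothetical, not Mica's actual syntax):

```python
# Hypothetical tenant-defined policy: each component lists its permitted
# peers and the access rights it holds on regions shared with them.
POLICY = {
    "inference-realm": {
        "preprocessor": {"read"},           # may only read the preprocessor's output
        "moderator":    {"read", "write"},  # bidirectional exchange allowed
    },
}

def allowed(policy: dict, component: str, peer: str, right: str) -> bool:
    """Default-deny check: a right exists only if the policy grants it."""
    return right in policy.get(component, {}).get(peer, set())

print(allowed(POLICY, "inference-realm", "preprocessor", "read"))    # True
print(allowed(POLICY, "inference-realm", "preprocessor", "write"))   # False
print(allowed(POLICY, "inference-realm", "untrusted-host", "read"))  # False
```

The default-deny shape matters: anything the tenant did not explicitly grant – including any interaction with an unlisted peer – is refused, which is how explicit policy can substitute for implicit trust between components.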

Arm’s Confidential Computing Architecture (CCA) establishes a robust and versatile foundation for building confidential virtual machines (CVMs), fundamentally altering the landscape of secure computation. By creating isolated execution environments, CCA protects data in use – a critical advancement beyond traditional encryption methods that only safeguard data at rest or in transit. This architecture leverages hardware-based isolation, employing memory tagging and encryption to prevent unauthorized access even from privileged software, including the hypervisor. Consequently, sensitive operations – such as processing financial data, executing machine learning models with proprietary algorithms, or handling personal identifiable information – can be performed with a significantly reduced risk of compromise. The design principles of Arm CCA are not limited to specific cloud providers or operating systems, fostering interoperability and enabling a broader ecosystem of trust in increasingly complex computational environments, ultimately paving the way for more secure and privacy-preserving applications.

The CCA architecture facilitates compositional control by enabling the creation of modular policies from reusable skills.

The pursuit of isolated, trustworthy components feels…familiar. Mica, with its focus on policy-driven communication between TEEs, attempts to solve the problem of inherent distrust. It’s a noble effort, predictably complex, and destined to become, at some point, a source of wonderfully creative debugging sessions. As Alan Turing observed, “There is no position which is not occupied by some body.” In this case, that ‘body’ is the inevitable edge case, the unforeseen interaction, the production environment’s insistence on proving the system’s limits. The architecture elegantly decouples confidentiality from component trust, yet one anticipates the day when a misconfigured policy, or an unanticipated data flow, will demonstrate that even the most meticulously designed system is merely a temporary reprieve from chaos. It’s not a failure of the concept, simply a testament to the relentless nature of reality.

What’s Next?

The pursuit of isolating computation within trusted enclaves, as exemplified by Mica, inevitably bumps against the realities of system complexity. It’s a beautifully elegant solution, naturally, until production discovers a corner case involving a forgotten system call or a misconfigured network policy. One suspects the documentation will lie again. The decoupling of confidentiality from component trustworthiness is a worthy goal, but the devil, as always, resides in the enforcement. Expect a proliferation of ‘TEE Policy as Code’ frameworks, followed swiftly by a corresponding rise in policy-related incidents. They’ll call it AI and raise funding, naturally.

A crucial, and likely overlooked, challenge lies in the operationalization of these policies. Mica addresses communication constraints, but the lifecycle management of those constraints – versioning, auditing, revocation – will be a logistical nightmare. The system used to be a simple bash script, after all. Future work will undoubtedly focus on automated policy generation and verification, but anyone who’s stared into the abyss of formal verification knows that completeness is a myth.

Ultimately, this field will be defined not by theoretical advancements, but by the accumulated weight of tech debt – or, more accurately, emotional debt with commits. The promise of secure computation is alluring, but the path forward is paved with compromises, workarounds, and the inevitable realization that even the most meticulously crafted enclave is still just a box, and boxes can be broken.


Original article: https://arxiv.org/pdf/2603.03403.pdf

Contact the author: https://www.linkedin.com/in/avetisyan/


2026-03-05 23:44