
Tinfoil Enclaves: A Technical Overview
Introduction
Tinfoil makes it easy to run AI models inside isolated and confidential computing environments called secure hardware enclaves. Secure enclaves provide strong confidentiality, integrity, and transparency guarantees for the program they run. However, secure enclaves were designed and built to isolate a workload only from the cloud provider, not from the operator of the service itself. This blog post covers how we remove trust from ourselves using secure enclaves, allowing us to serve AI models in a way that is verifiably private.
What does verifiability even mean?
When we say verifiable privacy, we mean that privacy is enforced by mechanisms you can check yourself, not by promises or policies. As Steph Ango, CEO of Obsidian, calls it, this kind of verifiability is a self-guaranteeing promise: you do not need to trust a vendor or a policy document, because you can independently verify that the system itself cannot see your data. Tinfoil turns "privacy" from a marketing claim into a property you can check — via open code, hardware attestation, and client-side verification — so the guarantee persists even if our company, our infrastructure, or our terms change. In other words, the mechanism, not goodwill, enforces the promise.
What are secure hardware enclaves?
Secure enclaves, sometimes called Trusted Execution Environments or TEEs, are hardware security features built into modern CPUs and GPUs that provide hardware-enforced isolation for running programs. Think of a secure enclave as a locked safe inside the server: operators can move the safe or cut its power, but only the code running inside has the keys to open it. Hardware attestation acts like the safe's unique serial number, allowing you to verify that the safe hasn't been tampered with or replaced.
Compatible processors and GPUs developed by NVIDIA, AMD, or Intel can be configured to establish a completely separate environment within the server, where both the program and data remain encrypted and inaccessible — even to the server's own operating system, hypervisor, or cloud administrators. In essence, a secure enclave acts like a physically isolated, tamper-resistant environment within a server.
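To make the idea of a "measurement" concrete, here is a minimal sketch, purely illustrative, of comparing an enclave's reported code measurement against an expected value. Real attestation is performed by the hardware itself and uses vendor-signed attestation reports rather than a plain hash, but the core idea is the same: a measurement is a digest of exactly what was loaded into the enclave.

```python
import hashlib

def measure(code: bytes, config: bytes) -> str:
    # A "measurement" is a digest over exactly what was loaded into the enclave.
    return hashlib.sha384(code + config).hexdigest()

expected = measure(b"model-server-binary", b"enclave-config")

# At attestation time, the hardware reports what was actually loaded.
reported = measure(b"model-server-binary", b"enclave-config")

assert reported == expected, "enclave is running unexpected code"
print("measurement matches:", reported[:16], "...")
```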
Why are enclaves hard to use?
Secure enclaves have been around for a few years, but their complexity has prevented widespread adoption despite the necessary hardware mechanisms being available on modern server-grade machines. Basic operations, from memory access to GPU communication to loading code, require special security measures. Running AI workloads adds another layer of complexity, since they need fast GPU access while maintaining isolation. Getting all of this right requires deep expertise across systems, security, and hardware architecture.
How does Tinfoil leverage secure enclaves?
By focusing exclusively on AI workloads, we can eliminate a lot of the complexity associated with secure enclaves. We built Tinfoil to handle all the hardware, technical details, and required infrastructure to make using secure enclaves for AI inference seamless. We handle everything so that you don't have to worry about getting intricate security implementations right. Our goal is to make Tinfoil so easy to integrate that you can focus on building your AI application the way you do today, but fully benefit from the privacy guarantees offered by Tinfoil.
However, we also care a lot about transparency and verifiability. Claiming "privacy" without technical verification mechanisms leads us back to the "pinky-promises" of legal agreements. We've achieved this end-to-end verifiability of our privacy claims by making our code open-source and building on top of cryptographic attestation mechanisms for verifying the legitimacy of the secure hardware. For any AI application deployed using Tinfoil, anyone can verify the privacy guarantees for themselves without having to take our word for it.
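To give a sense of what this looks like from the application side, here is a hypothetical integration sketch. The class and method names below (TinfoilClient, verify_enclave, chat) are placeholders, not Tinfoil's actual SDK; the point is the shape of the flow: the client verifies the enclave before any private data is sent, then talks to it like a normal API.

```python
class TinfoilClient:
    """Hypothetical client wrapper; names are placeholders, not Tinfoil's actual SDK."""

    def __init__(self, enclave_url: str):
        self.enclave_url = enclave_url

    def verify_enclave(self) -> bool:
        # 1. Fetch the enclave's attestation report.
        # 2. Check the hardware vendor's signature over the report.
        # 3. Compare the reported code measurement against the published one.
        return True  # placeholder for the real checks

    def chat(self, prompt: str) -> str:
        # Data is only sent over a channel that terminates inside the verified enclave.
        return f"(response to {prompt!r} from inside the enclave)"

client = TinfoilClient("https://enclave.example.com")
if client.verify_enclave():
    print(client.chat("Summarize this confidential document"))
```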
Advantages of building with Tinfoil
Tinfoil supports a wide range of applications, from basic inference to large AI training workloads, running with bare-metal performance on state-of-the-art NVIDIA GPUs. By using Tinfoil for your AI workloads, you have the guarantee that all potentially sensitive or proprietary data remains private and only processed inside of isolated secure hardware. Importantly, we've built verification tools and software to make it easy to verify all these claims, which means you never need to trust us with your data.
Overall, the pitch is simple: you get to continue to use AI as before, while Tinfoil ensures that everything is running privately. This makes you resilient — out of the box — against entire categories of security threats, including data breaches, ransomware attacks and social engineering. Whether serving individual users or enterprise customers, Tinfoil provides the highest levels of security that are currently available in cloud computing.
The security guarantees
Tinfoil offers three data security guarantees out-of-the-box:
- Confidentiality
- Integrity
- Transparency
Confidentiality
Secure enclaves provide significantly stronger data protection than traditional SaaS deployments on the cloud. In a standard cloud deployment, a SaaS provider (e.g., OpenAI or Cursor) deploys their application on a virtual machine managed by a cloud provider (e.g., Azure or AWS). When a client queries the service, it sends its private data to the server, exposing it to both the SaaS provider and the cloud provider.
- Standard cloud providers (AWS, Azure, Google Cloud) rely on virtualization to provide isolation between virtual machines. The hypervisor (a privileged piece of software responsible for managing all the hardware resources) is under the cloud provider's control and can access all the resources, which gives the provider visibility into any sensitive data being processed or stored.
- There are no restrictions on the SaaS provider's ability to access sensitive data. Application developers and cloud administrators have SSH access to the virtual machine and can access all of their users' data. Some access control mechanisms might exist to restrict this access, but these solutions stay in the realm of what we call "pinky promise" security. This isn't a theoretical threat: Facebook has fired multiple employees for abusing their privileged access to stalk and spy on people.
In contrast, secure hardware enclaves provide hardware-level isolation for sensitive workloads. This means all private data remains completely separated from the cloud provider and even from Tinfoil. Nobody can access the data being processed, not even the cloud provider's system administrators. This solution provides the highest level of confidentiality, and with Tinfoil you do not need to trust anyone and can verify everything for yourself. You can have peace of mind that your data is not being accessed, trained on, or sold to third parties.
Integrity
Secure hardware enclaves are also great at creating a proof of their internal state to enforce code and data integrity. Using a combination of cryptographic hashes and signatures, secure enclaves can prove that they are authentic (running on hardware endorsed by NVIDIA, Intel, or AMD), measure their configuration, and uniquely identify the code they are running. This makes it possible to prove that the correct application is running inside a trusted Tinfoil enclave and that it is correctly isolated from the cloud provider and other services. With Tinfoil, the client SDKs automatically verify attestation through a series of cryptographic checks. This process guarantees that the AI model is running in a genuine, properly configured enclave with all security measures in place, and it can be publicly re-verified at any time.
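The sketch below outlines, in simplified form, the checks a verifying client performs. Real verification relies on the hardware vendors' attestation report formats (e.g., AMD SEV-SNP or NVIDIA GPU attestation) and their signature chains; the types and function names here are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class AttestationReport:
    measurement: str          # digest of the code and configuration loaded into the enclave
    tls_key_hash: str         # binds the TLS session key to this specific enclave
    vendor_signature: bytes   # signed by keys rooted in the hardware vendor

def verify_vendor_signature(report: AttestationReport) -> bool:
    # Placeholder: in practice this walks the signature chain up to the
    # CPU/GPU vendor's root of trust using the vendor's verification tooling.
    return True

def verify_attestation(report: AttestationReport,
                       expected_measurement: str,
                       session_key_hash: str) -> bool:
    if not verify_vendor_signature(report):
        return False  # not genuine, vendor-endorsed hardware
    if report.measurement != expected_measurement:
        return False  # unexpected code is running inside the enclave
    if report.tls_key_hash != session_key_hash:
        return False  # the connection does not terminate inside the enclave
    return True
```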
Transparency
Secure hardware enclaves can verify their running code, but this verification alone doesn't ensure the code is trustworthy. To address this, we've implemented complete code transparency in our platform. Through our public GitHub repository, anyone can independently verify that the code running in our enclaves matches our published code exactly, eliminating the possibility of hidden backdoors or malicious code. Our deployment process is fully automated and transparent. When we push code updates to GitHub, GitHub Actions automatically compiles the code and publishes both the binary and its cryptographic measurement to a transparency log maintained by Sigstore. This creates an immutable public record of every version of our code. This transparency enables a robust verification process.
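As a rough illustration of the measurement step, the snippet below hashes a built binary the way a CI job might before recording its digest. The actual signing and publishing to the transparency log is handled by Sigstore tooling in CI; this only shows what the recorded value is.

```python
import hashlib
import sys

def binary_digest(path: str) -> str:
    # Stream the file so large binaries do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Example usage: python measure.py ./inference-server
    print(binary_digest(sys.argv[1]))
```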
When users connect to Tinfoil, their devices can verify two things:
- First, that they're communicating with a genuine enclave, and
- Second, that the code running inside matches what's published in our transparency log.
This two-step verification ensures the code hasn't been tampered with and matches what we've publicly authorized. By maintaining this public transparency log, we enable continuous community oversight. Security researchers, developers, and users can audit our code at any time and verify that what's running in production exactly matches what we've published.
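Conceptually, the check reduces to comparing two measurements, as in the illustrative sketch below, where the fetch functions stand in for retrieving the signed transparency-log entry and the vendor-signed attestation report.

```python
def fetch_published_measurement(version: str) -> str:
    # Placeholder: read from the Sigstore transparency log, after verifying
    # the log entry's signature.
    return "placeholder-digest"

def fetch_enclave_measurement(enclave_url: str) -> str:
    # Placeholder: taken from a vendor-signed attestation report, after
    # verifying its signature chain (step 1: the enclave is genuine).
    return "placeholder-digest"

def verify_deployment(enclave_url: str, version: str) -> bool:
    published = fetch_published_measurement(version)
    attested = fetch_enclave_measurement(enclave_url)
    # Step 2: the code running in the enclave matches the published release.
    return attested == published

print(verify_deployment("https://enclave.example.com", "v1.2.3"))
```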
Conclusion
Tinfoil extends local security boundaries to the cloud, enabling AI-powered applications with verifiable privacy guarantees. By combining hardware-level isolation, cryptographic attestation, and complete code transparency, we make it possible to deploy AI models that protect user data with technical enforcement rather than trust. This brings a level of privacy previously only available with local or on-prem deployments, while maintaining the convenience and scalability of cloud computing.