Shrinking Complexity Risks in AI Cloud Deployments

The Challenge of Trust in AI Cloud Services
Today, a company using an AI product deployed in the cloud must trust many third parties and a complex infrastructure stack: the cloud provider, AI stack services (offering observability, analytics, and agent capabilities), and the inference provider that runs the AI model. All of these parties can see the data in plaintext, and together they present a large attack surface.
Enterprises consistently cite data security and the privacy of proprietary data as the number-one inhibitor of AI adoption. The current alternatives are limited to legal contracts (such as data privacy agreements or privacy policies) or a retreat to on-premises deployments.
Tinfoil's Approach: Reducing Trust
Tinfoil's approach greatly reduces the complexity risk of using AI cloud services by excluding all third parties, and most of the existing AI infrastructure, from the end user's trust boundary. We isolate the inference server, hardening it against common threats and providing a private alternative to the classic AI stack. For a more detailed introduction to our approach, check out our introduction to Tinfoil post.
When you use an application deployed with Tinfoil, you can verify that nobody but you can access your data—similar to WhatsApp's end-to-end confidentiality guarantees. This means enterprises deploying AI applications through Tinfoil can be certain their data won't be:
- Exposed to third-party security breaches
- Used for AI model training without consent
- Sold to the highest bidder
AI Inference Is Perfect for Secure Enclaves
AI inference servers are ideal candidates for secure enclave technologies. With their straightforward control flow and static data access patterns, inference workloads can easily be made stateless by disabling cross-user query caching, ensuring complete isolation between different users' requests (as sketched below). This is a perfect match for the natural statelessness of secure enclaves (all memory is encrypted and volatile), and ensures that we can prove our confidential inference endpoints neither expose nor remember anything about your data. Learn more about how our enclaves work in our technical overview of Tinfoil enclaves.
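To make this concrete, here is a minimal Python sketch of per-request statelessness (not Tinfoil's actual server code): any cache the model needs, such as a KV cache, is scoped to a single request and discarded when it completes. The `model.generate` call is a hypothetical stand-in for a real inference engine.

```python
# Illustrative sketch only: all inference state lives in a per-request
# scope that is created for one query and dropped when it completes,
# so no cache entry can ever be shared across users.

from dataclasses import dataclass, field


@dataclass
class RequestScope:
    """All per-request state; discarded when the request finishes."""
    prompt: str
    kv_cache: dict = field(default_factory=dict)  # never outlives the request


def run_inference(model, prompt: str) -> str:
    scope = RequestScope(prompt=prompt)
    try:
        # `model.generate` is a hypothetical engine call that may fill
        # scope.kv_cache, but only for the duration of this request.
        return model.generate(scope.prompt, cache=scope.kv_cache)
    finally:
        # Explicitly drop per-request state. Inside an enclave, the backing
        # memory is encrypted and volatile, so nothing persists regardless.
        scope.kv_cache.clear()
```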
Our Technical Safeguards
We build on cutting-edge secure enclaves and hardware-backed isolation (AMD SEV-SNP, NVIDIA Confidential Compute) to enforce end-to-end confidentiality. Data in enclave memory is always encrypted, keeping it inaccessible even to an attacker with physical access to the machine. We further eliminate all remote access capabilities from our isolated inference servers, removing the possibility of unauthorized data access by Tinfoil itself or other insider threats.
We architect Tinfoil to reduce the likelihood and impact of side channels through several mitigation strategies:
- All AMD platform secrets are kept in a separate AMD secure co-processor
- AMD attestation reports are additionally signed by Tinfoil, so even a powerful external attacker holding AMD platform secrets cannot pass off a forged report to a user (see the sketch after this list)
- We do not share secrets (such as TLS keys) across different enclaves and frequently rotate all key material
- User requests are only transiently present on the server, minimizing the risk of direct exposure to side channels
- The machines we run on are never shared with anyone else and only ever execute Tinfoil-authorized code
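As a hedged illustration of the dual-signature point above, the sketch below shows what accepting a report only when both parties have signed it could look like on the client. Real SEV-SNP reports are signed with ECDSA P-384 through AMD's VCEK/ASK/ARK certificate chain; for brevity this sketch models both signers as Ed25519 keys, and all names are illustrative rather than Tinfoil's actual API.

```python
# Hypothetical dual-signature check: the report is accepted only if BOTH
# AMD and Tinfoil signed it, so compromising either party alone is not
# enough to forge an attestation.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_dual_signed_report(
    report: bytes,
    amd_sig: bytes, amd_key: Ed25519PublicKey,
    tinfoil_sig: bytes, tinfoil_key: Ed25519PublicKey,
) -> bool:
    for sig, key in ((amd_sig, amd_key), (tinfoil_sig, tinfoil_key)):
        try:
            key.verify(sig, report)  # raises InvalidSignature on mismatch
        except InvalidSignature:
            return False
    return True
```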
Transparency Through Open Source
We make our security claims auditable by open-sourcing all security-sensitive code, using automated builds and transparency logs to prevent supply chain attacks, and enabling instant client-side verification. To learn more about how we build trust through these mechanisms, read our detailed explanation of how Tinfoil builds trust.
At Tinfoil, we've built a transparent architecture that lets you audit the entire trusted codebase of your cloud deployment. In particular, our secure enclaves do not rely on closed-source hypervisors (as required by AWS Nitro Enclaves) or paravisors (as required by Azure Confidential Computing).
Remote attestation of secure enclaves lets a client instantly verify a server's configuration and binary integrity. We combine automated builds and transparency logs to provide the added guarantee that the attested binary corresponds to code we have open-sourced, making it possible for enterprises to expedite security audits and compliance reviews.
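As a rough sketch of that client-side check (the field names and log entry format are assumptions for illustration, not Tinfoil's actual verifier API): the launch measurement in the enclave's attestation report must equal the measurement the transparency log recorded for a reproducible build of the open-source code.

```python
# Illustrative only: bind the running enclave to an audited source commit
# by comparing the attested measurement against the transparency log entry.

import hmac


def verify_release(attestation: dict, log_entry: dict, expected_repo: str) -> bool:
    # The log entry ties a source repository and commit to a build digest.
    if log_entry["repo"] != expected_repo:
        return False
    attested = bytes.fromhex(attestation["measurement"])
    logged = bytes.fromhex(log_entry["measurement"])
    # Constant-time comparison: a match shows the enclave is running the
    # exact binary reproducibly built from the audited, open-source commit.
    return hmac.compare_digest(attested, logged)
```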
You can see how this works in our private chat.
The Future of Confidential AI
Tinfoil significantly reduces the risks of cloud-based AI deployment, enabling scalable adoption of AI applications across enterprises. Our vision is for every business function to use cutting-edge AI tools without confidentiality concerns or the laborious process of on-premises deployment.
We are building a future where using AI with strong confidentiality guarantees becomes as ubiquitous and essential as TLS/SSL on the web. Our primary goal when architecting Tinfoil is to provide customers with a clear trust boundary—shifting away from vague, expansive cloud infrastructure toward an auditable, open-source, human-scale trusted codebase.