
Running Private DeepSeek R1 with Verifiable Security

February 2, 2025 • Tinfoil Team • 4 min read
[Image: a technical drawing of a whale]

Today, we're thrilled to launch Tinfoil Private Chat with DeepSeek R1-70B, our first product designed to redefine trust in AI. We're replacing blind faith in inference providers with provable security. It's a particularly opportune time to do so: since R1 was released last week, we've seen an uptick in concerns around AI and privacy. Curiously, even OpenAI employees are worried about all the data users send to AI and cloud providers:

"Do As I Say, Not As I Do"

The Chinese government might be spying on you,[1] but so are OpenAI,[2] your car, your AI wearable, and the US government (maybe all at once). DeepSeek's servers have already suffered a data breach. The problem is that models like DeepSeek R1 are too large for most people to run locally, so they must be served from powerful GPUs in the cloud. To use the latest AI capabilities, then, you have to trust many third parties, all of which can see, collect, sell, and use your private data, even if they "pinky promise" that they won't!

And if you're a company with proprietary data trying to use AI applications, you're left with a few unpleasant options: keep AI on-prem and sacrifice the enormous benefits of cloud deployments; trust often-unenforceable contracts, like Data Processing Agreements (DPAs) or access-control policies offered by AI providers, while accepting the significant privacy and security risks that remain even from accidental breaches; or simply choose not to use the technology at all.

Just as TLS ensures that someone on the network can't see your financial details when you log into your bank's website, and end-to-end encrypted messaging apps like iMessage, Signal, and WhatsApp ensure your conversations remain private, your interactions with AI chat assistants, AI financial assistants, or AI therapists should be confidential too. Your messages and private data must remain accessible only to you, never to the AI service provider or other snooping third parties.

Replace trust with provable security

At Tinfoil, we're making it easy to deploy private AI applications with verifiable security guarantees — ensuring only end-users ever access their private data. Whether you're an individual who wants truly private chats with an LLM or an AI startup hoping to build trust with enterprise buyers, Tinfoil is building the tools to make it possible.

How it works under the hood is detailed in our deep-dive blogs, but at a high level, we use confidential computing, similar to Apple Private Cloud Compute, but with additional layers of hardware security, code transparency, and auditability.[3]

This gets you the highest levels of security and privacy that are currently available in cloud computing. Whether you're using Tinfoil Chat, a Tinfoil inference endpoint, or an AI application deployed on the Tinfoil platform, we can guarantee (and you can verify) that no one — not the developer, not Tinfoil, not the cloud provider, nor any third party — ever has access to your data.
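To make the verification step concrete, here is a minimal sketch of what a client-side check could look like, in Python. Everything in it is hypothetical: the endpoint URL, the attestation format, and the helper functions are illustrative stand-ins for the general confidential-computing flow (verify the hardware vendor's signature over an attestation document, then compare the reported code measurement against the measurement of the audited open-source build), not Tinfoil's actual API.

    # Minimal sketch of client-side enclave verification (illustrative only).
    # The endpoint URL, attestation format, and helpers below are hypothetical
    # stand-ins, not Tinfoil's actual API.
    import json
    import urllib.request

    ENDPOINT = "https://inference.example.com"  # hypothetical enclave endpoint
    EXPECTED_MEASUREMENT = "..."  # hash of the audited build (hypothetical)

    def fetch_attestation(endpoint: str) -> dict:
        """Ask the server for the attestation document produced by its hardware."""
        with urllib.request.urlopen(f"{endpoint}/.well-known/attestation") as resp:
            return json.load(resp)

    def vendor_signature_is_valid(attestation: dict) -> bool:
        """Check that the attestation is signed by the CPU/GPU vendor's root keys.
        A real client walks the vendor certificate chain (e.g. AMD SEV-SNP or
        NVIDIA confidential computing); stubbed out here."""
        return bool(attestation.get("signature_valid"))  # placeholder check

    def measurement_matches(attestation: dict, expected: str) -> bool:
        """Compare the code measurement the hardware reports against the
        measurement of the source code we (or independent auditors) inspected."""
        return attestation.get("measurement") == expected

    attestation = fetch_attestation(ENDPOINT)
    if vendor_signature_is_valid(attestation) and measurement_matches(
        attestation, EXPECTED_MEASUREMENT
    ):
        # Only now do we trust the key bound to this attestation and send data.
        print("Enclave verified; safe to send the prompt.")
    else:
        raise RuntimeError("Attestation failed; do not send private data.")

The key design point is that trust is anchored in the hardware vendor's keys and the published source code, not in any promise from the service operator.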

What else is happening at Tinfoil?

The private chat and inference endpoint are just the tip of the iceberg. We're building an entire platform to deploy private, verifiable versions of AI applications: content moderation tools, secure code editors, medical and legal assistants, and more.

If you're building AI applications and want to offer your users the highest standard of security—with proven guarantees and without sacrificing performance or privacy-preserving observability—reach out at [email protected] or sign up for our Private Preview. Let's build the future of (private) AI together.


Footnotes

  1. Recently, large-scale attacks on US telecommunications systems prompted US government officials to recommend that American citizens use end-to-end encrypted messaging apps.

  2. A standard enterprise contract with OpenAI (i.e., not Azure Private Cloud) only specifies that they won't train on your data, not that they can't access or analyze it.

  3. This is critical to alleviate concerns regarding backdoors. As security researchers like Matthew Green have highlighted (https://blog.cryptographyengineering.com/2025/01/17/lets-talk-about-ai-and-end-to-end-encryption/), this can be a concern even for models running locally. Complete end-to-end transparency and client-side verification ensure that, if no such backdoor exists in our GitHub repository, none is present on the Tinfoil server you're using. More blog posts coming soon!
