
Launching gpt-oss-120b on Tinfoil

August 5, 2025 • Tinfoil Team • 3 min read

Today we've added gpt-oss-120b to Tinfoil, now available through our private chat and inference API.

The Privacy Problem with AI

Just last week, OpenAI CEO Sam Altman made headlines with a stark reminder about AI use and personal privacy:

"There's no legal confidentiality for users' conversations" with ChatGPT.


Source: @TheChiefNerd on X

This highlights a fundamental issue with current AI systems:

Unlike conversations with doctors, lawyers, or therapists, your AI interactions have no privilege protection. As Altman put it, "People talk about the most personal sh*t in their lives to ChatGPT." From therapy sessions to legal advice, our most personal data is being fed into AI systems. Yet OpenAI can be legally compelled to disclose those conversations in litigation, as a court recently ordered in the New York Times lawsuit.

This creates a critical problem for anyone handling sensitive information. Patent attorneys drafting confidential filings, M&A lawyers working on deals, doctors discussing patient cases, and anyone else working with proprietary or personal data all face an impossible choice: embrace AI for its efficiency or protect confidentiality.

On Tinfoil, we never have access to this kind of user data and therefore cannot disclose it to anyone, even if we were legally required to.

Open-Source Models: Too Big to Run Locally at Scale

OpenAI just released state-of-the-art open-source GPT models: gpt-oss-20b and gpt-oss-120b. People on X and Reddit are excited to run them locally for privacy reasons — local-only deployments mean your data never leaves your computer. However, running the larger gpt-oss-120b requires around 80GB of VRAM, which puts it out of reach of consumer-grade hardware and makes local deployment slow and impractical for most users and organizations.

Tinfoil: Have Your Cake and Eat It Too

With Tinfoil, you get the privacy of running your model locally with the speed and scalability of the cloud. Nobody can see your data — not us, not the cloud provider, not anyone. We replace trust with provable security.

At Tinfoil, we're making it easy to deploy private AI applications in the cloud with verifiable security guarantees, ensuring that only end-users ever access their private data. Whether you're an individual who wants truly private chats with an LLM or an AI startup hoping to deploy AI with state-of-the-art security, Tinfoil is building the tools to make it possible.

How It Works

Under the hood, we use confidential computing similar to Apple Private Cloud Compute but with additional layers of hardware security, code transparency, and auditability. This delivers the highest levels of security and privacy currently available in cloud computing without sacrificing speed and scalability.

Whether you're using Tinfoil Chat, the Tinfoil inference API, or an AI application deployed on the Tinfoil platform, we give you provable guarantees (which you can verify for yourself) that no one — not the developer, not Tinfoil, not the cloud provider, nor any third party — can see or access your data.
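The core idea behind these guarantees is remote attestation: the hardware produces a cryptographic measurement of the exact code the server is running, and the client refuses to send any data unless that measurement matches a publicly auditable expected value. The sketch below illustrates that idea only — it is not Tinfoil's actual verifier, and the function names are invented for illustration.

```python
# Conceptual sketch of attestation-style verification (illustration only,
# not Tinfoil's real client or API). The enclave reports a hash
# ("measurement") of the code it booted; the client compares it against a
# publicly auditable expected value before trusting the server with data.
import hashlib
import hmac


def measure(code: bytes) -> str:
    """Stand-in for the hardware-produced hash of the running code."""
    return hashlib.sha256(code).hexdigest()


def verify_enclave(reported: str, expected: str) -> bool:
    """Proceed only if the server runs exactly the audited code."""
    # Constant-time comparison, as is standard for digest checks.
    return hmac.compare_digest(reported, expected)


# The expected measurement would come from a public, auditable source.
expected = measure(b"inference-server-v1")

print(verify_enclave(measure(b"inference-server-v1"), expected))  # True
print(verify_enclave(measure(b"tampered-server"), expected))      # False
```

In a real deployment the measurement is signed by the CPU vendor's hardware keys, so a malicious operator cannot forge it — that is what turns "trust us" into a provable guarantee.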

Getting Started

You can try out gpt-oss-120b right now in our private chat and via our inference API. Also feel free to reach out at [email protected] — we're always interested in hearing about your experience and feature requests.
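If the inference API follows the common OpenAI-compatible chat-completions shape, a request for gpt-oss-120b might look like the sketch below. The base URL and environment-variable name are assumptions for illustration — check the official Tinfoil docs for the real endpoint and authentication details.

```python
# Hypothetical sketch of calling gpt-oss-120b via an OpenAI-compatible API.
# The base_url and TINFOIL_API_KEY env var are assumed, not confirmed values.

def build_chat_request(prompt: str, model: str = "gpt-oss-120b") -> dict:
    """Build a standard OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


payload = build_chat_request("Summarize attorney-client privilege in one sentence.")
print(payload["model"])  # gpt-oss-120b

# With any OpenAI-compatible client, the call would then be roughly:
#   from openai import OpenAI
#   client = OpenAI(
#       base_url="https://api.tinfoil.sh/v1",       # assumed endpoint
#       api_key=os.environ["TINFOIL_API_KEY"],      # assumed env var
#   )
#   resp = client.chat.completions.create(**payload)
```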
