DeepSeek-R1-Distill-Llama-70B

High-performance reasoning model with exceptional benchmark results

Model Details

Part of DeepSeek's first-generation reasoning model family, DeepSeek-R1-Distill-Llama-70B distills the reasoning behavior of DeepSeek-R1 into a 70B-parameter Llama-based model. It achieves strong performance across math, code, and general reasoning tasks, producing explicit chain-of-thought reasoning before its final answers.
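
Like other DeepSeek-R1 distills, the model typically surfaces its chain of thought inside <think>...</think> tags ahead of the final answer. The snippet below is a minimal sketch of separating the reasoning trace from the answer, assuming that R1-style tag convention; the exact output format is not documented on this page.

import re

def split_reasoning(completion: str) -> tuple[str, str]:
    # Assumes R1-style output: "<think>...</think>" followed by the final answer.
    match = re.search(r"<think>(.*?)</think>", completion, re.DOTALL)
    if match is None:
        return "", completion.strip()
    return match.group(1).strip(), completion[match.end():].strip()

# Hypothetical completion text, used only for illustration:
raw = "<think>2 + 2 equals 4.</think>The answer is 4."
reasoning, answer = split_reasoning(raw)
print(answer)  # -> The answer is 4.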

Parameters

70.6 billion

Context Window

64k tokens

Recommended Use

Ideal for complex reasoning tasks, mathematical problems, and advanced coding applications requiring strong logical capabilities.

Supported Languages

Multilingual with strong performance across major languages

Usage Examples

Installation:

pip install tinfoil

Inference:

from tinfoil import TinfoilAI

# Connect to the confidential enclave serving this model
client = TinfoilAI(
    enclave="deepseek-r1-70b-p.model.tinfoil.sh",
    repo="tinfoilsh/confidential-deepseek-r1-70b-prod",
    api_key="YOUR_API_KEY",
)

# Send a chat request to the deepseek-r1-70b model
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Hello!",
        }
    ],
    model="deepseek-r1-70b",
)
print(chat_completion.choices[0].message.content)
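
Streaming (example sketch):

Reasoning models can produce long chain-of-thought outputs, so streaming the response improves perceived latency. The TinfoilAI client appears to follow the OpenAI SDK interface, so a streaming request would look roughly like the sketch below; the stream flag and chunk structure are assumptions carried over from that interface rather than details confirmed on this page.

from tinfoil import TinfoilAI

client = TinfoilAI(
    enclave="deepseek-r1-70b-p.model.tinfoil.sh",
    repo="tinfoilsh/confidential-deepseek-r1-70b-prod",
    api_key="YOUR_API_KEY",
)

# Stream a step-by-step answer token by token (assumes OpenAI-style streaming support)
stream = client.chat.completions.create(
    messages=[{"role": "user", "content": "Solve step by step: what is 17 * 24?"}],
    model="deepseek-r1-70b",
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()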