
Llama Guard 3 1B

Safety-focused model for content filtering and moderation

Model Details

Llama Guard 3 specializes in content moderation and safety checks: it evaluates both input prompts and output responses for safety and policy compliance. The model classifies content across 13 safety categories based on the MLCommons hazards taxonomy, including violent and non-violent crimes, hate speech, and more.
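Concretely, prompt classification sends the user turn alone, while response classification sends the user and assistant turns together. A hypothetical sketch of the two message shapes (the example content is illustrative, not from the model card):

```python
# Prompt classification: the guard model sees only the user turn.
prompt_check = [
    {"role": "user", "content": "How do I pick a lock?"},
]

# Response classification: the guard model sees the user turn plus the
# assistant's reply, and judges whether the reply violates policy.
response_check = [
    {"role": "user", "content": "How do I pick a lock?"},
    {"role": "assistant", "content": "I can't help with that."},
]
```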

Parameters

1.5 billion

Context Window

4k tokens

Recommended Use

Designed for content moderation, safety filtering, and evaluating both input prompts and model outputs for policy compliance.

Supported Languages

English, French, German, Hindi, Italian, Portuguese, Spanish, Thai

Usage Examples

Installation:

pip install tinfoil

Inference:

from tinfoil import TinfoilAI

client = TinfoilAI(
    enclave="models.default.tinfoil.sh",
    repo="tinfoilsh/default-models-nitro",
    api_key="YOUR_API_KEY",
)

# Ask the guard model to classify a user prompt; the reply is a
# safety verdict (e.g. "safe", or "unsafe" plus a category code),
# not a conversational answer.
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Hello!",
        }
    ],
    model="llama-guard3-1b",
)
print(chat_completion.choices[0].message.content)
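In Meta's Llama Guard output format, the verdict is either `safe`, or `unsafe` followed on the next line by the violated category codes (e.g. `S1`). A minimal sketch of turning that verdict into a moderation decision (the helper name and example strings are illustrative):

```python
def parse_guard_verdict(text: str) -> tuple[bool, list[str]]:
    """Split a Llama Guard verdict into (is_safe, violated_category_codes)."""
    lines = [line.strip() for line in text.strip().splitlines() if line.strip()]
    if not lines or lines[0].lower() == "safe":
        return True, []
    # An "unsafe" verdict lists category codes on the next line, e.g. "S1,S10"
    codes = lines[1].split(",") if len(lines) > 1 else []
    return False, [code.strip() for code in codes]

# Example verdicts (illustrative strings, not live model output):
assert parse_guard_verdict("safe") == (True, [])
assert parse_guard_verdict("unsafe\nS1,S10") == (False, ["S1", "S10"])
```

In a moderation pipeline, an `unsafe` verdict would typically block or flag the message before it reaches the downstream model or the end user.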