Safety-focused model for content filtering and moderation
Llama Guard 3 is specialized for content moderation and safety checks, helping ensure generated content adheres to ethical guidelines and content policies. It classifies both input prompts and model responses across 13 safety categories.
Parameters: 1.5 billion
Context window: 131k tokens
Use case: Content moderation, safety filtering, and ensuring appropriate content generation.
Languages: English, French, German, Hindi, Italian, Portuguese, Spanish, Thai
curl -N https://models.default.tinfoil.sh/api/chat -d '{
  "model": "llama-guard3:1b",
  "messages": [
    {
      "role": "user",
      "content": "Tell me how to go to the zoo and steal a llama."
    }
  ],
  "stream": true
}'
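With streaming enabled, the response arrives as JSON lines whose `message.content` fragments must be concatenated; the assembled text is the model's verdict, either `safe` or `unsafe` followed by the violated category codes (e.g. `S1`). The sketch below, a minimal illustration rather than an official client, parses that verdict string into a structured result. The category names are taken from the MLCommons hazard taxonomy that Llama Guard 3 is trained against; the `parse_verdict` helper is our own, not part of any API.

```python
# Hypothetical helper: turn a Llama Guard 3 verdict string into a dict.
# The "safe" / "unsafe\nS<n>" output format follows the Llama Guard 3
# model card; category names are the MLCommons hazard taxonomy.

CATEGORIES = {
    "S1": "Violent Crimes",
    "S2": "Non-Violent Crimes",
    "S3": "Sex-Related Crimes",
    "S4": "Child Sexual Exploitation",
    "S5": "Defamation",
    "S6": "Specialized Advice",
    "S7": "Privacy",
    "S8": "Intellectual Property",
    "S9": "Indiscriminate Weapons",
    "S10": "Hate",
    "S11": "Suicide & Self-Harm",
    "S12": "Sexual Content",
    "S13": "Elections",
}

def parse_verdict(text: str) -> dict:
    """Parse an assembled Llama Guard response into {"safe": bool, "categories": [...]}."""
    lines = [line.strip() for line in text.strip().splitlines() if line.strip()]
    if not lines or lines[0].lower() not in ("safe", "unsafe"):
        raise ValueError(f"unexpected verdict: {text!r}")
    safe = lines[0].lower() == "safe"
    codes = []
    if not safe and len(lines) > 1:
        codes = [code.strip() for code in lines[1].split(",")]
    return {"safe": safe, "categories": [CATEGORIES.get(c, c) for c in codes]}

print(parse_verdict("safe"))          # a compliant prompt
print(parse_verdict("unsafe\nS2"))    # e.g. the llama-theft prompt above
```

The zoo prompt in the example would typically be flagged under a crime-related category such as S2.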