| Field | Value |
| --- | --- |
| Model | deepseek-r1-distill-llama-70b |
| Provider | groq |
| API | openai-completions |
| Base URL | https://api.groq.com/openai/v1 |
| Input | text |
| Reasoning | Yes |
| Context window | 131,072 |
| Max tokens | 8,192 |
| Cost / million input | $0.75 |
| Cost / million output | $0.99 |
| Cost / million cache read | $0 |
| Cost / million cache write | $0 |
Model config JSON:

```json
{
  "providers": {
    "groq": {
      "apiKey": "YOUR_API_KEY",
      "models": [
        {
          "id": "deepseek-r1-distill-llama-70b",
          "name": "DeepSeek R1 Distill Llama 70B",
          "reasoning": true,
          "input": ["text"],
          "contextWindow": 131072,
          "maxTokens": 8192,
          "cost": {
            "input": 0.75,
            "output": 0.99,
            "cacheRead": 0,
            "cacheWrite": 0
          }
        }
      ],
      "api": "openai-completions",
      "baseUrl": "https://api.groq.com/openai/v1"
    }
  }
}
```
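As a sketch of how the `cost` fields can be used, the snippet below parses this config and estimates the price of a single request, assuming the prices are USD per million tokens (the `estimate_cost` helper is illustrative, not part of any tool's API):

```python
import json

# The provider config from above, embedded for a self-contained example.
CONFIG = json.loads("""
{
  "providers": {
    "groq": {
      "models": [
        {
          "id": "deepseek-r1-distill-llama-70b",
          "cost": {"input": 0.75, "output": 0.99, "cacheRead": 0, "cacheWrite": 0}
        }
      ]
    }
  }
}
""")

cost = CONFIG["providers"]["groq"]["models"][0]["cost"]

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost, assuming prices are per million tokens."""
    return (input_tokens * cost["input"] + output_tokens * cost["output"]) / 1_000_000

# Example: 10,000 input tokens and 2,000 output tokens.
print(f"${estimate_cost(10_000, 2_000):.6f}")  # → $0.009480
```

Cache reads and writes are priced at $0 here, so they can be ignored in the estimate.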