GLM 5 Turbo

Model details

Model: zai/glm-5-turbo
Provider: vercel-ai-gateway
API: anthropic-messages
Base URL: https://ai-gateway.vercel.sh
Input: text
Reasoning: Yes
Context window: 202,800 tokens
Max tokens: 131,100
Cost / million input tokens: $1.20
Cost / million output tokens: $4.00
Cost / million cache read: $0.24
Cost / million cache write: $0.00
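The per-million-token rates above are enough to estimate the dollar cost of a single request. A minimal sketch (the helper name and signature are illustrative, not part of any API):

```python
# Per-million-token rates for zai/glm-5-turbo, taken from the table above.
RATES = {"input": 1.2, "output": 4.0, "cache_read": 0.24, "cache_write": 0.0}

def estimate_cost(input_tokens: int, output_tokens: int,
                  cache_read_tokens: int = 0, cache_write_tokens: int = 0) -> float:
    """Return the estimated USD cost of one request at the listed rates."""
    return (
        input_tokens * RATES["input"]
        + output_tokens * RATES["output"]
        + cache_read_tokens * RATES["cache_read"]
        + cache_write_tokens * RATES["cache_write"]
    ) / 1_000_000
```

For example, a request with 10,000 input tokens and 2,000 output tokens costs $0.012 + $0.008 = $0.02 at these rates.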
Model config JSON
{
  "providers": {
    "vercel-ai-gateway": {
      "apiKey": "YOUR_API_KEY",
      "models": [
        {
          "id": "zai/glm-5-turbo",
          "name": "GLM 5 Turbo",
          "reasoning": true,
          "input": [
            "text"
          ],
          "contextWindow": 202800,
          "maxTokens": 131100,
          "cost": {
            "input": 1.2,
            "output": 4,
            "cacheRead": 0.24,
            "cacheWrite": 0
          }
        }
      ],
      "api": "anthropic-messages",
      "baseUrl": "https://ai-gateway.vercel.sh"
    }
  }
}
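Since the config declares `"api": "anthropic-messages"`, requests to this model are shaped like Anthropic Messages API calls sent to the gateway's base URL. A minimal sketch of building such a request; the `/v1/messages` path, auth header, and version header follow the standard Anthropic Messages shape, and whether the gateway accepts exactly this auth style is an assumption to verify against the Vercel AI Gateway docs:

```python
import json
import os
import urllib.request

BASE_URL = "https://ai-gateway.vercel.sh"  # from the model config above

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build an Anthropic Messages-style request for zai/glm-5-turbo."""
    body = {
        "model": "zai/glm-5-turbo",
        "max_tokens": 1024,  # must stay at or below the 131,100 cap
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/messages",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "content-type": "application/json",
            # Assumption: gateway accepts Anthropic-style auth headers.
            "x-api-key": api_key,
            "anthropic-version": "2023-06-01",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("Hello", os.environ.get("AI_GATEWAY_API_KEY", "YOUR_API_KEY"))
    print(req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) returns a Messages-style response; with `"reasoning": true`, the model may also emit reasoning content alongside the final text.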