GLM-4.7-Flash

Model details

Model: zai-org/GLM-4.7-Flash
Provider: huggingface
API: openai-completions
Base URL: https://router.huggingface.co/v1
Input: text
Reasoning: Yes
Context window: 200,000 tokens
Max tokens: 128,000
Cost / million input tokens: $0
Cost / million output tokens: $0
Cost / million cache-read tokens: $0
Cost / million cache-write tokens: $0
Model config JSON
{
  "providers": {
    "huggingface": {
      "apiKey": "YOUR_API_KEY",
      "models": [
        {
          "id": "zai-org/GLM-4.7-Flash",
          "name": "GLM-4.7-Flash",
          "reasoning": true,
          "input": [
            "text"
          ],
          "contextWindow": 200000,
          "maxTokens": 128000,
          "cost": {
            "input": 0,
            "output": 0,
            "cacheRead": 0,
            "cacheWrite": 0
          },
          "compat": {
            "supportsDeveloperRole": false
          }
        }
      ],
      "api": "openai-completions",
      "baseUrl": "https://router.huggingface.co/v1"
    }
  }
}
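Since the config names the OpenAI-compatible "openai-completions" API, a request to this model can be assembled like any chat-completions call. Below is a minimal sketch of building such a request body; `build_request` is a hypothetical helper, not part of any library. Note that the config sets `supportsDeveloperRole` to false, so instructions belong in a "system" message rather than a "developer" message.

```python
# Minimal sketch: assemble an OpenAI-style chat-completions request body for
# this model, using the values from the config JSON above.

BASE_URL = "https://router.huggingface.co/v1"   # from the config's baseUrl
MODEL_ID = "zai-org/GLM-4.7-Flash"              # from the config's model id
MAX_OUTPUT_TOKENS = 128_000                     # the model's maxTokens ceiling


def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build the JSON body for POST {BASE_URL}/chat/completions.

    Hypothetical helper for illustration only.
    """
    return {
        "model": MODEL_ID,
        # Clamp to the model's output-token limit.
        "max_tokens": min(max_tokens, MAX_OUTPUT_TOKENS),
        "messages": [
            # supportsDeveloperRole is false, so use "system", not "developer".
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }


payload = build_request("Summarize GLM-4.7-Flash's context window.")
# The body would then be sent with any HTTP client, e.g.:
#   requests.post(f"{BASE_URL}/chat/completions",
#                 headers={"Authorization": "Bearer YOUR_API_KEY"},
#                 json=payload)
```

The same payload shape works with the official OpenAI Python client by pointing its `base_url` at the router and passing the Hugging Face API key as `api_key`.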