- Model: zai/glm-4.7-flash
- Provider: vercel-ai-gateway
- API: anthropic-messages
- Base URL: https://ai-gateway.vercel.sh
- Input: text
- Reasoning: Yes
- Context window: 200,000 tokens
- Max tokens: 131,000
- Cost / million input tokens: $0.07
- Cost / million output tokens: $0.40
- Cost / million cache read tokens: $0
- Cost / million cache write tokens: $0
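As a quick sanity check on the pricing above, here is a short sketch that estimates the cost of a single request from the listed per-million-token rates (the rates come from this table; the token counts are made-up example values):

```python
# Per-million-token rates from the table above.
INPUT_RATE = 0.07   # USD per 1M input tokens
OUTPUT_RATE = 0.40  # USD per 1M output tokens
# Cache reads and writes are billed at $0 for this model.

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens / 1_000_000 * INPUT_RATE
            + output_tokens / 1_000_000 * OUTPUT_RATE)

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"${estimate_cost(10_000, 2_000):.6f}")  # → $0.001500
```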
Model config JSON:

```json
{
  "providers": {
    "vercel-ai-gateway": {
      "apiKey": "YOUR_API_KEY",
      "models": [
        {
          "id": "zai/glm-4.7-flash",
          "name": "GLM 4.7 Flash",
          "reasoning": true,
          "input": ["text"],
          "contextWindow": 200000,
          "maxTokens": 131000,
          "cost": {
            "input": 0.07,
            "output": 0.4,
            "cacheRead": 0,
            "cacheWrite": 0
          }
        }
      ],
      "api": "anthropic-messages",
      "baseUrl": "https://ai-gateway.vercel.sh"
    }
  }
}
```
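Since the config declares the `anthropic-messages` API, requests to this model follow the Anthropic Messages wire format. A minimal sketch that only builds the request (the `/v1/messages` path and bearer `Authorization` header are assumptions about how the gateway exposes this API, not taken from this page; check the gateway docs before sending):

```python
import json

BASE_URL = "https://ai-gateway.vercel.sh"  # baseUrl from the config above
API_KEY = "YOUR_API_KEY"                   # placeholder, as in the config

def build_request(prompt: str, max_tokens: int = 1024) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for an Anthropic-Messages-style call.

    The /v1/messages path and bearer auth scheme are assumptions about
    the gateway, not something this config guarantees.
    """
    url = f"{BASE_URL}/v1/messages"  # assumed path
    headers = {
        "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": "zai/glm-4.7-flash",
        "max_tokens": max_tokens,  # must not exceed the 131,000 limit
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body

url, headers, body = build_request("Say hello.")
print(url)
```

Pass the returned pieces to any HTTP client; keeping the builder separate from the transport makes the payload easy to inspect against the model limits above.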