@mcowger/pi-env-var-provider
Pi extension: register a custom OpenAI-compatible provider from environment variables. Configure baseUrl, apiKey, and models via env vars without editing models.json.
Package details
Install @mcowger/pi-env-var-provider from npm and Pi will load the resources declared by the package manifest.
$ pi install npm:@mcowger/pi-env-var-provider

- Package: @mcowger/pi-env-var-provider
- Version: 1.0.0
- Published: Apr 19, 2026
- Downloads: 79/mo · 5/wk
- Author: mcowger
- License: MIT
- Types: extension
- Size: 12.9 KB
- Dependencies: 0 dependencies · 1 peer
Pi manifest JSON
{
  "extensions": [
    "./index.ts"
  ]
}

Security note
Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.
README
pi-env-var-provider
A pi extension that registers a custom OpenAI-compatible provider from environment variables. Configure baseUrl, apiKey, and model settings without editing models.json — perfect for proxies, API gateways, and local models.
Quick Start
# Set required environment variables
export PI_BASE_URL=https://proxy.example.com/v1
export PI_API_KEY=sk-your-api-key
# Optional: customize provider/model names
export PI_PROVIDER_NAME=my-proxy
export PI_MODEL_ID=gpt-4o
# Load and run with pi
pi -e npm:@mcowger/pi-env-var-provider
# Then select the model
/model my-proxy/gpt-4o
Two Modes of Operation
Mode 1: New Provider (Default)
Register a completely new provider with custom models:
export PI_BASE_URL=https://api.example.com/v1
export PI_API_KEY=sk-...
export PI_PROVIDER_NAME=custom-ai # Provider identifier
export PI_MODEL_ID=llama-70b # Model ID
export PI_MODEL_NAME="Llama 3 70B" # Display name
pi -e npm:@mcowger/pi-env-var-provider
Mode 2: Override Existing Provider
Change the baseUrl for a built-in provider (keeps all models):
export PI_BASE_URL=https://my-gateway.openai.azure.com
export PI_API_KEY=...
export PI_OVERRIDE_PROVIDER=openai # Override built-in "openai" provider
pi -e npm:@mcowger/pi-env-var-provider
All openai/* models now use your custom baseUrl.
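The mode is selected by whether PI_OVERRIDE_PROVIDER is set. As a rough illustration only (not the extension's actual code), the decision can be sketched in shell:

```shell
#!/bin/sh
# Illustrative sketch of the mode selection described above.
# Mirrors the documented behavior; the real logic lives in the extension's index.ts.
pi_env_mode() {
  if [ -z "${PI_BASE_URL}" ]; then
    echo "error: PI_BASE_URL is required" >&2
    return 1
  fi
  if [ -n "${PI_OVERRIDE_PROVIDER}" ]; then
    # Mode 2: repoint an existing provider's baseUrl, keeping its models.
    echo "override ${PI_OVERRIDE_PROVIDER}"
  else
    # Mode 1: register a new provider with a single model,
    # falling back to the documented defaults.
    echo "new ${PI_PROVIDER_NAME:-env-provider}/${PI_MODEL_ID:-default}"
  fi
}
```

Note how the Mode 1 fallbacks match the defaults in the table below (env-provider, default).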
Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| PI_BASE_URL | Yes | — | API base URL (e.g., https://api.openai.com/v1) |
| PI_API_KEY | Mode 1 only | — | API key or access token |
| PI_OVERRIDE_PROVIDER | No | — | Existing provider to override instead of creating a new one |
| PI_PROVIDER_NAME | No | env-provider | New provider identifier |
| PI_MODEL_ID | No | default | Model ID |
| PI_MODEL_NAME | No | matches PI_MODEL_ID | Display name |
| PI_MODEL_REASONING | No | false | Supports reasoning/thinking |
| PI_MODEL_INPUT | No | text,image | Input types (text, image, or both) |
| PI_CONTEXT_WINDOW | No | 128000 | Context window size in tokens |
| PI_MAX_TOKENS | No | 16384 | Max output tokens |
| PI_COST_INPUT | No | 0 | Input cost per 1M tokens |
| PI_COST_OUTPUT | No | 0 | Output cost per 1M tokens |
| PI_COMPAT_OVERRIDES | No | — | OpenAI compat overrides as JSON |
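A misconfigured variable only surfaces once pi tries to reach the provider, so a quick preflight check before launching can save a debugging round trip. A sketch (not part of the package) using only the variables above:

```shell
#!/bin/sh
# Preflight sanity checks for the env-var provider (illustrative sketch).
pi_env_preflight() {
  err=0
  case "${PI_BASE_URL}" in
    http://*|https://*) ;;  # looks like a URL
    *) echo "PI_BASE_URL must be set to an http(s) URL" >&2; err=1 ;;
  esac
  # An API key is required when registering a new provider (Mode 1).
  if [ -z "${PI_OVERRIDE_PROVIDER}" ] && [ -z "${PI_API_KEY}" ]; then
    echo "PI_API_KEY is required unless PI_OVERRIDE_PROVIDER is set" >&2
    err=1
  fi
  return "$err"
}
```

Run it before `pi -e npm:@mcowger/pi-env-var-provider`; a nonzero return means at least one variable needs attention.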
Common Use Cases
Azure OpenAI
export PI_BASE_URL=https://my-resource.openai.azure.com/openai/deployments/my-deployment
export PI_API_KEY=...
export PI_OVERRIDE_PROVIDER=openai
export PI_MODEL_ID=gpt-4o
export PI_COMPAT_OVERRIDES='{"supportsReasoningEffort":false}'
pi -e npm:@mcowger/pi-env-var-provider
Local Ollama
export PI_BASE_URL=http://localhost:11434/v1
export PI_API_KEY=ollama # or any non-empty string
export PI_PROVIDER_NAME=ollama-local
export PI_MODEL_ID=llama3.1
export PI_CONTEXT_WINDOW=8192
pi -e npm:@mcowger/pi-env-var-provider
# Then: /model ollama-local/llama3.1
OpenRouter Proxy
export PI_BASE_URL=https://openrouter.ai/api/v1
export PI_API_KEY=sk-or-...
export PI_OVERRIDE_PROVIDER=openai
export PI_MODEL_ID=claude-sonnet-4
pi -e npm:@mcowger/pi-env-var-provider
Perplexity API
export PI_BASE_URL=https://api.perplexity.ai
export PI_API_KEY=pplx-...
export PI_PROVIDER_NAME=perplexity
export PI_MODEL_ID=sonar-pro
export PI_MAX_TOKENS=8192
pi -e npm:@mcowger/pi-env-var-provider
GitHub Action Usage
Use with shaftoe/pi-coding-agent-action for CI/CD workflows:
- name: Run Pi agent with custom provider
uses: shaftoe/pi-coding-agent-action@v2
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
provider: env-provider # or PI_OVERRIDE_PROVIDER value
model: default # or PI_MODEL_ID value
token: ${{ secrets.API_KEY }} # or inline in PI_API_KEY
extensions: npm:@mcowger/pi-env-var-provider
env:
PI_BASE_URL: ${{ secrets.CUSTOM_API_URL }}
PI_API_KEY: ${{ secrets.API_KEY }}
PI_PROVIDER_NAME: my-gateway
PI_MODEL_ID: gpt-4o
GitHub Action with Provider Override
- name: Run Pi with Azure OpenAI
uses: shaftoe/pi-coding-agent-action@v2
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
provider: openai
model: gpt-4o
token: ${{ secrets.AZURE_OPENAI_KEY }}
extensions: npm:@mcowger/pi-env-var-provider
env:
PI_BASE_URL: ${{ secrets.AZURE_OPENAI_ENDPOINT }}
PI_API_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
PI_OVERRIDE_PROVIDER: openai
Advanced: Compat Overrides
For providers with OpenAI compatibility quirks, use PI_COMPAT_OVERRIDES:
export PI_COMPAT_OVERRIDES='{
"supportsDeveloperRole": false,
"supportsReasoningEffort": false,
"maxTokensField": "max_tokens",
"requiresToolResultName": true
}'
Available options:
- supportsDeveloperRole — use the "system" role instead of "developer"
- supportsReasoningEffort — provider supports the reasoning_effort parameter
- maxTokensField — use "max_tokens" instead of "max_completion_tokens"
- requiresToolResultName — tool results must include a name field
- supportsUsageInStreaming — include usage in streaming responses
- thinkingFormat — format for thinking blocks ("openai", "zai", "qwen")
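Because PI_COMPAT_OVERRIDES is parsed as JSON, a stray quote or trailing comma breaks it silently. Validating it before launch is cheap; this sketch assumes python3 is available on PATH:

```shell
#!/bin/sh
# Validate PI_COMPAT_OVERRIDES before handing it to pi (sketch; assumes python3).
export PI_COMPAT_OVERRIDES='{"supportsReasoningEffort": false, "maxTokensField": "max_tokens"}'
if printf '%s' "${PI_COMPAT_OVERRIDES}" | python3 -m json.tool >/dev/null 2>&1; then
  echo "PI_COMPAT_OVERRIDES: valid JSON"
else
  echo "PI_COMPAT_OVERRIDES: invalid JSON" >&2
fi
```

Any JSON checker works here; python3 -m json.tool is used only because it is widely available.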
Installation
Global (all projects)
pi install npm:@mcowger/pi-env-var-provider
Project-local
pi install -l npm:@mcowger/pi-env-var-provider
Temporary (one-time)
pi -e npm:@mcowger/pi-env-var-provider
From Git (latest)
pi install git:github.com:mcowger/pi-env-var-provider
Requirements
- pi (any recent version)
- @mariozechner/pi-coding-agent (bundled with pi)
Uninstalling
pi remove npm:@mcowger/pi-env-var-provider
License
MIT © mcowger