pi-openai-service-tier
Cost-correct OpenAI service tier / fast mode extension for pi
Package details
Install pi-openai-service-tier from npm and Pi will load the resources declared by the package manifest.
$ pi install npm:pi-openai-service-tier

- Package: pi-openai-service-tier
- Version: 0.1.4
- Published: May 3, 2026
- Downloads: not available
- Author: anirudh-mehra
- License: MIT
- Type: extension
- Size: 175 KB
- Dependencies: 0 dependencies · 2 peers
Pi manifest JSON
{
"extensions": [
"./index.ts"
],
"image": "https://raw.githubusercontent.com/anirudhmehra/pi-openai-service-tier/main/assets/social-preview.png"
}

Security note
Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.
README
pi-openai-service-tier
Cost-correct OpenAI service tier / fast mode for pi.
Most fast-mode extensions only patch the outgoing JSON payload:
{ service_tier: "priority" }
That can route the request correctly, but Pi's displayed cost accounting uses its internal provider option named serviceTier. This extension wraps Pi's built-in OpenAI provider calls and passes:
{ ...options, serviceTier: "priority" }
So Pi gets both the OpenAI request field and the matching Pi-side service-tier cost multiplier.
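The wrapping described above can be sketched roughly as follows. This is a hypothetical illustration, not the extension's actual code: `builtinStream` and `wrappedStream` stand in for Pi's real provider stream handlers, and the option shape is assumed.

```typescript
// Assumed shape of Pi's provider options; the real type in pi-ai may differ.
type ProviderOptions = { model: string; serviceTier?: string };

// Stand-in for Pi's built-in OpenAI handler, which would issue the request.
function builtinStream(options: ProviderOptions): ProviderOptions {
  return options;
}

// Wrapped handler: injects serviceTier so the outgoing OpenAI request and
// Pi's cost accounting both see the same tier.
function wrappedStream(options: ProviderOptions): ProviderOptions {
  return builtinStream({ ...options, serviceTier: "priority" });
}

wrappedStream({ model: "gpt-5.5" }); // options now carry serviceTier: "priority"
```

Because the wrapper delegates to the built-in handler, everything else about the request (model, prompts, tools) passes through unchanged.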
Features
- /fast toggles the cost-correct priority tier.
- /openai-tier selects priority, flex, default, auto, or scale.
- Works with Pi's OpenAI Responses and OpenAI Codex Responses providers.
- Avoids sending tiers that a provider does not support.
- Includes gpt-5.4 and gpt-5.5 OpenAI/Codex models by default.
- Does not change model, reasoning level, prompts, tools, or text.verbosity.
- Does not make network calls of its own.
- Stores simple JSON config with project-over-global precedence.
Install
pi install npm:pi-openai-service-tier
GitHub install also works:
pi install https://github.com/anirudhmehra/pi-openai-service-tier
Then start Pi normally:
pi --provider openai-codex --model gpt-5.5
Enable priority tier at startup:
pi --provider openai-codex --model gpt-5.5 --fast
Try without installing:
pi -e npm:pi-openai-service-tier --provider openai-codex --model gpt-5.5 --fast
Commands
Fast mode
/fast
/fast on
/fast off
/fast status
/fast toggles the priority service tier on and off.
Explicit service tier
/openai-tier priority
/openai-tier flex
/openai-tier default
/openai-tier auto
/openai-tier scale
/openai-tier off
/openai-tier status
/openai-tier <tier> enables that tier for supported models.
Configuration
The extension uses project-over-global config:
<repo>/.pi/extensions/pi-openai-service-tier.json
~/.pi/agent/extensions/pi-openai-service-tier.json
If neither file exists, the extension creates this global default on session start:
{
"persistState": true,
"active": false,
"serviceTier": "priority",
"supportedModels": [
"openai/gpt-5.4",
"openai/gpt-5.5",
"openai-codex/gpt-5.4",
"openai-codex/gpt-5.5"
]
}
Config fields
| Field | Type | Default | Description |
|---|---|---|---|
| persistState | boolean | true | Whether /fast and /openai-tier persist state across sessions. |
| active | boolean | false | Whether a service tier is active. |
| serviceTier | priority \| flex \| default \| auto \| scale | priority | Service tier passed to Pi's OpenAI provider option when supported by the current provider. |
| supportedModels | string[] | see above | Allow-list of provider/model-id pairs that should receive serviceTier. |
Add/remove allow-listed models by editing supportedModels.
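The project-over-global lookup can be sketched as below. This is a minimal illustration under stated assumptions: `loadConfig` and the spread-merge with defaults are hypothetical, not the extension's actual implementation; only the file paths and the default object come from this README.

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

interface TierConfig {
  persistState: boolean;
  active: boolean;
  serviceTier: "priority" | "flex" | "default" | "auto" | "scale";
  supportedModels: string[];
}

// Default written on session start if no config file exists (see above).
const DEFAULT_CONFIG: TierConfig = {
  persistState: true,
  active: false,
  serviceTier: "priority",
  supportedModels: [
    "openai/gpt-5.4",
    "openai/gpt-5.5",
    "openai-codex/gpt-5.4",
    "openai-codex/gpt-5.5",
  ],
};

// Project config wins over global; defaults fill any missing fields.
function loadConfig(repoRoot: string): TierConfig {
  const candidates = [
    path.join(repoRoot, ".pi", "extensions", "pi-openai-service-tier.json"),
    path.join(os.homedir(), ".pi", "agent", "extensions", "pi-openai-service-tier.json"),
  ];
  for (const file of candidates) {
    if (fs.existsSync(file)) {
      return { ...DEFAULT_CONFIG, ...JSON.parse(fs.readFileSync(file, "utf8")) };
    }
  }
  return DEFAULT_CONFIG;
}
```

Checking the project path first gives per-repo settings precedence without requiring the global file to exist.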
Supported providers/APIs
The extension applies tiers only when all of these are true:
- the model appears in supportedModels,
- the model uses one of these Pi APIs: openai-responses or openai-codex-responses, and
- the selected tier is supported by that API.
Provider-specific tier support:
| Pi API | Supported tiers |
|---|---|
| openai-responses | priority, flex, default, auto, scale |
| openai-codex-responses | priority |
If a tier is configured but unsupported by the current model/provider, the extension leaves serviceTier unset for that request instead of sending an invalid value.
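The gating rule above can be expressed as a small predicate. This is an illustrative sketch, not the extension's source: `resolveTier` is a hypothetical name, and only the tier-per-API table comes from this README.

```typescript
type PiApi = "openai-responses" | "openai-codex-responses";
type Tier = "priority" | "flex" | "default" | "auto" | "scale";

// Tiers each Pi API accepts, per the table above.
const TIERS_BY_API: Record<PiApi, readonly Tier[]> = {
  "openai-responses": ["priority", "flex", "default", "auto", "scale"],
  "openai-codex-responses": ["priority"],
};

// Returns the tier to inject, or undefined to leave serviceTier unset
// rather than sending an invalid value.
function resolveTier(
  modelKey: string, // e.g. "openai-codex/gpt-5.5"
  api: PiApi,
  tier: Tier,
  supportedModels: string[],
): Tier | undefined {
  if (!supportedModels.includes(modelKey)) return undefined;
  if (!TIERS_BY_API[api].includes(tier)) return undefined;
  return tier;
}
```

For example, a configured flex tier on openai-codex-responses resolves to undefined, so that request goes out without any serviceTier at all.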
Compatibility notes
This extension overrides Pi's API stream handlers for:

- openai-responses
- openai-codex-responses
It delegates back to Pi's built-in OpenAI implementations, adding serviceTier only for configured/supported OpenAI models. If another extension also overrides those API handlers, whichever extension loads last wins.
Requires Pi / @mariozechner/pi-ai >=0.72.1 and Node.js >=22.
Updating
For npm installs:
pi update npm:pi-openai-service-tier
For git installs, re-run:
pi install https://github.com/anirudhmehra/pi-openai-service-tier
Uninstall
pi remove npm:pi-openai-service-tier
or, if installed from git:
pi remove https://github.com/anirudhmehra/pi-openai-service-tier
If desired, remove config files manually:
rm -f ~/.pi/agent/extensions/pi-openai-service-tier.json
rm -f .pi/extensions/pi-openai-service-tier.json
Development
git clone https://github.com/anirudhmehra/pi-openai-service-tier.git
cd pi-openai-service-tier
npm install
npm run check
Local Pi smoke test:
pi -e ./index.ts --list-models
pi -e ./index.ts --provider openai-codex --model gpt-5.5 --fast
Security
Pi extensions run with your local user permissions. This extension only reads/writes its config JSON files and delegates LLM calls to Pi's built-in OpenAI providers; it does not perform independent network requests.
License
MIT
