pi-codex-fast
Fast mode toggle for OpenAI and Codex models in pi.
Package details
Install pi-codex-fast from npm and Pi will load the resources declared by the package manifest.
$ pi install npm:pi-codex-fast
- Package: pi-codex-fast
- Version: 1.0.3
- Published: Apr 30, 2026
- Downloads: 188/mo · 188/wk
- Author: wobondar
- License: MIT
- Types: extension
- Size: 25.3 KB
- Dependencies: 0 dependencies · 3 peers
Pi manifest JSON
{
"extensions": [
"./index.ts"
]
}

Security note
Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.
README
pi-codex-fast
Fast Mode extension for pi that toggles OpenAI/Codex priority service tier for configured models.
Install
pi install npm:pi-codex-fast
Try it temporarily without installing:
pi -e npm:pi-codex-fast
Or test from a local checkout:
pi -e ./
Usage
The extension adds the /fast command:
/fast Toggle Fast Mode on/off
/fast on Enable Fast Mode
/fast off Disable Fast Mode
/fast toggle Toggle Fast Mode on/off
/fast status Show current status
/fast style Cycle the footer status style
When enabled, requests for configured OpenAI/OpenAI Codex models use serviceTier: "priority".
How it works
pi-codex-fast registers provider wrappers for pi's OpenAI Responses and OpenAI Codex Responses APIs.
For configured models, the wrapper calls the native provider streamer with serviceTier: "priority". For all other models or providers, or when Fast Mode is disabled, it falls through to pi's normal simple streamers unchanged.
The extension intentionally does not use the before_provider_request hook to patch request payloads. That preserves pi's native provider flow, including pi's built-in usage and cost calculation.
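The fall-through behavior can be sketched as follows. This is an illustrative sketch only: the `Streamer` and `StreamOptions` shapes are simplified placeholders, not pi's actual extension API.

```typescript
// Illustrative sketch: `Streamer` and `StreamOptions` are simplified
// placeholders, not pi's actual extension API.
type StreamOptions = { model: string; serviceTier?: string };
type Streamer = (opts: StreamOptions) => string;

interface FastModeConfig {
  enabled: boolean;
  models: string[];
}

// Wrap a native provider streamer: when Fast Mode is enabled and the model
// is configured, forward the call with serviceTier: "priority"; otherwise
// fall through to the native streamer unchanged.
function wrapStreamer(native: Streamer, config: FastModeConfig): Streamer {
  return (opts) => {
    if (config.enabled && config.models.includes(opts.model)) {
      return native({ ...opts, serviceTier: "priority" });
    }
    return native(opts); // other models/providers, or Fast Mode disabled
  };
}
```

Because the wrapper only forwards to the native streamer, pi's built-in usage and cost accounting still runs on every request.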
Configuration
On first load, the extension creates:
~/.pi/agent/extensions/pi-codex-fast.json
If PI_CODING_AGENT_DIR is set, the config is created under that agent directory instead.
Default config:
{
"enabled": false,
"models": ["openai/gpt-5.4", "openai/gpt-5.5", "openai-codex/gpt-5.4", "openai-codex/gpt-5.5"]
}
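The config-path resolution described above can be sketched like this; the `extensions/` subpath under PI_CODING_AGENT_DIR is an assumption for illustration, mirroring the default location, and `configPath` is a hypothetical helper, not part of the extension's public API.

```typescript
import * as os from "node:os";
import * as path from "node:path";

// Sketch of the config-path resolution: default to ~/.pi/agent, or use
// PI_CODING_AGENT_DIR when set. The "extensions/" subpath under the env
// override is an assumption mirroring the default layout.
function configPath(env: Record<string, string | undefined>): string {
  const agentDir =
    env.PI_CODING_AGENT_DIR ?? path.join(os.homedir(), ".pi", "agent");
  return path.join(agentDir, "extensions", "pi-codex-fast.json");
}
```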
Optional fields such as style are resolved internally and only written when changed via /fast style.
Model entries may be provider-qualified, for example openai/gpt-5.5, or bare model IDs, for example gpt-5.5.
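The matching rule for config entries can be sketched as follows; `modelMatches` is a hypothetical helper written for illustration, not the extension's actual implementation.

```typescript
// Sketch of the matching rule: a config entry either names a
// provider-qualified model ("openai/gpt-5.5") or a bare model ID
// ("gpt-5.5") that matches that model under any provider.
function modelMatches(providerQualifiedId: string, entries: string[]): boolean {
  const bareId = providerQualifiedId.split("/").pop() ?? providerQualifiedId;
  return entries.some((e) => e === providerQualifiedId || e === bareId);
}
```

A bare entry such as "gpt-5.5" therefore enables Fast Mode for both openai/gpt-5.5 and openai-codex/gpt-5.5.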
License
MIT