pi-per-model-prompt
Model-scoped system prompt correction layers for pi-coding-agent
Package details
Install pi-per-model-prompt from npm and Pi will load the resources declared by the package manifest.
$ pi install npm:pi-per-model-prompt
- Package: pi-per-model-prompt
- Version: 0.3.1
- Published: Apr 17, 2026
- Downloads: 303/mo · 16/wk
- Author: linioi
- License: MIT
- Types: extension
- Size: 32.5 KB
- Dependencies: 0 dependencies · 1 peer
Pi manifest JSON
{
"extensions": [
"./index.ts"
]
}

Security note
Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.
README
pi-per-model-prompt
Model-scoped system prompt correction layers for pi.
This package ships a single pi extension that appends a small all-model harness core and then layers targeted model corrections on top. It stays intentionally narrow: it does not replace Pi's baseline harness policy and it does not try to become a second general-purpose prompt framework.
What it does
Every model receives one shared harness-core append layer with:
- output_contract
- scope_discipline
- tool_discipline
- dependency_checks
- verification_contract
- user_updates_spec
- safety_boundary
On top of that, the package targets two model families used inside Pi:
OpenAI GPT-5
- gpt-5* → GPT-5 family baseline layer
- gpt-5.*-codex → adds Codex-line coding corrections
- gpt-5.4* → adds GPT-5.4 execution/follow-through delta
- Exact selected model:
  - gpt-5.4 → forces Responses API text.verbosity = "low" at runtime via before_provider_request
  - gpt-5.3-codex → adds exact-model compact answer-shape / ambiguity delta
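The gpt-5.4 runtime override can be pictured as a small request-rewriting hook. This is an illustrative sketch only: the request shape and hook wiring below are assumptions, not Pi's actual before_provider_request signature.

```typescript
// Hypothetical shape of a Responses API payload; illustrative, not Pi's real type.
interface ResponsesRequest {
  model: string;
  text?: { verbosity?: "low" | "medium" | "high" };
}

// Sketch of the exact-model rule: only the literal id "gpt-5.4" is rewritten,
// and the override wins over any verbosity already present on the request.
function forceLowVerbosity(req: ResponsesRequest): ResponsesRequest {
  if (req.model !== "gpt-5.4") return req; // exact match, not gpt-5.4-codex etc.
  return { ...req, text: { ...req.text, verbosity: "low" } };
}
```

A hook like this stays side-effect free by returning a shallow copy rather than mutating the request in place.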
Anthropic Claude
- *claude* (any id containing opus, sonnet, or haiku) → Claude family baseline + coding-agent layer
Layer composition is additive. For example:
- mistral-large → HARNESS_CORE
- gpt-5.4 → HARNESS_CORE + GPT5_FAMILY + GPT54
- gpt-5.4-codex → HARNESS_CORE + GPT5_FAMILY + GPT5_CODEX + GPT54
- gpt-5.3-codex → HARNESS_CORE + GPT5_FAMILY + GPT5_CODEX + GPT53_CODEX
- claude-sonnet-4-5 → HARNESS_CORE + CLAUDE_FAMILY + CLAUDE_CODING_AGENT
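The additive resolution can be sketched as a pure function from model id to layer list. The layer names mirror the examples above, but the matching logic is an assumption; the real src/resolve.ts may differ.

```typescript
type Layer =
  | "HARNESS_CORE" | "GPT5_FAMILY" | "GPT5_CODEX" | "GPT54"
  | "GPT53_CODEX" | "CLAUDE_FAMILY" | "CLAUDE_CODING_AGENT";

// Sketch of additive layer resolution: start from the shared core,
// then stack family, line, and exact-model deltas.
function resolveLayers(modelId: string): Layer[] {
  const id = modelId.toLowerCase();
  const layers: Layer[] = ["HARNESS_CORE"]; // every model gets the shared core
  if (id.startsWith("gpt-5")) {
    layers.push("GPT5_FAMILY");
    if (/^gpt-5(\.\d+)?-codex/.test(id)) layers.push("GPT5_CODEX"); // Codex line
    if (id.startsWith("gpt-5.4")) layers.push("GPT54");             // version delta
    if (id === "gpt-5.3-codex") layers.push("GPT53_CODEX");         // exact model
  } else if (/claude|opus|sonnet|haiku/.test(id)) {
    layers.push("CLAUDE_FAMILY", "CLAUDE_CODING_AGENT");
  }
  return layers;
}
```

Unrecognized ids such as mistral-large fall through every branch and keep only the shared core, matching the first example above.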
Each layer is appended at most once, guarded by a unique marker, so repeated turns and partial prompt reuse remain idempotent.
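The marker guard amounts to a skip-if-present append. A minimal sketch, assuming each layer carries a unique marker string (the PromptLayer shape and marker format here are illustrative, not the real src/prompt.ts):

```typescript
interface PromptLayer {
  marker: string; // unique per layer, e.g. "<!-- pi-layer:harness-core -->"
  body: string;
}

// Append a layer only if its marker is not already in the prompt, so
// re-running composition on an already-layered prompt is a no-op.
function appendLayer(prompt: string, layer: PromptLayer): string {
  if (prompt.includes(layer.marker)) return prompt;
  return `${prompt}\n\n${layer.marker}\n${layer.body}`;
}

function composePrompt(base: string, layers: PromptLayer[]): string {
  return layers.reduce(appendLayer, base);
}
```

Because composePrompt folds through the same guard, composing twice with the same layer list yields byte-identical output.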
Install
From npm
After publishing:
pi install npm:pi-per-model-prompt
From a local checkout
pi install /path/to/pi-per-model-prompt
Temporary loading during development
pi --no-extensions -e /path/to/pi-per-model-prompt/index.ts
Package layout
index.ts # package entry declared in package.json -> pi.extensions
src/index.ts # extension implementation; registers before_agent_start + before_provider_request hooks
src/model-identity.ts # model id parser (gpt-5*, claude-*)
src/resolve.ts # layer resolution by family/version/tags
src/prompt.ts # prompt composition helpers
src/layers/base.ts # all-model harness-core append layer
src/layers/openai/gpt5/family.ts # GPT-5 family baseline
src/layers/openai/gpt5/codex.ts # GPT-5 Codex line layer
src/layers/openai/gpt5/gpt-5.4.ts # GPT-5.4 delta
src/layers/openai/gpt5/gpt-5.3-codex.ts # GPT-5.3-Codex delta
src/layers/anthropic/claude/family.ts # Claude family baseline
src/layers/anthropic/claude/coding-agent.ts # Claude coding-agent layer
test/*.test.ts # coverage for parsing, composition, resolution, extension hook
The package uses a pi manifest in package.json:
{
"keywords": ["pi-package"],
"pi": {
"extensions": ["./index.ts"]
}
}
This matches Pi's package-loading rules from docs/packages.md and docs/extensions.md.
Development
npm install
npm test
Optional packaging check:
npm pack --dry-run
Design guard rails
- Keep the shared harness-core append layer stable, minimal, and model-agnostic.
- Keep family and exact-model rules evidence-driven and narrower than the shared core.
- Prefer additive layers over branching prompt trees.
- Keep exact-model deltas narrow and auditable.
See docs/architecture.md for maintenance guidance.