pi-autocontext
Autocontext extension for the Pi coding agent — iterative strategy generation, LLM judging, and evaluation tools
Package details
Install pi-autocontext from npm and Pi will load the resources declared by the package manifest.
$ pi install npm:pi-autocontext
- Package: pi-autocontext
- Version: 0.2.4
- Published: May 1, 2026
- Downloads: 599/mo · 339/wk
- Author: jayscambler
- License: MIT
- Types: extension, skill, prompt
- Size: 40.9 KB
- Dependencies: 1 dependency · 4 peers
Pi manifest JSON
{
"extensions": [
"./src/index.ts"
],
"skills": [
"./skills"
],
"prompts": [
"./prompts"
]
}
Security note
Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.
README
pi-autocontext
Autocontext extension for the Pi coding agent — iterative strategy generation, LLM judging, and evaluation tools.
Install
pi install npm:pi-autocontext
Or add to your project's .pi/settings.json:
{
"packages": ["npm:pi-autocontext"]
}
What You Get
Tools
| Tool | Description |
|---|---|
| `autocontext_judge` | Evaluate agent output against a rubric using LLM-based judging |
| `autocontext_improve` | Run a multi-round improvement loop with judge feedback |
| `autocontext_status` | Check the status of autocontext runs and tasks |
| `autocontext_scenarios` | List available evaluation scenarios and families |
| `autocontext_queue` | Enqueue a task for background evaluation |
| `autocontext_runtime_snapshot` | Inspect run artifacts, package provenance, compaction ledger entries, session branch lineage, and recent events |
Skills
`/skill:autocontext` — Full instructions for using autocontext tools, running evaluations, and interpreting results
Prompt Templates
`/autoctx-status` — Quick project status check
Usage
Once installed, the tools are available to the LLM automatically. You can also invoke them directly:
> Evaluate the quality of this code against our coding standards rubric
> Run an improvement loop on this draft with max 5 rounds
> Show me the status of recent autocontext runs
> Inspect the runtime snapshot for run-123 and session sess-123
> List available evaluation scenarios
Or use the skill for guided workflows:
/skill:autocontext
Requirements
- Pi coding agent
- An LLM provider configured in Pi (Anthropic, OpenAI, etc.)
- Optional: `autoctx` CLI for standalone usage outside Pi
Configuration
The extension auto-discovers your autocontext configuration:
- Provider: Uses Pi's configured LLM provider
- Database: Looks for `runs/autocontext.sqlite3` or the `AUTOCONTEXT_DB_PATH` env var
- Events: Reads `runs/events.ndjson` or `AUTOCONTEXT_EVENT_STREAM_PATH` for recent runtime events
- Scenarios: Discovers registered scenarios from the `autoctx` package
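If your run artifacts live somewhere other than the default `runs/` directory, the two environment variables above can point the extension at them. A minimal shell sketch — the paths are illustrative, not defaults shipped by the package:

```shell
# Point autocontext at a custom SQLite database and event stream.
# These paths are examples; substitute wherever your artifacts actually live.
export AUTOCONTEXT_DB_PATH="$HOME/projects/myapp/runs/autocontext.sqlite3"
export AUTOCONTEXT_EVENT_STREAM_PATH="$HOME/projects/myapp/runs/events.ndjson"
```

Set these in the shell that launches Pi (or in your shell profile) so the extension's auto-discovery picks them up instead of the `runs/` defaults.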
Links
- autocontext — Main repository
- autoctx on npm — Core TypeScript package
- Pi coding agent — The Pi agent