@davidorex/pi-behavior-monitors

Behavior monitors for pi that watch agent activity and steer corrections

Package details


Install @davidorex/pi-behavior-monitors from npm and Pi will load the resources declared by the package manifest.

$ pi install npm:@davidorex/pi-behavior-monitors
Package
@davidorex/pi-behavior-monitors
Version
0.14.6
Published
Apr 27, 2026
Downloads
1,620/mo · 242/wk
Author
davidorex
License
MIT
Types
extension, skill
Size
251.4 KB
Dependencies
7 dependencies · 0 peers
Pi manifest JSON
{
  "extensions": [
    "./dist/index.js"
  ],
  "skills": [
    "./skills"
  ]
}

Security note

Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.

README

pi-behavior-monitors

Behavior monitors for pi that watch agent activity, classify against pattern libraries, steer corrections, and write structured findings to JSON files.

Monitors are JSON files (.monitor.json) with typed blocks: classify (LLM side-channel), patterns (JSON library), actions (steer + write to JSON), and scope (main/subagent/workflow targeting).
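The block structure can be sketched as a JSON fragment. This is an illustrative shape only, with invented field values; consult schemas/monitor.schema.json for the actual format:

```json
{
  "name": "fragility",
  "scope": { "main": true, "subagent": false, "workflow": false },
  "classify": { "prompt": "Did the agent leave broken state behind?" },
  "patterns": "./fragility.patterns.json",
  "actions": [
    { "type": "steer" },
    { "type": "write", "file": ".project/issues.json" }
  ]
}
```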

Install

pi install npm:@davidorex/pi-behavior-monitors

Or install all three packages at once: pi install npm:@davidorex/pi-project-workflows

Five example monitors ship bundled with the package. They load by default through a three-tier search: project .pi/monitors/, then global ~/.pi/agent/monitors/, then the bundled examples/; the first match by name wins. To customize a monitor, create .pi/monitors/&lt;name&gt;.monitor.json in your project — the override fully replaces the bundled version of the same name. Delete the override to revert and pick up bundled updates automatically.
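The three-tier lookup can be sketched as a small resolver. Names and paths here are illustrative, not the package's actual API; in particular the bundled-examples path is an assumption:

```typescript
import * as fs from "node:fs";
import * as path from "node:path";
import * as os from "node:os";

// Hypothetical sketch of the three-tier monitor lookup: project dir,
// then global dir, then bundled examples. First file match by name wins.
function resolveMonitor(name: string, tiers?: string[]): string | null {
  const searchDirs = tiers ?? [
    path.join(process.cwd(), ".pi", "monitors"),
    path.join(os.homedir(), ".pi", "agent", "monitors"),
    // Bundled fallback — illustrative location, not the real install path:
    path.join(process.cwd(), "node_modules", "@davidorex", "pi-behavior-monitors", "examples"),
  ];
  for (const dir of searchDirs) {
    const candidate = path.join(dir, `${name}.monitor.json`);
    if (fs.existsSync(candidate)) return candidate; // first match wins
  }
  return null;
}
```

Because resolution stops at the first hit, dropping a same-named file into the project tier shadows the global and bundled copies, and deleting it restores the next tier down.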

Bundled Example Monitors

  • fragility — detects when the agent leaves broken state behind (errors it noticed but didn't fix, TODO comments instead of solutions, empty catch blocks). Writes findings to .project/issues.json.
  • hedge — detects when the agent deviates from what the user actually said (rephrasing questions, assuming intent, deflecting with counter-questions)
  • work-quality — on-demand audit of work quality (trial-and-error, not reading before editing, fixing symptoms instead of root causes). Invoked via /work-quality. Writes findings to .project/issues.json.
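A finding written to .project/issues.json might look roughly like the fragment below. The field names are an assumption for illustration, not the package's documented output format:

```json
{
  "monitor": "fragility",
  "pattern": "empty-catch-block",
  "summary": "catch block in src/db.ts swallows the connection error",
  "timestamp": "2026-04-27T12:00:00Z"
}
```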

File Structure

Each monitor is a triad of JSON files:

.pi/monitors/
├── fragility.monitor.json       # Definition (classify + patterns + actions + scope)
├── fragility.patterns.json      # Known patterns (grows automatically)
└── fragility.instructions.json  # User corrections (optional)
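A pattern-library entry in the .patterns.json file might take a shape like the following. This is a hedged sketch with invented fields; schemas/monitor-pattern.schema.json defines the real entry format:

```json
{
  "id": "empty-catch-block",
  "description": "A catch block that silently discards the error",
  "examples": ["try { ... } catch (e) {}"]
}
```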

Writing Your Own

Create a .monitor.json file in .pi/monitors/ conforming to schemas/monitor.schema.json. Ask the LLM to read the pi-behavior-monitors skill for the full schema and examples.

Commands

Command          Description
/monitors        List all monitors, their scope, and state
/<name>          Show a monitor's patterns and instructions
/<name> <text>   Add an instruction to calibrate the monitor
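For example, calibrating the bundled hedge monitor with an instruction might look like this (the instruction text is illustrative):

```
/hedge Asking one clarifying question before a destructive command is fine
```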

How It Works

  1. A monitor fires on a configured event (e.g., after each assistant message)
  2. It checks scope (main context, subagent, workflow) and activation conditions
  3. It collects relevant conversation context (tool results, assistant text, etc.)
  4. A side-channel LLM call classifies the context against the JSON pattern library
  5. Based on the verdict, the monitor executes actions:
    • steer: inject a correction message into the conversation (main scope only)
    • write: append structured findings to a JSON file (any scope)
    • learn: add new patterns to the library automatically
  6. Downstream workflows can consume the JSON findings (e.g., issues.json → verify step → gate)
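Step 5 can be sketched as a small dispatch function. All names here are invented for illustration; the real extension wires verdicts and actions into pi's event hooks, which this sketch does not model:

```typescript
// Hypothetical types for a classification verdict and the monitor's actions.
type Verdict = { matched: boolean; patternId?: string };
type Scope = "main" | "subagent" | "workflow";
type Action =
  | { type: "steer"; message: string }   // inject a correction (main scope only)
  | { type: "write"; file: string }      // append findings to a JSON file
  | { type: "learn" };                   // grow the pattern library

// Decide which configured actions actually run for a given verdict and scope.
function planActions(verdict: Verdict, actions: Action[], scope: Scope): Action[] {
  if (!verdict.matched) return []; // no pattern matched: do nothing
  // steer is restricted to the main context; write and learn run in any scope
  return actions.filter((a) => a.type !== "steer" || scope === "main");
}
```

The scope check mirrors the constraint above: a subagent or workflow monitor can still record findings, but only a main-context monitor may steer the conversation.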

Schemas

  • schemas/monitor.schema.json — monitor definition format
  • schemas/monitor-pattern.schema.json — pattern library entry format

Development

Part of the pi-project-workflows monorepo. All three packages (pi-project, pi-workflows, pi-behavior-monitors) are versioned in lockstep at 0.9.1.

npm run build compiles the TypeScript sources to dist/ via tsc; the package ships dist/, not src/. Tests run with vitest (npm test).