pi-session-summary

A pi extension that maintains an LLM-generated one-line session summary as the session name

Package details


Install pi-session-summary from npm and Pi will load the resources declared by the package manifest.

$ pi install npm:pi-session-summary
Package: pi-session-summary
Version: 1.0.1
Published: Mar 29, 2026
Downloads: 59/mo · 8/wk
Author: pasky
License: MIT
Types: extension
Size: 576.1 KB
Dependencies: 0 · 3 peers
Pi manifest JSON
{
  "extensions": [
    "./index.ts"
  ]
}

Security note

Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.

README

pi-session-summary

A pi extension that dynamically maintains a one-line LLM-generated session summary, set as the session name so it appears in pi's status bar and /resume session list.

Session summaries in pi's status bar and session list

The model is auto-detected from the available cheap models (gpt-5.4-nano, gpt-5.4-mini, gemini-3-flash, claude-4-5-haiku), or it can be configured explicitly.
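The auto-detection can be sketched as a first-match scan over the candidate list in preference order. This is a hypothetical illustration; the helper name and exact selection logic are assumptions, not the extension's actual code:

```typescript
// Cheap-model candidates in preference order (IDs taken from the README).
const CANDIDATES = [
  "gpt-5.4-nano",
  "gpt-5.4-mini",
  "gemini-3-flash",
  "claude-4-5-haiku",
];

// Return the first candidate that the current environment makes available,
// or undefined if none match (in which case an explicit config is needed).
function pickSummaryModel(available: string[]): string | undefined {
  return CANDIDATES.find((id) => available.includes(id));
}
```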

Install

pi install pi-session-summary

Or add to settings.json:

{
  "packages": ["pi-session-summary"]
}

Commands

| Command | Description |
| --- | --- |
| /summary:settings | Creates the global settings JSON file (~/.pi/agent/session-summary.json) with defaults if it doesn't exist, and shows instructions for editing it. Run /reload after editing. |
| /summary:update | Forces an immediate summary update, bypassing the debounce timer. |
| /summary:clear | Resets the summary to the first line of the first user message, clearing all accumulated state. |
| /summary:cost | Shows the summary model name, number of LLM calls, token usage, and cost breakdown for the current session. |

Configuration

Create ~/.pi/agent/session-summary.json (global) or .pi/session-summary.json (project override). Project settings are merged on top of global settings, which are merged on top of defaults. Config is reloaded on session start/switch and /reload.

All fields are optional — only specify what you want to override:

{
  "provider": "openai-codex",
  "model": "gpt-5.4-mini",
  "debounceSeconds": 60,
  "maxTokens": 300,
  "resummarizeTokenThreshold": 40000,
  "showWidget": false,
  "verbose": false
}
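The layering described above (project over global over defaults) amounts to a shallow object merge. A minimal sketch, assuming the defaults listed below; the type and function names are illustrative, not the extension's actual API:

```typescript
interface SummaryConfig {
  provider?: string;
  model?: string;
  debounceSeconds: number;
  maxTokens: number;
  resummarizeTokenThreshold: number;
  showWidget: boolean;
  verbose: boolean;
}

// Built-in defaults; provider/model are left unset for auto-detection.
const DEFAULTS: SummaryConfig = {
  debounceSeconds: 60,
  maxTokens: 300,
  resummarizeTokenThreshold: 40000,
  showWidget: false,
  verbose: false,
};

// Later spreads win: project settings override global, global overrides defaults.
function loadConfig(
  global: Partial<SummaryConfig>,
  project: Partial<SummaryConfig>,
): SummaryConfig {
  return { ...DEFAULTS, ...global, ...project };
}
```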
| Setting | Default | Description |
| --- | --- | --- |
| provider | (auto-detect) | Model provider |
| model | (auto-detect) | Model ID |
| debounceSeconds | 60 | Minimum seconds between LLM calls |
| maxTokens | 300 | Maximum tokens for the LLM response |
| resummarizeTokenThreshold | 40000 | Token threshold above which a full re-summarize runs instead of an incremental update |
| showWidget | false | Show a belowEditor widget with summary, staleness, and compaction info |
| verbose | false | Show a notification whenever the summary changes |
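Taken together, debounceSeconds and resummarizeTokenThreshold imply an update policy along these lines. This is a hedged sketch of the behavior the settings describe, not the extension's actual implementation:

```typescript
type UpdateAction = "skip" | "incremental" | "full";

// Decide what to do on a conversation change: stay quiet inside the
// debounce window, and escalate to a full re-summarize once the
// unsummarized context exceeds the token threshold.
function decideUpdate(
  nowMs: number,
  lastCallMs: number,
  pendingTokens: number,
  debounceSeconds = 60,
  resummarizeTokenThreshold = 40000,
): UpdateAction {
  if (nowMs - lastCallMs < debounceSeconds * 1000) return "skip";
  return pendingTokens > resummarizeTokenThreshold ? "full" : "incremental";
}
```

/summary:update would correspond to calling the equivalent of this check with the debounce window ignored.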