pi-goal-driven
Goal-Driven template workflow for pi
Package details
Install pi-goal-driven from npm and Pi will load the resources declared by the package manifest.
$ pi install npm:pi-goal-driven

- Package: pi-goal-driven
- Version: 0.5.0
- Published: Apr 22, 2026
- Downloads: 521/mo · 162/wk
- Author: vurihuang
- License: MIT
- Types: extension
- Size: 312.7 KB
- Dependencies: 0 dependencies · 1 peer
Pi manifest JSON
{
"extensions": [
"./index.ts"
]
}

Security note
Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.
README
pi-goal-driven
A minimal Pi extension for running a Goal-Driven master/worker workflow from a reusable template, with worker execution aligned to the pi-subagents async runtime.
https://github.com/user-attachments/assets/da5a59bd-7ea8-4a65-a9bb-490461cf5daf
Version notes
- Previous version on `origin/master`: 0.2.0
- Current version in this codebase: 0.5.0
- Detailed release notes: CHANGELOG.md
What it does
This package provides three focused commands:
- `/goal-driven` collects Goal and Criteria for Success with Pi's native UI dialogs
- `/goal-driven:brainstorm` refines the same template through normal chat
- `/goal-driven:work` executes the saved Goal-Driven run in the current session
The long prompt lives in goal-driven-template.md instead of being embedded directly in a large hardcoded string.
Commands
/goal-driven
Starts a local overlay wizard.
The extension opens a 3-step flow: Goal → Criteria for Success → Review.
The wizard fills goal-driven-template.md locally, lets you review the final prompt in-place, and saves the completed prompt for the current workspace.
If rich overlay UI is unavailable, it falls back to native Pi editors.
When the filled prompt is ready, run:
/goal-driven:work
/goal-driven:brainstorm
Starts a chat-based refinement flow for the same template.
Use it when the task is still fuzzy and you want Pi to help shape the final prompt before execution.
Examples:
/goal-driven:brainstorm
/goal-driven:brainstorm create a results.txt file containing three lines: alpha, beta, foo
When Pi has enough information, it returns a completed template prompt, the extension saves it, and you can run:
/goal-driven:work
/goal-driven:work
Loads the latest saved prompt for the current workspace and sends it into the current session.
This is the execution step.
The prompt that gets sent is your filled Goal-Driven template, so Pi can run the master-agent behavior directly in the current conversation.
In 0.5.0, worker execution is aligned with pi-subagents background execution:

- worker `subagent` calls are forced to `async: true`
- worker `subagent` calls are forced to `clarify: false`
- worker tasks are prefixed with a guard that forbids nested `subagent` launches and `/goal-driven` re-entry inside the worker session
- the master agent does not verify immediately after an async launch; verification starts only after the worker completion event arrives
- while one worker is still running in the current Goal-Driven session tree, additional worker launches are blocked
- `subagent_status list` is filtered to the current Goal-Driven session tree instead of showing global async noise from other sessions or projects
- the lower Async subagents panel is expected to come from `pi-subagents`
If pi-subagents is not installed or enabled, the prompt can still be sent, but async orchestration will not behave as intended.
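As a rough illustration of the forcing rules above, a helper could normalize every worker call before it is issued. This is a hypothetical sketch: the option names (`task`, `async`, `clarify`) mirror the README's wording, and the guard text is invented; the real pi-subagents tool-call shape may differ.

```typescript
// Hypothetical sketch of normalizing worker subagent calls.
// Field names follow the README's description, not a confirmed API.
interface SubagentCall {
  task: string;
  async?: boolean;
  clarify?: boolean;
}

// Illustrative guard text; the actual wording is an implementation detail.
const WORKER_GUARD =
  "[worker guard] Do not launch nested subagents and do not re-enter " +
  "/goal-driven inside this worker session.\n\n";

function normalizeWorkerCall(call: SubagentCall): SubagentCall {
  return {
    task: WORKER_GUARD + call.task, // prepend the nested-launch guard
    async: true,                    // workers always run in the background
    clarify: false,                 // workers never pause to ask questions
  };
}
```

Whatever the caller passes, the normalized call always comes back with `async: true` and `clarify: false`, which matches the "forced" language above.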
/goal-driven stop
Stops the active Goal-Driven flow.
This applies to both active brainstorm flows and active /goal-driven:work runs.
For active /goal-driven:work runs, stop is session-tree scoped:
- it stops the current Goal-Driven run in memory
- it sends SIGTERM to running workers owned by the current Goal-Driven session tree
- it also cleans up nested async workers discovered under that same session tree
- its completion message summarizes what happened, for example: stopped running workers, already-finished workers, missing runs, or cleanup errors
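The session-tree scoping described above can be pictured as a prefix check on session paths. This is a minimal sketch under assumptions: the `WorkerRun` shape and `sessionPath` field are invented for illustration and are not the actual pi-subagents run records.

```typescript
// Hypothetical sketch of session-tree-scoped stop targeting.
// Assumes each async run records the session path it was launched from.
interface WorkerRun {
  id: string;
  sessionPath: string[]; // root-to-leaf session ids, e.g. ["root", "gd-1", "w-3"]
  running: boolean;
}

// A run belongs to the current Goal-Driven session tree when its path
// starts with the tree root's path (this also covers nested async workers).
function inSessionTree(run: WorkerRun, treeRoot: string[]): boolean {
  return treeRoot.every((id, i) => run.sessionPath[i] === id);
}

// Only still-running workers owned by the current tree receive SIGTERM.
function runsToStop(runs: WorkerRun[], treeRoot: string[]): WorkerRun[] {
  return runs.filter((r) => r.running && inSessionTree(r, treeRoot));
}
```

The same prefix test would explain why runs from unrelated sessions or projects are untouched by `/goal-driven stop`.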
Template file
The canonical template lives in:
goal-driven-template.md
That file is published with the package and read at runtime.
Saved prompts
Filled prompts are stored under:
~/.pi/agent/extensions/pi-goal-driven/prompts/<workspace>/latest-prompt.md
Each workspace keeps its own latest saved prompt.
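The per-workspace layout above could be resolved along these lines. This is a sketch, not the extension's actual code: how the `<workspace>` segment is derived is an assumption (here, a sanitized directory name).

```typescript
// Hypothetical sketch of the per-workspace prompt path.
// The directory layout matches the README; the workspace-key
// derivation below is an assumption for illustration.
import * as path from "node:path";
import * as os from "node:os";

function promptPathFor(workspaceDir: string): string {
  // Derive a filesystem-safe key from the workspace directory name.
  const key = path.basename(workspaceDir).replace(/[^A-Za-z0-9._-]/g, "_");
  return path.join(
    os.homedir(),
    ".pi/agent/extensions/pi-goal-driven/prompts",
    key,
    "latest-prompt.md",
  );
}
```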
Requirements
- Pi
pi-subagentsinstalled and enabled- provides the
subagenttool - provides background execution support
- provides the lower
Async subagentswidget - provides the
subagent:completelifecycle used by/goal-driven:work
- provides the
Install
pi install npm:pi-subagents
pi install npm:pi-goal-driven
For local development:
pi install /path/to/pi-subagents
pi install /path/to/pi-goal-driven
Recommended usage
The current recommended usage is:
1. Define a concrete Goal and strict Criteria for Success
2. Save the Goal-Driven prompt with either:
   - `/goal-driven` for a direct wizard flow
   - `/goal-driven:brainstorm ...` for a chat-shaped planning flow
3. Run `/goal-driven:work`
4. Let the worker run through `pi-subagents` in the background
5. Watch the lower Async subagents panel for progress
6. Wait for the master to verify results after worker completion
7. If the criteria are still not met, the master launches another background worker attempt automatically

Important runtime note:

- the master only treats workers from the current Goal-Driven session tree as relevant
- other async runs from unrelated sessions or projects are ignored for waiting, blocking, recovery, and verification decisions
- when the master checks worker status, the session-scoped `subagent_status list` view is the source of truth

In short:

- use `/goal-driven` when you already know the task and checks
- use `/goal-driven:brainstorm` when the task is still fuzzy
- use `/goal-driven:work` only after the prompt is saved and ready
Examples
Example 1: simple
Use this when the task is small, concrete, and already well-specified.
/goal-driven
Then fill the wizard with something like:
Goal
Create a script that reads all CSV files under data/ and writes leaderboard.txt with totals per user.
Criteria for Success
1. Running `python3 build_leaderboard.py` exits successfully.
2. `leaderboard.txt` is created in the project root.
3. The output is sorted by total score descending.
4. Only `.csv` files are processed.
5. The master agent verifies the output after worker completion.
Then execute:
/goal-driven:work
What happens next:
- the worker starts in the background via `pi-subagents`
- the lower Async subagents panel shows progress
- when the worker finishes, the master verifies the criteria
- if anything is still missing, another worker attempt is launched automatically
Example 2: brainstormed
Use this when you want chat-based refinement to turn a short request into a precise Goal-Driven prompt before execution.
/goal-driven:brainstorm create a results.txt file containing three lines: alpha, beta, foo
Typical result of the brainstorm phase:
- Pi rewrites the request into a concrete Goal
- Pi expands the task into explicit, verifiable success criteria
- the prompt is saved for the current workspace
For this example, a typical generated prompt looks like:
Goal: Create a `results.txt` file in the repository root whose content is exactly three lines: `alpha`, `beta`, `foo`, one value per line, in that order.
Criteria for success:
1. The workspace contains a file named `results.txt`.
2. Line 1 of `results.txt` is `alpha`.
3. Line 2 of `results.txt` is `beta`.
4. Line 3 of `results.txt` is `foo`.
5. `results.txt` contains no content beyond these three lines.
6. The master agent must read `results.txt` itself and verify that its content matches the requirements above exactly before declaring the run complete.
Then execute:
/goal-driven:work
What happens next:
- the worker starts in the background via `pi-subagents`
- the master waits for that worker to finish
- the master reads `results.txt` directly instead of trusting the worker's self-report
- the run ends only after the master can output `GOAL_DRIVEN_VERDICT: MET`
Design goal
Keep the package thin.
- Template in a file
- Prompt generation in the current session
- Execution in the current session
- Worker runtime delegated to `pi-subagents`
- Async progress UI delegated to `pi-subagents`
- No ask-user extension dependency
- No embedded subagent runtime in this package
Comparison with snarktank/ralph
Both projects aim to make long-running AI-assisted work more reliable, but they solve different layers of the problem.
High-level positioning
- `pi-goal-driven` is a Pi-native extension.
  - It stays inside the current Pi session.
  - It collects a Goal and Criteria for Success.
  - It lets a master agent coordinate background worker attempts through `pi-subagents`.
- `snarktank/ralph` is a repository-level autonomous loop.
  - It runs as a shell script.
  - It repeatedly launches fresh Amp or Claude Code sessions.
  - It advances work story by story from a structured `prd.json` backlog.
Core execution model
| Dimension | pi-goal-driven | ralph |
|---|---|---|
| Main runtime | Pi extension command flow | Bash loop (`ralph.sh`) |
| Agent topology | 1 master agent + 1 background worker at a time | Repeated fresh single-agent iterations |
| Execution boundary | Inside the current Pi conversation | Outside the chat, via repeated CLI invocations |
| Retry model | Master verifies after worker completion, then relaunches if criteria are not met | Loop picks the next failing story and starts another clean iteration |
| State continuity | Current session context + saved prompt + async run state | Fresh context every iteration, with persistence via git, `progress.txt`, and `prd.json` |
| Progress UI | Delegated to Pi / pi-subagents async panel | CLI / git / files |
Planning input and task framing
pi-goal-driven is centered around a single goal-oriented prompt:
- the user defines one Goal
- the user defines explicit Criteria for Success
- the extension fills `goal-driven-template.md`
- `/goal-driven:work` executes that saved prompt

ralph is centered around a task backlog:

- a PRD is created first
- the PRD is converted into `prd.json`
- work is broken into multiple user stories
- each story is tracked with `passes: true/false`
- the loop completes the highest-priority unfinished story each iteration

So the practical difference is:

- `pi-goal-driven` is optimized for goal verification
- `ralph` is optimized for backlog traversal across many small stories
Dependency model
pi-goal-driven deliberately stays thin and depends on Pi capabilities:
- Pi
- `pi-subagents`
- Pi UI/editor/runtime features

ralph is more toolchain-oriented and depends on external CLI setup:

- Amp or Claude Code
- `jq`
- a git repository workflow
- prompt files and optional skills installation

This means:

- `pi-goal-driven` fits best when your team is already committed to the Pi extension ecosystem
- `ralph` fits best when you want a portable repo script that can run across projects with minimal framework coupling beyond the chosen coding CLI
Memory and continuity strategy
This is one of the biggest architectural differences.
pi-goal-driven
- keeps the master workflow in the current Pi session
- saves the filled prompt per workspace
- tracks async worker runs per Goal-Driven session
- restores session-owned worker knowledge from persisted session entries
- filters worker status to the current Goal-Driven session tree
- relies on master-side verification plus watchdog logic for inactive workers
ralph
- intentionally starts each iteration with fresh context
- treats context reset as a feature, not a bug
- preserves continuity through:
  - commit history
  - `progress.txt`
  - `prd.json`
  - optional AGENTS.md updates
In short:
- choose `pi-goal-driven` when maintaining a continuous supervisory session is useful
- choose `ralph` when you want hard context resets between iterations to reduce drift and prompt bloat
Verification philosophy
pi-goal-driven emphasizes a master verifies after worker completion pattern:
- worker runs in background
- master waits for completion event
- master checks Criteria for Success
- master relaunches the worker if the result is still insufficient
ralph emphasizes a story-by-story shipping loop:
- implement one story
- run quality checks
- commit passing work
- mark the story complete in `prd.json`
- append learnings to `progress.txt`
- continue until all stories pass
That leads to different strengths:
- `pi-goal-driven` is stronger when success is best expressed as a single end-state contract
- `ralph` is stronger when success is best expressed as a sequence of small, independently shippable units
Operational behavior
pi-goal-driven currently includes Pi-specific operational behavior such as:
- forcing worker `subagent` calls to `async: true`
- forcing worker `subagent` calls to `clarify: false`
- injecting a worker guard that forbids nested `subagent` launches inside worker sessions
- blocking additional worker launches while one worker in the same Goal-Driven session tree is still active
- filtering status checks to the current Goal-Driven session tree instead of the global async run pool
- using an inactivity watchdog to stop and replace stale workers
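The inactivity watchdog mentioned above amounts to a staleness filter over tracked runs. A minimal sketch, assuming each run exposes a last-activity timestamp; the field names and the 10-minute threshold are illustrative, not the extension's actual values.

```typescript
// Hypothetical sketch of an inactivity watchdog.
// Assumes each tracked run records when it was last observed doing work.
interface TrackedRun {
  id: string;
  lastActivityMs: number; // epoch ms of the run's last observed activity
}

const STALE_AFTER_MS = 10 * 60 * 1000; // assumed threshold: 10 minutes

// Runs past the threshold would be stopped and replaced by the master.
function staleRuns(runs: TrackedRun[], nowMs: number): TrackedRun[] {
  return runs.filter((r) => nowMs - r.lastActivityMs > STALE_AFTER_MS);
}
```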
ralph currently includes repo-loop behavior such as:
- feature-branch tracking from `prd.json`
- run archiving when branch context changes
- support for both Amp and Claude Code
- an optional PRD/skill workflow for generating structured backlog input
So pi-goal-driven is closer to runtime orchestration inside an agent platform, while ralph is closer to automation glue around coding agents.
Ergonomics
pi-goal-driven
Best when you want:
- native Pi commands like `/goal-driven`, `/goal-driven:brainstorm`, and `/goal-driven:work`
- a lightweight setup
- quick transition from fuzzy task → explicit goal → execution
- built-in awareness of Pi async workers
ralph
Best when you want:
- a scriptable repo workflow
- explicit PRD-driven decomposition
- durable iteration logs in files committed with the project
- a model where each new run starts from a clean agent context
Trade-offs at a glance
| If you care most about... | Better fit |
|---|---|
| Pi-native UX and extension integration | pi-goal-driven |
| Fresh-agent iterations with durable file-based memory | ralph |
| A single goal with strict success criteria | pi-goal-driven |
| Multi-story execution from a PRD backlog | ralph |
| Async worker supervision inside one ongoing session | pi-goal-driven |
| Portable shell-based orchestration across repos | ralph |
Bottom line
The two projects are not direct clones of each other.
- `pi-goal-driven` packages the Goal-Driven master/worker pattern as a thin Pi extension.
- `ralph` packages an autonomous iteration loop as a repo-level script plus a PRD workflow.
If your preferred operating model is "stay inside Pi, supervise one goal until verified", pi-goal-driven is the more natural fit.
If your preferred operating model is "translate a PRD into many small stories and let fresh agent runs chip away at them one by one", ralph is the more natural fit.
They are complementary ideas with different centers of gravity: Pi-native supervised execution vs. repo-native autonomous iteration.