pi-turtle-rlm
pi-turtle-rlm — RLM for Pi: persistent runtime, llmQuery recursion (it's models all the way down)
Package details
Install pi-turtle-rlm from npm and Pi will load the resources declared by the package manifest.
$ pi install npm:pi-turtle-rlm

- Package: pi-turtle-rlm
- Version: 0.1.5
- Published: Apr 8, 2026
- Downloads: 551/mo · 18/wk
- Author: jpstrikesback
- License: Apache-2.0
- Types: extension
- Size: 284.7 KB
- Dependencies: 0 dependencies · 3 peers
Pi manifest JSON
{
"extensions": [
"./dist/index.js"
]
}

Security note
Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.
README
pi-turtle-rlm
Recursive language model runtime for Pi — a persistent JS workspace inside the agent, with structured child calls via llmQuery, prompt modes, and session stats.
Turtles all the way down — each turtle is an llmQuery call, each shell is globalThis, the base case is maxDepth, or you just run out of tokens.
Install
Recommended — Pi package manager (see Pi packages):
pi install npm:pi-turtle-rlm
Pin a version:
pi install npm:pi-turtle-rlm@0.1.0
From git:
pi install git:github.com/jpstrikesback/pi-rlm
From a local clone (contributors or vendoring):
git clone https://github.com/jpstrikesback/pi-rlm.git
cd pi-rlm
npm install
npm run build
pi install ./
# or one-off during development: pi -e ./index.ts
Project-local Pi config (.pi/settings.json or pi install -l):
{
"packages": ["npm:pi-turtle-rlm"]
}
After install, start Pi as usual from your repo; the extension loads from Pi’s package resolution. Use /reload after upgrading the package.
Quick start
- Turn RLM on: /rlm
- The same command takes subcommands:
  - /rlm balanced | /rlm coordinator | /rlm aggressive — prompt mode
  - /rlm inspect — runtime globals
  - /rlm reset — clear the runtime
When RLM is on you get a pink RLM MODE widget (with mode label) and footer stats: depth, rlm_exec count, child queries / turns, runtime variable count, and non-RLM tool calls (“leaf” count).
Why RLM?
Large refactors need more room for context than a single chat transcript. This extension gives the model a persistent workspace to keep intermediate state, recurse with llmQuery, and avoid re-deriving the same context over and over.
Tools
- rlm_exec — run JS in the persistent runtime
- rlm_inspect — inspect runtime globals
- rlm_reset — clear the runtime
Inside rlm_exec you can use:
- inspectGlobals()
- final(value)
- await llmQuery(request)
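A minimal sketch of an rlm_exec body that exercises these helpers. The file paths, variable names, and the shape of the llmQuery result are illustrative assumptions, not part of the documented API:

// Sketch: keep intermediate state on globalThis, recurse once, return a value to the parent.
globalThis.authFiles = globalThis.authFiles ?? ["src/auth/login.ts", "src/auth/session.ts"]; // hypothetical paths
const summary = await llmQuery({
  prompt: "Summarize the auth module",
  state: { files: globalThis.authFiles },
});
inspectGlobals(); // list what the runtime currently holds
final({ summary }); // hand the result back to the parent turn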
Use in an extension
import rlmExtension from "pi-turtle-rlm";
export default rlmExtension;
Or configure defaults:
import { createRlmExtension } from "pi-turtle-rlm";
export default createRlmExtension({
maxDepth: 3,
promptMode: "coordinator",
});
Safety
The worker uses node:vm with several globals stripped. It is not a security sandbox — treat it like running code in your user account.
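For intuition, a generic node:vm sketch of what "stripped globals" means in practice; this is not the package's actual worker code:

import vm from "node:vm";

// Sketch: a context built without process, require, or fs; code inside simply cannot see them.
const context = vm.createContext({ console });
const result = vm.runInContext("typeof process", context);
console.log(result); // prints "undefined", yet node:vm alone is not a security boundary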
Advanced runtime usage
Inside rlm_exec, the runtime also exposes llmQuery(...) for recursive child calls.
await llmQuery({
prompt: "Analyze the auth module",
state: { files: globalThis.authFiles },
tools: "read-only",
budget: "medium",
});
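Child calls can also be fanned out, with results kept on globalThis for later turns. A sketch, assuming llmQuery resolves to an ordinary awaitable value; the module names are hypothetical:

// Sketch: one child query per module, results persisted in the runtime for later rlm_exec calls.
const modules = ["auth", "billing"]; // hypothetical module names
globalThis.reports = {};
for (const name of modules) {
  globalThis.reports[name] = await llmQuery({
    prompt: `Analyze the ${name} module`,
    tools: "read-only",
    budget: "medium",
  });
}
final(Object.keys(globalThis.reports));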
Inspiration
This project is inspired in part by AxLLM’s RLM ideas.
Development
git clone https://github.com/jpstrikesback/pi-rlm.git
cd pi-rlm
npm install
npm test
npm run build
npm run smoke
License
Apache-2.0 — see LICENSE.