pi-research
Pi extension for web research.
Package details
Install pi-research from npm and Pi will load the resources declared by the package manifest.
$ pi install npm:pi-research
- Package: pi-research
- Version: 1.1.0
- Published: May 3, 2026
- Downloads: 354/mo · 354/wk
- Author: black-knight.dev
- License: MIT
- Types: extension
- Size: 89.3 KB
- Dependencies: 3 dependencies · 3 peers
Pi manifest JSON
```json
{
  "extensions": [
    "./extensions/pi-research.ts"
  ]
}
```
Security note
Pi packages can execute code and influence agent behavior. Review the source before installing third-party packages.
README
pi-research
pi-research is a Pi extension for fast, local-first web research inside the agent.
It searches the live web, ranks sources, reads the most relevant pages, and synthesizes a grounded answer with citations. It does not require an external research API or API key, and it is not a browser automation tool.
Why it exists
Agents usually need two things to answer well:
- a way to search the web efficiently
- a way to turn sources into a usable answer
pi-research does both inside Pi, so the agent can research topics without relying on a separate hosted research service.
What it does
- searches the live web
- scores and deduplicates sources
- prefers official docs, READMEs, and papers when relevant
- follows up when the first pass is not enough
- extracts code blocks for code-focused questions
- supports local files as additional sources
- returns a structured result with citations and confidence metadata
What it is not
- not a browser interaction tool
- not an offline knowledge base
- not a replacement for page navigation
Install
For Pi
pi install npm:pi-research
For npm-based workflows
npm install pi-research
GitHub repository: https://github.com/endgegnerbert-tech/pi-research
Quick start
What are the trade-offs between B-trees and LSM-trees?
Show me the best way to add health checks to Docker Compose.
Compare React Server Components with traditional SSR.
Modes
| Mode | Best for |
|---|---|
| fast | quick answers with a quality floor |
| deep | broader retrieval with follow-up rounds |
| code | docs, READMEs, repositories, and code snippets |
| academic | scholarly sources and paper-heavy topics |
Public tool parameters
- `query`: research question to answer
- `mode`: `fast`, `deep`, `code`, or `academic`
- `force`: bypass cached sufficiency checks
- `isolate`: run without session/query cache reuse
- `options.allowedSources`: prefer only the listed source hints
- `options.requireAuthoritative`: bias toward authoritative sources
- `options.maxTurns`: limit follow-up rounds
- `options.maxSites`: limit how many sources are read
- `options.minYear` / `options.maxYear`: constrain source dates
- `options.preferRecent`: prefer newer sources
- `options.files`: include local files as sources
- `options.format`: output format: `markdown`, `json`, `table`, or `latex`
- `options.deepResearchConfig`: depth/breadth/concurrency tuning for deeper runs
Example calls
Fast mode
query: What is the difference between HTTP and HTTPS?
mode: fast
Deep mode
query: Compare PostgreSQL and MySQL for multi-tenant SaaS
mode: deep
options:
preferRecent: true
maxTurns: 2
Code mode
query: How do I add retries to a Node.js fetch wrapper?
mode: code
Academic mode
query: Retrieval augmented generation evaluation methods
mode: academic
Local files as sources
query: Summarize the key points from these notes
mode: fast
options:
files:
- ./notes/project-notes.md
- ./docs/spec.md
Output
The tool returns structured data including:
`answer`, `bullets`, `sources`, `citations`, `codeBlocks`, `confidence`, `confidenceScore`, `sufficient`, `authoritativeSourcesFound`, `openSubQuestions`, `missingAspects`, `conflictSummary`, `unverifiedClaims`, `sourceTypes`, `meta`
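The field names above come from this README; a purely illustrative result might look like the sketch below (every value is invented, only the keys are from the package's documented output):

```js
// Illustrative shape of a pi-research result. The field names are from the
// README's Output section; all values here are made up for demonstration.
const result = {
  answer: "HTTPS is HTTP tunneled over TLS, adding encryption and server authentication.",
  bullets: ["HTTP sends requests in plaintext", "HTTPS encrypts traffic with TLS"],
  sources: ["https://example.com/http-vs-https"],
  citations: [{ claim: "HTTPS encrypts traffic with TLS", source: 0 }],
  codeBlocks: [],
  confidence: "high",
  confidenceScore: 0.9,
  sufficient: true,
  authoritativeSourcesFound: 1,
  openSubQuestions: [],
  missingAspects: [],
  conflictSummary: null,
  unverifiedClaims: [],
  sourceTypes: ["docs"],
  meta: { mode: "fast" },
};
```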
How it works
- query-isolated caching: repeated identical research can be skipped when the previous result was already sufficient
- source scoring: official docs, READMEs, papers, and local files are preferred over weak sources
- follow-up planning: unclear or conflicting results trigger another round of research
- conflict detection: opposing claims are surfaced explicitly
- fact checking: unsupported answer sentences are marked as unverified
- local source input: files can be added directly to the research context
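The query-isolated caching idea from the first bullet can be sketched in a few lines. This is an illustration of the concept only, not pi-research's actual implementation; the function and parameter names are assumptions, though `force` mirrors the tool's documented `force` parameter:

```js
// Sketch of query-isolated caching: reuse a prior result only when it was
// marked sufficient for the *same* query; `force` bypasses the check.
const cache = new Map();

async function research(query, runResearch, { force = false } = {}) {
  const cached = cache.get(query);
  if (!force && cached && cached.sufficient) {
    return cached; // previous run already answered this query well enough
  }
  const result = await runResearch(query);
  cache.set(query, result);
  return result;
}
```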
Limits
- it still depends on live web access for web research
- it does not browse pages like a human user
- it is not fully offline unless you only use local files
- it is not a browser interaction tool
Domain packs
`web`, `github`, `security`, `papers`, `specs`, `changelog`, `forums`, `package-registry`, `vendor-status`
Community packs
You can add your own domain pack by copying lib/domains/template.js, adapting the run() function, and registering it in lib/domains/index.js.
Minimal starter example:
```js
export default {
  name: "boxing-training",
  sourceHints: ["web"],
  async run(question) {
    return {
      claims: [
        {
          text: `Starter pack example for ${question}`,
          evidence: [{ type: "web", source: "https://example.com", snippet: "Example" }],
          confidence: "medium",
        },
      ],
    };
  },
};
```
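The registration step is described but not shown. Assuming `lib/domains/index.js` exports a registry of packs keyed by name (this shape is a guess; check the actual file before copying), registration might look like:

```js
// lib/domains/index.js (hypothetical shape; adapt to the real registry)
import boxingTraining from "./boxing-training.js";
// ...existing built-in pack imports...

export default {
  // ...existing built-in packs...
  "boxing-training": boxingTraining,
};
```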
Eval
Run npm run eval to execute the eval harness.
Package info
- Package name: `pi-research`
- Entry point: `extensions/pi-research.ts`
- Tool name: `pi-research`
- License: MIT
Release notes
- Pi install: `pi install npm:pi-research`
- npm install: `npm install pi-research`
- GitHub: https://github.com/endgegnerbert-tech/pi-research
- Community packs: copy the template pack and register it in `lib/domains/index.js`