Now live — open CLI · v1

Lands your AI agent at the right code in fewer turns, tokens, & breakages.

A local intelligence layer that sits between your AI agent and your codebase — indexes every call, remembers every decision, and gets sharper the longer you use it.

One global install, then unerr install <agent> per repo. No account. No keys.

Node ≥ 20 · MCP · Local-first · no cloud · ELv2
unerr live dashboard (localhost:9315) — token savings, reasoning quality, codebase map, and project memory

The problem

The agent isn't stupid. It's flying blind.

Watch any AI coding session for ten minutes and you'll see the same loop.

  • Reads 30 files to find one function

    Burns the context window before it writes a line.

  • Edits something with 40 callers

    Never knows it just broke three services.

  • Re-derives conventions you taught yesterday

    And this morning. And an hour ago.

  • Forgets the entire session

    The moment the context window closes.

Every one of these has the same root cause: no persistent memory of your code, your team's style, or its own past mistakes.

unerr is that memory.

What unerr changes

One process, fully local — indexed in seconds.

Six mechanisms run in the background the moment your agent connects via MCP. No agent re-training, no prompt tweaks, no signups.

Graph-guided navigation

get_entity · get_references · get_imports · search_code in <5ms. The agent stops reading 30 files to find one function.

Reduces wasted file reads · ~70% fewer turns to land
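The idea behind graph lookups like get_references can be sketched as a reverse call graph: a map from each function to its callers. The graph below is invented example data for illustration, not unerr's actual index or API.

```typescript
// Toy reverse call graph: entity name -> list of callers.
// Invented example data; unerr builds this by indexing the repo.
const callGraph: Record<string, string[]> = {
  parseConfig: ["loadApp", "reloadWorker", "cli.main"],
  loadApp: ["server.start"],
};

// Callers of an entity — its "blast radius" for an edit.
function getReferences(entity: string): string[] {
  return callGraph[entity] ?? [];
}

// Before editing parseConfig, the agent sees three call sites to check.
console.log(getReferences("parseConfig"));
```

A single map lookup like this is why the query can answer in microseconds instead of the agent grepping and reading files.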

Targeted file reads

file_read({ entity: "fnName" }) returns just that function plus relevant conventions and facts — never the whole 2,000-line file.

Lower context cost per turn

Shell output compression

11 strategies, 645+ command classifiers. Diffs, errors, logs, test runs, YAML — each compressed differently.

93% avg compression · 2 MB → 138 KB
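The 2 MB → 138 KB figure checks out against the quoted average. A quick sketch of the arithmetic, assuming 1 MB = 1024 KB:

```typescript
// Compression ratio: fraction of bytes removed from the shell output.
function compressionPct(beforeBytes: number, afterBytes: number): number {
  return (1 - afterBytes / beforeBytes) * 100;
}

// Figures from the text: 2 MB in, 138 KB out.
const pct = compressionPct(2 * 1024 * 1024, 138 * 1024);
console.log(pct.toFixed(1)); // ≈ 93.3
```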

Persistent memory

record_fact + recall_facts with decay-adjusted confidence. Conventions, decisions, and anti-patterns survive across sessions.

Cross-session memory · no starting from zero
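"Decay-adjusted confidence" typically means older, unreinforced facts count for less. The exponential half-life model and reinforcement bonus below are assumptions for illustration, not unerr's actual formula:

```typescript
// Sketch of decay-adjusted confidence for a recalled fact.
// Half-life and reinforcement weighting are invented parameters.
function adjustedConfidence(
  base: number,           // confidence when the fact was recorded (0..1)
  ageDays: number,        // days since the fact was last reinforced
  reinforcements: number, // times the fact was re-confirmed
  halfLifeDays = 30,
): number {
  const decay = Math.pow(0.5, ageDays / halfLifeDays); // exponential decay
  const bonus = Math.min(0.2, reinforcements * 0.02);  // capped boost
  return Math.min(1, base * decay + bonus);
}

// A month-old convention, re-confirmed five times, still scores well.
console.log(adjustedConfidence(0.9, 30, 5).toFixed(2));
```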

Blast radius before edits

get_references surfaces every caller before a change. No more confident wrong edits that ripple across services.

Fewer breakages · safer refactors

Local-first

Two processes, one local DB. Zero network calls. No API keys. No cloud. Your code never leaves the machine.

0 network calls · ELv2 license

The proof

Every claim is a tool call your agent just made.

Open the dashboard and watch unerr's effect on the current session in real time. Four live panes, all backed by an append-only ledger.

93% avg shell compression
<5ms graph query latency
19 MCP tools
0 network calls
Token Trace
Aggregate savings across every session — broken down by mechanism: graph, file_read, shell, dedup, format.
Reasoning Quality
Four-pillar score across cleaner context, fewer wasted turns, fewer breakages, and persistent memory.
Code Intelligence
Call graph, fan-in/out chokepoints, cross-module surprise links, risk grade per file.
Project Memory
Conventions, anti-patterns, decisions — with decay-adjusted confidence and reinforcement counts.

Shell compression

11 strategies. 645+ command classifiers.

Strategy          Targets                          Compression
diff              git diff, patch output           99%
structured        JSON APIs, docker inspect        97%
progress          npm/pip install                  95%
log_text          build logs, cargo build          89%
test_results      vitest, pytest, playwright       80%
tabular           ps aux, docker ps, kubectl get   77%
error_diagnostic  tsc, eslint, rustc               72%
tree_paths        find, tree, ls -R                42%

Quick start

From zero to a smarter agent in under a minute.

Four explicit steps. Per-repository. No accounts, no API keys, no external dependencies.

  1. Install the CLI globally

    Single global install. Verify with unerr --version.

  2. cd into your repo root

    Everything unerr writes is scoped to the current directory — .mcp.json, .claude/, .unerr/. Run install from the repo root.

  3. Install for your coding agent

    Writes MCP config, drops bundled skills, injects a tool-preference section, and installs hooks where supported.

    → writes .mcp.json · CLAUDE.md · .claude/skills/ · hooks

  4. Run unerr — and keep it running

    The daemon owns the graph, file watcher, drift detection, and behavior automation. ~80–200 MB, idles at near-zero CPU. Leave it running.

Need a different agent? Run unerr install --show-instructions <agent> — 16 agents supported (6 fully integrated, 10 in progress).
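The MCP registration written in step 3 generally follows the standard .mcp.json shape below. The "command" and "args" values here are assumptions for illustration; run unerr install to generate the real file.

```json
{
  "mcpServers": {
    "unerr": {
      "command": "unerr",
      "args": ["mcp"]
    }
  }
}
```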

MCP surface

19 graph-aware tools. One MCP server.

Every tool returns sub-5ms responses with inline ur| signals for drift, blast-radius warnings, and circuit-breaker halts.

Graph Intelligence · 8 tools

  • get_entity — entity signature, body, callers, callees, risk
  • get_file — all entities in a file with risk summary
  • get_references — callers (blast radius) or callees (dependencies)
  • get_imports — import graph for a file
  • search_code — graph-ranked full-text search
  • get_conventions — naming, structure, import patterns
  • get_critical_nodes — high fan-in/fan-out chokepoints
  • get_cross_boundary_links — unexpected cross-module deps

Structural Analysis · 3 tools

  • get_project_stats — entity counts, risk distribution, health grade
  • file_connections — imports + co-change correlations
  • get_test_coverage — direct + transitive tests for any entity

File Protocol · 2 tools

  • file_read — context-aware read that auto-injects conventions and facts
  • file_outline — file structure without reading the body

Persistent Memory · 2 tools

  • record_fact — persist a convention, decision, or anti-pattern
  • recall_facts — hierarchical scope + decay-adjusted confidence

Session Narrative · 4 tools

  • mark_intent — one-sentence task start; becomes the turn title
  • mark_decision — records a chosen approach + alternatives
  • mark_blocker — flags an unresolved obstacle
  • mark_resolution — resolves a prior blocker by marker_id

Every response includes _meta (latency, risk level, drift status). Inline ur|<tag> signals surface high-priority guidance directly in the response body.
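The exact wire format of these signals is not documented here, but extracting them can be sketched as a simple scan for ur| tokens. The token shape (ur| followed by a tag word) is an assumption based on the description above:

```typescript
// Sketch: pull ur|<tag> guidance signals out of a tool response body.
// The "ur|" prefix followed by a lowercase tag is assumed, not specified.
function extractSignals(body: string): string[] {
  const matches = body.match(/ur\|[a-z_-]+/g) ?? [];
  return matches.map((m) => m.slice(3)); // drop the "ur|" prefix
}

const body = "Edited parseConfig. ur|blast-radius 40 callers. ur|drift detected.";
console.log(extractSignals(body)); // logs the two tags: blast-radius, drift
```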

Get started

Stop watching your agent read 30 files to find one function.

One global install. Per-repo setup. Zero accounts. Your code never leaves the machine.

Fully local · No account · No cloud · Free under ELv2