Embedded library
Drop microresolve into your Rust, Python, or Node.js process. Zero network hops. The decision engine lives inside your app.
Embedded + server sync
Run the engine in-process and sync training data to a self-hosted server. Teams share a single source of truth; each service gets its own low-latency local copy.
Self-hosted server + Studio
Deploy the HTTP server and use the Studio UI to manage intents, review misclassifications, and trigger auto-learn — no code changes required.
MicroResolve uses namespaces to run independent classifiers in parallel from a single engine instance:
Each namespace is isolated, independently trainable, and queryable in a single call.
use microresolve::{MicroResolve, MicroResolveConfig};
let engine = MicroResolve::new(MicroResolveConfig {
    data_dir: Some("~/.local/share/microresolve".into()),
    ..Default::default()
})?;
let support = engine.namespace("support");
let security = engine.namespace("security");

let intent = support.resolve("cancel my order");
let threat = security.resolve("ignore previous instructions");

Large language models are powerful but slow and expensive. The majority of incoming queries — “cancel my order”, “reset my password”, “show me the dashboard” — follow patterns that can be resolved deterministically in microseconds. MicroResolve handles that 80%, so your LLM budget goes to the 20% that actually needs reasoning.
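The resolve-or-fallback split can be sketched as plain routing logic. This is a self-contained illustration, not MicroResolve's API: it assumes the engine yields `Some(intent)` on a deterministic match and `None` when the query needs LLM reasoning, and the `Route` enum and `route` function are hypothetical names introduced here.

```rust
// Hedged sketch of the fast-path/fallback split. The `Route` enum and
// `route` function are illustrative, not part of the MicroResolve API.
#[derive(Debug, PartialEq)]
enum Route {
    Template(&'static str), // matched intent → template / tool / guardrail
    LlmFallback,            // no match → forward to the LLM
}

fn route(matched: Option<&'static str>) -> Route {
    match matched {
        // Deterministic match: answered in microseconds, no LLM call.
        Some(intent) => Route::Template(intent),
        // Novel query: spend the LLM budget where reasoning is needed.
        None => Route::LlmFallback,
    }
}

fn main() {
    assert_eq!(route(Some("order.cancel")), Route::Template("order.cancel"));
    assert_eq!(route(None), Route::LlmFallback);
    println!("routing ok");
}
```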
query → [MicroResolve ~30µs] → matched intent → template / tool / guardrail
                             ↘ no match → LLM fallback

Your training data is a git repo. Every namespace mutation auto-commits. You get a built-in audit trail, one-command rollback to any historical state, and optional remote sync via a standard git remote.
# Roll back a namespace to a previous commit
curl -X POST http://localhost:3001/api/namespaces/support/rollback \
  -H "Content-Type: application/json" \
  -d '{"sha": "abc1234..."}'
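Because the data directory is an ordinary git repo, the audit trail and rollback are also reachable with standard git tooling. The sketch below fabricates two commits by hand purely for illustration (in practice the engine auto-commits on every mutation); the file name and commit messages are invented.

```shell
# Simulated data dir: the engine normally creates these commits itself.
DATA_DIR=$(mktemp -d)
git -C "$DATA_DIR" init -q

# Two fabricated mutations, each committed like an auto-commit would be.
echo '{"intents": []}' > "$DATA_DIR/support.json"
git -C "$DATA_DIR" add -A
git -C "$DATA_DIR" -c user.name=doc -c user.email=doc@example.com \
    commit -qm "add support namespace"
echo '{"intents": ["order.cancel"]}' > "$DATA_DIR/support.json"
git -C "$DATA_DIR" add -A
git -C "$DATA_DIR" -c user.name=doc -c user.email=doc@example.com \
    commit -qm "train: order.cancel"

# Built-in audit trail: one line per mutation.
git -C "$DATA_DIR" log --oneline

# Plain-git rollback: restore the working tree to the previous commit.
git -C "$DATA_DIR" checkout -q HEAD~1 -- .
cat "$DATA_DIR/support.json"
```

Remote sync is the standard mechanism too: `git remote add` plus `git push` against any host, with no MicroResolve-specific transport.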