# User Guide

## What you will find here
| Topic | When you need it |
|---|---|
| Embedders | Pick the right vector model (SentenceTransformers, OpenAI, ONNX int8). |
| Integrations | LangChain, DeepAgents, PostgreSQL, Neo4j, mem0. |
| Correcting memories | Override wrong facts; `meta_filter`, per-memory provenance, anti-resonance attractors. |
| Coding agents (overview) | Persistent memory for Claude Code, Cursor, custom MCP clients. |
| · Claude Code | MCP server + `SessionStart` / `PreCompact` hooks. |
| · Cursor | MCP server + `semvec.mdc` project rule. |
| Cortex (multi-agent) | In-process `SemvecAgentNetwork`, `SemvecCortexService`, or REST. |
| Cortex over REST API | Clusters, regions, observers, network endpoints. |
| Compliance pack | Event store, retention, deletion certificates, HMAC, RS256. |
| Troubleshooting | Symptom-driven fix table. |
| FAQ | "When to use Semvec vs …", licensing edge cases, offline use. |
## When to use what
- Default starting point: `semvec serve` (REST API). Lowest setup friction, polyglot, fixed endpoint shapes.
- Multi-agent coordination: start with Cortex over REST (`/v1/cluster/*`, `/v1/region/*`); same low setup friction as the base REST API, distributed-ready out of the box.
- Tighter per-turn latency / in-process state: drop into the in-process library (`SemvecState` + `SemvecChatProxy`).
- Multi-agent inside one Python process: Cortex in-process (`SemvecAgentNetwork`) once you are already on the library.
- Regulated workload: add the compliance pack on top of any path above.
- Coding agent (Claude Code, Cursor): use the coding-agents MCP server.
For a side-by-side decision tree, see Choose your path.
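To make the default REST path concrete, here is a minimal sketch of preparing a request against a locally running `semvec serve` instance. Only the `/v1/cluster/*` prefix comes from this guide; the bind address, the `status` suffix, and the payload fields are illustrative assumptions, so check the API Reference for the real surface.

```python
"""Hypothetical sketch: building (not sending) a request to `semvec serve`."""
import json
from urllib.request import Request

BASE_URL = "http://localhost:8000"  # assumed default bind address


def cluster_request(action: str, payload: dict) -> Request:
    """Build a POST against an assumed /v1/cluster/* Cortex endpoint."""
    return Request(
        f"{BASE_URL}/v1/cluster/{action}",  # "status" suffix is illustrative
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = cluster_request("status", {"agent_id": "planner-1"})  # hypothetical fields
print(req.full_url)  # http://localhost:8000/v1/cluster/status
```

Keeping request construction separate from sending (e.g. via `urllib.request.urlopen(req)`) makes the endpoint shapes easy to inspect before wiring up a live server.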
## See also
- API Reference — public surface, every signature.
- Architecture — abstract component model.
- Comparisons — interface-level differences vs mem0 / Letta / LangChain Memory.