Integrations

The pss reference shipped adapters for LangChain, DeepAgents, PostgreSQL and Neo4j inside the library tree. semvec deliberately keeps those in user space — their dependencies are heavy and opinionated, and the PSS API they consume is small enough to wire up in a few lines.

This page shows the recipes.

LangChain

Expose PSS memory as a LangChain retriever by wrapping SemvecState.memory.get_relevant_memories:

from typing import Any, List

from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever

from semvec import SemvecState

class SemvecRetriever(BaseRetriever):
    # BaseRetriever is a Pydantic model: declare fields rather than
    # assigning private attributes in __init__.
    state: SemvecState
    embedder: Any
    top_k: int = 5

    model_config = {"arbitrary_types_allowed": True}

    def _get_relevant_documents(self, query: str, *, run_manager=None) -> List[Document]:
        vec = self.embedder.get_embedding(query)
        hits = self.state.memory.get_relevant_memories(vec, top_k=self.top_k)
        return [
            Document(page_content=m.text, metadata={"importance": m.importance})
            for m in hits
        ]

For tools, wrap each PSS method you want the agent to call:

from langchain_core.tools import tool

@tool
def pss_record_note(text: str) -> str:
    """Fold a note into the persistent semantic state."""
    state.update(embedder.get_embedding(text), text)
    return "OK"

DeepAgents

DeepAgents middleware runs on every step to mutate context. Swap pss's PSSMiddleware for a thin wrapper over SemvecStateSerializer:

from deepagents import AgentMiddleware
from semvec import SemvecState
from semvec.token_reduction import SemvecStateSerializer

class SemvecMiddleware(AgentMiddleware):
    def __init__(self, state: SemvecState, embedder):
        self._state = state
        self._embedder = embedder
        self._serializer = SemvecStateSerializer()

    def before_step(self, context):
        query_vec = self._embedder.get_embedding(context.last_message)
        context.system += "\n\n" + self._serializer.serialize(
            self._state, query_embedding=query_vec
        )
        return context

    def after_step(self, context):
        text = context.last_message + "\n" + context.assistant_reply
        self._state.update(self._embedder.get_embedding(text), text[:500])

PostgreSQL persistence

Store full SemvecState.to_dict() snapshots in a JSONB column. Integrity checking is built-in (SHA-256 checksum).

CREATE TABLE semvec_states (
    session_id  TEXT PRIMARY KEY,
    state       JSONB NOT NULL,
    updated_at  TIMESTAMPTZ DEFAULT now()
);

import json
import psycopg
from semvec import SemvecState

def save(conn, session_id: str, state: SemvecState):
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO semvec_states(session_id, state) VALUES (%s, %s)"
            " ON CONFLICT (session_id) DO UPDATE SET state = EXCLUDED.state,"
            " updated_at = now()",
            (session_id, json.dumps(state.to_dict())),
        )
    conn.commit()

def load(conn, session_id: str) -> SemvecState:
    with conn.cursor() as cur:
        cur.execute("SELECT state FROM semvec_states WHERE session_id = %s", (session_id,))
        row = cur.fetchone()
    if row is None:
        raise KeyError(session_id)
    return SemvecState.from_dict(row[0])  # raises StateCorruptionError on checksum mismatch

The from_dict call verifies the embedded SHA-256 checksum — tampered rows surface as a StateCorruptionError rather than as silently corrupted reads.
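The exact snapshot layout is semvec's concern, but the general scheme is easy to illustrate. A stdlib sketch, assuming a top-level checksum field over a canonical sorted-keys JSON encoding — both assumptions about the format, not confirmed semvec internals:

```python
import hashlib
import json

def checksum(payload: dict) -> str:
    """SHA-256 over a canonical (sorted-keys, no-whitespace) JSON encoding."""
    blob = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

def verify(snapshot: dict) -> dict:
    """Split off the stored checksum, recompute, compare."""
    data = dict(snapshot)
    stored = data.pop("checksum")
    if checksum(data) != stored:
        # semvec raises StateCorruptionError here; ValueError stands in.
        raise ValueError("state corruption: checksum mismatch")
    return data
```

Canonical encoding is the load-bearing detail: without sorted keys and fixed separators, two equal dicts could serialize differently and a valid snapshot would fail verification.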

Neo4j property graph

Use LiteralCache entities as graph nodes. Typical schema:

  • (:CodeEntity {kind, name, file, signature, semantic_hash})
  • (:Session {id, created_at})
  • (:CodePointer {file, signature, importance})
  • Relationships: (session) -[:TOUCHED]-> (entity), (entity) -[:CALLS]-> (entity).

from neo4j import GraphDatabase
from semvec import LiteralCache  # re-exported from semvec._core (top-level import works)

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "..."))

def sync_cache(cache: LiteralCache, session_id: str):
    with driver.session() as sess:  # a driver session, not a single transaction
        for entity in cache.all_entities():  # see LiteralCache API
            sess.run(
                "MERGE (c:CodeEntity {semantic_hash: $hash}) "
                "SET c.name = $name, c.kind = $kind, c.file = $file "
                "MERGE (s:Session {id: $session_id}) "
                "MERGE (s)-[:TOUCHED]->(c)",
                hash=entity.semantic_hash,
                name=entity.name,
                kind=entity.kind.value,
                file=entity.file_path,
                session_id=session_id,
            )
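semantic_hash makes a good MERGE key because it identifies an entity by content rather than by file position, so a function that moves within a file keeps its node. semvec computes the hash internally; a hypothetical stdlib recipe (the fields and normalization are assumptions, not semvec's actual algorithm) to illustrate the idea:

```python
import hashlib

def semantic_hash(kind: str, name: str, signature: str) -> str:
    """Stable identity hash for a code entity.

    Hashes kind, name, and a whitespace-normalized signature, so
    cosmetic reformatting does not change the node identity.
    """
    norm_sig = " ".join(signature.split())
    blob = f"{kind}\x00{name}\x00{norm_sig}".encode("utf-8")
    return hashlib.sha256(blob).hexdigest()[:16]
```

Because MERGE matches on this one property, two sessions that touch the same (unchanged) function converge on a single CodeEntity node instead of creating duplicates.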

Mem0 head-to-head

The pss reference shipped a Mem0Runner that ran the LongMemEval benchmark through Mem0's LLM-based fact-extraction for a direct accuracy / latency comparison. semvec does not ship this runner by default — Mem0's dependency graph is large and independently versioned.

Opt in via the optional extra:

pip install "semvec[mem0]"

Then run your own runner against the shared EntryResult shape from semvec.benchmarks.longmemeval:

from mem0 import Memory
from semvec.benchmarks.longmemeval import EntryResult, LongMemEvalDataset

def run_entry(entry, mem0: Memory, llm) -> EntryResult:
    # 1. Reset Mem0 between entries, then ingest the haystack sessions
    mem0.reset()
    for session in entry.haystack_sessions:
        for turn in session.turns:
            mem0.add(turn["content"], user_id="bench")
    # 2. Query
    hits = mem0.search(entry.question, user_id="bench", limit=10)
    context = "\n".join(h["memory"] for h in hits)
    ...
    return EntryResult(...)  # see semvec.benchmarks.longmemeval.runner

The GroundTruthEvaluator from semvec accepts any EntryResult-shaped input, so the Mem0 verdicts drop into the same judge/majority pipeline.
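The majority step itself is simple to picture. An illustrative, stdlib-only vote over per-judge verdict labels — the tie-breaking rule here is an assumption for the sketch, not semvec's actual pipeline:

```python
from collections import Counter

def majority_verdict(verdicts: list) -> str:
    """Collapse per-judge verdict labels into one by majority vote.

    Conservative tie-break (an assumption of this sketch): if two or
    more labels share the top count, the verdict is "incorrect".
    """
    counts = Counter(verdicts)
    top, n = counts.most_common(1)[0]
    if sum(1 for c in counts.values() if c == n) > 1:
        return "incorrect"
    return top
```

A conservative tie-break biases the benchmark against the system under test, which keeps a head-to-head comparison from flattering either side on ambiguous answers.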

The pss reference reported Mem0 as 13× slower and 16 percentage points less accurate than pss Multi-PSS on the same 60 balanced entries. Reproducing that result against semvec is left as an opt-in exercise because the LLM cost of a full 500-entry Mem0 run is significant.