Memory that returns.
Revien is a graph-based memory engine for AI systems. It gives any AI tool — local models, Claude Code, API-based assistants, agent frameworks — persistent memory across sessions. No GPU required. No cloud account needed. Nothing is compacted away by default.
```bash
pip install revien
revien connect claude-code
revien start
```

That's it. Revien starts building persistent memory.
Most AI tools still lose continuity between sessions. Most memory systems either forget too much, hide too much, or force developers to assemble the pipeline themselves.
| Feature | Revien | LangChain Memory | Mem0 | Zep |
|---|---|---|---|---|
| Graph-based retrieval | ✅ | ❌ (linear/compaction) | ❌ | Partial |
| Three-factor scoring | ✅ | ❌ | ❌ | ❌ |
| OpenAI export ingestion | ✅ | ❌ | ❌ | ❌ |
| LangChain drop-in | ✅ | N/A | ✅ | ✅ |
| Ollama integration | ✅ | ❌ | ❌ | ❌ |
| Claude Code integration | ✅ | ❌ | ❌ | ❌ |
| Zero cloud dependency | ✅ | Partial | ❌ | ❌ |
| File watcher auto-ingest | ✅ | ❌ | ❌ | ❌ |
Revien takes a different approach: store everything as a graph, compact nothing, retrieve surgically.
Every piece of context becomes a node in a knowledge graph with typed relationships to other nodes. When your AI needs context, Revien walks the graph and returns only what's relevant — scored by recency, frequency, and relationship proximity. The full history remains preserved in the graph. Nothing is ever summarized away.
When you feed Revien a conversation, it extracts:
- Entities — people, projects, tools, organizations
- Decisions — choices that were made and why
- Facts — specific data points, configurations, values
- Topics — recurring themes across conversations
- Preferences — how you like things done
- Events — things that happened at specific times
These become nodes. Relationships between them become edges. The graph grows over time but retrieval stays fast because you're walking edges, not scanning embeddings.
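The node-and-edge structure described above can be sketched in a few lines. This is a conceptual illustration of the idea, not Revien's actual schema; the field names here are assumptions.

```python
from dataclasses import dataclass

# Conceptual sketch of the graph structures described above.
# Not Revien's actual schema -- an illustration of the idea only.

@dataclass
class Node:
    id: str
    node_type: str      # entity, decision, fact, topic, preference, event, context
    label: str
    access_count: int = 0

@dataclass
class Edge:
    source: str         # node id
    target: str         # node id
    relation: str       # e.g. related_to, decided_in, depends_on

# Retrieval walks edges outward from an anchor node instead of
# scanning every node, so query cost tracks the local neighborhood,
# not the total graph size.
def neighbors(node_id: str, edges: list[Edge]) -> set[str]:
    out = {e.target for e in edges if e.source == node_id}
    out |= {e.source for e in edges if e.target == node_id}
    return out
```

This is why retrieval stays fast as the graph grows: a query only ever touches the anchor's neighborhood.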
When you query Revien, every candidate node gets scored on three dimensions:
- Recency — when was this last relevant? (exponential decay)
- Frequency — how often does this come up? (logarithmic, diminishing returns)
- Proximity — how many graph edges from the query anchor? (hop distance)
The composite score determines what gets surfaced. Only the top results are returned — your AI gets a lean, relevant context window instead of a bloated dump.
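A minimal sketch of how the three factors might combine, using the default weights from Revien's configuration (recency 0.35, frequency 0.30, proximity 0.35, 7-day half-life). The exact formulas are assumptions for illustration, not Revien's source.

```python
import math

# Illustrative three-factor scoring using Revien's default config
# weights. The formulas themselves are assumptions for this sketch.

HALF_LIFE_DAYS = 7

def recency_score(days_since_access: float) -> float:
    # Exponential decay: halves every HALF_LIFE_DAYS
    return 0.5 ** (days_since_access / HALF_LIFE_DAYS)

def frequency_score(access_count: int, cap: int = 100) -> float:
    # Logarithmic with diminishing returns, normalized to [0, 1]
    return min(math.log1p(access_count) / math.log1p(cap), 1.0)

def proximity_score(hops: int) -> float:
    # 1.0 at the query anchor, decaying with hop distance
    return 1.0 / (1 + hops)

def composite(days: float, count: int, hops: int) -> float:
    return (0.35 * recency_score(days)
            + 0.30 * frequency_score(count)
            + 0.35 * proximity_score(hops))
```

A node touched today, retrieved often, and adjacent to the anchor scores near 1.0; a stale, rarely-used node three hops out scores near zero.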
Every time a node is retrieved, its access count increases. This boosts its frequency score in future queries. Memory that's actually useful becomes easier to find over time — automatically, with no ML model, no training step, no user intervention.
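The reinforcement loop can be sketched as follows. The logarithmic formula is an assumption for illustration; the point is that each retrieval bumps the count, and each extra access adds less than the last.

```python
import math

# Sketch of retrieval-driven reinforcement: every retrieval bumps a
# node's access count, raising a logarithmic frequency score with
# diminishing returns. Formula is illustrative, not Revien's source.

def frequency_score(access_count: int, cap: int = 100) -> float:
    return min(math.log1p(access_count) / math.log1p(cap), 1.0)

counts = {"postgresql": 0}

def retrieve(node_id: str) -> float:
    counts[node_id] += 1            # reinforcement on every retrieval
    return frequency_score(counts[node_id])

first = retrieve("postgresql")      # score after one access
tenth = [retrieve("postgresql") for _ in range(9)][-1]
# The tenth retrieval scores higher than the first, but each
# additional access contributes less than the one before it.
```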
```bash
# Core
pip install revien

# With LangChain support
pip install "revien[langchain]"

# With all adapters
pip install "revien[all]"
```

```bash
revien start
```

This launches the Revien daemon on localhost:7437. It serves the REST API and runs the auto-sync scheduler.
```bash
revien connect claude-code
```

Revien reads local session logs and indexes them into the graph. Every new session gets ingested automatically.
```bash
revien recall "What database did we decide to use?"
```

Returns scored results from your memory graph:

```
1. [decision] Enterprise tier at $499/month, 20% annual discount, PostgreSQL
   Score: 0.89 | Recency: 1.00 | Frequency: 0.63 | Proximity: 1.00
2. [entity] PostgreSQL
   Score: 0.84 | Recency: 1.00 | Frequency: 0.46 | Proximity: 1.00
```
```python
import httpx

# Ingest a conversation
httpx.post("http://localhost:7437/v1/ingest", json={
    "source_id": "my-session",
    "content": "We decided to use PostgreSQL for the database layer.",
    "content_type": "conversation",
})

# Recall relevant memory
response = httpx.post("http://localhost:7437/v1/recall", json={
    "query": "What database are we using?"
})

for result in response.json()["results"]:
    print(f"[{result['node_type']}] {result['label']} (score: {result['score']:.2f})")
```

Revien integrates with popular AI platforms and frameworks. Migrate existing conversation history, drop in as a replacement memory backend, or bridge to local models — all without losing context.
Migrate conversation history into persistent graph-based AI memory. Export your ChatGPT conversations and transform them into a queryable knowledge graph:
```bash
revien ingest --source openai --file conversations.json --bulk
```

Conversations become queryable graph nodes with three-factor scoring. Instead of losing old conversations to context limits, they remain accessible and searchable — a persistence layer for your ChatGPT history that doesn't forget.
Drop-in replacement for LangChain's memory backend. Use graph-based AI memory instead of compaction:
```python
from langchain.chains import ConversationChain

from revien.adapters.langchain_adapter import RevienMemory

# llm is any LangChain-compatible model
memory = RevienMemory(graph_path="./my_graph")
chain = ConversationChain(llm=llm, memory=memory)
```

No compaction, no summarization loss. Graph-based retrieval surfaces what's relevant instead of what's recent, so your agent always has access to the full context it needs.
Zero-cloud persistent memory for local models. Run your own LLM with Revien's local AI memory engine:
```bash
revien chat --backend ollama --model llama3
```

Every conversation persists in your local graph. Full privacy, full memory, zero cloud dependency — no vendor lock-in, no cloud bills, just your models and your graph.
Revien connects to AI systems through adapters. These ship with the package:
| Adapter | What it does |
|---|---|
| Claude Code | Reads Claude Code session logs (JSONL). Auto-syncs on schedule. |
| File Watcher | Watches a directory for new/changed files. Ingests on change. |
| Generic API | Connects to any REST endpoint returning conversation data. |
| OpenAI / ChatGPT | Ingests ChatGPT conversation exports. Supports single and bulk formats. |
| LangChain | Drop-in BaseMemory replacement. Graph-based retrieval instead of compaction. |
| Ollama | Bridges Revien memory to local Ollama models. Zero cloud dependency. |
```bash
# Claude Code (auto-detects log location)
revien connect claude-code

# Watch a directory
revien connect file-watcher --path /path/to/conversations/

# Generic API endpoint
revien connect api --url https://your-system.com/api/conversations --header "Authorization: Bearer ..."
```

Custom adapters subclass RevienAdapter and implement two methods:

```python
from revien.adapters.base import RevienAdapter

class MyAdapter(RevienAdapter):
    async def fetch_new_content(self, since):
        # Return a list of {content, content_type, timestamp, metadata} dicts
        ...

    async def health_check(self):
        return True
```

The daemon exposes a full REST API on localhost:7437:
| Method | Endpoint | Function |
|---|---|---|
| POST | `/v1/ingest` | Ingest raw content into the graph |
| POST | `/v1/recall` | Query memory with three-factor scoring |
| GET | `/v1/nodes` | List nodes (filter by type, date, source) |
| GET | `/v1/nodes/{id}` | Get a specific node with edges |
| PUT | `/v1/nodes/{id}` | Update a node |
| DELETE | `/v1/nodes/{id}` | Delete a node and its edges |
| GET | `/v1/graph` | Export full graph as JSON |
| POST | `/v1/graph/import` | Import graph from JSON |
| POST | `/v1/sync` | Trigger manual sync |
| GET | `/v1/health` | Health check |
Interactive docs at http://localhost:7437/docs when the daemon is running.
Node types: entity · topic · decision · fact · preference · event · context

Edge types: related_to · decided_in · mentioned_by · depends_on · followed_by · contradicts
Every ingestion creates a context node representing the full interaction. All extracted nodes connect back to it. You can always trace any fact or decision back to the conversation where it originated.
```
                Any AI System
                      │
                      ▼
┌─────────────┐     ┌──────────────┐
│ Revien API  │────▶│  Ingestion   │──── extract nodes + edges
│  (FastAPI)  │     │   Engine     │     from raw content
└──────┬──────┘     └──────────────┘
       │                   │
       ▼                   ▼
┌─────────────┐     ┌──────────────┐
│  Retrieval  │◀────│ Graph Store  │──── SQLite (local)
│   Engine    │     │   (nodes +   │     PostgreSQL (hosted)
│  (3-factor) │     │    edges)    │
└─────────────┘     └──────────────┘
       │
       ▼
 Scored results
 (top N nodes)
       │
       ▼
Any AI's context window
(lean, relevant, surgical)
```
From 5 sample conversations (60 nodes, 147 edges):
| Metric | Value |
|---|---|
| Average retrieval time | 38.75ms |
| Queries under 50ms | 67% |
| Queries under 100ms | 100% |
| Hit rate (relevant results) | 87% (13/15) |
| Zero GPU | ✓ |
| Zero cloud dependency | ✓ |
The two misses were intentionally vague queries with no extractable entities ("Tell me about our architecture").
Config lives at ~/.revien/config.json. Created automatically on first run.
```json
{
  "daemon": {
    "host": "127.0.0.1",
    "port": 7437
  },
  "sync": {
    "interval_hours": 6
  },
  "retrieval": {
    "max_results": 5,
    "max_hops": 3,
    "recency_weight": 0.35,
    "frequency_weight": 0.30,
    "proximity_weight": 0.35,
    "recency_half_life_days": 7
  },
  "adapters": []
}
```

All retrieval weights are configurable. Adjust to your use case — boost recency for fast-moving projects, boost frequency for stable knowledge bases.
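For example, a fast-moving project might weight recency more heavily and shorten the half-life. The values below are illustrative; the weights are kept summing to 1.0, since the document does not say whether Revien normalizes them.

```json
{
  "retrieval": {
    "max_results": 5,
    "max_hops": 3,
    "recency_weight": 0.50,
    "frequency_weight": 0.20,
    "proximity_weight": 0.30,
    "recency_half_life_days": 3
  }
}
```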
- Additional adapters for popular AI tools
- Improved vague-query handling
- Hosted deployment option
- Graph visualization and inspection tools
- Performance tuning and scale improvements
See CONTRIBUTING.md for guidelines.
Revien is the open-source memory layer from LKM Constructs.
Apache 2.0 — see LICENSE.
Copyright 2026 LKM Constructs LLC.