## Problem

The inbound inbox stores all pending and dead-lettered messages in a single `index.json` file:

```
~/.clawdentity/agents/<name>/inbound-inbox/
├── index.json   ← entire queue state loaded into memory on every operation
├── index.lock   ← file-based mutex (lock file with wx flag)
└── events.jsonl ← append-only audit log
```
Issues:

- **Full file load on every operation** — `loadIndex()` reads and parses the entire JSON on every enqueue, delivery, and failure mark.
- **File lock hack** — `writeFile(path, data, { flag: 'wx' })` serves as a mutex, with stale-lock detection (30 s timeout). It works, but it is fragile.
- **No crash recovery** — if the process dies between `saveIndex()` calls, the file can be left partially written.
- **No concurrent reads** — the write lock blocks reads too.
- **Scales poorly** — hundreds of pending messages mean loading and saving an increasingly large JSON blob on every operation.
## Proposal

Replace `index.json` with SQLite via the Node.js built-in `node:sqlite` module (`DatabaseSync`).

Precedent: OpenClaw already uses this exact approach for its memory system:

- `node:sqlite` `DatabaseSync` (zero external dependencies, ships with Node.js 22+)
- `sqlite-vec` extension for vector search
- Transactions via `BEGIN`/`COMMIT`/`ROLLBACK`
- Works well on the same machines the connector runs on
## Schema

```sql
CREATE TABLE IF NOT EXISTS inbox_pending (
  request_id      TEXT PRIMARY KEY,
  id              TEXT NOT NULL,
  from_agent_did  TEXT NOT NULL,
  to_agent_did    TEXT NOT NULL,
  conversation_id TEXT,
  reply_to        TEXT,
  payload         TEXT NOT NULL,
  payload_bytes   INTEGER NOT NULL,
  received_at     TEXT NOT NULL,
  next_attempt_at TEXT NOT NULL,
  attempt_count   INTEGER NOT NULL DEFAULT 0,
  last_attempt_at TEXT,
  last_error      TEXT
);

CREATE TABLE IF NOT EXISTS inbox_dead_letter (
  request_id         TEXT PRIMARY KEY,
  id                 TEXT NOT NULL,
  from_agent_did     TEXT NOT NULL,
  to_agent_did       TEXT NOT NULL,
  conversation_id    TEXT,
  reply_to           TEXT,
  payload            TEXT NOT NULL,
  payload_bytes      INTEGER NOT NULL,
  received_at        TEXT NOT NULL,
  next_attempt_at    TEXT NOT NULL,
  attempt_count      INTEGER NOT NULL,
  last_attempt_at    TEXT,
  last_error         TEXT,
  dead_lettered_at   TEXT NOT NULL,
  dead_letter_reason TEXT NOT NULL
);

CREATE TABLE IF NOT EXISTS inbox_events (
  id         INTEGER PRIMARY KEY AUTOINCREMENT,
  type       TEXT NOT NULL,
  request_id TEXT,
  details    TEXT,
  created_at TEXT NOT NULL DEFAULT (datetime('now'))
);

-- IF NOT EXISTS so re-running the schema on startup stays idempotent,
-- matching the CREATE TABLE statements above
CREATE INDEX IF NOT EXISTS idx_pending_next_attempt ON inbox_pending(next_attempt_at);
CREATE INDEX IF NOT EXISTS idx_pending_conversation ON inbox_pending(conversation_id);
CREATE INDEX IF NOT EXISTS idx_dead_letter_at ON inbox_dead_letter(dead_lettered_at);
CREATE INDEX IF NOT EXISTS idx_events_type ON inbox_events(type);
```
## Benefits

| Operation | JSON file (current) | SQLite |
|---|---|---|
| Enqueue | Load entire file → parse → modify → serialize → write | Single `INSERT` |
| List due pending | Load all → filter → sort in JS | `SELECT ... WHERE next_attempt_at <= ? ORDER BY ... LIMIT ?` |
| Mark delivered | Load all → delete key → save all | Single `DELETE` |
| Concurrency | File lock (`wx` flag hack) | Built-in WAL mode |
| Crash safety | Partial write = corruption | Atomic transactions |
| Memory usage | Entire index in memory | Query only what you need |
| Dedup check | Load all → check key exists | `SELECT 1 WHERE request_id = ?` |
| Dead letter list | Load all → filter → sort | `SELECT ... ORDER BY dead_lettered_at` |
| Capacity check | Load all → count keys + sum bytes | `SELECT COUNT(*), SUM(payload_bytes)` |
| Events log | Append to JSONL + manual rotation | `INSERT` + `DELETE WHERE id < (SELECT id FROM ... LIMIT 1 OFFSET max)` |
## Migration

- On first load, check whether `index.json` exists
- If it does, import all pending and dead-letter items into SQLite
- Rename `index.json` → `index.json.migrated` as a backup
- New installs go straight to SQLite
## Implementation Notes

- Use `node:sqlite` `DatabaseSync` (same as the OpenClaw memory system)
- Enable WAL mode: `PRAGMA journal_mode=WAL`
- No external dependencies needed (SQLite ships built into Node.js 22+)
- The `withWriteLock` promise chain can stay for in-process serialization of writes, but the file lock is no longer needed
- The events table replaces `events.jsonl` — auto-pruning happens via SQL instead of file rotation