VECLABS / RECALL / ALPHA

The complete memory layer for AI agents.

Two packages. Store, encrypt, verify - then assemble exactly the right context for every agent decision.

@veclabs/solvec [LIVE]: Store. Encrypt. Verify.
@veclabs/recall [PHASE 7]: Retrieve. Assemble. Audit.
npm install @veclabs/solvec
# or
pip install solvec --pre

Free tier includes 100K vectors. No credit card required.

4.7ms p99 latency · AES-256-GCM encrypted · On-chain Merkle proofs · 88% cheaper than Pinecone
THE PROBLEM

AI agents forget. But forgetting is not the problem.

Every AI agent framework gives you a memory module. Store vectors. Retrieve the top-K most similar results. Inject them into the context window. This works. Most of the time.

The problem is what happens when it does not work. When your agent makes a wrong decision - a medical recommendation, a financial call, a customer-facing action - you need to answer one question.

What did this agent know when it made that decision?

No current vector database can answer this. You can see what is in the database today. You cannot prove what was in it at any specific moment. You cannot audit what was retrieved for a given query. You cannot verify that nothing changed between then and now.

This is not a theoretical concern. Regulators are already asking this question. Most companies cannot answer it. VecLabs is built so you can.

ARCHITECTURE

LAYER 1 [SHIPPED]: Rust HNSW engine

In-process vector search. No network round-trip on the query path. 4.7ms p99 at 100K vectors.

LAYER 2 [ALPHA]: SDK & client-side encryption

AES-256-GCM before anything leaves the SDK. The key is derived from your credentials.

LAYER 3 [LIVE]: Solana - cryptographic proof

A Merkle root of your collection state is recorded on-chain after every write. Verifiable by anyone.
THE STACK
@veclabs/solvec [LIVE]

The storage engine.

Rust HNSW in-process vector search. AES-256-GCM client-side encryption. Cryptographic proof posted after every write. Answers: what is similar to this?

snippet.ts
import { SolVec } from '@veclabs/solvec'

const sv = new SolVec({ apiKey: 'vl_live_...' })
const col = sv.collection('agent-memory', { dimensions: 1536 })

await col.upsert([{
  id: 'mem_001',
  values: embedding,
  metadata: { content: 'User prefers dark mode' }
}])
// encrypted to disk
// cryptographic proof recorded

const { matches } = await col.query({ vector: query, topK: 5 })

const proof = await col.verify()
// proof.verified === true
$ npm install @veclabs/solvec

@veclabs/recall [PHASE 7]

The intelligence layer.

Wraps solvec. Assembles structured context from five retrieval strategies simultaneously. Answers: what should this agent know right now?

snippet.ts
// coming in phase 7
import { Recall } from '@veclabs/recall'

const recall = new Recall(collection)

const context = await recall.getContext({
  task: queryEmbedding,
  strategy: 'balanced',
  maxTokens: 2000
})
// context.persistent - always-relevant memories
// context.recent     - recency weighted
// context.relevant   - semantically close
// context.novel      - unseen recently
// context.conflicts  - contradicts current task
// context.tokenCount - 1847
WRITE PATH

upsert() called
  [HNSW insert]           ~2ms        ← returns here
  [AES-256-GCM encrypt]   ~1ms
  [disk persist]          ~1ms
  [Shadow Drive upload]   ~500ms–2s   fire and forget
  [cryptographic proof]   ~400ms      fire and forget

QUERY PATH

query() → [HNSW search in RAM] → 4.7ms p99 → returns

Verification layer never touched on query path.
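The write path above can be sketched in a few lines. Every helper here is a hypothetical stand-in for illustration; only the control flow - what blocks the caller versus what runs fire-and-forget - mirrors the diagram.

```typescript
type Mem = { id: string; values: number[] }

const index = new Map<string, number[]>()
let backedUp = false

// ~2ms: synchronous in-process insert - the caller gets control back after this
function insertIntoHnsw(r: Mem): void {
  index.set(r.id, r.values)
}

// ~500ms-2s: background upload, never awaited by upsert()
async function uploadToShadowDrive(_r: Mem): Promise<void> {
  await new Promise((res) => setTimeout(res, 50))
  backedUp = true
}

async function upsert(r: Mem): Promise<void> {
  insertIntoHnsw(r)                                // returns here, per the diagram
  void uploadToShadowDrive(r).catch(console.error) // fire and forget
}

upsert({ id: 'mem_001', values: [0.1, 0.2] }).then(() => {
  // upsert has resolved, but the backup is still in flight
  console.log(index.has('mem_001'), backedUp) // true false
})
```

The point is the `void ...` line: the upload promise is started but never awaited, so slow network I/O can never add latency to upsert().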
PERFORMANCE

P99 latency:
  VecLabs (in-process)   4.7ms
  Qdrant                 ~15ms
  Pinecone               ~30ms

Apple M3 · 100K vectors · 1536 dimensions · cosine similarity
Reproduce: cargo run --release --example percentile_bench
COMPARISON

Feature                  VecLabs      Pinecone      Qdrant
P99 query latency        4.7ms        ~30ms         ~15ms
Architecture             in-process   managed API   server
Cryptographic proof      Yes          -             -
Client-side encryption   Yes          -             -
Memory Inspector         Yes          -             -
Open source              MIT          -             Apache 2
ROADMAP

PHASE 1 [SHIPPED]: Rust HNSW Core
57 tests. 4.7ms p99 at 100K vectors. In-process. No network round-trip on the query path.

PHASE 2 [SHIPPED]: Anchor Program
Cryptographic proof of collection state after every write. One transaction. $0.00025. Permanent.

PHASE 3 [SHIPPED]: WASM Bridge
Rust core compiled to WebAssembly. TypeScript SDK runs real Rust - not a JavaScript fallback.

PHASE 4 [SHIPPED]: Encrypted Disk Persistence
AES-256-GCM. Key derived from your credentials. Collection survives server restarts.

PHASE 5 [SHIPPED]: Decentralized Storage
Encrypted collection backed up automatically after every write. Fire-and-forget. Never blocks upsert().

PHASE 6 [SHIPPED]: Memory Inspector
Full audit trail of every memory operation. What the agent stored, retrieved, and deleted - with timestamps and proof.

PHASE 7 [PLANNED]: @veclabs/recall
The intelligence layer. Five retrieval strategies assembled simultaneously. Token-budget-aware context for every decision.
In research
Something at the intersection of vector search and graph structures - not RAG on top of a graph, but a fundamentally different architecture where the graph is the index. Very early. Nothing to ship yet.
DESIGN DECISIONS
SPEED

No network round-trip on the query path. The HNSW index lives in RAM. Every query is pure memory access. Rust means no garbage collector - no pause spikes at p99. The gap between p50 and p99.9 is 2.7ms. That gap is the whole story.

PRIVACY

AES-256-GCM before anything leaves the SDK. The key is derived from your credentials. VecLabs cannot read your vectors. This is not a privacy policy that could change. It is the encryption architecture. Those are different things.
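To make "encrypted before anything leaves the SDK" concrete, here is a minimal AES-256-GCM round-trip using Node's built-in crypto module. The credential string, salt, and scrypt key derivation are invented for this sketch - the SDK's actual scheme is not shown here - but the shape is the same: derive a key client-side, encrypt locally, and only ciphertext plus an auth tag ever leave the process.

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from 'node:crypto'

// Hypothetical derivation: 32-byte AES-256 key from a credential (parameters invented)
const key = scryptSync('vl_live_example_credential', 'example-salt', 32)

function encrypt(plaintext: Buffer): { iv: Buffer; ciphertext: Buffer; tag: Buffer } {
  const iv = randomBytes(12) // fresh 96-bit nonce per record
  const cipher = createCipheriv('aes-256-gcm', key, iv)
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()])
  return { iv, ciphertext, tag: cipher.getAuthTag() }
}

function decrypt(box: { iv: Buffer; ciphertext: Buffer; tag: Buffer }): Buffer {
  const decipher = createDecipheriv('aes-256-gcm', key, box.iv)
  decipher.setAuthTag(box.tag)
  // GCM authenticates: any tampering with ciphertext or tag makes final() throw
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()])
}

const box = encrypt(Buffer.from('User prefers dark mode'))
console.log(decrypt(box).toString()) // prints "User prefers dark mode"
```

GCM gives authentication for free: a server that flips one bit of stored ciphertext produces a record that fails to decrypt, rather than silently corrupted memory.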

PROOF

A cryptographic fingerprint of your collection is recorded after every write. Anyone can verify your collection's integrity without trusting VecLabs. No other vector database gives you this. This is the layer that makes AI agents auditable.
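The idea behind that fingerprint can be sketched with a toy Merkle root over record hashes. This illustrates the general technique, not VecLabs' actual on-chain format: hash every record, then pairwise-hash levels until a single 32-byte root remains. Changing any record changes the root, so publishing the root commits to the entire collection state.

```typescript
import { createHash } from 'node:crypto'

function sha256(data: Buffer): Buffer {
  return createHash('sha256').update(data).digest()
}

// Toy Merkle root: hash each leaf, then combine pairs level by level.
// An odd leaf at any level is paired with itself (one common convention).
function merkleRoot(records: Buffer[]): Buffer {
  if (records.length === 0) throw new Error('empty collection')
  let level = records.map(sha256)
  while (level.length > 1) {
    const next: Buffer[] = []
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]
      next.push(sha256(Buffer.concat([level[i], right])))
    }
    level = next
  }
  return level[0]
}

const root = merkleRoot([Buffer.from('mem_001'), Buffer.from('mem_002')])
// Same records -> same root; any edit to any record -> different root.
```

Because the root is recorded on a public chain, anyone can recompute it from the (encrypted) records and compare, without trusting the party that stores them.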

PINECONE-COMPATIBLE API

Migrate in 3 lines.

Everything else stays identical.

import { SolVec } from '@veclabs/solvec';

const sv = new SolVec({ apiKey: 'vl_live_...' });
const collection = sv.collection('agent-memory', { dimensions: 1536 });

await collection.upsert([{
  id: 'mem_001',
  values: embedding,
  metadata: { text: 'User prefers dark mode' }
}]);

const results = await collection.query({ vector: queryEmbedding, topK: 5 });
const proof = await collection.verify(); // on-chain Merkle proof
PRICING

Simple, transparent pricing

Pay for what you use. No hidden fees. No vendor lock-in.

Feature              Free        Pro           Business      Enterprise
                     $0/mo       $25/mo        $199/mo       Custom
                                 Most popular
Vectors              100K        2M            20M           Unlimited
Writes / month       10K         500K          5M            Unlimited
Queries / month      50K         1M            10M           Unlimited
Collections          3           25            Unlimited     Unlimited
Merkle proofs        Yes         Yes           Yes           Yes
Memory Inspector     Yes         Yes           Yes           Yes
Shadow Drive         -           Yes           Yes           Yes
Email support        -           Yes           Yes           Yes
Priority support     -           -             Yes           Yes
99.9% SLA            -           -             Yes           Yes
Dedicated Slack      -           -             -             Yes
On-premise option    -           -             -             Yes
Custom contract      -           -             -             Yes

Start building in 2 minutes

Free tier. No credit card. Get your API key and ship.

QUICK START
snippet.ts
import { SolVec } from '@veclabs/solvec'

const sv = new SolVec({ apiKey: 'vl_live_...' })
const memory = sv.collection('agent-memory', { dimensions: 1536 })

// store
await memory.upsert([{
  id: 'mem_001',
  values: embedding,
  metadata: { content: 'User prefers dark mode' }
}])

// retrieve
const { matches } = await memory.query({
  vector: queryEmbedding,
  topK: 5
})

// verify
const proof = await memory.verify()
// proof.verified === true