Merged
2 changes: 1 addition & 1 deletion README.md
@@ -232,7 +232,7 @@ Drop-in wrappers for every major AI framework:
 ```typescript
 // Vercel AI SDK
 import { withSupermemory } from "@supermemory/tools/ai-sdk";
-const model = withSupermemory(openai("gpt-4o"), "user_123");
+const model = withSupermemory(openai("gpt-4o"), { containerTag: "user_123", customId: "conv-1" });
 
 // Mastra
 import { withSupermemory } from "@supermemory/tools/mastra";
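Every file in this diff migrates `withSupermemory` from a positional container-tag string to an options object carrying `containerTag` and `customId`. A minimal sketch of how such a signature migration can be normalized internally; the `normalizeOptions` helper and the union parameter are illustrative assumptions, not the library's actual code:

```typescript
// Option names are taken from the diff; the helper itself is hypothetical.
interface SupermemoryOptions {
  containerTag: string;
  customId?: string;
  mode?: "profile" | "query" | "full";
  addMemory?: "always" | "never";
  verbose?: boolean;
  skipMemoryOnError?: boolean;
}

// Accepts either the legacy positional tag or the new options object,
// so both call styles shown in the diff resolve to one shape.
function normalizeOptions(
  tagOrOptions: string | SupermemoryOptions,
  legacyOptions: Omit<Partial<SupermemoryOptions>, "containerTag"> = {}
): SupermemoryOptions {
  if (typeof tagOrOptions === "string") {
    // Legacy: withSupermemory(model, "user-123", { mode: "full" })
    return { ...legacyOptions, containerTag: tagOrOptions };
  }
  // New: withSupermemory(model, { containerTag: "user-123", customId: "conv-1" })
  return tagOrOptions;
}
```

Normalizing at the boundary keeps both the pre- and post-migration examples in these docs valid during a deprecation window.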
11 changes: 8 additions & 3 deletions apps/docs/ai-sdk/overview.mdx
@@ -26,7 +26,10 @@ import { withSupermemory } from "@supermemory/tools/ai-sdk"
 import { openai } from "@ai-sdk/openai"
 
 // Wrap your model with Supermemory - profiles are automatically injected
-const modelWithMemory = withSupermemory(openai("gpt-5"), "user-123")
+const modelWithMemory = withSupermemory(openai("gpt-5"), {
+  containerTag: "user-123",
+  customId: "conversation-456",
+})
 
 const result = await generateText({
   model: modelWithMemory,
@@ -39,8 +42,10 @@ const result = await generateText({
 **Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories from conversations, enable it explicitly:
 
 ```typescript
-const modelWithMemory = withSupermemory(openai("gpt-5"), "user-123", {
-  addMemory: "always"
+const modelWithMemory = withSupermemory(openai("gpt-5"), {
+  containerTag: "user-123",
+  customId: "conversation-456",
+  addMemory: "always",
 })
 ```
 </Note>
74 changes: 49 additions & 25 deletions apps/docs/ai-sdk/user-profiles.mdx
@@ -20,10 +20,10 @@ import { withSupermemory } from "@supermemory/tools/ai-sdk"
 import { openai } from "@ai-sdk/openai"
 
 // Wrap any model with Supermemory middleware
-const modelWithMemory = withSupermemory(
-  openai("gpt-4"), // Your base model
-  "user-123" // Container tag (user ID)
-)
+const modelWithMemory = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conversation-456",
+})
 
 // Use normally - profiles are automatically injected!
 const result = await generateText({
@@ -49,8 +49,10 @@ All of this happens transparently - you write code as if using a normal model, b
 **Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories from conversations, set `addMemory: "always"`:
 
 ```typescript
-const model = withSupermemory(openai("gpt-5"), "user-123", {
-  addMemory: "always"
+const model = withSupermemory(openai("gpt-5"), {
+  containerTag: "user-123",
+  customId: "conversation-456",
+  addMemory: "always",
 })
 ```
 </Note>
@@ -65,11 +67,16 @@ Retrieves the user's complete profile without query-specific search. Best for ge
 
 ```typescript
 // Default behavior - profile mode
-const model = withSupermemory(openai("gpt-4"), "user-123")
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
+})
 
 // Or explicitly specify
-const model = withSupermemory(openai("gpt-4"), "user-123", {
-  mode: "profile"
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
+  mode: "profile",
 })
 
 const result = await generateText({
@@ -84,8 +91,10 @@ const result = await generateText({
 Searches memories based on the user's specific message. Best for finding relevant information.
 
 ```typescript
-const model = withSupermemory(openai("gpt-4"), "user-123", {
-  mode: "query"
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
+  mode: "query",
 })
 
 const result = await generateText({
@@ -103,8 +112,10 @@ const result = await generateText({
 Combines profile AND query-based search for comprehensive context. Best for complex interactions.
 
 ```typescript
-const model = withSupermemory(openai("gpt-4"), "user-123", {
-  mode: "full"
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
+  mode: "full",
 })
 
 const result = await generateText({
@@ -137,9 +148,11 @@ ${data.generalSearchMemories}
 </user_memories>
 `.trim()
 
-const model = withSupermemory(openai("gpt-4"), "user-123", {
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
   mode: "full",
-  promptTemplate: customPrompt
+  promptTemplate: customPrompt,
 })
 
 const result = await generateText({
@@ -174,9 +187,11 @@ const claudePrompt = (data: MemoryPromptData) => `
 Use the above context to provide personalized responses.
 `.trim()
 
-const model = withSupermemory(anthropic("claude-3-sonnet"), "user-123", {
+const model = withSupermemory(anthropic("claude-3-sonnet"), {
+  containerTag: "user-123",
+  customId: "conv-1",
   mode: "full",
-  promptTemplate: claudePrompt
+  promptTemplate: claudePrompt,
 })
 ```

@@ -199,9 +214,11 @@ ${relevant.map((r) => `- ${r.memory}`).join("\n")}
 `.trim()
 }
 
-const model = withSupermemory(openai("gpt-4"), "user-123", {
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
   mode: "full",
-  promptTemplate: selectivePrompt
+  promptTemplate: selectivePrompt,
 })
 ```

@@ -222,8 +239,10 @@ ${data.generalSearchMemories}
 Use this information to provide personalized and contextually relevant responses.
 `.trim()
 
-const model = withSupermemory(openai("gpt-4"), "user-123", {
-  promptTemplate: brandedPrompt
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
+  promptTemplate: brandedPrompt,
 })
 ```

@@ -241,8 +260,10 @@ const defaultPrompt = (data: MemoryPromptData) =>
 Enable detailed logging to see exactly what's happening:
 
 ```typescript
-const model = withSupermemory(openai("gpt-4"), "user-123", {
-  verbose: true // Enable detailed logging
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
+  verbose: true, // Enable detailed logging
 })
 
 const result = await generateText({
@@ -266,8 +287,11 @@ The AI SDK middleware abstracts away the complexity of manual profile management
 <Tabs>
 <Tab title="With AI SDK (Simple)">
 ```typescript
-// One line setup
-const model = withSupermemory(openai("gpt-4"), "user-123")
+// Simple setup
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
+})
 
 // Use normally
 const result = await generateText({
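The custom-template examples in this file (`customPrompt`, `claudePrompt`, `selectivePrompt`, `brandedPrompt`) all follow one pattern: a function from a `MemoryPromptData` object to a system-prompt string, passed as `promptTemplate`. A self-contained sketch of that pattern; the interface below only models the `generalSearchMemories` field visible in the snippets, and any other fields of the real type are unknown here:

```typescript
// Illustrative shape only; the real MemoryPromptData exported by
// @supermemory/tools may carry additional fields.
interface MemoryPromptData {
  generalSearchMemories: string;
}

// A prompt template is a plain function, mirroring the brandedPrompt
// example in the diff: wrap the retrieved memories in tags and trim.
const brandedPrompt = (data: MemoryPromptData): string =>
  `
<user_memories>
${data.generalSearchMemories}
</user_memories>

Use this information to provide personalized and contextually relevant responses.
`.trim();
```

Because templates are ordinary functions, they can be unit-tested without any network calls before being handed to the middleware.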
5 changes: 4 additions & 1 deletion apps/docs/install.md
@@ -106,7 +106,10 @@ const result = await streamText({
 
 // Option 2: Profile middleware (automatic context injection)
 import { withSupermemory } from '@supermemory/tools/ai-sdk'
-const modelWithMemory = withSupermemory(anthropic('claude-3-5-sonnet-20241022'), userId)
+const modelWithMemory = withSupermemory(anthropic('claude-3-5-sonnet-20241022'), {
+  containerTag: userId,
+  customId: 'conversation-1',
+})
 
 const result = await generateText({
   model: modelWithMemory,
35 changes: 23 additions & 12 deletions apps/docs/integrations/ai-sdk.mdx
@@ -35,7 +35,10 @@ import { generateText } from "ai"
 import { withSupermemory } from "@supermemory/tools/ai-sdk"
 import { openai } from "@ai-sdk/openai"
 
-const modelWithMemory = withSupermemory(openai("gpt-5"), "user-123")
+const modelWithMemory = withSupermemory(openai("gpt-5"), {
+  containerTag: "user-123",
+  customId: "conversation-456",
+})
 
 const result = await generateText({
   model: modelWithMemory,
@@ -47,8 +50,10 @@ const result = await generateText({
 **Memory saving is disabled by default.** The middleware only retrieves existing memories. To automatically save new memories:
 
 ```typescript
-const modelWithMemory = withSupermemory(openai("gpt-5"), "user-123", {
-  addMemory: "always"
+const modelWithMemory = withSupermemory(openai("gpt-5"), {
+  containerTag: "user-123",
+  customId: "conversation-456",
+  addMemory: "always",
 })
 ```
 </Note>
@@ -58,19 +63,19 @@ const result = await generateText({
 **Profile Mode (Default)** - Retrieves the user's complete profile:
 
 ```typescript
-const model = withSupermemory(openai("gpt-4"), "user-123", { mode: "profile" })
+const model = withSupermemory(openai("gpt-4"), { containerTag: "user-123", customId: "conv-1", mode: "profile" })
 ```
 
 **Query Mode** - Searches memories based on the user's message:
 
 ```typescript
-const model = withSupermemory(openai("gpt-4"), "user-123", { mode: "query" })
+const model = withSupermemory(openai("gpt-4"), { containerTag: "user-123", customId: "conv-1", mode: "query" })
 ```
 
 **Full Mode** - Combines profile AND query-based search:
 
 ```typescript
-const model = withSupermemory(openai("gpt-4"), "user-123", { mode: "full" })
+const model = withSupermemory(openai("gpt-4"), { containerTag: "user-123", customId: "conv-1", mode: "full" })
 ```
 
 ### Custom Prompt Templates
@@ -91,17 +96,21 @@ const claudePrompt = (data: MemoryPromptData) => `
 </context>
 `.trim()
 
-const model = withSupermemory(anthropic("claude-3-sonnet"), "user-123", {
+const model = withSupermemory(anthropic("claude-3-sonnet"), {
+  containerTag: "user-123",
+  customId: "conv-1",
   mode: "full",
-  promptTemplate: claudePrompt
+  promptTemplate: claudePrompt,
 })
 ```
 
 ### Verbose Logging
 
 ```typescript
-const model = withSupermemory(openai("gpt-4"), "user-123", {
-  verbose: true
+const model = withSupermemory(openai("gpt-4"), {
+  containerTag: "user-123",
+  customId: "conv-1",
+  verbose: true,
 })
 // Console output shows memory retrieval details
 ```
@@ -113,8 +122,10 @@ If the Supermemory API returns an error, is unreachable, or retrieval hits the i
 To **fail the call** when memory retrieval fails instead, set `skipMemoryOnError: false`:
 
 ```typescript
-const model = withSupermemory(openai("gpt-5"), "user-123", {
-  skipMemoryOnError: false
+const model = withSupermemory(openai("gpt-5"), {
+  containerTag: "user-123",
+  customId: "conv-1",
+  skipMemoryOnError: false,
 })
 ```

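The `skipMemoryOnError` option documented in this file implies a try/catch around the memory-retrieval step: degrade gracefully by default, fail loudly when set to `false`. A rough sketch of that control flow; `withMemoryContext` and `fetchMemories` are hypothetical stand-ins, not the middleware's real internals:

```typescript
interface RetrievalConfig {
  skipMemoryOnError: boolean;
}

// Hypothetical wrapper around the retrieval step. Returns the memory
// context, or null when retrieval failed and graceful degradation is on.
async function withMemoryContext(
  fetchMemories: () => Promise<string>,
  config: RetrievalConfig
): Promise<string | null> {
  try {
    return await fetchMemories();
  } catch (err) {
    if (config.skipMemoryOnError) {
      // Documented default: proceed without memories rather than failing the call.
      return null;
    }
    // skipMemoryOnError: false - surface the retrieval failure to the caller.
    throw err;
  }
}
```

The same shape applies to the timeout case the docs mention: a timed-out retrieval rejects, and the flag decides whether the generation call continues without context.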