Hello! I really love this package for the back end, but on the front end I would LOVE to use the Vercel AI SDK package, which comes with a set of already-tested, pre-built hooks and components. I've run into an issue, though. With the Vercel AI SDK, the `useChat` hook sends a JSON payload that looks like:

```json
{
  "id": "CuwuIFH0vQiuEepW",
  "messages": [
    {
      "role": "user",
      "parts": [
        {
          "type": "text",
          "text": "Hello!"
        }
      ],
      "id": "wxTWpMq5yiBXEhPO"
    }
  ],
  "trigger": "submit-message"
}
```

And `messages` keeps expanding with new items as the conversation goes on. If I were to build a simple `/chat` endpoint, what's the best way to integrate the best of both worlds?
Have you reviewed this section of the docs? https://prismphp.com/core-concepts/streaming-output.html#vercel-ai-sdk-integration
Hey @JorgeRui - I'm running this exact stack in production (Prism + Vercel AI SDK v6 + Laravel + Inertia/React), so hopefully this helps clear things up.
The Key Insight

Prism's `asDataStreamResponse()` speaks the exact same data stream protocol that the Vercel AI SDK expects, so the wiring is actually straightforward once you know the pieces.

Backend: The Controller
The AI SDK sends a `messages` array where each message has a `role` and a `parts` array. You don't need to process the full message history on every request; just extract what you need from the latest message:
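To make the extraction step concrete, here is a minimal sketch in TypeScript of pulling the latest message's text out of the payload shape shown in the question. The interfaces and the `latestUserText` helper are illustrative names of my own, not part of either library; on the Laravel side the equivalent is a few lines of array access on the decoded request body.

```typescript
// Shape of the payload the useChat hook POSTs (as shown in the question).
interface MessagePart {
  type: string;
  text?: string;
}

interface UIMessage {
  id: string;
  role: string;
  parts: MessagePart[];
}

interface ChatRequest {
  id: string;
  messages: UIMessage[];
  trigger: string;
}

// Hypothetical helper: take the most recent message and join its text parts.
function latestUserText(req: ChatRequest): string {
  const last = req.messages[req.messages.length - 1];
  return last.parts
    .filter((p) => p.type === "text" && typeof p.text === "string")
    .map((p) => p.text)
    .join("");
}
```

The same idea applies regardless of how long `messages` grows: only the last entry is the new user input, and earlier entries are just the history you may already have server-side.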