AI SDK 5 Migration "Failed to parse stream string. No separator found" #7918
Replies: 2 comments
Experiencing the same issue for all models (Azure OpenAI models and Bedrock Claude models).
Moved to #8145
AI SDK v5 Chat Streaming Issue Report
Setup
- Model: gemini-2.5-flash
- Provider: @ai-sdk/google (v2.0.0)
- Vercel AI SDK Version: 5.0.0
- Framework: Express.js with TypeScript
- Frontend: React with useChat hook (@ai-sdk/react v2.0.0)
Implementation
Backend (Express Route) - Current Implementation
```typescript
// chat.ts - Express route handler
import { streamText, convertToModelMessages } from 'ai';
import { google } from '@ai-sdk/google';

router.post('/chat', optionalAuthMiddleware, async (req: Request, res: Response) => {
  try {
    const { messages, model, pdfId } = req.body;
    // ... RAG context retrieval (elided in this report) ...
    const result = streamText({
      model: google('gemini-2.5-flash'), // provider from @ai-sdk/google
      messages: convertToModelMessages(messages), // v5: useChat sends UIMessages
    });
    result.pipeUIMessageStreamToResponse(res); // v5 rename of v4's pipeTextStreamToResponse
  } catch (error) {
    console.error('[API_CHAT_ERROR]', error);
    res.status(500).json({ error: 'An unexpected error occurred.' });
  }
});
```
Frontend (React Component)
```typescript
// ChatPanel.tsx
const { messages, input, isLoading, append, handleInputChange } = useChat({
  streamProtocol: 'data', // AI SDK v5 default SSE protocol
  experimental_throttle: 50,
  api: "/api/v1/chat",
  body: { pdfId, page, model, enableTools: false },
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${session?.access_token}`,
  },
  onError: (err) => {
    console.error("[ChatPanel] full error", err);
  },
  onData: (data: any) => {
    console.log('[STREAMING_DATA]', data);
  },
  onFinish: (message: any, { usage, finishReason }: any) => {
    console.log('[STREAM_FINISHED]', { usage, finishReason });
  },
});
```
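The `Authorization` entry above depends on a JavaScript template literal (backticks), which is easy to lose when pasting into a report. A minimal sketch of the same headers object, where the `Session` type is an assumption for illustration:

```typescript
// Builds the headers passed to useChat above.
// Session is a hypothetical shape; adapt it to your auth client.
type Session = { access_token?: string };

function buildChatHeaders(session?: Session): Record<string, string> {
  return {
    "Content-Type": "application/json",
    // Backticks make this a template literal; without them it is a syntax error
    Authorization: `Bearer ${session?.access_token ?? ""}`,
  };
}
```

For example, `buildChatHeaders({ access_token: "abc123" })` produces an `Authorization` value of `"Bearer abc123"`.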
Console Output
```
CONTEXT LENGTH: 8883 characters
CHUNKS INCLUDED: 10
[CHAT_ROUTE] All setup complete. About to call streamText...
```
The request hangs at this point: no further console output or network activity.
Expected Behavior
After calling streamText(), the response should:
- Stream back to the client using Server-Sent Events (SSE)
- Display incrementally in the chat UI
- Complete successfully with proper message handling
Actual Behavior
- The request reaches the streamText() call but appears to hang
- No streaming response is sent to the client
- The frontend remains in the loading state indefinitely
- No error messages in the console (backend or frontend)
Attempted Solutions (using the latest AI SDK v5 docs)
- Verified all dependencies are AI SDK v5 compatible
- Changed pipeTextStreamToResponse() (v4) to pipeUIMessageStreamToResponse() (v5)
- Ensured streamProtocol: 'data' is set in useChat (v5 default)
- Confirmed API keys and model access are working
- Verified RAG context retrieval completes successfully before streaming
According to the AI SDK v5 documentation for Express.js: "You can use the pipeDataStreamToResponse method to pipe the stream data to the server response." The examples show Express using pipeDataStreamToResponse, not pipeUIMessageStreamToResponse:
```typescript
// Documented Express.js approach
result.pipeDataStreamToResponse(res);
```
I can't find a way to get chat streaming, including extra data like reasoning, to work properly with SDK v5. Chat was working prior to the migration. Is there something key I'm missing, or is there a bug with Express and SDK v5?
The only time I get a response back from the model is when using the simple raw text stream, where the chat responds like this ("hello" sent to Gemini 2.5 Flash):
```
data: {"type":"start"}
data: {"type":"start-step"}
data: {"type":"text-start","id":"0"}
data: {"type":"text-delta","id":"0","delta":"Hello! How can"}
data: {"type":"text-delta","id":"0","delta":" I help you today?"}
data: {"type":"text-end","id":"0"}
data: {"type":"finish-step"}
data: {"type":"finish"}
data: [DONE]
```
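The lines above are v5 UI message stream events delivered as SSE `data:` lines. As a rough illustration of how a client folds those events back into display text, here is a minimal sketch; the parsing logic is illustrative only, not the SDK's actual implementation:

```typescript
// Minimal sketch: fold the SSE lines shown above back into assistant text.
// Event names ("text-delta", "[DONE]", ...) come from the raw stream above.
type StreamEvent = { type: string; id?: string; delta?: string };

function assembleText(sseLines: string[]): string {
  let text = "";
  for (const line of sseLines) {
    if (!line.startsWith("data: ")) continue;       // SSE payload lines only
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") break;                // stream terminator
    const event = JSON.parse(payload) as StreamEvent;
    if (event.type === "text-delta" && typeof event.delta === "string") {
      text += event.delta;                          // concatenate text chunks
    }
  }
  return text;
}
```

Feeding the `data:` lines above into `assembleText` yields `"Hello! How can I help you today?"`.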