3.0.20 creates a breaking change for `StreamingTextResponse`
#1316
Comments
This is how I use the `Response` of `StreamingTextResponse`:

```ts
/**
 * Fetch data using the stream method
 */
export const fetchSSE = async (fetchFn: () => Promise<Response>, options: FetchSSEOptions = {}) => {
  const response = await fetchFn();

  if (!response.ok) {
    const chatMessageError = await getMessageError(response);
    options.onErrorHandle?.(chatMessageError);
    return;
  }

  const returnRes = response.clone();
  const data = response.body;
  if (!data) return;

  let output = '';
  const reader = data.getReader();
  const decoder = new TextDecoder();

  let done = false;
  let finishedType: SSEFinishType = 'done';

  while (!done) {
    try {
      const { value, done: doneReading } = await reader.read();
      done = doneReading;
      const chunkValue = decoder.decode(value, { stream: true });
      output += chunkValue;
      options.onMessageHandle?.(chunkValue);
    } catch (error) {
      done = true;
      if ((error as TypeError).name === 'AbortError') {
        finishedType = 'abort';
        options?.onAbort?.(output);
      } else {
        finishedType = 'error';
        console.error(error);
      }
    }
  }

  // observationId / traceId are resolved from the response elsewhere in the caller
  await options?.onFinish?.(output, { observationId, traceId, type: finishedType });

  return returnRes;
};
```

I don't use `useChat`.

---
I see. We have switched to a new protocol that supports different types of messages (tool calls, etc.). This provides the foundation to robustly support richer LLM functionality such as tool calling, annotations, etc. If you use `useChat` or `useCompletion` with matching client and server versions, the change is handled for you.

In the stream data protocol, each message is a line with a type (indicated by a number) separated from the value, e.g. `0:"example text"`. With `0` marking text parts, other type codes carry tool calls, data, errors, and so on.

Example usage: https://github.com/vercel/ai/blob/main/packages/core/shared/parse-complex-response.ts#L52

For the basic use case that resembles the original text streaming, you just need to process the text parts.
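For illustration, a minimal client-side sketch of that basic case, assuming only the line-based framing described above (`readTextParts` is a hypothetical helper, not an SDK export):

```ts
// Read a stream-data-protocol response and surface only the text parts,
// which is enough to replicate the old plain-text streaming behavior.
async function readTextParts(res: Response, onText: (text: string) => void) {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    const lines = buffer.split('\n');
    buffer = lines.pop() ?? ''; // keep any partial trailing line for the next chunk

    for (const line of lines) {
      if (line.startsWith('0:')) {
        onText(JSON.parse(line.slice(2))); // text part values are JSON-encoded strings
      }
    }
  }
}
```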
---
I agree with the new protocol to support more types of messages, and LobeChat actually needs it too. But I don't think the change should ship in a patch release. Even with a major version bump, I still think there is demand for a pure text streaming response. Other libraries even document how to integrate with `StreamingTextResponse`. I'd really suggest adding a separate method for the new protocol, so that `StreamingTextResponse` keeps streaming plain text.

---
I agree. This was not intended to be a breaking change. `StreamingTextResponse` is meant to be used by `useChat`/`useCompletion`, and your use case unfortunately is outside the envisioned use. Because of some other changes that we are currently making (splitting out packages), it is sadly not possible to revert at the moment. I suggest pinning `3.0.19` for now. We'll include the simple text stream use case in the new APIs.

---
Ok, I will pin to `3.0.19`.

---
There is no documentation yet other than the code. It was intended to be internal, so pinning the version will be important if you use it. Given your use case, it makes sense to standardize and document it, so we'll look into it. PR for future raw text stream response: #1318

---
That's a nice direction for standardizing the data stream protocol. I had just used a regex to match the function call output, and that isn't extensible. I'm glad there is something more powerful than pure text. You can check our documentation if you are interested in the LobeChat plugin. The function calling I implemented is here: https://github.com/lobehub/lobe-chat/blob/main/src/store/chat/slices/plugin/action.ts#L181-L264

---
Interesting - love the store! Not sure how well this suits your use case, and it is not fully integrated with the stream data protocol yet, but in the future you can also define tools with the new `streamText` API: https://sdk.vercel.ai/docs/ai-core/stream-text#terminal-chatbot-with-tools

---
For what it's worth, I actually had the same issue here as well. We parse the output of the stream ourselves, and the update to 3.0.20 broke things for us too, which led me here 😄. If it's any help, our code does the same kind of manual stream parsing.
It sounds like the right move is to switch to the new stream data protocol.

---
I'm having the same problem.

Frontend:

```js
'use client';
// ...
```

Backend:

```js
// Create an OpenAI API client (that's edge friendly!)
// ...
// IMPORTANT! Set the runtime to edge
// ...
export async function POST(req) {
  // ...
}
```
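The comments in that backend fragment match the AI SDK's stock OpenAI chat route, so presumably the code was close to this sketch (model id and client setup are the template's defaults, not necessarily the commenter's exact code):

```ts
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Create an OpenAI API client (that's edge friendly!)
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// IMPORTANT! Set the runtime to edge
export const runtime = 'edge';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Ask for a streaming chat completion
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  return new StreamingTextResponse(OpenAIStream(response));
}
```

---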
Update on solutions:

- If you use `useChat` / `useCompletion`: upgrade client and server to the same (latest) version.
- If you need the old raw text streaming behavior: pin to `3.0.19` for now; a raw text stream response is coming in #1318.

---
Hi @lgrammel, it would be great to have some updated documentation on this. I struggle with `AIStream`, as it stopped working when moving to 3.0.21. I use `useChat`.

---
The documentation is up to date. Can you elaborate on what stopped working? Are you using the same version of the AI SDK for client and server, and have you rebuilt/refreshed?

---
For me the stream is no longer displayed in the UI. I use `useChat`.
I can elaborate: I have a custom parser that looks like:

```ts
function parseMessageStream(): AIStreamParser {
  return (data: string): string => {
    const json = JSON.parse(data);
    let delta: string;
    if (json.data && json.data.message && json.data.message.content) {
      delta = json.data.message.content;
    } else {
      delta = '';
    }
    return delta;
  };
}
```

When I look at tokens with the callbacks, it parses my stream. However, `useChat` is unable to display the tokens or completion client-side. I suspect there is something in the new data protocol I am missing. So instead of head-scratching for an extended period of time, I was hoping you had some magic for me 🪄

---
@martgra If I understand correctly, you are returning custom text chunks. In the new protocol, text chunks are lines that look like this: `0:"some text"\n` (the `0` type code, a colon, then a JSON-encoded string).
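A sketch of that change applied to the parser above, assuming the `0:<JSON string>\n` framing just described (this mirrors the user's snippet; it is not an official SDK helper):

```ts
import type { AIStreamParser } from 'ai';

function parseMessageStream(): AIStreamParser {
  return (data: string): string => {
    const json = JSON.parse(data);
    const delta: string = json.data?.message?.content ?? '';
    // Frame the delta as a stream-data-protocol text part:
    // type code 0, a colon, then the JSON-encoded string, newline-terminated.
    return `0:${JSON.stringify(delta)}\n`;
  };
}
```

---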
Suspected something like this :) Now it streams!

---
Matching the versions to `3.0.21` on both client and server SDK did fix the formatting issue. Thanks!

---
IIUC it's not possible to use the `parseComplexResponse` helper linked above, since it isn't exported from the package.

---
@lgrammel may I ask if `useChat` is going to be replaced by the new `ai/rsc` AIState/UIState framework at some point?

---
Thanks - I'll work on getting it exported. Update: #1334 (published in a subsequent release).

---
Possibly. For the time being we have no such plans though.

---
Alright, thanks for the answer. I actually like this `useChat` better than the GenAI aiState and UIState framework. It's so lightweight and easy to use.

---
Hi Lars, I came across this. It still works but doesn't stream: the response is not shown until it's done. Versions are matched on both client and server.

First of all, I am frustrated that a breaking change did not come with a major version bump. I really believe that this is not good practice. Also, I cannot seem to find any upgrade guide. This leaves me wondering whether it's my fault or a fault in the library. Nevertheless, thank you for working on this.

UPDATE: I ended up discovering that it streams once both sides are on the same latest version. I would still be interested in an upgrade guide. It seems to me that either I am confused (could be) or the official docs don't reflect the changes - I still see the old usage there.

---
@flexchar I understand that this is frustrating, and I agree it's unfortunate. It was not an intentional breakage, and unfortunately it happened during a split into multiple packages, which made an immediate revert impossible. By now, reverting would break the code of people who have already upgraded. Here is something close to an upgrade guide: #1316 (comment)

The docs should be updated, e.g. https://sdk.vercel.ai/docs/api-reference/stream-data - where did you still see the old usage?

---
Ahh, I may have mixed it up with another page. Thanks for the fast reply. Shit happens. Good thing we don't have to revert, since it works on the latest versions as long as both are the same. But in the future, better communication, perhaps a pinned issue or a README block, would be super nice. Ideally in the release notes on GitHub. :)

---
Update on solutions:

- `useChat` / `useCompletion`: keep client and server on the same latest version.
- Raw text streaming without the data protocol: use `streamText` with `toTextStreamResponse()` (#1318), or stay pinned to `3.0.19`.

---
Difficult to understand. Even if it should only be used with `useChat`, I also think this is a bug, because from the literal meaning we would think that `StreamingTextResponse` simply streams plain text.

---
@lgrammel, my API endpoint (with a plain text stream) works, but the response no longer streams in my client after upgrading.

---
Could you upgrade both parts to the latest version? @GorvGoyl

---
@GorvGoyl if your endpoint sends a text stream (not an AI stream), then you could use `streamText` with `toTextStreamResponse()`.
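A sketch of that plain-text path, assuming the `ai` 3.1-style core APIs (`streamText` from `ai` plus the `@ai-sdk/openai` provider; earlier 3.0.x versions exposed this as `experimental_streamText`, and the model id here is illustrative):

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await streamText({
    model: openai('gpt-3.5-turbo'), // illustrative model id
    prompt,
  });

  // Plain text/plain stream, no stream-data-protocol framing.
  return result.toTextStreamResponse();
}
```

---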
`toTextStreamResponse()` lacks callbacks. Is there any option to use it with a cache, like described in the docs (https://sdk.vercel.ai/docs/advanced/caching)?
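One possible workaround, as a sketch only (the `cacheSet` helper and the cache key are hypothetical, and this is not an official `toTextStreamResponse()` option): tap the text stream with a `TransformStream` and write the accumulated text to the cache when the stream finishes.

```ts
import { streamText, StreamingTextResponse } from 'ai';
import { openai } from '@ai-sdk/openai';

// Hypothetical cache writer: swap in your KV/Redis client of choice.
const cacheSet = async (key: string, value: string): Promise<void> => {
  /* ... */
};

export async function POST(req: Request) {
  const { prompt } = await req.json();
  const result = await streamText({ model: openai('gpt-3.5-turbo'), prompt });

  let full = '';
  const tapped = result.textStream.pipeThrough(
    new TransformStream<string, string>({
      transform(chunk, controller) {
        full += chunk; // accumulate the streamed text
        controller.enqueue(chunk); // and pass it through unchanged
      },
      async flush() {
        await cacheSet(prompt, full); // runs once the stream completes
      },
    }),
  );

  return new StreamingTextResponse(tapped.pipeThrough(new TextEncoderStream()));
}
```

---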
Closing as Lars provided some great solutions. We'll be more careful about breaking the internal APIs in minor and patch releases to keep custom server/client logic maintainable.

---
Can anyone help with this? I want to return proper streaming. Currently it works, but it doesn't stream. My route calls `groq.chat.completions.create({ ... })`.
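One common cause of "works but doesn't stream" is a missing `stream: true`. A sketch under that assumption (the Groq client setup and model id are illustrative, not the commenter's actual code):

```ts
import OpenAI from 'openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';

// Groq exposes an OpenAI-compatible API, so the OpenAI client works here.
const groq = new OpenAI({
  apiKey: process.env.GROQ_API_KEY,
  baseURL: 'https://api.groq.com/openai/v1',
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const response = await groq.chat.completions.create({
    model: 'llama3-8b-8192', // illustrative model id
    stream: true, // without this, the whole completion arrives in one chunk
    messages,
  });

  return new StreamingTextResponse(OpenAIStream(response));
}
```

---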
Description

The correct response with [email protected]:

The error response with [email protected]:

It causes an issue in lobe-chat: lobehub/lobe-chat#1945
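Illustratively (assuming the stream data protocol described in the comments above; the contents are invented for the example), the response body changes roughly like this:

```text
# [email protected] – raw text chunks
Hello
 world

# [email protected] – stream data protocol parts
0:"Hello"
0:" world"
```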
Code example

Additional context

I checked the updated code, and I think it's a breaking change for `StreamingTextResponse`: please consider reverting it. Or I have to stay with `3.0.19`.

a6b2500#diff-ee443c120877f90b068a98d03b338a35958a3953db7e0159035ae060b5b9052b