How to forward reasoning in multi-turn (LLM agents) #7859
-
I was wondering if there are any docs on how to forward reasoning to LLMs across turns, as you would in agents. I tried recreating the reasoning part I get back from the model. I'm using ai sdk v5.0.7 and @ai-sdk/openai 2.0.5 as released earlier today. This wouldn't just be for OpenAI though, but for the Anthropic and Google providers too. I don't get much feedback from the providers to go on, so I need to make sure I construct my messages correctly to feed forward thinking/reasoning.
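Roughly what I'm attempting, assuming the assistant messages from `result.response.messages` can simply be pushed back into the history for the next turn (the model id is a placeholder, and I'm not sure this is the intended way to carry reasoning forward):

```ts
import { generateText, type ModelMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

// Conversation history kept between turns.
const history: ModelMessage[] = [
  { role: 'user', content: 'Plan the refactor, then give me step one.' },
];

// First turn.
const result = await generateText({
  model: openai('o4-mini'), // placeholder, any reasoning-capable model
  messages: history,
});

// response.messages holds the assistant output as ModelMessages,
// including any reasoning parts; push them back verbatim.
history.push(...result.response.messages);

// Second turn: the provider should see its own reasoning again.
history.push({ role: 'user', content: 'Now do step one.' });
const next = await generateText({
  model: openai('o4-mini'),
  messages: history,
});
console.log(next.text);
```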
Replies: 2 comments
-
Providers typically sign their reasoning with a cryptographic signature and do not accept other reasoning (at least not Google, OpenAI, or Anthropic).
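For reference, the signature typically travels as provider-specific metadata on the reasoning part of the assistant ModelMessage. A rough illustration of the shape (the exact `providerOptions` keys vary per provider and SDK version, so treat the field names as assumptions):

```ts
// Illustrative only: an assistant ModelMessage containing a signed reasoning part.
const assistantMessage = {
  role: 'assistant',
  content: [
    {
      type: 'reasoning',
      text: 'First I compared the two approaches...',
      providerOptions: {
        // Signature produced by the provider; it must be sent back unchanged.
        anthropic: { signature: 'EqQBCkgIBRABGAIiQ...' },
      },
    },
    { type: 'text', text: 'Here is the first step of the refactor.' },
  ],
};
```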
-
So for anyone in the future: we had stored messages from the provider in our DB and assumed a similar schema was needed to send back to the provider. We stored content, role, and then the reasoning signature and reasoning text as singular first-class fields. That wasn't sufficient: OpenAI, for example, has an itemId for the message as well as for all reasoning parts, and there can be multiple reasoning parts per message, meaning multiple signatures. So our DB design was flawed.

To solve it, we simply store the raw ModelMessages we get back from the provider on all our assistant messages, so that we can forward them on the next turn exactly as we received them. That retains the data structure the provider expects without having to do a lot of advanced data modelling.

To handle the issue mentioned by lgrammel, we also store the modelId on each message, so that if the modelId of a stored message doesn't match the model we're producing outputs for, we don't include any reasoning at all; we just take the raw content and treat it like a regular message. This allows swapping providers mid-work without hiccups. The only caveat is that you obviously lose all the reasoning that happened before the swap.
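A minimal sketch of that approach; the `StoredAssistantTurn` type and `buildHistory` helper are our own naming, not part of the SDK, so treat them as illustrative:

```ts
import { type ModelMessage } from 'ai';

// What we persist per assistant turn: the raw ModelMessages exactly as
// returned by the provider, plus the modelId that produced them.
type StoredAssistantTurn = {
  modelId: string;
  rawMessages: ModelMessage[]; // result.response.messages, untouched
};

// Rebuild the history for the next call. If a stored turn came from a
// different model, strip reasoning parts and keep only the plain content.
function buildHistory(
  turns: StoredAssistantTurn[],
  currentModelId: string,
): ModelMessage[] {
  return turns.flatMap((turn) => {
    if (turn.modelId === currentModelId) {
      // Same model: forward everything verbatim, signatures included.
      return turn.rawMessages;
    }
    // Different model: drop reasoning parts, keep text/tool parts.
    return turn.rawMessages.map((msg) => {
      if (msg.role !== 'assistant' || typeof msg.content === 'string') {
        return msg;
      }
      return {
        ...msg,
        content: msg.content.filter((part) => part.type !== 'reasoning'),
      } as ModelMessage;
    });
  });
}
```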