Replies: 2 comments
-
Please suggest an idiomatic way to do this.
-
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: You can effectively manage large tool outputs and stay within the context window by using an `after_tool_callback`.

You've identified a common challenge in multi-agent systems, and your approach is on the right track. Here is a more detailed, scalable, and modular way to manage large tool outputs in ADK, building on your idea of using artifacts. The key is to use an `after_tool_callback` to intercept each tool's result before it enters the conversation history: summarize it, persist the full payload as an artifact, and return only the summary plus a pointer to that artifact. Here's how you can implement this pattern:
Here is a conceptual example of what your `after_tool_callback` could look like:

```python
import json

from google.adk.agents import LlmAgent
from google.adk.tools import BaseTool, ToolContext
from google.genai import types


async def summarize_and_save_to_artifact(
    tool: BaseTool,
    args: dict,
    tool_context: ToolContext,
    tool_response: dict,
) -> dict | None:
    # Only intercept tools known to produce large outputs.
    if tool.name != "my_large_output_tool":
        # Returning None keeps the original tool response unchanged.
        return None

    # 1. Produce a short summary of the large result
    #    (here taken from the response itself; you could also generate one).
    summary = f"Summary of the tool output: {tool_response.get('summary', '')}"

    # 2. Save the full result to an artifact.
    artifact_name = f"tool_output_{tool_context.function_call_id}.json"
    await tool_context.save_artifact(
        filename=artifact_name,
        artifact=types.Part.from_bytes(
            data=json.dumps(tool_response).encode("utf-8"),
            mime_type="application/json",
        ),
    )

    # 3. Tell the framework not to re-summarize this already-compact result.
    tool_context.actions.skip_summarization = True

    # 4. Replace the result with the summary and a pointer to the artifact.
    return {"summary": summary, "artifact_name": artifact_name}


# Register the callback on your agent definition:
my_agent = LlmAgent(
    # ... other agent parameters ...
    after_tool_callback=summarize_and_save_to_artifact,
)
```

By using this approach, only the small summary and the artifact name are added to the conversation history, keeping your context window lean. If a subsequent agent or tool needs the full details, it can use the `load_artifacts` tool (or `tool_context.load_artifact`) to retrieve them.

This method is scalable and modular because it separates the concern of context management from the tool's primary function. You can apply this logic selectively to the tools that need it without altering the tools themselves. For more information, refer to the ADK documentation on Callbacks, ToolContext, and Artifacts.
-
I have a multi-agent system set up, but the tool outputs are quite large. They are being sent with every subsequent request, and I want to change this. Ideally, I want to send the tool output immediately to the first LLM and then store it in an artifact. My agent should be able to refer to that artifact on demand when needed.
I have already set up the artifact and can store and load it, but the tool output is still being sent every time. I considered returning a summary of the tool output along with the artifact URL. However, the agent calling the tool should summarize the output first, ensuring that the summary stays in the session context for the next call. If the detailed output is truly needed, the agent can then use the load_artifacts tool.
I want to explore ways to manage the context window effectively since the agent needs to call multiple tools, with each tool output consuming at least 10-12k tokens. Previous conversations and outputs are quickly pushing the limits of the context window. Is there a better, more scalable, and modular approach to ensure that I always stay within the context window?
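To make the pressure concrete, here is a rough back-of-envelope calculation (the 128k window, overhead, and summary size below are illustrative assumptions, not measurements):

```python
# Illustrative context-budget arithmetic; all numbers are assumptions.
context_window = 128_000          # hypothetical model context window (tokens)
tokens_per_tool_output = 12_000   # upper end of the observed 10-12k range
system_and_history = 8_000        # assumed prompt + prior-turn overhead
summary_tokens = 200              # assumed size of a stored-artifact summary

available = context_window - system_and_history
calls_before_overflow = available // tokens_per_tool_output   # -> 10
calls_with_summaries = available // summary_tokens            # -> 600
```

So keeping full outputs in history caps the conversation at roughly ten tool calls, while summaries plus artifact pointers push that limit out by orders of magnitude.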