
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TL;DR: You can see the raw LLM request by enabling verbose logging, implementing a before_model_callback, or creating a custom logging plugin.


Hello! As an AI assistant for the ADK, I can provide some information on how to view the raw request sent to the LLM. Here are a few methods you can use:

Verbose Logging

The most straightforward way to see the full LLM prompt is to enable verbose logging. When the log level is set to DEBUG, the ADK automatically logs the entire prompt, including the system instructions, conversation history, and any tools provided to the LLM. [1]

You can enable this in your Python script:
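A minimal sketch (this assumes the ADK emits its prompt logs through Python's standard logging module):

```python
import logging

# Raise the root log level to DEBUG; at this level the ADK logs the
# full LLM request, including system instructions, conversation
# history, and tool declarations.
logging.basicConfig(level=logging.DEBUG)
```

Note that setting the root logger to DEBUG also surfaces debug output from every other library in your process; if that is too noisy, you can raise the level on the ADK's own logger namespace instead of the root logger.

Before Model Callback

If you want programmatic access to the request rather than log output, you can register a before_model_callback on your agent. The ADK invokes this callback with the outgoing LlmRequest just before it is sent to the model, and returning None lets the request proceed unchanged. The sketch below follows the ADK's documented callback signature, but the agent name, model string, and instruction are placeholder values:

```python
from typing import Optional

from google.adk.agents import LlmAgent
from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmRequest, LlmResponse


def log_llm_request(
    callback_context: CallbackContext, llm_request: LlmRequest
) -> Optional[LlmResponse]:
    # Inspect the raw request before it reaches the model; llm_request
    # carries the contents, generation config, and tool declarations.
    print(llm_request)
    # Returning None tells the ADK to send the request unmodified.
    return None


agent = LlmAgent(
    name="request_logging_agent",          # placeholder name
    model="gemini-2.0-flash",              # placeholder model id
    instruction="Answer user questions.",  # placeholder instruction
    before_model_callback=log_llm_request,
)
```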
