How to get the raw LLM request? #2737
-
Challenge/request

Is there a way of seeing exactly what is sent to the LLM? Or, failing that, a way to specify which fields of the `llm_request` are sent and how they are structured in the prompt? It might also be nice to have a field on `llm_request` that simply contains the full unstructured text of what is sent in the request.
Replies: 2 comments 3 replies
-
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TLDR: You can see the raw LLM request by using verbose logging, implementing a `before_model_callback`, or creating a custom logging plugin.

Hello! As an AI assistant for the ADK, I can provide some information on how to view the raw request sent to the LLM. Here are a few methods you can use:

Verbose Logging

The most straightforward way to see the full LLM prompt is by enabling verbose logging. When the log level is set to `DEBUG`, the ADK will automatically log the entire prompt, which includes system instructions, conversation history, and any tools provided to the LLM. [1]

You can enable this in your Python script:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
```

Using Callbacks

For more granular control, you can use the `before_model_callback`. The callback runs immediately before each model call and receives the outgoing `LlmRequest`, so you can inspect exactly what is about to be sent.
Here is an example of how you can implement a `before_model_callback`:

```python
from google.adk.agents.callback_context import CallbackContext
from google.adk.models import LlmRequest


def log_llm_request(callback_context: CallbackContext, llm_request: LlmRequest):
    """A callback to print the LLM request."""
    print("LLM Request:")
    print(llm_request)
```
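To make the interception pattern concrete without pulling in ADK, here is a self-contained sketch; `FakeModel` and the dict-shaped request are stand-ins for the real ADK types, and the hook plays the role of `before_model_callback`:

```python
def before_model(llm_request: dict) -> None:
    # Runs just before the model is invoked, with the exact outgoing payload.
    print("LLM Request:", llm_request)


class FakeModel:
    """Stand-in for a model client that honors a before-model hook."""

    def __init__(self, hook):
        self.hook = hook
        self.last_request = None

    def generate(self, llm_request: dict) -> str:
        self.hook(llm_request)           # interception point
        self.last_request = llm_request  # retained for later inspection
        return "ok"


model = FakeModel(before_model)
reply = model.generate({"system_instruction": "Be brief.", "contents": ["Hi"]})
```

Because the hook sees the request object before dispatch, this is also the place where you could serialize it to the "full unstructured text" the original question asks about.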
You can then add this callback to your agent or runner. You can find a more detailed example in the documentation. [4]

Custom Logging Plugins

If you want to apply this logging behavior across multiple agents or runs, you can create a custom plugin. Plugins allow you to package callbacks and apply them globally. [5, 6] You can create a new plugin by inheriting from the `BasePlugin` class and overriding its model-request hook. By creating a custom plugin, you can easily enable or disable this logging behavior as needed without modifying your agent code. [10]

I hope this information helps you experiment with and debug your prompts!

References:
[1] https://google.github.io/adk-docs/observability/logging/
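As a rough illustration of the plugin idea, here is a self-contained sketch; the `BasePlugin` and `Runner` classes below are simplified stand-ins, not the real ADK API:

```python
class BasePlugin:
    """Stand-in for a plugin base class with overridable lifecycle hooks."""

    def before_model(self, llm_request: dict) -> None:
        pass


class RequestLoggingPlugin(BasePlugin):
    """Packages the request-logging callback so any runner can reuse it."""

    def __init__(self):
        self.captured = []

    def before_model(self, llm_request: dict) -> None:
        self.captured.append(llm_request)
        print("LLM Request:", llm_request)


class Runner:
    """Stand-in runner that fans each lifecycle event out to its plugins."""

    def __init__(self, plugins):
        self.plugins = plugins

    def call_model(self, llm_request: dict) -> str:
        for p in self.plugins:
            p.before_model(llm_request)
        return "response"


plugin = RequestLoggingPlugin()
runner = Runner(plugins=[plugin])
runner.call_model({"contents": ["hello"]})
```

The point of the pattern is that logging lives in one reusable object registered with the runner, rather than being wired into each agent.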
-
The logging plugin is here: https://github.com/google/adk-python/blob/main/src/google/adk/plugins/logging_plugin.py