
Conversation


murugnn commented Oct 3, 2025

This enhancement focuses on improving the readability of LLM conversation history within trace and span inputs/outputs.

Previously, the system extracted and displayed only the last message from conversation arrays (such as those used by OpenAI and LangChain), which made it difficult to review the full context of multi-turn interactions.

The update to the prettifyMessage utility now extracts all individual messages from recognized array formats and structures them into a single string (sketched after this list) using:

A Message Header (---[ ROLE MESSAGE ]---) to identify the message type (e.g., HUMAN, AI, tool_call).

A Message Divider (\n\n ----------------- \n\n) to clearly separate each message.
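For illustration, a minimal TypeScript sketch of a formatter along these lines, assuming OpenAI-style objects with role and content fields; this is not the actual Opik implementation, and the ChatMessage shape is an assumption:

```typescript
// Minimal sketch (not the actual prettifyMessage code): flatten a recognized
// message array into one structured string using the header/divider format
// described above. The ChatMessage shape assumes OpenAI-style role/content.
interface ChatMessage {
  role: string;    // e.g. "human", "ai", "tool_call"
  content: string;
}

const MESSAGE_DIVIDER = "\n\n ----------------- \n\n";

function prettifyMessages(messages: ChatMessage[]): string {
  return messages
    .map((m) => `---[ ${m.role} MESSAGE ]---\n${m.content}`)
    .join(MESSAGE_DIVIDER);
}
```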

This structured string is consumed by the front-end rendering components, which interpret the delimiters to render each message as a separate, collapsible section. This allows users to easily navigate and review the complete conversational flow within a single trace or span.
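A rough sketch of the consuming side, assuming the delimiter format above; splitPrettifiedMessages is a hypothetical helper name, not an existing Opik API:

```typescript
// Sketch of the consuming side: split the structured string back into
// per-message sections so each one can be rendered as a collapsible block.
interface MessageSection {
  role: string;
  body: string;
}

const HEADER_PATTERN = /^---\[ (.+?) MESSAGE \]---\n?([\s\S]*)$/;

function splitPrettifiedMessages(prettified: string): MessageSection[] {
  return prettified.split("\n\n ----------------- \n\n").map((section) => {
    const match = section.match(HEADER_PATTERN);
    return match
      ? { role: match[1], body: match[2].trim() }
      : { role: "UNKNOWN", body: section.trim() };
  });
}
```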

… as structured strings for collapsible rendering
murugnn requested a review from a team as a code owner on October 3, 2025 08:07

dsblank (Contributor) commented Oct 21, 2025

This has been fixed with recent updates to the Pretty mode. Please reopen if you find any issues. Thanks for reporting!

dsblank closed this Oct 21, 2025
