Commit e5dbf09

Docs: Make error handling guide conversation-first; keep llm- prefix
- Lead with Agent/Conversation examples and bubbling behavior
- Move LLM examples (completion/responses) into secondary section

Refs OpenHands/software-agent-sdk#980

Co-authored-by: openhands <[email protected]>
1 parent 420032d commit e5dbf09

File tree

1 file changed: +69 −44 lines

sdk/guides/llm-error-handling.mdx

Lines changed: 69 additions & 44 deletions
````diff
@@ -15,12 +15,13 @@ LLM providers format errors differently (status codes, messages, exception class
 - Clear behavior when conversation history exceeds the context window
 - Backward compatibility when you switch providers or SDK versions
 
-## Quick start: handle errors around LLM calls
+## Quick start: Using agents and conversations
+
+Agent-driven conversations are the common entry point. Exceptions from the underlying LLM calls bubble up from `conversation.run()` and `conversation.send_message(...)` when a condenser is not configured.
 
 ```python icon="python"
 from pydantic import SecretStr
-from openhands.sdk import LLM
-from openhands.sdk.llm import Message, TextContent
+from openhands.sdk import Agent, Conversation, LLM
 from openhands.sdk.llm.exceptions import (
     LLMError,
     LLMAuthenticationError,
@@ -32,19 +33,19 @@ from openhands.sdk.llm.exceptions import (
 )
 
 llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
+agent = Agent(llm=llm, tools=[])
+conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
 
 try:
-    response = llm.completion([
-        Message.user([TextContent(text="Summarize our design doc")])
-    ])
-    print(response.message)
+    conversation.send_message("Continue the long analysis we started earlier…")
+    conversation.run()
 
 except LLMContextWindowExceedError:
     # Conversation is longer than the model’s context window
     # Options:
     # 1) Enable a condenser (recommended for long sessions)
     # 2) Shorten inputs or reset conversation
-    print("Context window exceeded. Consider enabling a condenser.")
+    print("Hit the context limit. Consider enabling a condenser.")
 
 except LLMAuthenticationError:
     print("Invalid or missing API credentials. Check your API key or auth setup.")
@@ -66,68 +67,92 @@ except LLMError as e:
     print(f"Unhandled LLM error: {e}")
 ```
 
-The same exceptions are raised from both `LLM.completion()` and `LLM.responses()` paths.
 
-### Example: Using the Responses API
+
+### Avoiding context-window errors with a condenser
+
+If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.
+
+```python icon="python" highlight={5-10}
+from openhands.sdk.context.condenser import LLMSummarizingCondenser
+
+condenser = LLMSummarizingCondenser(
+    llm=llm.model_copy(update={"usage_id": "condenser"}),
+    max_size=10,
+    keep_first=2,
+)
+
+agent = Agent(llm=llm, tools=[], condenser=condenser)
+conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
+```
+
+See the dedicated guide: [Context Condenser](/sdk/guides/context-condenser).
+
+## Handling errors with direct LLM calls
+
+The same exceptions are raised from both `LLM.completion()` and `LLM.responses()` paths, so you can share handlers.
+
+### Example: Using completion()
 
 ```python icon="python"
 from pydantic import SecretStr
 from openhands.sdk import LLM
 from openhands.sdk.llm import Message, TextContent
-from openhands.sdk.llm.exceptions import LLMError, LLMContextWindowExceedError
+from openhands.sdk.llm.exceptions import (
+    LLMError,
+    LLMAuthenticationError,
+    LLMRateLimitError,
+    LLMTimeoutError,
+    LLMServiceUnavailableError,
+    LLMBadRequestError,
+    LLMContextWindowExceedError,
+)
 
 llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
 
 try:
-    resp = llm.responses([
-        Message.user([TextContent(text="Write a one-line haiku about code.")])
+    response = llm.completion([
+        Message.user([TextContent(text="Summarize our design doc")])
     ])
-    print(resp.message)
+    print(response.message)
+
 except LLMContextWindowExceedError:
     print("Context window exceeded. Consider enabling a condenser.")
+except LLMAuthenticationError:
+    print("Invalid or missing API credentials.")
+except LLMRateLimitError:
+    print("Rate limit exceeded. Back off and retry later.")
+except LLMTimeoutError:
+    print("Request timed out. Consider increasing timeout or retrying.")
+except LLMServiceUnavailableError:
+    print("Service unavailable or connectivity issue. Retry with backoff.")
+except LLMBadRequestError:
+    print("Bad request to provider. Validate inputs and arguments.")
 except LLMError as e:
-    print(f"LLM error: {e}")
+    print(f"Unhandled LLM error: {e}")
 ```
 
-## Using agents and conversations
-
-When you use `Agent` and `Conversation`, LLM exceptions propagate out of `conversation.run()` and `conversation.send_message(...)` if a condenser is not present.
+### Example: Using responses()
 
 ```python icon="python"
 from pydantic import SecretStr
-from openhands.sdk import Agent, Conversation, LLM
-from openhands.sdk.llm.exceptions import LLMContextWindowExceedError
+from openhands.sdk import LLM
+from openhands.sdk.llm import Message, TextContent
+from openhands.sdk.llm.exceptions import LLMError, LLMContextWindowExceedError
 
 llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
-agent = Agent(llm=llm, tools=[])
-conv = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
 
 try:
-    conv.send_message("Continue the long analysis we started earlier…")
-    conv.run()
+    resp = llm.responses([
+        Message.user([TextContent(text="Write a one-line haiku about code.")])
+    ])
+    print(resp.message)
 except LLMContextWindowExceedError:
-    print("Hit the context limit. Add a condenser to avoid this in long sessions.")
-```
-
-### Avoiding context-window errors with a condenser
-
-If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.
-
-```python icon="python" highlight={5-10}
-from openhands.sdk.context.condenser import LLMSummarizingCondenser
-
-condenser = LLMSummarizingCondenser(
-    llm=llm.model_copy(update={"usage_id": "condenser"}),
-    max_size=10,
-    keep_first=2,
-)
-
-agent = Agent(llm=llm, tools=[], condenser=condenser)
-conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
+    print("Context window exceeded. Consider enabling a condenser.")
+except LLMError as e:
+    print(f"LLM error: {e}")
 ```
 
-See the dedicated guide: [Context Condenser](/sdk/guides/context-condenser).
-
 ## Exception reference
 
 All exceptions live under `openhands.sdk.llm.exceptions` unless noted.
````
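The guide's handlers tell readers to "back off and retry" on rate-limit and availability errors but don't show the retry loop itself. Here is a minimal, self-contained sketch of that pattern. The exception classes below are local stand-ins so the snippet runs anywhere; in real code you would instead import `LLMRateLimitError` and `LLMServiceUnavailableError` from `openhands.sdk.llm.exceptions` and pass a closure over `llm.completion(...)` or `conversation.run`.

```python
import time

# Stand-ins for the SDK's transient-error types (assumption: in real code,
# import these from openhands.sdk.llm.exceptions instead).
class LLMRateLimitError(Exception): ...
class LLMServiceUnavailableError(Exception): ...

def run_with_backoff(call, max_attempts=4, base_delay=1.0):
    """Retry transient LLM failures with exponential backoff.

    Non-transient errors (auth, bad request, context window) propagate
    immediately; only rate-limit and availability errors are retried.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except (LLMRateLimitError, LLMServiceUnavailableError):
            if attempt == max_attempts - 1:
                raise  # out of retries: let the caller's handlers see it
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

# Demo: a flaky call that fails twice with a 429, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise LLMRateLimitError("429 Too Many Requests")
    return "ok"

print(run_with_backoff(flaky, base_delay=0.01))  # → ok
```

In a conversation-first setup the same wrapper applies unchanged: `run_with_backoff(conversation.run)` retries the whole step, which is safe because the conversation state persists between attempts.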
