
Commit b7b84f7

SDK: Document error handling & typed exceptions (#91)
* SDK: Document error handling & typed exceptions
  - New guide: sdk/guides/error-handling.mdx
  - Navigation: add under SDK > Guides > LLM Features

Refs OpenHands/software-agent-sdk#980

Co-authored-by: openhands <[email protected]>
1 parent dce8b12 commit b7b84f7

File tree

2 files changed: +184 −1 lines changed


docs.json

Lines changed: 2 additions & 1 deletion
@@ -194,7 +194,8 @@
        "sdk/guides/llm-registry",
        "sdk/guides/llm-routing",
        "sdk/guides/llm-reasoning",
-       "sdk/guides/llm-image-input"
+       "sdk/guides/llm-image-input",
+       "sdk/guides/llm-error-handling"
      ]
    },
    {

sdk/guides/llm-error-handling.mdx

Lines changed: 182 additions & 0 deletions
@@ -0,0 +1,182 @@
---
title: Exception Handling
description: Provider‑agnostic exceptions raised by the SDK and recommended patterns for handling them.
---

The SDK normalizes common provider errors into typed, provider‑agnostic exceptions so your application can handle them consistently across OpenAI, Anthropic, Groq, Google, and others.

This guide explains when these errors occur and shows recommended handling patterns for both direct LLM usage and higher‑level agent/conversation flows.

## Why typed exceptions?

LLM providers format errors differently (status codes, messages, exception classes). The SDK maps those into stable types so client apps don’t depend on provider‑specific details. Typical benefits:

- One code path to handle auth, rate limits, timeouts, service issues, and bad requests
- Clear behavior when conversation history exceeds the context window
- Backward compatibility when you switch providers or SDK versions

## Quick start: Using agents and conversations

Agent‑driven conversations are the common entry point. When no condenser is configured, exceptions from the underlying LLM calls bubble up from `conversation.run()` and `conversation.send_message(...)`.
```python icon="python"
from pydantic import SecretStr
from openhands.sdk import Agent, Conversation, LLM
from openhands.sdk.llm.exceptions import (
    LLMError,
    LLMAuthenticationError,
    LLMRateLimitError,
    LLMTimeoutError,
    LLMServiceUnavailableError,
    LLMBadRequestError,
    LLMContextWindowExceedError,
)

llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
agent = Agent(llm=llm, tools=[])
conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")

try:
    conversation.send_message("Continue the long analysis we started earlier…")
    conversation.run()

except LLMContextWindowExceedError:
    # Conversation is longer than the model’s context window
    # Options:
    #   1) Enable a condenser (recommended for long sessions)
    #   2) Shorten inputs or reset the conversation
    print("Hit the context limit. Consider enabling a condenser.")

except LLMAuthenticationError:
    print("Invalid or missing API credentials. Check your API key or auth setup.")

except LLMRateLimitError:
    print("Rate limit exceeded. Back off and retry later.")

except LLMTimeoutError:
    print("Request timed out. Consider increasing the timeout or retrying.")

except LLMServiceUnavailableError:
    print("Service unavailable or connectivity issue. Retry with backoff.")

except LLMBadRequestError:
    print("Bad request to provider. Validate inputs and arguments.")

except LLMError as e:
    # Fallback for other SDK LLM errors (parsing/validation, etc.)
    print(f"Unhandled LLM error: {e}")
```
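Rate limits, timeouts, and temporary outages are usually transient, so a retry loop with backoff pairs naturally with these handlers. A minimal sketch, assuming simple exponential backoff (the attempt count and delays are illustrative, not SDK defaults):

```python icon="python"
import time

from openhands.sdk.llm.exceptions import (
    LLMRateLimitError,
    LLMServiceUnavailableError,
    LLMTimeoutError,
)

# Exception types worth retrying; permanent errors (auth, bad request) are not.
TRANSIENT_ERRORS = (LLMRateLimitError, LLMServiceUnavailableError, LLMTimeoutError)


def run_with_retries(conversation, max_attempts: int = 3) -> None:
    """Run a conversation, retrying transient LLM failures with backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            conversation.run()
            return
        except TRANSIENT_ERRORS:
            if attempt == max_attempts:
                raise  # out of attempts; let the caller decide
            delay = 2**attempt  # 2s, 4s, 8s, ...
            print(f"Transient LLM error; retrying in {delay}s (attempt {attempt})")
            time.sleep(delay)
```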
### Avoiding context‑window errors with a condenser
If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.
```python icon="python" highlight={5-10}
from openhands.sdk.context.condenser import LLMSummarizingCondenser

condenser = LLMSummarizingCondenser(
    llm=llm.model_copy(update={"usage_id": "condenser"}),
    max_size=10,
    keep_first=2,
)

agent = Agent(llm=llm, tools=[], condenser=condenser)
conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
```
See the dedicated guide: [Context Condenser](/sdk/guides/context-condenser).
## Handling errors with direct LLM calls
The same exceptions are raised from both the `LLM.completion()` and `LLM.responses()` paths, so you can share handlers between them; a shared‑handler sketch follows the two examples below.
### Example: Using completion()
```python icon="python"
from pydantic import SecretStr
from openhands.sdk import LLM
from openhands.sdk.llm import Message, TextContent
from openhands.sdk.llm.exceptions import (
    LLMError,
    LLMAuthenticationError,
    LLMRateLimitError,
    LLMTimeoutError,
    LLMServiceUnavailableError,
    LLMBadRequestError,
    LLMContextWindowExceedError,
)

llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))

try:
    response = llm.completion([
        Message.user([TextContent(text="Summarize our design doc")])
    ])
    print(response.message)

except LLMContextWindowExceedError:
    print("Context window exceeded. Consider enabling a condenser.")
except LLMAuthenticationError:
    print("Invalid or missing API credentials.")
except LLMRateLimitError:
    print("Rate limit exceeded. Back off and retry later.")
except LLMTimeoutError:
    print("Request timed out. Consider increasing the timeout or retrying.")
except LLMServiceUnavailableError:
    print("Service unavailable or connectivity issue. Retry with backoff.")
except LLMBadRequestError:
    print("Bad request to provider. Validate inputs and arguments.")
except LLMError as e:
    print(f"Unhandled LLM error: {e}")
```
### Example: Using responses()
```python icon="python"
from pydantic import SecretStr
from openhands.sdk import LLM
from openhands.sdk.llm import Message, TextContent
from openhands.sdk.llm.exceptions import LLMError, LLMContextWindowExceedError

llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))

try:
    resp = llm.responses([
        Message.user([TextContent(text="Write a one-line haiku about code.")])
    ])
    print(resp.message)
except LLMContextWindowExceedError:
    print("Context window exceeded. Consider enabling a condenser.")
except LLMError as e:
    print(f"LLM error: {e}")
```
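Because both paths raise the same exception types, the handlers can live in one helper instead of being repeated inline. A minimal sketch (the `call_llm` wrapper below is illustrative, not part of the SDK):

```python icon="python"
from openhands.sdk.llm.exceptions import (
    LLMError,
    LLMAuthenticationError,
    LLMContextWindowExceedError,
    LLMRateLimitError,
)


def call_llm(llm_call, messages):
    """Invoke llm.completion or llm.responses with one shared handler."""
    try:
        return llm_call(messages)
    except LLMContextWindowExceedError:
        print("Context window exceeded. Consider enabling a condenser.")
    except LLMAuthenticationError:
        print("Invalid or missing API credentials.")
    except LLMRateLimitError:
        print("Rate limit exceeded. Back off and retry later.")
    except LLMError as e:
        print(f"LLM error: {e}")
    return None


# The same wrapper serves both paths:
# response = call_llm(llm.completion, [Message.user([TextContent(text="Hi")])])
# resp = call_llm(llm.responses, [Message.user([TextContent(text="Hi")])])
```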
## Exception reference
All exceptions live under `openhands.sdk.llm.exceptions` unless noted.
- Provider/transport mapping (provider‑agnostic):
  - `LLMContextWindowExceedError` — Conversation exceeds the model’s context window. Without a condenser, raised on both the Chat and Responses paths.
  - `LLMAuthenticationError` — Invalid or missing credentials (401/403 patterns).
  - `LLMRateLimitError` — Provider rate limit exceeded.
  - `LLMTimeoutError` — SDK/lower‑level timeout while waiting for the provider.
  - `LLMServiceUnavailableError` — Temporary connectivity/service outage (e.g., 5xx, connection issues).
  - `LLMBadRequestError` — Client‑side request issues (invalid params, malformed input).

- Response parsing/validation:
  - `LLMMalformedActionError` — Model returned a malformed action.
  - `LLMNoActionError` — Model did not return an action when one was expected.
  - `LLMResponseError` — Could not extract an action from the response.
  - `FunctionCallConversionError` — Failed converting tool/function call payloads.
  - `FunctionCallValidationError` — Tool/function call arguments failed validation.
  - `FunctionCallNotExistsError` — Model referenced an unknown tool/function.
  - `LLMNoResponseError` — Provider returned an empty/invalid response (seen rarely, e.g., with some Gemini models).

- Cancellation:
  - `UserCancelledError` — A user aborted the operation.
  - `OperationCancelled` — A running operation was cancelled programmatically.
All of the above (except the explicit cancellation types) inherit from `LLMError`, so you can implement a catch‑all for unexpected SDK LLM errors while still keeping fine‑grained handlers for the most common cases.
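For example, a generic wrapper can pair a catch‑all `LLMError` clause with separate handling for the cancellation types, since they do not inherit from `LLMError`. A minimal sketch (`safe_run` is illustrative, not an SDK helper):

```python icon="python"
from openhands.sdk.llm.exceptions import (
    LLMError,
    OperationCancelled,
    UserCancelledError,
)


def safe_run(conversation) -> None:
    try:
        conversation.run()
    except (UserCancelledError, OperationCancelled):
        # Cancellation is not an LLMError, so it needs its own clause;
        # treat it as a normal stop rather than a failure.
        print("Run cancelled.")
    except LLMError as e:
        # Catch-all for any SDK LLM error without a dedicated handler.
        print(f"Unhandled LLM error: {e}")
```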
