
Commit d490eb5

SDK: Document error handling & typed exceptions

- New guide: sdk/guides/error-handling.mdx
- Navigation: add under SDK > Guides > LLM Features

Refs OpenHands/software-agent-sdk#980
Co-authored-by: openhands <[email protected]>

1 parent dce8b12 commit d490eb5

2 files changed (+142, -1 lines)

docs.json

Lines changed: 2 additions & 1 deletion

```diff
@@ -194,7 +194,8 @@
       "sdk/guides/llm-registry",
       "sdk/guides/llm-routing",
       "sdk/guides/llm-reasoning",
-      "sdk/guides/llm-image-input"
+      "sdk/guides/llm-image-input",
+      "sdk/guides/error-handling"
     ]
   },
   {
```

sdk/guides/error-handling.mdx

Lines changed: 140 additions & 0 deletions (new file)

---
title: Error Handling & SDK Exceptions
description: Provider‑agnostic exceptions raised by the SDK and recommended patterns for handling them.
---

The SDK normalizes common provider errors into typed, provider‑agnostic exceptions so your application can handle them consistently across OpenAI, Anthropic, Groq, Google, and others.

This guide explains when these errors occur and shows recommended handling patterns for both direct LLM usage and higher‑level agent/conversation flows.

## Why typed exceptions?

LLM providers format errors differently (status codes, messages, exception classes). The SDK maps those into stable types so client apps don't depend on provider‑specific details. Typical benefits:

- One code path to handle auth, rate limits, timeouts, service issues, and bad requests
- Clear behavior when conversation history exceeds the context window
- Backward compatibility when you switch providers or SDK versions

## Quick start: handle errors around LLM calls

```python icon="python"
from pydantic import SecretStr

from openhands.sdk import LLM
from openhands.sdk.llm import Message, TextContent
from openhands.sdk.llm.exceptions import (
    LLMError,
    LLMAuthenticationError,
    LLMRateLimitError,
    LLMTimeoutError,
    LLMServiceUnavailableError,
    LLMBadRequestError,
    LLMContextWindowExceedError,
)

llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))

try:
    response = llm.completion([
        Message.user([TextContent(text="Summarize our design doc")])
    ])
    print(response.message)

except LLMContextWindowExceedError:
    # Conversation is longer than the model's context window.
    # Options:
    #   1) Enable a condenser (recommended for long sessions)
    #   2) Shorten inputs or reset the conversation
    print("Context window exceeded. Consider enabling a condenser.")

except LLMAuthenticationError:
    print("Invalid or missing API credentials. Check your API key or auth setup.")

except LLMRateLimitError:
    print("Rate limit exceeded. Back off and retry later.")

except LLMTimeoutError:
    print("Request timed out. Consider increasing the timeout or retrying.")

except LLMServiceUnavailableError:
    print("Service unavailable or connectivity issue. Retry with backoff.")

except LLMBadRequestError:
    print("Bad request to the provider. Validate inputs and arguments.")

except LLMError as e:
    # Fallback for other SDK LLM errors (parsing/validation, etc.)
    print(f"Unhandled LLM error: {e}")
```
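
For the transient failures above, a simple retry with exponential backoff is usually enough. Here is a minimal sketch; the helper name and retry parameters are illustrative, not part of the SDK:

```python icon="python"
import time

from openhands.sdk.llm.exceptions import (
    LLMRateLimitError,
    LLMServiceUnavailableError,
)

def complete_with_backoff(llm, messages, max_attempts=4, base_delay=1.0):
    """Retry transient provider failures with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return llm.completion(messages)
        except (LLMRateLimitError, LLMServiceUnavailableError):
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the error to the caller
            time.sleep(base_delay * 2**attempt)  # sleep 1s, 2s, 4s, ...
```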

The same exceptions are raised from both the `LLM.completion()` and `LLM.responses()` paths.

## Using agents and conversations

When you use `Agent` and `Conversation`, LLM exceptions propagate out of `conversation.run()` and `conversation.send_message(...)` when no condenser is configured.

```python icon="python"
from pydantic import SecretStr

from openhands.sdk import Agent, Conversation, LLM
from openhands.sdk.llm.exceptions import LLMContextWindowExceedError

llm = LLM(model="claude-sonnet-4-20250514", api_key=SecretStr("your-key"))
agent = Agent(llm=llm, tools=[])
conv = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")

try:
    conv.send_message("Continue the long analysis we started earlier…")
    conv.run()
except LLMContextWindowExceedError:
    print("Hit the context limit. Add a condenser to avoid this in long sessions.")
```

### Avoiding context‑window errors with a condenser

If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.

```python icon="python" highlight={5-10}
from openhands.sdk.context.condenser import LLMSummarizingCondenser

condenser = LLMSummarizingCondenser(
    llm=llm.model_copy(update={"usage_id": "condenser"}),
    max_size=10,
    keep_first=2,
)

agent = Agent(llm=llm, tools=[], condenser=condenser)
conversation = Conversation(agent=agent, persistence_dir="./.conversations", workspace=".")
```

See the dedicated guide: [Context Condenser](/sdk/guides/context-condenser).

## Exception reference

All exceptions live under `openhands.sdk.llm.exceptions` unless noted.

- Provider/transport mapping (provider‑agnostic):
  - `LLMContextWindowExceedError` — Conversation exceeds the model’s context window. Without a condenser, raised on both the Chat and Responses paths.
  - `LLMAuthenticationError` — Invalid or missing credentials (401/403 patterns).
  - `LLMRateLimitError` — Provider rate limit exceeded.
  - `LLMTimeoutError` — SDK/lower‑level timeout while waiting for the provider.
  - `LLMServiceUnavailableError` — Temporary connectivity/service outage (e.g., 5xx, connection issues).
  - `LLMBadRequestError` — Client‑side request issues (invalid params, malformed input).

- Response parsing/validation:
  - `LLMMalformedActionError` — Model returned a malformed action.
  - `LLMNoActionError` — Model did not return an action when one was expected.
  - `LLMResponseError` — Could not extract an action from the response.
  - `FunctionCallConversionError` — Failed converting tool/function call payloads.
  - `FunctionCallValidationError` — Tool/function call arguments failed validation.
  - `FunctionCallNotExistsError` — Model referenced an unknown tool/function.
  - `LLMNoResponseError` — Provider returned an empty/invalid response (seen rarely, e.g., with some Gemini models).

- Cancellation:
  - `UserCancelledError` — A user aborted the operation.
  - `OperationCancelled` — A running operation was cancelled programmatically.

All of the above (except the explicit cancellation types) inherit from `LLMError`, so you can implement a catch‑all for unexpected SDK LLM errors while still keeping fine‑grained handlers for the most common cases.
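
Because of this hierarchy, handler order matters: Python matches `except` clauses top to bottom, so place specific handlers before the `LLMError` catch‑all. A minimal sketch, reusing `llm` and a `messages` list as defined in the quick start:

```python icon="python"
from openhands.sdk.llm.exceptions import LLMError, LLMRateLimitError

try:
    response = llm.completion(messages)
except LLMRateLimitError:
    print("Rate limited; the specific, retryable case is handled first.")
except LLMError as e:
    # Catches any other SDK LLM error. The cancellation types
    # (UserCancelledError, OperationCancelled) do not inherit from
    # LLMError and are therefore not caught here.
    print(f"Unexpected LLM error: {e}")
```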

## Notes for advanced users

- The SDK performs centralized exception mapping, translating provider/LiteLLM exceptions into the types above (the sketch below illustrates the general pattern). This keeps your app free from provider‑specific exception imports.
- For long‑running sessions, we strongly recommend configuring a condenser to avoid context‑window interruptions. See the [Context Condenser](/sdk/guides/context-condenser) guide for details and examples.
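
As an illustration of that mapping pattern (not the SDK's actual internals; the provider exception classes here are stand‑ins):

```python icon="python"
from openhands.sdk.llm.exceptions import (
    LLMRateLimitError,
    LLMServiceUnavailableError,
)

# Stand-ins for provider-specific errors; real providers raise their own types.
class ProviderRateLimit(Exception): ...
class ProviderServerError(Exception): ...

def mapped_call(raw_call):
    """Translate provider-specific errors into the SDK's typed exceptions."""
    try:
        return raw_call()
    except ProviderRateLimit as e:
        raise LLMRateLimitError(str(e)) from e
    except ProviderServerError as e:
        raise LLMServiceUnavailableError(str(e)) from e
```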
