sdk/guides/llm-error-handling.mdx (69 additions, 44 deletions)
@@ -15,12 +15,13 @@ LLM providers format errors differently (status codes, messages, exception classes)
 - Clear behavior when conversation history exceeds the context window
 - Backward compatibility when you switch providers or SDK versions
 
-## Quick start: handle errors around LLM calls
+## Quick start: Using agents and conversations
+
+Agent-driven conversations are the common entry point. Exceptions from the underlying LLM calls bubble up from `conversation.run()` and `conversation.send_message(...)` when a condenser is not configured.
 
 ```python icon="python"
 from pydantic import SecretStr
-from openhands.sdk import LLM
-from openhands.sdk.llm import Message, TextContent
+from openhands.sdk import Agent, Conversation, LLM
 from openhands.sdk.llm.exceptions import (
     LLMError,
     LLMAuthenticationError,
@@ -32,19 +33,19 @@ from openhands.sdk.llm.exceptions import (
+    conversation.send_message("Continue the long analysis we started earlier…")
+    conversation.run()
 
 except LLMContextWindowExceedError:
     # Conversation is longer than the model’s context window
     # Options:
     # 1) Enable a condenser (recommended for long sessions)
     # 2) Shorten inputs or reset conversation
-    print("Context window exceeded. Consider enabling a condenser.")
+    print("Hit the context limit. Consider enabling a condenser.")
 
 except LLMAuthenticationError:
     print("Invalid or missing API credentials. Check your API key or auth setup.")
@@ -66,68 +67,92 @@ except LLMError as e:
     print(f"Unhandled LLM error: {e}")
 ```
 
-The same exceptions are raised from both `LLM.completion()` and `LLM.responses()` paths.
 
-### Example: Using the Responses API
+
+### Avoiding context‑window errors with a condenser
+
+If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.
+
+```python icon="python" highlight={5-10}
+from openhands.sdk.context.condenser import LLMSummarizingCondenser
print("Context window exceeded. Consider enabling a condenser.")
121
+
except LLMAuthenticationError:
122
+
print("Invalid or missing API credentials.")
123
+
except LLMRateLimitError:
124
+
print("Rate limit exceeded. Back off and retry later.")
125
+
except LLMTimeoutError:
126
+
print("Request timed out. Consider increasing timeout or retrying.")
127
+
except LLMServiceUnavailableError:
128
+
print("Service unavailable or connectivity issue. Retry with backoff.")
129
+
except LLMBadRequestError:
130
+
print("Bad request to provider. Validate inputs and arguments.")
88
131
except LLMError as e:
89
-
print(f"LLM error: {e}")
132
+
print(f"Unhandled LLM error: {e}")
90
133
```
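The `LLMRateLimitError` and `LLMServiceUnavailableError` branches above both advise retrying with backoff. A small wrapper along these lines is one way to act on that; the `run_with_backoff` helper and its defaults are illustrative, not part of the SDK or this PR.

```python
import time

from openhands.sdk.llm.exceptions import (
    LLMRateLimitError,
    LLMServiceUnavailableError,
)


def run_with_backoff(conversation, attempts: int = 3, base_delay: float = 2.0) -> None:
    """Retry transient LLM failures with exponential backoff (hypothetical helper)."""
    for attempt in range(attempts):
        try:
            conversation.run()
            return
        except (LLMRateLimitError, LLMServiceUnavailableError):
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
```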
 
-## Using agents and conversations
-
-When you use `Agent` and `Conversation`, LLM exceptions propagate out of `conversation.run()` and `conversation.send_message(...)` if a condenser is not present.
+### Example: Using responses()
 
 ```python icon="python"
 from pydantic import SecretStr
-from openhands.sdk import Agent, Conversation, LLM
-from openhands.sdk.llm.exceptions import LLMContextWindowExceedError
+from openhands.sdk import LLM
+from openhands.sdk.llm import Message, TextContent
+from openhands.sdk.llm.exceptions import LLMError, LLMContextWindowExceedError
conv.send_message("Continue the long analysis we started earlier…")
107
-
conv.run()
146
+
resp = llm.responses([
147
+
Message.user([TextContent(text="Write a one-line haiku about code.")])
148
+
])
149
+
print(resp.message)
108
150
except LLMContextWindowExceedError:
109
-
print("Hit the context limit. Add a condenser to avoid this in long sessions.")
110
-
```
111
-
112
-
### Avoiding context‑window errors with a condenser
113
-
114
-
If a condenser is configured, the SDK emits a condensation request event instead of raising `LLMContextWindowExceedError`. The agent will summarize older history and continue.
115
-
116
-
```python icon="python" highlight={5-10}
117
-
from openhands.sdk.context.condenser import LLMSummarizingCondenser
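The condenser example is cut off after its first import in this view. Under the assumption that `LLMSummarizingCondenser` takes `llm`, `max_size`, and `keep_first` arguments and that `Agent` accepts a `condenser`, the wiring typically looks something like the sketch below; verify the parameter names against the current SDK.

```python
from pydantic import SecretStr

from openhands.sdk import Agent, Conversation, LLM
from openhands.sdk.context.condenser import LLMSummarizingCondenser

llm = LLM(model="provider/model-name", api_key=SecretStr("your-key"))  # placeholder values

# Assumed parameters: summarize once history exceeds max_size events,
# always keeping the first keep_first events verbatim.
condenser = LLMSummarizingCondenser(llm=llm, max_size=80, keep_first=4)

agent = Agent(llm=llm, tools=[], condenser=condenser)
conversation = Conversation(agent=agent)

# With a condenser configured, a long session triggers a condensation request
# event and summarization instead of raising LLMContextWindowExceedError.
conversation.send_message("Continue the long analysis we started earlier…")
conversation.run()
```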