* [AI agents and other AI features](../../ai-integration/ai-agents/ai-agents_overview#ai-agents-and-other-ai-features)
* [Reducing throughput and expediting LLM response](../../ai-integration/ai-agents/ai-agents_overview#reducing-throughput-and-expediting-llm-response)
</Admonition>
### The main stages in defining an AI agent:
To define an AI agent, the client needs to specify -
* A **connection string** to the AI model.

* An **agent configuration** that includes -

  * **Basic agent settings**, like the unique ID by which the system recognizes the task.

  * A **system prompt** that defines AI model characteristics like its role.

  * Optional **agent parameters**.
    Agent parameters' values are provided by the client when it starts a conversation with the agent, and can be used in queries initiated by the LLM (see **query tools** below).

  * <a id="query-tools"/> Optional **query tools**.
    The LLM will be able to invoke query tools freely to retrieve data from the database.

    * **Read-only operations**
      Query tools are only allowed to apply **read operations**.
      To make changes in the database, use [action tools](../../ai-integration/ai-agents/ai-agents_overview#action-tools).

    * **Database access**
      The LLM has no direct access to the database. To use a query tool, it must send a query request to the agent, which will send the RQL query defined by the tool to the database and pass its results to the LLM.

    * <a id="query-parameters"/> **Query parameters**
      The RQL query defined by a query tool may optionally include parameters, identified by a `$` prefix.
      Both the user and the LLM can pass values to these parameters.
      **Users** can pass values to query parameters through **agent parameters**, when the client starts a conversation with the agent.
      **The LLM** can pass values to queries through a **parameters schema**, outlined as part of the query tool, when requesting the agent to run the query.

    You can optionally set a query tool as an **initial context query**.
    Queries that are **not** set this way are invoked when the LLM requests the agent to run them.
    Queries that **are** set as initial context queries are executed by the agent immediately when it starts a conversation with the LLM, without waiting for the LLM to invoke them, so that data relevant to the conversation is included in the initial context sent to the LLM.
    E.g., an initial context query can provide the LLM with the last 5 orders placed by a customer, as context for an answer the LLM is requested to provide about the customer's order history.

* <a id="action-tools"/> Optional **action tools** that the LLM will be able to invoke freely.
  The LLM will be able to use these tools to request the client to perform actions.
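Putting the stages above together, the following is a rough Python sketch of what an agent definition carries and how `$`-prefixed query parameters get filled in. It is illustrative only, not the actual RavenDB client API: `agent_config`, `resolve_query`, `$customerId`, and `$maxOrders` are hypothetical names, and in practice parameter substitution is performed by the agent as a parameterized query rather than by string replacement.

```python
# Hypothetical sketch of an agent definition -- not the RavenDB client API.

agent_config = {
    "id": "order-history-agent",          # unique ID by which the system recognizes the task
    "connection_string": "ai-model-cs",   # connection string to the AI model
    "system_prompt": "You answer questions about customer orders.",
    "agent_parameters": ["$customerId"],  # values supplied by the client per conversation
    "query_tools": [
        {
            "name": "recent-orders",
            # RQL with a client-supplied ($customerId) and an LLM-supplied ($maxOrders) parameter
            "query": "from Orders where Customer == $customerId limit $maxOrders",
            "parameters_schema": {"$maxOrders": "number"},  # filled by the LLM per invocation
            "initial_context": False,
        }
    ],
    "action_tools": [
        {"name": "notify-customer", "description": "Ask the client to send a notification"}
    ],
}

def resolve_query(tool, agent_params, llm_params):
    """Merge client- and LLM-supplied values into the tool's RQL text (simplified)."""
    query = tool["query"]
    for name, value in {**agent_params, **llm_params}.items():
        query = query.replace(name, repr(value))
    return query

rql = resolve_query(agent_config["query_tools"][0],
                    {"$customerId": "customers/1-A"},   # from the client, at conversation start
                    {"$maxOrders": 5})                  # from the LLM, per tool invocation
print(rql)  # from Orders where Customer == 'customers/1-A' limit 5
```

The sketch is only meant to show where each value comes from: agent parameters arrive once per conversation, while schema-declared parameters arrive with each tool invocation by the LLM.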
### Initiating a conversation:
Though in our example the LLM helps us find and reward productive employees, we remain careful throughout the code not to provide it with personal employee details or proprietary company information.
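One way to exercise that caution is to whitelist the result fields that may reach the model. A minimal sketch under assumptions: the `SAFE_FIELDS` whitelist and `redact` helper are hypothetical and not taken from the example code.

```python
# Hypothetical helper: keep only whitelisted fields so no personal details reach the LLM.
SAFE_FIELDS = {"employee_id", "tasks_completed", "department"}  # illustrative whitelist

def redact(results):
    """Drop every field not explicitly approved for the model's context."""
    return [{k: v for k, v in doc.items() if k in SAFE_FIELDS} for doc in results]

rows = [{"employee_id": "emp/7", "tasks_completed": 42,
         "home_address": "...", "salary": 100000}]
print(redact(rows))  # [{'employee_id': 'emp/7', 'tasks_completed': 42}]
```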
<hr />
## AI agents and other AI features
### AI agents and vector search
<hr />
## Reducing throughput and expediting LLM response
If throughput and LLM response time are a concern, consider the following options:
### Maximum number of querying iterations:
You can limit the number of times that the LLM is allowed to trigger database queries in response to a single user prompt.
* [Set iterations limit using the API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#set-maximum-number-of-iterations)
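Conceptually, the limit caps the query/response loop that handles a single user prompt. A minimal Python sketch, assuming a hypothetical `llm_step` callback (the real limit is a server-side agent setting, not client code):

```python
# Illustrative only: cap how many query round-trips one user prompt may trigger.
def run_conversation(llm_step, max_iterations=3):
    """Answer one user prompt, allowing at most max_iterations query round-trips."""
    for iteration in range(max_iterations):
        reply = llm_step(iteration)          # the LLM either requests a query or answers
        if reply["type"] == "answer":
            return reply["text"]
        # reply["type"] == "query": the agent would run it and loop to return results
    # The cap was reached before the LLM produced a final answer.
    return "stopped: query iteration limit reached"

# A fake LLM that answers immediately vs. one that keeps requesting queries.
print(run_conversation(lambda i: {"type": "answer", "text": "done"}))       # done
print(run_conversation(lambda i: {"type": "query", "rql": "from Orders"}))  # stopped: query iteration limit reached
```

A lower cap trades answer depth for latency and throughput: the model gets fewer chances to gather data before it must answer.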
### Chat trimming configuration:
The LLM doesn't keep the history of previous exchanges. To allow a continuous conversation, every new message we send to the LLM includes the history of the entire conversation since its start.
To save traffic and tokens, you can summarize the conversation history. This can be helpful when transfer rate and cost are a concern, or when the context may become too large to handle efficiently.
* [Configure chat trimming using the API](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_api#set-chat-trimming-configuration)
* [Configure chat trimming using Studio](../../ai-integration/ai-agents/creating-ai-agents/creating-ai-agents_studio#configure-chat-trimming)
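As a rough illustration of what trimming does to the history sent with each message, here is a sketch with a stub summarizer. `trim_history` and `keep_last` are hypothetical names; the actual summarization behavior is part of the agent's chat trimming configuration.

```python
# Illustrative only: collapse older turns into a summary, keep recent turns verbatim.
def trim_history(history, keep_last=4):
    """Replace everything but the last `keep_last` turns with one summary message."""
    if len(history) <= keep_last:
        return history
    older, recent = history[:-keep_last], history[-keep_last:]
    summary = {"role": "system",
               "content": f"Summary of {len(older)} earlier messages."}  # stub summarizer
    return [summary] + recent

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
trimmed = trim_history(history)
print(len(trimmed))  # 5: one summary message plus the 4 most recent turns
```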