Commit 618af40

RDoc-3491 initial context query documentation

1 parent bb99ceb commit 618af40

File tree

5 files changed (+225, -87 lines)

docs/ai-integration/ai-agents/ai-agents_overview.mdx

Lines changed: 102 additions & 58 deletions
@@ -29,8 +29,7 @@ import LanguageContent from "@site/src/components/LanguageContent";
 * [The main stages in defining an AI agent](../../ai-integration/ai-agents/ai-agents_overview#the-main-stages-in-defining-an-ai-agent)
 * [Initiating a conversation](../../ai-integration/ai-agents/ai-agents_overview#initiating-a-conversation)
 * [AI agent usage flow chart](../../ai-integration/ai-agents/ai-agents_overview#ai-agent-usage-flow-chart)
-* [Stream LLM responses](../../ai-integration/ai-agents/ai-agents_overview#stream-llm-responses)
-* [Initial context queries](../../ai-integration/ai-agents/ai-agents_overview#initial-context-queries)
+* [Streaming LLM responses (RavenDB 7.1.3 and up)](../../ai-integration/ai-agents/ai-agents_overview#streaming-llm-responses-ravendb-713-and-up)
 * [Security concerns](../../ai-integration/ai-agents/ai-agents_overview#security-concerns)
 * [AI agents and other AI features](../../ai-integration/ai-agents/ai-agents_overview#ai-agents-and-other-ai-features)
 
@@ -63,70 +62,127 @@ Once defined, the agent can be invoked by the client to handle user requests, re
 </Admonition>
 
 ### The main stages in defining an AI agent:
-* Defining a **connection string** to the AI model
-* Defining the **agent configuration**, including -
+To define an AI agent, the client needs to specify -
+
+* A **connection string** to the AI model
+
+* An **agent configuration** that includes:
+
 * Basic agent settings, like the unique ID by which the system recognizes the task.
-* A system prompt that defines the AI model's role
-* a JSON schema that defines the layout for the LLM response.
-* Optional **agent parameters** that RQL queries that you include in your query tools will be able to reference (see below).
-* Optional **query tools** that allow the LLM to query the database through the agent.
-* Optional **action tools** that allow the LLM to request the client to perform actions.
+
+* A **system prompt** that defines AI model characteristics like its role.
+
+* Optional **agent parameters** whose values will be provided by the client when starting a
+conversation.
+Agent parameters can be included in queries triggered by the LLM.
+
+* <a id="initial-context-queries"/>Optional **query tools** that the LLM will be able to invoke freely.
+The LLM will be able to use these tools to query the database through the agent and retrieve the results.
+<Admonition type="note" title="">
+You can optionally mark a query tool as an **initial context query**.
+Initial context queries are executed by the agent as soon as it starts a conversation with the LLM, without waiting for the LLM to invoke them, so that data relevant to the conversation is included in the initial context sent to the LLM.
+E.g., an initial context query can provide the LLM with the last 5 orders placed by a customer, as context for an answer that the LLM is requested to provide about the customer's order history.
+</Admonition>
+<a id="llm-parameters"/>A query tool's RQL query may include -
+* **Agent parameters** whose values are provided by the client (discussed below).
+* **LLM parameters** whose values will be provided by the LLM when it invokes the query tool.
+The LLM can fill these parameters with values that are relevant to the current conversation.
+E.g.,
+A query tool's RQL query may include an LLM parameter called `$productCategory`.
+When the LLM invokes this query tool, it may fill `$productCategory` with `smartphones`, to get data about smartphones from the database.
+The agent will replace `$productCategory` with `smartphones` before running the query.
+
+* Optional **action tools** that the LLM will be able to invoke freely.
+The LLM will be able to use these tools to request the client to perform actions.
 
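The parameter-replacement behavior described above (agent parameters filled by the client, LLM parameters filled by the model) can be sketched as plain string substitution. This is a hypothetical illustration - `fill_rql_parameters` and its arguments are invented names, not part of the RavenDB client API:

```python
import re

def fill_rql_parameters(rql: str, agent_params: dict, llm_params: dict) -> str:
    """Replace $-prefixed placeholders in an RQL query string.

    Agent parameter values come from the client when the conversation
    starts; LLM parameter values come from the model when it invokes
    the query tool. (Hypothetical helper, for illustration only.)
    """
    values = {**agent_params, **llm_params}

    def substitute(match):
        name = match.group(1)
        if name not in values:
            raise KeyError(f"no value supplied for ${name}")
        value = values[name]
        # Quote string values so they embed cleanly in the query text.
        return f'"{value}"' if isinstance(value, str) else str(value)

    return re.sub(r"\$(\w+)", substitute, rql)

query = 'from "Orders" where ShipTo.Country == $country and Category == $productCategory'
print(fill_rql_parameters(query, {"country": "France"}, {"productCategory": "smartphones"}))
# from "Orders" where ShipTo.Country == "France" and Category == "smartphones"
```

Either kind of parameter is just a placeholder in the query text; the only difference is who supplies the value.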
 ### Initiating a conversation:
-To initiate a conversation with the agent the client needs to provide -
-* Values for **agent parameters**
-If agent parameters are defined for query tools, providing their values when starting a conversation is mandatory.
-E.g., you can define the RQL query `from "Orders" where ShipTo.Country == $country`, where `$country` is an agent parameter. When you start a conversation with the agent, you must provide a value for `$country` parameter. When the LLM uses this query, it will embed this value instead of the parameter.
-Providing query values when starting a conversation gives the client the ability to customize the interaction by its needs, as well as limit the scope of LLM queries.
-* A **user prompt** that defines the user's request.
-* **conversation history**
-If you want to maintain a continuous conversation with the LLM, you need to send it the entire history of the conversation so far. Conversation history is automatically kept in a dedicated `@conversations` collection and can be retrieved from it and continued.
+To start a conversation with the LLM, the agent will send it an **initial context** that includes -
+
+* Pre-defined [agent configuration](../../ai-integration/ai-agents/ai-agents_overview#the-main-stages-in-defining-an-ai-agent) elements (automatically sent by the agent):
+* The system prompt
+* Optional agent parameters
+* Optional query tools,
+and if any query tool is marked as an initial context query - its results.
+* Optional action tools
+
+* A **response object** - a JSON schema that defines the layout for the LLM response.
+The response object can be defined either as part of the pre-set agent configuration,
+or by the client when it invokes the agent.
+<Admonition type="note" title="">
+Allowing the client to set the response object when it starts the agent gives it the ability to tailor each conversation to its current needs.
+</Admonition>
+
+* **Values for agent parameters**
+If agent parameters were defined in the agent configuration, the client is required to provide their values to the agent when starting a conversation.
+
+E.g.,
+The agent configuration may include an agent parameter called `$country`.
+A query tool may include an RQL query like `from "Orders" where ShipTo.Country == $country`, using this agent parameter.
+When the client starts a conversation with the agent, it will be required to provide the value for `$country`, e.g. `France`.
+When the LLM requests the agent to invoke this query tool, the agent will replace `$country` with `France` before running the query.
+
+<Admonition type="note" title="">
+Providing query values when starting a conversation gives the client the ability to shape and limit the scope of LLM queries according to its objectives.
+</Admonition>
+
+* Optional **conversation history**
+To continue a conversation with the LLM, the agent will need to send it the entire history of the conversation so far.
+Conversations are automatically kept in documents in the `@conversations` collection. The client will need to point the agent at the conversation that it wants to continue.
+
+* A **user prompt**, set by the client, that defines this part of the conversation.
+The user prompt may be, for example, a question or a request for particular information.
 
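A response object of the kind described above is an ordinary JSON schema. A minimal hypothetical example (the field names are illustrative only, not taken from the RavenDB documentation):

```python
import json

# Hypothetical response object: a JSON schema that constrains the layout
# of the LLM's final answer. The LLM must reply with an object containing
# a required "answer" string and an optional list of referenced order IDs.
response_object = {
    "type": "object",
    "properties": {
        "answer": {
            "type": "string",
            "description": "The reply to the user prompt",
        },
        "orderIds": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Orders referenced in the answer",
        },
    },
    "required": ["answer"],
}

print(json.dumps(response_object, indent=2))
```

Letting the client supply such a schema per conversation is what allows it to tailor each conversation's output shape to its current needs.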
 <hr />
 
 ## AI agent usage flow chart
 
-The flow chart below illustrates the interaction between the User, RavenDB client, AI agent, AI model, and RavenDB database.
+The flow chart below illustrates interactions between the User, RavenDB client, AI agent, AI model, and RavenDB database.
 
 ![AI agent usage flow chart](./assets/ai-agents_flowchart.png)
 
-1. **User `<->` Client flow**
-Users can use clients that interact with the AI agent. The user can provide input through the client, and get responses from the agent.
+1. **User`<->`Client** flow
+Users can use clients that interact with the AI agent.
+The user can provide agent parameter values through the client, and get responses from the agent.
 
-2. **Client `<->` Database flow**
-The client can interact with the database directly, either by its own initiative or as a result of AI agent action requests.
+2. **Client`<->`Database** flow
+The client can interact with the database directly, either on its own initiative or as a result of AI agent action requests (query requests are handled by the agent).
 When performing actions on behalf of the AI agent, the client will return the agent the results of these actions.
 
-3. **Client `<->` Agent flow**
-* The client can invoke the agent, pass it parameter values for query tools, provide it with a user prompt to initiate a conversation, and send it the history of the conversation so far.
-* The agent can respond to the client with answers to user prompts, or with requests for the client to perform actions.
-* E.g., the client can pass the agent a research topic, a user prompt that guides the AI model to act as a research assistant, and the history of the conversation so far.
+3. **Client`<->`Agent** flow
+* To invoke an agent, the client needs to provide it with an [initial context](../../ai-integration/ai-agents/ai-agents_overview#initiating-a-conversation).
+* During the conversation, the agent may send the client action requests on behalf of the LLM.
+The client will need to process these requests and return action results to the agent.
+* When the LLM provides the agent with its final response, the agent will provide it to the client.
+The client does not need to reply to this message.
+* E.g., the client can pass the agent a research topic, a user prompt that guides the AI model to
+act as a research assistant, and the history of the conversation so far.
 The agent can respond with a summary of the research topic, and a request for the client to save it in the database.
 
-4. **Agent `<->` Database flow**
+4. **Agent`<->`Database** flow
 * The agent can query the database on behalf of the AI model.
-* A query tool's RQL query may include _agent parameters_, which are placeholders for values provided by the user. When this is the case, the agent will replace these parameters with values provided by the user before running the query.
-* A query tool's RQL query may also include parameters that are placeholders for values that the AI model is permitted to fill. When this is the case, the agent will replace these parameters with values provided by the AI model before running the query.
-* When the query ends, the agent will return its results to the AI model.
-
-5. **Agent `<->` Model flow**
-* When starting a conversation, the agent provides the AI model with -
-* A system prompt that defines the model's role and how it is expected to fulfill it
-* A JSON schema that defines the layout for the model's response
-* Query and Action tools
-* If this is a continuation of an ongoing conversation - the history of the conversation so far
-* A user prompt that initiates this part of the conversation
-* The AI model can respond to the agent with -
-* Answers to user prompts
-* Requests for the agent to query the database, optionally with values for query parameters that the AI model is permitted to fill
-* Requests for the client to perform actions
-* The agent can respond to the AI model with -
-* Results of database queries
-* Results of client actions
+When the query ends, the agent will return its results to the AI model.
+* When the agent is requested to run a query that includes _agent parameters_,
+it will replace these parameters with values provided by the client before
+running the query.
+* When the agent is requested to run a query that includes _LLM parameters_,
+it will replace these parameters with values provided by the LLM before
+running the query.
+
+5. **Agent`<->`Model** flow
+* **When a conversation is started**, the agent needs to provide the AI model with
+an [initial context](../../ai-integration/ai-agents/ai-agents_overview#initiating-a-conversation), partly defined by the agent configuration and partly by the client.
+* **During the conversation**, the AI model can respond to the agent with -
+* Requests for queries.
+If a query includes LLM parameters, the LLM will include values for them, and the agent will replace the parameters with these values, run the query, and return its results to the LLM.
+If a query includes agent parameters, the agent will replace them with values provided by the client, run the query, and return its results to the LLM.
+* Requests for actions.
+The agent will pass such requests to the client and return their results to the LLM.
+* The final response to the user prompt, in the layout defined by the response object.
+The agent will pass the response to the client (which doesn't need to reply to it).
 
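The Agent`<->`Model exchange in steps 4-5 can be sketched as a dispatch loop that keeps answering tool requests until the model returns its final response. This is a hypothetical sketch with invented names (`FakeModel`, `run_conversation`, the message shapes), not the actual agent implementation:

```python
class FakeModel:
    """Stand-in for an LLM: replays a scripted sequence of replies."""

    def __init__(self, script):
        self.script = iter(script)

    def send(self, _message):
        return next(self.script)


def run_conversation(model, run_query, request_client_action, initial_context):
    """Dispatch model replies until the model produces its final response."""
    reply = model.send(initial_context)
    while True:
        if reply["type"] == "query_request":
            # The agent runs the query and returns its results to the model.
            results = run_query(reply["tool"], reply.get("llm_params", {}))
            reply = model.send({"tool_results": results})
        elif reply["type"] == "action_request":
            # Action requests are forwarded to the client; results go back.
            results = request_client_action(reply["action"])
            reply = model.send({"action_results": results})
        else:
            # Final response, shaped by the response object; handed to the client.
            return reply["content"]


model = FakeModel([
    {"type": "query_request", "tool": "recent-orders",
     "llm_params": {"productCategory": "smartphones"}},
    {"type": "action_request", "action": "save-summary"},
    {"type": "final", "content": {"answer": "Summary saved."}},
])
result = run_conversation(
    model,
    run_query=lambda tool, params: [{"id": "orders/1-A"}],
    request_client_action=lambda action: "ok",
    initial_context={"system_prompt": "You are a research assistant."},
)
print(result)  # {'answer': 'Summary saved.'}
```

The client only ever sees action requests and the final response; query requests are resolved between the agent and the database, matching the flow chart above.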
 <hr />
 
-## Stream LLM responses
+## Streaming LLM responses (RavenDB 7.1.3 and up)
 
 Rather than wait for the LLM to finish generating a response and then pass it in its entirety to the client, the agent can stream response chunks (determined by the LLM, e.g. words or symbols) to the client one by one, immediately as each chunk is returned by the LLM, allowing the client to process and display the response gradually.
 
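Chunk-by-chunk consumption, as described above, can be illustrated with a plain generator. This is a generic sketch of the streaming pattern, not the RavenDB streaming API:

```python
def stream_chunks():
    # Stand-in for an agent streaming LLM response chunks one by one;
    # the chunk boundaries are determined by the LLM.
    for chunk in ["The", " customer", " placed", " 5", " orders."]:
        yield chunk


# The client processes each chunk as soon as it arrives, instead of
# waiting for the complete response.
response = ""
for chunk in stream_chunks():
    response += chunk
    # e.g., append the chunk to a UI element here

print(response)  # The customer placed 5 orders.
```

The benefit is latency: the first words can be displayed while the rest of the response is still being generated.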
@@ -138,18 +194,6 @@ Streaming is supported by most AI models, including OpenAI services like GPT-4 a
 
 <hr />
 
-## Initial context queries
-
-The initial context queries are designed to gather relevant information from the database before the main conversation begins. These queries help set the stage for a more informed and context-aware interaction between the user and the AI agent.
-
-1. **User Intent**: The agent should first determine the user's intent by asking clarifying questions or making initial queries to understand the context better.
-2. **Relevant Data Retrieval**: Based on the user's intent, the agent can issue queries to retrieve relevant data from the database. This may include user profiles, previous interactions, or specific documents related to the user's query.
-3. **Contextual Information**: The agent should also gather any additional contextual information that may be useful for the conversation. This could include metadata about the user's environment, preferences, or constraints.
-
-By performing these initial context queries, the AI agent can create a more tailored and effective interaction with the user.
-
-<hr />
-
 ## Security concerns
 
 https://issues.hibernatingrhinos.com/issue/RavenDB-24777/AI-Agent-Security-Concerns
0 commit comments