This change allows users to declaratively specify hierarchical entities in their expected utterance results. For example, a user may declare the following:
```json
{
  "text": "Order a pepperoni pizza",
  "intent": "OrderFood",
  "entities": [
    {
      "entity": "FoodItem",
      "startPos": 8,
      "endPos": 22,
      "children": [
        {
          "entity": "Topping",
          "startPos": 8,
          "endPos": 16
        },
        {
          "entity": "FoodType",
          "startPos": 18,
          "endPos": 22
        }
      ]
    }
  ]
}
```
This results in three test cases: one for the parent entity ("FoodItem") and one for each of the two nested entities ("FoodItem::Topping" and "FoodItem::FoodType").
Child entity type names are prefixed by their parent entity type names in the format `parentType::childType`. As such, the recursive entity parsing for the LUIS V3 provider has been updated to use this convention.
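For illustration, the flattening this implies can be sketched as a recursive walk that accumulates the prefixed type name. This is a minimal sketch only, not the actual NLU.DevOps source; the `EntityNode` type and `Flatten` helper are hypothetical names:

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical model for a labeled entity with nested child entities.
public sealed class EntityNode
{
    public string Entity { get; set; }
    public int StartPos { get; set; }
    public int EndPos { get; set; }
    public List<EntityNode> Children { get; set; }
}

public static class EntityFlattener
{
    // Expands a hierarchical entity into one test case per node,
    // prefixing child entity types as "parentType::childType".
    public static IEnumerable<(string Entity, int StartPos, int EndPos)> Flatten(
        EntityNode node, string parentType = null)
    {
        var entityType = parentType == null
            ? node.Entity
            : $"{parentType}::{node.Entity}";

        yield return (entityType, node.StartPos, node.EndPos);

        foreach (var child in node.Children ?? Enumerable.Empty<EntityNode>())
        {
            foreach (var flattened in Flatten(child, entityType))
            {
                yield return flattened;
            }
        }
    }
}

// For the "Order a pepperoni pizza" example above, Flatten yields:
//   ("FoodItem",           8, 22)
//   ("FoodItem::Topping",  8, 16)
//   ("FoodItem::FoodType", 18, 22)
```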
Fixes #335
`docs/Analyze.md` (0 additions & 36 deletions)
````diff
@@ -125,42 +125,6 @@ thresholds:
   threshold: 0.1
 ```
 
-#### Example
-
-While it's useful to set up the performance regression testing in a CI environment, you can also run the tools locally. Here's an end-to-end example for running a performance regression test.
-
-The assumptions are that you have the following:
-1. An existing NLU endpoint (in this case, for LUIS).
-2. Environment variables or app settings pointing to the correct LUIS application to query and update.
-3. A set of changes to the NLU training utterances to evaluate (`utterances.json`).
-4. A test set that can be used to evaluate the endpoint (`tests.json`).
-
-Here is the end-to-end:
-```sh
-# Get predictions from the current endpoint
-dotnet nlu test -s luis -u tests.json -o baselineResults.json
-# Generate the confusion matrix statistics for the results
-dotnet nlu compare -e tests.json -a baselineResults.json -o baseline
-# Train a new version of the model
-dotnet nlu train -s luis -u utterances.json -a
-# Get predictions from the new endpoint
-dotnet nlu test -s luis -u tests.json -o latestResults.json
-# Create a regression threshold for the overall intent F1 score
-echo -e "thresholds:\n\
-- type: intent\n\
-- threshold: 0.1\n" > \
-thresholds.yml
-# Generate the confusion matrix statistics for the results and validate regression thresholds
-dotnet nlu compare \
-  -e tests.json \
-  -a latestResults.json \
-  -o latest \
-  -b baseline/statistics.json \
-  -t thresholds.yml
-```
-
-If the F<sub>1</sub> score for overall intents has not dropped more than 0.1, the exit code for the final command will be 0, otherwise it will be 1 (or, more generally, the number of regression threshold tests failed).
-
 ### Unit Test Mode
 
 Unit test mode can be enabled using the [`--unit-test`](#-u---unit-test) flag. This flag configures the command to return a non-zero exit code if any false positive or false negative results are detected. When in unit test mode, false positive results for entities are only generated for entity types included in the `strictEntities` configuration from `--test-settings` or the labeled test utterance. Similarly, false positive results will only be generated for intents when an explicit negative intent (e.g., "None") is included in the expected results. For example:
````