:warning: Be careful -- this stops the currently running (containerized) TigerGraph database and deletes all of its data.
#### Start the database
To start the database, run the following [script](./scripts/start.sh):
```bash
./scripts/stop.sh # if you have an existing TG database
# wait several seconds for docker to reset
./scripts/start.sh
```
It will start a single-node TigerGraph database and all required services. Note that the license in the container is a trial license supporting at most 100 GB of data. For benchmarks on SF-100 and larger, you need to obtain a license after running `start.sh`; there is an example command at the end of `start.sh`.
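A quick way to check readiness is to query the service status inside the container. This is a sketch assuming the container name `snb-interactive-tigergraph` used by these scripts:

```bash
# All TigerGraph services should report as running/online before proceeding
docker exec --user tigergraph snb-interactive-tigergraph bash -c "gadmin status"
```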
#### Load the data
To set up the database, run the following [script](./scripts/setup.sh):
```bash
./scripts/setup.sh
```
This step may take a while (several minutes), as it defines the queries and loading jobs, loads the data, and installs the queries. After the data is ready, you can explore the graph using TigerGraph GraphStudio in the browser at `http://localhost:14240/`. By default, the container's terminal can be accessed via `ssh tigergraph@localhost -p 14022` with the password `tigergraph`, or with the Docker command `docker exec --user tigergraph -it snb-interactive-tigergraph bash`.
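As a quick sanity check after loading, you can list the GSQL catalog from the host (illustrative; assumes the container name above):

```bash
# Prints the schema, loading jobs, and installed queries known to GSQL
docker exec --user tigergraph snb-interactive-tigergraph bash -c "gsql ls"
```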
The above scripts can be executed with a single command:
```bash
scripts/load-in-one-step.sh
```
## Running the benchmark
To run the scripts of the benchmark framework, edit the `driver/{create-validation-parameters,validate,benchmark}.properties` files, then run the corresponding script, one of:
```bash
driver/create-validation-parameters.sh
driver/validate.sh
driver/benchmark.sh
```
:warning: Our DateTime library does not support dateTime precision to milliseconds, so we use INT for datetime values right now.
SNB data sets of **different scale factors require different configurations** for the benchmark runs. Therefore, make sure you use the correct values (update_interleave and query frequencies) based on the files provided in the [`sf-properties/` directory](../sf-properties).
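For illustration, the relevant entries in the properties files are key-value pairs like the following (the values below are placeholders, not real settings; copy the actual ones for your scale factor from `sf-properties/`):

```
# placeholders only -- use the values from the matching sf-properties file
ldbc.snb.interactive.update_interleave=1000
ldbc.snb.interactive.LdbcQuery1_freq=26
```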
* The default workload contains updates which are persisted in the database. Therefore, **the database needs to be reloaded or restored from backup before each run**. Use the provided `scripts/backup-database.sh` and `scripts/restore-database.sh` scripts to achieve this.
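For example, a full benchmark cycle using these scripts looks like this:

```bash
scripts/backup-database.sh   # snapshot the freshly loaded database
driver/benchmark.sh          # run the benchmark (this applies updates)
scripts/restore-database.sh  # roll back to the snapshot before the next run
```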