diff --git a/docs/.gitignore b/docs/.gitignore
new file mode 100644
index 0000000000..610a98479c
--- /dev/null
+++ b/docs/.gitignore
@@ -0,0 +1,2 @@
+book
+index.html
diff --git a/docs/README.md b/docs/README.md
new file mode 100644
index 0000000000..6b6fffff6b
--- /dev/null
+++ b/docs/README.md
@@ -0,0 +1,29 @@
+# Concord Documentation
+
+This documentation covers Concord, a workflow server and orchestration engine that connects different
+systems together using scenarios and plugins.
+
+Concord provides a comprehensive platform for process automation, featuring:
+
+- **Process Management**: execute workflows in isolated environments
+- **Project Organization**: group processes with shared configuration and resources
+- **Security**: built-in secrets management and team-based authorization
+- **Extensibility**: rich plugin ecosystem for system integration
+
+## Building
+
+To build and view the documentation:
+
+1. Install the required tools:
+ ```shell
+ cargo install mdbook mdbook-variables
+ ```
+
+2. Serve the documentation locally:
+ ```shell
+ mdbook serve
+ ```
+
+   The documentation will be available at `http://localhost:3000`.
+
+The built documentation is output to the `book/` directory.
diff --git a/docs/book.toml b/docs/book.toml
new file mode 100644
index 0000000000..d949676e27
--- /dev/null
+++ b/docs/book.toml
@@ -0,0 +1,15 @@
+[book]
+authors = ["Concord Authors"]
+language = "en"
+src = "src"
+title = "Concord"
+
+[preprocessor.variables.variables.site]
+concord_core_version = "2.32.0"
+concord_source = "https://github.com/walmartlabs/concord/"
+concord_plugins_source = "https://github.com/walmartlabs/concord-plugins/"
+concord_plugins_v1_docs = "https://concord.walmartlabs.com/docs/plugins-v1"
+concord_plugins_v2_docs = "https://concord.walmartlabs.com/docs/plugins-v2"
+
+[output.html]
+default-theme = "light"
diff --git a/docs/src/SUMMARY.md b/docs/src/SUMMARY.md
new file mode 100644
index 0000000000..7b9e8a80b1
--- /dev/null
+++ b/docs/src/SUMMARY.md
@@ -0,0 +1,87 @@
+# Summary
+
+[Introduction](../README.md)
+
+# User Guide
+
+- [Overview](./getting-started/index.md)
+- [Quick Start](./getting-started/quickstart.md)
+- [Installation](./getting-started/installation.md)
+- [Configuration](./getting-started/configuration.md)
+- [Processes](./getting-started/processes.md)
+- [Projects](./getting-started/projects.md)
+- [Forms](./getting-started/forms.md)
+- [Scripting](./getting-started/scripting.md)
+- [Security](./getting-started/security.md)
+- [Tasks](./getting-started/tasks.md)
+- [Organizations and Teams](./getting-started/orgs.md)
+- [Policies](./getting-started/policies.md)
+- [JSON Store](./getting-started/json-store.md)
+- [Node Roster](./getting-started/node-roster.md)
+- [Development](./getting-started/development.md)
+
+# Reference
+
+- [Processes](./processes-v2/index.md)
+ - [Configuration](./processes-v2/configuration.md)
+ - [Flows](./processes-v2/flows.md)
+ - [Migration from v1](./processes-v2/migration.md)
+ - [Imports](./processes-v2/imports.md)
+ - [Profiles](./processes-v2/profiles.md)
+ - [Resources](./processes-v2/resources.md)
+ - [Tasks](./processes-v2/tasks.md)
+- [Plugins](./plugins/index.md)
+ - [Ansible](./plugins/ansible.md)
+ - [Asserts](./plugins/asserts.md)
+ - [Concord](./plugins/concord.md)
+ - [Crypto](./plugins/crypto.md)
+ - [Datetime](./plugins/datetime.md)
+ - [Docker](./plugins/docker.md)
+ - [Files](./plugins/files.md)
+ - [HTTP](./plugins/http.md)
+ - [JSON Store](./plugins/json-store.md)
+ - [Key-value](./plugins/key-value.md)
+ - [Lock](./plugins/lock.md)
+ - [Mocks](./plugins/mocks.md)
+ - [Node Roster](./plugins/node-roster.md)
+ - [Resource](./plugins/resource.md)
+ - [Slack](./plugins/slack.md)
+ - [Sleep](./plugins/sleep.md)
+ - [SMTP](./plugins/smtp.md)
+- [Triggers](./triggers/index.md)
+ - [GitHub](./triggers/github.md)
+ - [Cron](./triggers/cron.md)
+ - [Manual](./triggers/manual.md)
+ - [Generic](./triggers/generic.md)
+ - [OneOps](./triggers/oneops.md)
+- [CLI](./cli/index.md)
+ - [Linting](./cli/linting.md)
+ - [Running Flows](./cli/running-flows.md)
+- [API](./api/index.md)
+ - [API Key](./api/apikey.md)
+ - [Checkpoint](./api/checkpoint.md)
+ - [Form](./api/form.md)
+ - [JSON Store](./api/json-store.md)
+ - [Node Roster](./api/node-roster.md)
+ - [Organization](./api/org.md)
+ - [Policy](./api/policy.md)
+ - [Process](./api/process.md)
+ - [Project](./api/project.md)
+ - [Repository](./api/repository.md)
+ - [Role](./api/role.md)
+ - [Secret](./api/secret.md)
+ - [Team](./api/team.md)
+ - [Template](./api/template.md)
+ - [Trigger](./api/trigger.md)
+ - [User](./api/user.md)
+
+# Deprecated Features
+
+- [Processes (v1)](./processes-v1/index.md)
+ - [Configuration](./processes-v1/configuration.md)
+ - [Flows](./processes-v1/flows.md)
+ - [Imports](./processes-v1/imports.md)
+ - [Profiles](./processes-v1/profiles.md)
+ - [Resources](./processes-v1/resources.md)
+ - [Tasks](./processes-v1/tasks.md)
+- [Templates](./templates/index.md)
\ No newline at end of file
diff --git a/docs/src/api/apikey.md b/docs/src/api/apikey.md
new file mode 100644
index 0000000000..f50b59e743
--- /dev/null
+++ b/docs/src/api/apikey.md
@@ -0,0 +1,148 @@
+# API Key
+
+An API key is specific to a user and allows access to the API, with the key
+replacing user credentials for authentication.
+
+The REST API provides support for a number of operations:
+
+- [Create a New API Key](#create-a-new-api-key)
+- [List Existing API Keys](#list-existing-api-keys)
+- [Delete an Existing API Key](#delete-an-existing-api-key)
+
+
+
+## Create a New API Key
+
+Creates a new API key for a user.
+
+* **URI** `/api/v1/apikey`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+  {
+      "username": "myLdapUsername",
+      "userDomain": "optional.domain.com",
+      "name": "optionalKeyName"
+  }
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "id": "3b45a52f-91d7-4dd0-8bf6-b06548e0afa5",
+ "key": "someGeneratedKeyValue"
+ }
+ ```
+* **Example**: create a key; Concord auto-generates a key name
+ ```
+ curl -u myLdapUser \
+ -X POST \
+ -H "Content-Type: application/json" \
+ -d '{ "username": "myLdapUser" }' \
+ https://concord.example.com/api/v1/apikey
+ ```
+
+* **Example**: create a key with a specific key name
+ ```
+ curl -u myLdapUser \
+ -X POST \
+ -H "Content-Type: application/json" \
+ -d '{ "username": "myLdapUser", "name": "myCustomApiKeyName" }' \
+ https://concord.example.com/api/v1/apikey
+ ```
+
+* **Example**: create a key when multiple users with the same username exist across domains
+ ```
+ curl -u myLdapUser@example.com \
+ -X POST \
+ -H "Content-Type: application/json" \
+ -d '{ "username": "myLdapUser", "userDomain": "example.com" }' \
+ https://concord.example.com/api/v1/apikey
+ ```
+
+
+
+## List Existing API Keys
+
+Lists any existing API keys for the user. Only returns metadata, not actual keys.
+
+* **URI** `/api/v1/apikey`
+* **Method** `GET`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {
+ "id" : "2505acba-314d-11e9-adf9-0242ac110002",
+ "userId": "aab8a8e2-2f75-4859-add1-3b8f5d7a6690",
+ "name" : "key#1"
+ }, {
+ "id" : "efd12c7a-3162-11e9-b9c0-0242ac110002",
+ "userId": "aab8a8e2-2f75-4859-add1-3b8f5d7a6690",
+ "name" : "myCustomApiKeyName"
+ }
+ ]
+ ```
+* **Example**
+ ```
+ curl -u myLdapUser \
+ -H "Content-Type: application/json" \
+ https://concord.example.com/api/v1/apikey
+ ```
+
+
+
+## Delete an Existing API Key
+
+Removes an existing API key.
+
+* **URI** `/api/v1/apikey/${id}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "DELETED",
+ "ok": true
+ }
+ ```
+
+* **Example**
+ ```
+ curl -u myLdapUser \
+ -X DELETE \
+ -H "Content-Type: application/json" \
+ https://concord.example.com/api/v1/apikey/2505acba-314d-11e9-adf9-0242ac110002
+ ```
+
+
+
+## Using an API Key to Access the Concord API
+
+When accessing the Concord API, the **Authorization** header can be
+set with the value of an API key. This replaces the need to authenticate
+with user and password.
+
+* **Example**
+ ```
+ curl \
+ -H "Content-Type: application/json" \
+ -H "Authorization: someGeneratedKeyValue" \
+ https://concord.example.com/api/v1/apikey
+ ```
diff --git a/docs/src/api/checkpoint.md b/docs/src/api/checkpoint.md
new file mode 100644
index 0000000000..c3db33c8ae
--- /dev/null
+++ b/docs/src/api/checkpoint.md
@@ -0,0 +1,72 @@
+# Checkpoint
+
+The checkpoint API can be used to list and restore
+[checkpoints created in a flow](../processes-v1/flows.md#checkpoints).
+
+- [List Checkpoints](#list-checkpoints)
+- [Restore a Process](#restore-a-process)
+
+
+
+## List Checkpoints
+
+You can access a list of all checkpoints for a specific process, identified by
+the `id`, with the REST API.
+
+* **URI** `/api/v1/process/{id}/checkpoint`
+* **Method** `GET`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ none
+
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {
+ "id": "...",
+ "name": "...",
+ "createdAt": "..."
+ },
+ {
+ "id": "...",
+ "name": "...",
+ "createdAt": "..."
+ },
+ ...
+ ]
+ ```
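+* **Example**: list checkpoints of a process, assuming the example host used
+  elsewhere in these docs and a hypothetical process ID:
+  ```
+  curl -u myLdapUser \
+    https://concord.example.com/api/v1/process/5285f431-3551-4467-ad31-b43e9693eaab/checkpoint
+  ```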
+
+
+
+## Restore a Process
+
+You can restore a process state from a named checkpoint of a specific process
+using the process identifier in the URL and the checkpoint identifier in the
+body.
+
+* **URI** `/api/v1/process/{id}/checkpoint/restore`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "id": "..."
+ }
+ ```
+
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
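+* **Example**: restore a process from a checkpoint, assuming the example host
+  used elsewhere in these docs; substitute your own process ID and a checkpoint
+  ID taken from the list response:
+  ```
+  curl -u myLdapUser \
+    -X POST \
+    -H "Content-Type: application/json" \
+    -d '{ "id": "..." }' \
+    https://concord.example.com/api/v1/process/5285f431-3551-4467-ad31-b43e9693eaab/checkpoint/restore
+  ```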
diff --git a/docs/src/api/form.md b/docs/src/api/form.md
new file mode 100644
index 0000000000..041c04985d
--- /dev/null
+++ b/docs/src/api/form.md
@@ -0,0 +1,151 @@
+# Form
+
+The REST API provides a number of operations to work with
+Concord [forms](../getting-started/forms.md):
+
+- [List Current Forms](#list-current-forms)
+- [Get Form Data](#get-form-data)
+- [Submit JSON Data as Form Values](#submit-json-data-as-form-values)
+- [Submit Multipart Data as Form Values](#submit-multipart-data-as-form-values)
+
+## List Current Forms
+
+Returns a list of currently available forms for a specific process.
+
+* **URI** `/api/v1/process/${instanceId}/form`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Parameters**
+ ID of a process: `${instanceId}`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ { "name": "myForm", "custom": false, ... },
+ { "name": "myOtherForm", ... },
+ ]
+ ```
+
+## Get Form Data
+
+Returns data of a form, including the form's fields and their values.
+
+* **URI** `/api/v1/process/${instanceId}/form/${formName}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Parameters**
+ ID of a process: `${instanceId}`
+ Name of the form: `${formName}`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "processInstanceId": "...",
+ "name": "myForm",
+ ...,
+ "fields": [
+ { "name": "...", "type": "..." }
+ ]
+ }
+ ```
+
+## Submit JSON Data as Form Values
+
+Submits the provided JSON data as form values. The process resumes if the data
+passes the validation.
+
+* **URI** `/api/v1/process/${instanceId}/form/${formName}`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "myField": "myValue",
+ ...
+ }
+ ```
+  A JSON object where keys must match the form's field names.
+
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
+* **Validation errors response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": false,
+ "errors": {
+ "myField": "..."
+ }
+ }
+ ```
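+* **Example**: submit a single hypothetical field `myField` as JSON, using the
+  example host and instance ID from the multipart example below:
+  ```
+  curl -u myLdapUser \
+    -X POST \
+    -H "Content-Type: application/json" \
+    -d '{ "myField": "myValue" }' \
+    https://concord.example.com/api/v1/process/361bec22-14eb-4063-a26d-0eb7e6d4654e/form/myForm
+  ```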
+
+## Submit Multipart Data as Form Values
+
+Submits the provided `multipart/form-data` request as form values. The process
+resumes if the data passes the validation. This endpoint can be used to submit
+`file` fields (upload a file).
+
+Note the `multipart` extension in the endpoint's URL.
+
+* **URI** `/api/v1/process/${instanceId}/form/${formName}/multipart`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: multipart/form-data`
+* **Body**
+ A `multipart/form-data` body where each part corresponds to one of
+ the form's fields.
+
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
+* **Validation errors response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": false,
+ "errors": {
+ "myField": "..."
+ }
+ }
+ ```
+* **Example**
+ ```
+ curl -i -H "Authorization: ..." \
+ -F myValue=abc \
+ -F myFile=@form.yml \
+ https://concord.example.com/api/v1/process/361bec22-14eb-4063-a26d-0eb7e6d4654e/form/myForm/multipart
+ ```
diff --git a/docs/src/api/index.md b/docs/src/api/index.md
new file mode 100644
index 0000000000..607a97a461
--- /dev/null
+++ b/docs/src/api/index.md
@@ -0,0 +1,4 @@
+# Concord API
+
+The Concord API is REST-based and allows you to interact with all functionality
+of the server.
diff --git a/docs/src/api/json-store.md b/docs/src/api/json-store.md
new file mode 100644
index 0000000000..0c7be82942
--- /dev/null
+++ b/docs/src/api/json-store.md
@@ -0,0 +1,461 @@
+# JSON Store
+
+The API for working with Concord [JSON Stores](../getting-started/json-store.md),
+the data in stores and named queries.
+
+- [JSON Stores](#json-stores)
+  - [Create or Update a JSON Store](#create-or-update-a-json-store)
+  - [Get a JSON Store](#get-a-json-store)
+  - [Delete a JSON Store](#delete-a-json-store)
+  - [List Stores](#list-stores)
+  - [Get Current Capacity for a JSON Store](#get-current-capacity-for-a-json-store)
+  - [List Current Access Rules](#list-current-access-rules)
+  - [Update Access Rules](#update-access-rules)
+- [Items](#items)
+  - [Create or Update an Item](#create-or-update-an-item)
+  - [Get an Item](#get-an-item)
+  - [List Items](#list-items)
+  - [Delete an Item](#delete-an-item)
+- [Queries](#queries)
+  - [Create or Update a Query](#create-or-update-a-query)
+  - [Get a Query](#get-a-query)
+  - [List Queries](#list-queries)
+  - [Delete a Query](#delete-a-query)
+  - [Execute a Query](#execute-a-query)
+
+## JSON Stores
+
+
+
+### Create or Update a JSON Store
+
+Creates or updates a JSON Store with the specified parameters.
+
+* **URI** `/api/v1/org/{orgName}/jsonstore`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "id": "...",
+ "name": "myStore",
+ "visibility": "PRIVATE",
+ "owner": {
+ "id": "...",
+ "username": "...",
+ "userDomain": "...",
+ "userType": "..."
+ }
+ }
+ ```
+ All parameters except `name` are optional.
+
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "CREATED",
+ "ok": true,
+ "id": "..."
+ }
+ ```
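+* **Example**: create a private store named `myStore`, assuming the example
+  host and organization used elsewhere in these docs:
+  ```
+  curl -u myLdapUser \
+    -X POST \
+    -H "Content-Type: application/json" \
+    -d '{ "name": "myStore", "visibility": "PRIVATE" }' \
+    https://concord.example.com/api/v1/org/myOrg/jsonstore
+  ```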
+
+
+
+### Get a JSON Store
+
+Returns a previously created JSON store configuration.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "orgId": "...",
+ "orgName": "...",
+ "id": "...",
+ "name": "myStore",
+ "visibility": "PRIVATE",
+ "owner": {
+ "id": "...",
+ "username": "...",
+ "userDomain": "...",
+ "userType": "..."
+ }
+ }
+ ```
+
+
+
+### Delete a JSON Store
+
+Removes an existing JSON store and all its data and associated queries.
+
+**Warning:** the operation is irreversible.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "DELETED"
+ }
+ ```
+
+
+
+### List Stores
+
+Lists all existing JSON stores for the specified organization.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore`
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ { "orgId": "...", "orgName": "...", "id": "...", "name": "...", "visibility": "...", "owner": { ... } },
+ ...
+ ]
+ ```
+
+
+
+### Get Current Capacity for a JSON Store
+
+Returns the current capacity for a specified JSON store. The `size` parameter
+is the size of all items in the store and the `maxSize` is the maximum allowed
+size of the store (`-1` if unbounded).
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/capacity`
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "size": "...",
+ "maxSize": "..."
+ }
+ ```
+
+
+
+### List Current Access Rules
+
+Returns the store's current [access rules](../getting-started/orgs.md#teams).
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/access`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {"teamId": "...", "orgName": "...", "teamName": "...", "level": "..."},
+ ...
+ ]
+ ```
+
+
+
+### Update Access Rules
+
+Updates the store's [access rules](../getting-started/orgs.md#teams) for a
+specific team.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/access`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "teamId": "...",
+ "orgName": "...",
+ "teamName": "...",
+ "level": "..."
+ }
+ ```
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
+
+
+
+## Items
+
+### Create or Update an Item
+
+Creates or updates a JSON store item.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/item/${itemPath}`
+* **Path parameters**
+  - `itemPath`: a unique identifier for the item; can contain path
+    separators (e.g. `dir1/dir2/item`)
+* **Method** `PUT`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ any valid JSON object:
+
+ ```json
+ {
+ ...
+ }
+ ```
+* **Success Response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
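+* **Example**: store a small JSON document under the hypothetical path
+  `dir1/dir2/item` in `myStore`:
+  ```
+  curl -u myLdapUser \
+    -X PUT \
+    -H "Content-Type: application/json" \
+    -d '{ "x": 1 }' \
+    https://concord.example.com/api/v1/org/myOrg/jsonstore/myStore/item/dir1/dir2/item
+  ```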
+
+
+
+
+### Get an Item
+
+Returns a previously created JSON store item.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/item/${itemPath}`
+* **Path parameters**
+  - `itemPath`: the item's identifier.
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+  a valid JSON document.
+
+
+
+### List Items
+
+Lists items in the specified JSON store.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/item?offset=${offset}&limit=${limit}&filter=${filter}`
+* **Query parameters**
+ - `limit`: maximum number of records to return;
+ - `offset`: starting index from which to return;
+ - `filter`: filters items by name (substring match, case-insensitive).
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ "item1",
+ "item2",
+ ...
+ ]
+ ```
+
+
+
+### Delete an Item
+
+Removes an item from the specified JSON store.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/item/${itemPath}`
+* **Path parameters**
+  - `itemPath`: the item's identifier.
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "DELETED"
+ }
+ ```
+
+## Queries
+
+
+
+### Create or Update a Query
+
+Creates a new or updates an existing [named query](../getting-started/json-store.md#named-queries).
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/query`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "name": "myQuery",
+ "text": "select from ..."
+ }
+ ```
+* **Success Response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "CREATED"
+ }
+ ```
+
+
+
+### Get a Query
+
+Returns a previously created [named query](../getting-started/json-store.md#named-queries).
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/query/${queryName}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "storeId": "...",
+ "id": "...",
+ "name": "...",
+ "text": "..."
+ }
+ ```
+
+
+
+### List Queries
+
+Lists [named queries](../getting-started/json-store.md#named-queries) in
+the specified JSON store.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/query?offset=${offset}&limit=${limit}&filter=${filter}`
+* **Query parameters**
+ - `limit`: maximum number of records to return;
+ - `offset`: starting index from which to return;
+ - `filter`: filters queries by name (substring match, case-insensitive).
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ { "name": "...", "text": "..." },
+ ...
+ ]
+ ```
+
+
+
+### Delete a Query
+
+Removes a [named query](../getting-started/json-store.md#named-queries) from
+the specified JSON store.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/query/${queryName}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "DELETED"
+ }
+ ```
+
+
+
+### Execute a Query
+
+Executes a previously created query using the submitted body as the query's
+parameter. Returns a list of rows.
+
+* **URI** `/api/v1/org/${orgName}/jsonstore/${storeName}/query/${queryName}/exec`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ any valid JSON object.
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ ...
+ ]
+ ```
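+* **Example**: execute the hypothetical query `myQuery`, passing a JSON object
+  as the query's parameter (the parameter name is illustrative):
+  ```
+  curl -u myLdapUser \
+    -X POST \
+    -H "Content-Type: application/json" \
+    -d '{ "someParam": "someValue" }' \
+    https://concord.example.com/api/v1/org/myOrg/jsonstore/myStore/query/myQuery/exec
+  ```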
diff --git a/docs/src/api/node-roster.md b/docs/src/api/node-roster.md
new file mode 100644
index 0000000000..36ac36de2c
--- /dev/null
+++ b/docs/src/api/node-roster.md
@@ -0,0 +1,145 @@
+# Node Roster
+
+[Node Roster](../getting-started/node-roster.md) provides access to data
+gathered during [Ansible](../plugins/ansible.md) playbook executions.
+
+- [Hosts](#hosts)
+ - [List Hosts With An Artifact](#list-hosts-with-an-artifact)
+ - [Processes Which Deployed to a Host](#processes-which-deployed-to-a-host)
+- [Facts](#facts)
+ - [Latest Host Facts](#latest-host-facts)
+- [Artifacts](#artifacts)
+ - [Deployed Artifacts](#deployed-artifacts)
+
+## Hosts
+
+### List Hosts With An Artifact
+
+Returns a paginated list of all hosts on which the specified artifact was
+deployed.
+
+* **URI** `/api/v1/noderoster/hosts?artifact=${artifactPattern}&offset=${offset}&limit=${limit}`
+* **Query parameters**
+ - `artifact`: regex, the artifact's URL pattern;
+ - `limit`: maximum number of records to return;
+ - `offset`: starting index from which to return.
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [ {
+ "id" : "d18f60ec-4804-11ea-9e99-0242ac110003",
+ "name" : "hostb",
+ "createdAt" : "2020-02-05T10:46:52.112Z",
+ "artifactUrl" : "http://localhost:57675/test.txt"
+ }, {
+ "id" : "d18eeb8a-4804-11ea-9e99-0242ac110003",
+ "name" : "hosta",
+ "createdAt" : "2020-02-05T10:46:52.109Z",
+ "artifactUrl" : "http://localhost:57675/test.txt"
+ } ]
+ ```
+
+  The result is a list of hosts with artifact URLs matching the supplied
+  `artifactPattern`.
+
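+* **Example**: find hosts that received any artifact named `test.txt`,
+  assuming the example host used elsewhere in these docs:
+  ```
+  curl -u myLdapUser \
+    "https://concord.example.com/api/v1/noderoster/hosts?artifact=.*test.txt&limit=10"
+  ```
+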
+### Processes Which Deployed to a Host
+
+Returns a (paginated) list of processes that touched the specified host.
+
+* **URI** `/api/v1/noderoster/processes?hostName=${hostName}&hostId=${hostId}&offset=${offset}&limit=${limit}`
+* **Query parameters**
+ - `hostName`: name of the host;
+ - `hostId`: ID of the host;
+ - `limit`: maximum number of records to return;
+ - `offset`: starting index from which to return.
+
+ Either `hostName` or `hostId` must be specified.
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [ {
+ "instanceId" : "5285f431-3551-4467-ad31-b43e9693eaab",
+ "createdAt" : "2020-02-03T20:32:07.276Z",
+ "initiatorId" : "230c5c9c-d9a7-11e6-bcfd-bb681c07b26c",
+ "initiator" : "admin"
+ } ]
+ ```
+
+## Facts
+
+### Latest Host Facts
+
+Returns the latest registered
+[Ansible facts](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts)
+for the specified host.
+
+* **URI** `/api/v1/noderoster/facts/last?hostName=${hostName}&hostId=${hostId}`
+* **Query parameters**
+  - `hostName`: name of the host;
+  - `hostId`: ID of the host.
+
+ Either `hostName` or `hostId` must be specified.
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ ...
+ }
+ ```
+
+## Artifacts
+
+### Deployed Artifacts
+
+Returns a (paginated) list of known artifacts deployed to the specified host.
+
+* **URI** `/api/v1/noderoster/artifacts?hostName=${hostName}&hostId=${hostId}&offset=${offset}&limit=${limit}`
+* **Query parameters**
+ - `hostName`: name of the host;
+ - `hostId`: ID of the host;
+ - `limit`: maximum number of records to return;
+ - `offset`: starting index from which to return.
+
+ Either `hostName` or `hostId` must be specified.
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [ {
+ "url" : "http://localhost:53705/test.txt",
+ "processInstanceId" : "5285f431-3551-4467-ad31-b43e9693eaab"
+ } ]
+ ```
diff --git a/docs/src/api/org.md b/docs/src/api/org.md
new file mode 100644
index 0000000000..ab074e745e
--- /dev/null
+++ b/docs/src/api/org.md
@@ -0,0 +1,134 @@
+# Organization
+
+An Organization owns projects, repositories, inventories, secrets and teams.
+
+The REST API provides support for working with organizations:
+
+- [Create an Organization](#create-an-organization)
+- [Update an Organization](#update-an-organization)
+- [Delete an Organization](#delete-an-organization)
+- [List Organizations](#list-organizations)
+
+
+
+## Create an Organization
+
+Creates a new organization with specified parameters.
+
+Only administrators can create new organizations.
+
+* **URI** `/api/v1/org`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "name": "myOrg"
+ }
+ ```
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "CREATED",
+ "ok": true,
+ "id": "..."
+ }
+ ```
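+* **Example**: create `myOrg`, assuming an administrator account (substitute
+  your own credentials and host):
+  ```
+  curl -u myAdminUser \
+    -X POST \
+    -H "Content-Type: application/json" \
+    -d '{ "name": "myOrg" }' \
+    https://concord.example.com/api/v1/org
+  ```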
+
+
+
+## Update an Organization
+
+Updates parameters of an existing organization.
+
+* **URI** `/api/v1/org`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "name": "new name",
+      "id": "..."
+ }
+ ```
+  The organization's `id` is mandatory when updating the organization's `name`.
+
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "UPDATED",
+ "ok": true,
+ "id": "..."
+ }
+ ```
+
+
+
+## Delete an Organization
+
+Removes an existing organization and all resources associated with it
+(projects, secrets, teams, etc). This operation is irreversible.
+
+Only administrators can delete organizations.
+
+* **URI** `/api/v1/org/${orgName}?confirmation=yes`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "DELETED",
+ "ok": true
+ }
+ ```
+
+
+
+## List Organizations
+
+Lists all available organizations.
+
+* **URI** `/api/v1/org?onlyCurrent=${onlyCurrent}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Parameters**
+ If the `${onlyCurrent}` parameter is `true`, then the server will
+ return the list of the current user's organizations. Otherwise,
+ all organizations will be returned.
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {
+ "id": "...",
+ "name": "..."
+ },
+ {
+ "id": "...",
+ "name": "..."
+ }
+ ]
+ ```
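+* **Example**: list only the current user's organizations:
+  ```
+  curl -u myLdapUser \
+    "https://concord.example.com/api/v1/org?onlyCurrent=true"
+  ```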
diff --git a/docs/src/api/policy.md b/docs/src/api/policy.md
new file mode 100644
index 0000000000..c70c7e019f
--- /dev/null
+++ b/docs/src/api/policy.md
@@ -0,0 +1,161 @@
+# Policy
+
+[Policies](../getting-started/policies.md) control various aspects of process
+execution.
+
+The REST API provides support for working with policies:
+
+- [Create or Update a Policy](#create-or-update-a-policy)
+- [Get a Policy](#get-a-policy)
+- [Remove a Policy](#remove-a-policy)
+- [Link a Policy](#link-a-policy)
+- [Unlink a Policy](#unlink-a-policy)
+
+
+
+## Create or Update a Policy
+
+Creates a new policy or updates an existing one. Requires administrator
+privileges.
+
+* **URI** `/api/v2/policy`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "name": "myPolicy",
+ "parentId": "...",
+ "rules": {
+ ...policy document...
+ }
+ }
+ ```
+
+ - `name` - the policy's name;
+ - `parentId` - optional, ID of a parent policy;
+ - `rules` - the policy's rules, see the
+ [Policies](../getting-started/policies.md) document.
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "CREATED",
+ "ok": true,
+ "id": "..."
+ }
+ ```
+
+
+
+## Get a Policy
+
+Returns an existing policy.
+
+* **URI** `/api/v2/policy/${name}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "name": "myPolicy",
+ "parentId": "...",
+ "rules": {
+ ...policy document...
+ }
+ }
+ ```
+
+
+
+## Remove a Policy
+
+Deletes an existing policy.
+
+* **URI** `/api/v2/policy/${name}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "DELETED",
+ "ok": true
+ }
+ ```
+
+
+
+## Link a Policy
+
+Links an existing policy to an organization, project or a specific user.
+
+* **URI** `/api/v2/policy/${name}/link`
+* **Method** `PUT`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "orgName": "myOrg",
+ "projectName": "myProject",
+ "userName": "someUser"
+ }
+ ```
+
+ All parameters are optional. If all parameters are omitted (or `null`) then
+ the policy becomes a system-wide policy.
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "UPDATED",
+ "ok": true
+ }
+ ```
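+* **Example**: link `myPolicy` to the hypothetical project `myProject` in
+  `myOrg` (an administrator account is assumed):
+  ```
+  curl -u myAdminUser \
+    -X PUT \
+    -H "Content-Type: application/json" \
+    -d '{ "orgName": "myOrg", "projectName": "myProject" }' \
+    https://concord.example.com/api/v2/policy/myPolicy/link
+  ```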
+
+
+
+## Unlink a Policy
+
+Unlinks an existing policy from an organization, project or a specific user.
+
+* **URI** `/api/v2/policy/${name}/link?orgName=${orgName}&projectName=${projectName}&userName=${userName}`
+* **Query parameters**
+ All parameters are optional. If all parameters are omitted then the system
+ link is removed.
+* **Method** `DELETE`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "DELETED",
+ "ok": true
+ }
+ ```
diff --git a/docs/src/api/process.md b/docs/src/api/process.md
new file mode 100644
index 0000000000..538a8b2a20
--- /dev/null
+++ b/docs/src/api/process.md
@@ -0,0 +1,500 @@
+# Process
+
+A process is an execution of a flow in a repository of a project.
+
+The REST API provides support for a number of operations:
+
+- [Start a Process](#start)
+ - [Form data](#form-data)
+ - [ZIP File](#zip-file)
+ - [Browser](#browser)
+- [Stop a Process](#stop)
+- [Getting Status of a Process](#status)
+- [Retrieve a Process Log](#log)
+- [Download an Attachment](#download-attachment)
+- [List Processes](#list)
+- [Count Processes](#count)
+- [Resume a Process](#resume)
+- [Process Events](#process-events)
+ - [List events](#list-events)
+
+
+
+## Start a Process
+
+The best approach to start a [process](../getting-started/processes.md)
+manually is to execute a flow defined in the Concord file in a [repository of an
+existing project using the Concord Console](../console/repository.md).
+
+Alternatively you can create a [ZIP file with the necessary content](#zip-file)
+and submit it for execution.
+
+For simple user interaction with flows that include forms, a process can also be
+started [in a browser directly](#browser), for example via a link in an
+email, in online documentation, or in any web application.
+
+The following provides the complete API information. The endpoint starts
+a new process using the provided files as request data and accepts multiple
+additional files, which are placed into the process' working directory.
+
+* **URI** `/api/v1/process`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: multipart/form-data`
+* **Body**
+ Multipart binary data.
+
+ The values will be interpreted depending on their name:
+ - `activeProfiles` - a comma-separated list of profiles to use;
+ - `archive` - ZIP archive, will be extracted into the process'
+ working directory;
+ - `request` - JSON file, will be used as the process' parameters
+ (see [examples](#examples) below);
+ - any value of `application/octet-stream` type - will be copied
+ as a file into the process' working directory;
+ - `orgId` or `org` - ID or name of the organization which
+ "owns" the process;
+ - `projectId` or `project` - ID or name of the project
+ which will be used to run the process;
+ - `repoId` or `repo` - ID or name of the repository which
+ will be used to run the process;
+ - `repoBranchOrTag` - overrides the configured branch or tag name
+ of the project's repository;
+ - `repoCommitId` - overrides the configured GIT commit ID of the
+ project's repository;
+ - `entryPoint` - name of the starting flow;
+ - `out` - list of comma-separated names of variables that will be
+ saved after the process finishes. Such variables can be retrieved
+ later using the [status](#status) request;
+ - `startAt` - ISO-8601 date-time value. If specified, the process
+ will be scheduled to run on the specified date and time. Can't be
+ in the past. Time offset (e.g. `Z`, `-06:00`) is required;
+ - any other value of `text/plain` type - will be used as a process'
+ parameter. Nested values can be specified using `.` as the
+ delimiter;
+ - any other value will be saved as a file in the process' working
+ directory.
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "instanceId" : "0c8fdeca-5158-4781-ac58-97e34b9a70ee",
+ "ok" : true
+ }
+ ```
+
+### Examples
+
+The following example invocation triggers the `default` flow in the `default`
+repository of `myProject` in the `myOrg` organization without further
+parameters:
+
+```
+curl -i -F org=myOrg -F project=myProject -F repo=default https://concord.example.com/api/v1/process
+```
+
+Use `-i` or `-v` to see the server's reply in case of errors.
+
+You can specify a different starting flow, e.g. `main`, for the same `default`
+repository of `myProject`:
+
+```
+curl ... -F entryPoint=main https://concord.example.com/api/v1/process
+```
+
+Passing arguments:
+
+```
+curl ... -F arguments.x=123 https://concord.example.com/api/v1/process
+```
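Dotted parameter names such as `arguments.x` are expanded into nested objects; the convention can be sketched as follows (an illustration of the behavior, not Concord's actual implementation):

```python
def unflatten(form_fields):
    """Expand dotted multipart field names (e.g. "arguments.x") into
    nested objects, mirroring how Concord interprets text/plain values."""
    result = {}
    for dotted_key, value in form_fields.items():
        node = result
        *parents, leaf = dotted_key.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return result

params = unflatten({"arguments.x": "123", "arguments.nested.y": "boo"})
```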
+
+Note that all arguments passed this way are `String` values. If you wish to
+pass other types of values you can use JSON files:
+
+```
+curl ... -F request=@values.json https://concord.example.com/api/v1/process
+```
+
+or using curl's inline syntax:
+
+```
+curl ... -F request='{"arguments": {"x": true}};type=application/octet-stream' https://concord.example.com/api/v1/process
+```
+
+Scheduling an execution:
+
+```
+curl ... -F startAt='2018-03-15T15:25:00-05:00' https://concord.example.com/api/v1/process
+```
+
+You can also upload and run a `concord.yml` file without creating a Git
+repository or a payload archive:
+
+```
+curl ... -F concord.yml=@concord.yml https://concord.example.com/api/v1/process
+```
+
+
+
+### Form Data
+
+Concord accepts `multipart/form-data` requests to start a process.
+Special variables such as `arguments`, `archive`, `out`, `activeProfiles`, etc
+are automatically configured. Other submitted data of format `text/plain` is
+used to configure variables. All other information is stored as a file in the
+process' working directory.
+
+However, if a user tries to upload a `.txt` file
+
+```
+curl ... -F myFile.txt=@myFile.txt -F archive=@target/payload.zip \
+ https://concord.example.com/api/v1/process
+```
+
+then curl uses the `Content-Type: text/plain` header and Concord stores the
+content as a configuration variable instead of a file.
+
+As a workaround you can specify the content type of the field explicitly:
+
+```
+curl ... \
+-F "myFile.txt=@myFile.txt;type=application/octet-stream" \
+-F archive=@target/payload.zip \
+https://concord.example.com/api/v1/process
+```
+
+
+
+### ZIP File
+
+If no project exists in Concord, a ZIP file with flow definition and related
+resources can be submitted to Concord for execution. Typically this is only
+suggested for development, testing, or one-off process executions.
+
+Follow these steps:
+
+Create a zip archive e.g. named `archive.zip` containing the Concord file -
+a single `concord.yml` file in the root of the archive:
+
+```yaml
+flows:
+ default:
+ - log: "Hello Concord User"
+```
+
+The format is described in
+[Directory Structure](../processes-v1/index.md#directory-structure) document.
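The archive can be produced with any ZIP tool; for example, a minimal sketch using Python's standard library:

```python
import zipfile

# concord.yml must be placed at the root of the archive
with zipfile.ZipFile("archive.zip", "w") as zf:
    zf.writestr(
        "concord.yml",
        'flows:\n  default:\n    - log: "Hello Concord User"\n',
    )

with zipfile.ZipFile("archive.zip") as zf:
    assert zf.namelist() == ["concord.yml"]
```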
+
+Now you can submit the archive directly to the Process REST endpoint of Concord
+with the admin authorization or your user credentials as described in our
+[getting started example](../getting-started/):
+
+```
+curl -F archive=@archive.zip http://concord.example.com/api/v1/process
+```
+
+The response should look like:
+
+```json
+{
+ "instanceId" : "a5bcd5ae-c064-4e5e-ac0c-3c3d061e1f97",
+ "ok" : true
+}
+```
+
+
+
+### Browser Link
+
+You can start a new process in Concord by simply accessing a URL in a browser.
+
+Clicking the link requires the user to log in to the Concord Console, and then
+starts the process with the specified parameters. Progress is indicated in the
+user interface showing the process ID and the initiator. After completion a link
+to the process is displayed, so the user can get more information. If a form is
+used in the flow, the progress view is replaced with the form and further steps
+can include additional forms, which also show up in the browser.
+
+* **URI** `/api/v1/org/{orgName}/project/{projectName}/repo/{repoName}/start/{entryPoint}`
+* **Method** `GET`
+* **Headers** none
+* **Required Parameters**
+ - orgName - name of the organization in Concord
+ - projectName - name of the project in Concord
+ - repoName - name of the repository in the project
+ - entryPoint - name of the entryPoint to use
+* **Optional Parameters**
+ - activeProfiles - comma-separated list of profiles to activate
+ - arguments - process arguments can be supplied using the `arguments.` prefix
+* **Body**
+ none
+* **Success response**
+ Redirects a user to a form or an intermediate page or a results page that
+ allows access to the process log.
+* **Examples**
+ - Minimal: `/api/v1/org/Default/project/test-project/repo/test-repo/start/default`
+ - Different flow _main_: `/api/v1/org/Default/project/test-project/repo/test-repo/start/main`
+ - Specific profile: `/api/v1/org/Default/project/test-project/repo/test-repo/start/default?activeProfiles=dev`
+ - Passing process arguments: `/api/v1/org/Default/project/test-project/repo/test-repo/start/default?arguments.x=123&arguments.y=boo`
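Such links can be assembled programmatically; a sketch using Python's `urllib` (URL construction only, no server interaction):

```python
from urllib.parse import urlencode

def start_url(org, project, repo, entry_point, params=None):
    """Build a browser-start link; `params` become query parameters
    such as activeProfiles or dotted arguments."""
    base = (f"/api/v1/org/{org}/project/{project}"
            f"/repo/{repo}/start/{entry_point}")
    return base + ("?" + urlencode(params) if params else "")

url = start_url("Default", "test-project", "test-repo", "default",
                {"activeProfiles": "dev", "arguments.x": "123"})
```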
+
+
+
+
+## Stop a Process
+
+Forcefully stops the process.
+
+* **URI** `/api/v1/process/${instanceId}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Parameters**
+ ID of a process: `${instanceId}`
+* **Body**
+ none
+* **Success response**
+ Empty body.
+
+
+
+## Getting the Status of a Process
+
+Returns the current status of a process.
+
+**Note:** this is a `v2` endpoint.
+
+* **URI** `/api/v2/process/${instanceId}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Parameters**
+ ID of a process: `${instanceId}`
+* **Query parameters**
+ - `include`: additional entries to return (`checkpoints`, `childrenIds`,
+ `history`), repeat the parameter to include multiple additional entries;
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "instanceId" : "45beb7c7-6aa2-40e4-ba1d-488f78700ab7",
+ "parentInstanceId" : "b82bb6c7-f184-405e-ae08-68b62125c8be",
+ "projectName" : "myProject2",
+ "createdAt" : "2017-07-19T16:31:39.331+0000",
+ "initiator" : "admin",
+ "lastUpdatedAt" : "2017-07-19T16:31:40.493+0000",
+ "status" : "FAILED",
+ "childrenIds":["d4892eab-f75d-43a2-bb26-20903ffa10d8","be79ee81-78db-4afa-b207-d361a417e892","d5a35c8f-faba-4b9d-b957-ca9c31bf2a39"]
+ }
+ ```
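Clients commonly poll this endpoint until the process reaches a terminal status; a sketch (the status set and the `fetch_status` callable are assumptions for illustration, not part of the API):

```python
# common terminal statuses; check your server's status reference
FINAL_STATUSES = {"FINISHED", "FAILED", "CANCELLED", "TIMED_OUT"}

def wait_for(fetch_status):
    """Poll until a terminal status is reached; `fetch_status` stands in
    for the actual GET /api/v2/process/{instanceId} call (add a delay
    between polls in real code)."""
    while True:
        status = fetch_status()["status"]
        if status in FINAL_STATUSES:
            return status

responses = iter([{"status": "RUNNING"}, {"status": "FAILED"}])
result = wait_for(lambda: next(responses))
```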
+
+
+
+## Retrieve a Process Log
+
+Downloads the log file of a process.
+
+* **URI** `/api/v1/process/${instanceId}/log`
+* **Method** `GET`
+* **Headers** `Authorization`, `Range`
+* **Parameters**
+ ID of a process: `${instanceId}`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: text/plain
+ ```
+
+ The log file.
+
+* **Example**
+ ```
+ curl -H "Authorization: ..." -H "Range: ${startByte}-${endByte}" \
+ http://concord.example.com/api/v1/process/${instanceId}/log
+ ```
+
+
+
+## Download an Attachment
+
+Downloads a process' attachment.
+
+* **URI** `/api/v1/process/${instanceId}/attachment/${attachmentName}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/octet-stream
+ ```
+
+ ```
+ ...data...
+ ```
+
+
+
+## List Processes
+
+Retrieve a list of processes.
+
+**Note:** this is a `v2` endpoint.
+
+* **URI** `/api/v2/process`
+* **Query parameters**
+ - `orgId`: filter by the organization's ID;
+ - `orgName`: filter by the organization's name;
+ - `projectId`: filter by the project's ID;
+ - `projectName`: filter by the project's name, requires `orgId` or
+ `orgName`;
+ - `afterCreatedAt`: limit by date, ISO-8601 string with time offset;
+ - `beforeCreatedAt`: limit by date, ISO-8601 string with time offset;
+ - `tags`: filter by a tag, repeat the parameter to filter by multiple tags;
+ - `status`: filter by the process status;
+ - `initiator`: filter by the initiator's username (starts with the
+ specified string);
+ - `parentInstanceId`: filter by the parent's process ID;
+ - `include`: additional entries to return (`checkpoints`, `childrenIds`,
+ `history`), repeat the parameter to include multiple additional entries;
+ - `limit`: maximum number of records to return;
+ - `offset`: starting index from which to return;
+ - `meta.[paramName][.operation]`: filter by the process metadata's value
+ `paramName` using the specified comparison `operation`. Supported
+ operations:
+ - `eq`, `notEq` - equality check;
+ - `contains`, `notContains` - substring search;
+ - `startsWith`, `notStartsWith` - beginning of the string match;
+ - `endsWith`, `notEndsWith` - end of the string match.
+ If the operator is omitted, the default `contains` mode is used.
+ Metadata filters require `projectId` or `orgName` and `projectName` to be
+ specified.
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ { "instanceId": "...", "status": "...", ... },
+ { "instanceId": "...", ... }
+ ]
+ ```
+* **Example**
+ ```
+curl -H "Authorization: ..." \
+'http://concord.example.com/api/v2/process?orgName=myOrg&projectName=myProject&meta.myMetaVar.startsWith=Hello&afterCreatedAt=2020-08-12T00:00:00.000Z'
+ ```
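The metadata filter keys can be composed programmatically; a sketch matching the example request above:

```python
from urllib.parse import urlencode

def meta_filter(param, value, operation=None):
    """Build a meta.[paramName][.operation] query pair; omitting the
    operation falls back to the server-side default, "contains"."""
    key = f"meta.{param}" + (f".{operation}" if operation else "")
    return {key: value}

query = {"orgName": "myOrg", "projectName": "myProject"}
query.update(meta_filter("myMetaVar", "Hello", "startsWith"))
query_string = urlencode(query)
```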
+
+
+
+## Count Processes
+
+Returns a total number of processes using the specified filters.
+
+**Note:** this is a `v2` endpoint.
+
+* **URI** `/api/v2/process/count`
+* **Query parameters**
+ Same as the [list](#list) method. A `projectId` or a combination of
+ `orgName` and `projectName` is required. Not supported: `limit`,
+ `offset`, `include`.
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ 12
+ ```
+
+
+
+## Resume a Process
+
+Resume a previously `SUSPENDED` process.
+
+**Note:** usually Concord suspends and resumes processes automatically, e.g.
+when forms or suspendable tasks are used. The resume API can be used for custom
+integrations or for special use cases.
+
+* **URI** `/api/v1/process/${instanceId}/resume/${eventName}`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Parameters**
+ ID of a process: `${instanceId}`
+ Event name: `${eventName}` - must match the event name created when
+ the process was suspended.
+* **Body**
+ a JSON object. Must match the process `configuration` format:
+ ```json
+ {
+ "arguments": {
+ "x": 123
+ }
+ }
+ ```
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
+
+## Process Events
+
+### List Events
+
+Retrieve a list of events.
+
+* **URI** `/api/v1/process/${instanceId}/event`
+* **Parameters**
+ ID of a process: `${instanceId}`
+* **Query parameters**
+ - `type`: event type;
+ - `after`: limit by date, ISO-8601 string with time offset;
+ - `eventCorrelationId`: correlation ID of the event (e.g. a task call);
+ - `eventPhase`: for multi-phase events (e.g. a task call - `PRE` or
+ `POST`);
+ - `includeAll`: if `true` additional, potentially sensitive, data is
+ returned (e.g. task call parameters);
+ - `limit`: maximum number of records to return.
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {
+ "id" : "eba7360e-790a-11e9-a33e-fa163e7ef419",
+ "eventType" : "PROCESS_STATUS",
+ "data" : {"status": "PREPARING"},
+ "eventDate" : "2019-05-18T01:19:02.172Z"
+ }, {
+ "id" : "ebfabd24-790a-11e9-a33e-fa163e7ef419",
+ "eventType" : "PROCESS_STATUS",
+ "data" : {"status": "ENQUEUED"},
+ "eventDate" : "2019-05-18T01:19:02.720Z"
+ }
+ ]
+ ```
diff --git a/docs/src/api/project.md b/docs/src/api/project.md
new file mode 100644
index 0000000000..43768d6eca
--- /dev/null
+++ b/docs/src/api/project.md
@@ -0,0 +1,393 @@
+# Project
+
+A project is a container for one or more repositories, a [secret](./secret.md)
+for accessing the repositories and further configuration.
+
+The REST API provides support for a number of operations:
+
+- [Create a Project](#create-project)
+- [Update a Project](#update-project)
+- [Delete a Project](#delete-project)
+- [List Projects](#list-projects)
+- [Get Project Configuration](#get-project-configuration)
+- [Update Project Configuration](#update-project-configuration)
+- [List Current Access Rules](#list-current-access-rules)
+- [Update Access Rules](#update-access-rules)
+- [Bulk Update Access Rules](#bulk-update-access-rules)
+- [Move Project To Another Organization](#move-project-to-another-organization)
+- [List Project KV store entries](#list-project-kv-store-entries)
+
+
+
+## Create a Project
+
+Creates a new project with specified parameters.
+
+* **URI** `/api/v1/org/${orgName}/project`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "name": "myProject",
+ "description": "my project",
+
+ "repositories": {
+ "myRepo": {
+ "url": "...",
+ "branch": "...",
+ "path": "...",
+ "secret": "..."
+ }
+ },
+
+ "cfg": {
+ ...
+ }
+ }
+ ```
+ All parameters except `name` are optional.
+
+ The project configuration is a JSON object of the following structure:
+ ```json
+ {
+ "group1": {
+ "subgroup": {
+ "key": "value"
+ }
+ },
+ "group2": {
+ ...
+ }
+ }
+ ```
+
+ Most parameter groups are defined by the plugins in use.
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "CREATED"
+ }
+ ```
+
+
+## Update a Project
+
+Updates parameters of an existing project.
+
+* **URI** `/api/v1/org/${orgName}/project`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "name": "New name",
+ "id": "---",
+ "description": "my updated project",
+
+ "repositories": {
+ "myRepo": {
+ "url": "...",
+ "branch": "...",
+ "secret": "..."
+ }
+ },
+
+ "cfg": {
+ ...
+ }
+ }
+ ```
+ All parameters are optional.
+
+ Omitted parameters are not updated.
+
+ The project `id` is mandatory when updating the project `name`.
+
+ An empty value must be specified in order to remove a project's value:
+ e.g. an empty `repositories` object to remove all repositories from a project.
+
+ See also: [project configuration](#get-project-configuration).
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED",
+ "id": "---"
+ }
+ ```
+
+
+
+## Delete a Project
+
+Removes a project and its resources.
+
+* **URI** `/api/v1/org/${orgName}/project/${projectName}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "DELETED"
+ }
+ ```
+
+
+
+## List Projects
+
+Lists all projects in the specified organization.
+
+* **URI** `/api/v1/org/${orgName}/project`
+* **Query parameters**
+ - `sortBy`: `projectId`, `filter`;
+ - `asc`: direction of sorting, `true` - ascending, `false` - descending
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ { "name": "..." },
+ { "name": "...", "description": "my project", ... }
+ ]
+ ```
+
+
+
+## Get Project Configuration
+
+Returns project's [configuration](../getting-started/projects.md#configuration) JSON or its part.
+
+* **URI** `/api/v1/org/${orgName}/project/${projectName}/cfg/${path}`
+* **Query parameters**
+ - `path`: path to a sub-object in the configuration, can be empty
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ ...
+ }
+ ```
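The `${path}` segments address nested objects in the configuration; the lookup semantics can be sketched as follows (an illustration, not the server's implementation):

```python
def cfg_at_path(cfg, path):
    """Resolve a /-separated ${path} inside the configuration object;
    an empty path returns the whole configuration."""
    node = cfg
    for segment in filter(None, path.split("/")):
        node = node[segment]
    return node

cfg = {"group1": {"subgroup": {"key": "value"}}}
assert cfg_at_path(cfg, "group1/subgroup") == {"key": "value"}
assert cfg_at_path(cfg, "") == cfg
```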
+
+
+
+## Update Project Configuration
+
+Updates project's [configuration](../getting-started/projects.md#configuration) or its part.
+
+* **URI** `/api/v1/org/${orgName}/project/${projectName}/cfg/${path}`
+* **Query parameters**
+ - `path`: path to a sub-object in the configuration, can be empty
+* **Method** `PUT`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "group1": {
+ "param1": 123
+ }
+ }
+ ```
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
+
+## List Current Access Rules
+
+Returns project's current access rules.
+
+* **URI** `/api/v1/org/${orgName}/project/${projectName}/access`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {"teamId": "...", "level": "..."},
+ ...
+ ]
+ ```
+
+## Update Access Rules
+
+Updates project's access rules for a specific team.
+
+* **URI** `/api/v1/org/${orgName}/project/${projectName}/access`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "teamId": "9304748c-81e6-11e9-b909-0fe0967f269a",
+ "orgName": "myOrg",
+ "teamName": "myTeam",
+ "level": "READER"
+ }
+ ```
+
+ Either `teamId` or the combination of `orgName` and `teamName` is allowed.
+ The `level` parameter accepts one of the three possible values:
+ - `READER`
+ - `WRITER`
+ - `OWNER`
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
+* **Example**
+ ```
+ curl -ikn -H 'Content-Type: application/json' \
+ -d '{"orgName": "MyOrg", "teamName": "myTeam", "level": "READER"}' \
+ http://concord.example.com/api/v1/org/MyOrg/project/MyProject/access
+ ```
+
+## Bulk Update Access Rules
+
+Updates project's access rules for multiple teams.
+
+* **URI** `/api/v1/org/${orgName}/project/${projectName}/access/bulk`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [{
+ "teamId": "9304748c-81e6-11e9-b909-0fe0967f269a",
+ "orgName": "myOrg",
+ "teamName": "myTeam",
+ "level": "READER"
+ }]
+ ```
+
+ Accepts a list of access rule elements. See [the non-bulk version](#update-access-rules)
+ of this method for a description of the parameters.
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
+
+## Move Project to Another Organization
+
+Moves the project to the specified organization (identified by its name or ID).
+
+* **URI** `/api/v1/org/${orgName}/project`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "name": "myProject",
+ "orgName": "anotherOrg"
+ }
+ ```
+ Also accepts `orgId` (Unique Organization ID) instead of `orgName`
+ in the request body.
+* **Success Response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
+
+## List Project KV store entries
+
+List entries in a [Project KV store]({{ site.concord_plugins_v2_docs }}/key-value.html).
+
+* **URI** `/api/v1/org/{orgName}/project/{projectName}/kv`
+* **Headers** `Authorization`
+* **Query parameters**
+ - `filter` - filters KV items by key (substring match, case-insensitive);
+ - `limit` - number, maximum number of records to return;
+ - `offset` - number, offset of the first record, used for paging.
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [ {
+ "key" : "myKey",
+ "value" : "myValue",
+ "lastUpdatedAt" : "2021-03-29T23:22:13.334+03"
+ } ]
+ ```
diff --git a/docs/src/api/repository.md b/docs/src/api/repository.md
new file mode 100644
index 0000000000..3526b152ed
--- /dev/null
+++ b/docs/src/api/repository.md
@@ -0,0 +1,155 @@
+# Repository
+
+Concord projects have one or multiple associated repositories. The `repository`
+API supports a number of operations on these project-specific repositories:
+
+- [Create a Repository](#create-repository)
+- [Update a Repository](#update-repository)
+- [Delete a Repository](#delete-repository)
+- [Validate a Repository](#validate-repository)
+- [Refresh a Repository](#refresh-repository)
+
+
+
+## Create a Repository
+
+A new repository can be created with a POST request and the required parameters.
+
+* **URI** `/api/v1/org/{orgName}/project/{projectName}/repository`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "name": "...",
+ "url": "...",
+ "branch": "...",
+ "commitId": "...",
+ "path": "...",
+ "secretId": "..."
+ }
+ ```
+
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "CREATED",
+ "ok": true
+ }
+ ```
+
+
+
+## Update a Repository
+
+An existing repository can be updated with a POST request and the changed
+parameters.
+
+* **URI** `/api/v1/org/{orgName}/project/{projectName}/repository`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "id": "...",
+ "name": "...",
+ "url": "...",
+ "branch": "...",
+ "commitId": "...",
+ "path": "...",
+ "secretId": "..."
+ }
+ ```
+
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result" : "UPDATED",
+ "ok" : true
+ }
+ ```
+
+
+
+
+## Delete a Repository
+
+A DELETE request can be used to remove a repository.
+
+* **URI** `/api/v1/org/{orgName}/project/{projectName}/repository/{repositoryName}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "DELETED",
+ "ok": true
+ }
+ ```
+
+
+
+
+## Validate a Repository
+
+An HTTP POST request can be used to validate a Concord repository. Specifically,
+this action causes the Concord YML file to be parsed and validated with regard
+to syntax and any defined policies.
+
+* **URI** `/api/v1/org/{orgName}/project/{projectName}/repository/{repositoryName}/validate`
+* **Method** `POST`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "VALIDATED",
+ "ok": true
+ }
+ ```
+
+
+
+
+## Refresh a Repository
+
+An existing repository can be refreshed with a POST request. This causes the
+clone of the git repository stored within Concord to be updated. As a
+consequence the Concord YML file is parsed again and any changes to triggers and
+other configurations are updated.
+
+* **URI** `/api/v1/org/{orgName}/project/{projectName}/repository/{repositoryName}/refresh`
+* **Method** `POST`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "UPDATED",
+ "ok": true
+ }
+ ```
diff --git a/docs/src/api/role.md b/docs/src/api/role.md
new file mode 100644
index 0000000000..bf29e6f85f
--- /dev/null
+++ b/docs/src/api/role.md
@@ -0,0 +1,170 @@
+# Role
+
+A role is a set of rights/permissions assigned to users.
+
+The REST API provides support for working with roles:
+
+- [Create or Update a Role](#create-update)
+- [Get a Role](#get)
+- [Remove a Role](#delete)
+- [List Roles](#list)
+- [Add/Remove LDAP groups to a role](#addremove-ldap-groups-to-a-role)
+- [List LDAP groups for a role](#list-ldap-groups-for-a-role)
+
+
+
+## Create or Update a Role
+
+Creates a new role or updates an existing one. Requires administrator
+privileges.
+
+* **URI** `/api/v1/role`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "name": "myRole",
+ "permissions": [...set of permissions...]
+ }
+ ```
+
+ - `name` - the role's name;
+ - `permissions` - optional, the set of role's permissions;
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "CREATED",
+ "id": "..."
+ }
+ ```
+
+
+
+## Get a Role
+
+Returns an existing role.
+
+* **URI** `/api/v1/role/${name}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "id": "...",
+ "name": "...",
+ "permissions": [...set of permissions...]
+ }
+ ```
+
+
+
+## Remove a Role
+
+Deletes an existing role.
+
+* **URI** `/api/v1/role/${name}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "DELETED",
+ "ok": true
+ }
+ ```
+
+
+
+## List Roles
+
+Lists all existing roles.
+
+* **URI** `/api/v1/role`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {
+ "id": "...",
+ "name": "...",
+ "permissions": [...set of permissions...]
+ }
+ ]
+ ```
+
+
+
+## Add/Remove LDAP groups to a Role
+
+Adds or removes the LDAP groups mapped to a role. Requires administrator privileges.
+
+* **URI** `/api/v1/role/${roleName}/ldapGroups?replace=${replace}`
+* **Query parameters**
+ - `replace`: boolean, replaces the existing LDAP groups mapped to the role, default is `false`;
+* **Method** `PUT`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ ["group1", "group2",...]
+ ```
+
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "UPDATED",
+ "ok": true
+ }
+ ```
+
+
+
+## List LDAP groups for a role
+
+Lists the LDAP groups mapped to a role.
+
+* **URI** `/api/v1/role/${roleName}/ldapGroups`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ "group1", "group2", ...
+ ]
+ ```
diff --git a/docs/src/api/secret.md b/docs/src/api/secret.md
new file mode 100644
index 0000000000..6c75bf9f53
--- /dev/null
+++ b/docs/src/api/secret.md
@@ -0,0 +1,476 @@
+# Secret
+
+A secret is either a username/password or name/ssh-key pair for use in securing
+access to repositories and other systems. Secrets can be created and managed
+in the Concord Console user interface as well as via the Concord REST API.
+
+The REST API provides support for the following operations related to secrets:
+
+- [Create a Secret](#create-secret)
+ - [Example: Generate a new Key Pair](#example-new-key-pair)
+ - [Example: Upload an Existing Key Pair](#example-upload-key-pair)
+ - [Example: Creating a Username and Password Secret](#example-username-password-secret)
+ - [Example: Storing a Single Value as Secret](#example-single-value-secret)
+- [Update a Secret](#update-secret-v2)
+- [Get Metadata of Secret](#meta-data)
+- [Get Public SSH Key of Secret](#get-key)
+- [Delete a Secret](#delete-secret)
+- [List Secrets](#list-secrets)
+- [List Current Access Rules](#list-current-access-rules)
+- [Update Access Rules](#update-access-rules)
+- [Bulk Update Access Rules](#bulk-update-access-rules)
+- [Move Secret to Another Organization](#move-secret-to-another-organization)
+
+
+
+## Create a Secret
+
+Creates a new secret to be stored in Concord.
+
+* **URI** `/api/v1/org/${orgName}/secret`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: multipart/form-data`
+* **Multipart request**
+
+ - `type` - mandatory, supported types:
+ - `key_pair` - an SSH key pair, public and private key files;
+ - `username_password` - a pair of string values;
+ - `data` - binary or text data.
+ - `name` - mandatory, the name of the created secret. Must be
+ unique for the organization;
+ - `storePassword` - optional, a password, will be used to encrypt
+ the created secret and which can be used to retrieve it back;
+ - `generatePassword` - optional, a boolean value. If `true`, the
+ server will automatically generate and return a `storePassword`
+ value;
+ - `visibility` - optional, `PUBLIC` (default) or `PRIVATE`. See
+ the description of [public and private resources](../getting-started/orgs.md);
+ - `project` - optional, a project name. If set, the secret can
+ only be used in the processes of the specified project;
+
+ The rest of the parameters depend on the `type` of the created
+ secret:
+ - `type=key_pair`:
+ - `public` - a public key file of the key pair;
+ - `private` - a private key file of the key pair.
+ - `type=username_password`:
+ - `username` - a string value;
+ - `password` - a string value.
+ - `type=data`:
+ - `data` - a string or binary value.
+
+ For `type=key_pair`, if the `public` value is omitted, a new key
+ pair will be generated by the server.
+
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "id": "...",
+ "result": "CREATED",
+ "ok": true
+ }
+ ```
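The type-dependent parameters can be validated client-side before making the request; a sketch (this helper and its rules are illustrative, derived from the field list above):

```python
def secret_form(name, type_, **fields):
    """Assemble the multipart fields for POST /api/v1/org/{orgName}/secret;
    key_pair fields are optional since the server can generate a new pair."""
    required = {
        "key_pair": set(),
        "username_password": {"username", "password"},
        "data": {"data"},
    }
    missing = required[type_] - fields.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {"name": name, "type": type_, **fields}

form = secret_form("myKey", "username_password",
                   username="myUser", password="myPass")
```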
+
+
+
+### Example: Generate a new Key Pair
+
+You can create a new key pair that is signed by the Concord server as follows:
+
+```
+curl -u myusername \
+-F name=myKey \
+-F type=key_pair \
+https://concord.example.com/api/v1/org/Default/secret
+```
+
+After successful authentication with the prompted password, the server
+generates a new key pair and returns the public key:
+
+```
+{
+ "id" : "e7c24546-e0e1-11e7-be9f-fa163e0708eb",
+ "result" : "CREATED",
+ "publicKey" : "ssh-rsa AAAAB3NzaC1...zri1 concord-server\n",
+ "ok" : true
+}
+```
+
+This key can be used as a deploy key in the git repository of your project to
+establish the necessary trust between the Concord server and your git repository
+hosting system.
+
+
+
+### Example: Upload an Existing Key Pair
+
+You can upload an existing key pair as follows:
+
+```
+curl -H "Authorization: auBy4eDWrKWsyhiDp3AQiw" \
+-F name=myKey \
+-F type=key_pair \
+-F public=@/path/to/id_rsa.pub \
+-F private=@/path/to/id_rsa \
+https://concord.example.com/api/v1/org/Default/secret
+```
+
+After successful authentication with the provided API key, the server
+uploads and stores the files. The secret can subsequently be used within your
+Concord flows.
+
+
+
+### Example: Creating a Username and Password Secret
+
+You can create a username and password secret as follows:
+
+```
+curl -u myusername \
+-F name=myKey \
+-F type=username_password \
+-F username=myUser \
+-F password=myPass \
+https://concord.example.com/api/v1/org/Default/secret
+```
+
+After successful authentication with the prompted password, the server
+creates and stores both values as a secret. It can subsequently be used within
+your Concord flows.
+
+
+
+### Example: Storing a Single Value as Secret
+
+You can store a single value as a secret on Concord as follows:
+
+```
+curl -u myusername \
+-F name=myKey \
+-F type=data \
+-F data=myValue \
+https://concord.example.com/api/v1/org/Default/secret
+```
+
+
+
+## Update a Secret
+
+Updates parameters of an existing secret.
+
+* **URI** `/api/v2/org/${orgName}/secret/${secretName}`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: multipart/form-data`
+* **Body**
+ Multipart binary data.
+
+  The values are interpreted depending on their name:
+  - `name` - new name of the secret;
+  - `orgId` or `org` - ID or name of the new organization which
+  "owns" the secret;
+  - `projectId` or `project` - ID or name of the new project
+  to which the secret will be restricted;
+  - `removeProjectLink` - boolean, removes the restriction to a project. Default value is `false`;
+  - `ownerId` - UUID, ID of the new owner of the secret;
+  - `storePassword` - the current store password, used to encrypt the secret and to retrieve it back;
+  - `newStorePassword` - a new store password. Requires `storePassword`. Secrets protected by the server key cannot be converted into password-protected secrets;
+  - `visibility` - new visibility of the secret, `PUBLIC` or `PRIVATE`. See the description of [public and private resources](../getting-started/orgs.md);
+  - `type` - new type of the secret, one of `data`, `username_password` or `key_pair`;
+
+  The secret data to be updated depends on the `type` value. The rest of the parameters are as follows:
+
+ - `type=key_pair`:
+ - `public` - a public key file of the key pair.
+ - `private` - a private key file of the key pair.
+
+ - `type=username_password`:
+ - `username` - a string value;
+ - `password` - a string value.
+ - `type=data`:
+    - `data` - a string or binary value (file).
+
+ `storePassword` is required to update a password-protected secret.
+
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
+
+You can update the value of a `data` secret as follows:
+
+```
+curl -u myusername \
+-F org=Default \
+-F name=mySecret \
+-F data="$(echo -n "your-secret-value" | base64)" \
+https://concord.example.com/api/v2/org/Default/secret/myKey
+```
+
+
+
+## Get Metadata of Secret
+
+Retrieves metadata of an existing secret.
+
+* **URI** `/api/v1/org/${orgName}/secret/${secretName}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "id": "...",
+ "name": "secretName",
+ "orgId": "...",
+ "orgName": "...",
+ "projectId": "...",
+ "projectName": "...",
+ "type": "...",
+ "storeType": "...",
+ "visibility": "..."
+ }
+ ```
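For example, you can fetch the metadata of a secret named `myKey` in the
`Default` organization as follows (hostname and names are placeholders):

```
curl -u myusername \
https://concord.example.com/api/v1/org/Default/secret/myKey
```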
+
+
+
+## Get Public SSH Key of Secret
+
+Returns a public key from an existing key pair of a secret.
+
+* **URI** `/api/v1/org/${orgName}/secret/${secretName}/public`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "name": "secretName",
+ "publicKey": "ssh-rsa AAAA... concord-server",
+ "ok": true
+ }
+ ```
+
+On a typical Concord installation you can pass your username and will be
+prompted for the password:
+
+```
+curl -u username 'https://concord.example.com/api/v1/org/Default/secret/myKey/public'
+```
+
+The server provides a JSON-formatted response similar to:
+
+```json
+{
+ "name" : "myKey",
+ "publicKey" : "ssh-rsa ABCXYZ... concord-server",
+ "ok" : true
+}
+```
+
+The value of the `publicKey` attribute represents the public key of the
+stored key pair.
+
+The value of the `name` attribute e.g. `myKey` identifies the key for
+usage in Concord.
+
+
+
+## Delete a Secret
+
+Deletes a secret and associated keys.
+
+* **URI** `/api/v1/org/${orgName}/secret/${secretName}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
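For example, assuming a secret named `myKey` in the `Default` organization
(hostname and names are placeholders):

```
curl -u myusername \
-X DELETE \
https://concord.example.com/api/v1/org/Default/secret/myKey
```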
+
+## List Secrets
+
+Lists all existing secrets in a specific organization.
+
+* **URI** `/api/v1/org/${orgName}/secret`
+* **Query parameters**
+ - `limit`: maximum number of records to return;
+ - `offset`: starting index from which to return;
+  - `filter`: returns only secrets whose names match the filter;
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ { "name": "...", "type": "..." },
+ { "name": "...", "type": "..." }
+ ]
+ ```
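For example, to fetch the first ten secrets whose names match `my`
(hostname and names are placeholders):

```
curl -u myusername \
'https://concord.example.com/api/v1/org/Default/secret?limit=10&offset=0&filter=my'
```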
+
+## List Current Access Rules
+
+Returns the secret's current access rules.
+
+* **URI** `/api/v1/org/${orgName}/secret/${secretName}/access`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {"teamId": "...", "level": "..."},
+ ...
+ ]
+ ```
+
+## Update Access Rules
+
+Updates the secret's access rules for a specific team.
+
+* **URI** `/api/v1/org/${orgName}/secret/${secretName}/access`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "teamId": "9304748c-81e6-11e9-b909-0fe0967f269a",
+ "orgName": "myOrg",
+ "teamName": "myTeam",
+ "level": "READER"
+ }
+ ```
+
+  Either `teamId` or the `orgName` and `teamName` combination is allowed.
+ The `level` parameter accepts one of the three possible values:
+ - `READER`
+ - `WRITER`
+ - `OWNER`
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
+* **Example**
+ ```
+ curl -ikn -H 'Content-Type: application/json' \
+ -d '{"orgName": "MyOrg", "teamName": "myTeam", "level": "READER"}' \
+ http://concord.example.com/api/v1/org/MyOrg/secret/MySecret/access
+ ```
+
+## Bulk Update Access Rules
+
+Updates the secret's access rules for multiple teams.
+
+* **URI** `/api/v1/org/${orgName}/secret/${secretName}/access/bulk`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [{
+ "teamId": "9304748c-81e6-11e9-b909-0fe0967f269a",
+ "orgName": "myOrg",
+ "teamName": "myTeam",
+ "level": "READER"
+ }]
+ ```
+
+ Accepts a list of access rule elements. See [the non-bulk version](#update-access-rules)
+  of this method for a description of the parameters.
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
+
+## Move Secret to Another Organization
+
+Moves the secret to the specified organization (identified by name or ID).
+
+* **URI** `/api/v1/org/${orgName}/secret/${secretName}`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "orgName": "anotherOrg"
+ }
+ ```
+ Also accepts `orgId` (Unique Organization ID) instead of `orgName`
+ in the request body.
+* **Success Response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "result": "UPDATED"
+ }
+ ```
diff --git a/docs/src/api/team.md b/docs/src/api/team.md
new file mode 100644
index 0000000000..c7202577ba
--- /dev/null
+++ b/docs/src/api/team.md
@@ -0,0 +1,223 @@
+# Team
+
+A team is a group of users. Users can be in multiple teams
+simultaneously.
+
+The REST API provides support for a number of operations:
+
+- [Create a Team](#create-team)
+- [Update a Team](#update-team)
+- [List Teams](#list-teams)
+- [List Users in a Team](#list-users)
+- [Add Users to a Team](#add-users)
+- [Add LDAP Groups to a Team](#add-ldap-group)
+- [Remove Users from a Team](#remove-users)
+
+
+
+## Create a Team
+
+Creates a new team with specified parameters or updates an existing one.
+
+* **URI** `/api/v1/org/${orgName}/team`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "name": "myTeam",
+ "description": "my team"
+ }
+ ```
+ All parameters except `name` are optional.
+
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "CREATED",
+ "ok": true,
+ "id": "..."
+ }
+ ```
+
+
+
+## Update a Team
+
+Updates parameters of an existing team.
+
+* **URI** `/api/v1/org/${orgName}/team`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+  ```json
+  {
+    "id": "...",
+    "name": "new name"
+  }
+  ```
+
+  All parameters are optional; omitted parameters are not updated.
+  The team `id` is mandatory when updating the team's `name`.
+
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "UPDATED",
+ "ok": true,
+ "id": "..."
+ }
+ ```
+
+
+## List Teams
+
+Lists all existing teams.
+
+* **URI** `/api/v1/org/${orgName}/team`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {
+ "id": "...",
+ "name": "...",
+ "description": "..."
+ },
+ {
+ "id": "...",
+ "name": "...",
+ "description": "my project"
+ }
+ ]
+ ```
+
+
+## List Users in a Team
+
+Returns a list of users associated with the specified team.
+
+* **URI** `/api/v1/org/${orgName}/team/${teamName}/users`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ { "id": "...", "username": "..." },
+ { "id": "...", "username": "..." }
+ ]
+ ```
+
+
+## Add Users to a Team
+
+Adds a list of users to the specified team.
+
+* **URI** `/api/v1/org/${orgName}/team/${teamName}/users`
+* **Method** `PUT`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ [
+ {
+ "username": "userA",
+ "role": "MEMBER"
+ },
+ {
+ "username": "userB",
+ "role": "MAINTAINER"
+ },
+ ...
+ ]
+ ```
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
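For example, to add a single user with the `MEMBER` role (hostname and names
are placeholders):

```
curl -u myusername \
-X PUT \
-H 'Content-Type: application/json' \
-d '[{"username": "userA", "role": "MEMBER"}]' \
https://concord.example.com/api/v1/org/Default/team/myTeam/users
```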
+
+
+## Add LDAP Groups to a Team
+
+Adds a list of LDAP groups to the specified team.
+
+* **URI** `/api/v1/org/${orgName}/team/${teamName}/ldapGroups`
+* **Method** `PUT`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ [
+ {
+ "group": "CN=groupA,DC=example,DC=com",
+ "role": "MEMBER"
+ },
+ {
+ "group": "CN=groupB,DC=example,DC=com",
+ "role": "MAINTAINER"
+ },
+ ...
+ ]
+ ```
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
+
+
+## Remove Users from a Team
+
+Removes a list of users from the specified team.
+
+* **URI** `/api/v1/org/${orgName}/team/${teamName}/users`
+* **Method** `DELETE`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ ["userA", "userB", "..."]
+ ```
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
diff --git a/docs/src/api/template.md b/docs/src/api/template.md
new file mode 100644
index 0000000000..c593ad9aab
--- /dev/null
+++ b/docs/src/api/template.md
@@ -0,0 +1,83 @@
+# Template
+
+[Templates](../templates/index.md) allow sharing of common elements and
+processes.
+
+The REST API provides support for a number of operations:
+
+- [Create a New Template Alias](#create-template-alias)
+- [List Template Aliases](#list-template-aliases)
+- [Delete a Template Alias](#delete-a-template-alias)
+
+
+
+## Create a New Template Alias
+
+Creates a new template alias or updates an existing one.
+
+* **Permissions** `template:manage`
+* **URI** `/api/v1/template/alias`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "alias": "my-template",
+ "url": "http://host/path/my-template.jar"
+ }
+ ```
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
+
+
+
+## List Template Aliases
+
+Lists existing template aliases.
+
+* **Permissions** `template:manage`
+* **URI** `/api/v1/template/alias`
+* **Method** `GET`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+    { "alias": "my-template", "url": "http://host/path/my-template.jar"},
+ { "alias": "...", "url": "..."}
+ ]
+ ```
+
+
+## Delete a Template Alias
+
+Removes an existing template alias.
+
+* **Permissions** `template:manage`
+* **URI** `/api/v1/template/alias/${alias}`
+* **Method** `DELETE`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true
+ }
+ ```
diff --git a/docs/src/api/trigger.md b/docs/src/api/trigger.md
new file mode 100644
index 0000000000..879f123046
--- /dev/null
+++ b/docs/src/api/trigger.md
@@ -0,0 +1,55 @@
+# Trigger
+
+[Triggers](../triggers/index.md) start processes in reaction to external events.
+
+The REST API provides support for a number of operations:
+
+- [List Triggers](#list-triggers)
+- [Refresh Triggers](#refresh-triggers)
+
+
+
+
+## List Triggers
+
+Returns a list of triggers registered for the specified project's repository.
+
+* **URI** `/api/v2/trigger?orgName={orgName}&projectName={projectName}&repoName={repoName}&type={eventSource}`
+* **Query parameters**
+ - `orgName`: organization filter for trigger list;
+ - `projectName`: project filter for trigger list;
+ - `repoName`: repository name filter for trigger list;
+  - `type`: event source filter for trigger list (e.g. `cron`, `github`);
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ [
+ {
+ "id": "...",
+ "conditions": {
+ ...
+ }
+ }
+ ]
+ ```
+
+
+
+## Refresh Triggers
+
+Reloads the trigger definitions for the specified project's repository.
+
+* **URI** `/api/v1/org/${orgName}/project/${projectName}/repository/${repoName}/trigger`
+* **Method** `POST`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ none
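For example (hostname and names are placeholders):

```
curl -u myusername \
-X POST \
https://concord.example.com/api/v1/org/Default/project/myProject/repository/myRepo/trigger
```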
diff --git a/docs/src/api/user.md b/docs/src/api/user.md
new file mode 100644
index 0000000000..ea49885897
--- /dev/null
+++ b/docs/src/api/user.md
@@ -0,0 +1,104 @@
+# User
+
+A user represents an actual person using Concord to execute processes or
+administer the server.
+
+The REST API provides support for a number of operations:
+
+- [Create or Update a User](#create-user)
+- [Find a User](#find-user)
+- [Sync LDAP groups for a User](#sync-ldap-groups-user)
+
+
+
+
+## Create or Update a User
+
+Creates a new user with specified parameters or updates an existing one
+using the specified username.
+
+* **URI** `/api/v1/user`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "username": "myUser",
+ "type": "LOCAL",
+ "roles": ["testRole1", "testRole2"]
+ }
+ ```
+
+ Allowed `type` value:
+ - `LOCAL` - a local user, can be authenticated using an [API key](./apikey.md);
+  - `LDAP` - an AD/LDAP user, can be authenticated using AD/LDAP credentials or an API key.
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "ok": true,
+ "id" : "9be3c167-9d82-4bf6-91c8-9e28cfa34fbb",
+ "created" : false
+ }
+ ```
+
+ The `created` parameter indicates whether the user was created or updated.
+
+
+
+## Find a User
+
+Find an existing user by name.
+
+* **URI** `/api/v1/user/${username}`
+* **Method** `GET`
+* **Headers** `Authorization`
+* **Body**
+ none
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "id" : "...",
+ "name" : "myUser"
+ }
+ ```
+
+
+
+
+## Sync LDAP groups for a User
+
+Synchronize LDAP groups for a given user.
+
+* **URI** `/api/v1/userldapgroup/sync`
+* **Method** `POST`
+* **Headers** `Authorization`, `Content-Type: application/json`
+* **Body**
+ ```json
+ {
+ "username": "myUser",
+ "userDomain": "userDomain"
+ }
+ ```
+* **Success response**
+ ```
+ Content-Type: application/json
+ ```
+
+ ```json
+ {
+ "result": "UPDATED",
+ "ok": true
+ }
+ ```
+
+  The `UPDATED` result indicates that the LDAP groups for the specified user were synchronized successfully.
+
+Note: only administrators (role `concordAdmin`) can synchronize user LDAP groups.
diff --git a/docs/src/cli/index.md b/docs/src/cli/index.md
new file mode 100644
index 0000000000..88cac2a156
--- /dev/null
+++ b/docs/src/cli/index.md
@@ -0,0 +1,19 @@
+# Overview
+
+Concord provides a command-line tool to simplify some of the common operations.
+
+- [Installation](#installation)
+- [Linting](./linting.md)
+- [Running Flows Locally](./running-flows.md)
+
+## Installation
+
+Concord CLI requires Java 17+ available in `$PATH`. Installation is merely
+a download-and-copy process:
+
+```bash
+$ curl -o ~/bin/concord https://repo.maven.apache.org/maven2/com/walmartlabs/concord/concord-cli/{{ site.concord_core_version }}/concord-cli-{{ site.concord_core_version }}-executable.jar
+$ chmod +x ~/bin/concord
+$ concord --version
+{{ site.concord_core_version }}
+```
diff --git a/docs/src/cli/linting.md b/docs/src/cli/linting.md
new file mode 100644
index 0000000000..c77a2b2910
--- /dev/null
+++ b/docs/src/cli/linting.md
@@ -0,0 +1,82 @@
+# Linting
+
+The CLI tool supports "linting" of Concord YAML files. It can validate
+the syntax of flows and expressions without actually running them.
+
+```bash
+concord lint [-v] [target dir]
+```
+
+The `lint` command parses and validates Concord YAML files located in the
+current directory or the directory specified as an argument. It allows you to
+quickly verify that the [DSL](../processes-v2/index.md#dsl) syntax and the
+syntax of expressions are correct.
+
+Currently, it is not possible to verify whether tasks are called correctly
+or whether their parameter types are correct. It also does not take dynamically
+[imported resources](../processes-v2/imports.md) into account.
+
+For example, the following `concord.yml` is missing a closing bracket in the
+playbook expression.
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: "${myPlaybookName" # forgot to close the bracket
+```
+
+Running `concord lint` produces:
+
+```bash
+$ concord lint
+ERROR: @ [/home/ibodrov/tmp/lint/test/concord.yml] line: 3, col: 13
+ Invalid expression in task arguments: "${myPlaybookName" in IN VariableMapping [source=null, sourceExpression=null, sourceValue=${myPlaybookName, target=playbook, interpolateValue=true] Encountered "" at line 1, column 16.Was expecting one of: "}" ... "." ... "[" ... ";" ... ">" ... "gt" ... "<" ... "lt" ... ">=" ... "ge" ... "<=" ... "le" ... "==" ... "eq" ... "!=" ... "ne" ... "&&" ... "and" ... "||" ... "or" ... "*" ... "+" ... "-" ... "?" ... "/" ... "div" ... "%" ... "mod" ... "+=" ... "=" ...
+------------------------------------------------------------
+
+Found:
+ profiles: 0
+ flows: 1
+ forms: 0
+ triggers: 0
+ (not counting dynamically imported resources)
+
+Result: 1 error(s), 0 warning(s)
+
+INVALID
+```
+
+The linting feature is in early development; more validation rules will be
+added in future releases.
+
+## Running Flows Locally
+
+**Note:** this feature supports only [`concord-v2` flows](../processes-v2/index.md).
+The CLI tool forces the `runtime` parameter value to `concord-v2`.
+
+The CLI tool can run Concord flows locally:
+
+```yaml
+# concord.yml
+flows:
+ default:
+ - log: "Hello!"
+```
+
+```
+$ concord run
+Starting...
+21:23:45.951 [main] Hello!
+...done!
+```
+
+By default, `concord run` copies all files in the current directory into
+a `$PWD/target` directory, similar to Maven.
+
+The `concord run` command doesn't use a Concord Server; the flow execution is
+purely local. However, if the flow uses external
+[dependencies](../processes-v2/configuration.md#dependencies) or
+[imports](../processes-v2/imports.md) a working network connection might be
+required.
+
diff --git a/docs/src/cli/running-flows.md b/docs/src/cli/running-flows.md
new file mode 100644
index 0000000000..cdf420f298
--- /dev/null
+++ b/docs/src/cli/running-flows.md
@@ -0,0 +1,226 @@
+# Running Flows
+
+- [Overview](#overview)
+- [Secrets](#secrets)
+- [Dependencies](#dependencies)
+ - [Configuring Extra Repositories](#configuring-extra-repositories)
+- [Imports](#imports)
+
+## Overview
+
+**Note:** this feature is still under active development. All features
+described here are subject to change.
+
+**Note:** this feature supports only [`concord-v2` flows](../processes-v2/index.md).
+The CLI tool forces the `runtime` parameter value to `concord-v2`.
+
+The CLI tool can run Concord flows locally:
+
+```yaml
+# concord.yml
+flows:
+ default:
+ - log: "Hello!"
+```
+
+```
+$ concord run
+Starting...
+21:23:45.951 [main] Hello!
+...done!
+```
+
+By default, `concord run` copies all files in the current directory into
+a `$PWD/target` directory, similar to Maven.
+
+The `concord run` command doesn't use a Concord Server; the flow execution is
+purely local. However, if the flow uses external resources (such as
+`dependencies` or `imports`) a working network connection might be required.
+
+Supported features:
+- all regular [flow](../processes-v2/flows.md) elements;
+- [dependencies](#dependencies);
+- [imports](#imports);
+- [secrets]({{ site.concord_plugins_v2_docs }}/crypto.md). See [below](#secrets) for
+more details.
+
+Features that are currently *not* supported:
+- [forms](../getting-started/forms.md);
+- [profiles](../processes-v2/profiles.md);
+- password-protected secrets.
+
+## Secrets
+
+By default, Concord CLI uses a local file-based storage to access
+[secrets]({{ site.concord_plugins_v2_docs }}/crypto.md) used in flows.
+
+**Note:** currently, all secret values are stored without encryption. Providing
+a password in the `crypto` task arguments has no effect.
+
+### Secret Store Directory
+
+Concord CLI resolves secret data in `$HOME/.concord/secrets` by default. This
+can be customized by providing the `--secret-dir` option:
+
+```shell
+$ concord run --secret-dir="$HOME/.my_secrets" ...
+```
+
+### String Secrets
+
+```yaml
+# concord.yml
+flows:
+ default:
+ - log: "${crypto.exportAsString('myOrg', 'mySecretString', null)}"
+```
+
+Concord CLI looks for a `$HOME/.concord/secrets/myOrg/mySecretString` file
+and returns its content.
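To make such a secret available locally, create the file in the expected
location. A minimal sketch, using a temporary directory (which can be passed
to `concord run` via `--secret-dir`) instead of `$HOME/.concord/secrets`:

```shell
# create a local secret store with a single string secret
SECRET_DIR=$(mktemp -d)
mkdir -p "$SECRET_DIR/myOrg"
printf 'my-secret-value' > "$SECRET_DIR/myOrg/mySecretString"

# crypto.exportAsString('myOrg', 'mySecretString', null) would
# now return the file's content
cat "$SECRET_DIR/myOrg/mySecretString"
```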
+
+### Key Pair Secrets
+
+```yaml
+# concord.yml
+flows:
+ default:
+ - set:
+ keyPair: "${crypto.exportKeyAsFile('myOrg', 'myKeyPair', null)}"
+```
+
+For key pair secrets, Concord CLI looks for two files:
+
+- `$HOME/.concord/secrets/myOrg/myKeyPair` (private key)
+- `$HOME/.concord/secrets/myOrg/myKeyPair.pub` (public key)
+
+### Username/Password Secrets
+
+Concord CLI looks for a single file matching the secret's name, in a directory
+matching the secret's organization, within the
+[secret store directory](#secret-store-directory). Given the following crypto
+call:
+
+```yaml
+# concord.yml
+flows:
+ default:
+ - log: "${crypto.exportCredentials('myOrg', 'myCredentials', null)}"
+```
+
+When executed, Concord CLI loads the data from `$HOME/.concord/secrets/myOrg/myCredentials`,
+which must be a JSON file with `username` and `password` values:
+
+```json
+{
+ "username": "the_actual_username",
+ "password": "the_actual_password"
+}
+```
+
+### File Secrets
+
+Concord CLI copies a file matching the secret's name, from a directory matching
+the secret's organization, within the [secret store directory](#secret-store-directory).
+Given the following crypto call:
+
+```yaml
+# concord.yml
+flows:
+ default:
+ - log: "${crypto.exportAsFile('myOrg', 'myFile', null)}"
+```
+
+When executed, Concord CLI copies the file from `$HOME/.concord/secrets/myOrg/myFile`
+to a random temporary file.
+
+### Project-Encrypted Strings
+
+Concord CLI also supports the `crypto.decryptString` method, but instead of
+decrypting the provided string, the string is used as a key to look up
+the actual value in a "vault" file.
+
+The default vault file is `$HOME/.concord/vaults/default`
+and has a very simple key-value format:
+
+```
+key = value
+```
+
+Let's take this flow as an example:
+
+```yaml
+flows:
+ default:
+ - log: "${crypto.decryptString('ZXhhbXBsZQ==')}"
+```
+
+When executed, it looks for the `ZXhhbXBsZQ==` key in the vault file and
+returns the associated value.
+
+```
+$ cat $HOME/.concord/vaults/default
+ZXhhbXBsZQ\=\= = hello!
+
+$ concord run
+Starting...
+21:52:07.221 [main] hello!
+...done!
+```
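Note the escaped `=` characters in the key above: the vault file uses a
properties-like format, so `=` inside a key must be escaped with a backslash.
A sketch of adding an entry programmatically:

```shell
# append a key-value entry to a vault file, escaping '=' in the key
VAULT=$(mktemp)
KEY=$(printf '%s' 'ZXhhbXBsZQ==' | sed 's/=/\\=/g')
printf '%s = %s\n' "$KEY" 'hello!' >> "$VAULT"

cat "$VAULT"
```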
+
+## Dependencies
+
+Concord CLI supports flow [dependencies](../processes-v2/configuration.md#dependencies).
+
+By default, dependencies are cached in `$HOME/.concord/depsCache/`.
+
+For Maven dependencies, Concord CLI uses the [Maven Central](https://repo.maven.apache.org/maven2/)
+repository by default.
+
+### Configuring Extra Repositories
+
+Create a Maven repository configuration file for Concord in `$HOME/.concord/mvn.json`.
+Set the contents to an object with a `repositories` attribute containing a list
+of Maven repository definitions.
+
+```json
+{
+ "repositories": [
+ {
+ "id": "host",
+ "url": "file:///home/MY_USER_ID/.m2/repository"
+ },
+ {
+ "id": "internal",
+ "url": "https://my.nexus.repo/repository/custom_maven_repo"
+ },
+ {
+ "id": "central",
+ "url": "https://repo.maven.apache.org/maven2/"
+ }
+ ]
+}
+```
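The file must be valid JSON. A quick sanity check before pointing Concord CLI
at it (written to a temporary path here; in practice the file lives at
`$HOME/.concord/mvn.json`):

```shell
# write a minimal repository configuration and validate it
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
{
  "repositories": [
    { "id": "central", "url": "https://repo.maven.apache.org/maven2/" }
  ]
}
EOF
python3 -m json.tool "$CFG" > /dev/null && echo "valid JSON"
```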
+
+## Imports
+
+Concord CLI supports flow [imports](../processes-v2/imports.md).
+
+For example:
+```yaml
+# concord.yml
+imports:
+ - git:
+ url: "https://github.com/walmartlabs/concord.git"
+ path: "examples/hello_world"
+```
+
+When executed, it produces:
+
+```
+$ concord run
+Starting...
+21:58:37.918 [main] Hello, Concord!
+...done!
+```
+
+By default, Concord CLI stores a local cache of `git` imports in
+`$HOME/.concord/repoCache/$URL`.
diff --git a/docs/src/getting-started/configuration.md b/docs/src/getting-started/configuration.md
new file mode 100644
index 0000000000..6bdf17f44f
--- /dev/null
+++ b/docs/src/getting-started/configuration.md
@@ -0,0 +1,296 @@
+# Configuration
+
+The Concord server can be configured via a configuration file. Typically, this
+is done by the administrator responsible for the Concord installation.
+
+A Concord user does not need to be concerned about these settings and instead
+needs to define their processes and further details. Check out
+[our quickstart guide](./quickstart.md).
+
+The following configuration details are available:
+
+- [Server Configuration File](#server-cfg-file)
+- [Server Environment Variables](#server-environment-variables)
+- [Agent Configuration File](#agent-cfg-file)
+- [Agent Environment Variables](#agent-environment-variables)
+- [Common Environment Variables](#common-environment-variables)
+- [Default Process Variables](#default-process-variables)
+- [GitHub Integration](#github-integration)
+
+
+
+## Server Configuration File
+
+Concord Server uses [Typesafe Config](https://github.com/lightbend/config)
+format for its configuration files.
+
+The path to the configuration file must be passed via the `ollie.conf` JVM
+parameter like so:
+
+```bash
+java ... -Dollie.conf=/opt/concord/conf/server.conf com.walmartlabs.concord.server.Main
+```
+
+When using Docker, it can be passed as the `CONCORD_CFG_FILE` environment variable.
+
+The complete configuration file for the Server can be found in
+[the source code repository](https://github.com/walmartlabs/concord/blob/master/server/dist/src/main/resources/concord-server.conf).
+
+A minimal example suitable for local development (assuming [OpenLDAP](./development.md#oldap)):
+
+```
+concord-server {
+ db {
+ appPassword = "q1"
+ inventoryPassword = "q1"
+ }
+
+ secretStore {
+ # just some random base64 values
+ serverPassword = "cTFxMXExcTE="
+ secretStoreSalt = "SCk4KmBlazMi"
+ projectSecretSalt = "I34xCmcOCwVv"
+ }
+
+ ldap {
+ url = "ldap://oldap:389"
+ searchBase = "dc=example,dc=org"
+ principalSearchFilter = "(cn={0})"
+ userSearchFilter = "(cn={0})"
+ usernameProperty = "cn"
+ userPrincipalNameProperty = ""
+ returningAttributes = ["*", "memberOf"]
+
+ systemUsername = "cn=admin,dc=example,dc=org"
+ systemPassword = "admin"
+ }
+}
+```
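The `secretStore` values must be valid base64 strings. One way to generate
random values for them (a sketch, assuming GNU coreutils):

```shell
# generate a random 16-byte value, base64-encoded; suitable for
# serverPassword, secretStoreSalt and projectSecretSalt
head -c 16 /dev/urandom | base64
```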
+
+
+
+## Server Environment Variables
+
+All parameters are optional.
+
+### API
+
+| Variable | Description | Default value |
+|----------------|------------------------------|---------------|
+| API_PORT | API port number to listen on | 8001 |
+
+### Forms
+
+| Variable | Description | Default value |
+|-----------------|--------------------------------------|-----------------------------|
+| FORM_SERVER_DIR | Directory to store custom form files | _a new temporary directory_ |
+
+### HTTP(S)
+
+| Variable | Description | Default value |
+|-----------------|---------------------------------------------|---------------|
+| SECURE_COOKIES | Enable `secure` attribute on server cookies | false |
+| SESSION_TIMEOUT | Default timeout for sessions (seconds) | 1800 |
+
+### Logging
+
+| Variable | Description | Default value |
+|------------------------|------------------------------------------------|---------------|
+| ACCESS_LOG_PATH | Path to the access log, including the filename | _n/a_ |
+| ACCESS_LOG_RETAIN_DAYS | How many days to keep access logs | 7 |
+
+
+
+## Agent Configuration File
+
+Concord Agent uses [Typesafe Config](https://github.com/lightbend/config)
+format for its configuration files.
+
+The path to the configuration file must be passed via the `ollie.conf` JVM
+parameter like so:
+
+```bash
+java ... -Dollie.conf=/opt/concord/conf/agent.conf com.walmartlabs.concord.agent.Main
+```
+
+When using Docker, it can be passed as the `CONCORD_CFG_FILE` environment variable.
+
+The complete configuration file for the Agent can be found in
+[the source code repository]({{ site.concord_source}}tree/master/agent/src/main/resources/concord-agent.conf).
+
+The configuration file is optional for local development.
+
+## Agent Environment Variables
+
+All parameters are optional.
+
+### Logging
+
+| Variable | Description | Default value |
+|---------------------------|-------------------------------------------------------------|---------------|
+| DEFAULT_DEPS_CFG | Path to the default process dependencies configuration file | _empty_ |
+| REDIRECT_RUNNER_TO_STDOUT | Redirect process logs to stdout | _false_ |
+
+## Common Environment Variables
+
+### JVM Parameters
+
+| Variable          | Description              | Default value                                                         |
+|-------------------|--------------------------|-----------------------------------------------------------------------|
+| CONCORD_JAVA_OPTS | Additional JVM arguments | concord-server: `-Xms2g -Xmx2g -server`<br/>concord-agent: `-Xmx256m` |
+
+### Dependencies
+
+| Variable          | Description                                             | Default value |
+|-------------------|---------------------------------------------------------|---------------|
+| CONCORD_MAVEN_CFG | Path to a JSON file with Maven repository configuration | _empty_       |
+
+See below for the expected format of the configuration file.
+
+Complete example:
+
+```json
+{
+ "repositories": [
+ {
+ "id": "central",
+ "layout": "default",
+ "url": "https://repo.maven.apache.org/maven2/",
+ "auth": {
+ "username": "...",
+ "password": "..."
+ },
+ "snapshotPolicy": {
+ "enabled": true,
+ "updatePolicy": "never",
+ "checksumPolicy": "ignore"
+ },
+ "releasePolicy": {
+ "enabled": true,
+ "updatePolicy": "never",
+ "checksumPolicy": "ignore"
+ }
+ },
+
+ {
+ "id": "private",
+ "url": "https://repo.example.com/maven2/"
+ }
+ ]
+}
+```
+
+Parameters:
+- `id` - string, mandatory. Arbitrary ID of the repository;
+- `layout` - string, optional. Maven repository layout. Default value is
+`default`;
+- `url` - string, mandatory. URL of the repository;
+- `auth` - object, optional. Authentication parameters, see
+ the [AuthenticationContext](https://maven.apache.org/resolver/apidocs/org/eclipse/aether/repository/AuthenticationContext.html)
+ javadoc for the list of accepted parameters. Common parameters:
+ - `username`, `password` - credentials;
+ - `preemptiveAuth` - if `true` Concord performs pre-emptive authentication.
+ Required if the remote server expects credentials in all requests (e.g. S3 buckets).
+- `snapshotPolicy` and `releasePolicy` - object, optional. Policies for
+snapshot and release versions. Parameters:
+  - `enabled` - boolean, optional. Enables or disables the category. Default
+  value is `true`;
+  - `updatePolicy` - string, optional. See [RepositoryPolicy](https://maven.apache.org/resolver/apidocs/org/eclipse/aether/repository/RepositoryPolicy.html)
+  javadoc for the list of accepted values;
+  - `checksumPolicy` - string, optional. See [RepositoryPolicy](https://maven.apache.org/resolver/apidocs/org/eclipse/aether/repository/RepositoryPolicy.html)
+  javadoc for the list of accepted values.
+
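+A short sanity check of such a file can be sketched as follows. This is a
+hypothetical helper, not part of Concord - it only mirrors the rules above
+(`id` and `url` mandatory, everything else optional):
+
+```javascript
+// validate a CONCORD_MAVEN_CFG-style repository configuration object
+function validateRepositories(cfg) {
+  var errors = [];
+  (cfg.repositories || []).forEach(function (repo, i) {
+    // "id" and "url" are mandatory string parameters
+    ["id", "url"].forEach(function (key) {
+      if (typeof repo[key] !== "string") {
+        errors.push("repositories[" + i + "]: missing mandatory '" + key + "'");
+      }
+    });
+  });
+  return errors;
+}
+
+var cfg = {
+  repositories: [
+    { id: "central", url: "https://repo.maven.apache.org/maven2/" },
+    { url: "https://repo.example.com/maven2/" } // no "id" - invalid
+  ]
+};
+console.log(validateRepositories(cfg));
+```
+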
+## Default Process Variables
+
+As a Concord administrator, you can set default variable values that
+are automatically set in all process executions.
+
+This, for example, allows you to set global parameters such as the connection
+details for an SMTP server used by the [SMTP task]({{ site.concord_plugins_v2_docs }}/smtp.md) in one
+central location separate from the individual projects.
+
+The values are configured in a YAML file. The path to the file and its name
+are configured in [the server's configuration file](#server-cfg-file). The
+following example shows how to configure an SMTP server to be used by all
+processes. As a result, project authors do not need to specify the SMTP
+server configuration in their own `concord.yml`.
+
+```yaml
+configuration:
+ arguments:
+ smtpParams:
+ host: "smtp.example.com"
+ port: 25
+
+ # another example
+ slackCfg:
+ authToken: "..."
+```
+
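+A project flow can then use these defaults without declaring them itself. For
+example, a sketch of a flow passing the centrally-defined `smtpParams` to the
+SMTP task (the exact task input format is described in the plugin docs):
+
+```yaml
+flows:
+  default:
+    # smtpParams comes from the server's default process variables,
+    # not from this project's concord.yml
+    - task: smtp
+      in:
+        smtpParams: ${smtpParams}
+        mail:
+          from: "noreply@example.com"
+          to: "user@example.com"
+          subject: "Hello from Concord"
+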
+## GitHub Integration
+
+### Repository Access
+
+To access external Git repositories, Concord supports both username/password
+and SSH key pair authentication.
+
+Additionally, an access token can be configured to be used when no custom
+authentication is specified:
+
+```
+# concord-server.conf
+concord-server {
+ git {
+ # GitHub username and an access token separated by a colon
+ oauth: "jsmith:af3f...f"
+ }
+}
+```
+
+The same token must be added to the Agent's configuration as well:
+
+```
+# concord-agent.conf
+concord-agent {
+ git {
+ oauth: "..."
+ }
+}
+```
+
+### Webhooks
+
+Concord supports both repository and organization level hooks.
+
+Here are step-by-step instructions on how to configure Concord to use GitHub
+webhooks.
+
+Configure the shared secret:
+
+```
+# concord-server.conf
+concord-server {
+ github {
+ githubDomain = "github.com"
+ secret = "myGitHubSecret"
+ }
+}
+```
+
+Create a new webhook on the GitHub repository's or organization's settings page:
+
+
+
+Use `Content-Type: application/json` and the secret you specified in the
+`concord-server.conf` file.
+
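+The shared secret is used by GitHub to sign each webhook delivery: the
+`X-Hub-Signature-256` header carries an HMAC-SHA256 of the request body, which
+the receiver recomputes and compares. A minimal sketch of the scheme, for
+illustration only - Concord performs this verification internally:
+
+```javascript
+const crypto = require("crypto");
+
+// compute the value GitHub puts into the X-Hub-Signature-256 header
+function sign(secret, body) {
+  return "sha256=" + crypto.createHmac("sha256", secret).update(body).digest("hex");
+}
+
+// constant-time comparison of the received header against the recomputed value
+function verify(secret, body, headerValue) {
+  const expected = Buffer.from(sign(secret, body));
+  const actual = Buffer.from(headerValue);
+  return expected.length === actual.length && crypto.timingSafeEqual(expected, actual);
+}
+
+const body = '{"ref":"refs/heads/master"}';
+console.log(verify("myGitHubSecret", body, sign("myGitHubSecret", body)));
+```
+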
+**Note:** the `useInitiator` [feature](../triggers/github.md) requires
+a Concord environment with an AD/LDAP server. If you wish to use Concord without
+an AD or LDAP server, or your GitHub users are not the same as your AD/LDAP users,
+use `useInitiator: false` or omit it (`false` is the default value). In this case
+all processes triggered by GitHub would have the built-in `github` user as their
+initiator.
diff --git a/docs/src/getting-started/development.md b/docs/src/getting-started/development.md
new file mode 100644
index 0000000000..08d84d1219
--- /dev/null
+++ b/docs/src/getting-started/development.md
@@ -0,0 +1,177 @@
+# Development
+
+The following instructions are needed for developing Concord itself.
+
+## Database
+
+A locally-running instance of PostgreSQL is required. By default, the server
+will try to connect to `localhost:5432` using username `postgres`, password
+`q1` and database name `postgres`.
+
+The easiest way to get the database up and running is to use an official
+Docker image:
+```
+docker run -d -p 5432:5432 --name db -e 'POSTGRES_PASSWORD=q1' library/postgres:10.4
+```
+
+## Running from an IDE
+
+You need to [build](#building) the project before you can load it into an IDE.
+
+It is possible to start the server and an agent directly from an IDE using the
+following main classes:
+- concord-server: `com.walmartlabs.concord.server.dist.Main`
+- concord-agent: `com.walmartlabs.concord.agent.Main`
+
+The server requires a configuration file to start. Set `ollie.conf` JVM
+parameter to the path of your local `server.conf`. Check the
+[Server Configuration File](./configuration.md#server-cfg-file) for details.
+
+Here's an example of the Server's launch configuration in Intellij IDEA:
+
+
+
+To start the UI, please refer to the console's readme file.
+
+## Debugging
+
+The `concord-server` and `concord-agent` processes can be started in debug mode as
+normal Java applications.
+
+However, as the agent processes its payload in a separate JVM, it must be
+configured to start those processes with remote debugging enabled. To
+enable remote debugging, add an `_agent.json` file to the root directory of
+the process' payload (so either into your Git repository or into the payload
+archive) with this content:
+
+```json
+{
+ "jvmArgs": ["-Xdebug", "-Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=y"]
+}
+```
+
+JVM arguments can also be specified in the `requirements` section of the
+`configuration`:
+
+```yaml
+configuration:
+ requirements:
+ jvm:
+ extraArgs:
+ - "-Xdebug"
+ - "-Xrunjdwp:transport=dt_socket,address=5005,server=y,suspend=y"
+```
+
+**Note:** If both configurations exist, `_agent.json` takes priority.
+
+This makes all processes listen on port `5005` for incoming connections from
+an IDE. Make sure to change the port number if you plan to debug multiple
+processes simultaneously.
+
+This method is suitable only for local development.
+
+
+## Building
+
+To skip NPM-related tasks when building the project:
+```
+./mvnw clean install -DskipTests -DskipNpm
+```
+
+## Making a Release
+
+All JAR files are signed using a GPG key. The passphrase for the key must be
+configured in `~/.m2/settings.xml`:
+```xml
+<profiles>
+  <profile>
+    <id>development</id>
+    <properties>
+      <gpg.passphrase>MY_PASS_PHASE</gpg.passphrase>
+    </properties>
+  </profile>
+</profiles>
+```
+
+1. use `maven-release-plugin` as usual:
+ ```
+ ./mvnw release:prepare release:perform
+ ```
+2. push docker images;
+3. don't forget to push new tags and the release commit:
+ ```
+ git push origin master --tags
+ ```
+
+## Pull Requests
+
+- squash and rebase your commits;
+- wait for CI checks to pass.
+
+
+## Using OpenLDAP for Authentication
+
+The following steps assume that Concord Server is already running in a
+container named `server`.
+
+1. update the `ldap` section in the Concord Server's configuration file:
+   ```
+ $ cat server.conf
+ ...
+ ldap {
+ url = "ldap://localhost:389"
+ searchBase = "dc=example,dc=org"
+ principalSearchFilter = "(cn={0})"
+ userSearchFilter = "(cn=*{0}*)"
+ usernameProperty = "cn"
+ systemUsername = "cn=admin,dc=example,dc=org"
+ systemPassword = "admin"
+ }
+ ...
+ ```
+
+2. restart the server if it was running:
+ ```bash
+ docker restart server
+ ```
+
+3. start the OpenLDAP server. The easiest way is to use Docker:
+ ```bash
+ docker run -d --name oldap --network=container:server osixia/openldap
+ ```
+
+ Check the container's logs:
+ ```
+ ...
+ 5a709dd5 slapd starting
+ ...
+ ```
+
+4. create a user's LDIF file:
+ ```
+ $ cat myuser.ldif
+ dn: cn=myuser,dc=example,dc=org
+ cn: myuser
+ objectClass: top
+ objectClass: organizationalRole
+ objectClass: simpleSecurityObject
+ objectClass: mailAccount
+ userPassword: {SSHA}FZxXb9WXU8yO5VgJYCU8Z+pbVzCJisNX
+ mail: myuser@example.org
+ ```
+
+ This creates a new user `myuser` with the password `q1`.
+
+5. import the LDIF file:
+ ```
+ $ cat myuser.ldif | docker exec -i oldap ldapadd -x -D "cn=admin,dc=example,dc=org" -w admin
+
+ adding new entry "cn=myuser,dc=example,dc=org"
+ ```
+
+6. use `myuser` and `q1` to authenticate in the [Concord Console](../console/index.md):
+
+ 
+
+7. after successful authentication, you should see the UI similar to this:
+
+ 
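+
+The `userPassword` value in step 4 uses OpenLDAP's `{SSHA}` scheme: a SHA-1
+digest of the password plus a random salt, with the salt appended and the
+result base64-encoded. Such values are normally generated with
+`slappasswd -s q1`; the sketch below shows the same construction for
+illustration:
+
+```javascript
+const crypto = require("crypto");
+
+// build an {SSHA} value: base64(sha1(password + salt) + salt)
+function ssha(password, salt) {
+  salt = salt || crypto.randomBytes(4);
+  const digest = crypto.createHash("sha1")
+    .update(Buffer.concat([Buffer.from(password), salt])).digest();
+  return "{SSHA}" + Buffer.concat([digest, salt]).toString("base64");
+}
+
+// verify a password against an {SSHA} value
+function sshaVerify(password, hashed) {
+  const raw = Buffer.from(hashed.slice("{SSHA}".length), "base64");
+  const digest = raw.slice(0, 20); // SHA-1 digests are 20 bytes
+  const salt = raw.slice(20);
+  return crypto.createHash("sha1")
+    .update(Buffer.concat([Buffer.from(password), salt])).digest().equals(digest);
+}
+
+console.log(sshaVerify("q1", ssha("q1")));
+```
+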
diff --git a/docs/src/getting-started/forms.md b/docs/src/getting-started/forms.md
new file mode 100644
index 0000000000..9c7165e33d
--- /dev/null
+++ b/docs/src/getting-started/forms.md
@@ -0,0 +1,576 @@
+# Forms
+
+Concord flows can provide simple web-based user interfaces with forms for data
+input from users. Forms are described declaratively in
+[Concord file](../processes-v1/flows.md) and optionally contain
+[custom HTML/CSS/JS/etc resources](#custom-forms).
+
+- [Form declaration](#declaration)
+- [Form fields](#fields)
+- [Form submitter](#submitter)
+- [Using a form in a flow](#using)
+- [Custom error messages](#error)
+- [Custom forms](#custom)
+- [Accessing form data](#access)
+- [File upload](#upload)
+- [Shared resources](#shared)
+- [User access](#user)
+- [Restricting forms](#restriction)
+- [Dynamic forms](#dynamic)
+- [Using API](#using-api)
+- [Examples](#examples)
+
+
+
+## Form Declaration
+
+Forms are declared in the `forms` section of the Concord file:
+
+```yaml
+forms:
+ myForm:
+ - ...
+```
+
+The name of a form (in this example it's `myForm`) can be used to
+[call a form](#using-a-form-in-a-flow) from a process. It is also used as
+the name of the object which stores the values of the form's fields.
+
+Such form definitions can be reused multiple times in the same process.
+
+Form fields can also be defined
+[dynamically during the runtime of the process](#dynamic).
+
+> **Note:** Form names can only contain alphanumeric characters, whitespace, underscores (`_`) and dollar signs (`$`).
+
+
+
+## Form Fields
+
+Forms must contain one or more fields:
+
+```yaml
+forms:
+ myForm:
+ - fullName: { label: "Name", type: "string", pattern: ".* .*", readonly: true, placeholder: "Place name here" }
+ - age: { label: "Age", type: "int", min: 21, max: 100 }
+ - favouriteColour: { label: "Favourite colour", type: "string", allow: ["gray", "grey"], search: true }
+ - languages: { label: "Preferred languages", type: "string+", allow: "${locale.languages()}" }
+ - password: { label: "Password", type: "string", inputType: "password" }
+ - rememberMe: { label: "Remember me", type: "boolean" }
+ - photo: { label: "Photo", type: "file" }
+ - email: { label: "Email", type: "string", inputType: "email" }
+```
+
+A field declaration consists of the name (e.g. `fullName`), the type
+(`string`) and additional options.
+
+The name of a field will be used to store a field's value in the
+form's results. E.g. if the form's name is `myForm` and the field's
+name is `myField`, then the value of the field will be stored in
+`myForm.myField` variable.
+
+Common options:
+- `label`: the field's label, usually human-readable;
+- `value`: default value [expression](#expressions), evaluated when
+the form is called;
+- `allow`: allowed value(s). Can be a YAML literal, array, object or an
+[expression](#expressions).
+
+Supported types of fields and their options:
+- `string`: a string value
+ - `pattern`: (optional) a regular expression to check the value.
+  - `inputType`: (optional) specifies the `type` of the HTML `<input>`
+  element to display, e.g. `text`, `button`, `checkbox` and others.
+ - `readonly`: (optional) specifies that an input field is read-only.
+ - `placeholder`: (optional) specifies a short hint that describes the expected value of an input field.
+ - `search`: (optional) allows user to type and search item in dropdown input
+- `int`: an integer value
+ - `min`, `max`: (optional) value bounds.
+ - `readonly`: (optional) specifies that an input field is read-only.
+ - `placeholder`: (optional) specifies a short hint that describes the expected value of an input field.
+- `decimal`: a decimal value
+ - `min`, `max`: (optional) value bounds.
+ - `readonly`: (optional) specifies that an input field is read-only.
+ - `placeholder`: (optional) specifies a short hint that describes the expected value of an input field.
+- `boolean`: a boolean value, `true` or `false`;
+ - `readonly`: (optional) specifies that an input field is read-only.
+- `file`: a file upload field, the submitted file is stored as a file in the
+process' workspace. Find more tips in our [dedicated section](#upload).
+
+Supported input types:
+- `password`: provide a way for the user to securely enter a password.
+- `email`: provide a way for the user to enter a correct email.
+
+Cardinality of the field can be specified by adding a cardinality
+quantifier to the type:
+- a single non-optional value: `string`;
+- optional value: `string?`;
+- one or more values: `string+`;
+- zero or more values: `string*`.
+
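+For example, a sketch of a form combining the quantifiers:
+
+```yaml
+forms:
+  surveyForm:
+    - fullName: { label: "Name", type: "string" }         # exactly one value
+    - nickname: { label: "Nickname", type: "string?" }    # optional
+    - languages: { label: "Languages", type: "string+" }  # one or more values
+    - tags: { label: "Tags", type: "string*" }            # zero or more values
+```
+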
+Additional field types will be added in future versions.
+
+
+
+### Form Submitter
+
+Concord can optionally store the form submitter's data in a `submittedBy`
+variable. It can be enabled using `saveSubmittedBy` form call option:
+```yaml
+flows:
+ default:
+ - form: myForm
+ saveSubmittedBy: true
+
+ - log: "Hello, ${myForm.submittedBy.displayName}"
+```
+
+The variable has the same structure as `${initiator}` or `${currentUser}`
+(see [Provided Variables](../processes-v1/index.md#provided-variables)
+section).
+
+
+
+## Using a Form in a Flow
+
+To call a form from a process, use `form` command:
+
+```yaml
+flows:
+ default:
+ - form: myForm
+ - log: "Hello, ${myForm.myField}"
+```
+
+Expressions can also be used in form calls:
+
+```yaml
+configuration:
+ arguments:
+ formNameVar: "myForm"
+
+flows:
+ default:
+ - form: ${formNameVar}
+ - log: "Hello, ${myForm.name}"
+ - log: "Hello, ${context.getVariable(formNameVar).name}"
+
+forms:
+ myForm:
+ - name: { type: "string" }
+```
+
+Forms will be pre-populated with values if the current context
+contains a map object, stored under the form's name. E.g. if the
+context has a map object
+
+```json
+{
+ "myForm": {
+ "myField": "my string value"
+ }
+}
+```
+
+then the form's `myField` will be populated with `my string value`.
+
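+For example, such an object can be prepared right before the form call
+(assuming the runtime's `set` step):
+
+```yaml
+flows:
+  default:
+    # pre-populates myForm.myField with a default value
+    - set:
+        myForm:
+          myField: "my string value"
+    - form: myForm
+    - log: "Hello, ${myForm.myField}"
+```
+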
+The `form` command accepts additional options:
+```yaml
+flows:
+ default:
+ - form: myForm
+ yield: true
+ values:
+ myField: "a different value"
+ additionalData:
+ nestedField:
+ aValue: 123
+```
+
+Supported options:
+
+- `yield`: a boolean value. If `true`, the UI wizard will stop after
+this form and the rest of the process will continue in the background.
+Supported only for non-custom (without user HTML) forms;
+- `values`: additional values, to override default form values or to
+provide additional data;
+- `fields`: allows defining the form fields at runtime, see more in the
+ [Dynamic Forms](#dynamic) section.
+
+
+
+## Custom Error Messages
+
+While Concord provides default error messages for form field validation, the
+displayed error text can be customized. For a form created from your YAML
+file, this can be accomplished by adding a `locale.properties` file in the
+same directory.
+
+The error types that can be customized are:
+- `invalidCardinality`
+- `expectedString`
+- `expectedInteger`
+- `expectedDecimal`
+- `expectedBoolean`
+- `doesntMatchPattern`
+- `integerRangeError`
+- `decimalRangeError`
+- `valueNotAllowed`
+
+To customize the same error message for all fields, the syntax is
+simply `errorType=custom message`. A `locale.properties`
+file that looks like the following example flags all fields empty
+after submission with the error 'Required field':
+```
+invalidCardinality=Required field
+```
+
+For customizing specific fields in a form, use the format
+`fieldName.errorType=custom message`. In a form to collect a name, phone
+number, and an optional email, the following `locale.properties` file requires
+a name and phone number, and enforces a specific pattern for the phone number
+(specified in YAML).
+```
+username.invalidCardinality=Please enter your username
+phonenumber.invalidCardinality=Please enter your phone number
+phonenumber.doesntMatchPattern=Please enter your phone number with the format ###-###-####
+```
+
+
+
+## Custom Forms
+
+The look and feel of a form can be changed by providing the form's own HTML,
+CSS, JavaScript and other resources.
+
+For example, if we have a Concord file with a single form:
+```yaml
+flows:
+ default:
+ - form: myForm
+ - log: "Hello, ${myForm.name}"
+
+forms:
+ myForm:
+ - name: {type: "string"}
+```
+
+then we can provide a custom HTML for this form by placing it into
+`forms/myForm/index.html` file:
+```
+forms/
+ myForm/
+ index.html
+```
+
+When the form is activated, the server will redirect a user to the
+`index.html` file.
+
+Here's an example of what an `index.html` file could look like:
+```html
+<!doctype html>
+<html lang="en">
+<head>
+    <meta charset="utf-8">
+    <title>My Form</title>
+</head>
+<body>
+
+<h2>My Form</h2>
+
+<form id="myForm" method="post">
+    Name: <input type="text" name="name"/>
+    <input type="submit" value="Submit"/>
+</form>
+
+<script src="data.js"></script>
+<script>
+    // use the unique submit URL provided by the server
+    document.getElementById("myForm").setAttribute("action", data.submitUrl);
+</script>
+</body>
+</html>
+```
+
+Let's take a closer look:
+1. `data.js` is referenced - a JavaScript file which is generated by
+the server when the form is opened. See the
+[Accessing the data](#accessing-form-data) section for more details;
+2. `submitUrl`, a value provided in `data.js`, is used as the submit URL
+of the form. For every instance of a form, the server provides a
+unique URL;
+3. an HTML input field is added with the same name as the field of
+`myForm`.
+
+Forms can use any external JavaScript library or a CSS resource. The
+only mandatory part is to use provided `submitUrl` value.
+
+Custom forms with file uploading fields must use
+`enctype="multipart/form-data"`:
+```html
+<form id="myForm" method="post" enctype="multipart/form-data">
+```
+
+
+
+## Accessing Form Data
+
+When a custom form is opened, the server generates a `data.js` file.
+It contains values of the fields, validation error and additional
+metadata:
+```javascript
+data = {
+ "success" : false,
+ "processFailed" : false,
+ "submitUrl" : "/api/service/custom_form/f5c0ab7c-72d8-42ee-b02e-26baea56f686/cc0beb01-b42c-4991-ae6c-180de2b672e5/continue",
+ "fields" : [ "name" ],
+ "definitions" : {
+ "name": {
+ "type": "string"
+ }
+ },
+ "values" : {
+ "name": "Concord"
+ },
+ "errors" : {
+ "name": "Required value"
+ }
+};
+```
+
+The file defines a JavaScript object with the following fields:
+
+- `success` - `false` if a form submit failed;
+- `processFailed` - `true` if a process execution failed outside of
+a form;
+- `submitUrl` - automatically generated URL which should be used to
+submit new form values and resume the process;
+- `fields` - list of form field names in the order of their declaration in the
+ Concord file;
+- `definitions` - form field definitions. Each key represents a
+field:
+ - `type` - type of the field;
+ - `label` - optional label, set in the form definition;
+ - `cardinality` - required cardinality of the field's value;
+ - `allow` - allowed value(s) of the field.
+- `values` - current values of the form fields;
+- `errors` - validation error messages.
+
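+A minimal sketch (not part of Concord) of client-side code consuming this
+object, e.g. to render each declared field together with its current value
+and validation error:
+
+```javascript
+// walk data.fields in declaration order and build display-ready rows
+function renderRows(data) {
+  return data.fields.map(function (name) {
+    var def = data.definitions[name] || {};
+    return {
+      name: name,
+      label: def.label || name,          // fall back to the field name
+      value: (data.values || {})[name],
+      error: (data.errors || {})[name] || null
+    };
+  });
+}
+
+var data = {
+  success: false,
+  fields: ["name"],
+  definitions: { name: { type: "string", label: "Name" } },
+  values: {},
+  errors: { name: "Required value" }
+};
+console.log(renderRows(data));
+```
+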
+
+
+
+## File Upload
+
+Forms with `file` fields allow users to upload arbitrary files:
+
+```yaml
+forms:
+ myForm:
+ - myFile: { label: "Upload a text file", type: "file" }
+
+flows:
+ default:
+ - form: myForm
+ - log: "Path: ${myForm.myFile}"
+ - log: "Content: ${resource.asString(myForm.myFile)}"
+```
+
+After the file is uploaded, the path to the file in the workspace is stored as
+the field's value.
+
+Typically, the server limits the maximum size of uploaded files. The exact limit
+depends on the configuration of a particular environment.
+
+Custom forms must use `<form method="post" enctype="multipart/form-data">`
+in order to support file upload.
+
+
+
+## Shared Resources
+
+Custom forms can have shared resources (e.g. common scripts or CSS
+files). Those resources should be put into `forms/shared` directory
+of a process:
+```
+forms/
+ myForm/
+ index.html
+ myOtherForm/
+ image.png
+ index.html
+ shared/
+ logo.png
+ common.js
+ common.css
+```
+
+Shared resources can be referenced by forms using relative paths:
+```html
+<script src="../shared/common.js"></script>
+<link rel="stylesheet" href="../shared/common.css"/>
+<img src="../shared/logo.png"/>
+```
+
+
+
+## User Access
+
+Forms can be accessed by a user in two different ways:
+- through [the URL](../api/process.md#browser-link);
+- by clicking on the _Wizard_ button on the Console's process
+status page.
+
+In both cases, users will be redirected from form to form until the
+process finishes, an error occurs or until a form with `yield: true`
+is reached.
+
+
+
+## Restricting Forms
+
+Submitting a form can be restricted to a particular user or a group of
+users. This can be used to, but is not limited to, create flows with approval
+steps. You can configure a flow, where an action is required from a user that is
+not the process' initiator.
+
+Restricted forms can be submitted only by the specified user(s) or by the
+members of a security group - e.g. one configured in your Active
+Directory/LDAP setup.
+
+To restrict a form to specific user(s), use the `runAs` attribute. Combined
+with a boolean field rendered as a checkbox, it can change the flow depending
+on the approval or disapproval of the authorized user defined in
+`username`.
+
+```yaml
+flows:
+ default:
+ - form: approvalForm
+ runAs:
+ username: "expectedUsername"
+
+ - if: ${approvalForm.approved}
+ then:
+ - log: "Approved =)"
+ else:
+ - log: "Rejected =("
+
+forms:
+ approvalForm:
+ - approved: { type: boolean }
+```
+Multiple users can be specified under `username`:
+
+```yaml
+flows:
+ default:
+ - form: approvalForm
+ runAs:
+ username:
+ - "userA"
+ - "userB"
+```
+
+In most cases it is more practical to use groups of users to decide on the
+authorization. This can be achieved with the `group` list specified as an
+attribute of the `ldap` parameter of `runAs`. Here's how a form can be
+restricted to specific AD/LDAP groups:
+
+```yaml
+- form: approvalForm
+ runAs:
+ ldap:
+ - group: "CN=managers,.*"
+ - group: "CN=project-leads,.*"
+```
+
+The `group` element is a list of regular expressions used to match
+the user's groups. If there's at least one match - the user will be
+allowed to submit the form.
+
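+The matching logic can be sketched as follows (an illustration of the
+semantics, not Concord's actual implementation):
+
+```javascript
+// the form may be submitted if at least one of the user's groups
+// matches at least one of the configured regular expressions
+function isAllowed(userGroups, patterns) {
+  return userGroups.some(function (g) {
+    return patterns.some(function (p) { return new RegExp(p).test(g); });
+  });
+}
+
+var patterns = ["CN=managers,.*", "CN=project-leads,.*"];
+console.log(isAllowed(["CN=managers,DC=example,DC=org"], patterns));
+```
+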
+By default, after the restricted form is submitted, the process continues to run
+on behalf of the process initiator. If you need to continue the execution on
+behalf of the user who submitted the form, set the `keep` attribute
+to `true`. The `currentUser.username` variable initially contains the value of
+`initiator.username`. After a form with `keep: true` is submitted,
+`currentUser` contains the details of the user who submitted the form.
+
+```yaml
+flows:
+ default:
+ - log: "Starting as ${currentUser.username}" # the same as ${initiator.username}
+
+ - form: approvalForm
+ runAs:
+ username: "expectedUsername"
+ keep: true
+
+ - log: "Continuing as ${currentUser.username}" # the user that submitted the form
+
+forms:
+ approvalForm:
+ - approved: { type: boolean }
+```
+
+
+
+## Dynamic Forms
+
+Form fields can be declared directly at the form usage step, without creating a
+form definition. Here's a complete example:
+
+```yaml
+flows:
+ default:
+ - form: myForm
+ fields:
+ - firstName: {type: "string"}
+ - lastName: {type: "string"}
+ - log: "Hello, ${myForm.firstName} ${myForm.lastName}"
+```
+
+The `fields` parameter expects a list of form field definitions just like the
+regular `forms` section. The list of fields can be stored as a variable and
+referenced using an expression:
+
+```yaml
+configuration:
+ arguments:
+ myFormFields:
+ - firstName: {type: "string"}
+ - lastName: {type: "string"}
+flows:
+ default:
+ - form: myForm
+ fields: ${myFormFields}
+```
+
+With the usage of a [script](./scripting.md), the fields can be set dynamically at
+process runtime, resulting in a dynamic form. A number of examples are available
+in the
+[dynamic_form_fields project]({{ site.concord_source }}tree/master/examples/dynamic_form_fields).
+
+## Using API
+
+Forms can be retrieved and submitted using [the REST API](../api/form.md).
+A form can be submitted either by posting JSON data or by using
+`multipart/form-data` requests which also support file upload.
+
+## Examples
+
+The Concord repository contains a couple of examples on how to use
+custom and regular forms:
+
+- [single form]({{ site.concord_source }}tree/master/examples/forms)
+- [custom form]({{ site.concord_source }}tree/master/examples/custom_form)
+- [custom form with no external dependencies]({{ site.concord_source }}tree/master/examples/custom_form_basic)
+- [custom form with dynamic fields]({{ site.concord_source }}tree/master/examples/dynamic_forms)
+- [approval-style flow]({{ site.concord_source }}tree/master/examples/approval)
+
diff --git a/docs/src/getting-started/images/concord_top_level.png b/docs/src/getting-started/images/concord_top_level.png
new file mode 100644
index 0000000000..118dd7bca8
Binary files /dev/null and b/docs/src/getting-started/images/concord_top_level.png differ
diff --git a/docs/src/getting-started/index.md b/docs/src/getting-started/index.md
new file mode 100644
index 0000000000..4e56226d52
--- /dev/null
+++ b/docs/src/getting-started/index.md
@@ -0,0 +1,101 @@
+# Overview
+
+
+
+# Main Concepts
+
+Concord is a workflow server. It is the orchestration engine that connects
+different systems together using scenarios and plugins created by users.
+
+Check out [the overview document](../../overview/index.md) for more
+information about features and benefits of Concord.
+
+## Processes
+
+Processes are the main concept of Concord. A process is an execution of
+[Concord Flows](../processes-v2/flows.md) in an isolated environment.
+
+A process can run in a [project](#projects), thus sharing configuration and
+resources (such as [the KV store]({{ site.concord_plugins_v2_docs }}/key-value.md)) with other
+processes in the same project.
+
+Processes can be suspended (typically using a [form](./forms.md)) and resumed.
+While suspended, a process does not consume any resources apart from DB
+disk space. See the [Process Overview](../processes-v2/index.md) section for
+more details about the lifecycle of Concord processes.
+
+## Projects
+
+A project is a way to group processes and share common environment and
+configuration.
+
+## Secrets
+
+Concord provides an API and [the plugin]({{ site.concord_plugins_v2_docs }}/crypto.md) to work with
+secrets such as:
+- SSH keys;
+- username/password pairs;
+- single value secrets (e.g. API tokens);
+- binary data (files).
+
+Secrets can optionally be protected by a user-provided password.
+
+## Users and Teams
+
+Concord can use an Active Directory/LDAP server or the local user store
+for authentication. Team-based authorization can be used to secure various
+resources.
+
+## Organizations
+
+Organizations are essentially namespaces to which resources such as projects,
+secrets, teams and others belong.
+
+# Components
+
+[Concord](../../overview/index.md) consists of several components. The four
+main components are:
+- [Server](#concord-server) - the central component, manages the process state
+and resources;
+- [Console](#concord-console) - provides UI for project and process management,
+etc;
+- [Agent](#concord-agent) - executes user processes;
+- [Database](#database) - stores the process state, all Concord entities, logs,
+etc.
+
+## Concord Server
+
+The Server is the central component. It provides [the API](../api/index.md) which
+is used to control processes, manage entities such as projects and secrets,
+etc.
+
+A minimal Concord installation contains at least one Server. Multiple servers
+can run in active-active or active-passive configurations.
+
+## Concord Console
+
+The Console provides a user interface for managing processes, projects,
+secrets and other entities.
+
+Read more about the console [here](../console/index.md).
+
+## Concord Agent
+
+The Agent is responsible for process execution. It receives workload from
+[Server](#concord-server) and, depending on the configuration of the job,
+starts processes in separate JVMs and/or Docker containers.
+
+Depending on [the configuration](./configuration.md#agent-cfg-file) a single
+agent can execute one or many jobs simultaneously.
+
+
+
+A single Concord installation can have hundreds of Agents. It is also possible
+to have Agents with different capabilities (e.g. running on different hardware)
+connected to a single Concord instance, which is useful when you need to run
+resource-intensive processes such as Ansible with lots of `forks`.
+
+## Database
+
+Concord uses [PostgreSQL](https://www.postgresql.org/) (10.4 or higher) to
+store process state, logs and entities such as projects and secrets.
diff --git a/docs/src/getting-started/install/docker-compose.md b/docs/src/getting-started/install/docker-compose.md
new file mode 100755
index 0000000000..992fb78f45
--- /dev/null
+++ b/docs/src/getting-started/install/docker-compose.md
@@ -0,0 +1,31 @@
+# Docker Compose
+
+The [Docker Compose template]({{ site.concord_source }}tree/master/docker-images/compose)
+can be used to quickly spin up a local Concord environment.
+
+## Prerequisites
+
+- [Docker CE](https://docs.docker.com/get-docker/) 19.03+
+- [Docker Compose](https://docs.docker.com/compose/) 1.26.0+
+
+## Usage
+
+```
+$ git clone https://github.com/walmartlabs/concord.git
+$ cd concord/docker-images/compose/
+$ docker-compose up
+```
+
+Check the server container's log for the autogenerated admin token:
+
+```
+concord-server_1 | >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+concord-server_1 |
+concord-server_1 | Admin API token created: **********
+concord-server_1 |
+concord-server_1 | (don't forget to remove it in production)
+concord-server_1 |
+concord-server_1 | >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+```
+
+Use http://localhost:8001/#/login?useApiKey=true to log in with the API token.
diff --git a/docs/src/getting-started/install/docker.md b/docs/src/getting-started/install/docker.md
new file mode 100755
index 0000000000..60250ed021
--- /dev/null
+++ b/docs/src/getting-started/install/docker.md
@@ -0,0 +1,193 @@
+# Docker
+
+Pre-built Docker images can be used to run the main components of Concord:
+
+- [the Database](https://hub.docker.com/_/postgres)
+- [Concord Server](https://hub.docker.com/r/walmartlabs/concord-server)
+- [Concord Agent](https://hub.docker.com/r/walmartlabs/concord-agent)
+
+**Note:** starting from 1.36.0 the `concord-console` image is no longer needed.
+The UI is served by the `concord-server` container. For older versions check
+[the previous revision](https://github.com/walmartlabs/concord-website/blob/fcb31e1931541f5930913efce0503ce5d8b83f4b/docs/getting-started/install/docker.md)
+of this document.
+
+## Prerequisites
+
+### Docker
+
+If you do not already have Docker installed, find binaries and instructions
+on the [Docker website](https://www.docker.com/). You can install Docker through
+various methods that are outside the scope of this document.
+
+### Referencing a Private Docker Registry
+
+If you are using a private Docker registry, add its name to an image name in
+the examples below. For example, if your private docker registry is running
+on `docker.myorg.com` this command:
+
+```bash
+docker run ... walmartlabs/concord-agent
+```
+
+has to be run as:
+
+```bash
+docker run ... docker.myorg.com/walmartlabs/concord-agent
+```
+
+## Starting Concord Docker Images
+
+Concord consists of three runtime components: the Server (which also serves
+the Console), the Database and an Agent. Follow these steps to start all three
+components and run a simple process to test your Concord instance.
+
+
+
+### Step 1. Start the Database
+
+Concord uses [PostgreSQL](https://www.postgresql.org/) version 10.4 or higher:
+
+```bash
+docker run -d \
+-e 'POSTGRES_PASSWORD=q1' \
+-p 5432:5432 \
+--name db \
+library/postgres:10.4
+```
+
+Verify that the Database is running and ready to accept connections:
+
+```bash
+$ docker ps -a
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+c3b438edc980 postgres:10.4 "docker-entrypoint.s…" 3 seconds ago Up 1 second 5432/tcp db
+
+$ psql -U postgres -h localhost -p 5432 postgres
+Password for user postgres: (enter "q1")
+
+postgres=# select 1;
+ ?column?
+----------
+ 1
+(1 row)
+```
+
+
+
+### Step 2. Create the Server's Configuration File
+
+Create a `server.conf` file somewhere on the local filesystem with the
+following content:
+
+```
+concord-server {
+ db {
+ url="jdbc:postgresql://db:5432/postgres"
+ appPassword = "q1"
+ inventoryPassword = "q1"
+ }
+
+ secretStore {
+ serverPassword = "cTE="
+ secretStoreSalt = "cTE="
+ projectSecretSalt = "cTE="
+ }
+
+ # AD/LDAP authentication
+ ldap {
+ url = "ldaps://AD_OR_LDAP_HOST:3269"
+ searchBase = "DC=myorg,DC=com"
+ principalSearchFilter = "(&(sAMAccountName={0})(objectCategory=person))"
+ userSearchFilter = "(&(|(sAMAccountName={0}*)(displayName={0}*))(objectCategory=person))"
+ usernameProperty = "sAMAccountName"
+ systemUsername = "me@myorg.com"
+ systemPassword = "secret"
+ }
+}
+```
+
+Make sure that the `db` section contains the same password you specified on
+[Step 1](#step-1).
+
+The `secretStore` parameters define the keys that are used
+to encrypt user secrets. The keys must be base64-encoded:
+
+```bash
+$ echo -ne "q1" | base64
+cTE=
+```
+
+The `ldap` section parameters depend on your organization's Active Directory
+or LDAP server setup. If you wish to use a local OpenLDAP instance, follow the
+[Using OpenLDAP for Authentication](../development.md#oldap) guide.
+
+The configuration file format and the available parameters are described in
+the [Configuration](../configuration.md) document.
+
+### Step 3. Start the Concord Server
+
+```bash
+docker run -d \
+-p 8001:8001 \
+--name server \
+--link db \
+-v /path/to/server.conf:/opt/concord/conf/server.conf:ro \
+-e CONCORD_CFG_FILE=/opt/concord/conf/server.conf \
+walmartlabs/concord-server
+```
+
+Replace `/path/to/server.conf` with the path to the file created on
+[Step 2](#step-2).
+
+Check the server's status:
+
+```bash
+$ docker logs server
+...
+14:38:17.866 [main] [INFO ] com.walmartlabs.concord.server.Main - main -> started in 5687ms
+...
+
+$ curl -i http://localhost:8001/api/v1/server/version
+...
+{
+ "version" : "0.99.0",
+ "env" : "n/a",
+ "ok" : true
+}
+```
+
+The API and the Console are available on [http://localhost:8001](http://localhost:8001).
+Try logging in using your AD/LDAP credentials.
+
+### Step 4. Start a Concord Agent
+
+```bash
+docker run -d \
+--name agent \
+--link server \
+-e SERVER_API_BASE_URL=http://server:8001 \
+-e SERVER_WEBSOCKET_URL=ws://server:8001/websocket \
+walmartlabs/concord-agent
+```
+
+Check the agent's status:
+
+```bash
+$ docker logs agent
+...
+4:41:45.530 [queue-client] [INFO ] c.w.c.server.queueclient.QueueClient - connect ['ws://server:8001/websocket'] -> done
+...
+```
+
+## First Project
+
+As a next step you can create your first project as detailed in the
+[quickstart guide](../quickstart.md).
+
+## Clean Up
+
+Once you have explored Concord you can stop and remove the containers.
+
+```bash
+docker rm -f agent server db
+```
diff --git a/docs/src/getting-started/install/index.md b/docs/src/getting-started/install/index.md
new file mode 100644
index 0000000000..3b91ae1f95
--- /dev/null
+++ b/docs/src/getting-started/install/index.md
@@ -0,0 +1,7 @@
+# Installation
+
+There are several options to install Concord:
+
+- using [Docker Compose](./docker-compose.md)
+- manually with [Docker](./docker.md)
+- [Vagrant](./vagrant.md)
diff --git a/docs/src/getting-started/install/vagrant.md b/docs/src/getting-started/install/vagrant.md
new file mode 100755
index 0000000000..6f04aa1e67
--- /dev/null
+++ b/docs/src/getting-started/install/vagrant.md
@@ -0,0 +1,20 @@
+# Vagrant
+
+Requires Vagrant 2.2+.
+
+To start Concord using [Vagrant](https://www.vagrantup.com/):
+
+```bash
+$ git clone https://github.com/walmartlabs/concord.git
+$ cd concord/vagrant
+$ vagrant up
+```
+
+It starts Concord using the latest available Docker images.
+OpenLDAP running in a container will be used for authentication.
+
+The `vagrant up` command might take a while, typically around five minutes,
+depending on the speed of your internet connection.
+
+Refer to the [README.md](https://github.com/walmartlabs/concord/blob/master/vagrant/README.md)
+file for more details.
diff --git a/docs/src/getting-started/installation.md b/docs/src/getting-started/installation.md
new file mode 100755
index 0000000000..3279958137
--- /dev/null
+++ b/docs/src/getting-started/installation.md
@@ -0,0 +1,71 @@
+# Installation
+
+There are several options to install Concord:
+
+- using [Docker Compose](./install/docker-compose.md)
+- manually with [Docker](./install/docker.md)
+- [Vagrant](./install/vagrant.md)
+
+Once you have finished these installation steps, or if you already have
+access to a Concord deployment, read the [Introduction to Concord](./index.md)
+to understand the basic concepts of Concord or set up your first project with
+the [quick start tips](./quickstart.md).
+
+## Database Requirements
+
+Concord requires PostgreSQL 10.4 or higher. The Server's configuration file
+provides several important DB connectivity options described below.
+
+### Default Admin API Token
+
+By default, Concord Server automatically generates the default admin API token
+and prints it out in the log on the first startup. Alternatively, the token's
+value can be specified in [the Server's configuration file](./configuration.md#server-configuration-file):
+
+```
+concord-server {
+ db {
+ changeLogParameters {
+ defaultAdminToken = "...any base64 value..."
+ }
+ }
+}
+```
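+
+Any base64 string works as the token value. For example, one way to generate a
+random token (an illustrative snippet, not the server's own generation logic):
+
+```python
+import base64
+import secrets
+
+# produce 16 random bytes and base64-encode them; any base64 value is
+# acceptable as defaultAdminToken, this is just one way to obtain one
+token = base64.b64encode(secrets.token_bytes(16)).decode("ascii")
+print(token)
+```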
+
+### Schema Migration
+
+Concord Server automatically applies DB schema changes every time it starts.
+By default, it requires `SUPERUSER` privileges to install additional extensions
+and to perform certain migrations.
+
+To deploy the schema using a non-superuser account:
+
+- create a non-superuser account and install required extensions:
+ ```sql
+ create extension if not exists "uuid-ossp";
+ create extension if not exists "pg_trgm";
+
+ create user app with password '...app password...';
+
+ grant all privileges on schema public to app;
+ ```
+- specify the following options in [the Server's configuration file](./configuration.md#server-configuration-file):
+
+ ```
+ concord-server {
+ db {
+ url = "jdbc:postgresql://host:5432/postgres"
+
+ appUsername = "app"
+ appPassword = "...app password..."
+
+ inventoryUsername = "app"
+ inventoryPassword = "...app password..."
+
+ changeLogParameters {
+ superuserAvailable = "false"
+ createExtensionAvailable = "false"
+ }
+ }
+ }
+ ```
diff --git a/docs/src/getting-started/json-store.md b/docs/src/getting-started/json-store.md
new file mode 100644
index 0000000000..fb2c93e9d0
--- /dev/null
+++ b/docs/src/getting-started/json-store.md
@@ -0,0 +1,152 @@
+# JSON Store
+
+JSON Store provides a built-in mechanism for storing and querying arbitrary
+JSON data persistently. It is useful for processes which require
+state management beyond regular variables or features provided by
+[the Key Value store]({{ site.concord_plugins_v2_docs }}/key-value.md).
+
+**Note:** JSON Store supersedes the old Inventory and Inventory Query APIs.
+Existing users are encouraged to switch to the JSON Store API. The data created
+using the old API is available through both the Inventory and JSON Store APIs.
+
+## Concepts
+
+Any Concord [organization](./orgs.md) can contain multiple JSON stores.
+Each store must have a name that's unique for that organization. Just like projects
+or secrets, JSON stores can be either _public_ or _private_. Data in public
+stores can be read by any user in the same organization as the store.
+Private stores require explicit access rules.
+
+The total size of a store and the maximum allowed number of stores can be
+restricted using [policies](./policies.md#json-store-rule).
+
+Each store can contain multiple _items_. Each item is a well-formed JSON
+document -- Concord performs syntax validation whenever a document is added or
+updated. Documents are identified by their "path" in the store; each path must
+be unique and can contain only one document.
+
+Items can be added or retrieved using [the API](../api/json-store.md),
+by using [the JSON Store task]({{ site.concord_plugins_v2_docs }}/json-store.md) or using
+[named queries](#named-queries).
+
+## Named Queries
+
+Named queries can be used to retrieve multiple items at once, perform
+aggregations and filtering on the fly.
+
+Queries use SQL:2011 syntax with [PostgreSQL 10 extensions for JSON](https://www.postgresql.org/docs/10/functions-json.html).
+When executing a query, Concord automatically limits it to the query's store by
+adding the store ID condition. All queries are read-only and can only access
+the `JSON_STORE_DATA` table.
+
+Query parameters can be passed as JSON objects when the query is executed. Note
+that only valid JSON objects are allowed. If you wish to pass an array or a
+literal value as a query parameter then you need to wrap it into an object (see
+[the example below](#example)).
+
+Queries can be created and executed by using [the API](../api/json-store.md),
+by using [the task]({{ site.concord_plugins_v2_docs }}/json-store.md#execute-a-named-query) or in the
+Concord Console, which provides a way to execute and preview results of a query
+before saving it.
+
+The result of execution is a JSON array of rows returned by the query. All
+values must be representable in JSON - strings, numbers, booleans, arrays and
+objects. Currently, there are no limitations on how many rows or columns a query
+can return (subject to change).
+
+## Limitations
+
+The following PostgreSQL JSON(b) operators are not supported: `?`, `?|` and `?&`.
+
+Query arguments are not supported when executing queries in the Concord Console.
+
+## Example
+
+Let's create a simple user database of some fictional services. All operations
+except uploading the data can be performed in the Concord Console, but we're
+going to use `curl` for this example.
+
+The example uses the `Default` Concord organization. Depending on your Concord
+instance's configuration it might not be available. In this case, replace
+`Default` with the name of your organization.
+
+First, create a store:
+
+```
+$ curl -ikn -X POST \
+-H 'Content-Type: application/json' \
+-d '{"name": "myStore"}' \
+https://concord.example.com/api/v1/org/Default/jsonstore
+
+{
+ "result" : "CREATED",
+ "ok" : true
+}
+```
+
+Then we can add some data into the new store:
+
+```
+$ curl -ikn -X PUT \
+-H 'Content-Type: application/json' \
+-d '{"service": "service_a", "users": ["bob", "alice"]}' \
+https://concord.example.com/api/v1/org/Default/jsonstore/myStore/item/service_a
+
+$ curl -ikn -X PUT \
+-H 'Content-Type: application/json' \
+-d '{"service": "service_b", "users": ["alice", "mike"]}' \
+https://concord.example.com/api/v1/org/Default/jsonstore/myStore/item/service_b
+```
+
+Check if the data is there:
+
+```
+$ curl -ikn https://concord.example.com/api/v1/org/Default/jsonstore/myStore/item/service_a
+
+{"users": ["bob", "alice"], "service": "service_a"}
+```
+
+Now let's create a simple named query that we can use to find a `service` value
+by user.
+
+First, create a JSON file with the query definition:
+
+```json
+{
+ "name": "lookupServiceByUser",
+ "text": "select item_data->'service' from json_store_data where item_data @> ?::jsonb"
+}
+```
+
+Next, register the query:
+
+```
+$ curl -ikn -X POST \
+-H 'Content-Type: application/json' \
+-d @/tmp/query.json \
+https://concord.example.com/api/v1/org/Default/jsonstore/myStore/query
+```
+
+(replace `/tmp/query.json` with the path of the created file).
+
+Execute the query:
+
+```
+$ curl -ikn -X POST \
+-H 'Content-Type: application/json' \
+-d '{ "users": ["mike"] }' \
+https://concord.example.com/api/v1/org/Default/jsonstore/myStore/query/lookupServiceByUser/exec
+
+[ "service_b" ]
+```
+
+Let's take a closer look at the query:
+
+```sql
+select item_data->'service' from json_store_data where item_data @> ?::jsonb
+```
+
+We passed `{ "users": ["mike"] }` as the query parameter. If there's a document
+with a `users` property that contains a string value `mike` then the `service`
+value of the same document is returned. In this case, the query returns
+`[ "service_b" ]`.
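+
+The `@>` (contains) check can be modeled in a few lines of Python. This is a
+simplified sketch of PostgreSQL's jsonb containment semantics, not the actual
+implementation:
+
+```python
+def jsonb_contains(doc, query):
+    # simplified model of PostgreSQL's jsonb @> operator: every key/value
+    # pair of `query` must be present in `doc`; for arrays, each query
+    # element must be contained in at least one element of the doc's array
+    if isinstance(query, dict):
+        return isinstance(doc, dict) and all(
+            k in doc and jsonb_contains(doc[k], v) for k, v in query.items())
+    if isinstance(query, list):
+        return isinstance(doc, list) and all(
+            any(jsonb_contains(d, q) for d in doc) for q in query)
+    return doc == query
+
+items = [
+    {"service": "service_a", "users": ["bob", "alice"]},
+    {"service": "service_b", "users": ["alice", "mike"]},
+]
+
+# mimics: select item_data->'service' from json_store_data
+#         where item_data @> '{"users": ["mike"]}'
+print([i["service"] for i in items if jsonb_contains(i, {"users": ["mike"]})])
+# prints ['service_b']
+```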
diff --git a/docs/src/getting-started/node-roster.md b/docs/src/getting-started/node-roster.md
new file mode 100644
index 0000000000..abeed50bbb
--- /dev/null
+++ b/docs/src/getting-started/node-roster.md
@@ -0,0 +1,89 @@
+# Node Roster
+
+Node Roster is an optional feature of Concord. It collects
+[Ansible]({{ site.concord_plugins_v2_docs }}/ansible.md) deployment data which is exposed via the API
+and a flow [task]({{ site.concord_plugins_v2_docs }}/node-roster.md).
+
+Node Roster requires a minimum [Ansible Plugin]({{ site.concord_plugins_v2_docs }}/ansible.md)
+version of 1.38.0. No further configuration is required for usage.
+
+## Features
+
+- automatically processes Ansible events and collects deployment data such as:
+ - remote hosts and their [Ansible facts](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts);
+ - deployed [artifacts](#supported-modules);
+ - deployers (users)
+- provides a way to fetch the collected data using [API](../api/node-roster.md)
+ or the `nodeRoster` [task]({{ site.concord_plugins_v2_docs }}/node-roster.md).
+
+## Supported Modules
+
+Node Roster supports the following Ansible modules:
+- [get_url](https://docs.ansible.com/ansible/latest/modules/get_url_module.html)
+- [maven_artifact](https://docs.ansible.com/ansible/latest/modules/maven_artifact_module.html)
+- [uri](https://docs.ansible.com/ansible/latest/modules/uri_module.html)
+
+Future versions will further extend this list.
+
+## Example
+
+A simple example of a flow and a playbook that downloads a remote file and
+saves it to a directory on the remote host.
+
+The flow:
+```yaml
+# concord.yml
+configuration:
+ dependencies:
+ - "mvn://com.walmartlabs.concord.plugins.basic:ansible-tasks:{{ site.concord_core_version }}"
+
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: playbook.yml
+ inventory:
+ myHosts:
+ hosts:
+ - "myhost.example.com"
+ vars:
+ ansible_connection: "local" # just for example purposes, don't actually connect
+ extraVars:
+ artifactDest: "${workDir}"
+```
+
+The playbook:
+```yaml
+# playbook.yml
+---
+- hosts: myHosts
+ tasks:
+ - get_url:
+ url: "http://central.maven.org/maven2/com/walmartlabs/concord/concord-cli/{{ site.concord_core_version }}/concord-cli-{{ site.concord_core_version }}-executable.jar"
+ dest: "{% raw %}{{ artifactDest }}{% endraw %}"
+```
+
+To run the example, either put it into a Git repository and follow
+the [Quick Start guide](../getting-started/quickstart.md) or start it using `curl` (in the directory with
+`concord.yml` and `playbook.yml`):
+```
+$ curl -i -u CONCORD_USER \
+-F concord.yml=@concord.yml \
+-F playbook.yml=@playbook.yml \
+https://concord.example.com/api/v1/process
+```
+
+Open the Concord UI to check the process status. After the process finishes,
+try one of the Node Roster endpoints:
+```
+$ curl -i -u CONCORD_USER https://concord.example.com/api/v1/noderoster/artifacts?hostName=myhost.example.com
+HTTP/1.1 200 OK
+...
+[ {
+ "url" : "http://central.maven.org/maven2/com/walmartlabs/concord/concord-cli/{{ site.concord_core_version }}/concord-cli-{{ site.concord_core_version }}-executable.jar"
+} ]
+```
+
+The API endpoint in the example returns a list of artifacts that were deployed
+to the specified host. Check [the API documentation](../api/node-roster.md)
+for the complete list of endpoints.
diff --git a/docs/src/getting-started/orgs.md b/docs/src/getting-started/orgs.md
new file mode 100644
index 0000000000..eab3b44eac
--- /dev/null
+++ b/docs/src/getting-started/orgs.md
@@ -0,0 +1,43 @@
+# Organizations and Teams
+
+Concord implements role-based access control using Organizations and Teams.
+
+- [Organizations](#organizations)
+- [Teams](#teams)
+- [Public and Private Resources](#public-and-private-resources)
+
+## Organizations
+
+Organizations own resources such as projects, secrets, inventories and
+processes. Organizations contain one or more [teams](#teams).
+
+Organizations are created by Concord administrators using [the REST API](../api/org.md).
+
+## Teams
+
+Teams are part of an Organization and represent groups of users. Users in
+teams can have different **team roles**:
+- `MEMBER` - a regular team member, has access to the team's resources, but
+cannot invite other users to the team or manage organizations;
+- `MAINTAINER` - has the same permissions as a `MEMBER` and can manage users
+in the team;
+- `OWNER` - has the same permissions as a `MAINTAINER`, but, in addition, can
+manage the team's Organization.
+
+Teams have different **access levels** to the Organization's resources:
+- `READER` - can use the resource;
+- `WRITER` - can use and modify the resource;
+- `OWNER` - can use, modify or remove the resource.
+
+## Public and Private Resources
+
+Resources such as projects, secrets and inventories can have different
+**visibility**:
+- `PUBLIC` - any Concord user can access and use the resource. For example,
+a public project can be used by anyone to start a new process.
+- `PRIVATE` - only [teams](#teams) that have an appropriate **access level**
+can use the resource.
+
+If a public project references another resource, for example, a secret used
+to retrieve the project's repository, the referenced resource must be `PUBLIC`
+as well or have an appropriate access level set up.
\ No newline at end of file
diff --git a/docs/src/getting-started/policies.md b/docs/src/getting-started/policies.md
new file mode 100644
index 0000000000..24a4057f1f
--- /dev/null
+++ b/docs/src/getting-started/policies.md
@@ -0,0 +1,897 @@
+# Policies
+
+Policies are a powerful and flexible mechanism to control different
+characteristics of processes and system entities.
+
+- [Overview](#overview)
+- [Document Format](#document-format)
+- [Attachment Rule](#attachment-rule)
+- [Ansible Rule](#ansible-rule)
+- [Dependency Rule](#dependency-rule)
+- [Dependency Rewrite Rule](#dependency-rewrite-rule)
+- [Dependency Versions Rule](#dependency-versions-rule)
+- [Entity Rule](#entity-rule)
+- [File Rule](#file-rule)
+- [JSON Store Rule](#json-store-rule)
+- [Process Configuration Rule](#process-configuration-rule)
+- [Default Process Configuration Rule](#default-process-configuration-rule)
+- [Task Rule](#task-rule)
+- [Workspace Rule](#workspace-rule)
+- [Runtime Rule](#runtime-rule)
+- [RawPayload Rule](#rawpayload-rule)
+- [CronTrigger Rule](#crontrigger-rule)
+
+## Overview
+
+A policy is a JSON document describing rules that can affect the execution of
+processes, control the creation of entities such as projects and secrets,
+define limits for the process queue, etc.
+
+Policies can be applied system-wide as well as linked to an organization, a
+specific project or to a user.
+
+Policies can be created using the [Policy API](../api/policy.md). Currently,
+only the users with the administrator role can create or link policies.
+
+Policies can inherit other policies. In this case, the parent policies are
+applied first, going from the "oldest" ancestors to the latest link.
+
+## Document Format
+
+There are two types of objects in the policy document: `allow/deny/warn` actions
+and free-form groups of attributes:
+
+```json
+{
+ "[actionRules]": {
+ "deny": [
+ {
+ ...rule...
+ }
+ ],
+ "warn": [
+ {
+ ...rule...
+ }
+ ],
+ "allow": [
+ {
+ ...rule...
+ }
+ ]
+ },
+
+ "[anotherRule]": {
+ ...rule...
+ }
+}
+```
+
+Here's the list of currently supported rules:
+- [ansible](#ansible-rule) - controls the execution of
+  [Ansible]({{ site.concord_plugins_v2_docs }}/ansible.md) plays;
+- [attachments](#attachment-rule) - limits the size of process attachments;
+- [dependency](#dependency-rule) - applies rules to process dependencies;
+- [dependencyRewrite](#dependency-rewrite-rule) - rewrites process
+  dependencies;
+- [dependencyVersions](#dependency-versions-rule) - maps `latest` dependency
+  versions to actual values;
+- [entity](#entity-rule) - controls creation or update of entities
+  such as organizations, projects and secrets;
+- [file](#file-rule) - applies to process files;
+- [jsonStore](#json-store-rule) - controls the parameters of
+  [JSON stores](./json-store.md);
+- [processCfg](#process-configuration-rule) - allows changing the process'
+  `configuration` values;
+- [defaultProcessCfg](#default-process-configuration-rule) - sets initial
+  process `configuration` values;
+- [queue](#queue-rule) - controls the process queue behaviour;
+- [task](#task-rule) - applies rules to flow tasks;
+- [workspace](#workspace-rule) - controls the size of the workspace.
+
+## Attachment Rule
+
+Attachment rules allow you to limit the size of
+[process attachments](../api/process.md#downloading-an-attachment).
+
+The syntax:
+
+```json
+{
+ "attachments": {
+ "msg": "The size of process attachments exceeds the allowed value: current {0} byte(s), limit {1} byte(s)",
+ "maxSizeInBytes": 1024
+ }
+}
+```
+
+Concord applies the limit to all files stored in the process'
+`${workDir}/_attachments` directory, including the process state
+files (variables, flow state, etc) and all files created during
+the execution of the process.
+
+## Ansible Rule
+
+Ansible rules allow you to control the execution of
+[Ansible]({{ site.concord_plugins_v2_docs }}/ansible.md) plays.
+
+The syntax:
+
+```json
+{
+ "action": "ansibleTaskName",
+ "params": [
+ {
+ "name": "paramName",
+ "values": ["arrayOfValues"]
+ }
+ ],
+ "msg": "optional message"
+}
+```
+
+The `action` attribute defines the name of the Ansible step and the `params`
+object is matched with the step's input parameters. The error message can be
+specified using the `msg` attribute.
+
+For example, to forbid a certain URI from being used in the Ansible's
+[get_url](https://docs.ansible.com/ansible/2.6/modules/get_url_module.html)
+step:
+
+```json
+{
+ "ansible": {
+ "deny": [
+ {
+ "action": "get_url",
+ "params": [
+ {
+ "name": "url",
+ "values": ["https://jsonplaceholder.typicode.com/todos"]
+ }
+ ],
+ "msg": "Found a forbidden URL"
+ }
+ ]
+ }
+}
+```
+
+If someone tries to use the forbidden URL in their `get_url`, they see a
+message in the process log:
+
+```
+ANSIBLE: [ERROR]: Task 'get_url (get_url)' is forbidden by the task policy: Found a
+ANSIBLE: forbidden URL
+```
+
+The Ansible rule supports [regular JUEL expressions](../processes-v1/flows.md#expressions)
+which are evaluated each time the Ansible plugin starts using the current
+process' context. This allows users to create context-aware Ansible policies:
+
+```json
+{
+ "ansible": {
+ "deny": [
+ {
+ "action": "maven_artifact",
+ "params": [
+ {
+            "name": "artifact_url",
+ "values": ["${mySecretTask.getForbiddenArtifacts()}"]
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+**Note:** the `artifact_url` from the example above is not a standard
+[maven_artifact](https://docs.ansible.com/ansible/2.6/modules/maven_artifact_module.html)
+step's parameter. It is created dynamically from the supplied values of
+`repository_url`, `group_id`, `artifact_id`, etc.
+
+## Dependency Rule
+
+Dependency rules provide a way to control which process dependencies are allowed
+for use.
+
+The syntax:
+
+```json
+{
+ "scheme": "...scheme...",
+ "groupId": "...groupId...",
+ "artifactId": "...artifactId...",
+ "fromVersion": "1.0.0",
+ "toVersion": "1.1.0",
+ "msg": "optional message"
+}
+```
+
+The attributes:
+
+- `scheme` - the dependency URL scheme. For example: `http` or `mvn`;
+- `groupId` and `artifactId` - parts of the dependency's Maven GAV (only for
+`mvn` dependencies);
+- `fromVersion` and `toVersion` - define the version range (only for `mvn`
+dependencies).
+
+For example, restricting a specific version range of a plugin can be done like
+so:
+
+```json
+{
+ "dependency": {
+ "deny": [
+ {
+ "groupId": "com.walmartlabs.concord.plugins.basic",
+ "artifactId": "ansible-tasks",
+ "toVersion": "1.13.1",
+        "msg": "Usage of ansible-tasks <= 1.13.1 is forbidden"
+ }
+ ]
+ }
+}
+```
+
+In this example, all versions of the `ansible-tasks` dependency up to and
+including `1.13.1` are rejected.
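+
+The range check can be sketched as follows. This is a simplified, numeric-only
+model assuming inclusive bounds; Concord's actual comparison follows Maven
+version ordering:
+
+```python
+def parse(version):
+    return tuple(int(part) for part in version.split("."))
+
+def in_deny_range(version, from_version=None, to_version=None):
+    # both bounds are optional; a missing bound leaves that side open
+    v = parse(version)
+    if from_version is not None and v < parse(from_version):
+        return False
+    if to_version is not None and v > parse(to_version):
+        return False
+    return True
+
+print(in_deny_range("1.13.0", to_version="1.13.1"))  # True -> denied
+print(in_deny_range("1.14.0", to_version="1.13.1"))  # False -> allowed
+```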
+
+Another example warns users every time they try to use non-`mvn`
+dependencies:
+
+```json
+{
+  "dependency": {
+ "warn": [
+ {
+ "msg": "Using direct dependency URLs is not recommended. Consider using mvn:// dependencies.",
+ "scheme": "^(?!mvn.*$).*"
+ }
+ ]
+ }
+}
+```
+
+## Dependency Rewrite Rule
+
+Dependency rewrite rules provide a way to change dependency artifacts (e.g. dependency versions).
+
+The syntax:
+
+```json
+{
+ "msg": "optional message",
+ "groupId": "...groupId...",
+ "artifactId": "...artifactId...",
+  "fromVersion": "...optional lower bound (inclusive) of version...",
+  "toVersion": "...optional upper bound (inclusive) of version...",
+  "value": "mvn://new dependency artifact"
+}
+```
+
+The attributes:
+
+- `groupId` and `artifactId` - parts of the dependency's Maven GAV;
+- `fromVersion` and `toVersion` - define the version range;
+- `value` - new dependency value.
+
+For example, updating the Groovy dependency version to `2.5.21`:
+
+```json
+{
+ "dependencyRewrite": [
+ {
+ "groupId": "org.codehaus.groovy",
+ "artifactId": "groovy-all",
+ "toVersion": "2.5.20",
+ "value": "mvn://org.codehaus.groovy:groovy-all:pom:2.5.21"
+ }
+ ]
+}
+```
+
+## Dependency Versions Rule
+
+The dependency versions rule provides a way to map `latest` version tags of
+[process dependencies](../processes-v1/configuration.md#dependencies) to
+actual version values.
+
+The syntax:
+
+```json
+[
+ {
+ "artifact": "...groupId:artifactId...",
+ "version": "...version"
+ },
+
+ {
+ "artifact": "...groupId:artifactId...",
+ "version": "...version"
+ },
+ ...
+]
+```
+
+The attributes:
+- `artifact` - Maven's `groupId` and `artifactId` values, separated by colon `:`;
+- `version` - the artifact's version to use instead of the `latest` tag.
+
+For example:
+
+```json
+{
+ "dependencyVersions": [
+ {
+ "artifact": "com.walmartlabs.concord.plugins.basic:ansible-tasks",
+ "version": "{{ site.concord_core_version }}"
+ },
+
+ {
+ "artifact": "mvn://com.walmartlabs.concord.plugins:jenkins-task",
+ "version": "{{ site.concord_plugins_version }}"
+ }
+ ]
+}
+```
+
+If a process specifies `latest` instead of the version:
+
+```yaml
+configuration:
+ dependencies:
+ - "mvn://com.walmartlabs.concord.plugins.basic:ansible-tasks:latest"
+ - "mvn://com.walmartlabs.concord.plugins:jenkins-task:latest"
+```
+
+the effective dependency list is:
+
+```yaml
+- "mvn://com.walmartlabs.concord.plugins.basic:ansible-tasks:{{ site.concord_core_version }}"
+- "mvn://com.walmartlabs.concord.plugins:jenkins-task:{{ site.concord_plugins_version }}"
+```
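+
+The effect of the rule can be pictured as a simple lookup. This is an
+illustrative sketch of the substitution, not Concord's resolver, and the
+pinned version `2.32.0` is only an example value:
+
+```python
+dependency_versions = {
+    # groupId:artifactId -> pinned version (example value)
+    "com.walmartlabs.concord.plugins.basic:ansible-tasks": "2.32.0",
+}
+
+def resolve(dep):
+    # rewrite "mvn://group:artifact:latest" to the pinned version
+    # from the dependencyVersions mapping, if one is defined
+    prefix, suffix = "mvn://", ":latest"
+    if dep.startswith(prefix) and dep.endswith(suffix):
+        ga = dep[len(prefix):-len(suffix)]
+        pinned = dependency_versions.get(ga)
+        if pinned:
+            return prefix + ga + ":" + pinned
+    return dep
+
+print(resolve("mvn://com.walmartlabs.concord.plugins.basic:ansible-tasks:latest"))
+# prints mvn://com.walmartlabs.concord.plugins.basic:ansible-tasks:2.32.0
+```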
+
+## Entity Rule
+
+Entity rules control the creation or update of Concord
+[organizations](../api/org.md), [projects](../api/project.md),
+[secrets](../api/secret.md), etc.
+
+The syntax:
+
+```json
+{
+ "entity": "entityType",
+ "action": "action",
+ "conditions": {
+ "param": "value"
+ },
+ "msg": "optional message"
+}
+```
+
+The currently supported `entity` types are:
+
+- `org` - organizations;
+- `project` - projects;
+- `repository` - repositories in projects;
+- `secret` - secrets;
+- `jsonStore` - JSON stores;
+- `jsonStoreItem` - items in JSON stores;
+- `jsonStoreQuery` - JSON store queries;
+- `trigger` - triggers.
+
+Available actions:
+
+- `create`
+- `update`
+
+The `conditions` are matched against an object containing both the entity's
+and the entity's owner attributes:
+
+```json
+{
+ "owner": {
+ "id": "...userId...",
+ "username": "...username...",
+ "userType": "LOCAL or LDAP",
+ "email": "...",
+ "displayName": "...",
+ "groups": ["AD/LDAP groups"],
+ "attributes": {
+ ...other AD/LDAP attributes...
+ }
+ },
+ "entity": {
+ ...entity specific attributes...
+ }
+}
+```
+
+Different types of entities provide different sets of attributes:
+
+- `org`:
+ - `id` - organization ID (UUID, optional);
+ - `name` - organization name;
+ - `meta` - metadata (JSON object, optional);
+ - `cfg` - configuration (JSON object, optional).
+- `project`:
+ - `id` - project ID (UUID, optional);
+ - `name` - project name;
+ - `orgId` - the project's organization ID (UUID);
+ - `orgName` - the project's organization name;
+ - `visibility` - the project's visibility (`PUBLIC` or `PRIVATE`);
+ - `meta` - metadata (JSON object, optional);
+ - `cfg` - configuration (JSON object, optional).
+- `repository`:
+ - `name` - repository name;
+ - `url` - repository URL;
+ - `branch` - branch name;
+ - `secret` - reference to a secret (optional, JSON object, see below for
+ the list of fields);
+ - `orgId` - the project's organization ID (UUID);
+ - `orgName` - the project's organization name;
+ - `projectId` - project ID (UUID);
+ - `projectName` - project name.
+- `jsonStore`:
+ - `name` - JSON store name;
+ - `orgId` - the store's organization ID;
+ - `visibility` - the store's visibility (optional);
+ - `ownerId` - user ID of the store's owner (UUID, optional);
+ - `ownerName` - username of the store's owner (optional);
+ - `ownerDomain` - user domain of the store's owner (optional);
+ - `ownerType` - user type of the store's owner (optional).
+- `jsonStoreItem`:
+ - `path` - item's path;
+ - `data` - data (JSON object);
+ - `jsonStoreId` - ID of the store (UUID);
+ - `jsonStoreName` - name of the store;
+ - `orgId` - the store's organization ID (UUID);
+ - `orgName` - the store's organization name.
+- `jsonStoreQuery`:
+ - `name` - the query's name;
+ - `text` - the query's text;
+ - `storeId` - the store's ID (UUID);
+ - `storeName` - the store's name;
+ - `orgId` - the store's organization ID (UUID);
+ - `orgName` - the store's organization name.
+- `secret`:
+  - `name` - secret name;
+  - `orgId` - the secret's organization ID (UUID);
+ - `type` - the secret's type;
+ - `visibility` - the secret's visibility (`PUBLIC` or `PRIVATE`, optional);
+ - `storeType` - the secret's store type (optional).
+- `trigger`
+ - `eventSource` - the trigger's event type (string, `github`, `manual`, etc);
+ - `orgId` - linked organization's ID (UUID, optional);
+ - `params` - the trigger's configuration (JSON object, optional).
+
+For example, to restrict creation of projects in the `Default` organization use:
+
+```json
+{
+ "entity": {
+ "deny": [
+ {
+        "msg": "Projects in the Default org are disabled",
+ "action": "create",
+ "entity": "project",
+ "conditions":{
+ "entity": {
+ "orgId": "0fac1b18-d179-11e7-b3e7-d7df4543ed4f"
+ }
+ }
+ }
+ ]
+ }
+}
+```
+
+To prevent users with a specific AD/LDAP group from creating any new entities:
+
+```json
+{
+ "entity": {
+ "deny":[
+ {
+ "action": ".*",
+ "entity": ".*",
+ "conditions": {
+ "owner": {
+ "userType": "LDAP",
+ "groups": ["CN=SomeGroup,.*"]
+ }
+ }
+ }
+ ]
+ }
+}
+```
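+
+The group condition is effectively a regular expression matched against each
+of the user's AD/LDAP groups. Roughly, as an illustrative model rather than
+Concord's actual implementation:
+
+```python
+import re
+
+# a condition like {"groups": ["CN=SomeGroup,.*"]} applies when any of the
+# user's groups matches any of the patterns
+patterns = ["CN=SomeGroup,.*"]
+user_groups = [
+    "CN=SomeGroup,OU=Teams,DC=myorg,DC=com",
+    "CN=Unrelated,DC=myorg,DC=com",
+]
+
+denied = any(re.fullmatch(p, g) for p in patterns for g in user_groups)
+print(denied)  # True -> the deny rule applies to this user
+```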
+
+Another example is a policy to prevent users from creating wide-sweeping,
+"blanket" GitHub triggers for all projects:
+
+```json
+{
+ "entity": {
+ "deny": [
+ {
+ "msg": "Blanket GitHub triggers are disallowed",
+ "action": "create",
+ "entity": "trigger",
+ "conditions":{
+ "entity": {
+ "eventSource": "github",
+ "params": {
+ "org": "\\.\\*",
+ "project": "\\.\\*",
+ "repository": "\\.\\*",
+ "unknownRepo": [true, false]
+ }
+ }
+ }
+ }
+ ]
+ }
+}
+```
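+
+Note the escaped patterns: in the policy above, `"\\.\\*"` is the regular
+expression `\.\*`, which matches the literal two-character string `.*`, i.e.
+a trigger whose `org`, `project` or `repository` value is the blanket `.*`
+pattern. A quick illustration (Python regex semantics assumed):
+
+```python
+import re
+
+# the policy value "\\.\\*" is the regex \.\* -- a literal dot followed by
+# a literal asterisk, so it catches the blanket ".*" trigger pattern
+policy_pattern = r"\.\*"
+
+print(bool(re.fullmatch(policy_pattern, ".*")))       # True  -> denied
+print(bool(re.fullmatch(policy_pattern, "my-repo")))  # False -> allowed
+```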
+
+## File Rule
+
+The file rules control the types and sizes of files that are allowed in
+the process' workspace.
+
+The syntax:
+
+```json
+{
+ "maxSize": "1G",
+ "type": "...type...",
+ "names": ["...filename patterns..."],
+ "msg": "optional message"
+}
+```
+
+The attributes:
+
+- `maxSize` - maximum size of a file (`G` for gigabytes, `M` for megabytes, etc.);
+- `type` - `file` or `dir`;
+- `names` - filename patterns (regular expressions).
+
+For example, to forbid files larger than 128MB:
+
+```json
+{
+ "file": {
+ "deny": [
+ {
+ "maxSize": "128M",
+ "msg": "Files larger than 128M are forbidden"
+ }
+ ]
+ }
+}
+```
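+
+The `type` and `names` attributes can be combined to match specific files. For
+example, a policy like the following (a sketch; the filename pattern is an
+illustrative assumption and depends on your workspace layout) denies JAR files:
+
+```json
+{
+  "file": {
+    "deny": [
+      {
+        "type": "file",
+        "names": [".*\\.jar$"],
+        "msg": "JAR files are not allowed in the workspace"
+      }
+    ]
+  }
+}
+```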
+
+## JSON Store Rule
+
+The `jsonStore` rule controls parameters of [JSON stores](./json-store.md).
+
+The syntax:
+
+```json
+{
+ "data":{
+ "maxSizeInBytes": 100,
+ "msg": "optional message"
+ },
+ "store":{
+ "maxNumberPerOrg": 30,
+ "msg": "optional message"
+ }
+}
+```
+
+The attributes:
+
+- `data`
+ - `maxSizeInBytes` - maximum allowed size of a store in bytes;
+- `store`
+ - `maxNumberPerOrg` - maximum allowed number of stores per organization.
+
+Example:
+
+```json
+{
+ "jsonStore":{
+ "data":{
+ "maxSizeInBytes": 1048576
+ },
+ "store":{
+ "maxNumberPerOrg": 30
+ }
+ }
+}
+```
+
+## Process Configuration Rule
+
+The `processCfg` values are merged into the process' `configuration` object,
+overriding any existing values with the same keys:
+
+```json
+{
+ "...variable...": "...value..."
+}
+```
+
+Those values take precedence over the values specified by users in the process'
+`configuration` section. The [defaultProcessCfg](#default-process-configuration-rule)
+rule can be used to set the initial values.
+
+For example, to force a specific [processTimeout](../processes-v1/configuration.md#process-timeout)
+value:
+
+```json
+{
+ "processCfg": {
+ "processTimeout": "PT2H"
+ }
+}
+```
+
+Or to override a value in `arguments`:
+
+```json
+{
+ "processCfg": {
+ "arguments": {
+ "message": "Hello from Concord!"
+ }
+ }
+}
+```
+
+## Default Process Configuration Rule
+
+The `defaultProcessCfg` rule allows setting initial values for process
+`configuration`.
+
+```json
+{
+ "...variable...": "...value..."
+}
+```
+
+Those values can be overridden by users in their process' `configuration` sections.
+The [processCfg](#process-configuration-rule) rule can be used to override any
+user values.
+
+For example, to set the default [processTimeout](../processes-v1/configuration.md#process-timeout)
+value:
+
+```json
+{
+ "defaultProcessCfg": {
+ "processTimeout": "PT2H"
+ }
+}
+```
+
+## Queue Rule
+
+The queue rule controls different aspects of the process queue - the maximum
+number of concurrently running processes, the default process timeout, etc.
+
+The syntax:
+
+```json
+{
+ "concurrent": {
+ "maxPerOrg": "10",
+ "maxPerProject": "5",
+ "msg": "optional message"
+ },
+ "forkDepth": {
+ "max": 5,
+ "msg": "optional message"
+ },
+ "processTimeout": {
+ "max": "PT1H",
+ "msg": "optional message"
+ }
+}
+```
+
+The attributes:
+
+- `concurrent` - controls the number of concurrently running processes:
+ - `maxPerOrg` - max number of running processes per organization;
+ - `maxPerProject` - max number of running processes per project;
+- `forkDepth` - the maximum allowed depth of process forks, i.e. how many
+_ancestors_ a process can have. Can be used to prevent "fork bombs";
+- `processTimeout` - limits the maximum allowed value of the
+[processTimeout parameter](../processes-v1/configuration.md#process-timeout).
+
+For example:
+
+```json
+{
+ "queue": {
+ "forkDepth": {
+ "max": 5
+ },
+ "concurrent": {
+ "max": 40
+ }
+ }
+}
+```
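+
+Or, to cap the maximum allowed `processTimeout` value (a sketch based on the
+`processTimeout` attribute described above):
+
+```json
+{
+  "queue": {
+    "processTimeout": {
+      "max": "PT1H",
+      "msg": "Process timeouts longer than 1 hour are not allowed"
+    }
+  }
+}
+```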
+
+## Task Rule
+
+Task rules control the execution of flow tasks. They can trigger on specific
+methods or parameter values.
+
+The syntax:
+
+```json
+{
+ "taskName": "...task name...",
+ "method": "...method name...",
+ "params": [
+ {
+ "name": "...parameter name...",
+ "index": 0,
+ "values": [
+ false,
+ null
+ ],
+ "protected": true
+ }
+ ],
+ "msg": "optional message"
+}
+```
+
+The attributes:
+
+- `taskName` - name of the task (as in the task's `@Named` annotation);
+- `method` - the task's method name;
+- `params` - list of the task's parameters to match.
+
+The `params` attribute accepts a list of parameter definitions:
+
+- `name` - name of the parameter in the process' `Context`;
+- `index` - index of the parameter in the method's signature;
+- `values` - a list of values to trigger on;
+- `protected` - if `true` the parameter will be treated as a protected
+variable.
+
+For example, if there is a need to disable a specific task based on some
+variable in the process' context, it can be achieved with a policy:
+
+```json
+{
+ "task": {
+ "deny": [
+ {
+ "taskName": "ansible",
+ "method": "execute",
+ "params": [
+ {
+ "name": "gatekeeperResult",
+ "index": 0,
+ "values": [
+ false,
+ null
+ ],
+ "protected": true
+ }
+ ],
+ "msg": "I won't run Ansible without running the Gatekeeper task first"
+ }
+ ]
+ }
+}
+```
+
+In this example, because the Ansible plugin's `execute` method accepts
+a `Context`, the policy executor looks for a `gatekeeperResult` in
+the process' context.
+
+## Workspace Rule
+
+The workspace rule allows control of the overall size of the process'
+workspace.
+
+The syntax:
+
+```json
+{
+ "maxSizeInBytes": 1024,
+ "ignoredFiles": ["...filename patterns..."],
+ "msg": "optional message"
+}
+```
+
+The attributes:
+
+- `maxSizeInBytes` - maximum allowed size of the workspace minus the
+`ignoredFiles` (in bytes);
+- `ignoredFiles` - list of filename patterns (regular expressions). The
+matching files will be excluded from the total size calculation.
+
+Example:
+
+```json
+{
+ "workspace": {
+ "msg": "Workspace too big (allowed size is 256Mb, excluding '.git')",
+ "ignoredFiles": [
+ ".*/\\.git/.*"
+ ],
+ "maxSizeInBytes": 268435456
+ }
+}
+```
+
+## Runtime Rule
+
+The runtime rule controls allowed runtime(s) for process execution.
+
+The syntax:
+
+```json
+{
+ "msg": "optional message",
+ "runtimes": ["concord runtime(s)..."]
+}
+```
+
+The attributes:
+
+- `runtimes` - list of allowed Concord runtimes.
+
+Example:
+
+```json
+{
+ "runtime": {
+ "msg": "{0} runtime version is not allowed",
+ "runtimes": ["concord-v2"]
+ }
+}
+```
+
+## RawPayload Rule
+
+The `rawPayload` rule allows limiting the size of the raw payload archive sent to start a process.
+
+The syntax:
+
+```json
+{
+ "rawPayload": {
+ "msg": "Raw payload size too big: current {0} bytes, limit {1} bytes",
+ "maxSizeInBytes": 1024
+ }
+}
+```
+
+## CronTrigger Rule
+
+The `cronTrigger` rule allows administrators to set the minimum interval (in seconds) between processes triggered by cron.
+
+The syntax:
+
+```json
+{
+ "cronTrigger": {
+ "minInterval": "interval in seconds"
+ }
+}
+```
+
+For example:
+
+```json
+{
+ "cronTrigger": {
+ "minInterval": 61
+ }
+}
+```
diff --git a/docs/src/getting-started/processes.md b/docs/src/getting-started/processes.md
new file mode 100644
index 0000000000..7e70032056
--- /dev/null
+++ b/docs/src/getting-started/processes.md
@@ -0,0 +1,129 @@
+# Processes
+
+A process is an execution of flows written in [Concord DSL](../processes-v1/index.md#dsl)
+running in a [project](../getting-started/projects.md) or standalone.
+A process can represent a single deployment, a CI/CD job, or any other,
+typically "one-off", type of workload.
+
+Let's take a look at an example:
+
+```yaml
+configuration:
+ arguments:
+ todoId: "1"
+
+flows:
+ default:
+ - task: http
+ in:
+ url: "https://jsonplaceholder.typicode.com/todos/${todoId}"
+ response: json
+ out: myTodo
+
+ - if: "${myTodo.content.completed}"
+ then:
+ - log: "All done!"
+ else:
+ - log: "You got a todo item: ${myTodo.content.title}"
+```
+
+When executed, this flow performs a number of steps:
+- fetches a JSON object from the specified URL;
+- saves the response as a flow variable;
+- checks if the retrieved "todo" is completed or not;
+- prints out a message depending on whether the condition is true or not.
+
+The example demonstrates a few concepts:
+- flow definitions use Concord's YAML-based [DSL](../processes-v1/index.md#dsl);
+- flows can call [tasks](../getting-started/tasks.md), which perform
+useful actions;
+- flows can use [conditional expressions](../processes-v1/flows.md#conditional-expressions);
+- tasks can save their results as flow [variables](../processes-v1/flows.md#setting-variables);
+- an [expression language](../processes-v1/flows.md#expressions) can be used to work
+with data inside flows.
+
+There are multiple ways to execute a Concord process: using a Git
+repository, sending the necessary files in [the API request](../api/process.md#start-a-process),
+using a [trigger](../triggers/index.md), etc.
+
+No matter how the process is started, it goes through the same execution steps:
+
+- project repository data is cloned or updated;
+- binary payload from the process invocation is added to the workspace;
+- configuration parameters from different sources are merged together;
+- [imports](../processes-v2/imports.md) and [templates](../templates/index.md)
+are downloaded and applied;
+- the process is added to the queue;
+- one of the agents picks up the process from the queue;
+- the agent downloads the process state,
+[dependencies](../processes-v2/configuration.md#dependencies) and `imports`;
+- the agent starts [the runtime](#runtime) in the process' working directory;
+- the flow configured as entry point is invoked.
+
+During its life, a process can go through various statuses:
+
+- `NEW` - the process start request is received, passed the initial validation
+and saved for execution;
+- `PREPARING` - the start request is being processed. During this status,
+the Server prepares the initial process state;
+- `ENQUEUED` - the process is ready to be picked up by one of the Agents;
+- `WAITING` - the process is waiting for "external" conditions
+(e.g. concurrent execution limits, waiting for another process or lock, etc);
+- `STARTING` - the process was dispatched to an Agent and is being prepared to
+start on the Agent's side;
+- `RUNNING` - the process is running;
+- `SUSPENDED` - the process is waiting for an external event (e.g. a form);
+- `RESUMING` - the Server received the event the process was waiting for and
+now prepares the process' resume state;
+- `FINISHED` - final status, the process was completed successfully. Or, at
+least, all process-level errors were handled in the process itself;
+- `FAILED` - the process failed with an unhandled error;
+- `CANCELLED` - the process was cancelled by a user;
+- `TIMED_OUT` - the process exceeded its
+[execution time limit](#process-timeout).
+
+## Runtime
+
+The runtime is what actually executes the process. It is an interpreter written
+in Java that executes flows written in [Concord DSL](../processes-v1/index.md#dsl).
+Typically this is executed in a separate JVM process.
+
+Currently there are two versions of the runtime:
+- [concord-v1](../processes-v1/index.md) - used by default;
+- [concord-v2](../processes-v2/index.md) - new and improved version
+introduced in 1.42.0.
+
+The runtime can be specified using the `configuration.runtime` parameter in
+the `concord.yml` file:
+
+```yaml
+configuration:
+ runtime: "concord-v2"
+```
+
+or in the request parameters:
+
+```
+$ curl -F runtime=concord-v2 ... https://concord.example.com/api/v1/process
+```
+
+## Process Events
+
+During process execution, Concord records various events: process status
+changes, task calls, internal plugin events, etc. The data is stored in the
+database and used later in the [Concord Console](../console/index.md) and
+other components.
+
+Events can be retrieved using [the API](../api/process.md#list-events).
+Currently, the following event types are available:
+
+- `PROCESS_STATUS` - process status changes;
+- `ELEMENT` - flow element events (such as task calls).
+
+In addition, plugins can use their own specific event types. For example, the
+[Ansible plugin]({{ site.concord_plugins_v2_docs }}/ansible.md) uses custom events to record playbook
+execution details. This data is extensively used by the Concord Console to
+provide visibility into the playbook execution - hosts, playbook steps, etc.
+
+Event recording can be configured in the [Runner](../processes-v1/configuration.md#runner)
+section of the process' `configuration` object.
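+
+For example, a process could tune event recording like this (a sketch; the
+exact set of `events` parameters depends on the runtime version, and
+`recordTaskInVars` is shown as an illustration):
+
+```yaml
+configuration:
+  runner:
+    events:
+      recordTaskInVars: true   # record task input variables (illustrative parameter)
+```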
diff --git a/docs/src/getting-started/projects.md b/docs/src/getting-started/projects.md
new file mode 100644
index 0000000000..17f9f963b2
--- /dev/null
+++ b/docs/src/getting-started/projects.md
@@ -0,0 +1,41 @@
+# Projects
+
+Projects are a way to organize processes and their configuration. Projects can
+be created using [the API](../api/project.md) or using [Console](../console/project.md).
+
+## Configuration
+
+Project-level configuration provides a way to specify
+[configuration](../processes-v2/configuration.md) for all
+processes executed in the context of the project.
+
+For example, to specify a common default argument for all project processes:
+
+```
+$ curl -ikn -X PUT -H 'Content-Type: application/json' \
+-d '{"arguments": {"name": "me"}}' \
+https://concord.example.com/api/v1/org/MyOrg/project/MyProject/cfg
+```
+
+All processes using the `name` variable get the default value:
+
+```yaml
+flows:
+ default:
+ - log: "Hello, ${name}"
+```
+
+```
+$ curl -ikn -F org=MyOrg -F project=MyProject -F concord.yml=@concord.yml \
+https://concord.example.com/api/v1/process
+```
+
+```
+10:42:00 [INFO ] c.w.concord.plugins.log.LoggingTask - Hello, me
+```
+
+Processes can override project defaults by providing their own values for the
+variable in the `configuration` object or in the request parameters.
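+
+For example, a `concord.yml` that provides its own value for the `name`
+argument overrides the project default:
+
+```yaml
+configuration:
+  arguments:
+    name: "you"
+
+flows:
+  default:
+    - log: "Hello, ${name}"
+```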
+
+See [the API](../api/project.md#get-project-configuration) documentation for
+more details on how to work with project configurations.
diff --git a/docs/src/getting-started/quickstart.md b/docs/src/getting-started/quickstart.md
new file mode 100644
index 0000000000..3f6f2405d3
--- /dev/null
+++ b/docs/src/getting-started/quickstart.md
@@ -0,0 +1,145 @@
+# Quick Start
+
+If you have [installed your own Concord server](./installation.md) or have
+access to a server already, you can set up your first Concord process
+execution with a few simple steps:
+
+- [Create a Git Repository](#create-repository)
+- [Add the Concord File](#add-concord-file)
+- [Add a Deploy Key](#add-deploy-key)
+- [Create Project in Concord](#create-project)
+- [Execute a Process](#execute-process)
+- [Next Steps](#next-steps)
+
+
+
+## Create a Git Repository
+
+Concord process definitions and their resources are best managed and source
+controlled in a Git repository. Concord can automatically retrieve the contents
+of the repository and create necessary resources and executions as defined in
+the content.
+
+Start with the following steps:
+
+- Create the repository in your Git management system, such as GitHub, using the
+ user interface;
+- Clone the repository to your local workstation.
+
+
+
+## Add the Concord File
+
+As a next step, add the Concord file `concord.yml` to the root of the repository.
+A minimal example uses the `default` flow, which is invoked automatically:
+
+```yaml
+flows:
+ default:
+ - log: "Hello Concord User"
+```
+
+The `default` flow in the example simply outputs a message to the process log.
+
+
+
+## Add a Deploy Key
+
+In order to grant Concord access to the Git repository via SSH, you need to
+create a new key on the Concord server.
+
+- Log into the Concord Console user interface;
+- Navigate to _Organizations_ → _[your organization] → Secrets_ (contact your support team/administrators to create a new organization or to have you added to an existing one);
+- Select _New secret_ on the toolbar;
+- Provide a string e.g. `mykey` as _Name_ and select _Generate a new key pair_ as _Type_;
+- Press _Create_.
+
+The user interface shows the public key of the generated key similar to
+`ssh-rsa ABCXYZ... concord-server`. This value has to be added as an authorized deploy
+key for the git repository. In GitHub, for example, this can be done in the
+_Settings - Deploy keys_ section of the repository.
+
+Alternatively the key can be
+[created](../api/secret.md#create-secret) and
+[accessed](../api/secret.md#get-key) with the REST API for secrets.
+
+
+
+
+## Create Project in Concord
+
+Now you can create a new project in the Concord Console.
+
+- Log into the Concord Console user interface;
+- Navigate to _Organizations_ → _[your organization] → Projects_ (contact your support team/administrators to create a new organization or to have you added to an existing one);
+- Select _New project_ on the toolbar;
+- Provide a _Name_ for the project e.g. 'myproject';
+- Click the _Create_ button;
+- Under 'Repositories' tab, select _Add repository_;
+- Provide a _Name_ for the repository e.g. 'myrepository';
+- Select the _Custom authentication_ button;
+- Select the _Secret_ created earlier using the name e.g. `mykey`;
+- Use the SSH URL for the repository in the _URL_ field e.g. `git@github.com:me/myrepo.git`;
+
+If global authentication/trust between your GitHub repositories and the Concord
+server is configured, you can simply use the HTTPS URL for the repository in the
+_URL_ field.
+
+Alternatively you can
+[create a project with the REST API](../api/project.md#create-project).
+
+**Note**: project creation in the Default organization might be disabled by
+the instance admins using [policies](./policies.md#entity-rule).
+
+
+
+## Execute a Process
+
+Everything is ready to kick off an execution of the flow - a process:
+
+- Locate the repository for the project;
+- Press on the three dots for the repository on the right;
+- Press on the _Run_ button;
+- Confirm starting the process by clicking _Yes_ in the dialog.
+
+A successful process execution results in a message such as:
+
+```
+{
+ "instanceId": "e3fd96f9-580f-4b9b-b846-cc8fdd310cf6",
+ "ok": true
+}
+```
+
+The _Open process status_ button navigates you to the process execution and
+provides access to the log, forms and more. Note how the log message
+`Hello Concord User` is visible.
+
+Alternatively the process can be accessed via the queue:
+
+- Click on the _Processes_ tab;
+- Click on the _Instance ID_ value of the specific process;
+- Press on the _Log_ tab to inspect the log.
+
+Alternatively the process can be started via the
+[Process REST API](../api/process.md).
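+
+For example, using `curl` (a sketch assuming the organization, project and
+repository names from the steps above):
+
+```
+$ curl -ikn -F org=MyOrg -F project=myproject -F repo=myrepository \
+https://concord.example.com/api/v1/process
+```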
+
+
+
+## Next Steps
+
+Congratulations, your first process flow execution completed successfully!
+
+You can now learn more about flows and perform tasks such as
+
+- Adding a [form](./forms.md) to capture user input;
+- Using [variables](../processes-v2/index.md#variables);
+- Using [groups of steps](../processes-v2/flows.md#groups-of-steps);
+- Adding [conditional expressions](../processes-v2/flows.md#conditional-execution);
+- [Calling other flows](../processes-v2/flows.md#calling-other-flows);
+- Working with [Ansible]({{ site.concord_plugins_v2_docs }}/ansible.md), [Jira]({{ site.concord_plugins_v2_docs }}/jira.md) and [other]({{ site.concord_plugins_v2_docs }}/) tasks;
+- Maybe even [implementing your own task](../processes-v2/tasks.md#development)
+
+and much more. Have a look at all the documentation about the
+[Concord DSL](../processes-v2/flows.md), [forms](./forms.md),
+[scripting](./scripting.md) and other aspects to find out more!
diff --git a/docs/src/getting-started/scripting.md b/docs/src/getting-started/scripting.md
new file mode 100644
index 0000000000..d14e1d3db6
--- /dev/null
+++ b/docs/src/getting-started/scripting.md
@@ -0,0 +1,405 @@
+# Scripting
+
+Concord flows can include scripting language snippets for execution. The
+scripts run within the same JVM that is running Concord, and hence need to
+implement the Java Scripting API as defined by JSR-223. Languages with
+compliant runtimes include [JavaScript](#javascript), [Groovy](#groovy),
+[Python](#python), [Ruby](#ruby) and many others.
+
+Script engines must support Java 8.
+
+The scripting language is either set explicitly or identified automatically
+based on the file extension. Scripts can be stored as external files and
+invoked from the Concord YAML file, or they can be inlined in the file.
+
+[Flow variables](#using-flow-variables), [Concord tasks](#using-concord-tasks) and other Java
+methods can be accessed from the scripts due to the usage of the Java Scripting
+API. The script and your Concord processes essentially run within the same
+context on the JVM.
+
+- [Using Flow Variables](#using-flow-variables)
+ - [Flow Variables in Runtime V2](#flow-variables-in-runtime-v2)
+- [Using Concord Tasks](#using-concord-tasks)
+- [Error Handling](#error-handling)
+- [Dry-run mode](#dry-run-mode)
+- [JavaScript](#javascript)
+- [Groovy](#groovy)
+- [Python](#python)
+- [Ruby](#ruby)
+
+
+## Using Flow Variables
+
+For most of the supported languages, flow variables can be accessed
+directly inside the script (without using the `${}` expression syntax):
+
+```yaml
+configuration:
+ arguments:
+ myVar: "world"
+
+flows:
+ default:
+ - script: js
+ body: |
+ print("Hello, ", myVar)
+```
+
+If a flow variable contains an illegal character for a chosen scripting
+language, it can be accessed using a built-in `execution` variable:
+
+```yaml
+- script: js
+ body: |
+ var x = execution.getVariable("an-illegal-name");
+ print("We got", x);
+```
+
+To set a variable, you need to use the `execution.setVariable()` method:
+
+```yaml
+- script: js
+ body: |
+ execution.setVariable("myVar", "Hello!");
+```
+
+> Note that not every data structure of supported scripting languages is
+> directly compatible with the Concord runtime. The values exposed to the flow
+> via `execution.setVariable` must be serializable in order to work correctly
+> with forms or when the process suspends. Refer to the specific language
+> section for more details.
+
+### Flow Variables in Runtime V2
+
+Similar to Runtime V1, flow variables can be accessed directly inside the script
+by the variable's name.
+
+```yaml
+configuration:
+ runtime: concord-v2
+ arguments:
+ myVar: "world"
+
+flows:
+ default:
+ - script: js
+ body: |
+ print("Hello, ", myVar);
+```
+
+Additionally, the `execution` variable has a `variables()` method which returns a
+[`Variables` object](https://github.com/walmartlabs/concord/blob/master/runtime/v2/sdk/src/main/java/com/walmartlabs/concord/runtime/v2/sdk/Variables.java). This object includes a number of methods for interacting with flow variables.
+
+```yaml
+configuration:
+ runtime: concord-v2
+
+flows:
+ default:
+ - script: js
+ body: |
+ var myVar = execution.variables().getString('myString', 'world');
+ print("Hello, ", myVar);
+```
+
+To set a variable, use the `execution.variables().set()` method:
+
+```yaml
+configuration:
+ runtime: concord-v2
+
+flows:
+ default:
+ - script: js
+ body: |
+ execution.variables().set('myVar', 'Hello, world!');
+```
+
+## Using Concord Tasks
+
+Scripts can retrieve and invoke all tasks available for flows by name:
+
+```yaml
+- script: js
+ body: |
+ var slack = tasks.get("slack");
+ slack.call(execution, "C5NUWH9S5", "Hi there!");
+```
+
+The number and type of arguments depend on the particular task's method. In
+this example, the script calls `call` method of the [SlackTask](https://github.com/walmartlabs/concord/blob/1e053db578b9550e0aac656e1916eaf8f8eba0b8/plugins/tasks/slack/src/main/java/com/walmartlabs/concord/plugins/slack/SlackTask.java#L54)
+instance.
+
+The `execution` variable is an alias for [context](../processes-v1/index.md#context)
+and automatically provided by the runtime for all supported script engines.
+
+## External Scripts
+
+Scripts can be automatically retrieved from an external server:
+
+```yaml
+- script: "http://localhost:8000/myScript.groovy"
+```
+
+The file extension in the URL must match the script engine's
+supported extensions -- e.g. `.groovy` for the Groovy language, `.js`
+for JavaScript, etc.
+
+## Error Handling
+
+A script can have an optional `error` block. It is executed when an exception
+occurs during the script execution:
+
+```yaml
+- script: groovy
+ body: |
+ throw new RuntimeException("kaboom!")
+ error:
+ - log: "Caught an error: ${lastError.cause}"
+```
+
+Using an external script file:
+
+```yaml
+- script: "http://localhost:8000/myScript.groovy"
+ error:
+ - log: "Caught an error: ${lastError.cause}"
+```
+
+## Dry-run mode
+
+[Dry-run mode](../processes-v2/index.md#dry-run-mode) is useful for testing and validating
+the flow logic before running it in production.
+
+By default, script steps do not support dry-run mode. To enable a script to run in this mode,
+you need to modify the script to support dry-run mode or mark the script step as dry-run ready
+using the step's `meta` field if you are confident it is safe to run.
+
+An example of a script step marked as dry-run ready:
+
+```yaml
+flows:
+ myFlow:
+ - script: js
+ body: |
+        log.info("I'm confident that this script can be executed in dry-run mode!");
+ meta:
+ dryRunReady: true # dry-run ready marker for this step
+```
+
+> **Important**: Use the `meta.dryRunReady` only if you are certain that the script is safe
+> to run in dry-run mode.
+
+If you need to change the logic in the script depending on whether it is running in dry-run mode
+or not, use the `isDryRun` variable, which indicates whether the process is running
+in dry-run mode:
+
+```yaml
+flows:
+ default:
+ - script: js
+ body: |
+ if (isDryRun) {
+ log.info('running in DRY-RUN mode');
+ } else {
+ log.info('running in REGULAR mode');
+ }
+ meta:
+ dryRunReady: true # dry-run ready marker for this step is also needed in this case
+```
+
+## JavaScript
+
+JavaScript support is built-in and doesn't require any external
+dependencies. It is based on the
+[Nashorn](https://en.wikipedia.org/wiki/Nashorn_(JavaScript_engine))
+engine and requires the identifier `js`.
+[Nashorn](https://wiki.openjdk.java.net/display/Nashorn/Main) is based on
+ECMAScript and adds
+[numerous extensions](https://wiki.openjdk.java.net/display/Nashorn/Nashorn+extensions),
+including e.g. a `print` function.
+
+Using an inline script:
+
+```yaml
+flows:
+ default:
+ - script: js
+ body: |
+ function doSomething(i) {
+ return i * 2;
+ }
+
+ execution.setVariable("result", doSomething(2));
+
+ - log: ${result} # will output "4"
+```
+
+Using an external script file:
+
+```yaml
+flows:
+ default:
+ - script: test.js
+ - log: ${result}
+```
+
+```javascript
+// test.js
+function doSomething(i) {
+ return i * 2;
+}
+
+execution.setVariable("result", doSomething(2));
+```
+
+### Compatibility
+
+JavaScript objects must be converted to regular Java `Map` instances to be
+compatible with the Concord runtime:
+
+```yaml
+flows:
+ default:
+ - script: js
+ body: |
+ var x = {a: 1};
+ var HashMap = Java.type('java.util.HashMap');
+ execution.setVariable('x', new HashMap(x));
+ - log: "${x.a}"
+```
+
+Alternatively, a `HashMap` instance can be used directly in the JavaScript
+code.
+
+Similarly, JavaScript arrays (lists) must be converted into compatible
+Java `List` objects:
+
+```javascript
+var arr = [1, 2, 3];
+var ArrayList = Java.type('java.util.ArrayList');
+execution.setVariable('x', new ArrayList(arr));
+```
+
+## Groovy
+
+Groovy is another compatible engine that is fully-supported in Concord. It
+requires the addition of a dependency to
+[groovy-all](https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-all/) and
+the identifier `groovy`. For versions 2.4.* and lower, jar packaging is used,
+so the correct dependency is
+e.g. `mvn://org.codehaus.groovy:groovy-all:2.4.12`. Versions `2.5.0` and higher
+use pom packaging, which has to be added to the dependency declaration before
+the version. For example: `mvn://org.codehaus.groovy:groovy-all:pom:2.5.21`.
+
+```yaml
+configuration:
+ dependencies:
+ - "mvn://org.codehaus.groovy:groovy-all:pom:2.5.21"
+flows:
+ default:
+ - script: groovy
+ body: |
+ def x = 2 * 3
+ execution.setVariable("result", x)
+ - log: ${result}
+```
+
+The following example uses some standard Java APIs to create a date value in the
+desired format.
+
+```yaml
+- script: groovy
+ body: |
+ def dateFormat = new java.text.SimpleDateFormat('yyyy-MM-dd')
+ execution.setVariable("businessDate", dateFormat.format(new Date()))
+- log: "Today is ${businessDate}"
+```
+
+### Compatibility
+
+Groovy's `LazyMap` instances are not serializable and must be converted to
+regular Java `Map`s:
+
+```yaml
+configuration:
+ dependencies:
+ - "mvn://org.codehaus.groovy:groovy-all:pom:2.5.21"
+
+flows:
+ default:
+ - script: groovy
+ body: |
+ def x = new groovy.json.JsonSlurper().parseText('{"a": 123}') // produces a LazyMap instance
+ execution.setVariable('x', new java.util.HashMap(x))
+ - log: "${x.a}"
+```
+
+## Python
+
+Python scripts can be executed using the [Jython](http://www.jython.org/)
+runtime. It requires the addition of a dependency to
+[jython-standalone](https://repo1.maven.org/maven2/org/python/jython-standalone)
+located in the Central Repository or on another server and the identifier
+`python`. Any version that supports JSR-223 and Java 8 should work.
+
+```yaml
+configuration:
+ dependencies:
+ - "mvn://org.python:jython-standalone:2.7.2"
+
+flows:
+ default:
+ - script: python
+ body: |
+ x = 2 * 3;
+ execution.setVariable("result", x)
+
+ - log: ${result}
+```
+
+Note that `pip` and 3rd-party modules with native dependencies are not
+supported.
+
+### Compatibility
+
+Python objects must be converted to regular Java `List` and `Map` instances to be
+compatible with the Concord runtime:
+
+```yaml
+flows:
+ default:
+ - script: python
+ body: |
+ from java.util import HashMap, ArrayList
+
+ aDict = {'x': 123}
+ aList = [1, 2, 3]
+
+ execution.setVariable('aDict', HashMap(aDict))
+ execution.setVariable('aList', ArrayList(aList))
+
+ - log: "${aDict}"
+ - log: "${aList}"
+```
+
+## Ruby
+
+Ruby scripts can be executed using the [JRuby](https://www.jruby.org)
+runtime. It requires the addition of a dependency to
+[jruby](https://repo1.maven.org/maven2/org/jruby/jruby)
+located in the Central Repository or on another server and the identifier
+`ruby`.
+
+```yaml
+configuration:
+ dependencies:
+ - "mvn://org.jruby:jruby:9.4.2.0"
+
+flows:
+ default:
+ - script: ruby
+ body: |
+ puts "Hello!"
+```
diff --git a/docs/src/getting-started/security.md b/docs/src/getting-started/security.md
new file mode 100644
index 0000000000..86b278e700
--- /dev/null
+++ b/docs/src/getting-started/security.md
@@ -0,0 +1,94 @@
+# Security
+
+- [Authentication](#authentication)
+- [Secret Management](#secret-management)
+
+## Authentication
+
+Concord supports multiple authentication methods:
+- Concord [API tokens](#using-api-tokens);
+- basic authentication (username/password);
+- temporary [session tokens](#using-session-tokens);
+- OpenID Connect, via [the OIDC plugin](https://github.com/walmartlabs/concord/tree/master/server/plugins/oidc).
+
+Plugins can implement additional authentication methods.
+
+### Using API Tokens
+
+The key must be passed in the `Authorization` header on every API request. For
+example:
+
+```
+curl -v -H "Authorization: <api key>" ...
+```
+
+API keys are managed using the [API key](../api/apikey.md) endpoint or using
+the UI.
+
+### Using Username and Password
+
+For example:
+```
+curl -v -u myuser:mypwd ...
+```
+
+The actual user record will be created on the first successful authentication
+attempt. After that, it can be managed as usual, by using
+the [User](../api/user.md) API endpoint.
+
+Username/password authentication uses an LDAP/Active Directory realm. Check
+[Configuration](./configuration.md#server-configuration-file) document for details.
+
+### Using Session Tokens
+
+For each process Concord generates a temporary "session token" that can be used
+to call the Concord API. The token is valid until the process reaches one of
+the final statuses:
+- `FINISHED`
+- `FAILED`
+- `CANCELLED`
+- `TIMED_OUT`.
+
+The session token must be passed in the `X-Concord-SessionToken` header:
+
+```
+curl -v -H "X-Concord-SessionToken: <session token>" ...
+```
+
+Such API requests use the process's security principal, i.e. they run on behalf
+of the process' current user.
+
+The current session token is available as the `${processInfo.sessionToken}`
+[variable](../processes-v1/index.md#provided-variables).
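+
+For example, a flow can pass the session token to a Concord API call. A sketch,
+assuming the `http` task is available as a dependency and using an illustrative
+URL:
+
+```yaml
+flows:
+  default:
+    # query the current process' status using its own session token
+    - task: http
+      in:
+        url: "https://concord.example.com/api/v1/process/${txId}"
+        method: "GET"
+        headers:
+          X-Concord-SessionToken: "${processInfo.sessionToken}"
+      out: queryResult
+```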
+
+## Secret Management
+
+Concord provides an API to create and manage various types of secrets that can
+be used in user flows and for Git repository authentication.
+
+Secrets can be created and managed using
+[the Secret API endpoint](../api/secret.md) or the UI.
+
+Supported types:
+- plain strings and binary data (files) ([example](../api/secret.md#example-single-value-secret));
+- username/password pairs ([example](../api/secret.md#example-username-password-secret));
+- SSH key pairs ([example](../api/secret.md#example-new-key-pair)).
+
+Secrets can optionally be protected by a password provided by the user.
+Secrets without a password are encrypted with an environment-specific key
+defined in Concord Server's configuration.
+
+Additionally, Concord supports "encrypted strings" - secrets that are stored
+"inline", directly in Concord YAML files:
+
+```yaml
+flows:
+ default:
+ - log: "Hello, ${crypto.decryptString('aXQncyBub3QgYW4gYWN0dWFsIGVuY3J5cHRlZCBzdHJpbmc=')}"
+```
+
+Concord encrypts and decrypts such values by using a project-specific
+encryption key. In order to use encrypted strings, the process must run in a project.
+
+The [crypto]({{ site.concord_plugins_v2_docs }}/crypto.md) task can be used to work with secrets and
+encrypted strings.
diff --git a/docs/src/getting-started/tasks.md b/docs/src/getting-started/tasks.md
new file mode 100644
index 0000000000..a644eed32d
--- /dev/null
+++ b/docs/src/getting-started/tasks.md
@@ -0,0 +1,22 @@
+# Tasks
+
+Tasks are used to call Java code that implements functionality that is
+too complex to express with the Concord DSL and EL in YAML directly.
+Processes can include external tasks as dependencies, extending
+the functionality available for Concord flows.
+
+For example, the [Ansible]({{ site.concord_plugins_v2_docs }}/ansible.md) plugin provides
+a way to execute an Ansible playbook as a flow step,
+the [Docker]({{ site.concord_plugins_v2_docs }}/docker.md) plugin allows users to execute any
+Docker image, etc.
+
+In addition to the standard plugins, users can create their own tasks
+leveraging (almost) any 3rd-party Java library or even wrapping existing
+non-Java tools (e.g. Ansible).
+
+Currently, Concord supports two different runtimes. Task usage and
+development differ depending on the chosen runtime. See the runtime-specific
+pages for more details:
+
+- [Runtime v1 tasks](../processes-v1/tasks.md)
+- [Runtime v2 tasks](../processes-v2/tasks.md)
diff --git a/docs/src/plugins/ansible.md b/docs/src/plugins/ansible.md
new file mode 100644
index 0000000000..c8e9733e8b
--- /dev/null
+++ b/docs/src/plugins/ansible.md
@@ -0,0 +1,858 @@
+# Ansible
+
+Concord supports running [Ansible](https://www.ansible.com/) playbooks with the
+`ansible` task as part of any flow. This allows you to provision and manage
+application deployments with Concord.
+
+- [Usage](#usage)
+- [Ansible](#ansible)
+- [Parameters](#parameters)
+- [Result Data](#result-data)
+- [Configuring Ansible](#configuring-ansible)
+- [Inline inventories](#inline-inventories)
+- [Dynamic inventories](#dynamic-inventories)
+- [Authentication with Secrets](#authentication-with-secrets)
+- [Ansible Vault](#ansible-vault)
+- [Custom Docker Images](#custom-docker-images)
+- [Retry and Limit Files](#retry-and-limit-files)
+- [Ansible Lookup Plugins](#ansible-lookup-plugins)
+- [Group Vars](#group-vars)
+- [Input Variables](#input-variables)
+- [Output Variables](#output-variables)
+- [Extra Modules](#extra-modules)
+- [External Roles](#external-roles)
+- [Log Filtering](#log-filtering)
+- [Limitations](#limitations)
+
+## Usage
+
+To be able to use the task in a Concord flow, it must be added as a
+[dependency](../processes-v2/configuration.md#dependencies):
+
+```yaml
+configuration:
+ dependencies:
+ - mvn://com.walmartlabs.concord.plugins.basic:ansible-tasks:{{ site.concord_core_version }}
+```
+
+This adds the task to the classpath and allows you to invoke the task in a flow:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: playbook/hello.yml
+ out: ansibleResult
+```
+
+## Ansible
+
+The plugin, with a configuration as above, executes an Ansible playbook with the
+Ansible installation running on Concord.
+
+__The version of Ansible being used is {{ site.concord_ansible_version }}.__
+
+A number of configuration parameters are pre-configured by the plugin:
+
+```
+[defaults]
+host_key_checking = false
+retry_files_enabled = true
+gather_subset = !facter,!ohai
+remote_tmp = /tmp/${USER}/ansible
+timeout = 120
+
+[ssh_connection]
+pipelining = true
+```
+
+Further, up-to-date details are available
+[in the source code of the plugin]({{ site.concord_source }}blob/master/plugins/tasks/ansible/src/main/java/com/walmartlabs/concord/plugins/ansible/v2/AnsibleTaskV2.java).
+
+One of the most important lines is `gather_subset = !facter,!ohai`. This disables
+some of the variables that are usually available, such as `ansible_default_ipv4`.
+The parameters can be overridden in your own Ansible task invocation as
+described in [Configuring Ansible](#configuring-ansible):
+
+```yaml
+- task: ansible
+ in:
+ config:
+ defaults:
+ gather_subset: all
+```
+
+
+## Parameters
+
+All parameters sorted alphabetically. Usage documentation can be found in the
+following sections:
+
+- `auth` - authentication parameters:
+ - `privateKey` - private key parameters;
+ - `path` - string, path to a private key file located in the process's working directory;
+ - `user` - string, remote username;
+ - `secret` - parameters of the SSH key pair stored as a Concord secret
+ - `org` - string, the secret's organization name;
+ - `name` - string, the secret's name;
+ - `password` - string, the secret's password (optional);
+ - `krb5` - Kerberos 5 authentication:
+ - `user` - AD username;
+ - `password` - AD password.
+- `config` - JSON object, used to create an
+ [Ansible configuration](#configuring-ansible);
+- `check` - boolean, when set to true Ansible does not make any changes; instead
+ it tries to predict some of the changes that may occur. Check
+ [the official documentation](https://docs.ansible.com/ansible/2.5/user_guide/playbooks_checkmode.html)
+  for more details;
+- `debug` - boolean, enables additional debug logging;
+- `disableConcordCallbacks` - boolean, disables all Ansible callback plugins
+ provided by Concord (event recording, `outVars` processing, etc). Default is
+ `false`;
+- `dockerImage` - string, optional [Docker image](#custom-docker-images) to use;
+- `dynamicInventoryFile` - string, path to a dynamic inventory
+ script. See also [Dynamic inventories](#dynamic-inventories) section;
+- `enableLogFiltering` - boolean, see [Log Filtering](#log-filtering) section;
+- `enablePolicy` - boolean, apply active Concord [policies](../getting-started/policies.md#ansible-rule).
+ Default is `true`;
+- `enableEvents` - boolean, record Ansible events - task executions, hosts, etc.
+ Default is `true`;
+- `enableStats` - boolean, save the statistics as a JSON file. Default is `true`;
+- `enableOutVars` - boolean, process [output variables](#output-variables).
+ Default is `true`;
+- `extraEnv` - JSON object, additional environment variables;
+- `extraVars` - JSON object, used as `--extra-vars`. See also
+ the [Input Variables](#input-variables) section;
+- `extraVarsFiles` - list of strings, paths to extra variables files. See also
+ the [Input Variables](#input-variables) section;
+- `groupVars` - configuration for exporting secrets as Ansible [group_vars](#group-vars) files;
+- `inventory` - JSON object, inventory data specifying
+  [a static, inline inventory](#inline-inventories);
+- `inventoryFile` - string, path to an inventory file;
+- `limit` - limit file, see [Retry and Limit Files](#retry-and-limit-files);
+- `playbook` - string, a path to a playbook. See [the note](#custom-docker-images)
+  on usage with `dockerImage`;
+- `retry` - boolean, the retry flag, see [Retry and Limit Files](#retry-and-limit-files);
+- `tags` - string, a comma-separated list or an array of
+ [tags](http://docs.ansible.com/ansible/latest/playbooks_tags.html);
+- `skipTags` - string, a comma-separated list or an array of
+ [tags](http://docs.ansible.com/ansible/latest/playbooks_tags.html) to skip;
+- `saveRetryFile` - boolean, save the generated retry file as a process
+  attachment, see [Retry and Limit Files](#retry-and-limit-files);
+- `syntaxCheck` - boolean, perform a syntax check on the playbook without
+  executing it;
+- `vaultPassword` - string, password to use with [Ansible Vault](#ansible-vault).
+- `verbose` - integer, increase log
+ [verbosity](http://docs.ansible.com/ansible/latest/ansible-playbook.html#cmdoption-ansible-playbook-v). 1-4
+ correlate to -v through -vvvv.
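+
+For example, several of these parameters can be combined in a single task
+call. A sketch with illustrative values:
+
+```yaml
+flows:
+  default:
+    - task: ansible
+      in:
+        playbook: "playbook/hello.yml"
+        inventoryFile: "inventory.ini"
+        tags: "deploy,configure"
+        verbose: 2
+        # dry run: predict changes without applying them
+        check: true
+      out: ansibleResult
+```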
+
+## Result Data
+
+In addition to
+[common task result fields](../processes-v2/flows.md#task-result-data-structure),
+the `ansible` task returns:
+
+- `exitCode` - number, ansible process exit code;
+- custom attributes matching the names defined in [`outVars`](#output-variables).
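+
+For example, with the result saved via `out`, the exit code can be checked
+after the task call (a sketch):
+
+```yaml
+flows:
+  default:
+    - task: ansible
+      in:
+        playbook: "playbook/hello.yml"
+      out: ansibleResult
+
+    - if: ${ansibleResult.exitCode == 0}
+      then:
+        - log: "Playbook finished successfully"
+```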
+
+## Configuring Ansible
+
+Ansible's [configuration](https://docs.ansible.com/ansible/latest/reference_appendices/config.html)
+can be specified under the `config` key:
+
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ config:
+ defaults:
+ forks: 50
+ ssh_connection:
+ pipelining: True
+```
+
+which is equivalent to:
+
+```
+[defaults]
+forks = 50
+
+[ssh_connection]
+pipelining = True
+```
+
+## Inline Inventories
+
+Using an inline
+[inventory](http://docs.ansible.com/ansible/latest/intro_inventory.html) you
+can specify the details for all target systems to use.
+
+The example sets the host IP of the `local` inventory item and an
+additional variable in `vars`:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: "playbook/hello.yml"
+ inventory:
+ local:
+ hosts:
+ - "127.0.0.1"
+ vars:
+ ansible_connection: "local"
+```
+
+Multiple inventories can be used as well:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ inventory:
+ - local:
+ hosts:
+ - "127.0.0.1"
+ vars:
+ ansible_connection: "local"
+ - remote:
+ hosts:
+ - "example.com"
+```
+
+In the example above, the plugin creates two temporary inventory files and runs
+the `ansible-playbook -i fileA -i fileB ...` command.
+
+The plugin allows mixing and matching of inventory files and inline inventory
+definitions:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ inventory:
+ - "path/to/a/local/file.ini"
+ - local:
+ hosts:
+ - "127.0.0.1"
+ vars:
+ ansible_connection: "local"
+```
+
+Alternatively, an inventory file can be supplied as a separate file,
+e.g. `inventory.ini`:
+
+```
+[local]
+127.0.0.1
+
+[local:vars]
+ansible_connection=local
+```
+
+and referenced using the `inventoryFile` parameter:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: "playbook/hello.yml"
+ inventoryFile: inventory.ini
+```
+
+## Dynamic Inventories
+
+Alternatively to a static configuration to set the target system for Ansible,
+you can use a script to create the inventory - a
+[dynamic inventory](http://docs.ansible.com/ansible/latest/intro_dynamic_inventory.html).
+
+You can specify the name of the script using the `dynamicInventoryFile` input
+parameter for the task:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: "playbook/hello.yml"
+ dynamicInventoryFile: "inventory.py"
+```
+
+The script is automatically marked as executable and passed directly to
+the `ansible-playbook` command.
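+
+A minimal inventory script must at least support the `--list` argument and
+print the inventory as a JSON document. A sketch with illustrative hosts:
+
+```
+$ ./inventory.py --list
+{
+  "local": {
+    "hosts": ["127.0.0.1"],
+    "vars": {"ansible_connection": "local"}
+  },
+  "_meta": {"hostvars": {}}
+}
+```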
+
+
+
+
+## Authentication with Secrets
+
+### Linux / SSH
+
+The Ansible task can use a key managed as a secret by Concord, that you have
+created or uploaded via the user interface or the
+[REST API](../api/secret.md) to connect to the target servers.
+
+The public part of a key pair should be added as a trusted key to the
+target server. The easiest way to check if the key is correct is to
+try to login to the remote server like this:
+
+```
+ssh -v -i /path/to/the/private/key remote_user@target_host
+```
+
+If you are able to login to the target server without any error
+messages or password prompt, then the key is correct and can be used
+with Ansible and Concord.
+
+The next step is to configure the `user` to use to connect to the servers and
+the key to use with the `privateKey` configuration:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ auth:
+ privateKey:
+ user: "app"
+ secret:
+ org: "myOrg" # optional
+ name: "mySecret"
+ password: mySecretPassword # optional
+```
+
+This exports the key with the provided username and password to the filesystem
+as `temporaryKeyFile` and uses the configured username `app` to connect. The
+equivalent Ansible command is
+
+```
+ansible-playbook --user=app --private-key temporaryKeyFile ...
+```
+
+Alternatively, it is possible to specify the private key file directly:
+```yaml
+- task: ansible
+ in:
+ auth:
+ privateKey:
+ path: "private.key"
+```
+
+The `path` must be relative to the current process' working directory.
+
+### Windows
+
+Upload a [Windows Credential (Group Var)](https://docs.ansible.com/ansible/latest/user_guide/windows_winrm.html#ntlm) as a file secret via the UI or [api](#group-vars).
+
+Example file contents:
+```yaml
+ansible_user: AutomationUser@SUBDOMAIN.DOMAIN.COM
+ansible_password: yourpasshere
+ansible_port: 5985
+ansible_connection: winrm
+ansible_winrm_server_cert_validation: ignore
+ansible_winrm_transport: ntlm
+```
+
+Export this secret as a [Group Var](#group-vars) for an inventory group containing the Windows hosts.
+
+## Ansible Vault
+
+[Ansible Vault](https://docs.ansible.com/ansible/latest/vault.html) allows you
+to keep sensitive data in files that can then be accessed in a Concord flow.
+The password and the password file for Vault usage can be specified using
+`vaultPassword` or `vaultPasswordFile` parameters:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ # passing the vault's password as a value
+ vaultPassword: "myS3cr3t"
+
+ # or as a file
+ vaultPasswordFile: "get_vault_pwd.py"
+```
+
+Any secret values are then made available for usage in the Ansible playbook as
+usual.
+
+[Multiple vault passwords](https://docs.ansible.com/ansible/latest/user_guide/vault.html#multiple-vault-passwords)
+or password files can also be specified:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ # pass as values
+ vaultPassword:
+        myVaultId: "aStringValue"
+ myOtherVaultId: "otherStringValue"
+
+ # or using files
+ vaultPasswordFile:
+ vaultFile: "get_vault_pwd.py"
+ otherVaultFile: "get_other_vault_pwd.py"
+```
+
+The `vaultPassword` example above is an equivalent of running
+
+```bash
+ansible-playbook --vault-id myVaultId@aStringValue --vault-id myOtherVaultId@otherStringValue ...
+```
+
+The `vaultPasswordFile` values must be relative paths inside the process'
+working directory.
+
+Our [ansible_vault example project]({{ site.concord_source }}tree/master/examples/ansible_vault)
+shows a complete setup and usage.
+
+
+
+## Custom Docker Images
+
+The Ansible task typically runs on the default Docker container used by Concord
+for process executions. In some cases Ansible playbooks require additional
+modules to be installed. You can create a suitable Docker image, publish it to a
+registry, and subsequently use it in your flow by specifying it as an input
+parameter for the Ansible task:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ dockerImage: "walmartlabs/concord-ansible"
+```
+
+We recommend using `walmartlabs/concord-ansible` as a base for your custom
+Ansible images.
+
+Please refer to our [Docker plugin documentation](./docker.md) for more
+details.
+
+**Note:** Concord mounts the current `${workDir}` into the container as
+`/workspace`. If your `playbook` parameter specifies an absolute path or uses
+`${workDir}` value, consider using relative paths:
+
+```yaml
+- task: ansible
+ in:
+ playbook: "${workDir}/myPlaybooks/play.yml" # doesn't work, ${workDir} points to a directory outside of the container
+ dockerImage: "walmartlabs/concord-ansible"
+
+- task: ansible
+ in:
+ playbook: "myPlaybooks/play.yml" # works, the relative path correctly resolves to the path inside the container
+ dockerImage: "walmartlabs/concord-ansible"
+```
+
+
+
+## Retry and Limit Files
+
+Concord provides support for Ansible "retry files". By
+default, when a playbook execution fails, Ansible creates a `*.retry` file
+which can be used to restart the execution for the failed hosts.
+
+If the `retry` parameter is set to `true`, Concord automatically uses the
+existing retry file of the playbook:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: playbook/hello.yml
+ retry: true
+```
+
+The equivalent Ansible command is
+
+```bash
+ansible-playbook --limit @${workDir}/playbook/hello.retry
+```
+
+Note that specifying `retry: true` doesn't mean that Ansible automatically
+retries the playbook execution. It only tells Ansible to look for a `*.retry`
+file and, if present, use it. If no `*.retry` file was created beforehand,
+the task call simply fails. See
+[an example](https://github.com/walmartlabs/concord/tree/master/examples/ansible_retry)
+of how to combine the plugin's `retry` parameter and the task call's `retry`
+attribute to automatically re-run a playbook.
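+
+The combination mentioned above can be sketched as follows, using the
+step-level `retry` attribute (the `times`/`delay` values are illustrative):
+
+```yaml
+flows:
+  default:
+    - task: ansible
+      in:
+        playbook: "playbook/hello.yml"
+        # reuse the playbook's *.retry file if it exists
+        retry: true
+      # step-level retry: re-run the failed task call
+      retry:
+        times: 2
+        delay: 30
+```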
+
+Alternatively, the `limit` parameter can be specified directly:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: playbook/hello.yml
+ # uses @${workDir}/my.retry file
+      limit: "@my.retry"
+```
+
+The equivalent Ansible command is
+
+```bash
+ansible-playbook --limit @my.retry
+```
+
+If the `saveRetryFile` parameter is set to `true`, then the generated `*.retry`
+file is saved as a process attachment and can be retrieved using the REST API:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ saveRetryFile: true
+```
+
+```bash
+curl ... http://concord.example.com/api/v1/process/${processId}/attachments/ansible.retry
+```
+
+## Ansible Lookup Plugins
+
+Concord provides a special
+[Ansible lookup plugin](https://docs.ansible.com/ansible/devel/plugins/lookup.html)
+to retrieve password-protected secrets in playbooks:
+
+
+```yaml
+- hosts: local
+ tasks:
+ - debug:
+ msg: "We got {{ lookup('concord_data_secret', 'myOrg', 'mySecret', 'myPwd') }}"
+    verbosity: 0
+```
+
+In this example `myOrg` is the name of the organization that owns the secret,
+`mySecret` is the name of the retrieved secret and `myPwd` is the password
+for accessing the secret.
+
+Use `None` to retrieve a secret created without a password:
+
+```yaml
+- hosts: local
+ tasks:
+ - debug:
+ msg: "We got {{ lookup('concord_data_secret', 'myOrg', 'mySecret', None) }}"
+    verbosity: 0
+```
+
+If the process was started using a project, then the organization name can be
+omitted. Concord will automatically use the name of the project's organization:
+
+```yaml
+- hosts: local
+ tasks:
+ - debug:
+ msg: "We got {{ lookup('concord_data_secret', 'mySecret', 'myPwd') }}"
+    verbosity: 0
+```
+
+Currently, only simple string value secrets are supported.
+
+See also [the example]({{ site.concord_source }}tree/master/examples/secret_lookup)
+project.
+
+
+
+## Group Vars
+
+Files stored as Concord [secrets](../api/secret.md) can be used as Ansible's
+`group_var` files.
+
+For example, if we have a file stored as a secret like this,
+
+```yaml
+# myVars.yml
+my_name: "Concord"
+
+# saved as:
+# curl ... \
+# -F type=data \
+# -F name=myVars \
+# -F data=@myVars.yml \
+# -F storePassword=myPwd \
+# http://host:port/api/v1/org/Default/secret
+```
+
+it can be exported as a `group_vars` file using `groupVars` parameter:
+
+```yaml
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: myPlaybooks/play.yml
+ ...
+ groupVars:
+ - myGroup:
+ orgName: "Default" # optional
+ secretName: "myVars"
+ password: "myPwd" # optional
+ type: "yml" # optional, default "yml"
+```
+
+In the example above, the `myVars` secret is exported as a file to
+`${workDir}/myPlaybooks/group_vars/myGroup.yml` and the `my_name` variable is
+made available to the `myGroup` host group.
+
+Check
+[the official Ansible documentation](http://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#group-variables)
+for more details on `group_vars` files.
+
+## Input Variables
+
+To pass variables from the Concord flow to an Ansible playbook execution use
+`extraVars`:
+
+```yaml
+- task: ansible
+ in:
+ playbook: playbook.yml
+ extraVars:
+ message: "Hello from Concord! Process ID: ${txId}"
+```
+
+And the corresponding playbook:
+
+```yaml
+- hosts: all
+ tasks:
+ - debug:
+ msg: "{{ message }}"
+ verbosity: 0
+```
+
+Effectively, it is the same as running this command:
+
+```bash
+ansible-playbook ... -e '{"message": "Hello from..."}' playbook.yml
+```
+
+Any JSON-compatible data type such as strings, numbers, booleans, lists, etc.
+can be used.
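+
+For instance, lists and nested objects are passed the same way. A sketch with
+illustrative variable names:
+
+```yaml
+- task: ansible
+  in:
+    playbook: playbook.yml
+    extraVars:
+      replicas: 3
+      regions:
+        - "us-east"
+        - "us-west"
+      settings:
+        debug: false
+```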
+
+Additionally, YAML/JSON files can be used to pass additional variables into the
+playbook execution:
+
+```yaml
+- task: ansible
+ in:
+ playbook: playbook.yml
+ extraVarsFiles:
+ - "myVars.json"
+ - "moreVars.yml"
+```
+
+This is equivalent to running the following command:
+
+```bash
+ansible-playbook ... -e @myVars.json -e @moreVars.yml playbook.yml
+```
+
+
+
+## Output Variables
+
+The `ansible` task can export a list of variable names from the Ansible
+execution back to the Concord process context with the `outVars` parameter.
+
+The Ansible playbook can use the `register` or `set_fact` statements to make
+the variable available:
+
+```yaml
+- hosts: local
+ tasks:
+ - debug:
+ msg: "Hi there!"
+ verbosity: 0
+ register: myVar
+```
+
+In the example above, the `myVar` variable holds a map of host -> value entries.
+If there was a single host, 127.0.0.1, in the Ansible execution, then `myVar`
+looks like the following snippet:
+
+```json
+{
+ "127.0.0.1": {
+ "msg": "Hi there!",
+ ...
+ }
+}
+```
+
+The variable is captured in Concord with `outVars` and can be used after the
+`ansible` task:
+
+```yaml
+- task: ansible
+ in:
+ playbook: playbook/hello.yml
+ inventory:
+ local:
+ hosts:
+ - "127.0.0.1"
+ vars:
+ ansible_connection: "local"
+ outVars:
+ - "myVar"
+ out: ansibleResult
+```
+
+The object can be traversed to access specific values:
+
+```yaml
+- log: ${ansibleResult.myVar['127.0.0.1']['msg']}
+```
+
+Expressions can be used to convert an `outVar` value into a "flat" list of
+values:
+
+```yaml
+# grab a 'msg' value for each host
+- log: |-
+ ${ansibleResult.myVar.entrySet().stream()
+ .map(kv -> kv.value.msg)
+ .toList()}
+```
+
+**Note:** not compatible with `disableConcordCallbacks: true` or
+`enableOutVars: false`. Check the [parameters](#parameters) section for more
+details.
+
+## Extra Modules
+
+The plugin provides two ways of adding 3rd-party modules or using a specific
+version of Ansible:
+
+- using a [custom Docker image](#custom-docker-images);
+- or using the plugin's support for Python's
+ [virtualenv](https://virtualenv.pypa.io/en/latest/).
+
+Virtualenv can be used to install [PIP modules](https://pypi.org/), as well as
+Ansible itself, into a temporary directory inside the process' working
+directory.
+
+For example:
+
+```yaml
+- task: ansible
+ in:
+ virtualenv:
+ packages:
+ - "ansible==2.7.0"
+ - "openshift"
+```
+
+In the example above the plugin creates a new virtual environment and installs
+two packages: `ansible`, pinned to the specified version, and `openshift`. This
+environment is then used to run Ansible.
+
+The full syntax:
+
+- `virtualenv`
+ - `packages` - list of PIP packages with optional version qualifiers;
+ - `indexUrl` - optional URL of the Python Package Index, defaults to
+ `https://pypi.org/simple`;
+
+Note that at the moment the plugin doesn't provide any caching for virtual
+environments. Any requested modules are downloaded each time the task
+executes, which might take a significant amount of time depending on the size
+of the packages, their dependencies, network speed, etc.
+
+## External Roles
+
+Ansible roles located in external repositories can be imported using the `roles`
+parameter:
+
+```yaml
+- task: ansible
+ in:
+ playbook: "playbook.yml"
+ roles:
+ - src: "https://github.com/my-org/my-roles.git"
+ name: "roles"
+```
+
+And the corresponding playbook:
+
+```yaml
+- hosts: myHosts
+ roles:
+ - somerole # any role in the repository can be used
+```
+
+Using the configuration above, the plugin performs a `git clone` of the
+specified URL into a temporary directory and adds the cloned directory to
+the list of Ansible role paths.
+
+The `roles` parameter is a list of role imports with the following syntax:
+
+- `src` - URL of a repository to import;
+- `name` - the name of the directory or a repository shortcut (see below);
+- `path` - a path in the repository to use;
+- `version` - a branch name, a tag or a commit ID to use.
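+
+A sketch using all of the above parameters (the `path` and `version` values
+are illustrative):
+
+```yaml
+- task: ansible
+  in:
+    playbook: "playbook.yml"
+    roles:
+      - src: "https://github.com/my-org/my-roles.git"
+        name: "roles"
+        path: "ansible/roles"
+        version: "1.2.3"
+```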
+
+A shortcut can be used to avoid specifying the repository URLs multiple times:
+
+```yaml
+configuration:
+ arguments:
+ ansibleParams:
+ defaultSrc: "https://github.com"
+
+flows:
+ default:
+ - task: ansible
+ in:
+ playbook: playbook.yml
+ roles:
+ - name: "my-org/my-roles"
+```
+
+In the example above the plugin uses `ansibleParams.defaultSrc` and the role's
+`name` to create the repository URL: `https://github.com/my-org/my-roles.git`.
+
+It is possible to put such `ansibleParams` into the [default process
+configuration](../getting-started/configuration.md#default-process-variables)
+and make it the system default. If you're using a hosted Concord instance,
+contact your administrator if such defaults are available.
+
+## Log Filtering
+
+The plugin provides an optional mode in which variables that might contain
+sensitive data are prevented from appearing in the log.
+
+To enable this mode, set `enableLogFiltering` to `true` in the task call
+parameters:
+
+```yaml
+- task: ansible
+ in:
+ enableLogFiltering: true
+```
+
+If the filter detects a variable with `password`, `credentials`, `secret`,
+`ansible_password` or `vaultpassword` in its name or value, then the value
+appears as `******` in the log. Additionally, the `no_log` mode is enabled
+for steps that include such variables.
+
+## Limitations
+
+Ansible's `strategy: debug` is not supported. It requires an interactive
+terminal, expects user input, and should not be used in Concord's
+environment. Playbooks with `strategy: debug` hang indefinitely but can
+be killed using the REST API or the Console.
diff --git a/docs/src/plugins/asserts.md b/docs/src/plugins/asserts.md
new file mode 100644
index 0000000000..7cc12e22ae
--- /dev/null
+++ b/docs/src/plugins/asserts.md
@@ -0,0 +1,12 @@
+# Asserts Plugin
+
+The `asserts` task allows you to verify conditions within your flows.
+It ensures that required variables, inputs, or states are correctly set during process execution.
+If a condition fails, the flow will terminate with an error, preventing further execution.
+
+The task provides the following functions:
+
+- `asserts.hasVariable(variableName)` - verifies that a specific variable is present in the process;
+- `asserts.hasFile(path)` - checks if a file exists at the given path;
+- `asserts.assertEquals(expected, actual)` - ensures that two values are equal;
+- `asserts.assertTrue(condition)` - validates that a given condition is true.
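+
+For example, the functions can be called as flow expressions to validate
+process inputs before any work is done. A sketch, assuming the variable and
+file names shown:
+
+```yaml
+flows:
+  default:
+    # fail fast if the required inputs are missing
+    - expr: ${asserts.hasVariable('deployEnv')}
+    - expr: ${asserts.hasFile('config/app.yml')}
+    - expr: ${asserts.assertTrue(deployEnv != 'test')}
+```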
diff --git a/docs/src/plugins/concord.md b/docs/src/plugins/concord.md
new file mode 100644
index 0000000000..191783ac88
--- /dev/null
+++ b/docs/src/plugins/concord.md
@@ -0,0 +1,651 @@
+# Concord
+
+The `concord` task allows users to start and manage new processes from within
+running processes.
+
+The task is provided automatically for all flows, no external dependencies
+necessary.
+
+- [Examples](#examples)
+- [Parameters](#parameters)
+- [Starting a Process using a Payload Archive](#starting-a-process-using-a-payload-archive)
+- [Starting a Process using an Existing Project](#starting-a-process-using-an-existing-project)
+- [Starting an External Process](#starting-an-external-process)
+- [Scheduling a Process](#scheduling-a-process)
+- [Specifying Profiles](#specifying-profiles)
+- [File Attachments](#file-attachments)
+- [Forking a Process](#forking-a-process)
+- [Forking Multiple Instances](#forking-multiple-instances)
+- [Synchronous Execution](#synchronous-execution)
+- [Suspending Parent Process](#suspending-parent-process)
+- [Suspending for Completion](#suspending-for-completion)
+- [Waiting for Completion](#waiting-for-completion)
+- [Handling Cancellation and Failures](#handling-cancellation-and-failures)
+- [Cancelling Processes](#cancelling-processes)
+- [Tagging Subprocesses](#tagging-subprocesses)
+- [Output Variables](#output-variables)
+
+## Parameters
+
+All parameters sorted alphabetically. Usage documentation can be found in the
+following sections:
+
+- `action` - string, name of the action (`start`, `startExternal`, `fork`, `kill`);
+- `activeProfiles` - list of string values, profiles to activate;
+- `apiKey` - string, Concord API key to use. If not specified, the task uses
+the current process' session token;
+- `arguments` - input arguments of the starting processes;
+- `disableOnCancel` - boolean, disable `onCancel` flow in forked processes;
+- `disableOnFailure` - boolean, disable `onFailure` flow in forked processes;
+- `entryPoint` - string, name of the starting process' flow;
+- `ignoreFailures` - boolean, ignore failed processes;
+- `instanceId` - UUID, ID of the process to `kill`;
+- `org` - string, name of the process' organization, optional, defaults to the
+organization of the calling process;
+- `outVars` - list of string values, out variables to capture;
+- `payload` - path to a ZIP archive or a directory, the process' payload;
+- `requirements` - object, allows specifying the process'
+[requirements](../processes-v2/configuration.md#requirements);
+- `project` - string, name of the process' project;
+- `repo` - string, name of the project's repository to use;
+- `repoBranchOrTag` - string, overrides the configured branch or tag name of
+the project's repository;
+- `repoCommitId` - string, overrides the configured GIT commit ID of the
+project's repository;
+- `startAt` - [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) date/time
+value, the process' start time;
+- `suspend` - boolean, if `true` and `sync` is enabled the process [suspends](#start-suspend)
+waiting for the child process to complete (only for actions `start` and `fork`);
+- `sync` - boolean, wait for completion if `true`, defaults to `false`;
+- `debug` - boolean, if `true` the plugin logs additional debug information, defaults to `false`;
+- `tags` - list of string values, the process' tags;
+- `attachments` - list of file attachments.
+
+
+
+## Starting a Process using a Payload Archive
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ payload: payload.zip
+ out: jobOut
+```
+
+The `start` action starts a new subprocess using the specified payload archive.
+The ID of the started process is stored in the `id` attribute:
+
+```yaml
+- if: ${jobOut.ok}
+ then:
+ - log: "I've started a new process: ${jobOut.id}"
+ else:
+ - log: "Error with child process: ${jobOut.error}"
+```
+
+
+
+## Starting a Process using an Existing Project
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ org: myOrg
+ project: myProject
+ payload: payload.zip
+ out: jobOut
+```
+
+The `start` action with a `project` parameter and a `payload` in the form of
+a ZIP archive or the name of a folder in your project repository starts a new
+subprocess in the context of the specified project.
+
+Alternatively, if the project has a repository configured, the process can be
+started by configuring the repository:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ project: myProject
+ repo: myRepo
+ out: jobOut
+```
+
+The process is started using the resources provided by the specified archive,
+project and repository.
+
+
+
+## Starting an External Process
+
+To start a process on an external Concord instance use the `startExternal` action:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ baseUrl: "http://another.concord.example.com:8001"
+ apiKey: "myApiKey"
+ action: startExternal
+ project: myProject
+ repo: myRepo
+ out: jobOut
+```
+
+Connection parameters can be overridden using the following keys:
+
+- `baseUrl` - Concord REST API endpoint. Defaults to the current
+ server's API endpoint address;
+- `apiKey` - user's REST API key.
+
+**Note:** The `suspend: true` option is not supported with the `startExternal`
+action.
+
+
+
+## Scheduling a Process
+
+To schedule a process to a specific date and time, use the `startAt` parameter:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ ...
+ startAt: "2018-03-16T23:59:59-05:00"
+ out: jobOut
+```
+
+The `startAt` parameter accepts an ISO-8601 string, `java.util.Date` or
+`java.util.Calendar` values. It is important to include a timezone, as the
+server may use a different default timezone.
+
+
+
+## Specifying Profiles
+
+To specify which profiles are used to start the process, use the
+`activeProfiles` parameter:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ ...
+ activeProfiles:
+ - firstProfile
+ - secondProfile
+ out: jobOut
+```
+
+The parameter accepts either a YAML array or a comma-separated string value.
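+
+The comma-separated string form is equivalent:
+
+```yaml
+- task: concord
+  in:
+    action: start
+    activeProfiles: "firstProfile,secondProfile"
+```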
+
+
+
+## File Attachments
+
+To start a process with file attachments, use the `attachments` parameter. An
+attachment can be a single path to a file, or a map which specifies a source
+and destination filename for the file. If the attachment is a single path, the
+file is placed in the root directory of the new process with the same name.
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ ...
+ attachments:
+ - ${workDir}/someDir/myFile.txt
+ - src: anotherFile.json
+ dest: someFile.json
+ out: jobOut
+```
+
+This is equivalent to the curl command:
+
+```bash
+curl ... -F myFile.txt=@${workDir}/someDir/myFile.txt -F someFile.json=@anotherFile.json ...
+```
+
+
+
+## Forking a Process
+
+Forking a process creates a copy of the current process. All variables and
+files defined at the start of the parent process are copied to the child process
+as well:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: fork
+ entryPoint: sayHello
+ out: jobOut
+
+ - log: "Forked child process: ${jobOut.ids[0]}"
+
+ sayHello:
+ - log: "Hello from a subprocess!"
+```
+
+The ID of the started process is stored in the `ids` array attribute of the
+returned result.
+
+**Note:** Due to the current limitations, files created after
+the start of a process cannot be copied to child processes.
+
+
+
+## Forking Multiple Instances
+
+It is possible to create multiple forks of a process, each with a different
+set of parameters:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: fork
+ forks:
+ - entryPoint: pickAColor
+ arguments:
+ color: "red"
+ - entryPoint: pickAColor
+ arguments:
+ color: "green"
+ - entryPoint: pickAColor
+ instances: 2
+ arguments:
+ color: "blue"
+ out: jobOut
+```
+
+The `instances` parameter allows spawning more than one copy of a process.
+
+The IDs of the started processes are stored in the `ids` array in the result.
+
+```yaml
+- log: "Forked child processes: ${jobOut.ids}"
+```
+
+
+
+## Synchronous Execution
+
+By default, all subprocesses are started asynchronously. To start a process and
+wait for it to complete, use the `sync` parameter:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ payload: payload.zip
+ sync: true
+ out: jobOut
+```
+
+If a subprocess fails, the task throws an exception. To ignore failed processes,
+use the `ignoreFailures: true` parameter:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ payload: payload.zip
+ sync: true
+ ignoreFailures: true
+ out: jobOut
+```
+
+
+
+## Suspending Parent Process
+
+There's an option to suspend the parent process while it waits for the child
+process to complete:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ org: myOrg
+ project: myProject
+ repo: myRepo
+ sync: true
+ suspend: true
+ out: jobOut
+
+ - log: "Done: ${jobOut.id}"
+```
+
+This can be very useful to reduce the number of Concord agents needed. With
+`suspend: true`, the parent process does not consume any resources, including
+agent workers, while waiting for the child process.
+
+`suspend` can be used with the `fork` action as well:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: fork
+ forks:
+ - entryPoint: sayHello
+ - entryPoint: sayHello
+ - entryPoint: sayHello
+ sync: true
+ suspend: true
+ out: jobOut
+
+ sayHello:
+ - log: "Hello from a subprocess!"
+```
+
+Currently, `suspend` can only be used with the `start` and `fork` actions.
+
+**Note:** Due to the current limitations, files created after the start of
+the parent process are not preserved. Effectively, suspend works in the same
+way as [forms](../getting-started/forms.html).
+
+
+
+## Suspending for Completion
+
+Use the following approach to suspend a process until the completion of
+other processes:
+
+```yaml
+flows:
+ default:
+ - set:
+ children: []
+
+ - task: concord
+ in:
+ action: start
+ payload: payload
+ out: jobOut
+
+ - ${children.add(jobOut.id)}
+
+ - task: concord
+ in:
+ action: start
+ payload: payload
+ out: jobOut
+
+ - ${children.add(jobOut.id)}
+
+ - ${concord.suspendForCompletion(children)}
+
+ - log: "process is resumed."
+```
+
+
+
+## Waiting for Completion
+
+To wait for the completion of a process:
+
+```yaml
+flows:
+ default:
+ # wait for one id
+ - ${concord.waitForCompletion(id)}
+
+ # or multiple
+ - ${concord.waitForCompletion(ids)}
+```
+
+The `ids` value is a list (as in `java.util.List`) of process IDs.
+
+The expression returns a map of process entries. To see all returned fields,
+check with the [Process API](../api/process.html#status):
+
+```json
+{
+ "56e3dcd8-a775-11e7-b5d6-c7787447ca6d": {
+ "status": "FINISHED"
+ },
+ "5cd83364-a775-11e7-aadd-53da44242629": {
+ "status": "FAILED",
+ "meta": {
+ "out": {
+ "lastError": {
+ "message": "Something went wrong."
+ }
+ }
+ }
+ }
+}
+```
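+
+For example, the returned map can be captured and inspected in the flow (a
+sketch; the `results` variable name is illustrative):
+
+```yaml
+- expr: ${concord.waitForCompletion(ids)}
+  out: results
+
+- if: "${results[ids[0]].status == 'FINISHED'}"
+  then:
+    - log: "The first child process finished successfully"
+```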
+
+
+
+## Handling Cancellation and Failures
+
+Just like regular processes, subprocesses can have `onCancel` and `onFailure`
+flows.
+
+However, as process forks share their flows, it may be useful to disable
+`onCancel` or `onFailure` flows in subprocesses:
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: fork
+ disableOnCancel: true
+ disableOnFailure: true
+ entryPoint: sayHello
+
+ sayHello:
+ - log: "Hello!"
+ - throw: "Simulating a failure..."
+
+ # invoked only for the parent process
+ onCancel:
+ - log: "Handling a cancellation..."
+
+ # invoked only for the parent process
+ onFailure:
+ - log: "Handling a failure..."
+```
+
+
+
+## Cancelling Processes
+
+The `kill` action can be used to terminate the execution of a process.
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: kill
+ instanceId: ${someId}
+ sync: true
+```
+
+The `instanceId` parameter can be a single value or a list of process
+IDs.
+
+Setting `sync` to `true` forces the task to wait until the specified
+processes are stopped.
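+
+For example, to terminate several processes at once, pass a list of IDs (a
+sketch; the variables are illustrative):
+
+```yaml
+- task: concord
+  in:
+    action: kill
+    instanceId:
+      - ${firstId}
+      - ${secondId}
+    sync: true
+```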
+
+
+
+## Tagging Subprocesses
+
+The `tags` parameter can be used to tag a new subprocess with one or multiple
+labels.
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ payload: payload.zip
+ tags: ["someTag", "anotherOne"]
+```
+
+The parameter accepts either a YAML array or a comma-separated string value.
+
+Tags are useful for filtering (sub)processes:
+
+```yaml
+flows:
+ default:
+ # spawn multiple tagged processes
+
+ onCancel:
+ - task: concord
+ in:
+ action: kill
+ instanceId: "${concord.listSubprocesses(parentInstanceId, 'someTag')}"
+```
+
+
+
+## Output Variables
+
+Variables of a child process can be accessed via the `outVars` configuration.
+The functionality requires the `sync` parameter to be set to `true`.
+
+```yaml
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ project: myProject
+ repo: myRepo
+ sync: true
+ # list of variable names
+ outVars:
+ - someVar1
+ - someVar2
+ out: jobOut
+
+ - log: "We got ${jobOut.someVar1} and ${jobOut.someVar2}"
+```
+
+When starting multiple forks, their output variables are collected into a nested
+object with fork IDs as keys:
+
+```yaml
+flows:
+ default:
+ # empty list to store fork IDs
+ - set:
+ children: []
+
+ # start the first fork and save the result as "firstResult"
+ - task: concord
+ in:
+ action: fork
+ entryPoint: forkA
+ sync: false
+ outVars:
+ - x
+ out: firstResult
+
+ # save the first fork's ID
+ - ${children.add(firstResult.id)}
+
+ # start the second fork and save the result as "secondResult"
+ - task: concord
+ in:
+ action: fork
+ entryPoint: forkB
+ sync: false
+ outVars:
+ - y
+ out: secondResult
+
+ # save the second fork's ID
+ - ${children.add(secondResult.id)}
+
+ # grab out vars of the forks
+ - expr: ${concord.getOutVars(children)} # accepts a list of process IDs
+ out: forkOutVars
+
+ # print the out-vars grouped by fork
+ - log: "${forkOutVars}"
+
+ forkA:
+ - set:
+ x: 1
+
+ forkB:
+ - set:
+ y: 2
+```
+
+**Note:** the `getOutVars` method waits for the specified processes to finish.
+If one of the specified processes fails, its output will be empty.
+
+In [runtime v2](../processes-v2/index.html) all variables are
+"local" -- limited to the scope they were defined in. The `outVars` mechanism
+grabs only the top-level variables, i.e. variables available in the `entryPoint`
+scope:
+
+```yaml
+flows:
+ # caller
+ default:
+ - task: concord
+ in:
+ action: fork
+ sync: true
+ entryPoint: onFork
+ outVars:
+ - foo
+ - bar
+ out: forkResult
+
+ - log: "${concord.getOutVars(forkResult.id)}"
+
+ # callee
+ onFork:
+ - set:
+ foo: "abc" # ok, top-level variable
+
+ - call: anotherFlow # not ok, "bar" stays local to "anotherFlow"
+
+ - call: anotherFlow # ok, "bar" pushed into the current scope, becomes a top-level variable
+ out:
+ - bar
+
+ anotherFlow:
+ - set:
+ bar: "xyz"
+```
diff --git a/docs/src/plugins/crypto.md b/docs/src/plugins/crypto.md
new file mode 100644
index 0000000000..58e0a245cd
--- /dev/null
+++ b/docs/src/plugins/crypto.md
@@ -0,0 +1,199 @@
+# Crypto
+
+The `crypto` task provides methods to work with Concord's
+[secrets store](../api/secret.html) as well as the methods to encrypt and
+decrypt simple values without storing.
+
+- [Exporting an SSH Key Pair](#ssh-key)
+- [Exporting Credentials](#credentials)
+- [Encrypting and Decrypting Values](#encrypting)
+
+The task is provided automatically by Concord and does not
+require any external dependencies.
+
+
+## Exporting an SSH Key Pair
+
+An SSH key pair [stored in the secrets store](../api/secret.html) can
+be exported as a pair of files into a process' working directory:
+
+```yaml
+- ${crypto.exportKeyAsFile('myOrg', 'myKey', 'myKeyPassword')}
+```
+
+This expression returns a map with two keys:
+- `public` - relative path to the public key of the key pair;
+- `private` - same but for the private key.
+
+A full example adds a key via the REST API:
+
+```bash
+$ curl -u yourusername \
+-F storePassword="myKeyPassword" \
+-F name=myKey \
+-F type=key_pair \
+http://concord.example.com/api/v1/org/Default/secret
+
+{
+ "id" : "...",
+ "result" : "CREATED",
+ "name" : "myKey",
+ "publicKey" : "...",
+ "password" : "myKeyPassword",
+ "ok" : true
+}
+```
+```
+
+And subsequently exports the key in the default flow:
+
+```yaml
+flows:
+ default:
+ - expr: ${crypto.exportKeyAsFile('myOrg', 'myKey', 'myKeyPassword')}
+ out: myKeys
+ - log: "Public: ${myKeys.public}"
+ - log: "Private: ${myKeys.private}"
+```
+
+The keypair password itself can be encrypted using a
+[simple single value encryption](#encrypting) described below.
+
+
+## Exporting Credentials
+
+Credentials (username and password pairs) can be exported with:
+
+```yaml
+- ${crypto.exportCredentials('myOrg', 'myCredentials', 'myPassword')}
+```
+
+If the secret is not password-protected, use `null` instead of the password:
+```yaml
+- ${crypto.exportCredentials('myOrg', 'myCredentials', null)}
+```
+
+The expression returns a map with two keys:
+- `username` - username part
+- `password` - password part
+
+You can store the return value in a variable:
+```yaml
+- expr: ${crypto.exportCredentials('myOrg', 'myCredentials', null)}
+ out: myCreds
+
+- log: "Username: ${myCreds.username}"
+- log: "Password: ${myCreds.password}"
+```
+
+Or use it directly. For example, in a `http` task call:
+```yaml
+- task: http
+ in:
+ auth:
+ basic: ${crypto.exportCredentials('myOrg', 'myCredentials', null)}
+ # ...
+```
+
+
+
+## Exporting Plain Secrets
+
+A "plain" secret is a single encrypted value, stored using
+the REST API or the UI and retrieved using the
+`crypto.exportAsString` method:
+
+```bash
+$ curl -u myusername \
+-F name=mySecret \
+-F type=data \
+-F data="my value" \
+-F storePassword="myPassword" \
+http://concord.example.com/api/v1/org/Default/secret
+```
+
+```yaml
+- log: "${crypto.exportAsString('myOrg', 'mySecret', 'myPassword')}"
+```
+
+In this example, `my value` will be printed in the log.
+
+Alternatively, the `crypto` task provides a method to export plain secrets as files:
+```yaml
+- log: "${crypto.exportAsFile('MyOrg', 'mySecret', 'myPassword')}"
+```
+or with a custom export directory:
+```yaml
+- log: "${crypto.exportAsFile('MyDir', 'MyOrg', 'mySecret', 'myPassword')}"
+```
+
+The method returns a path to the temporary file containing the
+exported secret.
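+
+The returned path can be stored in a variable and used later in the flow (a
+sketch; the variable name is illustrative):
+
+```yaml
+- expr: ${crypto.exportAsFile('myOrg', 'mySecret', 'myPassword')}
+  out: secretFile
+
+- log: "The secret was exported to ${secretFile}"
+```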
+
+
+## Encrypting and Decrypting Values
+
+A value can be encrypted with a project's key and subsequently
+decrypted in the same project's process. The value is not persistently stored.
+
+You can encrypt a value in your project's settings configuration in the
+Concord Console.
+
+Alternatively, the REST API can be used to encrypt the value using the
+project-specific key and the `encrypt` context:
+
+```bash
+curl -u myusername \
+-H 'Content-Type: text/plain' \
+-d 'my secret value' \
+http://concord.example.com/api/v1/org/MyOrg/project/MyProject/encrypt
+```
+
+(replace `MyOrg` and `MyProject` with the names of your organization and project).
+
+The response returns the encrypted value in the `data` element:
+
+```json
+{
+ "data" : "4d1+ruCra6CLBboT7Wx5mw==",
+ "ok" : true
+}
+```
+
+The value of the `data` field can be used as a process variable by adding it as
+an attribute in the Concord file or in the project's configuration, or it can be
+supplied to a specific process execution in the request JSON.
+
+A value can also be encrypted within a Concord Process with the `encryptString`
+method of the `crypto` task:
+
+```yaml
+- expr: ${crypto.encryptString('my secret value')}
+ out: encryptedValue
+```
+
+A value can be encrypted and decrypted only by the same server.
+
+To decrypt the previously encrypted value:
+
+```yaml
+- ${crypto.decryptString("4d1+ruCra6CLBboT7Wx5mw==")}
+```
+
+Alternatively, the encrypted value can be passed as a variable:
+
+```yaml
+- ${crypto.decryptString(mySecret)}
+```
+
+The following example uses the `decryptString` method of the `crypto` task to set
+the value of the `name` attribute:
+
+```yaml
+flows:
+ default:
+ - log: "Hello, ${name}"
+
+configuration:
+ arguments:
+ name: ${crypto.decryptString("4d1+ruCra6CLBboT7Wx5mw==")}
+```
diff --git a/docs/src/plugins/datetime.md b/docs/src/plugins/datetime.md
new file mode 100644
index 0000000000..ccaba05ed7
--- /dev/null
+++ b/docs/src/plugins/datetime.md
@@ -0,0 +1,50 @@
+# Datetime
+
+The `datetime` task provides methods to get the current date and time at the
+moment the flow runs.
+
+
+The task is provided automatically by Concord and does not require any
+external dependencies.
+
+## Usage
+
+The current date as a `java.util.Date` object:
+
+```yaml
+${datetime.current()}
+```
+
+The current date/time from a specific zone formatted using the provided pattern:
+
+```yaml
+${datetime.currentWithZone('zone', 'pattern')}
+${datetime.currentWithZone('America/Chicago', 'yyyy/MM/dd HH:mm:ss Z')}
+```
+
+Pattern syntax should follow
+[the standard Java date/time patterns](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html).
+
+The current date formatted as a string with a pattern:
+
+```yaml
+${datetime.current('pattern')}
+${datetime.current('yyyy/MM/dd HH:mm:ss')}
+```
+
+A `java.util.Date` instance formatted into a string:
+
+```yaml
+${datetime.format(dateValue, 'pattern')}
+${datetime.format(dateValue, 'yyyy/MM/dd HH:mm:ss')}
+```
+
+Parse a string into a `java.util.Date` instance:
+
+```yaml
+${datetime.parse(dateStr, 'pattern')}
+${datetime.parse('2020/02/18 23:59:59', 'yyyy/MM/dd HH:mm:ss')}
+```
+
+If no timezone is specified, the `parse` method defaults to the current timezone
+of the Concord agent running the process.
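+
+The methods can be combined, for example to convert a date string from one
+pattern to another (a sketch; the variable name is illustrative):
+
+```yaml
+flows:
+  default:
+    - set:
+        parsed: "${datetime.parse('2020/02/18', 'yyyy/MM/dd')}"
+    - log: "${datetime.format(parsed, 'dd-MM-yyyy')}"
+```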
diff --git a/docs/src/plugins/docker.md b/docs/src/plugins/docker.md
new file mode 100644
index 0000000000..4a5b313d64
--- /dev/null
+++ b/docs/src/plugins/docker.md
@@ -0,0 +1,203 @@
+# Docker
+
+Concord supports running [Docker](https://hub.docker.com/) images within a process flow.
+
+- [Usage](#usage)
+- [Parameters](#parameters)
+- [Environment Variables](#environment-variables)
+- [Docker Options](#docker-options)
+ - [Add Host Option](#add-host-option)
+- [Capturing the Output](#capturing-the-output)
+- [Custom Images](#custom-images)
+- [Limitations](#limitations)
+
+## Usage
+
+The `docker` task is called with standard
+[runtime-v2 task call syntax](../processes-v2/flows.html#task-calls).
+
+```yaml
+flows:
+ default:
+ - task: docker
+ in:
+ image: library/alpine
+ cmd: echo '${greeting}'
+ out: dockerResult
+
+configuration:
+ arguments:
+ greeting: "Hello, world!"
+```
+
+The above invocation is equivalent to running
+
+```bash
+docker pull library/alpine && \
+docker run -i --rm \
+-v /path/to/process/workDir:/workspace \
+library/alpine \
+echo 'Hello, world!'
+```
+
+## Parameters
+
+- `image` - mandatory, string. Docker image to use;
+- `cmd` - optional, string. Command to run. If not specified, the image's
+`ENTRYPOINT` is used;
+- `env` - optional, [environment variables](#environment-variables);
+- `envFile` - optional. Path to the file containing
+[environment variables](#environment-variables);
+- `hosts` - optional. Additional [/etc/host entries](#add-host-option);
+- `forcePull` - optional, boolean. If `true` Concord runs
+`docker pull ${image}` before starting the container. Default is `true`;
+- `debug` - optional, boolean. If `true` Concord prints out additional
+information into the log (the command line, parameters, etc);
+- `redirectErrorStream` - optional, boolean. Redirect container error output to standard output. Default is `false`;
+- `logOut` - optional, boolean. Sends container standard output to the Concord process log. Default is `true`;
+- `logErr` - optional, boolean. Sends container error output to the Concord process log. Default is `true`;
+- `saveOut` - optional, boolean. Saves container standard output in the task result as the `stdout` variable. Default is `false`;
+- `saveErr` - optional, boolean. Saves container error output in the task result as the `stderr` variable. Default is `false`;
+- `pullRetryCount` - optional, number. Number of retries if `docker pull`
+fails. Default is `3`;
+- `pullRetryInterval` - optional, number. Delay in milliseconds between
+`docker pull` retries. Default is `10000`.
+
+**Note:** The current process' working directory is mounted as `/workspace`.
+Concord replaces the container's `WORKDIR` with `/workspace`. Depending
+on your setup, you may need to change to a different working directory:
+
+```yaml
+- task: docker
+ in:
+ image: library/alpine
+ cmd: cd /usr/ && echo "I'm in $PWD"
+```
+
+To run multiple commands, multiline YAML strings can be used:
+
+```yaml
+- task: docker
+ in:
+ image: library/alpine
+ cmd: |
+ echo "First command"
+ echo "Second command"
+ echo "Third command"
+```
+
+Concord automatically removes the container when the command is complete.
+
+## Environment Variables
+
+Additional environment variables can be specified using the `env` parameter:
+
+```yaml
+flows:
+ default:
+ - task: docker
+ in:
+ image: library/alpine
+ cmd: echo $GREETING
+ env:
+ GREETING: "Hello, ${name}!"
+
+configuration:
+ arguments:
+ name: "concord"
+```
+
+Environment variables can contain expressions; all values are
+converted to strings.
+
+A file containing environment variables can be used by specifying
+the `envFile` parameter:
+
+```yaml
+flows:
+ default:
+ - task: docker
+ in:
+ image: library/alpine
+ cmd: echo $GREETING
+ envFile: "myEnvFile"
+```
+
+The path must be relative to the process' working directory `${workDir}`.
+
+It is equivalent to running `docker run --env-file=myEnvFile`.
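+
+The file uses the standard `--env-file` format: one `KEY=value` pair per line,
+for example:
+
+```
+GREETING=Hello, world!
+DEBUG=true
+```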
+
+## Docker Options
+
+### Add Host Option
+
+Additional `/etc/hosts` lines can be specified using the `hosts` parameter:
+
+```yaml
+flows:
+ default:
+ - task: docker
+ in:
+ image: library/alpine
+ cmd: echo '${greeting}'
+ hosts:
+ - foo:10.0.0.3
+ - bar:10.7.3.21
+
+configuration:
+ arguments:
+ greeting: "Hello, world!"
+```
+
+## Capturing the Output
+
+The `stdout` and `stderr` attributes of the task's returned data can be used to
+capture the output of commands running in the Docker container:
+
+```yaml
+- task: docker
+ in:
+ image: library/alpine
+ cmd: echo "Hello, Concord!"
+ saveOut: true
+ out: dockerResult
+
+- log: "Got the greeting: ${dockerResult.stdout.contains('Hello')}"
+```
+
+In the example above, the output (`stdout`) of the command running in the
+container is accessible in the returned object's `stdout` attribute.
+
+The `stderr` parameter can be used to capture the errors of commands running
+in the Docker container:
+
+```yaml
+- task: docker
+ in:
+ image: library/alpine
+ cmd: echo "Hello, ${name}" && (>&2 echo "STDERR WORKS")
+ saveErr: true
+ out: dockerResult
+
+- log: "Errors: ${dockerResult.stderr}"
+```
+
+In the example above, the errors (`stderr`) of the command running in the
+container are accessible in the returned object's `stderr` attribute.
+
+## Custom Images
+
+Currently, there's only one requirement for custom Docker images: all images
+must provide a standard POSIX shell as `/bin/sh`.
+
+## Limitations
+
+Running containers as the `root` user is not supported - all user containers are
+executed as the `concord` user, equivalent to running `docker run
+-u concord ... myImage`. The user is created automatically with UID `456`.
+
+As a result, any operations in the Docker container that require root access,
+such as installing packages, are not supported on Concord. If required, ensure
+that the relevant package installations and other tasks are performed as part of
+your initial container image build and published to the registry from which
+Concord retrieves the image.
diff --git a/docs/src/plugins/files.md b/docs/src/plugins/files.md
new file mode 100644
index 0000000000..9e104b5d85
--- /dev/null
+++ b/docs/src/plugins/files.md
@@ -0,0 +1,71 @@
+# Files
+
+The `files` task provides methods to handle local files within the working directory
+of a Concord workflow process.
+
+- [Usage](#usage)
+- [Methods](#methods)
+ - [Check File Existence](#check-file-existence)
+ - [Move a File](#move-a-file)
+ - [Construct a Relative Path](#construct-a-relative-path)
+
+## Usage
+
+To be able to use the task in a Concord flow, it must be added as a
+[dependency](../processes-v2/configuration.html#dependencies):
+
+```yaml
+configuration:
+ dependencies:
+ - mvn://com.walmartlabs.concord.plugins.basic:file-tasks:{{ site.concord_core_version }}
+```
+
+This adds the task to the classpath and allows you to invoke it in any flow.
+
+## Methods
+
+### Check File Existence
+
+The `exists(path)` method returns `true` when a given path exists within the
+working directory.
+
+```yaml
+- if: "${files.exists('myFile.txt')}"
+ then:
+ - log: "found myFile.txt"
+```
+
+The `notExists(path)` method returns `true` when a given path _does not_ exist
+within the working directory.
+
+```yaml
+- if: "${files.notExists('myFile.txt')}"
+ then:
+ - throw: "Cannot continue without myFile.txt!"
+```
+
+### Move a File
+
+The `moveFile(src, dstDir)` method moves a file to a different directory. The
+return value is the new path.
+
+```yaml
+- set:
+ myFile: "${files.moveFile('myFile.txt', 'another/dir')}"
+
+# prints: "another/dir/myFile.txt"
+- log: "${myFile}"
+```
+
+### Construct a Relative Path
+
+The `relativize(pathA, pathB)` method returns a relative path from the first
+path to the second path.
+
+```yaml
+- set:
+ relativePath: "${files.relativize('subDirA/myscript', 'subDirB/opts.cfg')}"
+
+- log: "${relativePath}"
+# prints: '../../subDirB/opts.cfg'
+```
diff --git a/docs/src/plugins/http.md b/docs/src/plugins/http.md
new file mode 100644
index 0000000000..eb8d3bfb34
--- /dev/null
+++ b/docs/src/plugins/http.md
@@ -0,0 +1,366 @@
+# HTTP
+
+The HTTP task provides a basic HTTP/RESTful client that allows you to call
+RESTful endpoints. It is provided automatically by Concord, and does not
+require any external dependencies.
+
+RESTful endpoints are very commonly used and often expose an API to work with
+an application. The HTTP task allows you to invoke any exposed functionality in
+third party applications in your Concord project and therefore automate the
+interaction with these applications. This makes the HTTP task a very powerful
+tool to integrate Concord with applications that do not have a custom
+integration with Concord via a specific task.
+
+The HTTP task executes RESTful requests using a HTTP `GET`, `PUT`, `PATCH`,
+`POST`, or `DELETE` method and returns [HTTP response](#http-task-response)
+objects.
+
+> The HTTP Task automatically follows redirect URLs for all methods if
+> the response returns status code 301. To disable this feature, set
+> the **`followRedirects`** task parameter to **`false`**.
+
+- [Usage and Configuration](#usage)
+- [Examples](#examples)
+
+
+
+## Usage and Configuration
+
+As with all tasks you can invoke the HTTP task with a short inline syntax or
+the full `task` syntax.
+
+The simple inline syntax uses an expression with the http task and the
+`asString` method. It uses a HTTP `GET` as a default request method and returns
+the response as string.
+
+```yaml
+- log: "${http.asString('https://api.example.com:port/path/test.txt')}"
+```
+
+The full syntax is preferred since it allows access to all features of the HTTP
+task:
+
+```yaml
+- task: http
+ in:
+ method: GET
+ url: "https://api.example.com:port/path/endpoint"
+ response: string
+ out: response
+- if: ${response.ok}
+ then:
+ - log: "Response received: ${response.content}"
+```
+
+All parameters, sorted in alphabetical order:
+
+- `auth`: authentication used for secure endpoints, details in
+ the [Authentication](#authentication) section;
+- `body`: the request body, details in [Body](#body);
+- `connectTimeout`: HTTP connection timeout in ms. Default value is 30000 ms;
+- `debug`: boolean, output the request and response data in the logs;
+- `followRedirects`: boolean, determines whether redirects should be handled automatically.
+Default is `true`. Allows automatic redirection for all `methods` if not
+explicitly set to `false`;
+- `headers`: add additional headers, details in [Headers](#headers);
+- `ignoreErrors`: boolean, instead of throwing exceptions on unauthorized requests,
+ return the result object with the error;
+- `keystorePath`: string, optional, path of a keystore file (`.p12` or `.pfx`),
+ used for client-cert authentication;
+- `keystorePassword`: string, keystore password;
+- `method`: HTTP request method, either `POST`, `PUT`, `PATCH`, `GET`, or
+`DELETE`. Default value is `GET`;
+- `proxy`: HTTP(s) proxy to use (see the [example](#proxy-usage));
+- `proxyAuth`: proxy authentication details in the [Proxy Authentication](#proxy-authentication) section;
+- `query`: request query parameters, details in [Query Parameter](#query-parameters);
+- `request`: type of request data `string`, `json`, or `file`, details available
+ in [Request type](#request-type);
+- `requestTimeout`: request timeout in ms, which is the maximum time spent
+waiting for the response;
+- `response`: type of response data `string`, `json`, or `file` received from
+ the endpoint, details in [Response type](#response-type);
+- `socketTimeout`: socket timeout in ms, which is the maximum time of inactivity
+between two data packets. Default value is `-1`, which means that the default
+value of the Java Runtime Environment running the process is used - common value
+is 60000 ms;
+- `strictSsl`: boolean, set to `true` to enable strict SSL certificate validation. Default value is `false`;
+- `truststorePath`: string, optional, path of a truststore file (`.jks`), overrides
+ the system's default certificate truststore;
+- `truststorePassword`: string, truststore password;
+- `url`: the complete URL of the HTTP request, as a string.
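+
+For example, with `ignoreErrors: true` the flow can handle a failed request
+itself instead of the task throwing an exception (a sketch; the URL is
+illustrative):
+
+```yaml
+- task: http
+  in:
+    method: GET
+    url: "https://api.example.com/might-fail"
+    response: string
+    ignoreErrors: true
+  out: response
+
+- if: "${!response.ok}"
+  then:
+    - log: "Request failed with status ${response.statusCode}: ${response.error}"
+```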
+
+### Authentication
+
+The `auth` parameter is optional. When used, it must contain the `basic` nested
+element which contains either the `token` element, or the `username` and
+`password` elements.
+
+Basic auth using `token` syntax:
+
+```yaml
+ auth:
+ basic:
+ token: value
+```
+
+In this case, the `value` is used as Basic authentication token in the `Authorization` header:
+```
+Authorization: Basic value
+```
+
+Basic auth using `username` and `password` syntax:
+
+```yaml
+ auth:
+ basic:
+ username: any_username
+ password: any_password
+```
+
+In this example, the `username` and `password` values are formatted according
+to standard basic authentication rules:
+```
+Authorization: Basic base64(username + ":" + password)
+```
+
+To avoid exposing credentials in your Concord file, replace the actual values
+with usage of the [Crypto task](./crypto.html).
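+
+For example, credentials stored in the secrets store can be passed directly to
+`auth.basic` (assuming a `myCredentials` secret exists in `myOrg`):
+
+```yaml
+  auth:
+    basic: "${crypto.exportCredentials('myOrg', 'myCredentials', null)}"
+```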
+
+Use valid values for basic authentication parameters. Authentication failure
+causes an `UnauthorizedException` error.
+
+### Body
+
+The HTTP method types `POST`, `PUT` and `PATCH` require a `body` parameter that
+contains a complex object (map), JSON sourced from a file, or a raw string.
+
+Body for request type `json`:
+
+```yaml
+ request: json
+ body:
+ myObject:
+ nestedVar: 123
+```
+
+The HTTP task converts complex objects like the above into a string and passes
+it into the body of the request. The converted string for the above example is
+`{ "myObject": { "nestedVar": 123 } }`.
+
+The HTTP task also accepts a raw JSON string and throws an `incompatible request
+type` error when it detects improper formatting.
+
+Body for Request Type `file`:
+
+```yaml
+ request: file
+ body: "relative_path/file.bin"
+```
+
+If the file cannot be found at the referenced location, a
+`FileNotFoundException` error occurs.
+
+Body for Request Type `string`:
+
+```yaml
+ request: string
+ body: "sample string for body of post request"
+```
+
+### Headers
+
+Extra header values can be specified using the `headers` key:
+
+```yaml
+ headers:
+ MyHeader: "a value"
+ X-Some-Header: "..."
+```
+
+### Query Parameters
+
+Query parameters can be specified using the `query` key:
+
+```yaml
+ query:
+ param: "Hello Concord"
+ otherParam: "..."
+```
+
+Parameters are automatically encoded and appended to the request URL.
+
+### Request Type
+
+A specific request type in `request` is optional for `GET` requests, but
+mandatory for `POST` and `PUT`. It maps to the `Content-Type` header of the HTTP
+request.
+
+Types supported currently:
+
+- `string` (converted into `text/plain`)
+- `json` (converted into `application/json`)
+- `file` (converted into `application/octet-stream`)
+- `form` (converted into `application/x-www-form-urlencoded`)
+- `formData` (converted into `multipart/form-data`)
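+
+For example, a `form` request sends a map body as URL-encoded form fields (a
+sketch; the endpoint and fields are illustrative):
+
+```yaml
+- task: http
+  in:
+    request: form
+    method: POST
+    url: "https://api.example.com/submit"
+    body:
+      field1: "value1"
+      field2: "value2"
+```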
+
+### Response Type
+
+`response` is an optional parameter that maps to the `Accept` header of the HTTP
+request.
+
+Types supported currently:
+
+- `string` (converted into `text/plain`)
+- `json` (converted into `application/json`)
+- `file` (converted into `application/octet-stream`)
+
+### HTTP Task Response
+
+In addition to
+[common task result fields](../processes-v2/flows.html#task-result-data-structure),
+the `http` task returns:
+
+- `ok`: `true` if the status code belongs to the success family
+- `error`: descriptive error message from the endpoint
+- `content`: JSON/string response, or a relative path for response type `file`
+- `headers`: key-value pairs of response headers
+- `statusCode`: HTTP status code of the response
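+
+For example, the returned fields can be inspected after the call (a minimal
+sketch; the URL is a placeholder):
+
+```yaml
+- task: http
+  in:
+    method: GET
+    url: "https://api.example.com/path/endpoint"
+  out: response
+
+- if: ${response.ok}
+  then:
+    - log: "Status code: ${response.statusCode}"
+  else:
+    - log: "Request failed: ${response.error}"
+```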
+
+### Proxy Authentication
+
+Proxy authentication can be configured using the `user` and `password` fields
+of `proxyAuth`:
+
+```yaml
+ proxyAuth:
+ user: "user"
+ password: "pass"
+```
+
+## Examples
+
+The following examples illustrate the syntax usage for the HTTP task.
+
+#### Full Syntax for GET or DELETE Requests
+
+```yaml
+- task: http
+ in:
+ method: GET # or DELETE
+ url: "https://api.example.com:port/path/endpoint"
+ response: json
+ out: jsonResponse
+- if: ${jsonResponse.ok}
+ then:
+ - log: "Response received: ${jsonResponse.content}"
+```
+
+#### Full Syntax for POST, PATCH or PUT Requests
+
+Using a YAML object for the body:
+
+```yaml
+- task: http
+ in:
+ request: json
+ method: POST # or PATCH or PUT
+ url: "https://api.example.com:port/path/endpoint"
+ body:
+ userObj:
+ name: concord
+ response: json
+ out: jsonResponse
+- if: ${jsonResponse.ok}
+ then:
+ - log: "Response received: ${jsonResponse.content}"
+```
+
+Using raw JSON for the body:
+
+```yaml
+- task: http
+ in:
+ request: json
+ method: POST # `PATCH`, `PUT`
+ url: "https://api.example.com:port/path/endpoint"
+ body: |
+ {
+ "myObject": {
+ "nestedVar": 123
+ }
+ }
+ response: json
+ out: jsonResponse
+- if: ${jsonResponse.ok}
+ then:
+ - log: "Response received: ${jsonResponse.content}"
+```
+
+#### Full Syntax for Multipart Form
+
+Using a multipart form for the body:
+
+```yaml
+- task: http
+ in:
+ request: formData
+ method: POST
+ url: "https://api.example.com:port/path/endpoint"
+ body:
+ field1: "string value" # text/plain part
+ field2: "@myFile.txt" # application/octet-stream part with a filename
+ field3:
+ type: "text/plain" # manually specify the content-type of the part
+ data: "string value"
+```
+
+#### Full Syntax for Secure Request
+
+Using Basic Authentication with an existing value:
+
+```yaml
+- task: http
+ in:
+ auth:
+ basic:
+ token: base64_encoded_token
+ method: GET
+ url: "https://api.example.com:port/path/endpoint"
+ response: json
+ out: jsonResponse
+- if: ${jsonResponse.ok}
+ then:
+ - log: "Response received: ${jsonResponse.content}"
+```
+
+Using Basic Authentication with a username and a password:
+
+```yaml
+- task: http
+ in:
+ auth:
+ basic:
+ username: username
+ password: password
+ method: GET
+ url: "https://api.example.com:port/path/endpoint"
+ response: json
+ out: jsonResponse
+- if: ${jsonResponse.ok}
+ then:
+ - log: "Response received: ${jsonResponse.content}"
+```
+
+
+
+#### Proxy Usage
+
+```yaml
+- task: http
+ in:
+ method: GET
+ url: "https://api.example.com:port/path/endpoint"
+ proxy: "http://proxy.example.com:8080"
+ proxyAuth:
+ user: username
+ password: password
+```
diff --git a/docs/src/plugins/index.md b/docs/src/plugins/index.md
new file mode 100644
index 0000000000..709ecda6c8
--- /dev/null
+++ b/docs/src/plugins/index.md
@@ -0,0 +1,31 @@
+# Plugins
+
+Concord plugins are implemented for a
+[specific runtime](../getting-started/processes.md#runtime). Task parameters
+and results may differ between runtime versions. Refer to the corresponding
+documentation for the runtime used.
+
+## Standard Plugins
+
+- [Ansible](./ansible.md)
+- [Asserts](./asserts.md)
+- [Concord](./concord.md)
+- [Crypto](./crypto.md)
+- [Datetime](./datetime.md)
+- [Docker](./docker.md)
+- [Files](./files.md)
+- [HTTP](./http.md)
+- [JSON Store](./json-store.md)
+- [Key-value](./key-value.md)
+- [Lock](./lock.md)
+- [Mocks](./mocks.md)
+- [Node Roster](./node-roster.md)
+- [Resource](./resource.md)
+- [Slack](./slack.md)
+- [Sleep](./sleep.md)
+- [SMTP](./smtp.md)
+
+## Community Plugins
+
+* [**community runtime-v2 Plugins**]({{ site.concord_plugins_v2_docs }}/akeyless.html)
+* [**community runtime-v1 Plugins**]({{ site.concord_plugins_v1_docs }}/akeyless.html)
diff --git a/docs/src/plugins/json-store.md b/docs/src/plugins/json-store.md
new file mode 100644
index 0000000000..c8b284b3f9
--- /dev/null
+++ b/docs/src/plugins/json-store.md
@@ -0,0 +1,173 @@
+# JSON Store
+
+The `jsonStore` task provides access to [JSON Stores](../getting-started/json-store.html).
+It allows users to add, update and remove JSON store items using Concord flows.
+
+This task is provided automatically by Concord.
+
+- [Usage](#usage)
+ - [Check if Store Exists](#check-if-store-exists)
+ - [Check if Item Exists](#check-if-item-exists)
+ - [Create or Update an Item](#create-or-update-an-item)
+  - [Retrieve an Item](#retrieve-an-item)
+ - [Remove an Item](#remove-an-item)
+ - [Create or Update a Named Query](#create-or-update-a-named-query)
+ - [Execute a Named Query](#execute-a-named-query)
+
+## Usage
+
+### Check if Store Exists
+
+Syntax:
+
+```yaml
+- ${jsonStore.isStoreExists(orgName, storeName)}
+- ${jsonStore.isStoreExists(storeName)}
+```
+
+The task uses the current process' organization name if the `orgName` parameter
+is omitted.
+
+The expression returns `true` if the specified store exists.
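+
+For example, to guard a flow step on the store's existence (a sketch; `myStore`
+is a placeholder name):
+
+```yaml
+flows:
+  default:
+    - if: "${jsonStore.isStoreExists('myStore')}"
+      then:
+        - log: "myStore exists in the current organization"
+      else:
+        - log: "myStore does not exist yet"
+```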
+
+### Check if Item Exists
+
+Syntax:
+
+```yaml
+- ${jsonStore.isExists(orgName, storeName, itemPath)}
+- ${jsonStore.isExists(storeName, itemPath)}
+```
+
+The task uses the current process' organization name if the `orgName` parameter
+is omitted.
+
+The expression returns `true` if the specified item exists.
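+
+For example, checking an item before retrieving it (a sketch; the store and
+item names are placeholders):
+
+```yaml
+flows:
+  default:
+    - if: "${jsonStore.isExists('myStore', 'anItem')}"
+      then:
+        - log: "${jsonStore.get('myStore', 'anItem')}"
+```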
+
+### Create or Update an Item
+
+Syntax:
+
+```yaml
+- ${jsonStore.put(orgName, storeName, itemPath, data)}
+- ${jsonStore.put(storeName, itemPath, data)}
+- ${jsonStore.upsert(orgName, storeName, itemPath, data)}
+- ${jsonStore.upsert(storeName, itemPath, data)}
+```
+
+The `data` parameter must be a Java object. Only types that can be represented
+in JSON are supported: Java lists, maps, strings, numbers, boolean values, etc.
+
+The task uses the current process' organization name if the `orgName` parameter
+is omitted.
+
+The `upsert` method creates the specified JSON store if it doesn't exist.
+
+Example:
+
+```yaml
+configuration:
+ arguments:
+ myItem:
+ x: 123
+ nested:
+ value: "abc"
+
+flows:
+ default:
+ # uses the current organization
+ - "${jsonStore.put('myStore', 'anItem', myItem)}"
+```
+
+### Retrieve an Item
+
+Syntax:
+
+```yaml
+- ${jsonStore.get(orgName, storeName, itemPath)}
+- ${jsonStore.get(storeName, itemPath)}
+```
+
+The expression returns the specified item parsed into a Java object or `null`
+if no such item exists.
+
+Example:
+
+```yaml
+flows:
+ default:
+ - expr: "${jsonStore.get('myStore', 'anItem')}"
+ out: anItem
+
+ - if: "${anItem == null}"
+ then:
+ - log: "Can't find the item you asked for."
+ else:
+ - log: "${anItem}"
+```
+
+### Remove an Item
+
+Syntax:
+
+```yaml
+- ${jsonStore.delete(orgName, storeName, itemPath)}
+- ${jsonStore.delete(storeName, itemPath)}
+```
+
+The expression returns `true` if the specified item was removed or `false` if
+it didn't exist.
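+
+For example, capturing the result (a sketch using placeholder names):
+
+```yaml
+flows:
+  default:
+    - expr: "${jsonStore.delete('myStore', 'anItem')}"
+      out: removed
+
+    - log: "Item removed: ${removed}"
+```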
+
+### Create or Update a Named Query
+
+```yaml
+- ${jsonStore.upsertQuery(orgName, storeName, queryName, queryText)}
+- ${jsonStore.upsertQuery(storeName, queryName, queryText)}
+```
+
+The task uses the current process' organization name if the `orgName` parameter
+is omitted.
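+
+For example, assuming the query text is a SQL statement over the store's
+`item_data` column (as described on the JSON Store page; all names here are
+placeholders):
+
+```yaml
+configuration:
+  arguments:
+    myQueryText: "select item_data::text from json_store_data where item_data @> ?::jsonb"
+
+flows:
+  default:
+    # creates or updates the query in the current organization
+    - "${jsonStore.upsertQuery('myStore', 'lookupServiceByUser', myQueryText)}"
+```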
+
+### Execute a Named Query
+
+Syntax:
+
+```yaml
+- ${jsonStore.executeQuery(storeName, queryName)}
+- ${jsonStore.executeQuery(storeName, queryName, params)}
+- ${jsonStore.executeQuery(orgName, storeName, queryName)}
+- ${jsonStore.executeQuery(orgName, storeName, queryName, params)}
+```
+
+The expression returns a `List` of items where each item represents a row
+object returned by the query.
+
+Query parameters (the `params` argument) must be a Java `Map` object that can
+be represented with JSON.
+
+You can also pass the parameters directly in the expression:
+
+```yaml
+- "${jsonStore.executeQuery('myStore', 'lookupServiceByUser', {'users': ['mike']})}"
+```
+
+Example:
+
+```yaml
+configuration:
+ arguments:
+ myQueryParams:
+ users:
+ - "mike"
+
+flows:
+ default:
+ - expr: "${jsonStore.executeQuery('myStore', 'lookupServiceByUser', myQueryParams)}"
+ out: myResults
+
+ - log: "${myResults}"
+```
+
+(see also [the example](../getting-started/json-store.html#example) on the
+JSON Store page).
diff --git a/docs/src/plugins/key-value.md b/docs/src/plugins/key-value.md
new file mode 100644
index 0000000000..8c1cd393da
--- /dev/null
+++ b/docs/src/plugins/key-value.md
@@ -0,0 +1,82 @@
+# Key-value
+
+The key-value (`kv`) task provides access to the server's simple string
+key-value store. All data is project-scoped, i.e. processes only see the values
+created by processes of the same project.
+
+This task is provided automatically by Concord.
+
+## Usage
+
+### Setting a Value
+
+Setting a string value:
+```yaml
+- ${kv.putString("myKey", "myValue")}
+```
+
+Setting an integer (64-bit `long`) value:
+```yaml
+- ${kv.putLong("myKey", 1234)}
+```
+
+### Retrieving a Value
+
+Using the `out` syntax of expressions:
+
+```yaml
+- expr: ${kv.getString("myKey")}
+ out: myVar
+
+- log: "I've got ${myVar}"
+```
+
+Using the context:
+
+```yaml
+- ${context.setVariable("myVar", kv.getString("myKey"))}
+- log: "I've got ${myVar}"
+```
+
+In scripts:
+
+```yaml
+- script: groovy
+ body: |
+ def kv = tasks.get("kv")
+
+ def id = kv.inc(execution, "idSeq")
+ println("I've got ${id}")
+```
+
+The `execution` variable is an alias for [context](https://concord.walmartlabs.com/docs/getting-started/processes.html#provided-variables)
+and is automatically provided by the runtime for all supported script engines.
+Check out [the source code]({{ site.concord_source }}/blob/master/plugins/tasks/kv/src/main/java/com/walmartlabs/concord/plugins/kv/KvTask.java)
+for all available public methods.
+
+Integer values can be retrieved in the same way:
+
+```yaml
+- log: "I've got ${kv.getLong('myKey')}"
+```
+
+### Removing a Value
+
+```yaml
+- ${kv.remove("myVar")}
+- if: ${kv.getString("myVar") == null}
+ then:
+ - log: "Ciao, myVar! You won't be missed."
+```
+
+### Incrementing a Value
+
+This can be used as a simple sequence number generator.
+
+```yaml
+- expr: ${kv.inc("idSeq")}
+ out: myId
+- log: "We got an ID: ${myId}"
+```
+
+**Warning:** existing string values can't be incremented.
diff --git a/docs/src/plugins/lock.md b/docs/src/plugins/lock.md
new file mode 100644
index 0000000000..40ef0f2607
--- /dev/null
+++ b/docs/src/plugins/lock.md
@@ -0,0 +1,62 @@
+# Lock
+
+The `lock` and `unlock` tasks provide methods to allow exclusive execution between
+one or more running Concord processes.
+
+- [Usage](#usage)
+- [Parameters](#parameters)
+- [Acquire and Release Lock](#acquire-and-release-lock)
+- [Considerations](#considerations)
+
+## Usage
+
+To be able to use the task in a Concord flow, it must be added as a
+[dependency](../processes-v2/configuration.html#dependencies):
+
+```yaml
+configuration:
+ dependencies:
+ - mvn://com.walmartlabs.concord.plugins.basic:lock-tasks:{{ site.concord_core_version }}
+```
+
+This adds the task to the classpath and allows you to invoke it in any flow.
+
+## Parameters
+
+- `name` - string, lock name used across processes to match exclusivity
+- `scope` - string, scope to apply exclusivity. One of `PROJECT`, `ORG`
+
+## Acquire and Release Lock
+
+Acquire a lock in order to execute mutually exclusive steps across individual
+processes across a Concord Project or Organization. For example, deployments to
+a particular environment may need to be blocked while an existing deployment or
+integration testing is active.
+
+```yaml
+# acquire lock so it's safe to deploy
+- task: lock
+ in:
+ name: my-app-deployment
+ scope: PROJECT
+
+# perform the deployment...
+
+# release lock
+- task: unlock
+ in:
+ name: my-app-deployment
+ scope: PROJECT
+```
+
+## Considerations
+
+If the process does not immediately acquire a lock, then it suspends execution
+until the lock is acquired. Temporary files in the working directory created at
+runtime are cleaned up when processes suspend and resume. If set,
+[`suspendTimeout` settings](../processes-v2/configuration.html#suspend-timeout) apply.
+
+[Exclusive process configuration](../processes-v2/configuration.html#exclusive-execution)
+is preferable to locking at runtime due to the more straightforward application
+of exclusivity. The `lock` task enables mutual exclusivity across disparate
+workflow repositories.
diff --git a/docs/src/plugins/mocks.md b/docs/src/plugins/mocks.md
new file mode 100644
index 0000000000..763d2be7d4
--- /dev/null
+++ b/docs/src/plugins/mocks.md
@@ -0,0 +1,217 @@
+# Mocks
+
+- [Usage](#usage)
+- [How to mock a Task](#how-to-mock-a-task)
+ - [Example: Mocking a Task Call](#example-mocking-a-task-call)
+ - [Example: Mocking a Task with Specific Input Parameters](#example-mocking-a-task-with-specific-input-parameters)
+- [How to Mock a Task method](#how-to-mock-a-task-method)
+ - [Example: Mocking a Task Method](#example-mocking-a-task-method)
+ - [Example: Mocking a Task Method with Input Arguments](#example-mocking-a-task-method-with-input-arguments)
+- [How to Verify Task Calls](#how-to-verify-task-calls)
+ - [Example: Verifying a Task Call](#example-verifying-a-task-call)
+ - [Example: Verifying a Task Method Call](#example-verifying-a-task-method-call)
+
+The Mocks plugin allows you to:
+
+- **"Mock" tasks or task methods** – replace specific tasks or task methods with
+  predefined results or behavior;
+- **Verify task calls** – check how many times a task was called and which
+  parameters were used.
+
+Mocks help isolate individual components during testing, making tests faster, safer, and more
+focused.
+
+## Usage
+
+To be able to use the task in a Concord flow, it must be added as a
+[dependency](../processes-v2/configuration.html#dependencies):
+
+```yaml
+configuration:
+ dependencies:
+ - mvn://com.walmartlabs.concord.plugins.basic:mock-tasks:{{ site.concord_core_version }}
+```
+
+## How to Mock a Task
+
+You can mock specific tasks to simulate their behavior.
+
+### Example: Mocking a Task Call
+
+```yaml
+flows:
+ main:
+ - task: myTask
+ in:
+ param1: "value"
+ out: taskResult
+
+ mainTest:
+ - set:
+ mocks:
+ # Mock the myTask task call
+ - task: "myTask"
+ out:
+ result: 42
+
+ - call: main
+ out: taskResult
+
+ - log: "${taskResult}" # prints out 'result=42'
+```
+
+In `mainTest`, we set up a "mock" for the `myTask` task. This mock intercepts calls to any `myTask`
+instance and overrides the output, setting the result to `42` instead of running the actual task.
+
+### Example: Mocking a Task with Specific Input Parameters
+
+```yaml
+flows:
+ main:
+ - task: myTask
+ in:
+ param1: "value"
+ out: taskResult
+
+ mainTest:
+ - set:
+ mocks:
+ # Mock the myTask task call
+ - task: "myTask"
+ in:
+ param1: "value.*" # regular expression allowed for values
+ out:
+ result: 42
+
+ - call: main
+ out: taskResult
+
+ - log: "${taskResult}" # prints out 'result=42'
+```
+
+In `mainTest`, we set up a mock that only intercepts `myTask` calls where
+`param1` matches the regular expression `value.*`. When the parameters match,
+the mock replaces the task's output with `result: 42`.
+
+## How to Mock a Task Method
+
+In addition to mocking entire tasks, you can also mock specific methods of a task.
+
+### Example: Mocking a Task Method
+
+```yaml
+flows:
+ main:
+ - expr: ${myTask.myMethod()}
+ out: taskResult
+
+ mainTest:
+ - set:
+ mocks:
+ # Mock the myTask task call
+ - task: "myTask"
+ method: "myMethod"
+ result: 42
+
+ - call: main
+ out: taskResult
+
+ - log: "${taskResult}" # prints out 'result=42'
+```
+
+In `mainTest`, we set up a mock that only intercepts `myTask.myMethod` calls.
+When the call matches, the mock replaces the task's output with `result: 42`.
+
+### Example: Mocking a Task Method with Input Arguments
+
+```yaml
+flows:
+ main:
+ - expr: ${myTask.myMethod(1)}
+ out: taskResult
+
+ mainTest:
+ - set:
+ mocks:
+ # Mock the myTask task call
+ - task: "myTask"
+ args:
+ - 1
+ method: "myMethod"
+ result: 42
+
+ - call: main
+ out: taskResult
+
+ - log: "${taskResult}" # prints out 'result=42'
+```
+
+In `mainTest`, we set up a mock that only intercepts `myTask.myMethod` calls with
+the input argument `1`. When the arguments match, the mock replaces the task's
+output with `result: 42`.
+
+### Example: Mocking a Task Method with Multiple Arguments
+
+```yaml
+flows:
+ main:
+ - expr: ${myTask.myMethod(1, 'someComplexVariableHere')}
+ out: taskResult
+
+ mainTest:
+ - set:
+ mocks:
+ # Mock the myTask task call
+ - task: "myTask"
+ args:
+ - 1
+ - ${mock.any()} # special argument that matches any input argument
+ method: "myMethod"
+ result: 42
+
+ - call: main
+ out: taskResult
+
+ - log: "${taskResult}" # prints out 'result=42'
+```
+
+In `mainTest`, we set up a mock that only intercepts `myTask.myMethod` calls with
+the input argument `1` and any second argument. When the arguments match, the
+mock replaces the task's output with `result: 42`.
+
+## How to Verify Task Calls
+
+The `verify` task allows you to check how many times a specific task
+(**not necessarily a mocked task**) with specified parameters was called.
+
+### Example: Verifying a Task Call
+
+```yaml
+flows:
+ main:
+ - task: "myTask"
+ out: taskResult
+
+ mainTest:
+ - call: main
+
+ - expr: "${verify.task('myTask', 1).execute()}"
+```
+
+In `mainTest`, we verify that the `myTask` task was called exactly once without
+input parameters.
+
+### Example: Verifying a Task Method Call
+
+```yaml
+flows:
+ main:
+ - expr: ${myTask.myMethod(1)}
+ out: taskResult
+
+ mainTest:
+ - call: main
+
+ - expr: "${verify.task('myTask', 1).myMethod(1)}"
+```
+
+In `mainTest`, we verify that the `myMethod` method of the `myTask` task was called exactly once
+with a parameter `1`.
diff --git a/docs/src/plugins/node-roster.md b/docs/src/plugins/node-roster.md
new file mode 100644
index 0000000000..f7e2b29679
--- /dev/null
+++ b/docs/src/plugins/node-roster.md
@@ -0,0 +1,150 @@
+# Node Roster
+
+The Node Roster task provides a way to access [Node Roster](../getting-started/node-roster.html)
+data in Concord flows.
+
+- [Usage](#usage)
+  - [Common Parameters](#common-parameters)
+  - [Result Format](#result-format)
+  - [Find Hosts by Artifact](#find-hosts-by-artifact)
+  - [Get Host Facts](#get-host-facts)
+  - [Find Artifacts By Host](#find-artifacts-by-host)
+
+## Usage
+
+To be able to use the task in a Concord flow, it must be added as a
+[dependency](../processes-v2/configuration.html#dependencies):
+
+```yaml
+configuration:
+ dependencies:
+ - mvn://com.walmartlabs.concord.plugins.basic:noderoster-tasks:{{ site.concord_core_version }}
+```
+
+This adds the task to the classpath and allows you to invoke the task in
+a flow:
+
+```yaml
+flows:
+ default:
+ - task: nodeRoster
+ in:
+ action: "deployedOnHost"
+ hostName: "myhost.example.com"
+ out: result
+```
+
+### Common Parameters
+
+- `baseUrl` - (optional) string, base URL of the Concord API. If not set, the
+  current instance's API address is used.
+- `action` - string, name of the action.
+
+### Result Format
+
+The task's actions return their results in a `result` variable. The variable
+has the following format:
+
+- `ok` - boolean, `true` if the operation was successful - i.e. returned some
+ data;
+- `data` - object, result of the operation.
+
+### Find Hosts by Artifact
+
+Returns a list of hosts to which the specified artifact was deployed.
+
+```yaml
+- task: nodeRoster
+ in:
+ action: "hostsWithArtifacts"
+ artifactPattern: ".*my-app-1.0.0.jar"
+ out: result
+```
+
+Parameters:
+- `artifactPattern` - regex, name or pattern of the artifact's URL;
+- `limit` - number, maximum number of records to return. Default is `30`;
+- `offset` - number, offset of the first record, used for paging. Default
+ is `0`.
+
+The action returns the following `result`:
+
+```json
+{
+ "ok": true,
+ "data": {
+ "artifact A": [
+ { "hostId": "...", "hostName": "..."},
+ { "hostId": "...", "hostName": "..."},
+ ...
+ ],
+ "artifact B": [
+ { "hostId": "...", "hostName": "..."},
+ { "hostId": "...", "hostName": "..."},
+ ...
+ ]
+ }
+}
+```
+
+The `data` is an object where keys are artifact URLs matching the supplied
+`artifactPattern` and values are lists of hosts.
+
+### Get Host Facts
+
+Returns the last registered snapshot of the host's
+[Ansible facts](https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts).
+
+```yaml
+- task: nodeRoster
+ in:
+ action: "facts"
+ hostName: "myhost.example.com"
+ out: result
+```
+
+Parameters:
+- `hostName` - string, name of the host to look up;
+- `hostId` - UUID, id of the host to look up.
+
+The action returns the following `result`:
+
+```json
+{
+ "ok": true,
+ "data": {
+ ...
+ }
+}
+```
+
+The `data` value is the facts JSON object as it was received from Ansible.
+
+### Find Artifacts By Host
+
+Returns a list of artifacts deployed on the specified host.
+
+```yaml
+- task: nodeRoster
+ in:
+ action: "deployedOnHost"
+ hostName: "myhost.example.com"
+ out: result
+```
+
+Parameters:
+- `hostName` - string, name of the host to look up;
+- `hostId` - UUID, id of the host to look up.
+
+Either `hostName` or `hostId` is required.
+
+The action returns the following `result`:
+
+```json
+{
+ "ok": true,
+ "data": [
+ { "url": "..." },
+ { "url": "..." }
+ ]
+}
+```
diff --git a/docs/src/plugins/resource.md b/docs/src/plugins/resource.md
new file mode 100644
index 0000000000..7380807053
--- /dev/null
+++ b/docs/src/plugins/resource.md
@@ -0,0 +1,142 @@
+# Resource
+
+The `resource` task provides methods to persist data to a file in the scope of a
+process as well as to load data from files. The `resource` task supports JSON,
+YAML and `string` formats.
+
+The task is provided automatically by Concord and does not require any
+external dependencies.
+
+- [Reading a Resource](#reading-a-resource)
+- [Writing a Resource](#writing-a-resource)
+- [Parse JSON String](#parse-json-string)
+- [Format](#format)
+- [Pretty Format](#pretty-format)
+
+## Reading a Resource
+
+The `asJson` method of the `resource` task can read a JSON-file resource and
+create a `json` object.
+
+```yaml
+flows:
+ default:
+ - expr: ${resource.asJson('sample-file.json')}
+ out: jsonObj
+ # we can now use it like a simple object
+ - log: ${jsonObj.any_key}
+```
+
+The `asString` method can read a file resource and create a `string` object with
+the content.
+
+```yaml
+- log: ${resource.asString('sample-file.txt')}
+```
+
+The `asYaml` method supports reading files using the YAML format.
+
+```yaml
+flows:
+ default:
+ - expr: ${resource.asYaml('sample-file.yml')}
+ out: ymlObj
+ # we can now use it like a simple object
+ - log: ${ymlObj.any_key}
+```
+
+## Writing a Resource
+
+The `writeAsJson` method of the `resource` task can write a JSON object into a
+JSON-file resource.
+
+```yaml
+flows:
+ default:
+ - set:
+ newObj:
+ name: testName
+ type: testType
+ - log: ${resource.writeAsJson(newObj)}
+```
+
+The `writeAsString` method is used to write a file with `string` content.
+
+```yaml
+- log: ${resource.writeAsString('test string')}
+```
+
+The `writeAsYaml` method supports the YAML format.
+
+The `writeAs*` methods return the path of the newly created file as the result.
+This value can be stored in a variable and later used to read the content back
+into the process with the corresponding read method.
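+
+For example, the returned path can be captured and the content read back later
+(a sketch):
+
+```yaml
+flows:
+  default:
+    - set:
+        newObj:
+          name: testName
+
+    - expr: ${resource.writeAsJson(newObj)}
+      out: filePath
+
+    # read the content back using the stored path
+    - log: ${resource.asJson(filePath).name}
+```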
+
+## Parse JSON String
+
+The `fromJsonString` method of the `resource` task can parse a JSON string into
+a corresponding Java object.
+
+```yaml
+flows:
+ default:
+ - set:
+ jsonString: '{"name":"Concord"}'
+ - expr: ${resource.fromJsonString(jsonString)}
+ out: jsonObj
+ - log: "Hello ${jsonObj.name}!"
+```
+
+## Format
+
+The `printJson` method of the `resource` task serializes a given object, or JSON
+string, to a condensed JSON-formatted string. This can be useful for generating
+a JSON string for other tasks. It is more memory-efficient than the
+[`prettyPrintJson` method](#pretty-format).
+
+```yaml
+- set:
+ jsonString: ${resource.printJson('{"testKey":"testValue"}')}
+```
+
+## Pretty Format
+
+The `prettyPrintJson` method of the `resource` task creates a version of a JSON
+string or an object that is easier to read in a log or other output.
+
+```yaml
+- log: ${resource.prettyPrintJson('{"testKey":"testValue"}')}
+```
+
+```yaml
+flows:
+ default:
+ - set:
+ newObj:
+ name: testName
+ type: testType
+ - log: ${resource.prettyPrintJson(newObj)}
+```
+
+The `prettyPrintYaml` method can be used to format data as YAML with an option
+to add additional indentation:
+
+```yaml
+flows:
+ default:
+ - set:
+ data:
+ x: 1
+ y:
+ a: 10
+ b: 20
+
+ - set:
+ result: |
+ data: ${resource.prettyPrintYaml(data, 2)} # adds 2 spaces to each line
+
+ - log: |
+ -------------------------------------------
+ ${result}
+```
diff --git a/docs/src/plugins/slack.md b/docs/src/plugins/slack.md
new file mode 100644
index 0000000000..fbd355bce4
--- /dev/null
+++ b/docs/src/plugins/slack.md
@@ -0,0 +1,263 @@
+# Slack
+
+The `slack` plugin supports interaction with the [Slack](https://slack.com/)
+messaging platform.
+
+- posting messages to a channel with the [slack task](#slack-task)
+- working with channels and groups with the [slack channel task](#slack-channel-task)
+
+The task is provided automatically for all flows; no external dependencies are
+necessary.
+
+## Configuration
+
+The plugin supports default configuration settings supplied by
+[default process configuration policy](../getting-started/policies.html#default-process-configuration-rule):
+
+```json
+{
+ "defaultProcessCfg": {
+ "defaultTaskVariables": {
+ "slack": {
+ "apiToken": "slack-api-token",
+ "proxyAddress": "proxy.example.com",
+ "proxyPort": 123
+ }
+ }
+ }
+}
+```
+
+The bot user created for the API token configuration, e.g. `concord`, has to be
+a member of the channel receiving the messages.
+
+## Common Parameters
+
+Common parameters of both `slack` and `slackChannel` tasks:
+- `apiToken`: required, the
+  [slack API token](https://api.slack.com/custom-integrations/legacy-tokens)
+  for authentication and authorization. The owner of the token has to have
+  sufficient access rights to create or archive channels and groups. Typically
+  it should be provided via the [Crypto task](./crypto.html) or
+  configured in the [default variables](../getting-started/policies.html#default-process-configuration-rule);
+- `proxyAddress`: optional, the proxy's host name;
+- `proxyPort`: optional, the proxy's port.
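+
+For example, the `apiToken` can be retrieved at runtime with the
+[Crypto task](./crypto.html). This sketch assumes a single-value secret named
+`mySlackToken` exists in the `myOrg` organization; both names are placeholders:
+
+```yaml
+flows:
+  default:
+    - task: slack
+      in:
+        apiToken: ${crypto.exportAsString('myOrg', 'mySlackToken', null)}
+        channelId: "C7HNUMYQ1"
+        text: "Hello from Concord"
+```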
+
+
+
+## Slack Task
+
+Possible operations are:
+
+- [Send Message](#send-message)
+- [Add Reaction](#add-reaction)
+
+### Send Message
+
+A message `text` can be sent to a specific channel identified by a `channelId`
+with the standard [runtime-v2 task call syntax](../processes-v2/flows.html#task-calls).
+
+```yaml
+flows:
+ default:
+ - task: slack
+ in:
+ channelId: "exampleId"
+ username: "anyCustomString"
+ iconEmoji: ":information_desk_person:"
+ text: "Starting execution on Concord, process ID ${txId}"
+ ignoreErrors: true
+ out: result
+
+ - if: "${!result.ok}"
+ then:
+ - log: "Error while sending a message: ${result.error}"
+
+ ...
+
+ - task: slack
+ in:
+ channelId: "exampleId"
+ ts: ${result.ts}
+ replyBroadcast: false
+ username: "anyCustomString"
+ iconEmoji: ":information_desk_person:"
+ text: "Execution on Concord for process ID ${txId} completed."
+ ignoreErrors: true
+```
+
+The `channelId` can be seen in the URL of the channel, e.g. `C7HNUMYQ1`.
+Alternatively, the name of the channel can be used, e.g. `my-project-channel`.
+To send a message to a specific user, use the `@handle` syntax:
+
+```yaml
+- task: slack
+ in:
+ channelId: "@someone"
+ text: "Hi there!"
+```
+
+> Though using `@handle` does work, it stops working if the user changes the
+> _Display Name_ of their Slack profile.
+
+Optionally, the sender name appearing as the user submitting the post can be
+changed with `username`. In addition, the optional `iconEmoji` parameter can
+configure the icon used for the post.
+
+In addition to
+[common task result fields](../processes-v2/flows.html#task-result-data-structure),
+the `slack` task returns:
+
+- `ts` - timestamp ID of the posted message. It can be used in a subsequent
+  `slack` task call to make the next message a reply, or in the
+  `addReaction` action.
+- `id` - channel ID that can be used in subsequent operations.
+
+The `ts` field of the result object can be used to create a thread and reply to
+it. Avoid using a reply's `ts` value; use its parent's instead.
+
+The optional `ignoreErrors` field can be used to ignore any failures that
+might occur when sending a Slack message. When the value of this field is
+`true`, the flow does not throw an exception and fail when the slack task
+fails.
+
+The value defaults to `false` if the `ignoreErrors` field is not specified
+in the parameters.
+
+The optional field `replyBroadcast` is used with `ts` and will also post
+the message to the channel. The value defaults to `false` and has no
+effect if `ts` is not used.
+
+### Add Reaction
+
+The Slack task can be used to add a reaction (emoji) to a posted message using
+the `addReaction` action.
+
+- `action` - the action to perform, in this case `addReaction`;
+- `channelId` - channel ID where the message to react to was posted,
+  e.g. `C7HNUMYQ1`;
+- `ts` - timestamp of the posted message to add the reaction to, usually
+  returned by the [sendMessage](#send-message) action;
+- `reaction` - reaction (emoji) name.
+
+```yaml
+flows:
+ default:
+ - task: slack
+ in:
+ action: addReaction
+ channelId: ${result.id}
+ ts: ${result.ts}
+      reaction: "thumbsup"
+ ignoreErrors: true
+ out: result
+
+ - if: "${!result.ok}"
+ then:
+ - log: "Error while adding a reaction: ${result.error}"
+```
+
+The `addReaction` action only returns
+[common task result fields](../processes-v2/flows.html#task-result-data-structure).
+
+
+
+## Slack Channel Task
+
+The `slackChannel` task supports creating and archiving channels and groups of the
+[Slack](https://slack.com/) messaging platform.
+
+Possible operations are:
+
+- [Create a channel](#create-a-channel)
+- [Archive a channel](#archive-a-channel)
+- [Create a group](#create-a-group)
+- [Archive a group](#archive-a-group)
+
+The `slackChannel` task uses the following input parameters:
+
+- `action`: required, the name of the operation to perform: `create`, `archive`,
+  `createGroup` or `archiveGroup`;
+- `channelName`: the name of the slack channel or group to create, required
+  for `create` and `createGroup`;
+- `channelId`: the id of the slack channel or group to archive, required
+  for `archive` and `archiveGroup`.
+
+
+
+### Create a Channel
+
+This `slackChannel` task can be used to create a new channel with the `create` action.
+
+```yaml
+flows:
+ default:
+ - task: slackChannel
+ in:
+ action: create
+ channelName: myChannelName
+ apiToken: mySlackApiToken
+ out: result
+ - log: "Channel ID: ${result.slackChannelId}"
+```
+
+The identifier of the created channel is available in the returned object in
+the `slackChannelId` field.
+
+
+
+### Archive a Channel
+
+This `slackChannel` task can be used to archive an existing channel with the
+`archive` action.
+
+```yaml
+flows:
+ default:
+ - task: slackChannel
+ in:
+ action: archive
+ channelId: C7HNUMYQ1
+ apiToken: mySlackApiToken
+```
+
+The `channelId` can be seen in the URL of the channel, e.g. `C7HNUMYQ1`.
+
+
+
+### Create a Group
+
+This `slackChannel` task can be used to create a group with the `createGroup`
+action.
+
+```yaml
+flows:
+ default:
+ - task: slackChannel
+ in:
+ action: createGroup
+ channelName: myChannelName
+ apiToken: mySlackApiToken
+ out: result
+ - log: "Group ID: ${result.slackChannelId}"
+```
+
+The identifier of the created group is available in the returned object in
+the `slackChannelId` field.
+
+
+
+### Archive a Group
+
+This `slackChannel` task can be used to archive an existing group with the
+`archiveGroup` action.
+
+```yaml
+flows:
+ default:
+ - task: slackChannel
+ in:
+ action: archiveGroup
+ channelId: C7HNUMYQ1
+ apiToken: mySlackApiToken
+```
diff --git a/docs/src/plugins/sleep.md b/docs/src/plugins/sleep.md
new file mode 100644
index 0000000000..3897d77a0e
--- /dev/null
+++ b/docs/src/plugins/sleep.md
@@ -0,0 +1,57 @@
+# Sleep
+
+The `sleep` task provides methods to make the process wait or suspend for a
+certain amount of time.
+
+The task is provided automatically by Concord and does not require any
+external dependencies.
+
+## Usage
+
+Sleep for a specific amount of time, for example 10000 ms (10s):
+
+```yaml
+- ${sleep.ms(10000)}
+```
+
+or using the full task syntax:
+
+```yaml
+- task: sleep
+ in:
+ duration: 10
+```
+
+Alternatively, an [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) timestamp
+can be used to specify the point in time until which the process should sleep:
+
+```yaml
+- task: sleep
+ in:
+ until: "2019-09-10T16:00:00+00:00"
+```
+
+If the `until` value is in the past, Concord logs a warning message `Skipping
+the sleep, the specified datetime is in the past`.
+
+## Suspend process for sleep duration
+
+Sleeping for long durations wastes Agent resources. The process can be suspended
+for the duration to free the Agent to run other processes in the meantime.
+
+```yaml
+- task: sleep
+ in:
+ suspend: true
+ duration: ${60 * 5} # 5 minutes
+```
+
+Instead of waiting for the specified time, the process can be suspended and
+resumed at a later date:
+
+```yaml
+- task: sleep
+ in:
+ suspend: true
+ until: "2019-09-10T16:00:00+00:00"
+```
diff --git a/docs/src/plugins/smtp.md b/docs/src/plugins/smtp.md
new file mode 100644
index 0000000000..eb0252ee6e
--- /dev/null
+++ b/docs/src/plugins/smtp.md
@@ -0,0 +1,224 @@
+# SMTP
+
+To send email notifications as a step of a flow, use the `smtp` task.
+
+- [Usage](#usage)
+- [Attachments](#attachments)
+- [Optional Parameters](#optional-parameters)
+- [Message Template](#message-template)
+- [SMTP Server](#smtp-server)
+- [SMTP as Default Process Configuration](#smtp-as-default-process-configuration)
+- [Specific SMTP Server in Your Concord File](#specific-smtp-server-in-your-concord-file)
+
+__Parameters__
+- `smtpParams` - SMTP server settings including:
+ - `host` - `String`, server address
+ - `port` - `Number`, server port
+- `mail` - mail settings including:
+ - `from` - `String`, sender's email address
+  - `to` - `String` or `List`, comma-separated email addresses or list of email addresses to receive the email
+  - `replyTo` - optional `String`, reply-to email address
+  - `cc` - optional `String` or `List`, comma-separated email addresses or list of email addresses to carbon copy
+  - `bcc` - optional `String` or `List`, comma-separated email addresses or list of email addresses to blind carbon copy
+ - `subject` - `String`, email subject
+ - `message` - `String`, plaintext email body (optional if using `template`)
+ - `template` - optional `String`, template file path in working directory
+ - `attachments` - optional `List` of file [attachments](#attachments)
+
+## Usage
+
+To make use of the `smtp` task, first declare the plugin in `dependencies` under
+`configuration`. This allows you to add an `smtp` task in any flow as a step.
+
+```yaml
+configuration:
+ dependencies:
+ - mvn://com.walmartlabs.concord.plugins.basic:smtp-tasks:{{ site.concord_core_version }}
+```
+
+This adds the task to the classpath and allows you to invoke the task in a flow:
+
+```yaml
+flows:
+ default:
+ - task: smtp
+ in:
+ mail:
+ from: sender@example.com
+ to: recipient@example.com
+ subject: "Hello from Concord"
+ message: "My message"
+```
+
+The optional `debug` boolean parameter enables additional debug logging by the
+plugin when set to `true`; it defaults to `false`.
+
+The `mail` input parameter includes `from` to specify the email
+address to be used as the sender address, `to` for the recipient address,
+`subject` for the message subject and `message` for the actual message body.
+
+## Attachments
+
+The `attachments` parameter accepts a list of file paths or attachment
+definitions. File paths must be relative to the process' working directory.
+
+```yaml
+flows:
+ default:
+ - task: smtp
+ in:
+ mail:
+ from: sender@example.com
+ # ...other params...
+ attachments:
+ # simple file attachment
+ - "myFile.txt"
+
+ # or get specific
+ - path: "test/myOtherFile.txt"
+ disposition: "attachment"
+ description: "my attached file"
+ name: "my.txt"
+```
+
+The above example attaches two files from the process working directory,
+`myFile.txt` from the directory itself and `myOtherFile.txt` from the `test`
+directory. The `description` and `name` parameters are optional. The
+`disposition` parameter allows the values `attachment` or `inline`. Inline
+inserts the file as part of the email message itself.
+
+## Optional Parameters
+
+You can add `cc` and `bcc` recipient email addresses, and specify
+a `replyTo` address.
+
+In the `to`, `cc`, and `bcc` fields, you can handle multiple addresses, either as
+a comma separated list shown in the following `cc` configuration, or a YAML array
+as in the following `bcc` configuration:
+
+```yaml
+flows:
+ default:
+ - task: smtp
+ in:
+ mail:
+ from: sender@example.com
+ to: recipient-a@example.com
+ cc: abc@example.com,def@example.com,ghi@example.com
+ bcc:
+ - 123@example.com
+ - 456@example.com
+ - 789@example.com
+ replyTo: feedback@example.com
+ subject: "Hello from Concord"
+ message: "My message"
+```
+
+To send an email to the process initiator, you can use the
+attribute `initiator.attributes.mail`.
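+
+For example, a minimal sketch using the initiator's email address as the
+recipient:
+
+```yaml
+flows:
+  default:
+    - task: smtp
+      in:
+        mail:
+          from: sender@example.com
+          to: ${initiator.attributes.mail}
+          subject: "Hello from Concord"
+          message: "My message"
+```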
+
+## Message Template
+
+Concord supports the use of a separate file for longer email messages. As an
+alternative to `message`, specify `template` and point to a file in your project
+that contains the message text:
+
+```yaml
+- task: smtp
+ in:
+ mail:
+ template: mail.mustache
+```
+
+The template engine [Mustache](https://mustache.github.io/) is used to process
+email template files, so you can use any variables from the Concord process
+context in the message.
+
+When creating content in a template file, you can reference any variable that
+is defined in the flow by enclosing its name in double curly braces in the
+template file:
+
+```
+The process for this project was started by {{ initiator.displayName }}.
+```
+
+When a template file name ends with `.html`, the email body is sent HTML-formatted.
+
+```yaml
+- task: smtp
+ in:
+ mail:
+ from: sender@example.com
+ to: recipient@example.com
+ subject: "Howdy!"
+ template: "mail.mustache.html"
+```
+
+## SMTP Server
+
+For email notifications with the `smtp` task to work, the connection details
+for your SMTP server must be specified using one of the following options:
+
+- as a global default process configuration
+- as a configuration within your Concord file
+
+In most cases, a Concord administrator takes care of this configuration on a
+global default process configuration.
+
+### SMTP as Default Process Configuration
+
+The simplest and cleanest way to activate the task and specify the SMTP server
+connection details is to set up a
+[default process configuration](../getting-started/policies.html#default-process-configuration-rule)
+policy:
+
+1. Under `configuration/dependencies`, specify the `smtp-tasks` plugin.
+2. Add `smtpParams` as an `argument` and specify the SMTP server `host` and
+   `port` as attributes:
+
+```yaml
+configuration:
+ dependencies:
+ - mvn://com.walmartlabs.concord.plugins.basic:smtp-tasks:{{ site.concord_core_version }}
+ arguments:
+ smtpParams:
+ host: smtp.example.com
+ port: 25
+```
+
+### Specific SMTP Server in Your Concord File
+
+In some cases you might want to specify the SMTP server in your own Concord
+flow, instead of using the global configuration. This approach is required if no
+global configuration is set up.
+
+First, add the plugin as a dependency:
+
+```yaml
+configuration:
+ dependencies:
+ - mvn://com.walmartlabs.concord.plugins.basic:smtp-tasks:{{ site.concord_core_version }}
+```
+
+Then set the `smtpParams` with the connection details for any usage of
+the `smtp` task:
+
+```yaml
+flows:
+ default:
+ - task: smtp
+ in:
+ smtpParams:
+ host: smtp.example.com
+ port: 25
+ mail:
+ from: sender@example.com
+ to: recipient@example.com
+ subject: "Hello from Concord"
+ message: "My message"
+```
+
+Consider using a global variable to store the parameters in case of multiple
+`smtp` invocations.
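+
+A sketch of that approach, assuming an illustrative `mySmtpParams` argument
+name: the parameters are defined once under `arguments` and referenced in each
+`smtp` call:
+
+```yaml
+configuration:
+  arguments:
+    mySmtpParams:          # illustrative variable name
+      host: smtp.example.com
+      port: 25
+
+flows:
+  default:
+    - task: smtp
+      in:
+        smtpParams: ${mySmtpParams}
+        mail:
+          from: sender@example.com
+          to: recipient@example.com
+          subject: "First message"
+          message: "My message"
+    - task: smtp
+      in:
+        smtpParams: ${mySmtpParams}
+        mail:
+          from: sender@example.com
+          to: recipient@example.com
+          subject: "Second message"
+          message: "Another message"
+```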
diff --git a/docs/src/processes-v1/configuration.md b/docs/src/processes-v1/configuration.md
new file mode 100644
index 0000000000..8a1316b291
--- /dev/null
+++ b/docs/src/processes-v1/configuration.md
@@ -0,0 +1,479 @@
+# Configuration
+
+The `configuration` section contains [dependencies](#dependencies),
+[arguments](#arguments) and other process configuration values.
+
+- [Merge Rules](#merge-rules)
+- [Entry Point](#entry-point)
+- [Arguments](#arguments)
+- [Dependencies](#dependencies)
+- [Requirements](#requirements)
+- [Process Timeout](#process-timeout)
+ - [Running Timeout](#running-timeout)
+ - [Suspend Timeout](#suspend-timeout)
+- [Exclusive Execution](#exclusive-execution)
+- [Metadata](#metadata)
+- [Template](#template)
+- [Runner](#runner)
+- [Debug](#debug)
+
+## Merge Rules
+
+Process `configuration` values can come from different sources: the section in
+the `concord.yml` file, request parameters, policies, etc. Here's the order in
+which all `configuration` sources are merged before the process starts:
+
+- environment-specific [default values](../getting-started/configuration.md#default-process-variables);
+- [defaultCfg](../getting-started/policies.md#default-process-configuration-rule) policy values;
+- the current organization's configuration values;
+- the current [project's configuration](../api/project.md#get-project-configuration) values;
+- values from the currently active [profiles](./profiles.md);
+- the configuration file sent in [the process start request](../api/process.md#start);
+- [processCfg](../getting-started/policies.md#process-configuration-rule) policy values.
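+
+Sources later in the list override values from earlier sources. As a
+simplified sketch, an argument defined in the file's `configuration` section
+can be overridden by an active profile:
+
+```yaml
+configuration:
+  arguments:
+    env: "dev"
+
+profiles:
+  prod:
+    configuration:
+      arguments:
+        env: "prod"  # used instead of "dev" when the "prod" profile is active
+```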
+
+## Entry Point
+
+The `entryPoint` configuration sets the name of the flow that will be used for
+process executions. If no `entryPoint` is specified, the flow labelled
+`default` is used automatically, if it exists.
+
+```yaml
+configuration:
+ entryPoint: "main"
+flows:
+ main:
+ - log: "Hello World"
+```
+
+## Arguments
+
+Default values for arguments can be defined in the `arguments` section of the
+configuration as simple key/value pairs as well as nested values:
+
+```yaml
+configuration:
+ arguments:
+ name: "Example"
+ coordinates:
+ x: 10
+ y: 5
+ z: 0
+flows:
+ default:
+ - log: "Project name: ${name}"
+ - log: "Coordinates (x,y,z): ${coordinates.x}, ${coordinates.y}, ${coordinates.z}"
+```
+
+Values of `arguments` can contain [expressions](./flows.md#expressions). Expressions can
+use all regular tasks:
+
+```yaml
+configuration:
+ arguments:
+ listOfStuff: ${myServiceTask.retrieveListOfStuff()}
+ myStaticVar: 123
+```
+
+The variables are evaluated in the order of definition. For example, it is
+possible to use a variable value in another variable if the former is defined
+earlier than the latter:
+
+```yaml
+configuration:
+ arguments:
+ name: "Concord"
+ message: "Hello, ${name}"
+```
+
+A variable's value can be [defined or modified with the set step](./flows.md#setting-variables) and a
+[number of variables](./index.md#provided-variables) are automatically set in
+each process and available for usage.
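+
+For instance, an argument's initial value can later be changed with the `set`
+step:
+
+```yaml
+configuration:
+  arguments:
+    name: "Concord"
+
+flows:
+  default:
+    - log: "Hello, ${name}"   # prints "Hello, Concord"
+    - set:
+        name: "world"
+    - log: "Hello, ${name}"   # prints "Hello, world"
+```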
+
+## Dependencies
+
+The `dependencies` array allows users to specify the URLs of dependencies such
+as:
+
+- Concord plugins and their dependencies
+- Dependencies needed for specific scripting language support
+- Other dependencies required for process execution
+
+```yaml
+configuration:
+ dependencies:
+ # maven URLs...
+ - mvn://org.codehaus.groovy:groovy-all:2.4.12
+ # or direct URLs
+ - https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-all/2.4.12/groovy-all-2.4.12.jar"
+ - https://repo1.maven.org/maven2/org/apache/commons/commons-lang3/3.6/commons-lang3-3.6.jar"
+```
+
+The artifacts are downloaded and added to the classpath for process execution
+and are typically used for [task implementations](../getting-started/tasks.md).
+
+Multiple versions of the same artifact are replaced with a single one,
+following standard Maven resolution rules.
+
+Usage of the `mvn:` URL pattern is preferred since it uses the centrally
+configured [list of repositories](../getting-started/configuration.md#dependencies)
+and downloads not only the specified dependency itself, but also any required
+transitive dependencies. This makes the Concord project independent of access
+to a specific repository URL, and hence more portable.
+
+Maven URLs provide additional options:
+
+- `transitive=true|false` - include all transitive dependencies
+ (default `true`);
+- `scope=compile|provided|system|runtime|test` - use the specific
+ dependency scope (default `compile`).
+
+Additional options can be added as "query parameters" to the dependency's
+URL:
+```yaml
+configuration:
+ dependencies:
+ - "mvn://com.walmartlabs.concord:concord-client:{{ site.concord_core_version }}?transitive=false"
+```
+
+The syntax for the Maven URL uses the groupId, artifactId, optionally packaging,
+and version values - the GAV coordinates of a project. For example the Maven
+`pom.xml` for the Groovy scripting language runtime has the following
+definition:
+
+```xml
+<project>
+  <groupId>org.codehaus.groovy</groupId>
+  <artifactId>groovy-all</artifactId>
+  <version>2.4.12</version>
+  ...
+</project>
+```
+
+This results in the path
+`org/codehaus/groovy/groovy-all/2.4.12/groovy-all-2.4.12.jar` in the
+Central Repository and any repository manager proxying the repository.
+
+The `mvn` syntax uses the short form for GAV coordinates
+`groupId:artifactId:version`, so for example
+`org.codehaus.groovy:groovy-all:2.4.12` for Groovy.
+
+Newer versions of groovy-all use the `pom` packaging type and define their
+dependencies there. To use a project that applies this approach, called Bill of
+Material (BOM), as a dependency you need to specify the packaging in between
+the artifactId and version. For example, version `2.5.21` has to be specified as
+`org.codehaus.groovy:groovy-all:pom:2.5.21`:
+
+```yaml
+configuration:
+ dependencies:
+ - "mvn://org.codehaus.groovy:groovy-all:pom:2.5.21"
+```
+
+The same logic and syntax usage applies to all other dependencies including
+Concord plugins.
+
+## Requirements
+
+A process can have a specific set of `requirements` configured. Requirements
+are used to control where the process is executed and what kind of resources it
+requires.
+
+The server uses the `requirements.agent` value to determine which agents can
+execute the process. For example, if the process specifies
+
+```yaml
+configuration:
+ requirements:
+ agent:
+ favorite: true
+```
+
+and there is an agent with
+
+```
+concord-agent {
+ capabilities = {
+ favorite = true
+ }
+}
+```
+
+in its configuration file then it is a suitable agent for the process.
+
+The following rules are used when matching `requirements.agent` values of
+processes and agent `capabilities`:
+- if the value is present in `capabilities` but missing in `requirements.agent`,
+it is **ignored**;
+- if the value is missing in `capabilities` but present in `requirements.agent`,
+then it is **not a match**;
+- string values in `requirements.agent` are treated as **regular expressions**,
+i.e. in pseudo code `capabilities_value.regex_match(requirements_value)`;
+- lists in `requirements.agent` are treated as a "one or more" match, i.e. one
+or more elements in the list must match the value from `capabilities`;
+- other values are compared directly.
+
+More examples:
+
+```yaml
+configuration:
+ requirements:
+ agent:
+ size: ".*xl"
+ flavor:
+ - "vanilla"
+ - "chocolate"
+```
+
+matches agents with:
+
+```
+concord-agent {
+ capabilities = {
+ size = "xxl"
+ flavor = "vanilla"
+ }
+}
+```
+
+Custom `jvm` arguments can be specified in the `requirements` section of the
+`configuration` object. The [Concord Agent](../getting-started/index.md#concord-agent)
+passes these arguments to the process' JVM:
+
+```yaml
+configuration:
+ requirements:
+ jvm:
+ extraArgs:
+ - "-Xms256m"
+ - "-Xmx512m"
+```
+
+**Note:** Processes with custom `jvm` arguments can't use the "pre-fork"
+mechanism and are usually slower to start.
+
+**Note:** Consult with your Concord instance's admin to determine what the limitations
+are for JVM memory and other settings.
+
+## Process Timeout
+
+You can specify the maximum amount of time that a process can stay in a
+particular state. After this timeout the process is automatically cancelled
+and marked as `TIMED_OUT`.
+
+Currently, the runtime provides two different timeout parameters:
+- [processTimeout](#running-timeout) - how long the process can stay in
+ the `RUNNING` state;
+- [suspendTimeout](#suspend-timeout) - how long the process can stay in
+ the `SUSPENDED` state.
+
+Both timeout parameters accept a duration in the
+[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format:
+
+```yaml
+configuration:
+ processTimeout: "PT1H" # 1 hour
+```
+
+A special `onTimeout` flow can be used to handle timeouts:
+
+```yaml
+flows:
+ onTimeout:
+ - log: "I'm going to run when my parent process times out"
+```
+
+The way Concord handles timeouts is described in more detail in
+the [error handling](./flows.md#handling-cancellations-failures-and-timeouts)
+section.
+
+### Running Timeout
+
+You can specify the maximum amount of time the process can spend in
+the `RUNNING` state with the `processTimeout` configuration. It can be useful
+to set specific SLAs for deployment jobs or to use it as a global timeout:
+
+```yaml
+configuration:
+ processTimeout: "PT1H"
+flows:
+ default:
+ # a long running process
+```
+
+In the example above, if the process runs for more than 1 hour it is
+automatically cancelled and marked as `TIMED_OUT`.
+
+**Note:** forms waiting for input and other processes in `SUSPENDED` state
+are not affected by the process timeout. I.e. a `SUSPENDED` process can stay
+`SUSPENDED` indefinitely -- up to the allowed data retention period.
+
+### Suspend Timeout
+
+You can specify the maximum amount of time the process can spend in
+the `SUSPENDED` state with the `suspendTimeout` configuration. It can be useful
+to set specific SLAs for forms waiting for input and processes waiting for
+external events:
+
+```yaml
+configuration:
+ suspendTimeout: "PT1H"
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ org: myOrg
+ project: myProject
+ repo: myRepo
+ sync: true
+ suspend: true
+ ...
+```
+
+In the example above, if the process waits for more than 1 hour it is
+automatically cancelled and marked as `TIMED_OUT`.
+
+## Exclusive Execution
+
+The `exclusive` section in the process `configuration` can be used to configure
+exclusive execution of the process:
+
+```yaml
+configuration:
+ exclusive:
+ group: "myGroup"
+ mode: "cancel"
+
+flows:
+ default:
+ - "${sleep.ms(60000)}" # simulate a long-running task
+```
+
+In the example above, if another process in the same project with the same
+`group` value is submitted, it will be immediately cancelled.
+
+If `mode` is set to `wait`, then only one process in the same `group` is
+allowed to run at a time.
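+
+A minimal sketch of the `wait` mode; a second process submitted with the same
+`group` value waits for the first one to finish instead of cancelling it:
+
+```yaml
+configuration:
+  exclusive:
+    group: "myGroup"
+    mode: "wait"
+
+flows:
+  default:
+    - "${sleep.ms(60000)}" # simulate a long-running task
+```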
+
+**Note:** this feature is available only for project processes.
+
+See also: [Exclusive Triggers](../triggers/index.md#exclusive-triggers).
+
+## Metadata
+
+Flows can expose internal variables as process metadata. Such metadata can be
+retrieved using the [API](../api/process.md#status) or displayed in
+the process list in [Concord Console](../console/process.md#process-metadata).
+
+```yaml
+configuration:
+ meta:
+ myValue: "n/a" # initial value
+
+flows:
+ default:
+ - set:
+ myValue: "hello!"
+```
+
+After each step, Concord sends the updated value back to the server:
+
+```bash
+$ curl -skn http://concord.example.com/api/v1/process/1c50ab2c-734a-4b64-9dc4-fcd14637e36c | jq '.meta.myValue'
+"hello!"
+```
+
+Nested variables and forms are also supported:
+
+```yaml
+configuration:
+ meta:
+ nested.value: "n/a"
+
+flows:
+ default:
+ - set:
+ nested:
+ value: "hello!"
+```
+
+The value is stored under the `nested.value` key:
+
+```bash
+$ curl -skn http://concord.example.com/api/v1/process/1c50ab2c-734a-4b64-9dc4-fcd14637e36c | jq '.meta["nested.value"]'
+"hello!"
+```
+
+Example with a form:
+
+```yaml
+configuration:
+ meta:
+ myForm.myValue: "n/a"
+
+flows:
+ default:
+ - form: myForm
+ fields:
+ - myValue: { type: "string" }
+```
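+
+Once the form is submitted, the value is stored under the `myForm.myValue` key
+and can be retrieved the same way (the process ID here is illustrative):
+
+```bash
+$ curl -skn http://concord.example.com/api/v1/process/1c50ab2c-734a-4b64-9dc4-fcd14637e36c | jq '.meta["myForm.myValue"]'
+```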
+
+## Template
+
+A template can be used to allow inheritance of all the configurations of another
+project. The value for the `template` field has to be a valid URL pointing to
+a JAR-archive of the project to use as template.
+
+The template is downloaded for [process execution](./index.md)
+and exploded in the workspace. More detailed documentation, including
+information about available templates, can be found in the
+[templates section](../templates/index.md).
+
+## Runner
+
+[Concord Runner]({{ site.concord_source }}tree/master/runtime/v1/impl) is
+the name of the default runtime used for actual execution of processes. Its
+parameters can be configured in the `runner` section of the `configuration`
+object. Here is an example of the default configuration:
+
+```yaml
+configuration:
+ runner:
+ debug: false
+ logLevel: "INFO"
+ events:
+ recordTaskInVars: false
+ inVarsBlacklist:
+ - "password"
+ - "apiToken"
+ - "apiKey"
+
+ recordTaskOutVars: false
+ outVarsBlacklist: []
+```
+
+- `debug` - enables additional debug logging; `true` if `configuration.debug`
+  is enabled;
+- `logLevel` - [logging level](https://logback.qos.ch/manual/architecture.html#effectiveLevel)
+ for the `log` task;
+- `events` - the process event recording parameters:
+ - `recordTaskInVars` - enable or disable recording of input variables in task
+ calls;
+ - `inVarsBlacklist` - list of variable names that must not be recorded if
+ `recordTaskInVars` is `true`;
+ - `recordTaskOutVars` - enable or disable recording of output variables in
+ task calls;
+  - `outVarsBlacklist` - list of variable names that must not be recorded if
+    `recordTaskOutVars` is `true`.
+
+See the [Process Events](../getting-started/processes.md#process-events)
+section for more details about the process event recording.
+
+## Debug
+
+Enabling the `debug` configuration option causes Concord to log paths of all
+resolved dependencies. It is useful for debugging classpath conflict issues:
+
+```yaml
+configuration:
+ debug: true
+```
diff --git a/docs/src/processes-v1/flows.md b/docs/src/processes-v1/flows.md
new file mode 100644
index 0000000000..91f655ff4c
--- /dev/null
+++ b/docs/src/processes-v1/flows.md
@@ -0,0 +1,704 @@
+# Flows
+
+Concord flows consist of a series of steps executing various actions: calling
+plugins (also known as "tasks"), performing data validation, creating
+[forms](../getting-started/forms.md) and other steps.
+
+The `flows` section should contain at least one flow definition:
+
+```yaml
+flows:
+ default:
+ ...
+
+ anotherFlow:
+ ...
+```
+
+Each flow must have a unique name and at least one [step](#steps).
+
+## Steps
+
+Each flow is a list of steps:
+
+```yaml
+flows:
+ default:
+ - log: "Hello!"
+
+ - if: "${1 > 2}"
+ then:
+ - log: "How is this possible?"
+
+ - log: "Bye!"
+```
+
+Flows can contain any number of steps and call each other. See below for
+the description of available steps and syntax constructs.
+
+- [Expressions](#expressions)
+- [Conditional Expressions](#conditional-expressions)
+- [Return Command](#return-command)
+- [Exit Command](#exit-command)
+- [Groups of Steps](#groups-of-steps)
+- [Calling Other Flows](#calling-other-flows)
+- [Loops](#loops)
+- [Error Handling](#error-handling)
+- [Retry](#retry)
+- [Throwing Errors](#throwing-errors)
+- [Setting Variables](#setting-variables)
+- [Checkpoints](#checkpoints)
+
+### Expressions
+
+Expressions must be valid
+[Java Expression Language EL 3.0](https://github.com/javaee/el-spec) syntax
+and can be simple evaluations or perform actions by invoking more complex code.
+
+Short form:
+```yaml
+flows:
+ default:
+ # calling a method
+ - ${myBean.someMethod()}
+
+ # calling a method with an argument
+ - ${myBean.someMethod(myContextArg)}
+
+ # literal values
+ - ${1 + 2}
+
+ # EL 3.0 extensions:
+ - ${[1, 2, 3].stream().map(x -> x + 1).toList()}
+```
+
+Full form:
+```yaml
+flows:
+ default:
+ - expr: ${myBean.someMethod()}
+ out: myVar
+ error:
+ - ${log.error("something bad happened")}
+```
+
+Full form can optionally contain additional declarations:
+- `out` field: contains the name of a variable, in which a result of
+the expression will be stored;
+- `error` block: to handle any exceptions thrown by the evaluation.
+Exceptions are wrapped in `BpmnError` type.
+
+Literal values, for example arguments or [form](../getting-started/forms.md)
+field values, can contain expressions:
+
+```yaml
+flows:
+ default:
+ - myTask: ["red", "green", "${colors.blue}"]
+ - myTask: { nested: { literals: "${myOtherTask.doSomething()}"} }
+```
+
+Classes from the package `java.lang` can be accessed via EL syntax:
+
+```yaml
+- log: "Process running on ${System.getProperty('os.name')}"
+```
+
+### Conditional Expressions
+
+```yaml
+flows:
+ default:
+ - if: ${myVar > 0}
+ then: # (1)
+ - log: it's clearly non-zero
+ else: # (2)
+ - log: zero or less
+
+ - ${myBean.acceptValue(myVar)} # (3)
+```
+
+In this example, after `then` (1) or `else` (2) block are completed,
+the execution continues with the next step in the flow (3).
+
+"And", "or" and "not" operations are supported as well:
+```yaml
+flows:
+ default:
+ - if: ${true && true}
+ then:
+ - log: "Right-o"
+ - if: ${true || false}
+ then:
+ - log: "Yep!"
+ - if: ${!false}
+ then:
+ - log: "Correct!"
+```
+
+To compare a value (or the result of an expression) with multiple
+values, use the `switch` block:
+
+```yaml
+flows:
+ default:
+ - switch: ${myVar}
+ red:
+ - log: "It's red!"
+ green:
+ - log: "It's definitely green"
+ default:
+ - log: "I don't know what it is"
+
+ - log: "Moving along..."
+```
+
+In this example, branch labels `red` and `green` are the compared
+values and `default` is the block which will be executed if no other
+value fits.
+
+Expressions can be used as branch values:
+
+```yaml
+flows:
+ default:
+ - switch: ${myVar}
+ ${aKnownValue}:
+ - log: "Yes, I recognize this"
+ default:
+ - log: "Nope"
+```
+
+### Return Command
+
+The `return` command can be used to stop the execution of the current (sub) flow:
+
+```yaml
+flows:
+ default:
+ - if: ${myVar > 0}
+ then:
+ - log: moving along
+ else:
+ - return
+```
+
+The `return` command can be used to stop the current process if called from an
+entry point.
+
+### Exit Command
+
+The `exit` command can be used to stop the execution of the current process:
+
+```yaml
+flows:
+ default:
+ - if: ${myVar > 0}
+ then:
+ - exit
+ - log: "message"
+```
+
+The final status of a process after calling `exit` is `FINISHED`.
+
+### Groups of Steps
+
+Several steps can be grouped into one block. This allows `try-catch`-like
+semantics:
+
+```yaml
+flows:
+ default:
+ - log: a step before the group
+
+ - try:
+ - log: "a step inside the group"
+ - ${myBean.somethingDangerous()}
+ error:
+ - log: "well, that didn't work"
+```
+
+### Calling Other Flows
+
+Flows, defined in the same YAML document, can be called by their names or using
+the `call` step:
+
+```yaml
+flows:
+ default:
+ - log: hello
+
+ # short form: call another flow by its name
+ - mySubFlow
+
+ # full form: use `call` step
+ - call: anotherFlow
+ # (optional) additional call parameters
+ in:
+ msg: "Hello!"
+
+ - log: bye
+
+ mySubFlow:
+ - log: "a message from the sub flow"
+
+ anotherFlow:
+ - log: "message from another flow: ${msg}"
+```
+
+### Loops
+
+Concord flows can iterate through a collection of items in a loop using the
+`call` step and the `withItems` collection of values:
+
+```yaml
+ - call: myFlow
+ withItems:
+ - "first element"
+ - "second element"
+ - 3
+ - false
+
+ # withItems can also be used with tasks
+ - task: myTask
+ in:
+ myVar: ${item}
+ withItems:
+ - "first element"
+ - "second element"
+```
+
+The collection of items to iterate over can be provided by an expression:
+
+```yaml
+configuration:
+ arguments:
+ myItems:
+ - 100500
+ - false
+ - "a string value"
+
+flows:
+ default:
+ - call: myFlow
+ withItems: ${myItems}
+```
+
+The items are referenced in the invoked flow with the `${item}` expression:
+
+```yaml
+ myFlow:
+ - log: "We got ${item}"
+```
+
+Maps (dicts, in Python terms) can also be used:
+
+```yaml
+flows:
+ default:
+ - call: log
+ in:
+ msg: "${item.key} - ${item.value}"
+ withItems:
+ a: "Hello"
+ b: "world"
+```
+
+In the example above `withItems` iterates over the keys of the object. Each
+`${item}` provides `key` and `value` attributes.
+
+Lists of nested objects can be used in loops as well:
+
+```yaml
+flows:
+ default:
+ - call: deployToClouds
+ withItems:
+ - name: cloud1
+ fqdn: cloud1.myapp.example.com
+ - name: cloud2
+ fqdn: cloud2.myapp.example.com
+
+ deployToClouds:
+ - log: "Starting deployment to ${item.name}"
+ - log: "Using fqdn ${item.fqdn}"
+```
+
+### Error Handling
+
+The full form syntax allows using input variables (call arguments) and supports
+error handling.
+
+Task and expression errors are normal Java exceptions, which can be
+"caught" and handled using a special syntax.
+
+Expressions, tasks, groups of steps and flow calls can have an
+optional `error` block, which will be executed if an exception occurs:
+
+```yaml
+flows:
+ default:
+ # handling errors in an expression
+ - expr: ${myTask.somethingDangerous()}
+ error:
+ - log: "Gotcha! ${lastError}"
+
+ # handling errors in tasks
+ - task: myTask
+ error:
+ - log: "Fail!"
+
+ # handling errors in groups of steps
+ - try:
+ - ${myTask.doSomethingSafe()}
+ - ${myTask.doSomethingDangerous()}
+ error:
+ - log: "Here we go again"
+
+ # handling errors in flow calls
+ - call: myOtherFlow
+ error:
+ - log: "That failed too"
+```
+
+The `${lastError}` variable contains the last caught
+`java.lang.Exception` object.
+
+If an error was caught, the execution will continue from the next step:
+
+```yaml
+flows:
+ default:
+ - try:
+ - throw: "Catch that!"
+ error:
+ - log: "A"
+
+ - log: "B"
+```
+
+The execution logs `A` and then `B`.
+
+When a process is cancelled (killed) by a user, a special flow
+`onCancel` is executed:
+
+```yaml
+flows:
+ default:
+ - log: "Doing some work..."
+ - ${sleep.ms(60000)}
+
+ onCancel:
+ - log: "Pack your bags. Show's cancelled"
+```
+
+**Note:** `onCancel` handler processes are dispatched immediately when the process
+cancel request is sent. Variables set at runtime may not have been saved to the
+process state in the database and therefore may be unavailable or stale in the
+handler process.
+
+Similarly, `onFailure` flow is executed if a process crashes (moves into the `FAILED` state):
+
+```yaml
+flows:
+ default:
+ - log: "Brace yourselves, we're going to crash!"
+ - throw: "Crash!"
+
+ onFailure:
+ - log: "Yep, we just did"
+```
+
+In both cases, the server starts a _child_ process with a copy of
+the original process state and uses `onCancel` or `onFailure` as an
+entry point.
+
+**Note:** `onCancel` and `onFailure` handlers receive the _last known_
+state of the parent process' variables. This means that changes in
+the process state are visible to the _child_ processes:
+
+```yaml
+flows:
+ default:
+ # let's change something in the process state...
+ - set:
+ myVar: "xyz"
+
+ # will print "The default flow got xyz"
+ - log: "The default flow got ${myVar}"
+
+ # ...and then crash the process
+ - throw: "Boom!"
+
+ onFailure:
+ # will log "I've got xyz"
+ - log: "I've got ${myVar}"
+
+configuration:
+ arguments:
+ # original value
+ myVar: "abc"
+```
+
+In addition, `onFailure` flow receives `lastError` variable which
+contains the parent process' last (unhandled) error:
+
+```yaml
+flows:
+ default:
+ - throw: "Kablam!"
+
+ onFailure:
+ - log: "${lastError.cause}"
+```
+
+Nested data is also supported:
+```yaml
+flows:
+  default:
+    - throw:
+        myCause: "I wanted to"
+        whoToBlame:
+          mainCulprit: "${currentUser.username}"
+
+  onFailure:
+    - log: "The parent process failed because ${lastError.cause.payload.myCause}."
+    - log: "And ${lastError.cause.payload.whoToBlame.mainCulprit} is responsible for it!"
+```
+
+If an `onCancel` or `onFailure` flow fails, it is automatically
+retried up to three times.
+
+### Retry
+
+The `retry` attribute is used to automatically restart the `task`/`flow` call
+in case of errors or failures. Users can define the number of times the
+`task`/`flow` can be retried and a delay for each retry.
+
+- `delay` - the delay between retries, in seconds. Default value is `5`;
+- `in` - additional parameters for the retry;
+- `times` - the maximum number of times a task/flow can be retried.
+
+For example, the section below executes `myTask` with the provided `in`
+parameters. In case of errors, the task is retried up to 3 times with a
+3 second delay between attempts. Additional parameters for the retry are
+supplied in the nested `in` block.
+
+```yaml
+- task: myTask
+  in:
+    ...
+  retry:
+    in:
+      ...additional parameters...
+    times: 3
+    delay: 3
+```
+Retry flow call:
+
+```yaml
+- call: myFlow
+  in:
+    ...
+  retry:
+    in:
+      ...additional parameters...
+    times: 3
+    delay: 3
+```
+
+Parameters in the `retry` block's `in` section override the task's original
+`in` parameters with the same names.
+
+In the example below, the value of `someVar.nestedValue` is overwritten with
+`321` and a new key `newValue` is added in the `retry` block.
+
+
+```yaml
+- task: myTask
+  in:
+    someVar:
+      nestedValue: 123
+  retry:
+    in:
+      someVar:
+        nestedValue: 321
+        newValue: "hello"
+```
+
+The `retry` block also supports expressions:
+
+```yaml
+configuration:
+  arguments:
+    retryTimes: 3
+    retryDelay: 2
+
+flows:
+  default:
+    - task: myTask
+      retry:
+        times: "${retryTimes}"
+        delay: "${retryDelay}"
+```
+
+### Throwing Errors
+
+The `throw` step can be used to throw a new `RuntimeException` with the
+supplied message anywhere in a flow, including `error` sections and
+[conditional expressions](#conditional-expressions) such as if-then or
+switch-case.
+
+```yaml
+flows:
+  default:
+    - try:
+        - log: "Do something dangerous here"
+      error:
+        - throw: "oh, something went wrong."
+```
+
+Alternatively a caught exception can be thrown again using the `lastError` variable:
+
+```yaml
+flows:
+  default:
+    - try:
+        - log: "Do something dangerous here"
+      error:
+        - throw: ${lastError}
+```
+
+### Setting Variables
+
+The `set` step can be used to set variables in the current process context:
+
+```yaml
+flows:
+  default:
+    - set:
+        a: "a-value"
+        b: 3
+    - log: ${a}
+    - log: ${b}
+```
+
+Nested data can be updated using the `.` syntax:
+
+```yaml
+configuration:
+  arguments:
+    myComplexData:
+      nestedValue: "Hello"
+
+flows:
+  default:
+    - set:
+        myComplexData.nestedValue: "Bye"
+
+    # will print "Bye, Concord"
+    - log: "${myComplexData.nestedValue}, Concord"
+```
+
+A [number of variables](./index.md#variables) are automatically set in each
+process and available for usage.
+
+**Note:** all variables are global. Consider the following example:
+
+```yaml
+flows:
+  default:
+    - set:
+        x: "abc"
+
+    - log: "${x}" # prints out "abc"
+
+    - call: aFlow
+
+    - log: "${x}" # prints out "xyz"
+
+  aFlow:
+    - log: "${x}" # prints out "abc"
+
+    - set:
+        x: "xyz"
+```
+
+In the example above, even though the second `set` step is executed inside a
+subflow, its value becomes visible in the caller flow.
+
+The same applies to nested data:
+```yaml
+flows:
+  default:
+    - set:
+        nested:
+          x: "abc"
+
+    - call: aFlow
+
+    - log: "${nested.y}" # prints out "xyz"
+
+  aFlow:
+    - set:
+        nested.y: "xyz"
+```
+
+
+### Checkpoints
+
+A checkpoint is a point defined within a flow at which the process state is
+persisted in Concord. This process state can subsequently be restored and
+process execution can continue. A flow can contain multiple checkpoints.
+
+The [REST API](../api/checkpoint.md) can be used for listing and restoring
+checkpoints. Alternatively you can restore a checkpoint to continue processing
+directly from the Concord Console.
+
+The `checkpoint` step can be used to create a named checkpoint:
+
+```yaml
+flows:
+  default:
+    - log: "Starting the process..."
+    - checkpoint: "first"
+    - log: "Continuing the process..."
+    - checkpoint: "second"
+    - log: "Done!"
+```
+
+The example above creates two checkpoints: `first` and `second`.
+These checkpoints can be used to restart the process from the point right
+after the checkpoint's step. For example, if the process is restored using
+the `first` checkpoint, all steps starting with the `Continuing the process...`
+message are executed again.
+
+Checkpoint names can contain expressions:
+```yaml
+configuration:
+  arguments:
+    checkpointSuffix: "checkpoint"
+
+flows:
+  default:
+    - log: "Before the checkpoint"
+    - checkpoint: "first_${checkpointSuffix}"
+    - log: "After the checkpoint"
+```
+
+Checkpoint names must start with a (latin) letter or a digit, can contain
+whitespace, underscores `_`, `@`, dots `.`, minus signs `-` and tildes `~`.
+The length must be between 2 and 128 characters. Here's the regular expression
+used for validation:
+
+```
+^[0-9a-zA-Z][0-9a-zA-Z_@.\\-~ ]{1,128}$
+```
+
+Only process initiators, administrators and users with `WRITER` access level to
+the process' project can restore checkpoints with the API or the user console.
+
+After restoring a checkpoint, its name can be accessed using
+the `resumeEventName` variable.
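+
+For illustration, a minimal sketch (note: on the initial run, before any
+restore, the variable is not set, so it is safer to read it via the `context`):
+
+```yaml
+flows:
+  default:
+    - checkpoint: "first"
+    # after restoring from "first", resumeEventName holds the checkpoint name;
+    # context.getVariable returns null on the initial run
+    - log: "Resumed from: ${context.getVariable('resumeEventName')}"
+```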
+
+**Note:** files created during the process' execution are not saved during the
+checkpoint creation.
diff --git a/docs/src/processes-v1/imports.md b/docs/src/processes-v1/imports.md
new file mode 100644
index 0000000000..011e540c18
--- /dev/null
+++ b/docs/src/processes-v1/imports.md
@@ -0,0 +1,82 @@
+# Imports
+
+Resources such as flows, forms and other workflow files can be shared between
+Concord projects by using `imports`.
+
+How it works:
+
+- when the process is submitted, Concord reads the root `concord.yml` file
+ and looks for the `imports` declaration;
+- all imports are processed in the order of their declaration;
+- `git` repositories are cloned and their `path` directories are copied into the
+ `dest` directory of the process working directory;
+- `mvn` artifacts are downloaded and extracted into the `dest` directory;
+- any existing files in target directories are overwritten;
+- the process continues. Any imported resources placed into the `concord`,
+  `flows`, `profiles` and `forms` directories are loaded as usual.
+
+For example:
+
+```yaml
+imports:
+  - git:
+      url: "https://github.com/walmartlabs/concord.git"
+      path: "examples/hello_world"
+
+configuration:
+  arguments:
+    name: "you"
+```
+
+Running the above example produces a `Hello, you!` log message.
+
+The full syntax for imports is:
+
+```yaml
+imports:
+  - type:
+      options
+  - type:
+      options
+```
+
+Note that `imports` is a top-level element, similar to `configuration`.
+In addition, `imports` are only allowed in the main YAML file, the root
+`concord.yml`.
+
+Types of imports and their parameters:
+
+- `git` - imports remote git repositories:
+  - `url` - URL of the repository, either `http(s)` or `git@`;
+  - `name` - the organization and repository names, e.g. `walmartlabs/concord`.
+    Automatically expanded into the full URL based on the server's configuration.
+    Mutually exclusive with `url`;
+  - `version` - (optional) branch, tag or a commit ID to use. Default `master`;
+  - `path` - (optional) path in the repository to use as the source directory;
+  - `dest` - (optional) path in the process' working directory to use as the
+    destination directory. Defaults to the process workspace `./concord/`;
+  - `exclude` - (optional) list of regular expression patterns to exclude some
+    files when importing;
+  - `secret` - reference to a `KEY_PAIR` or a `USERNAME_PASSWORD` secret. Must be
+    a non-password protected secret;
+- `mvn` - imports a Maven artifact:
+  - `url` - the artifact's URL, in the format of `mvn://groupId:artifactId:version`.
+    Only JAR and ZIP archives are supported;
+  - `dest` - (optional) path in the process' working directory to use as the
+    destination directory. Default `./concord/`.
+
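+For example, an `mvn` import might look like this (the artifact coordinates
+are hypothetical):
+
+```yaml
+imports:
+  - mvn:
+      url: "mvn://com.example:my-concord-flows:1.0.0"
+```
+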
+The `secret` reference has the following syntax:
+- `org` - (optional) name of the secret's org. Uses the process's organization
+if not specified;
+- `name` - name of the secret;
+- `password` - (optional) password for password-protected secrets. Accepts
+literal values only, expressions are not supported.
+
+An example of a `git` import using custom authentication:
+
+```yaml
+imports:
+  - git:
+      url: "https://github.com/me/my_private_repo.git"
+      secret:
+        name: "my_secret_key"
+```
diff --git a/docs/src/processes-v1/index.md b/docs/src/processes-v1/index.md
new file mode 100644
index 0000000000..7bb9af27f2
--- /dev/null
+++ b/docs/src/processes-v1/index.md
@@ -0,0 +1,319 @@
+# Overview
+
+- [Directory Structure](#directory-structure)
+- [Additional Concord Files](#additional-concord-files)
+- [DSL](#dsl)
+- [Public Flows](#public-flows)
+- [Variables](#variables)
+  - [Provided Variables](#provided-variables)
+  - [Context](#context)
+  - [Output Variables](#output-variables)
+
+## Directory Structure
+
+Regardless of how the process was started, whether using
+[a project and a Git repository](../api/process.md#form-data) or by
+[sending a payload archive](../api/process.md#zip-file), Concord assumes
+a certain structure of the process's working directory:
+
+- `concord.yml`: a Concord [DSL](#dsl) file containing the main flow,
+configuration, profiles and other declarations;
+- `concord/*.yml`: directory containing [extra Concord YAML files](#additional-concord-files);
+- `forms`: directory with [custom forms](../getting-started/forms.md#custom).
+
+Anything else is copied as-is and available for the process.
+[Plugins]({{ site.concord_plugins_v1_docs }}/index.md) can require other files to be present in
+the working directory.
+
+The same structure should be used when storing your project in a Git repository.
+Concord clones the repository and recursively copies the specified directory
+[path](../api/repository.md#create-a-repository) (`/` by default which includes
+all files in the repository) to the working directory for the process. If a
+subdirectory is specified in the Concord repository's configuration, any paths
+outside the configuration-specified path are ignored and not copied. The repository
+name is _not_ included in the final path.
+
+## Additional Concord Files
+
+The default use case with the Concord DSL is to maintain everything in a single
+`concord.yml` file. Using a `concord` folder with additional files allows
+you to reduce individual file sizes.
+
+`./concord/test.yml`:
+
+```yaml
+configuration:
+  arguments:
+    nested:
+      name: "stranger"
+flows:
+  default:
+    - log: "Hello, ${nested.name}!"
+```
+
+`./concord.yml`:
+
+```yaml
+configuration:
+  arguments:
+    nested:
+      name: "Concord"
+```
+
+The above example prints out `Hello, Concord!`, when running the default flow.
+
+Concord folder merge rules:
+
+- Files are loaded in alphabetical order, including subdirectories.
+- Flows and forms with the same names are overridden by their counterparts from
+  the files loaded previously.
+- All triggers from all files are added together. If there are multiple trigger
+ definitions across several files, the resulting project contains all of
+ them.
+- Configuration values are merged. The values from the last loaded file override
+ the values from the files loaded earlier.
+- Profiles with flows, forms and configuration values are merged according to
+ the rules above.
+
+## DSL
+
+Concord DSL files contain [configuration](./configuration.md),
+[flows](./flows.md), [profiles](./profiles.md) and other declarations.
+
+The top-level syntax of a Concord DSL file is:
+
+```yaml
+configuration:
+  ...
+
+flows:
+  ...
+
+publicFlows:
+  ...
+
+forms:
+  ...
+
+triggers:
+  ...
+
+profiles:
+  ...
+
+resources:
+  ...
+
+imports:
+  ...
+```
+
+Let's take a look at each section:
+- [configuration](./configuration.md) - defines process configuration,
+dependencies, arguments and other values;
+- [flows](./flows.md) - contains one or more Concord flows;
+- [publicFlows](#public-flows) - list of flow names which may be used as an [entry point](./configuration.md#entry-point);
+- [forms](../getting-started/forms.md) - Concord form definitions;
+- [triggers](../triggers/index.md) - contains trigger definitions;
+- [profiles](./profiles.md) - declares profiles that can override
+declarations from other sections;
+- [resources](./resources.md) - configurable paths to Concord resources;
+- [imports](./imports.md) - allows referencing external Concord definitions.
+
+## Public Flows
+
+Flows listed in the `publicFlows` section are the only flows allowed as
+[entry point](./configuration.md#entry-point) values. This also limits the
+flows listed in the repository run dialog. When `publicFlows` is omitted,
+all flows are considered public.
+
+Flows from an [imported repository](./imports.md) are subject to the same
+setting. `publicFlows` defined in the imported repository are merged
+with those defined in the main repository.
+
+```yaml
+publicFlows:
+  - default
+  - enterHere
+
+flows:
+  default:
+    - log: "Hello!"
+    - call: internalFlow
+
+  enterHere:
+    - log: "Using alternative entry point."
+
+  # not listed in the UI repository start popup
+  internalFlow:
+    - log: "Only callable from another flow."
+```
+
+## Variables
+
+Process arguments, saved process state and
+[automatically provided variables](#provided-variables) are exposed as flow
+variables:
+
+```yaml
+flows:
+  default:
+    - log: "Hello, ${initiator.displayName}"
+```
+
+In the example above, the expression `${initiator.displayName}` references the
+automatically provided variable `initiator` and retrieves its `displayName`
+field value.
+
+Flow variables can be defined using the DSL's [set step](./flows.md#setting-variables),
+the [arguments](./configuration.md#arguments) section in the process
+configuration, passed in the API request when the process is created, etc.
+
+### Provided Variables
+
+Concord automatically provides several built-in variables upon process
+execution in addition to the defined [variables](#variables):
+
+- `execution` or `context`: a reference to the current execution's [context](#context),
+  an instance of [com.walmartlabs.concord.sdk.Context](https://github.com/walmartlabs/concord/blob/master/sdk/src/main/java/com/walmartlabs/concord/sdk/Context.java);
+- `txId` - a unique identifier of the current process;
+- `parentInstanceId` - an identifier of the parent process;
+- `tasks` - allows access to available tasks (for example:
+  `${tasks.get('oneops')}`);
+- `workDir` - path to the working directory of the current process;
+- `initiator` - information about the user who started a process:
+  - `initiator.username` - login, string;
+  - `initiator.displayName` - printable name, string;
+  - `initiator.email` - email address, string;
+  - `initiator.groups` - list of user's groups;
+  - `initiator.attributes` - other LDAP attributes; for example,
+    `initiator.attributes.mail` contains the email address;
+- `currentUser` - information about the current user. Has the same structure
+  as `initiator`;
+- `requestInfo` - additional request data (see the note below):
+  - `requestInfo.query` - query parameters of a request made using user-facing
+    endpoints (e.g. the portal API);
+  - `requestInfo.ip` - the IP address the request originated from;
+  - `requestInfo.headers` - headers of a request made using user-facing endpoints;
+- `projectInfo` - project's data:
+  - `projectInfo.orgId` - the ID of the project's organization;
+  - `projectInfo.orgName` - the name of the project's organization;
+  - `projectInfo.projectId` - the project's ID;
+  - `projectInfo.projectName` - the project's name;
+  - `projectInfo.repoId` - the project's repository ID;
+  - `projectInfo.repoName` - the repository's name;
+  - `projectInfo.repoUrl` - the repository's URL;
+  - `projectInfo.repoBranch` - the repository's branch;
+  - `projectInfo.repoPath` - the repository's path (if configured);
+  - `projectInfo.repoCommitId` - the repository's last commit ID;
+  - `projectInfo.repoCommitAuthor` - the repository's last commit author;
+  - `projectInfo.repoCommitMessage` - the repository's last commit message;
+- `processInfo` - the current process' data:
+  - `processInfo.activeProfiles` - list of active profiles used for the current
+    execution;
+  - `processInfo.sessionToken` - the current process'
+    [session token](../getting-started/security.md#using-session-tokens), can be
+    used to call the Concord API from flows.
+
+LDAP attributes must be allowed in [the configuration](../getting-started/configuration.md#server-configuration-file).
+
+**Note:** only the processes started using [the browser link](../api/process.md#browser)
+provide the `requestInfo` variable. In other cases (e.g. processes
+[triggered by GitHub](../triggers/github.md)) the variable might be undefined
+or empty.
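+
+As a sketch, a process started via the browser link with `?name=abc` appended
+to the URL could read the query parameter like this (the `name` parameter is
+hypothetical):
+
+```yaml
+flows:
+  default:
+    # requestInfo.query holds the query parameters of the start request
+    - log: "Got query parameter: ${requestInfo.query.name}"
+```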
+
+Availability of other variables and "beans" depends on the installed Concord
+plugins and the arguments passed in at the process invocation and stored in the
+request data.
+
+### Context
+
+The `context` variable provides access to the current process' state:
+variables, current flow name, etc. The `context` variable is available at
+any moment during the flow execution and can be accessed using expressions,
+[scripts](../getting-started/scripting.md) or in
+[tasks](../getting-started/tasks.md):
+
+```yaml
+flows:
+  default:
+    - log: "All variables: ${context.toMap()}"
+
+    - script: javascript
+      body: |
+        var allVars = execution.toMap();
+        print('Getting all variables in a JavaScript snippet: ' + allVars);
+```
+
+**Note:** in the `script` environment, the `context` variable is called
+`execution` to avoid clashes with the JSR 223 scripting context.
+
+### Output Variables
+
+Concord has the ability to return process data when a process completes.
+The names of returned variables should be declared in the `configuration` section:
+
+```yaml
+configuration:
+  out:
+    - myVar1
+```
+
+Output variables may also be declared dynamically using `multipart/form-data`
+parameters, if allowed in the project's configuration. **CAUTION: this is not
+secure if secret values are stored in process variables.**
+
+```bash
+$ curl ... -F out=myVar1 https://concord.example.com/api/v1/process
+{
+  "instanceId" : "5883b65c-7dc2-4d07-8b47-04ee059cc00b"
+}
+```
+
+Retrieve the output variable value(s) after the process finishes:
+
+```bash
+# wait for completion...
+$ curl ... https://concord.example.com/api/v2/process/5883b65c-7dc2-4d07-8b47-04ee059cc00b
+{
+  "instanceId" : "5883b65c-7dc2-4d07-8b47-04ee059cc00b",
+  "meta": {
+    "out" : {
+      "myVar1" : "my value"
+    }
+  }
+}
+```
+
+It is also possible to retrieve a nested value:
+
+```yaml
+configuration:
+  out:
+    - a.b.c
+
+flows:
+  default:
+    - set:
+        a:
+          b:
+            c: "my value"
+            d: "ignored"
+```
+
+```bash
+$ curl ... -F out=a.b.c https://concord.example.com/api/v1/process
+```
+
+In this example, Concord looks for variable `a`, its field `b` and
+the nested field `c`.
+
+Additionally, the output variables can be retrieved as a JSON file:
+
+```bash
+$ curl ... https://concord.example.com/api/v1/process/5883b65c-7dc2-4d07-8b47-04ee059cc00b/attachment/out.json
+
+{"myVar1":"my value"}
+```
+
+Any value type that can be represented as JSON is supported.
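+
+For instance, a list value can be returned as well (a sketch; the variable
+name is arbitrary):
+
+```yaml
+configuration:
+  out:
+    - myList
+
+flows:
+  default:
+    - set:
+        myList:
+          - "first"
+          - "second"
+```
+
+The resulting `out.json` would contain `{"myList":["first","second"]}`.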
diff --git a/docs/src/processes-v1/profiles.md b/docs/src/processes-v1/profiles.md
new file mode 100644
index 0000000000..3f42490ba7
--- /dev/null
+++ b/docs/src/processes-v1/profiles.md
@@ -0,0 +1,73 @@
+# Profiles
+
+Profiles are named collections of configuration, forms and flows that can be
+used to override defaults set in the top-level content of the Concord file.
+They are created by adding a named section under the `profiles` top-level
+element.
+
+Profile selection is configured when a process is
+[executed](../getting-started/processes.md#overview).
+
+For example, if the process below is executed using the `myProfile` profile,
+the value of `foo` is `bazz` and appears in the log instead of the default
+`bar`:
+
+```yaml
+configuration:
+  arguments:
+    foo: "bar"
+
+profiles:
+  myProfile:
+    configuration:
+      arguments:
+        foo: "bazz"
+
+flows:
+  default:
+    - log: "${foo}"
+```
+
+The `activeProfiles` parameter is a list of the project file's profiles to
+use when starting a process. If not set, the `default` profile is used.
+
+The active profile's configuration is merged with the default values
+specified in the top-level `configuration` section. Nested objects are
+merged, lists of values are replaced:
+
+```yaml
+configuration:
+  arguments:
+    nested:
+      x: 123
+      y: "abc"
+    aList:
+      - "first item"
+      - "second item"
+
+profiles:
+  myProfile:
+    configuration:
+      arguments:
+        nested:
+          y: "cba"
+          z: true
+        aList:
+          - "primer elemento"
+          - "segundo elemento"
+
+flows:
+  default:
+    # Expected next log output: 123 cba true
+    - log: "${nested.x} ${nested.y} ${nested.z}"
+    # Expected next log output: ["primer elemento", "segundo elemento"]
+    - log: "${aList}"
+```
+
+Multiple active profiles are merged in the order they are specified in
+`activeProfiles` parameter:
+
+```bash
+$ curl ... -F activeProfiles=a,b http://concord.example.com/api/v1/process
+```
+
+In this example, values from `b` are merged with the result of the merge
+of `a` and the default configuration.
diff --git a/docs/src/processes-v1/resources.md b/docs/src/processes-v1/resources.md
new file mode 100644
index 0000000000..16ca521c27
--- /dev/null
+++ b/docs/src/processes-v1/resources.md
@@ -0,0 +1,39 @@
+# Resources
+
+A resource directory path, such as `./concord`, can be configured in
+the `resources` top-level element in the Concord file.
+
+Concord loads the root `concord.yml` first and subsequently looks for the
+resources paths under the `resources` section.
+
+The following resources configuration causes all flows to be loaded
+from the `myFlows` folder instead of the default `concord` folder
+using the pattern `./concord/**/*.yml`.
+
+```yaml
+resources:
+  concord: "myFlows"
+```
+
+Multiple resource paths per category are also supported:
+
+```yaml
+resources:
+  concord:
+    - "myFlowDirA"
+    - "myFlowDirB"
+```
+
+Resource loading can be disabled by providing the list of disabled resources.
+
+```yaml
+resources:
+  concord: "myFlows"
+  disabled:
+    - "profiles" # deprecated folders
+    - "processes"
+```
+
+In the above example, flows are picked from `./myFlows` instead of the `./concord`
+directory and loading of resources from the `./profiles` and `./processes`
+directories is disabled.
diff --git a/docs/src/processes-v1/tasks.md b/docs/src/processes-v1/tasks.md
new file mode 100644
index 0000000000..8533d00bac
--- /dev/null
+++ b/docs/src/processes-v1/tasks.md
@@ -0,0 +1,441 @@
+# Tasks
+
+- [Using Tasks](#using-tasks)
+- [Development](#development)
+  - [Creating Tasks](#creating-tasks)
+  - [Using External Artifacts](#using-external-artifacts)
+  - [Best Practices](#best-practices)
+
+
+
+## Using Tasks
+
+To use a task, a URL to the JAR containing the implementation
+has to be added as a [dependency](../processes-v1/configuration.md#dependencies).
+Typically, the JAR is published to a repository manager and a URL pointing to
+the JAR in the repository is used.
+
+You can invoke a task via an expression or with the `task` step type.
+
+Following are a number of examples:
+
+```yaml
+configuration:
+  dependencies:
+    - "http://repo.example.com/myConcordTask.jar"
+
+flows:
+  default:
+    # invoking via usage of an expression and the call method
+    - ${myTask.call("hello")}
+
+    # calling a method with a single argument
+    - myTask: hello
+
+    # calling a method with a single argument
+    # the value will be a result of expression evaluation
+    - myTask: ${myMessage}
+
+    # calling a method with two arguments
+    # same as ${myTask.call("warn", "hello")}
+    - myTask: ["warn", "hello"]
+
+    # calling a method with a single argument
+    # the value will be converted into Map
+    - myTask: { "urgency": "high", message: "hello" }
+
+    # multiline strings and string interpolation is also supported
+    - myTask: |
+        those line breaks will be
+        preserved. Here will be a ${result} of EL evaluation.
+```
+
+If a task implements the `#execute(Context)` method, some additional
+features like in/out variables mapping can be used:
+
+```yaml
+flows:
+  default:
+    # calling a task with in/out variables mapping
+    - task: myTask
+      in:
+        taskVar: ${processVar}
+        anotherTaskVar: "a literal value"
+      out:
+        processVar: ${taskVar}
+      error:
+        - log: something bad happened
+```
+
+## Development
+
+
+
+### Creating Tasks
+
+Tasks must implement `com.walmartlabs.concord.sdk.Task` Java interface.
+
+The Task interface is provided by the `concord-sdk` module:
+
+```xml
+<dependency>
+  <groupId>com.walmartlabs.concord</groupId>
+  <artifactId>concord-sdk</artifactId>
+  <version>{{ site.concord_core_version }}</version>
+  <scope>provided</scope>
+</dependency>
+```
+
+Some dependencies are provided by the runtime. It is recommended to mark them
+as `provided` in the POM file:
+- `com.fasterxml.jackson.core/*`
+- `javax.inject/javax.inject`
+- `org.slf4j/slf4j-api`
+
+Here's an example of a simple task:
+
+```java
+import com.walmartlabs.concord.sdk.Task;
+import javax.inject.Named;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    public void sayHello(String name) {
+        System.out.println("Hello, " + name + "!");
+    }
+
+    public int sum(int a, int b) {
+        return a + b;
+    }
+}
+```
+
+This task can be called using an [expression](../processes-v1/flows.md#expressions)
+in short or long form:
+
+```yaml
+flows:
+  default:
+    - ${myTask.sayHello("world")} # short form
+
+    - expr: ${myTask.sum(1, 2)} # full form
+      out: mySum
+      error:
+        - log: "Wham! ${lastError.message}"
+```
+
+If a task implements `Task#execute` method, it can be started using
+`task` step type:
+
+```java
+import com.walmartlabs.concord.sdk.Task;
+import com.walmartlabs.concord.sdk.Context;
+import javax.inject.Named;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    @Override
+    public void execute(Context ctx) throws Exception {
+        System.out.println("Hello, " + ctx.getVariable("name"));
+        ctx.setVariable("success", true);
+    }
+}
+```
+
+```yaml
+flows:
+  default:
+    - task: myTask
+      in:
+        name: world
+      out:
+        success: callSuccess
+      error:
+        - log: "Something bad happened: ${lastError}"
+```
+
+This form allows use of `in` and `out` variables and error-handling blocks.
+
+The `task` syntax is recommended for most use cases, especially when dealing
+with multiple input parameters.
+
+If a task contains a `call` method with one or more arguments, it can
+be called using the _short_ form:
+
+```java
+import com.walmartlabs.concord.sdk.Task;
+import javax.inject.Named;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    public void call(String name, String place) {
+        System.out.println("Hello, " + name + ". Welcome to " + place);
+    }
+}
+```
+
+```yaml
+flows:
+  default:
+    - myTask: ["user", "Concord"] # using an inline YAML array
+
+    - myTask: # using a regular YAML array
+        - "user"
+        - "Concord"
+```
+
+Context variables can be automatically injected into task fields or
+method arguments:
+
+```java
+import com.walmartlabs.concord.sdk.Task;
+import com.walmartlabs.concord.sdk.InjectVariable;
+import com.walmartlabs.concord.sdk.Context;
+import javax.inject.Named;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    @InjectVariable("context")
+    private Context ctx;
+
+    public void sayHello(@InjectVariable("greeting") String greeting, String name) {
+        String s = String.format(greeting, name);
+        System.out.println(s);
+
+        ctx.setVariable("success", true);
+    }
+}
+```
+
+```yaml
+flows:
+  default:
+    - ${myTask.sayHello("Concord")}
+
+configuration:
+  arguments:
+    greeting: "Hello, %s!"
+```
+
+### Using External Artifacts
+
+The runtime provides a way for tasks to download and cache external artifacts:
+```java
+import com.walmartlabs.concord.sdk.Context;
+import com.walmartlabs.concord.sdk.DependencyManager;
+import com.walmartlabs.concord.sdk.Task;
+
+import javax.inject.Inject;
+import javax.inject.Named;
+import java.net.URI;
+import java.nio.file.Path;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    private final DependencyManager dependencyManager;
+
+    @Inject
+    public MyTask(DependencyManager dependencyManager) {
+        this.dependencyManager = dependencyManager;
+    }
+
+    @Override
+    public void execute(Context ctx) throws Exception {
+        URI uri = ...
+        Path p = dependencyManager.resolve(uri);
+        // ...do something with the returned path
+    }
+}
+```
+
+The `DependencyManager` is an `@Inject`-able service that takes care of
+resolving, downloading and caching URLs. It supports all URL types as
+the regular [dependencies](../processes-v1/configuration.md#dependencies)
+section in Concord YAML files - `http(s)`, `mvn`, etc.
+
+Typically, cached copies persist between process executions (depending on
+the Concord environment's configuration).
+
+Tasks shouldn't expect the returned path to be writable (i.e. assume
+read-only access).
+
+`DependencyManager` shouldn't be used as a way to download deployment
+artifacts. It's not a replacement for [Ansible]({{ site.concord_plugins_v1_docs }}/ansible.md) or any
+other deployment tool.
+
+
+
+### Best Practices
+
+Here are some of the best practices when creating a new plugin with one or
+multiple tasks.
+
+#### Environment Defaults
+
+Instead of hard coding parameters like endpoint URLs, credentials and other
+environment-specific values, use injectable defaults:
+
+```java
+@Named("myTask")
+public class MyTask implements Task {
+
+    @Override
+    public void execute(Context ctx) throws Exception {
+        Map<String, Object> defaults = (Map<String, Object>) ctx.getVariable("myTaskDefaults");
+
+        String value = (String) ctx.getVariable("myVar");
+        if (value == null) {
+            // fallback to the default value
+            value = (String) defaults.get("myVar");
+        }
+        System.out.println("Got " + value);
+    }
+}
+```
+
+The environment-specific defaults are provided using
+the [Default Process Variables](../getting-started/configuration.md#default-process-variables)
+file.
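+
+A sketch of the corresponding entry in the default process variables file
+(the variable name matches the example above; the values are hypothetical):
+
+```yaml
+myTaskDefaults:
+  myVar: "https://api.example.com"
+```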
+
+The task's default can also be injected using `@InjectVariable`
+annotation - check out the [GitHub task]({{ site.concord_plugins_source }}blob/master/tasks/git/src/main/java/com/walmartlabs/concord/plugins/git/v1/GitHubTaskV1.java#L37-L38)
+as the example.
+
+#### Full Syntax vs Expressions
+
+There are two ways to invoke a task: the `task` step syntax and
+expressions. Consider the `task` syntax for tasks with multiple
+parameters and expressions for tasks that return data and should be used inline:
+
+```yaml
+# use the `task` syntax when you need to pass multiple parameters
+# and/or complex data structures
+- task: myTask
+  in:
+    param1: 123
+    param2: "abc"
+    nestedParams:
+      x: true
+      y: false
+
+# use expressions for tasks returning data
+- log: "${myTask.getAListOfThings()}"
+```
+
+#### Task Output and Error Handling
+
+Consider storing the task's results in a `result` variable of the following
+structure:
+
+Successful execution:
+
+```yaml
+result:
+  ok: true
+  data: "the task's output"
+```
+
+Failed execution:
+
+```yaml
+result:
+  ok: false
+  errorCode: 404
+  error: "Not found"
+```
+
+The `ok` parameter allows users to quickly test whether the execution was
+successful or not:
+
+```yaml
+- task: myTask
+
+- if: ${!result.ok}
+  then:
+    - throw: "Something went wrong: ${result.error}"
+```
+
+By default, a task should throw an exception in case of any execution errors
+or invalid input parameters. Consider adding an `ignoreErrors` parameter to
+catch all execution errors, but not the invalid arguments errors. Store
+the appropriate error message and/or the error code in the `result` variable:
+
+Throw an exception:
+
+```yaml
+- task: myTask
+  in:
+    url: "https://httpstat.us/404"
+```
+
+Save the error in the `result` variable:
+
+```yaml
+- task: myTask
+  in:
+    url: "https://httpstat.us/404"
+    ignoreErrors: true
+
+- log: "${result.errorCode}"
+```
+
+Use the standard JRE classes in the task's results. Custom types can cause
+serialization issues when the process suspends, e.g. on a [form](../getting-started/forms.md)
+call. If you need to return some complex data structure, consider converting it
+to regular Java collections. The runtime provides
+[Jackson](https://github.com/FasterXML/jackson) as the default JSON/YAML library
+which can also be used to convert arbitrary data classes into regular `Map`s and
+`List`s:
+
+```java
+import java.io.Serializable;
+import java.util.Map;
+
+import javax.inject.Named;
+
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.walmartlabs.concord.sdk.Context;
+import com.walmartlabs.concord.sdk.Task;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    @Override
+    public void execute(Context ctx) throws Exception {
+        MyResult result = new MyResult();
+        ObjectMapper om = new ObjectMapper();
+        ctx.setVariable("result", om.convertValue(result, Map.class));
+    }
+
+    public static class MyResult implements Serializable {
+        boolean ok;
+        String data;
+    }
+}
+```
+
+#### Unit Tests
+
+Consider using unit tests to quickly test the task without publishing SNAPSHOT
+versions. Use a library like [Mockito](https://site.mockito.org/) to replace
+the dependencies in your task with "mocks":
+
+```java
+@Test
+public void test() throws Exception {
+    SomeService someService = mock(SomeService.class);
+
+    Map<String, Object> params = new HashMap<>();
+    params.put("url", "https://httpstat.us/404");
+    Context ctx = new MockContext(params);
+
+    MyTask t = new MyTask(someService);
+    t.execute(ctx);
+
+    assertNotNull(ctx.getVariable("result"));
+}
+```
+
+#### Integration Tests
+
+It is possible to test a task using a running Concord instance without
+publishing the task's JAR. Concord automatically adds `lib/*.jar` files from
+[the payload archive](../api/process.md#zip-file) to the process'
+classpath. This mechanism can be used to upload local JAR files and,
+consequently, to test locally-built JARs. Check out the
+[custom_task]({{ site.concord_source }}/tree/master/examples/custom_task)
+example. It uses Maven to collect all `compile` dependencies of the task
+and creates a payload archive with the dependencies and the task's JAR.
+
+**Note:** It is important to use `provided` scope for the dependencies that are
+already included in the runtime. See the [Creating Tasks](#create-task) section for
+the list of provided dependencies.
diff --git a/docs/src/processes-v2/configuration.md b/docs/src/processes-v2/configuration.md
new file mode 100644
index 0000000000..638218374e
--- /dev/null
+++ b/docs/src/processes-v2/configuration.md
@@ -0,0 +1,463 @@
+# Configuration
+
+The `configuration` section contains [dependencies](#dependencies),
+[arguments](#arguments) and other process configuration values.
+
+- [Merge Rules](#merge-rules)
+- [Runtime](#runtime)
+- [Entry Point](#entry-point)
+- [Arguments](#arguments)
+- [Dependencies](#dependencies)
+- [Requirements](#requirements)
+- [Process Timeout](#process-timeout)
+ - [Running Timeout](#running-timeout)
+ - [Suspend Timeout](#suspend-timeout)
+- [Exclusive Execution](#exclusive-execution)
+- [Metadata](#metadata)
+- [Events](#events)
+
+## Merge Rules
+
+Process `configuration` values can come from different sources: the section in
+the `concord.yml` file, request parameters, policies, etc. Here's the order in
+which all `configuration` sources are merged before the process starts:
+
+- environment-specific [default values](./configuration.md#default-process-variables);
+- [defaultCfg](../getting-started/policies.md#default-process-configuration-rule) policy values;
+- the current organization's configuration values;
+- the current [project's configuration](../api/project.md#get-project-configuration) values;
+- values from the currently active [profiles](./profiles.md);
+- the configuration file sent in [the process start request](../api/process.md#start);
+- [processCfg](../getting-started/policies.md#process-configuration-rule) policy values.
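+
+For example, values from an active profile override the same keys defined in
+the `configuration` section (the profile and argument names below are
+illustrative):
+
+```yaml
+configuration:
+  arguments:
+    color: "blue"        # the default value
+
+profiles:
+  myProfile:
+    configuration:
+      arguments:
+        color: "red"     # used when the process is started with this profile
+```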
+
+## Runtime
+
+The `runtime` parameter can be used to specify the execution runtime:
+
+```yaml
+configuration:
+  runtime: "concord-v2"
+```
+
+Currently, the default `runtime` is `concord-v1`, which is considered stable and
+production-ready. It will remain available for the foreseeable future, but will
+see fewer (if any) feature updates. This section describes the new and improved
+`concord-v2` runtime. There are breaking changes to syntax and execution semantics
+from `concord-v1` which require [migration](./migration.md) considerations.
+
+See the [Processes (v1)](../processes-v1/index.md) section for more details
+about `concord-v1` runtime.
+
+## Entry Point
+
+The `entryPoint` configuration sets the name of the flow that will be used for
+process executions. If no `entryPoint` is specified, the flow named `default`
+is used automatically, if it exists.
+
+```yaml
+configuration:
+  entryPoint: "main" # use "main" instead of "default"
+
+flows:
+  main:
+    - log: "Hello World"
+```
+
+**Note:** some flow names have special meaning, such as `onFailure`, `onCancel`
+and `onTimeout`. See the [error handling](./flows.md#error-handling) section
+for more details.
+
+## Arguments
+
+Default values for arguments can be defined in the `arguments` section of the
+configuration as simple key/value pairs as well as nested values:
+
+```yaml
+configuration:
+  arguments:
+    name: "Example"
+    coordinates:
+      x: 10
+      y: 5
+      z: 0
+
+flows:
+  default:
+    - log: "Project name: ${name}"
+    - log: "Coordinates (x,y,z): ${coordinates.x}, ${coordinates.y}, ${coordinates.z}"
+```
+
+Values of `arguments` can contain [expressions](./flows.md#expressions). Expressions can
+use all regular tasks:
+
+```yaml
+configuration:
+  arguments:
+    listOfStuff: ${myServiceTask.retrieveListOfStuff()}
+    myStaticVar: 123
+```
+
+Concord evaluates arguments in the order of definition. For example, it is
+possible to use a variable value in another variable if the former is defined
+earlier than the latter:
+
+```yaml
+configuration:
+  arguments:
+    name: "Concord"
+    message: "Hello, ${name}"
+```
+
+A variable's value can be [defined or modified with the set step](./flows.md#setting-variables)
+and a [number of variables](./index.md#provided-variables) are automatically
+set in each process and available for usage.
+
+## Dependencies
+
+The `dependencies` array allows users to specify the URLs of dependencies such
+as:
+
+- plugins ([tasks](./tasks.md)) and their dependencies;
+- dependencies needed for specific scripting language support;
+- other dependencies required for process execution.
+
+```yaml
+configuration:
+  dependencies:
+    # maven URLs...
+    - "mvn://org.codehaus.groovy:groovy-all:2.4.12"
+    # or direct URLs
+    - "https://repo1.maven.org/maven2/org/codehaus/groovy/groovy-all/2.4.12/groovy-all-2.4.12.jar"
+    - "https://repo1.maven.org/maven2/org/apache/commons/commons-lang3/3.6/commons-lang3-3.6.jar"
+```
+
+Concord downloads the artifacts and adds them to the process' classpath.
+
+Multiple versions of the same artifact are replaced with a single one,
+following standard Maven resolution rules.
+
+Usage of the `mvn:` URL pattern is preferred since it uses the centrally
+configured [list of repositories](./configuration.md#dependencies)
+and downloads not only the specified dependency itself, but also any required
+transitive dependencies. This makes the Concord project independent of access
+to a specific repository URL, and hence more portable.
+
+Maven URLs provide additional options:
+
+- `transitive=true|false` - include all transitive dependencies
+ (default `true`);
+- `scope=compile|provided|system|runtime|test` - use the specific
+ dependency scope (default `compile`).
+
+Additional options can be added as "query parameters" to
+the dependency's URL:
+```yaml
+configuration:
+  dependencies:
+    - "mvn://com.walmartlabs.concord:concord-client:{{site.concord_core_version}}?transitive=false"
+```
+
+The syntax for the Maven URL uses the groupId, artifactId, optionally packaging,
+and version values - the GAV coordinates of a project. For example the Maven
+`pom.xml` for the Groovy scripting language runtime has the following
+definition:
+
+```xml
+<dependency>
+  <groupId>org.codehaus.groovy</groupId>
+  <artifactId>groovy-all</artifactId>
+  <version>2.4.12</version>
+  ...
+</dependency>
+```
+
+This results in the path
+`org/codehaus/groovy/groovy-all/2.4.12/groovy-all-2.4.12.jar` in the
+Central Repository and any repository manager proxying the repository.
+
+The `mvn` syntax uses the short form for GAV coordinates
+`groupId:artifactId:version`, so for example
+`org.codehaus.groovy:groovy-all:2.4.12` for Groovy.
+
+Newer versions of `groovy-all` use `pom` packaging and define their
+dependencies there. To use a project that applies this approach, called a Bill
+of Materials (BOM), as a dependency, you need to specify the packaging between
+the artifactId and version. For example, version `2.5.21` has to be specified as
+`org.codehaus.groovy:groovy-all:pom:2.5.21`:
+
+```yaml
+configuration:
+  dependencies:
+    - "mvn://org.codehaus.groovy:groovy-all:pom:2.5.21"
+```
+
+The same logic and syntax usage applies to all other dependencies including
+Concord [plugins]({{ site.concord_plugins_v2_docs }}/index.md).
+
+## Requirements
+
+A process can have a specific set of `requirements` configured. Concord uses
+requirements to control where the process should be executed and what kind of
+resources it gets. For example, if the process specifies
+
+```yaml
+configuration:
+  requirements:
+    agent:
+      favorite: true
+```
+
+and if there is an agent with
+
+```
+concord-agent {
+  capabilities = {
+    favorite = true
+  }
+}
+```
+
+in its configuration file, then it is a suitable agent for the process.
+
+The following rules are used when matching `requirements.agent` values of
+processes and agent `capabilities`:
+- if a value is present in `capabilities` but missing in `requirements.agent`,
+it is **ignored**;
+- if a value is missing in `capabilities` but present in `requirements.agent`,
+then it is **not a match**;
+- string values in `requirements.agent` are treated as **regular expressions**,
+i.e. in pseudo code `capabilities_value.regex_match(requirements_value)`;
+- lists in `requirements.agent` are treated as a "one or more" match, i.e. one
+or more elements in the list must match the value from `capabilities`;
+- other values are compared directly.
+
+More examples:
+
+```yaml
+configuration:
+  requirements:
+    agent:
+      size: ".*xl"
+      flavor:
+        - "vanilla"
+        - "chocolate"
+```
+
+matches agents with:
+
+```
+concord-agent {
+  capabilities = {
+    size = "xxl"
+    flavor = "vanilla"
+  }
+}
+```
+
+## Process Timeout
+
+You can specify the maximum amount of time a process can spend in a particular
+state. After this timeout, the process is automatically cancelled and marked as
+`TIMED_OUT`.
+
+Currently, the runtime provides two different timeout parameters:
+- [processTimeout](#running-timeout) - how long the process can stay in
+ the `RUNNING` state;
+- [suspendTimeout](#suspend-timeout) - how long the process can stay in
+ the `SUSPENDED` state.
+
+Both timeout parameters accept a duration in the
+[ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) format:
+
+```yaml
+configuration:
+  processTimeout: "PT1H" # 1 hour
+```
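+
+The same ISO 8601 strings can be parsed with the standard `java.time.Duration`
+API, which is a handy way to double-check a timeout value (this snippet is
+illustrative and not part of the runtime):
+
+```java
+import java.time.Duration;
+
+public class Iso8601Durations {
+    public static void main(String[] args) {
+        // "PT1H" = 1 hour, as in the processTimeout example above
+        System.out.println(Duration.parse("PT1H").toMinutes());    // 60
+
+        // units can be combined, e.g. 1 hour 30 minutes
+        System.out.println(Duration.parse("PT1H30M").toMinutes()); // 90
+    }
+}
+```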
+
+A special `onTimeout` flow can be used to handle timeouts:
+
+```yaml
+flows:
+  onTimeout:
+    - log: "I'm going to run when my parent process times out"
+```
+
+The way Concord handles timeouts is described in more details in
+the [error handling](./flows.md#handling-cancellations-failures-and-timeouts)
+section.
+
+### Running Timeout
+
+You can specify the maximum amount of time the process can spend in
+the `RUNNING` state with the `processTimeout` configuration. It can be useful
+to set specific SLAs for deployment jobs or to use it as a global timeout:
+
+```yaml
+configuration:
+  processTimeout: "PT1H"
+flows:
+  default:
+    # a long running process
+```
+
+In the example above, if the process runs for more than 1 hour it is
+automatically cancelled and marked as `TIMED_OUT`.
+
+**Note:** forms waiting for input and other processes in `SUSPENDED` state
+are not affected by the process timeout. I.e. a `SUSPENDED` process can stay
+`SUSPENDED` indefinitely -- up to the allowed data retention period.
+
+### Suspend Timeout
+
+You can specify the maximum amount of time the process can spend in
+the `SUSPENDED` state with the `suspendTimeout` configuration. It can be useful
+to set specific SLAs for forms waiting for input and processes waiting for
+external events:
+
+```yaml
+configuration:
+  suspendTimeout: "PT1H"
+flows:
+  default:
+    - task: concord
+      in:
+        action: start
+        org: myOrg
+        project: myProject
+        repo: myRepo
+        sync: true
+        suspend: true
+        ...
+```
+
+In the example above, if the process waits for more than 1 hour it is
+automatically cancelled and marked as `TIMED_OUT`.
+
+## Exclusive Execution
+
+The `exclusive` section in the process `configuration` can be used to configure
+exclusive execution of the process:
+
+```yaml
+configuration:
+  exclusive:
+    group: "myGroup"
+    mode: "cancel"
+
+flows:
+  default:
+    - ${sleep.ms(60000)} # simulate a long-running task
+```
+
+In the example above, if another process in the same project with the same
+`group` value is submitted, it will be immediately cancelled.
+
+If `mode` is set to `wait`, then only one process in the same `group` is
+allowed to run at a time.
+
+**Note:** this feature is available only for processes running in a project.
+
+See also: [Exclusive Triggers](../triggers/index.md#exclusive-triggers).
+
+## Metadata
+
+Flows can expose internal variables as process metadata. Such metadata can be
+retrieved using the [API](../api/process.md#status) or displayed in
+the process list in [Concord Console](../console/process.md#process-metadata).
+
+```yaml
+configuration:
+  meta:
+    myValue: "n/a" # initial value
+
+flows:
+  default:
+    - set:
+        myValue: "hello!"
+```
+
+After each step, Concord sends the updated value back to the server:
+
+```bash
+$ curl -skn http://concord.example.com/api/v1/process/1c50ab2c-734a-4b64-9dc4-fcd14637e36c |
+ jq '.meta.myValue'
+
+"hello!"
+```
+
+Nested variables and forms are also supported:
+
+```yaml
+configuration:
+  meta:
+    nested.value: "n/a"
+
+flows:
+  default:
+    - set:
+        nested:
+          value: "hello!"
+```
+
+The value is stored under the `nested.value` key:
+
+```bash
+$ curl -skn http://concord.example.com/api/v1/process/1c50ab2c-734a-4b64-9dc4-fcd14637e36c |
+ jq '.meta["nested.value"]'
+
+"hello!"
+```
+
+Example with a form:
+
+```yaml
+configuration:
+  meta:
+    myForm.myValue: "n/a"
+
+flows:
+  default:
+    - form: myForm
+      fields:
+        - myValue: { type: "string" }
+```
+
+## Events
+
+The [process event recording](../getting-started/processes.md#process-events)
+can be configured using the `events` section. Here is an example of the default
+configuration:
+
+```yaml
+configuration:
+  events:
+    recordTaskInVars: false
+    truncateInVars: true
+    recordTaskOutVars: false
+    truncateOutVars: true
+    truncateMaxStringLength: 1024
+    truncateMaxArrayLength: 32
+    truncateMaxDepth: 10
+    inVarsBlacklist:
+      - "apiKey"
+      - "apiToken"
+      - "password"
+      - "privateKey"
+      - "vaultPassword"
+    outVarsBlacklist: []
+```
+
+- `recordTaskInVars`, `recordTaskOutVars` - enable or disable recording of
+input/output variables in task calls;
+- `truncateInVars`, `truncateOutVars` - if `true` the runtime truncates
+the recorded values to prevent spilling large values into process events;
+- `inVarsBlacklist`, `outVarsBlacklist` - lists of variable names that must
+not be recorded;
+- `truncateMaxStringLength` - maximum allowed length of string values.
+The runtime truncates strings larger than the specified value;
+- `truncateMaxArrayLength` - maximum allowed length of array (list) values;
+- `truncateMaxDepth` - maximum allowed depth of nested data structures (e.g.
+nested `Map` objects).
+
+**Note:** in the [runtime v1](../processes-v1/configuration.md#runner)
+the event recording configuration was a subsection of the `runner` section.
+In the runtime v2 it is a direct subsection of the `configuration` block.
diff --git a/docs/src/processes-v2/flows.md b/docs/src/processes-v2/flows.md
new file mode 100644
index 0000000000..9bf0b4136a
--- /dev/null
+++ b/docs/src/processes-v2/flows.md
@@ -0,0 +1,1222 @@
+# Flows
+
+Concord flows consist of a series of steps executing various actions: calling
+plugins (also known as "tasks"), performing data validation, creating
+[forms](../getting-started/forms.md) and other steps.
+
+- [Structure](#structure)
+- [Steps](#steps)
+ - [Task Calls](#task-calls)
+ - [Task Result Data Structure](#task-result-data-structure)
+ - [Expressions](#expressions)
+ - [Conditional Execution](#conditional-execution)
+ - [Return Command](#return-command)
+ - [Exit Command](#exit-command)
+ - [Groups of Steps](#groups-of-steps)
+ - [Calling Other Flows](#calling-other-flows)
+ - [Setting Variables](#setting-variables)
+ - [Checkpoints](#checkpoints)
+ - [Parallel Execution](#parallel-execution)
+- [Loops](#loops)
+- [Error Handling](#error-handling)
+ - [Handling Errors In Flows](#handling-errors-in-flows)
+ - [Handling Cancellations, Failures and Timeouts](#handling-cancellations-failures-and-timeouts)
+ - [Retry](#retry)
+ - [Throwing Errors](#throwing-errors)
+- [Testing in Concord](#testing-in-concord)
+
+## Structure
+
+The `flows` section should contain at least one flow definition:
+
+```yaml
+flows:
+  default:
+    ...
+
+  anotherFlow:
+    ...
+```
+
+Each flow must have a unique name and at least one [step](#steps).
+
+## Steps
+
+Each flow is a list of steps:
+
+```yaml
+flows:
+  default:
+    - log: "Hello!"
+
+    - if: ${1 > 2}
+      then:
+        - log: "How is this possible?"
+
+    - log: "Bye!"
+```
+
+Flows can contain any number of steps and call each other. See below for
+the description of available steps and syntax constructs.
+
+### Task Calls
+
+The `task` syntax can be used to call Concord [tasks](./tasks.md):
+
+```yaml
+flows:
+  default:
+    - task: log
+      in:
+        msg: "Hello!"
+```
+
+Input parameters must be explicitly passed in the `in` block. If the task
+produces a result, it can be saved as a variable by using the `out` syntax:
+
+```yaml
+flows:
+  default:
+    - task: http
+      in:
+        url: "https://google.com"
+      out: result
+
+    - if: ${not result.ok}
+      then:
+        - log: "task failed: ${result.error}"
+```
+
+#### Task Result Data Structure
+
+All result data returned from tasks compatible with the v2 runtime contains a
+common set of fields, in addition to any task-specific data:
+
+- `ok` - boolean, true when task executes without error;
+- `error` - string, an error message when `ok` is `false`;
+
+### Expressions
+
+Expressions must be valid
+[Java Expression Language EL 3.0](https://github.com/javaee/el-spec) syntax
+and can be simple evaluations or perform actions by invoking more complex code.
+
+Short form:
+```yaml
+flows:
+  default:
+    # calling a method
+    - ${myBean.someMethod()}
+
+    # calling a method with an argument
+    - ${myBean.someMethod(myContextArg)}
+
+    # literal values
+    - ${1 + 2}
+
+    # EL 3.0 extensions:
+    - ${[1, 2, 3].stream().map(x -> x + 1).toList()}
+```
+
+Full form:
+```yaml
+flows:
+  default:
+    - expr: ${myBean.someMethod()}
+      out: myVar
+      error:
+        - log: "whoops, something happened"
+```
+
+Full form can optionally contain additional declarations:
+- `out` field - contains the name of a variable to store the result
+of the expression;
+- `error` block - to handle any exceptions thrown by the evaluation.
+
+Literal values, for example arguments or [form](../getting-started/forms.md)
+field values, can contain expressions:
+
+```yaml
+configuration:
+  arguments:
+    colors:
+      blue: "blue"
+    aFieldsInitialValue: "hello!"
+
+flows:
+  default:
+    - task: myTask
+      in:
+        colors: ["red", "green", "${colors.blue}"]
+
+    - task: myTask
+      in:
+        nested:
+          literals: "${myOtherTask.doSomething()}"
+
+forms:
+  myForm:
+    - aField: { type: "string", value: "${aFieldsInitialValue}" }
+```
+
+Classes from the package `java.lang` can be accessed via EL syntax:
+
+```yaml
+flows:
+  default:
+    - log: "Process running on ${System.getProperty('os.name')}"
+```
+
+#### Builtin functions
+
+- `allVariables` - returns a Java Map object with all current variables;
+
+```yaml
+flows:
+  default:
+    # prints out: {projectInfo={orgId=0fac1b18-d179-11e7-b3e7-d7df4543ed4f, orgName=Default} ...}
+    - log: ${allVariables()}
+```
+
+- `hasVariable` - accepts a variable name or nested variable path (as a string parameter)
+ and returns true if the variable exists;
+
+```yaml
+flows:
+  default:
+    # prints out: false
+    - log: ${hasVariable('myVar')}
+
+    - set:
+        myVar2: "value"
+
+    # prints out: true
+    - log: ${hasVariable('myVar2')}
+
+    - set:
+        nullVar: ${null}
+
+    # prints out: true
+    - log: ${hasVariable('nullVar')}
+
+    - set:
+        a:
+          b: 1
+
+    # prints out: true
+    - log: ${hasVariable('a.b')}
+```
+
+- `currentFlowName` - returns current flow name as string;
+
+```yaml
+flows:
+  default:
+    # prints out: default
+    - log: ${currentFlowName()}
+
+    - call: myFlow
+
+  myFlow:
+    # prints out: myFlow
+    - log: ${currentFlowName()}
+```
+
+- `evalAsMap` - evaluates the specified value as an expression,
+ returns a Java Map object;
+
+```yaml
+flows:
+  default:
+    - script: js
+      body: |
+        var x = {'a.b': 1, 'a.c': 2, 'a.d': '${a.b}', 'y': 'boo'};
+        context.variables().set('x', x);
+
+    # prints out: {a={b=1, d=1, c=2}, y=boo}
+    - log: ${evalAsMap(x)}
+```
+
+- `hasNonNullVariable` - returns `true` if the process has the specified variable and its value
+ is not `null`;
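+
+For example (the variable names are illustrative):
+
+```yaml
+flows:
+  default:
+    - set:
+        nullVar: ${null}
+        realVar: "value"
+
+    # prints out: false
+    - log: ${hasNonNullVariable('nullVar')}
+
+    # prints out: true
+    - log: ${hasNonNullVariable('realVar')}
+```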
+
+- `hasFlow` - returns `true` if the process has a specified flow
+
+```yaml
+flows:
+  default:
+    # prints out: 'true'
+    - log: "'${hasFlow('myFlow')}'"
+
+    # prints out: 'false'
+    - log: "'${hasFlow('someUndefinedFlow')}'"
+
+  myFlow:
+    - log: "In my flow"
+```
+
+- `isDebug` - returns `true` if the process was started with the debug flag;
+- `isDryRun` - returns `true` if the process was started in dry-run mode;
+
+- `orDefault` - accepts a variable name (as a string parameter) and a default
+ value, and returns the variable's value or, if the variable is not defined,
+ the default value;
+
+```yaml
+flows:
+  default:
+    # prints out: '1'
+    - log: "'${orDefault('myVar', 1)}'"
+
+    - set:
+        myVar2: "boo"
+
+    # prints out: 'boo'
+    - log: "'${orDefault('myVar2', 'xyz')}'"
+
+    - set:
+        nullVar: ${null}
+
+    # prints out: ''
+    - log: "'${orDefault('nullVar', 1)}'"
+```
+
+- `throw` - accepts an error message (as a string parameter) and throws an
+ exception with this message;
+
+```yaml
+flows:
+  default:
+    - expr: ${throw('Stop IT')}
+
+    - log: "Unreachable"
+```
+
+- `uuid` - returns a randomly generated UUID as a string;
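+
+For example, `uuid` can be used to tag a process run with a unique identifier
+(the variable name below is illustrative):
+
+```yaml
+flows:
+  default:
+    - set:
+        runId: ${uuid()}
+
+    # prints out something like: 5b5709f4-...
+    - log: "${runId}"
+```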
+
+### Conditional Execution
+
+Concord supports both `if-then-else` and `switch` steps:
+
+```yaml
+configuration:
+  arguments:
+    myInt: 123
+
+flows:
+  default:
+    - if: ${myInt > 0}
+      then: # (1)
+        - log: it's clearly non-zero
+      else: # (2)
+        - log: zero or less
+
+    - log: "myInt: ${myInt}" # (3)
+```
+
+In this example, after the `then` (1) or `else` (2) block completes,
+the execution continues with the next step in the flow (3).
+
+The `if` expressions must evaluate to a boolean value or to string values
+"true" or "false" (case-insensitive).
+
+For example, the following flow prints out "Yep!":
+
+```yaml
+configuration:
+  arguments:
+    myString: "tRuE"
+
+flows:
+  default:
+    - if: ${myString}
+      then:
+        - log: "Yep!"
+      else:
+        - log: "Nope!"
+```
+
+"And", "or" and "not" operations are supported as well:
+```yaml
+flows:
+  default:
+    - if: ${true && true}
+      then:
+        - log: "Right-o"
+
+    - if: ${true || false}
+      then:
+        - log: "Yep!"
+
+    - if: ${!false}
+      then:
+        - log: "Correct!"
+```
+
+To compare a value (or the result of an expression) with multiple
+values, use the `switch` block:
+
+```yaml
+configuration:
+  arguments:
+    myVar: "green"
+
+flows:
+  default:
+    - switch: ${myVar}
+      red:
+        - log: "It's red!"
+      green:
+        - log: "It's definitely green"
+      default:
+        - log: "I don't know what it is"
+
+    - log: "Moving along..."
+```
+
+In this example, branch labels `red` and `green` are the compared
+values and `default` is the block which is executed if no other
+value fits.
+
+Expressions can be used as branch values:
+
+```yaml
+configuration:
+  arguments:
+    myVar: "red"
+    aKnownValue: "red"
+
+flows:
+  default:
+    - switch: ${myVar}
+      ${aKnownValue}:
+        - log: "Yes, I recognize this"
+      default:
+        - log: "Nope"
+```
+
+### Return Command
+
+The `return` command can be used to stop the execution of the current (sub) flow:
+
+```yaml
+flows:
+  default:
+    - if: ${myVar > 0}
+      then:
+        - log: moving along
+      else:
+        - return
+```
+
+The `return` command can be used to stop the current process if called from an
+entry point.
+
+### Exit Command
+
+The `exit` command can be used to stop the execution of the current process:
+
+```yaml
+flows:
+  default:
+    - if: ${myVar > 0}
+      then:
+        - exit
+    - log: "message"
+```
+
+The final status of a process after calling `exit` is `FINISHED`.
+
+### Groups of Steps
+
+Several steps can be grouped into one block. This allows `try-catch`-like
+semantics:
+
+```yaml
+flows:
+  default:
+    - log: "a step before the group"
+
+    - try:
+        - log: "a step inside the group"
+        - ${myBean.somethingDangerous()}
+      error:
+        - log: "well, that didn't work"
+```
+
+See the [Error Handling](#error-handling) section for more details.
+
+### Calling Other Flows
+
+Flows can be called using the `call` step:
+
+```yaml
+flows:
+  default:
+    - log: hello
+
+    - call: anotherFlow
+      # (optional) additional call parameters
+      in:
+        msg: "Hello!"
+
+    - log: bye
+
+  anotherFlow:
+    - log: "message from another flow: ${msg}"
+```
+
+A `call` step can optionally contain additional declarations:
+- `in` - input parameters (arguments) of the call;
+- `loop` - see the [Loops](#loops) section;
+- `retry` - see [Retry](#retry) section.
+
+### Setting Variables
+
+The `set` step can be used to set variables in the current process context:
+
+```yaml
+flows:
+  default:
+    - set:
+        a: "a-value"
+        b: 3
+    - log: ${a}
+    - log: ${b}
+```
+
+Nested data can be updated using the `.` syntax:
+
+```yaml
+configuration:
+  arguments:
+    myComplexData:
+      nestedValue: "Hello"
+
+flows:
+  default:
+    - set:
+        myComplexData.nestedValue: "Bye"
+
+    # prints out "Bye, Concord"
+    - log: "${myComplexData.nestedValue}, Concord"
+```
+
+A [number of variables](./index.md#variables) are automatically set in each
+process and available for usage.
+
+**Note:** comparing to [the runtime v1](../processes-v1/flows.md#setting-variables),
+the scoping rules are different - all variables, except for
+`configuration.arguments` and automatically provided ones, are local variables
+and must be explicitly returned using `out` syntax. For flow `calls` inputs are
+implicit - all variables available at the call site are available inside
+the called flow:
+
+```yaml
+flows:
+  default:
+    - set:
+        x: "abc"
+
+    - log: "${x}" # prints out "abc"
+
+    - call: aFlow # implicit "in"
+
+    - log: "${x}" # still prints out "abc"
+
+    - call: aFlow
+      out:
+        - x # explicit "out"
+
+  aFlow:
+    - log: "${x}" # prints out "abc"
+
+    - set:
+        x: "xyz"
+```
+
+The same rules apply to nested data - top-level elements are local variables
+and any changes to them will not be visible unless exposed using `out`:
+
+```yaml
+flows:
+  default:
+    - set:
+        myComplexData:
+          nested: "abc"
+
+    - log: "${myComplexData.nested}" # prints out "abc"
+
+    - call: aFlow
+
+    - log: "${myComplexData.nested}" # still prints out "abc"
+
+    - call: aFlow
+      out:
+        - myComplexData
+
+    - log: "${myComplexData.nested}" # prints out "xyz"
+
+  aFlow:
+    - set:
+        myComplexData.nested: "xyz"
+```
+
+### Checkpoints
+
+A checkpoint is a point defined within a flow at which Concord persists
+the process state. This process state can subsequently be restored and
+process execution can continue. A flow can contain multiple checkpoints.
+
+The [REST API](../api/checkpoint.md) can be used for listing and restoring
+checkpoints. Alternatively you can restore a checkpoint to continue processing
+directly from the Concord Console.
+
+The `checkpoint` step can be used to create a named checkpoint:
+
+```yaml
+flows:
+  default:
+    - log: "Starting the process..."
+    - checkpoint: "first"
+    - log: "Continuing the process..."
+    - checkpoint: "second"
+    - log: "Done!"
+```
+
+The example above creates two checkpoints: `first` and `second`.
+These checkpoints can be used to restart the process from the point after the
+checkpoint's step. For example, if the process is restored using `first`
+checkpoint, all steps starting from `Continuing the process...`
+message and further are executed.
+
+Checkpoint names can contain expressions:
+```yaml
+configuration:
+  arguments:
+    checkpointSuffix: "checkpoint"
+
+flows:
+  default:
+    - log: "Before the checkpoint"
+    - checkpoint: "first_${checkpointSuffix}"
+    - log: "After the checkpoint"
+```
+
+Checkpoint names must start with a (latin) letter or a digit, can contain
+whitespace, underscores `_`, `@`, dots `.`, minus signs `-` and tildes `~`.
+The length must be between 2 and 128 characters. Here's the regular expression
+used for validation:
+
+```
+^[0-9a-zA-Z][0-9a-zA-Z_@.\\-~ ]{1,128}$
+```
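+
+As a sanity check, the same pattern can be exercised with the standard
+`java.util.regex` API (the sample names below are made up):
+
+```java
+import java.util.regex.Pattern;
+
+public class CheckpointNameCheck {
+    // the checkpoint name validation pattern quoted above
+    private static final Pattern NAME =
+            Pattern.compile("^[0-9a-zA-Z][0-9a-zA-Z_@.\\-~ ]{1,128}$");
+
+    public static void main(String[] args) {
+        System.out.println(NAME.matcher("first_checkpoint").matches()); // true
+        // must start with a letter or a digit
+        System.out.println(NAME.matcher("~starts-wrong").matches());    // false
+    }
+}
+```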
+
+Only process initiators, administrators and users with `WRITER` access level to
+the process' project can restore checkpoints with the API or the user console.
+
+After restoring a checkpoint, its name can be accessed using
+the `resumeEventName` variable.
+
+**Note:** files created during the process' execution are not saved during the
+checkpoint creation.
+
+### Parallel Execution
+
+The `parallel` block executes all of its steps in parallel:
+
+```yaml
+flows:
+  default:
+    - parallel:
+        - ${sleep.ms(3000)}
+        - ${sleep.ms(3000)}
+
+    - log: "Done!"
+```
+
+The runtime executes each step in its own Java thread.
+
+Variables that exist at the start of the `parallel` block are copied into each
+thread.
+
+The `out` block can be used to return variables from the `parallel`
+block back into the flow:
+
+```yaml
+- parallel:
+    - task: http
+      in:
+        url: https://google.com/
+      out: googleResponse
+
+    - task: http
+      in:
+        url: https://bing.com/
+      out: bingResponse
+  out:
+    - googleResponse
+    - bingResponse
+
+- log: |
+    Google: ${googleResponse.statusCode}
+    Bing: ${bingResponse.statusCode}
+```
+
+**Note:** currently, to pass the current variables into a `parallel` block,
+the runtime performs a "shallow copy". Consequently, if you pass collections
+or other non-primitive objects into the `parallel` block, changes made inside
+the block are visible in the original variable:
+
+```yaml
+- set:
+    anObject:
+      aList: [ ]
+
+- parallel:
+    - ${anObject.aList.add(1)}
+    - ${anObject.aList.add(2)}
+
+- log: ${anObject.aList}
+```
+
+While `parallel` executes _steps_ in parallel, a `loop` with `mode: parallel`
+can be used to perform the same steps for each item in a collection. See
+the [Loops](#loops) section for more details.
+
+## Loops
+
+Concord flows can iterate through a collection of items in a loop using
+the `loop` syntax:
+
+```yaml
+- call: myFlow
+  loop:
+    items:
+      - "first element" # string item
+      - "second element"
+      - 3 # a number
+      - false # a boolean value
+
+# loop can also be used with tasks
+- task: myTask
+  in:
+    myVar: ${item}
+  loop:
+    items:
+      - "first element"
+      - "second element"
+```
+
+The collection of items to iterate over can be provided by an expression:
+
+```yaml
+configuration:
+  arguments:
+    myItems:
+      - 100500
+      - false
+      - "a string value"
+
+flows:
+  default:
+    - call: myFlow
+      loop:
+        items: ${myItems}
+```
+
+The items are referenced in the invoked flow with the `${item}` expression:
+
+```yaml
+  myFlow:
+    - log: "We got ${item}"
+```
+
+Maps (dicts, in Python terms) can also be used:
+
+```yaml
+flows:
+  default:
+    - task: log
+      in:
+        msg: "${item.key} - ${item.value}"
+      loop:
+        items:
+          a: "Hello"
+          b: "world"
+```
+
+In the example above, `loop` iterates over the entries of the object. Each
+`${item}` provides `key` and `value` attributes.
+
+Lists of nested objects can be used in loops as well:
+
+```yaml
+flows:
+  default:
+    - call: deployToClouds
+      loop:
+        items:
+          - name: cloud1
+            fqdn: cloud1.myapp.example.com
+          - name: cloud2
+            fqdn: cloud2.myapp.example.com
+
+  deployToClouds:
+    - log: "Starting deployment to ${item.name}"
+    - log: "Using FQDN ${item.fqdn}"
+```
+
+The `loop` syntax can be used to process items in parallel.
+Consider the following example:
+
+```yaml
+configuration:
+  runtime: concord-v2
+  dependencies:
+    - "mvn://com.walmartlabs.concord.plugins.basic:http-tasks:1.73.0"
+
+flows:
+  default:
+    - task: http
+      in:
+        # imagine a slow API call here
+        url: "https://jsonplaceholder.typicode.com/todos/${item}"
+        response: json
+      out: results # loop turns "results" into a list of results for each item
+      loop:
+        items:
+          - "1"
+          - "2"
+          - "3"
+        mode: parallel
+        parallelism: 2 # optional number of threads
+
+    # grab titles from all todos
+    - log: ${results.stream().map(o -> o.content.title).toList()}
+```
+
+In the example above, each item is processed in parallel in a separate OS
+thread.
+
+The parallel `loop` syntax is supported for the same steps as `loop`:
+tasks, flow calls, groups of steps, etc.
+
+## Error Handling
+
+### Handling Errors In Flows
+
+Task and expression errors are regular Java exceptions, which can be
+"caught" and handled using a special syntax.
+
+[Expressions](#expressions), tasks, [groups of steps](#groups-of-steps) and
+[flow calls](#calling-other-flows) can have an optional `error` block, which
+is executed if an exception occurs:
+
+```yaml
+flows:
+ default:
+ # handling errors in an expression
+ - expr: ${myTask.somethingDangerous()}
+ error:
+ - log: "Gotcha! ${lastError}"
+
+ # handling errors in tasks
+ - task: myTask
+ error:
+ - log: "Fail!"
+
+ # handling errors in groups of steps
+ - try:
+ - ${myTask.doSomethingSafe()}
+ - ${myTask.doSomethingDangerous()}
+ error:
+ - log: "Here we go again"
+
+ # handling errors in flow calls
+ - call: myOtherFlow
+ error:
+ - log: "That failed too"
+```
+
+The `${lastError}` variable contains the last caught `java.lang.Exception`
+object.
+
+If an error is caught, the execution continues from the next step:
+
+```yaml
+flows:
+ default:
+ - try:
+ - throw: "Catch that!"
+ error:
+ - log: "Caught an error: ${lastError}"
+
+ - log: "Continue the execution..."
+```
+
+The execution logs the `Caught an error` message and then `Continue the execution`.
+
+### Handling Cancellations, Failures and Timeouts
+
+When a process is `CANCELLED` (killed) by a user, a special flow
+`onCancel` is executed:
+
+```yaml
+flows:
+ default:
+ - log: "Doing some work..."
+ - ${sleep.ms(60000)}
+
+ onCancel:
+ - log: "Pack your bags. Show's cancelled"
+```
+
+**Note:** `onCancel` handler processes are dispatched immediately when the process
+cancel request is sent. Variables set at runtime may not have been saved to the
+process state in the database and therefore may be unavailable or stale in the
+handler process.
+
+Similarly, the `onFailure` flow is executed if a process crashes (moves into
+the `FAILED` state):
+
+```yaml
+flows:
+ default:
+ - log: "Brace yourselves, we're going to crash!"
+ - throw: "Crash!"
+
+ onFailure:
+ - log: "Yep, we just crashed."
+```
+
+In both cases, the server starts a _child_ process with a copy of
+the original process state and uses `onCancel` or `onFailure` as an
+entry point.
+
+**Note:** `onCancel` and `onFailure` handlers receive the _last known_
+state of the parent process' variables. This means that changes in
+the process state are visible to the _child_ processes:
+
+```yaml
+configuration:
+ arguments:
+ # original value
+ myVar: "abc"
+
+flows:
+ default:
+ # let's change something in the process state...
+ - set:
+ myVar: "xyz"
+
+ # prints out "The default flow got xyz"
+ - log: "The default flow got ${myVar}"
+
+ # ...and then crash the process
+ - throw: "Boom!"
+
+ onFailure:
+ # logs "I've got xyz"
+ - log: "I've got ${myVar}"
+```
+
+In addition, the `onFailure` flow receives the `lastError` variable which
+contains the parent process' last (unhandled) error:
+
+```yaml
+flows:
+ default:
+ - throw: "Kablamo!"
+
+ onFailure:
+ - log: "${lastError.cause}"
+```
+
+Nested data is also supported:
+```yaml
+flows:
+ default:
+ - throw:
+ myCause: "I wanted to"
+ whoToBlame:
+ mainCulpit: "${currentUser.username}"
+
+ onFailure:
+ - log: "The parent process failed because ${lastError.cause.payload.myCause}."
+ - log: "And ${lastError.cause.payload.whoToBlame.mainCulpit} is responsible for it!"
+```
+
+If the process runs longer than the specified [timeout](./configuration.md#running-timeout),
+Concord cancels it and executes the special `onTimeout` flow:
+
+```yaml
+configuration:
+ processTimeout: "PT1M" # 1 minute timeout
+
+flows:
+ default:
+ - ${sleep.ms(120000)} # sleep for 2 minutes
+
+ onTimeout:
+ - log: "I'm going to run when my parent process times out"
+```
+
+If the process stays suspended longer than the specified [timeout](./configuration.md#suspend-timeout),
+Concord cancels it and executes the special `onTimeout` flow:
+
+```yaml
+configuration:
+ suspendTimeout: "PT1M" # 1 minute timeout
+
+flows:
+ default:
+ - task: concord
+ in:
+ action: start
+ org: myOrg
+ project: myProject
+ repo: myRepo
+ sync: true
+ suspend: true
+
+ onTimeout:
+ - log: "I'm going to run when my parent process times out"
+```
+
+If an `onCancel`, `onFailure` or `onTimeout` flow fails, it is automatically
+retried up to three times.
+
+### Retry
+
+The `retry` attribute can be used to automatically re-run a `task`, a group of
+steps or a `flow` call in case of errors. Users can define the number of times
+the step can be retried and a delay between attempts.
+
+- `delay` - the delay between attempts. The delay time is always in
+seconds, the default value is `5`;
+- `in` - additional parameters for the next attempt;
+- `times` - the maximum number of retry attempts.
+
+For example, the section below executes `myTask` with the provided `in`
+parameters. In case of errors, the task is retried up to 3 times with a
+3-second delay between attempts. Additional parameters for the retry are
+supplied in the `retry` block's `in` section.
+
+```yaml
+- task: myTask
+ in:
+ ...
+ retry:
+ in:
+ ...additional parameters...
+ times: 3
+ delay: 3
+```
+
+Retrying a flow call works the same way:
+
+```yaml
+- call: myFlow
+ in:
+ ...
+ retry:
+ in:
+ ...additional parameters...
+ times: 3
+ delay: 3
+```
+
+Parameters in the `retry` block's `in` section override the step's original
+`in` parameters with the same names.
+
+In the example below, the value of `someVar.nestedValue` is overwritten to `321`
+and `newValue` is added for the retry attempt.
+
+
+```yaml
+- task: myTask
+ in:
+ someVar:
+ nestedValue: 123
+ retry:
+ in:
+ someVar:
+ nestedValue: 321
+ newValue: "hello"
+```
+
+The `retry` block also supports expressions:
+
+```yaml
+configuration:
+ arguments:
+ retryTimes: 3
+ retryDelay: 2
+
+flows:
+ default:
+ - task: myTask
+ retry:
+ times: "${retryTimes}"
+ delay: "${retryDelay}"
+```
+
+### Throwing Errors
+
+The `throw` step can be used to throw a new `RuntimeException` with
+the supplied message anywhere in a flow including in `error` sections and in
+[conditional expressions](#conditional-execution) such as `if-then` or
+`switch-case`.
+
+```yaml
+flows:
+ default:
+ - try:
+ - log: "Do something dangerous here"
+ error:
+ - throw: "Oh no, something went wrong."
+```
+
+Alternatively, a caught exception can be re-thrown using the `lastError` variable:
+
+```yaml
+flows:
+ default:
+ - try:
+ - log: "Do something dangerous here"
+ error:
+ - throw: ${lastError}
+```
+
+### Testing in Concord
+
+Testing in Concord offers a comprehensive and flexible approach by leveraging powerful tools like
+[mocks]({{ site.concord_plugins_v2_docs }}/mocks.md),
+the [verify task]({{ site.concord_plugins_v2_docs }}/mocks.md#how-to-verify-task-calls),
+[asserts]({{ site.concord_plugins_v2_docs }}/asserts.md) and [dry-run mode](../processes-v2/index.md#dry-run-mode).
+These tools enable you to test process logic, simulate external dependencies, and validate
+interactions without impacting real systems or data.
+
+With these features, you can:
+
+- **Isolate components** for targeted testing using mocks.
+- **Verify task interactions** to ensure correct behavior.
+- **Run entire flows safely** with dry-run mode to prevent side effects.
+
+This combination allows for thorough, controlled testing of complex processes, ensuring your flows
+are reliable and production-ready.
+
+#### Example Concord Flow: Application Deployment and Testing
+
+This example demonstrates a complete Concord flow that simulates an application deployment process
+followed by application tests. It utilizes mocks, the verify task, asserts task and dry-run mode to
+ensure the deployment and testing logic work correctly without real side effects.
+
+```yaml
+flows:
+ ##
+ # Simple blue-green deployment process.
+ # in:
+ # environment: string, mandatory, environment to deploy app.
+ ##
+ blueGreenDeployment:
+ # Assert that we do not miss an input parameter
+ - expr: "${asserts.hasVariable('environment')}"
+
+ # Step 1: Check out code from Git
+ - name: "Checkout source from GH"
+ task: git
+ in:
+ action: "clone"
+ url: "git.example.com:example-org/git-project"
+ workingDir: "git-project"
+
+ # Step 2: Build the application
+ - name: "Building application"
+ task: buildApplication
+ in:
+ sourcePath: "git-project"
+
+ # Step 3: Deploy the application
+ - name: "Deploying application"
+ task: deployApplication
+ in:
+ environment: "${environment}"
+
+ # Step 4: Run tests on the deployed application
+ - call: runTests
+ in:
+ environment: "${environment}"
+ out: testResult
+
+ # Step 5: Conditional promotion or rollback
+ - if: "${testResult == 'All tests passed'}"
+ then:
+ - name: "Promoting '${environment}' environment to production."
+ task: promoteEnvironment
+ in:
+ environment: "${environment}"
+ else:
+ - name: "Tests failed. Rolling back deployment."
+ task: rollbackEnvironment
+ in:
+ environment: "${environment}"
+
+ ##
+ # Simple test for blueGreenDeployment flow
+ ##
+ blueGreenDeploymentFlowTest:
+ # Assert that we are running this flow in dry-run mode to prevent unintended changes to external systems
+ - expr: "${asserts.dryRunMode}"
+
+ - set:
+ mocks:
+ # Mock git task
+ - task: git
+ in:
+ action: "clone"
+
+ # Mock the build step
+ - task: "buildApplication"
+ in:
+ sourcePath: "git-project"
+
+ # Mock the deployment step
+ - task: "deployApplication"
+ in:
+ environment: "test-env"
+
+ # Mock the promote step
+ - task: "promoteEnvironment"
+
+ # call real deployment flow
+ - call: blueGreenDeployment
+ in:
+ environment: "test-env"
+
+ # verify git task called one time
+ - expr: "${verify.task('git', 1).execute({'action': 'clone'})}"
+ # verify buildApplication task called one time
+ - expr: "${verify.task('buildApplication', 1).execute(mock.any())}"
+ # verify deployApplication task called one time
+ - expr: "${verify.task('deployApplication', 1).execute({'environment': 'test-env'})}"
+ # verify promoteEnvironment task called one time
+ - expr: "${verify.task('promoteEnvironment', 1).execute({'environment': 'test-env'})}"
+
+profiles:
+ blueGreenDeploymentFlowTest:
+ flows:
+ # Mock runTests flow
+ runTests:
+ - expr: "${asserts.assertEquals('test-env', environment)}"
+
+ # emulate result from original flow
+ - set:
+ testResult: 'All tests passed'
+
+configuration:
+ runtime: concord-v2
+ dependencies:
+ - "mvn://com.walmartlabs.concord.plugins.basic:mock-tasks:{{site.concord_core_version}}"
+ - "mvn://com.walmartlabs.concord.plugins.basic:asserts-tasks:{{site.concord_core_version}}"
+```
+
+This flow can be safely tested in dry-run mode to validate logic without making real deployments:
+
+```bash
+curl ... -FdryRun=true -FactiveProfiles=blueGreenDeploymentFlowTest -FentryPoint=blueGreenDeploymentFlowTest https://concord.example.com/api/v1/process
+```
diff --git a/docs/src/processes-v2/imports.md b/docs/src/processes-v2/imports.md
new file mode 100644
index 0000000000..70c343191b
--- /dev/null
+++ b/docs/src/processes-v2/imports.md
@@ -0,0 +1,82 @@
+# Imports
+
+Resources such as flows, forms and other workflow files can be shared between
+Concord projects by using `imports`.
+
+How it works:
+
+- when the process is submitted, Concord reads the root `concord.yml` file
+ and looks for the `imports` declaration;
+- all imports are processed in the order of their declaration;
+- `git` repositories are cloned and their `path` directories are copied into the
+ `dest` directory of the process working directory;
+- `mvn` artifacts are downloaded and extracted into the `dest` directory;
+- any existing files in target directories are overwritten;
+- the process continues. Any imported resources placed into the `concord`,
+ `flows`, `profiles` and `forms` directories are loaded as usual.
+
+For example:
+
+```yaml
+imports:
+ - git:
+ url: "https://github.com/walmartlabs/concord.git"
+ path: "examples/hello_world"
+
+configuration:
+ arguments:
+ name: "you"
+```
+
+Running the above example produces a `Hello, you!` log message.
+
+The full syntax for imports is:
+
+```yaml
+imports:
+ - type:
+ options
+ - type:
+ options
+```
+
+Note that `imports` is a top-level element, similar to `configuration`.
+In addition, `imports` are allowed only in the main YAML file, the root
+`concord.yml`.
+
+Types of imports and their parameters:
+
+- `git` - imports remote git repositories:
+ - `url` - URL of the repository, either `http(s)` or `git@`;
+ - `name` - the organization and repository names, e.g. `walmartlabs/concord`.
+ Automatically expanded into the full URL based on the server's configuration.
+ Mutually exclusive with `url`;
+ - `version` - (optional) branch, tag or a commit ID to use. Default `master`;
+ - `path` - (optional) path in the repository to use as the source directory;
+ - `dest` - (optional) path in the process' working directory to use as the
+ destination directory. Defaults to the process workspace `./concord/`;
+ - `exclude` - (optional) list of regular expression patterns to exclude some files when importing;
+ - `secret` - reference to `KEY_PAIR` or a `USERNAME_PASSWORD` secret. Must be
+ a non-password protected secret;
+- `mvn` - imports a Maven artifact:
+ - `url` - the artifact's URL, in the format of `mvn://groupId:artifactId:version`.
+ Only JAR and ZIP archives are supported;
+ - `dest` - (optional) path in the process' working directory to use as the
+ destination directory. Default `./concord/`.
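+
+For instance, an `mvn` import might look like this (the artifact coordinates
+below are hypothetical):
+
+```yaml
+imports:
+  - mvn:
+      url: "mvn://com.example:shared-flows:1.0.0"
+      dest: "./concord/"
+```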
+
+The `secret` reference has the following syntax:
+
+- `org` - (optional) name of the secret's organization. Uses the process's
+ organization if not specified;
+- `name` - name of the secret;
+- `password` - (optional) password for password-protected secrets. Accepts
+ literal values only, expressions are not supported.
+
+An example of a `git` import using custom authentication:
+
+```yaml
+imports:
+ - git:
+ url: "https://github.com/me/my_private_repo.git"
+ secret:
+ name: "my_secret_key"
+```
diff --git a/docs/src/processes-v2/index.md b/docs/src/processes-v2/index.md
new file mode 100644
index 0000000000..5d7635ec53
--- /dev/null
+++ b/docs/src/processes-v2/index.md
@@ -0,0 +1,403 @@
+# Overview
+
+- [Directory Structure](#directory-structure)
+- [Additional Concord Files](#additional-concord-files)
+- [DSL](#dsl)
+- [Public Flows](#public-flows)
+- [Variables](#variables)
+ - [Provided Variables](#provided-variables)
+ - [Output Variables](#output-variables)
+- [Dry-run mode](#dry-run-mode)
+ - [Running a Flow in Dry-Run Mode](#running-a-flow-in-dry-run-mode)
+ - [Behavior of Tasks and Scripts in Dry-Run mode](#behavior-of-tasks-and-scripts-in-dry-run-mode)
+
+**Note:** if you used Concord before, check [the migration guide](./migration.md).
+It describes key differences between Concord flows v1 and v2.
+
+## Directory Structure
+
+Regardless of how the process starts -- using
+[a project and a Git repository](../api/process.md#form-data) or by
+[sending a payload archive](../api/process.md#zip-file), Concord assumes
+a certain structure of the process's working directory:
+
+- `concord.yml` - a Concord [DSL](#dsl) file containing the main flow,
+configuration, profiles and other declarations;
+- `concord/**/*.concord.yml` - directory containing
+[extra Concord YAML files](#additional-concord-files);
+- `forms` - directory with [custom forms](../getting-started/forms.md#custom).
+
+Anything else is copied as-is and available for the process.
+[Plugins]({{ site.concord_plugins_v2_docs }}/index.md) can require other files to be present in
+the working directory.
+
+The same structure should be used when storing your project in a Git repository.
+Concord clones the repository and recursively copies the specified directory
+[path](../api/repository.md#create-a-repository) (`/` by default which includes
+all files in the repository) to the working directory for the process. If a
+subdirectory is specified in the Concord repository's configuration, any paths
+outside the configuration-specified path are ignored and not copied. The repository
+name is _not_ included in the final path.
+
+## Additional Concord Files
+
+The default use case with the Concord DSL is to maintain everything in a single
+`concord.yml` file. Using a `concord` folder and files within it allows
+you to reduce the individual file sizes.
+
+`./concord/test.concord.yml`:
+
+```yaml
+configuration:
+ arguments:
+ nested:
+ name: "stranger"
+
+flows:
+ default:
+ - log: "Hello, ${nested.name}!"
+```
+
+`./concord.yml`:
+
+```yaml
+configuration:
+ arguments:
+ nested:
+ name: "Concord"
+```
+
+The above example prints out `Hello, Concord!` when running the default flow.
+
+Concord folder merge rules:
+
+- Concord loads `concord/**/*.concord.yml` files in alphabetical order,
+ including subdirectories;
+- flows and forms with the same names are overridden by their counterparts from
+ the files loaded previously;
+- all triggers from all files are added together. If there are multiple trigger
+ definitions across several files, the resulting process contains all of
+ them;
+- configuration values are merged. The values from the last loaded file override
+ the values from the files loaded earlier;
+- profiles with flows, forms and configuration values are merged according to
+ the rules above.
+
+The path to additional Concord files can be configured using
+[the resources block](./resources.md).
+
+## DSL
+
+Concord DSL files contain [configuration](./configuration.md),
+[flows](./flows.md), [profiles](./profiles.md) and other declarations.
+
+The top-level syntax of a Concord DSL file is:
+
+```yaml
+configuration:
+ ...
+
+flows:
+ ...
+
+publicFlows:
+ ...
+
+forms:
+ ...
+
+triggers:
+ ...
+
+profiles:
+ ...
+
+resources:
+ ...
+
+imports:
+ ...
+```
+
+Let's take a look at each section:
+- [configuration](./configuration.md) - defines process configuration,
+dependencies, arguments and other values;
+- [flows](./flows.md) - contains one or more Concord flows;
+- [publicFlows](#public-flows) - list of flow names which may be used as an [entry point](./configuration.md#entry-point);
+- [forms](../getting-started/forms.md) - Concord form definitions;
+- [triggers](../triggers/index.md) - contains trigger definitions;
+- [profiles](./profiles.md) - declares profiles that can override
+declarations from other sections;
+- [resources](./resources.md) - configurable paths to Concord resources;
+- [imports](./imports.md) - allows referencing external Concord definitions.
+
+## Public Flows
+
+Flows listed in the `publicFlows` section are the only flows allowed as
+[entry point](./configuration.md#entry-point) values. This also limits the
+flows listed in the repository run dialog. When `publicFlows` is omitted,
+Concord considers all flows public.
+
+Flows from an [imported repository](./imports.md) are subject to the same
+setting. `publicFlows` defined in the imported repository are merged
+with those defined in the main repository.
+
+```yaml
+publicFlows:
+ - default
+ - enterHere
+
+flows:
+ default:
+ - log: "Hello!"
+ - call: internalFlow
+
+ enterHere:
+ - log: "Using alternative entry point."
+
+ # not listed in the UI repository start popup
+ internalFlow:
+ - log: "Only callable from another flow."
+```
+
+## Variables
+
+Process arguments, saved process state and
+[automatically provided variables](#provided-variables) are exposed as flow
+variables:
+
+```yaml
+flows:
+ default:
+ - log: "Hello, ${initiator.displayName}"
+```
+
+In the example above, the expression `${initiator.displayName}` references the
+automatically provided variable `initiator` and retrieves its `displayName`
+field value.
+
+Flow variables can be defined in multiple ways:
+- using the DSL's [set step](./flows.md#setting-variables);
+- the [arguments](./configuration.md#arguments) section in the process
+configuration;
+- passed in the API request when the process is created;
+- produced by tasks;
+- etc.
+
+Variables can be accessed using expressions,
+[scripts](../getting-started/scripting.md) or in
+[tasks](./tasks.md).
+
+```yaml
+flows:
+ default:
+ - log: "All variables: ${allVariables()}"
+
+ - if: ${hasVariable('var1')}
+ then:
+ - log: "Yep, we got 'var1' variable with value ${var1}"
+ else:
+ - log: "Nope, we do not have 'var1' variable"
+
+ - script: javascript
+ body: |
+ var allVars = execution.variables().toMap();
+ print('Getting all variables in a JavaScript snippet: ' + allVars);
+
+ execution.variables().set('newVar', 'hello');
+```
+
+The `allVariables` function returns a Java Map object with all current
+variables.
+
+The `hasVariable` function accepts a variable name (as a string parameter) and
+returns `true` if the variable exists.
+
+### Provided Variables
+
+Concord automatically provides several built-in variables upon process
+execution in addition to the defined [variables](#variables):
+
+- `txId` - a unique identifier of the current process;
+- `parentInstanceId` - an identifier of the parent process;
+- `workDir` - path to the working directory of the current process;
+- `initiator` - information about the user who started a process:
+ - `initiator.username` - login, string;
+ - `initiator.displayName` - printable name, string;
+ - `initiator.email` - email address, string;
+ - `initiator.groups` - list of user's groups;
+ - `initiator.attributes` - other LDAP attributes; for example
+ `initiator.attributes.mail` contains the email address.
+- `currentUser` - information about the current user. Has the same structure
+ as `initiator`;
+- `requestInfo` - additional request data (see the note below):
+ - `requestInfo.query` - query parameters of a request made using user-facing
+ endpoints (e.g. the portal API);
+ - `requestInfo.ip` - the client IP address the request originated from;
+ - `requestInfo.headers` - headers of the request made using user-facing endpoints.
+- `projectInfo` - project's data:
+ - `projectInfo.orgId` - the ID of the project's organization;
+ - `projectInfo.orgName` - the name of the project's organization;
+ - `projectInfo.projectId` - the project's ID;
+ - `projectInfo.projectName` - the project's name;
+ - `projectInfo.repoId` - the project's repository ID;
+ - `projectInfo.repoName` - the repository's name;
+ - `projectInfo.repoUrl` - the repository's URL;
+ - `projectInfo.repoBranch` - the repository's branch;
+ - `projectInfo.repoPath` - the repository's path (if configured);
+ - `projectInfo.repoCommitId` - the repository's last commit ID;
+ - `projectInfo.repoCommitAuthor` - the repository's last commit author;
+ - `projectInfo.repoCommitMessage` - the repository's last commit message.
+- `processInfo` - the current process' information:
+ - `processInfo.activeProfiles` - list of active profiles used for the current
+ execution;
+ - `processInfo.sessionToken` - the current process'
+ [session token](../getting-started/security.md#using-session-tokens). It can
+ be used to call the Concord API from flows.
+
+LDAP attributes must be allowed in [the configuration](./configuration.md#server-configuration-file).
+
+**Note:** only the processes started using [the browser link](../api/process.md#browser)
+provide the `requestInfo` variable. In other cases (e.g. processes
+[triggered by GitHub](../triggers/github.md)) the variable might be undefined
+or empty.
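+
+For illustration, a flow can reference several of the variables listed above
+directly:
+
+```yaml
+flows:
+  default:
+    - log: "Process ${txId} started by ${initiator.username}"
+    - log: "Active profiles: ${processInfo.activeProfiles}"
+```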
+
+Availability of other variables and "beans" depends on the installed Concord
+plugins, the arguments passed in at the process invocation, and stored in the
+request data.
+
+### Output Variables
+
+Concord has the ability to return process data when a process completes.
+The names of returned variables should be declared in the `configuration` section:
+
+```yaml
+configuration:
+ out:
+ - myVar1
+```
+
+Output variables can also be declared dynamically using `multipart/form-data`
+parameters if allowed in the project's configuration. **CAUTION: this is not
+secure if secret values are stored in process variables.**
+
+```bash
+$ curl ... -F out=myVar1 https://concord.example.com/api/v1/process
+{
+ "instanceId" : "5883b65c-7dc2-4d07-8b47-04ee059cc00b"
+}
+```
+
+Retrieve the output variable value(s) after the process finishes:
+
+```bash
+# wait for completion...
+$ curl .. https://concord.example.com/api/v2/process/5883b65c-7dc2-4d07-8b47-04ee059cc00b
+{
+ "instanceId" : "5883b65c-7dc2-4d07-8b47-04ee059cc00b",
+ "meta": {
+ "out" : {
+ "myVar1" : "my value"
+ },
+ }
+}
+```
+
+It is also possible to retrieve a nested value:
+
+```yaml
+configuration:
+ out:
+ - a.b.c
+
+flows:
+ default:
+ - set:
+ a:
+ b:
+ c: "my value"
+ d: "ignored"
+```
+
+```bash
+$ curl ... -F out=a.b.c https://concord.example.com/api/v1/process
+```
+
+In this example, Concord looks for variable `a`, its field `b` and
+the nested field `c`.
+
+Additionally, the output variables can be retrieved as a JSON file:
+
+```bash
+$ curl ... https://concord.example.com/api/v1/process/5883b65c-7dc2-4d07-8b47-04ee059cc00b/attachment/out.json
+
+{"myVar1":"my value"}
+```
+
+Any value type that can be represented as JSON is supported.
+
+## Dry-run mode
+
+The dry-run mode allows you to execute a process without making any dangerous side effects.
+This is useful for testing and validating the flow logic before running it in production.
+
+> Note that the correctness of a flow execution in dry-run mode depends on how the
+> tasks and scripts in your flow handle dry-run mode. Make sure all tasks and scripts
+> involved handle dry-run mode properly to prevent unintended side effects.
+
+### Running a Flow in Dry-Run Mode
+
+To enable dry-run mode, set the `dryRun` flag to `true` in the process request:
+
+```bash
+curl ... -FdryRun=true -F out=myVar1 https://concord.example.com/api/v1/process
+```
+
+When the process is launched in dry-run mode, the `system` log segment of the process will include
+the following line:
+
+```
+Dry-run mode: enabled
+```
+
+### Behavior of Tasks and Scripts in Dry-Run mode
+
+#### Tasks
+
+Standard Concord tasks support dry-run mode and will not make any changes outside the process.
+For example, the `http` task will not make any non-GET requests in dry-run mode, the `s3` task will
+not actually upload files in dry-run mode, etc.
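+
+For example, a flow like the following stays safe in dry-run mode, since the
+`http` task still executes GET requests (the URL below is illustrative):
+
+```yaml
+flows:
+  default:
+    - task: http
+      in:
+        url: "https://api.example.com/health"
+        method: "GET"
+        response: json
+      out: healthResult
+```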
+
+If a task does not support dry-run mode, the process will terminate with the following error:
+
+```
+Dry-run mode is not supported for '' task (yet)
+```
+
+If a task does not support dry-run mode, but you are confident that it can be executed in dry-run
+mode, you can mark the task step as ready for dry-run mode:
+
+```yaml
+flows:
+ myFlow:
+ - task: "myTaskAndImSureWeCanExecuteItInDryRunMode"
+ meta:
+ dryRunReady: true # dry-run ready marker for this step
+```
+
+> **Important**: Use the `meta.dryRunReady` only if you are certain that the task is safe to run in
+> dry-run mode and cannot be modified to support it explicitly.
+
+To add dry-run mode support to a task, see the [task documentation](../processes-v2/tasks.md#dry-run-mode).
+
+#### Scripts
+
+By default, script steps do not support dry-run mode and the process will terminate with
+the following error:
+
+```
+Dry-run mode is not supported for this 'script' step
+```
+
+To add dry-run mode support to a script, see
+the [scripting documentation](../getting-started/scripting.md#dry-run-mode).
diff --git a/docs/src/processes-v2/migration.md b/docs/src/processes-v2/migration.md
new file mode 100644
index 0000000000..d009b8497d
--- /dev/null
+++ b/docs/src/processes-v2/migration.md
@@ -0,0 +1,511 @@
+# Migration from v1
+
+## Overview
+
+Starting from version 1.57.0, Concord introduces a new runtime for process
+execution.
+
+The new runtime features require changes in flows and plugins. That's why,
+initially, it is an opt-in feature: both the v1 and v2 runtimes will coexist
+for the foreseeable future.
+
+To enable the v2 runtime, add the following to your `concord.yml` file:
+
+```yaml
+configuration:
+ runtime: "concord-v2"
+```
+
+Alternatively, it is possible to specify the runtime directly in the API request:
+
+```bash
+$ curl ... -F runtime=concord-v2 http://concord.example.com/api/v1/process
+```
+
+The sections below describe the new and updated features of v2.
+
+## New Directory Structure
+
+The v1 runtime supports loading additional files matching the
+`${workDir}/concord/*.yml` pattern. Any YAML file in the `concord` directory is
+treated as a Concord YAML file. Sometimes this gets in the way,
+especially when [imports](../processes-v1/imports.md) are used -- the
+`${workDir}/concord` directory is the default target for `imports` and
+you might end up with other YAML files imported into the directory.
+
+The v2 runtime requires Concord YAML files in the `concord` directory to have
+a special `.concord.yml` extension, for example:
+
+```
+# the main file
+/concord.yml
+
+# additional files
+/concord/my.concord.yml
+/concord/extra.concord.yml
+
+# not a Concord file, won't be loaded
+/concord/values.yaml
+```
+
+See more details in the [Directory structure](./index.md#directory-structure)
+documentation.
+
+## New SDK
+
+Tasks that wish to use all features provided by the v2 runtime must use
+the new SDK module:
+
+```xml
+<dependency>
+    <groupId>com.walmartlabs.concord.runtime.v2</groupId>
+    <artifactId>concord-runtime-sdk-v2</artifactId>
+    <version>{{site.concord_core_version}}</version>
+    <scope>provided</scope>
+</dependency>
+```
+
+Notable differences:
+- `in` variables passed as a `Variables` object. Additionally, all task inputs
+must be explicit: task `in` parameters and flow variables are now separate;
+- tasks can now return a single `Serializable` value;
+- `Context` can now be `@Inject`-ed.
+
+Task classes can implement both the new `com.walmartlabs.concord.runtime.v2.sdk.Task`
+and the old `com.walmartlabs.concord.sdk.Task` interfaces simultaneously, but
+it is recommended to keep the common logic separate and create two classes
+each implementing a single `Task` interface:
+
+```java
+// common logic, abstracted away from the differences between v1 and v2
+class MyTaskCommon {
+
+    HashMap<String, Object> doTheThing(Map<String, Object> input) {
+        HashMap<String, Object> result = new HashMap<>();
+        result.put("msg", "I did the thing!");
+        return result;
+    }
+}
+
+// v1 version of the task
+@Named("myTask")
+class MyTaskV1 implements com.walmartlabs.concord.sdk.Task {
+
+    @Override
+    public void execute(Context ctx) {
+        Map<String, Object> result = new MyTaskCommon()
+                .doTheThing(ctx.toMap());
+
+        // result is saved as a flow variable
+        ctx.setVariable("result", result);
+    }
+}
+
+// v2 version of the task
+@Named("myTask")
+class MyTaskV2 implements com.walmartlabs.concord.runtime.v2.sdk.Task {
+
+    @Override
+    public Serializable execute(Variables input) {
+        // return the value instead of setting a flow variable (HashMap is Serializable)
+        return new MyTaskCommon().doTheThing(input.toMap());
+    }
+}
+```
+
+More details in the [Tasks v2](./tasks.md) documentation.
+
+## Variable Scoping Rules
+
+In the v1 runtime all flow variables are global variables:
+
+```yaml
+configuration:
+ runtime: concord-v1
+
+flows:
+ default:
+ - set:
+ x: 123
+
+ - log: "${x}" # prints out "123"
+
+ - call: anotherFlow
+
+ - log: "${x}" # prints out "234"
+
+ anotherFlow:
+ - log: "${x}" # prints out "123"
+ - set:
+ x: 234
+```
+
+In addition, task inputs are implicit:
+
+```yaml
+configuration:
+ runtime: concord-v1
+
+flows:
+ default:
+ - set:
+ url: https://google.com
+
+ - task: http
+ in:
+ method: "GET" # 'url' is passed implicitly
+```
+
+There is no difference in v1 between task inputs and regular variables. From
+the task's perspective values in the `in` block, variables defined in the flow
+prior to the task's call and process `arguments` are the same thing.
+
+This could sometimes lead to hard-to-debug issues when one part of
+the flow reuses a variable with the same name as one of the task's inputs.
+
+In v2 we changed the rules for variable scoping. Let's take a look at the same
+example, but running in v2:
+
+```yaml
+configuration:
+  runtime: concord-v2
+
+flows:
+  default:
+    - set:
+        x: 123
+
+    - log: "${x}" # prints out "123"
+
+    - call: anotherFlow
+
+    - log: "${x}" # prints out "123"
+
+  anotherFlow:
+    - log: "${x}" # prints out "123"
+    - set:
+        x: 234
+```
+
+In v2, variables `set` in a flow are visible only in the same flow or in
+flows called from the current one. To get a flow variable back into the
+caller's flow, use the `out` syntax:
+
+```yaml
+configuration:
+  runtime: concord-v2
+
+flows:
+  default:
+    - call: anotherFlow
+      out: x
+
+    - log: "${x}"
+
+  anotherFlow:
+    - set:
+        x: 123
+```
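+
+The copy-in/copy-out semantics above can be modeled with a small Java sketch.
+This is purely illustrative: the `callFlow` helper is hypothetical and not
+part of any Concord API.
+
```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

public class ScopingSketch {

    // A called flow works on a copy of the caller's variables;
    // only the names listed in 'out' are copied back into the caller's scope.
    static void callFlow(Map<String, Object> caller,
                         Consumer<Map<String, Object>> flow,
                         String... out) {
        Map<String, Object> callee = new HashMap<>(caller);
        flow.accept(callee);
        for (String name : out) {
            caller.put(name, callee.get(name));
        }
    }

    public static void main(String[] args) {
        Map<String, Object> vars = new HashMap<>();

        // 'x' is declared in 'out', so it becomes visible to the caller
        callFlow(vars, callee -> callee.put("x", 123), "x");
        System.out.println(vars.get("x")); // 123

        // no 'out' - the callee's change stays in the callee's scope
        callFlow(vars, callee -> callee.put("x", 234));
        System.out.println(vars.get("x")); // still 123
    }
}
```
+
+Mentally substituting `callFlow` for the `call` step makes it easier to
+predict which variables survive a flow call in v2.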
+
+Task inputs are now explicit -- all required parameters must be specified in
+the `in` block:
+
+```yaml
+configuration:
+  runtime: concord-v2
+
+flows:
+  default:
+    - set:
+        url: https://google.com
+
+    - task: http
+      in:
+        url: "${url}" # ok!
+        method: "GET"
+
+    - task: http
+      in:
+        method: "GET" # error: 'url' is required
+```
+
+## Scripting
+
+In v1 the `Context` object injected into scripts provides methods to get and set
+flow variables.
+
+```yaml
+configuration:
+  runtime: concord-v1
+
+flows:
+  default:
+    - script: groovy
+      body: |
+        // get a variable
+        def v = execution.getVariable('myVar')
+        // set a variable
+        execution.setVariable('newVar', 'hello')
+```
+
+In v2, the injected `Context` object has a `variables()` method which returns a
+[`Variables` object](https://github.com/walmartlabs/concord/blob/master/runtime/v2/sdk/src/main/java/com/walmartlabs/concord/runtime/v2/sdk/Variables.java). This object includes a number of methods for interacting with flow variables.
+
+```yaml
+configuration:
+  runtime: concord-v2
+
+flows:
+  default:
+    - script: groovy
+      body: |
+        // get a variable
+        def v = execution.variables().get('myVar')
+        // get a String, or default value
+        String s = execution.variables().getString("aString", "default value")
+        // get a required integer
+        int n = execution.variables().assertInt('myInt')
+        // set a variable
+        execution.variables().set('newVar', 'Hello, world!')
+```
+
+## Segmented Logging
+
+In v1 the process log is a single stream of text - every task and `log`
+statement writes its output into a single log file. In v2 most of the flow
+elements get their own log "segment" -- a separate log "file".
+
+This feature is enabled by default and should work "out of the box" for
+most plugins that use `org.slf4j.Logger` for logging.
+
+The runtime also redirects Java's `System.out` and `System.err` into
+appropriate log segments. For example, if you use `puts` in
+[JRuby](../getting-started/scripting.md#ruby) or `println` in
+[Groovy](../getting-started/scripting.md#groovy), you should see those lines
+in correct segments.
+
+Segments can be named:
+
+```yaml
+flows:
+  default:
+    - name: Log something
+      task: log
+      in:
+        msg: "Hello! I'm being logged in a separate (and named!) segment!"
+        level: "WARN"
+
+    - log: "Just a regular log statement"
+```
+
+This produces a log with a separate segment named `Log something`.
+
+
+The `name` field also supports expressions:
+
+```yaml
+flows:
+  default:
+    - name: Processing '${item}'
+      task: log
+      in:
+        msg: "We got: ${item}"
+      loop:
+        items:
+          - "red"
+          - "green"
+          - "blue"
+```
+
+Currently, the following steps can use `name`:
+- `task`
+- `call`
+- `expr`
+- `log`
+- `throw`
+
+If `name` is not specified, the runtime picks a default value, e.g.
+`task: <...>` for task calls.
+
+The toolbar on each segment provides various actions: expand the segment,
+auto-scroll to the end, view YAML info, download the segment as a file, and
+generate a unique URL for the segment to make sharing logs easier.
+
+## Parallel Execution
+
+The v1 runtime provides no satisfactory ways to run flow steps in parallel
+in one single process. For parallel deployments it is possible to use [Ansible]({{ site.concord_plugins_v1_docs }}/ansible.md)
+and its `forks` feature. There's also
+[a way to "fork" a process]({{ site.concord_plugins_v2_docs }}/concord.md#fork), i.e. to run a flow
+in another process while inheriting current flow variables.
+
+The v2 runtime was designed with parallel execution in mind. It adds a new
+step - `parallel`:
+
+```yaml
+flows:
+  default:
+    - parallel:
+        - task: http
+          in:
+            url: https://google.com/
+          out: googleResponse
+
+        - task: http
+          in:
+            url: https://bing.com/
+          out: bingResponse
+
+    - log: |
+        Google: ${googleResponse.statusCode}
+        Bing: ${bingResponse.statusCode}
+```
+
+Check [the documentation for the `parallel` step](./flows.md#parallel-execution)
+for more details.
+
+## Better Syntax Errors
+
+There are multiple improvements in v2 in the Concord DSL syntax validation and
+error reporting.
+
+Let's take this simple YAML file as an example:
+
+```yaml
+flows:
+  - default:
+      - log: "Hello!"
+```
+
+The `flows` block should be a YAML object, but in this example it is a list.
+
+Here's how v1 reports the error (minus the stack traces):
+```
+Error while loading the project, check the syntax. (concord.yml): Error @ [Source: (File); line: 2, column: 3].
+Cannot deserialize instance of `java.util.LinkedHashMap` out of START_ARRAY token
+```
+
+For comparison, here's how v2 reports the same error:
+```
+Error while loading the project, check the syntax. (concord.yml): Error @ line: 2, col: 3. Invalid value type, expected: FLOWS, got: ARRAY
+ while processing steps:
+ 'flows' @ line: 1, col: 1
+```
+
+Another example:
+
+```yaml
+flows:
+  default:
+    - if: "${true}"
+      then:
+        log: "It's true!"
+```
+
+In this example the `then` block should've been a list.
+
+Here's how v1 reports the error:
+
+```
+Error while loading the project, check the syntax. (concord.yml): Error @ [Source: (File); line: 6, column: 1].
+Expected: Process definition step (complex).
+Got [Atom{location=[Source: (File); line: 3, column: 7], token=START_OBJECT, name='null', value=null}, Atom{location=[Source: (File); line: 3, column: 7], token=FIELD_NAME, name='if', value=null}, Atom{location=[Source: (File); line: 3, column: 11], token=VALUE_STRING, name='if', value=${true}}, Atom{location=[Source: (File); line: 4, column: 7], token=FIELD_NAME, name='then', value=null}, Atom{location=[Source: (File); line: 5, column: 9], token=START_OBJECT, name='then', value=null}, Atom{location=[Source: (File); line: 5, column: 9], token=FIELD_NAME, name='log', value=null}, Atom{location=[Source: (File); line: 5, column: 14], token=VALUE_STRING, name='log', value=It's true!}, Atom{location=[Source: (File); line: 6, column: 1], token=END_OBJECT, name='then', value=null}, Atom{location=[Source: (File); line: 6, column: 1], token=END_OBJECT, name='null', value=null}]
+```
+
+The same YAML in v2:
+
+```
+Error while loading the project, check the syntax. (concord.yml): Error @ line: 5, col: 9. Invalid value type, expected: ARRAY_OF_STEP, got: OBJECT
+ while processing steps:
+ 'then' @ line: 4, col: 7
+ 'if' @ line: 3, col: 7
+ 'default' @ line: 2, col: 3
+ 'flows' @ line: 1, col: 1
+```
+
+Not only does this make more sense to users unfamiliar with the internals of
+the Concord DSL parser, it also shows the path to the problematic element.
+
+Future versions will further improve the parser and the parsing error reporting.
+
+## Better Flow Errors
+
+In Concord flows all exceptions are typically handled in `error` blocks. To
+reference the last raised exception one can use the `${lastError}` variable.
+
+In v1, Concord wraps all exceptions into an internal error type - `BpmnError`.
+To get the original exception object, users were required to use the
+`${lastError.cause}` expression.
+
+In v2 all `${lastError}` values are the original exceptions thrown by tasks or
+expressions. Those values can still be wrapped into multiple exception types,
+but Concord no longer adds its own.
+
+For example:
+
+```yaml
+flows:
+  default:
+    - try:
+        - log: "${invalid expression}"
+      error:
+        - log: "${lastError}"
+```
+
+This is how it looks when executed in v1:
+
+```
+10:23:31 [INFO ] c.w.concord.plugins.log.LogUtils - io.takari.bpm.api.BpmnError: Error at default/e_0: __default_error_ref
+```
+
+The exception message doesn't contain any useful information - it is hidden
+in the `lastError.cause` object. If we log `lastError.cause` instead, we get
+a slightly better result:
+
+```
+10:26:46 [INFO ] c.w.concord.plugins.log.LogUtils - javax.el.ELException: Error Parsing: ${invalid expression}
+```
+
+Here's the v2 output:
+
+```
+10:24:46 [ERROR] (concord.yml): Error @ line: 4, col: 11. Error Parsing: ${invalid expression}
+10:24:46 [INFO ] {}
+com.sun.el.parser.ParseException: Encountered "expression" at line 1, column 11.
+Was expecting one of:
+ "}" ...
+ "." ...
+ "[" ...
+ ...skipped...
+ "+=" ...
+ "=" ...
+```
+
+Not only does it contain the line and column numbers where the exception
+(approximately) occurred, it is also more detailed and includes the original error.
+
+## Run Flows Locally
+
+The v2 runtime significantly simplifies embedding - the runtime itself can be
+used as a regular Java library.
+
+The updated [Concord CLI tool](../cli/index.md) leverages this ability to
+run Concord flows locally, without the need for a Concord cluster instance:
+
+```yaml
+# concord.yml
+flows:
+  default:
+    - log: "Hello!"
+```
+
+```
+$ concord run
+Starting...
+16:41:34.894 [main] Hello!
+...done!
+```
+
+Most of the regular features are supported: secrets, `decryptString`, external
+`dependencies`, etc.
+
+For more details, check [the updated Concord CLI documentation](../cli/running-flows.md).
diff --git a/docs/src/processes-v2/profiles.md b/docs/src/processes-v2/profiles.md
new file mode 100644
index 0000000000..3f42490ba7
--- /dev/null
+++ b/docs/src/processes-v2/profiles.md
@@ -0,0 +1,73 @@
+# Profiles
+
+Profiles are named collections of configuration, forms, and flows that can be
+used to override defaults set in the top-level content of the Concord file.
+They are created by adding a named section under the `profiles` top-level
+element.
+
+Profile selection is configured when a process is
+[executed](../getting-started/processes.md#overview).
+
+For example, if the process below is executed using the `myProfile` profile,
+the value of `foo` is `bazz` and appears in the log instead of the default
+`bar`:
+
+```yaml
+configuration:
+  arguments:
+    foo: "bar"
+
+profiles:
+  myProfile:
+    configuration:
+      arguments:
+        foo: "bazz"
+
+flows:
+  default:
+    - log: "${foo}"
+```
+
+The `activeProfiles` parameter is a list of profiles from the project file to
+apply when starting a process. If not set, the `default` profile is used.
+
+The active profile's configuration is merged with the default values
+specified in the top-level `configuration` section. Nested objects are
+merged, lists of values are replaced:
+
+```yaml
+configuration:
+  arguments:
+    nested:
+      x: 123
+      y: "abc"
+    aList:
+      - "first item"
+      - "second item"
+
+profiles:
+  myProfile:
+    configuration:
+      arguments:
+        nested:
+          y: "cba"
+          z: true
+        aList:
+          - "primer elemento"
+          - "segundo elemento"
+
+flows:
+  default:
+    # Expected next log output: 123 cba true
+    - log: "${nested.x} ${nested.y} ${nested.z}"
+    # Expected next log output: ["primer elemento", "segundo elemento"]
+    - log: "${aList}"
+```
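+
+The merge rule above ("nested objects are merged, lists are replaced") can be
+sketched as a recursive map merge. This is a hypothetical illustration of the
+rule, not the server's actual implementation:
+
```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ProfileMerge {

    // Nested maps are merged key by key; lists and scalars coming from the
    // profile replace the corresponding default values entirely.
    @SuppressWarnings("unchecked")
    static Map<String, Object> merge(Map<String, Object> defaults, Map<String, Object> profile) {
        Map<String, Object> result = new LinkedHashMap<>(defaults);
        profile.forEach((k, v) -> {
            Object base = result.get(k);
            if (base instanceof Map && v instanceof Map) {
                result.put(k, merge((Map<String, Object>) base, (Map<String, Object>) v));
            } else {
                result.put(k, v);
            }
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> defaults = Map.of(
                "nested", Map.of("x", 123, "y", "abc"),
                "aList", List.of("first item", "second item"));
        Map<String, Object> profile = Map.of(
                "nested", Map.of("y", "cba", "z", true),
                "aList", List.of("primer elemento", "segundo elemento"));

        // nested keeps x=123, gets y="cba" and z=true; aList is replaced
        System.out.println(merge(defaults, profile));
    }
}
```
+
+Running the sketch on the example values reproduces the expected log output:
+`nested` keeps `x: 123` while gaining `y: "cba"` and `z: true`, and `aList`
+is replaced wholesale.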
+
+Multiple active profiles are merged in the order they are specified in the
+`activeProfiles` parameter:
+
+```bash
+$ curl ... -F activeProfiles=a,b http://concord.example.com/api/v1/process
+```
+
+In this example, values from `b` are merged with the result of the merge
+of `a` and the default configuration.
diff --git a/docs/src/processes-v2/resources.md b/docs/src/processes-v2/resources.md
new file mode 100644
index 0000000000..7b2191d5cf
--- /dev/null
+++ b/docs/src/processes-v2/resources.md
@@ -0,0 +1,44 @@
+# Resources
+
+Concord loads the root `concord.yml` first and subsequently looks for the
+resource paths under the `resources` section.
+
+If not specified, Concord uses the default `resources` value:
+
+```yaml
+resources:
+  concord:
+    - "glob:concord/{**/,}{*.,}concord.yml"
+```
+
+Thus, by default Concord looks for:
+- the root `concord.yml` or `.concord.yml` file;
+- `${workDir}/concord/concord.yml`;
+- any file with the `.concord.yml` extension in the `${workDir}/concord`
+  directory.
+
+Each element of the `resources.concord` list must be a valid path pattern.
+In addition to `glob`, Concord supports `regex` patterns:
+
+```yaml
+resources:
+  concord:
+    - "regex:extraFiles/.*\\.my\\.yml"
+```
+
+With the example above, Concord loads all files with the `.my.yml` extension
+in the `extraFiles` directory. Note that in this case Concord won't look in
+the subdirectories.
+
+Multiple patterns can be specified:
+
+```yaml
+resources:
+  concord:
+    - "glob:myConcordFlows/*.concord.yml"
+    - "regex:extra/[a-z]\\.concord\\.yml"
+```
+
+See the [FileSystem#getPathMatcher](https://docs.oracle.com/javase/8/docs/api/java/nio/file/FileSystem.html#getPathMatcher-java.lang.String-)
+documentation for more details on the `glob` and `regex` syntax.
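+
+A quick way to check how a pattern behaves is to call
+`FileSystem#getPathMatcher` directly, since Concord's resource patterns use
+the same syntax. The file names below are made up for illustration:
+
```java
import java.nio.file.FileSystems;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class PatternCheck {

    public static void main(String[] args) {
        // 'glob' patterns: '*' does not cross directory boundaries
        PathMatcher glob = FileSystems.getDefault()
                .getPathMatcher("glob:myConcordFlows/*.concord.yml");

        System.out.println(glob.matches(Paths.get("myConcordFlows/main.concord.yml")));     // true
        System.out.println(glob.matches(Paths.get("myConcordFlows/sub/main.concord.yml"))); // false

        // 'regex' patterns are plain java.util.regex expressions applied to the path
        PathMatcher regex = FileSystems.getDefault()
                .getPathMatcher("regex:extraFiles/.*\\.my\\.yml");

        System.out.println(regex.matches(Paths.get("extraFiles/app.my.yml"))); // true
    }
}
```
+
+Testing a pattern this way before committing it can save a few puzzled
+minutes when a flow file silently fails to load.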
+
diff --git a/docs/src/processes-v2/tasks.md b/docs/src/processes-v2/tasks.md
new file mode 100644
index 0000000000..2b96d7802c
--- /dev/null
+++ b/docs/src/processes-v2/tasks.md
@@ -0,0 +1,531 @@
+# Tasks
+
+- [Using Tasks](#using-tasks)
+- [Full Syntax vs Expressions](#full-syntax-vs-expressions)
+- [Development](#development)
+  - [Complete Example](#complete-example)
+  - [Creating Tasks](#creating-tasks)
+  - [Dry-run mode](#dry-run-mode)
+  - [Task Output](#task-output)
+  - [Injectable Services](#injectable-services)
+  - [Call Context](#call-context)
+  - [Using External Artifacts](#using-external-artifacts)
+  - [Environment Defaults](#environment-defaults)
+  - [Error Handling](#error-handling)
+  - [Unit Tests](#unit-tests)
+  - [Integration Tests](#integration-tests)
+
+## Using Tasks
+
+To use a task, a URL to the JAR containing the implementation has to be
+added as a [dependency](./configuration.md#dependencies).
+
+Typically, the JAR is published to a Maven repository or a remote host and a
+URL pointing to the JAR in the repository is used.
+
+You can invoke tasks in multiple ways. Following are a number of examples,
+check the [Task Calls](./flows.md#task-calls) section for more details:
+
+```yaml
+configuration:
+  dependencies:
+    - "http://repo.example.com/my-concord-task.jar"
+
+flows:
+  default:
+    # call methods directly using expressions
+    - ${myTask.call("hello")}
+
+    # call the task using "task" syntax
+    # use "out" to save the task's output and "error" to handle errors
+    - task: myTask
+      in:
+        taskVar: ${processVar}
+        anotherTaskVar: "a literal value"
+      out: myResult
+      error:
+        - log: "myTask failed with ${lastError}"
+```
+
+## Full Syntax vs Expressions
+
+There are two ways to invoke a task: the `task` step syntax and expressions.
+Use the `task` syntax for tasks with multiple parameters; use expressions
+for simple tasks that return data:
+
+```yaml
+# use the `task` syntax when you need to pass multiple parameters and/or complex data structures
+- task: myTask
+  in:
+    param1: 123
+    param2: "abc"
+    nestedParams:
+      x: true
+      y: false
+
+# use expressions for tasks returning data
+- log: "${myTask.getAListOfThings()}"
+```
+
+## Development
+
+We recommend running Concord using Java 17.
+
+### Complete Example
+
+Check out the [hello-world-task](https://github.com/concord-workflow/hello-world-task)
+project for a complete example of a Concord task including end to end testing
+using [testcontainers-concord](https://github.com/concord-workflow/testcontainers-concord).
+
+### Creating Tasks
+
+Tasks must implement `com.walmartlabs.concord.runtime.v2.sdk.Task` Java
+interface and must be annotated with `javax.inject.Named`.
+
+The following section describes the necessary Maven project setup steps.
+
+Add `concord-targetplatform` to your `dependencyManagement` section:
+
+```xml
+<dependencyManagement>
+  <dependencies>
+    <dependency>
+      <groupId>com.walmartlabs.concord</groupId>
+      <artifactId>concord-targetplatform</artifactId>
+      <version>{{site.concord_core_version}}</version>
+      <type>pom</type>
+      <scope>import</scope>
+    </dependency>
+  </dependencies>
+</dependencyManagement>
+```
+
+Add the following dependencies to your `pom.xml`:
+
+```xml
+<dependencies>
+  <dependency>
+    <groupId>com.walmartlabs.concord.runtime.v2</groupId>
+    <artifactId>concord-runtime-sdk-v2</artifactId>
+    <version>{{site.concord_core_version}}</version>
+    <scope>provided</scope>
+  </dependency>
+
+  <dependency>
+    <groupId>javax.inject</groupId>
+    <artifactId>javax.inject</artifactId>
+    <version>1</version>
+    <scope>provided</scope>
+  </dependency>
+</dependencies>
+```
+
+Add `sisu-maven-plugin` to the `build` section:
+
+```xml
+<build>
+  <plugins>
+    <plugin>
+      <groupId>org.eclipse.sisu</groupId>
+      <artifactId>sisu-maven-plugin</artifactId>
+    </plugin>
+  </plugins>
+</build>
+```
+
+Some dependencies are provided by the runtime. It is recommended to mark such
+dependencies as `provided` in the POM file to avoid classpath conflicts:
+- `com.fasterxml.jackson.core/*`
+- `javax.inject/javax.inject`
+- `org.slf4j/slf4j-api`
+
+Implement the `com.walmartlabs.concord.runtime.v2.sdk.Task` interface and add
+`javax.inject.Named` annotation with the name of the task.
+
+Here's an example of a simple task:
+
+```java
+import com.walmartlabs.concord.runtime.v2.sdk.*;
+import javax.inject.Named;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    public void sayHello(String name) {
+        System.out.println("Hello, " + name + "!");
+    }
+
+    public int sum(int a, int b) {
+        return a + b;
+    }
+}
+```
+
+This task can be called using an [expression](./flows.md#expressions):
+```yaml
+flows:
+  default:
+    - ${myTask.sayHello("world")} # short form
+
+    - expr: ${myTask.sum(1, 2)} # full form
+      out: mySum
+```
+
+If a task implements the `Task#execute` method, it can be started using the
+`task` step type:
+
+```java
+@Named("myTask")
+public class MyTask implements Task {
+
+    @Override
+    public TaskResult execute(Variables input) throws Exception {
+        String name = input.assertString("name");
+        return TaskResult.success()
+                .value("msg", "Hello, " + name + "!");
+    }
+}
+```
+
+The task receives a `Variables` object as input. It contains all `in`
+parameters of the call and provides some utility methods to validate
+the presence of required parameters, convert between types, etc.
+
+Tasks can use the `TaskResult` object to return data to the flow. See
+the [Task Output](#task-output) section for more details.
+
+To call a task with an `execute` method, use the `task` syntax:
+
+```yaml
+flows:
+  default:
+    - task: myTask
+      in:
+        name: "world"
+      out: myResult
+
+    - log: "${myResult.msg}" # prints out "Hello, world!"
+```
+
+This form allows use of `in` and `out` variables and error-handling blocks.
+See the [Task Call](./flows.md#task-calls) section for more details.
+
+In the example above, the task's result is saved as `myResult` variable.
+The runtime converts the `TaskResult` object into a regular Java `Map` object:
+
+```json
+{
+  "ok": true,
+  "msg": "Hello, world!"
+}
+```
+
+The `ok` value depends on whether the result was constructed as
+`TaskResult#success()` or `TaskResult#error(String)`. In the latter case,
+the resulting object also contains an `error` key with the specified error
+message.
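+
+The shape of the resulting map can be sketched with a toy model of the
+conversion. The `toMap` helper below is hypothetical - it only mimics the
+observable behavior, not the SDK's actual code:
+
```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ResultShape {

    // Mimics how a TaskResult becomes a Map visible to the flow:
    // 'ok' reflects success()/error(), 'error' is added only on failure,
    // and all .value(...) entries are copied in as-is.
    static Map<String, Object> toMap(boolean ok, String error, Map<String, Object> values) {
        Map<String, Object> m = new LinkedHashMap<>(values);
        m.put("ok", ok);
        if (error != null) {
            m.put("error", error);
        }
        return m;
    }

    public static void main(String[] args) {
        System.out.println(toMap(true, null, Map.of("msg", "Hello, world!")));
        // {msg=Hello, world!, ok=true}

        System.out.println(toMap(false, "boom", Map.of()));
        // {ok=false, error=boom}
    }
}
```
+
+In a flow, this is why `${myResult.ok}` can be used to branch on the task's
+outcome and `${myResult.error}` holds the failure message.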
+
+The `task` syntax is recommended for most use cases, especially when dealing
+with multiple input parameters.
+
+### Dry-run mode
+
+[Dry-run mode](../processes-v2/index.md#dry-run-mode) is useful for testing and validating
+the flow and task logic before running it in production.
+
+To mark a task as ready for execution in dry-run mode, annotate it with the
+`com.walmartlabs.concord.runtime.v2.sdk.DryRunReady` annotation:
+
+```java
+@DryRunReady
+@Named("myTask")
+public class MyTask implements Task {
+
+    @Override
+    public TaskResult execute(Variables input) throws Exception {
+        String name = input.assertString("name");
+        return TaskResult.success()
+                .value("msg", "Hello, " + name + "!");
+    }
+}
+```
+
+If you need to change the task's logic depending on whether it is running in
+dry-run mode, use `context.processConfiguration().dryRun()`, which indicates
+whether the process is running in dry-run mode:
+
+```java
+@DryRunReady
+@Named("myTask")
+public class MyTask implements Task {
+
+    private final boolean dryRunMode;
+
+    @Inject
+    public MyTask(Context context) {
+        this.dryRunMode = context.processConfiguration().dryRun();
+    }
+
+    @Override
+    public TaskResult execute(Variables input) throws Exception {
+        if (dryRunMode) {
+            return TaskResult.success();
+        }
+
+        // here is the logic that can't be executed in dry-run mode
+        // ...
+    }
+}
+```
+
+### Task Output
+
+The task must return a `TaskResult` instance. The `TaskResult` class
+provides methods to return additional values as the task call's result. A task
+can return multiple values:
+
+```java
+return TaskResult.success()
+        .value("foo", "bar")
+        .value("baz", 123);
+```
+
+Values of any type can be returned, but we recommend returning standard JDK
+types, preferably `Serializable` ones, to avoid serialization issues (e.g.
+when using [forms](../getting-started/forms.md)).
+
+If you need to return a complex data structure, consider converting it to
+regular Java collections. The runtime provides
+[Jackson](https://github.com/FasterXML/jackson) as the default JSON/YAML
+library, which can also be used to convert arbitrary data classes into
+regular `Map`s and `List`s:
+
+```java
+import com.fasterxml.jackson.databind.ObjectMapper;
+
+import java.io.Serializable;
+import java.util.List;
+import java.util.Map;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    @Override
+    public TaskResult execute(Variables input) throws Exception {
+        MyResult result = new MyResult();
+        ObjectMapper om = new ObjectMapper();
+        return TaskResult.success()
+                .values(om.convertValue(result, Map.class));
+    }
+
+    public static class MyResult implements Serializable {
+        String data;
+        List<String> stuff;
+    }
+}
+```
+
+In the example above, the properties of the `MyResult` instance become values
+in the resulting `Map`:
+
+```yaml
+- task: myTask
+  out: result
+
+- log: |
+    data = ${result.data}
+    stuff = ${result.stuff}
+```
+
+### Injectable Services
+
+The SDK provides a number of services that can be injected into task
+classes using the `javax.inject.Inject` annotation:
+
+- `Context` - provides access to the current call's environment, low-level
+  access to the runtime, etc. See the [Call Context](#call-context) section
+  for more details;
+- `DependencyManager` - a common way for tasks to work with external
+  dependencies. See the [Using External Artifacts](#using-external-artifacts)
+  section for details.
+
+### Call Context
+
+To access the current task call's environment,
+`com.walmartlabs.concord.runtime.v2.sdk.Context` can be injected into the task
+class:
+
+```java
+@Named("myTask")
+public class MyTask implements Task {
+
+    private final Context ctx;
+
+    @Inject
+    public MyTask(Context ctx) {
+        this.ctx = ctx;
+    }
+}
+```
+
+The `Context` object provides access to multiple features, such as:
+
+- `workingDirectory()` - returns `Path`, the working directory of the current
+  process;
+- `processInstanceId()` - returns `UUID`, the current process' unique
+  identifier;
+- `variables()` - provides access to the current flow's `Variables`, i.e. all
+  variables defined before the current task call;
+- `defaultVariables()` - default input parameters for the current task. See
+  the [Environment Defaults](#environment-defaults) section for more details.
+
+For the complete list of provided features please refer to Javadoc of
+the `Context` interface.
+
+### Using External Artifacts
+
+The runtime provides a way for tasks to download and cache external artifacts:
+```java
+import com.walmartlabs.concord.runtime.v2.sdk.*;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    private final DependencyManager dependencyManager;
+
+    @Inject
+    public MyTask(DependencyManager dependencyManager) {
+        this.dependencyManager = dependencyManager;
+    }
+
+    @Override
+    public TaskResult execute(Variables input) throws Exception {
+        URI uri = ...
+        Path p = dependencyManager.resolve(uri);
+        // ...do something with the returned path
+    }
+}
+```
+
+The `DependencyManager` is an `@Inject`-able service that takes care of
+resolving, downloading and caching URLs. It supports all URL types as
+the regular [dependencies](./configuration.md#dependencies) section in
+Concord YAML files - `http(s)`, `mvn`, etc.
+
+Typically, cached copies persist between process executions (depending on
+the Concord environment's configuration).
+
+Tasks shouldn't expect the returned path to be writable (i.e. they should
+assume read-only access).
+
+`DependencyManager` shouldn't be used as a way to download deployment
+artifacts. It's not a replacement for [Ansible]({{ site.concord_plugins_v1_docs }}/ansible.md) or any
+other deployment tool.
+
+### Environment Defaults
+
+Instead of hard coding parameters like endpoint URLs, credentials and other
+environment-specific values, use `Context#defaultVariables`:
+
+```java
+import com.walmartlabs.concord.runtime.v2.sdk.*;
+
+@Named("myTask")
+public class MyTask implements Task {
+
+    private final Context ctx;
+
+    @Inject
+    public MyTask(Context ctx) {
+        this.ctx = ctx;
+    }
+
+    @Override
+    public TaskResult execute(Variables input) throws Exception {
+        Map<String, Object> defaults = ctx.defaultVariables().toMap();
+        ...
+    }
+}
+```
+
+The environment-specific defaults are provided using a
+[Default Process Configuration Rule](../getting-started/policies.md#default-process-configuration-rule)
+policy. A `defaultTaskVariables` entry matching the plugin's `@Named` value is
+provided to the plugin at runtime via the `ctx.defaultVariables()` method.
+
+```json
+{
+  "defaultProcessCfg": {
+    "defaultTaskVariables": {
+      "github": {
+        "apiUrl": "https://github.example.com/api/v3"
+      }
+    }
+  }
+}
+```
+
+Check out the
+[GitHub task]({{site.concord_plugins_source}}blob/master/tasks/git/src/main/java/com/walmartlabs/concord/plugins/git/v2/GithubTaskV2.java#L43)
+as an example.
+
+### Error Handling
+
+By default, a task should throw an exception in case of execution errors or
+invalid input parameters. Consider supporting an `ignoreErrors` parameter to
+catch all execution errors except for input validation errors.
+
+Throw an exception:
+
+```yaml
+- task: myTask
+  in:
+    url: "https://httpstat.us/404"
+```
+
+Save the error in the `result` variable:
+
+```yaml
+- task: myTask
+  in:
+    url: "https://httpstat.us/404"
+  ignoreErrors: true
+  out: result
+
+- log: "${result.errorCode}"
+```
+
+### Unit Tests
+
+Consider using unit tests to quickly test the task without publishing
+SNAPSHOT versions. Use a library like [Mockito](https://site.mockito.org/) to
+replace the task's dependencies with "mocks":
+
+```java
+@Test
+public void test() throws Exception {
+    Map<String, Object> input = new HashMap<>();
+    input.put("name", "Concord");
+
+    MyTask t = new MyTask(someService);
+    TaskResult.SimpleResult result = (TaskResult.SimpleResult) t.execute(new MapBackedVariables(input));
+
+    assertEquals("Hello, Concord!", result.toMap().get("msg"));
+}
+```
+
+### Integration Tests
+
+The [testcontainers-concord](https://github.com/concord-workflow/testcontainers-concord)
+project provides a JUnit4 test rule to run Concord in Docker. See
+[the complete example](#complete-example) for more details.
+
+Alternatively, it is possible to test a task using a running Concord instance
+without publishing the task's JAR. Concord automatically adds `lib/*.jar` files
+from [the payload archive](../api/process.md#zip-file) to the process'
+classpath. This mechanism can be used to upload local JAR files and,
+consequently, to test locally-built JARs. Check out the
+[custom_task]({{ site.concord_source }}/tree/master/examples/custom_task)
+example. It uses Maven to collect all `compile` dependencies of the task
+and creates a payload archive with the dependencies and the task's JAR.
+
+**Note:** It is important to use the `provided` scope for dependencies that
+are already included in the runtime. See the [Creating Tasks](#creating-tasks)
+section for the list of provided dependencies.
diff --git a/docs/src/templates/index.md b/docs/src/templates/index.md
new file mode 100644
index 0000000000..9d3bf2e909
--- /dev/null
+++ b/docs/src/templates/index.md
@@ -0,0 +1,114 @@
+# Templates
+
+> Consider the simpler mechanism of [Imports](../processes-v1/imports.md)
+first.
+
+Templates allow users to share common elements between different
+projects and processes.
+
+Templates can contain the same types of files which are used in a regular
+[process payload](../getting-started/processes.md), plus additional
+instructions on how to modify the process' data.
+
+Process files overwrite any template files with the same name; this way a
+user can "override" any resource provided by a template.
+
+Additionally, using [template aliases](#usage), it is possible to create
+Concord flows which don't require sending a payload archive or having a Git
+repository with flows, and can be started with a simple HTTP request.
+
+## Creating a template
+
+A template is a regular JAR or ZIP archive with a structure similar to a
+regular process payload.
+
+For example, the
+[ansible template]({{site.concord_source}}tree/master/plugins/templates/ansible/src/main/filtered-resources)
+has the following structure:
+
+```
+_callbacks/   # (1)
+  trace.py
+
+processes/    # (2)
+  main.yml
+
+_main.js      # (3)
+```
+
+It contains additional resources needed for the Ansible task (1),
+a folder with a flow definition (2) and a pre-processing script (3).
+
+The template archive must be uploaded to a repository just like a regular
+artifact.
+
+## Pre-processing
+
+If a template contains a `_main.js` file, it is executed by the server
+before starting a process. The script must return a JSON object which is
+used as the process variables.
+
+For example, if `_main.js` looks like this:
+
+```javascript
+({
+  entryPoint: "main",
+  arguments: {
+    message: _input.message,
+    name: "Concord"
+  }
+})
+```
+
+then given this request data
+
+```json
+{
+  "message": "Hello,"
+}
+```
+
+the process variables will look like this:
+
+```json
+{
+  "entryPoint": "main",
+  "arguments": {
+    "message": "Hello,",
+    "name": "Concord"
+  }
+}
+```
+
+A special `_input` variable provides access to the source request data from
+within the template script.
+
+## Usage
+
+A template can be referenced with a `template` entry in the process configuration:
+
+```yaml
+flows:
+  default:
+    - log: "${message} ${name}"
+
+configuration:
+  template: "http://host/path/my-template.jar"
+  # or by using a maven repository
+  template: "mvn://groupId:artifactId:version"
+```
+
+Only one template can be used at a time.
+
+The `template` parameter can also be specified in request JSON data
+or profiles.
+
+Templates can also be referenced by their aliases:
+
+```yaml
+configuration:
+  template: "my-template"
+```
+
+The alias must be added using the [template API](../api/template.md).
diff --git a/docs/src/triggers/cron.md b/docs/src/triggers/cron.md
new file mode 100644
index 0000000000..2b0d94e45a
--- /dev/null
+++ b/docs/src/triggers/cron.md
@@ -0,0 +1,92 @@
+# Cron Triggers
+
+You can schedule execution of flows by defining one or multiple `cron` triggers.
+
+Each `cron` trigger is required to specify the flow to execute with the
+`entryPoint` parameter. Optionally, key/value pairs can be supplied as
+`arguments`.
+
+The `spec` parameter is used to supply a regular execution schedule for the
+flow using the [CRON syntax](https://en.wikipedia.org/wiki/Cron).
+
+The following example trigger kicks off a process to run the `hourlyCleanUp`
+flow whenever the minute value is 30, hence once every hour.
+
+```yaml
+flows:
+  hourlyCleanUp:
+    - log: "Sweep and wash."
+
+triggers:
+  - cron:
+      spec: "30 * * * *"
+      entryPoint: hourlyCleanUp
+```
+
+Multiple values can be used to achieve shorter intervals, e.g. every 15
+minutes with `spec: 0,15,30,45 * * * *`. A daily execution at 9:00 can be
+specified with `spec: 0 9 * * *`. The remaining fields can be used for the
+hour, day and other values, and advanced
+[CRON](https://en.wikipedia.org/wiki/Cron) features are supported as well.
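+
+For reference, the five fields of a `spec` are, in order: minute, hour, day
+of month, month, and day of week. A trivial sketch of splitting a spec into
+its fields:
+
```java
public class CronFields {

    public static void main(String[] args) {
        String spec = "30 * * * *";
        String[] fields = spec.split("\\s+");

        // fields[0] = minute, fields[1] = hour, fields[2] = day of month,
        // fields[3] = month, fields[4] = day of week
        System.out.println("minute field: " + fields[0]); // prints "minute field: 30"
    }
}
```
+
+Keeping the field order in mind makes specs like `0 9 * * *` ("minute 0 of
+hour 9, every day") easier to read at a glance.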
+
+Cron triggers that include a specific hour of day can also specify a timezone
+value for stricter control. Otherwise, the Concord instance's default timezone
+is used.
+
+```yaml
+flows:
+  cronEvent:
+    - log: "On cron event."
+
+triggers:
+  - cron:
+      spec: "0 12 * * *"
+      timezone: "Europe/Moscow"
+      entryPoint: cronEvent
+```
+
+Values for the timezone are derived from the
+[tzdata](https://en.wikipedia.org/wiki/Tz_database)
+database as used in the
+[Java TimeZone class](https://docs.oracle.com/javase/8/docs/api/java/util/TimeZone.html).
+You can use any of the TZ values from the
+[full list of zones](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones).
+
+Each trigger execution receives an `event` object with the properties `event.fireAt`
+and `event.spec` as well as any additional arguments supplied in the
+configuration (e.g. `arguments` or `activeProfiles`):
+
+```yaml
+flows:
+  eventOutput:
+  - log: "${name} - event run at ${event.fireAt} due to spec ${event.spec} started."
+triggers:
+- cron:
+    spec: "* 12 * * *"
+    entryPoint: eventOutput
+    activeProfiles:
+      - myProfile
+    arguments:
+      name: "Concord"
+```
+
+Scheduled events are a useful feature to enable tasks such as regular cleanup
+operations, batch reporting or processing, and other repeating tasks that are
+automated via a Concord flow.
+
+**Note:** standard [limitations](./index.md#limitations) apply.
+
+## Running as a Specific User
+
+Cron-triggered processes run as a system `cron` user by default. This user may
+not have access to certain resources (e.g. Secrets, JSON Store). A user's API
+key can be referenced from a project-scoped single-value (string)
+[Secret](../console/secret.md) to run
+the process as the user.
+
+```yaml
+triggers:
+- cron:
+    spec: "* 12 * * *"
+    entryPoint: cronEvent
+    runAs:
+      # secret must be scoped to the project and contain the API token of the initiator
+      withSecret: "user-api-key-secret-name"
+```
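+
+Such a secret can be created via the REST API (a sketch, assuming the standard
+Concord Secret API end-point; organization, project and credential values are
+illustrative):
+
+```
+curl -u myuser \
+  -F name=user-api-key-secret-name \
+  -F type=data \
+  -F data='<the-user-api-token>' \
+  -F project=myProject \
+  https://concord.example.com/api/v1/org/MyOrg/secret
+```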
diff --git a/docs/src/triggers/generic.md b/docs/src/triggers/generic.md
new file mode 100644
index 0000000000..8bef719ae3
--- /dev/null
+++ b/docs/src/triggers/generic.md
@@ -0,0 +1,78 @@
+# Generic
+
+- [Version 2](#version-2)
+- [Version 1](#version-1)
+- [Migration](#migrating-generic-trigger-from-v1-to-v2)
+
+You can configure generic triggers to respond to events submitted to the
+Concord REST API.
+
+Currently, Concord supports two different implementations of generic triggers:
+`version: 1` and `version: 2`.
+
+**Note:** Generic triggers have a greater impact on server performance than
+starting processes directly through the [Process API](../api/process.md).
+Consult with your system administrator before implementing generic triggers.
+
+
+
+## Version 2
+
+For example, if you submit a JSON document to the API at `/api/v1/events/example`,
+an `example` event is triggered. You can capture this event and trigger a flow by
+creating a trigger configuration using the same `example` name:
+
+```yaml
+triggers:
+- example:
+    version: 2
+    entryPoint: exampleFlow
+    conditions:
+      aField: "aValue"
+```
+
+Every incoming `example` event with a JSON field `aField` containing `aValue` kicks
+off a process of the `exampleFlow` flow.
+
+The generic event end-point provides a simple way of integrating third-party
+systems with Concord. Simply modify or extend the external system to send
+events to the Concord API and define the flow in Concord to proceed with the
+next steps.
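+
+For example, the `example` event above can be sent with an API request like this
+(host name and token are illustrative):
+
+```
+curl -i \
+  -H 'Authorization: <api-token>' \
+  -H 'Content-Type: application/json' \
+  -d '{"aField": "aValue"}' \
+  https://concord.example.com/api/v1/events/example
+```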
+
+**Note:** standard [limitations](./index.md#limitations) apply.
+
+
+
+## Version 1
+
+```yaml
+- example:
+    version: 1 # optional, depends on the environment's defaults
+    aField: "aValue"
+    entryPoint: exampleFlow
+```
+
+Check out the [full example]({{site.concord_source}}tree/master/examples/generic_triggers)
+for more details.
+
+**Note:** standard [limitations](./index.md#limitations) apply.
+
+
+
+## Migrating Generic trigger from v1 to v2
+
+In `version: 2`, the trigger conditions are moved into a `conditions` field:
+
+```yaml
+# v1
+- example:
+    aField: "aValue"
+    entryPoint: exampleFlow
+
+# v2
+- example:
+    version: 2
+    conditions:
+      aField: "aValue"
+    entryPoint: exampleFlow
+```
diff --git a/docs/src/triggers/github.md b/docs/src/triggers/github.md
new file mode 100644
index 0000000000..fca2272eb1
--- /dev/null
+++ b/docs/src/triggers/github.md
@@ -0,0 +1,299 @@
+# GitHub
+
+- [Usage](#usage)
+- [Examples](#examples)
+ - [Push Notifications](#push-notifications)
+ - [Pull Requests](#pull-requests)
+ - [Organization Events](#organization-events)
+ - [Common Events](#common-events)
+
+## Usage
+
+The `github` event source allows Concord to receive `push` and `pull_request`
+notifications from GitHub. Here's an example:
+
+```yaml
+flows:
+  onPush:
+  - log: "${event.sender} pushed ${event.commitId} to ${event.payload.repository.full_name}"
+
+triggers:
+- github:
+    version: 2
+    useInitiator: true
+    entryPoint: onPush
+    conditions:
+      type: push
+```
+
+The `github` trigger supports the following attributes:
+
+- `entryPoint` - string, mandatory, the name of the flow that Concord starts
+when a GitHub event matches the trigger conditions;
+- `activeProfiles` - list of strings, optional, the list of profiles that
+Concord applies to the process;
+- `useInitiator` - boolean, optional, the process initiator is set to the
+event's `sender` when this attribute is set to `true`;
+- `useEventCommitId` - boolean, optional, Concord will use the event's commit
+ID to start the process;
+- `ignoreEmptyPush` - boolean, optional, if `true` Concord skips empty `push`
+notifications, i.e. pushes with the same `after` and `before` commit IDs.
+Default value is `true`;
+- `exclusive` - object, optional, exclusive execution configuration for process;
+- `arguments` - object, optional, additional parameters that are passed to
+the flow;
+- `conditions` - object, mandatory, conditions for GitHub event matching.
+
+Possible GitHub trigger `conditions`:
+
+- `type` - string, mandatory, GitHub event name;
+- `githubOrg` - string or regex, optional, GitHub organization name. Default is
+the current repository's GitHub organization name;
+- `githubRepo` - string or regex, optional, GitHub repository name. Default is
+the current repository's name;
+- `githubHost` - string or regex, optional, GitHub host;
+- `branch` - string or regex, optional, event branch name. Default is the
+current repository's branch;
+- `sender` - string or regex, optional, event sender;
+- `status` - string or regex, optional. For `pull_request` notifications
+possible values are `opened` or `closed`. A complete list of values can be
+found [here](https://developer.github.com/v3/activity/events/types/#pullrequestevent);
+- `repositoryInfo` - a list of objects, information about the matching Concord
+repositories (see below);
+- `payload` - key-value, optional, the GitHub event payload.
+
+The `repositoryInfo` condition allows triggering on GitHub repository events
+that have matching Concord repositories. See below for examples.
+
+The `repositoryInfo` entries have the following structure:
+- `projectId` - UUID, ID of a Concord project with the registered repository;
+- `repositoryId` - UUID, ID of the registered repository;
+- `repository` - string, name of the registered repository;
+- `branch` - string, the configured branch in the registered repository;
+- `enabled` - boolean, enabled or disabled state of the registered repository.
+
+The `exclusive` section in the trigger definition can be used to configure
+[exclusive execution](../processes-v1/configuration.md#exclusive-execution)
+of the process:
+
+```yaml
+triggers:
+- github:
+    version: 2
+    useInitiator: true
+    entryPoint: onPush
+    exclusive:
+      groupBy: "branch"
+      mode: "cancelOld"
+    conditions:
+      type: push
+```
+
+In the example above, if another process started by a GitHub event in the same
+branch is already running in the same project, it is cancelled immediately.
+This mechanism can be used, for example, to cancel processes started by `push`
+events if a new commit appears in the same Git branch.
+
+The `exclusive` entry has the following structure:
+- `group` - string, optional;
+- `groupBy` - string, optional, allowed values:
+ - `branch` - group processes by the branch name;
+- `mode` - string, mandatory, allowed values:
+ - `cancel` - cancel all new processes if there's a process already running
+ in the `group`;
+ - `cancelOld` - all running processes in the same `group` that started before
+ the current one are cancelled;
+ - `wait` - only one process in the same `group` is allowed to run.
+
+**Note:** this feature is available only for processes running in projects.
+
+The `event` object provides all attributes from the trigger conditions, filled
+in with the GitHub event's data.
+
+Refer to GitHub's [Webhooks](https://developer.github.com/webhooks/)
+documentation for the complete list of event types and `payload` structure.
+
+**Note:** standard [limitations](./index.md#limitations) apply.
+
+## Examples
+
+### Push Notifications
+
+To listen for all commits into the branch configured in the project's
+repository:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: "onPush"
+    conditions:
+      type: "push"
+```
+
+The following example trigger fires when someone pushes to a development branch
+with a name starting with `dev-`, e.g. `dev-my-feature`, `dev-bugfix`, and
+ignores pushes if the branch is deleted:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: "onPush"
+    conditions:
+      branch: "^dev-.*$"
+      type: "push"
+      payload:
+        deleted: false
+```
+
+The following example trigger fires when someone pushes/merges into master, but
+ignores pushes by `jenkinspan` and `anothersvc`:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: "onPush"
+    conditions:
+      type: "push"
+      branch: "master"
+      sender: "^(?!.*(jenkinspan|anothersvc)).*$"
+```
+
+The following example triggers fire when files matching the given paths in the
+`push` event are `added`, `modified`, or `deleted`; `any` matches all three.
+
+```yaml
+# fire when concord.yml is modified
+- github:
+    version: 2
+    entryPoint: "onConcordUpdate"
+    conditions:
+      type: "push"
+      branch: "master"
+      files:
+        modified:
+          - "concord.yml"
+
+# fire when any file within src/ is added, modified, or deleted
+- github:
+    version: 2
+    entryPoint: "onSrcChanged"
+    conditions:
+      type: "push"
+      branch: "master"
+      files:
+        any:
+          - "src/.*"
+```
+
+### Pull Requests
+
+To receive a notification when a PR is opened:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: "onPr"
+    conditions:
+      type: "pull_request"
+      status: "opened"
+      branch: ".*"
+```
+
+To trigger a process when a new PR is opened or commits are added to the existing PR:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: "onPr"
+    conditions:
+      type: "pull_request"
+      status: "(opened|synchronize)"
+      branch: ".*"
+```
+
+To trigger a process when a PR is merged:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: "onPr"
+    conditions:
+      type: "pull_request"
+      status: "closed"
+      branch: ".*"
+      payload:
+        pull_request:
+          merged: true
+```
+
+The next example trigger only fires on pull requests that have the label `bug`:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: "onBug"
+    conditions:
+      type: "pull_request"
+      payload:
+        pull_request:
+          labels:
+            - { name: "bug" }
+```
+
+### Organization Events
+
+To receive notifications about team membership changes in the current project's
+organization:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: "onTeamChange"
+    conditions:
+      type: "membership"
+      githubRepo: ".*"
+```
+
+To trigger a process when a team is added to the current repository:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: "onTeamAdd"
+    conditions:
+      type: "team_add"
+```
+
+### Common Events
+
+If `https://github.com/myorg/producer-repo` is registered in Concord as
+`producerRepo`, put `producerRepo` in the `repository` field under
+`repositoryInfo` as shown below. The following trigger receives all matching
+events for the registered repository:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: onPush
+    conditions:
+      repositoryInfo:
+        - repository: producerRepo
+```
+
+Regular expressions can be used to subscribe to *all* GitHub repositories
+handled by the registered webhooks:
+
+```yaml
+- github:
+    version: 2
+    entryPoint: onEvent
+    conditions:
+      githubOrg: ".*"
+      githubRepo: ".*"
+      branch: ".*"
+```
+
+**Note:** subscribing to all GitHub events can be restricted on the system
+policy level. Ask your Concord instance administrator if it is allowed in your
+environment.
diff --git a/docs/src/triggers/index.md b/docs/src/triggers/index.md
new file mode 100644
index 0000000000..7ca5369b3c
--- /dev/null
+++ b/docs/src/triggers/index.md
@@ -0,0 +1,158 @@
+# Overview
+
+Triggers provide a way to automatically start specific Concord flows as a
+response to specific events.
+
+- [Common Syntax](#common-syntax)
+- [Supported Triggers](#supported-triggers)
+- [Exclusive Triggers](#exclusive-triggers)
+- [Security](#security)
+- [Limitations](#limitations)
+
+## Common Syntax
+
+All triggers work the same way:
+
+- Concord matches the patterns you specify as triggers to event data.
+- event data is typically external, but can be produced internally, as in the
+case of [cron triggers](./cron.md).
+- for each matched trigger, Concord starts a new process.
+
+You define triggers in the `triggers` section of a `concord.yml` file, as in
+this example:
+
+```yaml
+triggers:
+- eventSource:
+    parameter1: ".*123.*"
+    parameter2: false
+    entryPoint: myFlow
+    activeProfiles:
+      - myProfile
+    arguments:
+      myValue: "..."
+    exclusive:
+      group: "myGroup"
+      mode: cancel
+...
+```
+
+When the API end-point `/api/v1/events/` receives an event, Concord detects any
+existing matches with trigger names.
+
+This allows you to publish events to `/api/v1/events/eventSource` for matching
+with triggers (where `eventSource` is any string).
+
+Further:
+
+- Concord detects any matches of `parameter1` and `parameter2` with the external
+ event's parameters;
+- `entryPoint` is the name of the flow that Concord starts when there is a match;
+- `activeProfiles` is the list of [profiles](../processes-v1/profiles.md)
+  to activate for the process;
+- `arguments` is the list of additional parameters that are passed to the flow;
+- `exclusive` is the exclusivity info of the [exclusive group](#exclusive-triggers).
+
+Parameters can contain YAML literals as follows:
+
+- strings
+- numbers
+- boolean values
+- regular expressions
+
+The `triggers` section can contain multiple trigger definitions. Each matching
+trigger is processed individually--each match can start a new process.
+
+A trigger definition without match attributes is activated for any event
+received from the specified source.
+
+In addition to the `arguments` list, a started flow receives the `event`
+parameter which contains attributes of the external event. Depending on the
+source of the event, the exact structure of the `event` object may vary.
+
+## Supported Triggers
+
+- [GitHub](./github.md)
+- [Cron](./cron.md)
+- [Manual](./manual.md)
+- [Generic](./generic.md)
+- [OneOps](./oneops.md)
+
+## Exclusive Triggers
+
+There is an option to make triggered processes "exclusive". This prevents
+the process from starting if there are any other processes in the same project
+with the same "exclusive group":
+
+```yaml
+flows:
+  cronEvent:
+  - log: "Hello!"
+  - ${sleep.ms(65000)} # wait for 1m 5s
+
+triggers:
+- cron:
+    spec: "* * * * *" # run every minute
+    timezone: "America/Toronto"
+    entryPoint: cronEvent
+```
+
+In this example, if the triggered process runs longer than the trigger's period,
+multiple `cronEvent` processes can run at the same time. In some cases, it is
+necessary to enforce that only one triggered process runs at a time, e.g. due to
+limitations in the target systems being accessed:
+
+```yaml
+triggers:
+- cron:
+    spec: "* * * * *"
+    timezone: "America/Toronto"
+    entryPoint: cronEvent
+    exclusive:
+      group: "myGroup"
+      mode: "cancel" # or "wait"
+```
+
+Any process with the same `exclusive` group value is automatically prevented
+from starting if a running process in the same group exists. If you wish to
+enqueue the processes instead, use `mode: "wait"`.
+
+See also [Exclusive Execution](../processes-v1/configuration.md#exclusive-execution)
+section in the Concord DSL documentation.
+
+## Security
+
+Triggering a project process requires at least
+[READER-level privileges](../getting-started/orgs.md#teams).
+
+To activate a trigger using the API, the request must be correctly
+authenticated first. To activate a [generic trigger](./generic.md) one can
+use an API request similar to this:
+
+```
+curl -ik \
+  -H 'Authorization: <api-token>' \
+  -H 'Content-Type: application/json' \
+  -d '{"some_value": 123}' \
+  https://concord.example.com/api/v1/events/my_trigger
+```
+
+The owner of the `token` must have the necessary privileges in all projects
+that have such triggers.
+
+Processes started by triggers are executed using the request sender's
+privileges. If the process uses any Concord resources such as
+[secrets](../getting-started/security.md#secret-management) or
+[JSON stores](../getting-started/json-store.md), the user's permissions need
+to be configured accordingly.
+
+## Limitations
+
+Trigger configuration is typically loaded automatically, but can be disabled
+globally or for specific types of repositories. For example, personal Git
+repositories can be treated differently from organizational repositories in
+GitHub. You can force a new parsing and configuration by manually reloading the
+repository content with the **`Refresh`** button beside the repository in
+the Concord Console or by
+[using the API](../api/repository.md#refresh-repository).
diff --git a/docs/src/triggers/manual.md b/docs/src/triggers/manual.md
new file mode 100644
index 0000000000..50c7284a38
--- /dev/null
+++ b/docs/src/triggers/manual.md
@@ -0,0 +1,29 @@
+# Manual
+
+Manual triggers can be used to add items to the repository actions drop-down
+in the Concord Console, alongside the default _Run_ action.
+
+Each `manual` trigger must specify the flow to execute using the `entryPoint`
+parameter. The `name` parameter is the displayed name of the shortcut.
+
+After repository triggers are refreshed, the defined `manual` triggers appear
+as dropdown menu items in the repository actions menu.
+
+```yaml
+triggers:
+- manual:
+    name: Build
+    entryPoint: main
+- manual:
+    name: Deploy Prod
+    entryPoint: deployProd
+- manual:
+    name: Deploy Dev and Test
+    entryPoint: deployDev
+    activeProfiles:
+      - devProfile
+    arguments:
+      runTests: true
+```
+
+**Note:** standard [limitations](./index.md#limitations) apply.
diff --git a/docs/src/triggers/oneops.md b/docs/src/triggers/oneops.md
new file mode 100644
index 0000000000..6a5002def9
--- /dev/null
+++ b/docs/src/triggers/oneops.md
@@ -0,0 +1,112 @@
+# OneOps
+
+- [Version 2](#version-2)
+- [Version 1](#version-1)
+- [Migration](#migrating-oneops-trigger-from-v1-to-v2)
+
+Using `oneops` as an event source allows Concord to receive events from
+[OneOps](https://oneops.github.io). You can configure event properties in the OneOps
+notification sink, specifically for use in Concord triggers.
+
+Currently, Concord supports two different implementations of `oneops` triggers:
+`version: 1` and `version: 2`.
+
+
+
+## Version 2
+
+Deployment completion events can be especially useful:
+
+```yaml
+flows:
+  onDeployment:
+  - log: "OneOps has completed a deployment: ${event}"
+
+triggers:
+- oneops:
+    version: 2
+    conditions:
+      org: "myOrganization"
+      asm: "myAssembly"
+      env: "myEnvironment"
+      platform: "myPlatform"
+      type: "deployment"
+      deploymentState: "complete"
+    useInitiator: true
+    entryPoint: onDeployment
+```
+
+The `event` object, in addition to its trigger parameters, contains a `payload`
+attribute--the original event's data "as is". You can set `useInitiator` to
+`true` in order to make sure that the process is initiated using the `createdBy`
+attribute of the event.
+
+The following example uses the IP address of the deployment component to build
+an Ansible inventory for execution of an [Ansible task]({{site.concord_plugins_v2_docs}}/ansible.html):
+
+```yaml
+flows:
+  onDeployment:
+  - task: ansible
+    in:
+      ...
+      inventory:
+        hosts:
+        - "${event.payload.cis.public_ip}"
+```
+
+**Note:** standard [limitations](./index.md#limitations) apply.
+
+
+
+## Version 1
+
+```yaml
+flows:
+  onDeployment:
+  - log: "OneOps has completed a deployment: ${event}"
+
+triggers:
+- oneops:
+    version: 1 # optional, depends on the environment's defaults
+    org: "myOrganization"
+    asm: "myAssembly"
+    env: "myEnvironment"
+    platform: "myPlatform"
+    type: "deployment"
+    deploymentState: "complete"
+    useInitiator: true
+    entryPoint: onDeployment
+```
+
+**Note:** standard [limitations](./index.md#limitations) apply.
+
+
+
+## Migrating OneOps trigger from v1 to v2
+
+In `version: 2`, the trigger conditions are moved into a `conditions` field:
+
+```yaml
+# v1
+- oneops:
+    org: "myOrganization"
+    asm: "myAssembly"
+    env: "myEnvironment"
+    platform: "myPlatform"
+    type: "deployment"
+    deploymentState: "complete"
+    useInitiator: true
+    entryPoint: onDeployment
+
+# v2
+- oneops:
+    version: 2
+    conditions:
+      org: "myOrganization"
+      asm: "myAssembly"
+      env: "myEnvironment"
+      platform: "myPlatform"
+      type: "deployment"
+      deploymentState: "complete"
+    useInitiator: true
+    entryPoint: onDeployment
+```
\ No newline at end of file