
Docker Delivery Hook

Webhook endpoint to trigger docker container rebuilds.

Use case

  • You build and push your container images from a CI/CD pipeline to a container registry.
  • You run your containers on a VM, created from a docker compose file or a docker swarm stack.
  • You are looking for a way to pull the new image and recreate your containers after CI/CD completes.
  • You want to avoid polling the container registry on an interval.
  • You want to avoid setting up SSH from the pipeline into your server.
  • You want to keep the ability to maintain your containers outside of CI/CD events.

Install

Docker Compose example:

services:
  docker-delivery-hook:
    image: ghcr.io/bbilly1/docker-delivery-hook
    container_name: docker-delivery-hook
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /path/to/docker-compose.yml:/path/to/docker-compose.yml:ro
    ports:
      - "127.0.0.1:8000:8000"
    environment:
      SECRET_KEY: "your-very-secret-key"

Permissions

You can run the application under a custom user instead of the default root user. You need to make sure the user running in the container has access to the docker socket.

On your host system, verify the user and group permissions of the docker socket, e.g.:

stat /var/run/docker.sock
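
To print just the numeric group id, e.g. with GNU stat:

stat -c '%g' /var/run/docker.sock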

Note the Gid of the socket and use it as the group of the user running the container, e.g.:

services:
  docker-delivery-hook:
    user: 1000:988
    ...

Volumes

  • Docker Socket: Mount the host docker socket into the container to allow the container to execute docker commands on the host. See security considerations below.
  • Compose File (when using the compose endpoints): Crucially, mount the docker-compose.yml file at exactly the same absolute path inside the container as on the host machine. Docker tracks the compose environment with the labels com.docker.compose.project.config_files and com.docker.compose.project.working_dir. Interacting with existing containers requires the same compose location, otherwise docker will treat it as a separate compose project. See the example after this list.
  • Docker Context: If your project depends on additional files, like env files defined with the env_file key, make sure the container has the same context by mounting those folders to the same location.
  • If needed for authentication, mount the docker config.json file into the container at $HOME/.docker/config.json, where $HOME is the home folder inside the container. If you are running the container under a specific user, $HOME will default to /, else $HOME will be set to /root.
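
A sketch combining these mounts, assuming your project lives at /srv/app on the host and the container runs as root; all paths are illustrative:

services:
  docker-delivery-hook:
    image: ghcr.io/bbilly1/docker-delivery-hook
    volumes:
      # host socket, so the container can run docker commands
      - /var/run/docker.sock:/var/run/docker.sock
      # compose file mounted at the identical absolute path
      - /srv/app/docker-compose.yml:/srv/app/docker-compose.yml:ro
      # env file referenced via env_file, same location as on the host
      - /srv/app/.env:/srv/app/.env:ro
      # registry auth, $HOME inside the container is /root when running as root
      - /root/.docker/config.json:/root/.docker/config.json:ro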

Environment Variables

Configure the API with these environment variables:

  • SECRET_KEY: Required, shared secret key for signature validation.
  • UVICORN_PORT: Optional, override the internal web server port, defaults to 8000.
  • SHOW_DOCS: Optional, set to anything except an empty string to show the default FastAPI docs. Only for your local dev environment. See the example after this list.
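
For example, to change the internal port and enable the docs in a local dev setup, the values here are illustrative:

services:
  docker-delivery-hook:
    environment:
      SECRET_KEY: "your-very-secret-key"
      UVICORN_PORT: "9000"
      SHOW_DOCS: "true"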

Endpoints

This API exposes the following endpoints. These endpoints are async, meaning they return after request validation while the docker commands process in the background.

Docker Compose

When using these endpoints, see the notes about volumes above.

These endpoints expect a mandatory body in the request with a "container_name" key, e.g.:

{
  "container_name": "your-container-name",
}
  • /pull: Recreate the container by pulling the new image. Only applicable if your compose file defines an image key. That is equivalent to:
docker compose pull container_name && docker compose up -d container_name
  • /build: Rebuild the container by building locally. Only applicable if your compose file defines a build key. Be aware that you either need to pull the build context from a remote like git or mount the correct build context into the container, not just the compose file. That is equivalent to:
docker compose up -d --build container_name

Docker Swarm

These endpoints expect a mandatory body in the request with a "container_name" key and an optional "with_registry_auth" boolean key, e.g.:

{
  "container_name": "your-container-name",
  "with_registry_auth": true,  // this is optional
}
  • /swarm: Rebuild your container in a docker swarm. This is equivalent to:
docker service update --image container_image --force container_name
  • container_name can be either the "NAME" or the "IMAGE" of your service.
  • If you pass the "NAME", the container_image will be looked up automatically.
  • If you pass the "IMAGE", you can omit the tag. This can result in multiple matches; all services built from the specified image will be updated.
  • When in doubt, verify with docker service ls on your manager node, see the example output after this list.
  • Specifying with_registry_auth adds --with-registry-auth to the command, needed for private registries.
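
Illustrative output, the id, service name and image here are made up:

docker service ls
ID             NAME           MODE         REPLICAS   IMAGE                    PORTS
x1y2z3a4b5c6   my-stack_app   replicated   1/1        ghcr.io/you/app:latest   *:80->8000/tcp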

Action

There is an action published to the GitHub marketplace, created from bbilly1/docker-delivery-hook-action. See the instructions there for example usage.

Manual Pipeline Example

PAYLOAD='{"container_name": "my-container-name"}'
SECRET_KEY="your-very-secret-key"
TIMESTAMP=$(date +%s)
MESSAGE="${PAYLOAD}${TIMESTAMP}"
SIGNATURE=$(echo -n "$MESSAGE" | openssl dgst -sha256 -hmac "$SECRET_KEY" | cut -d " " -f 2)

curl -X POST -H "Content-Type: application/json" \
  -H "X-Timestamp: $TIMESTAMP" \
  -H "X-Signature: $SIGNATURE" \
  -d "$PAYLOAD" \
  "$API_ENDPOINT"

Explanation:

  • SECRET_KEY: The shared secret between your pipeline and the API container, usually stored as a secret variable in your pipeline.
  • TIMESTAMP: UTC epoch timestamp in seconds.
  • MESSAGE: The payload concatenated with the timestamp.
  • SIGNATURE: SHA256 HMAC signature of the message. See below for additional examples.
  • PAYLOAD: JSON body with the key "container_name" and the container name as defined in your compose file as the value.

Signature building

Depending on what you have available in your pipeline environment, you might prefer one over the other. Here are some examples:

Using OpenSSL:

SIGNATURE=$(echo -n "$MESSAGE" | openssl dgst -sha256 -hmac "$SECRET_KEY" | cut -d " " -f 2)

Using Python standard library:

SIGNATURE=$(python -c "import hmac, hashlib; print(hmac.new(b'$SECRET_KEY', b'$MESSAGE', hashlib.sha256).hexdigest())")

Using NodeJS:

SIGNATURE=$(node -e "
  const crypto = require('crypto');
  const signature = crypto.createHmac('sha256', '$SECRET_KEY').update('$MESSAGE').digest('hex');
  console.log(signature);
")

Security Considerations

If you see any flaws here, reach out.

Verifications

  • Signature Verification: By passing the X-Signature header with your request, the API can verify that the origin holds the same SECRET_KEY as the API and that the payload arrived unmodified. See the sketch after this list.
  • Timestamp: By passing the X-Timestamp header and including the timestamp in the signed message, an intercepted message cannot be replayed at a later time.
  • Container Name: The container name you send with the payload is verified by docker directly, first by checking all existing containers and searching for a match.
  • Compose Validation: The compose file location is validated by inspecting the container name.
  • Predefined Commands: The commands executed are predefined. The variables going into the commands are validated as described above.
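
Conceptually, the signature and timestamp checks on the receiving side amount to something like this. This is a minimal sketch, not the project's actual code; the variable names and the 5 minute replay window are assumptions:

# Recompute the HMAC over payload + timestamp and compare to the X-Signature header.
EXPECTED=$(printf '%s%s' "$PAYLOAD" "$TIMESTAMP" | openssl dgst -sha256 -hmac "$SECRET_KEY" | cut -d " " -f 2)
[ "$EXPECTED" = "$X_SIGNATURE" ] || { echo "invalid signature" >&2; exit 1; }
# Reject stale timestamps so an intercepted request cannot be replayed later.
NOW=$(date +%s)
[ $((NOW - TIMESTAMP)) -le 300 ] || { echo "stale timestamp" >&2; exit 1; }

A real implementation should also use a constant-time comparison for the signature check.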

SSH in your pipeline

An alternative approach to this problem is to set up SSH in your pipeline. That usually means creating a least-privileged user on your VM, locking down SSH for that user to limit what the user is allowed to do, then adding the private SSH key to your pipeline. As part of your pipeline, you register the key, log in to your VM, and run the needed commands.

That has a few downsides:

  • Requires configuration on the VM. That can be automated with scripting or tools like Ansible, but it is something that needs to be maintained in addition to your application code base.
  • Another SSH key on the VM is required to basically just execute a single command. That is additional exposure you might want to avoid.
  • The private key needs to be in the CI/CD pipeline and will be accessible to everyone with access to the pipeline.
  • SSH is difficult to manage with infrastructure as code. A CI/CD listener on your VM that reacts to webhooks can be managed in your regular docker compose file. Everything can be committed to version control as part of your application, obviously except the SECRET_KEY.
  • Needing SSH access from your pipeline makes hardening your SSH exposure much more difficult. Depending on your environment, you might not know all possible IPs of your runners, and you might not want SSH to be reachable from the internet unrestricted.

Mounting docker socket

You might also want to read up on the implications of mounting docker.sock into a container. Verify the code first and use at your own risk before exposing this to the internet.
