This repository shows how to set up and deploy an Observable Framework frontend application with a backend application written in Go, as well as a sample validator application, onto Google Cloud Run in a sidecar configuration. nginx is used as the ingress container and hosts the statically built Observable Framework application; the backend application and the validator application are separate sidecars.
```
                 ----------------------------------------------------------------------
                 |                                                                    |
                 |  -------------------------------------     ----------------------  |
 ----------      |  |  ---------------                  |     |                    |  |
 | client | <--> |  |  | nginx proxy |  Observable      |<--->|     validator      |  |
 ----------      |  |  ---------------  Framework       |     |    application     |  |
                 |  |  static site served from the proxy|     |       server       |  |
                 |  -------------------------------------     ----------------------  |
                 |        ingress container      ^              sidecar container     |
                 |                               |                                    |
                 |                               v                                    |
                 |  -------------------------------------                             |
                 |  |   backend application server     |                              |
                 |  -------------------------------------                             |
                 |        sidecar container                                           |
                 |                                                                    |
                 ----------------------------------------------------------------------
                                           Cloud Run instance
```
The application is based on https://github.com/df-rw/ob-app.
The validator application accepts requests from the client and validates them before the request is handed off to the backend application. What the validation does is application-dependent, and could be anything like a basic-auth check, a cookie check or JWT validation.
The validator application exists separately from the application and proxy; however, the proxy is (and in production, must be!) configured to send all requests through the validator.
The purpose of having a validator application, aside from its function, is to provide a single method of validation that doesn't require changes to the backend application.
It can also provide a second layer of security. For example, when using Google's Identity-Aware Proxy (IAP), the header `X-Goog-IAP-JWT-Assertion` is added to a client request by IAP before the request reaches the backend application. The validator can check for this header to ensure that IAP is enabled.
For this demo:

- air for rebuilding the Go backend application on file changes during development:

  ```shell
  go install github.com/air-verse/air@latest
  ```

  Configuration for air is in `./.air.toml`.

- Docker for testing all applications in their own containers:

  ```shell
  brew install --cask docker
  ```
```shell
git clone https://github.com/df-rw/ob-app-sidecar
cd ob-app-sidecar
npm install # install modules for Observable Framework
```

You may wish to set the email address of a valid user account while developing the application if you need to differentiate between users. See *Why do nginx-dev.conf and nginx-docker.conf set a Google header?* for why you may want to do this:

```shell
sed -i '' -e 's/[email protected]/<a valid email address>/' nginx-dev.conf
sed -i '' -e 's/[email protected]/<a valid email address>/' nginx-docker.conf
```

For ease of development, each of the nginx, Observable Framework and backend servers should be started in a separate terminal. This allows easy log viewing of each individual service, as well as restarting only individual services if required:

```shell
cd backend && go run ./cmd/web/*.go -p 6082         # Start backend server (or "air").
cd frontend && npm run dev -- --port 6081 --no-open # Start Observable Framework (diff terminal).
nginx -p ./nginx -c ./nginx-dev.conf                # Start nginx (diff terminal).
```

How traffic moves through the development environment:
```
 ----------      ---------------
 | client | <--> | nginx proxy |
 ----------      ---------------
                 :6080   ^   ^
                         |   |
                         |   |    ----------------------
                         |   ---->| backend app server |
                         |        ----------------------
                         |                :6082
                         |
                         |    -------------------------------
                         ---->| Observable Framework server |
                              -------------------------------
                                          :6081
```
- `nginx-dev.conf` is configured to listen on port `6080` for inbound requests, and pass off to Observable Framework on port `6081` and the application server on `6082`.
- nginx proxies requests from the client to either Observable Framework or the application server based on the URL of the request.
- nginx also proxies the Observable Framework websocket connection for live-reloading of the frontend. Changes you make to Observable Framework code will be automatically reloaded in the client.
- The validator service is stubbed out in `nginx-dev.conf` and is set to let all requests pass through. You can change this as and when required, but the idea is for validation to not get in the way while writing your application.
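One common way to wire a validator into nginx is the `auth_request` module, which sends a subrequest for every inbound request and allows it only on a 2xx response. A stub that always passes might look like this (illustrative, not the literal contents of `nginx-dev.conf`):

```nginx
# Every request is checked by the validator subrequest first;
# auth_request treats any 2xx response as "allow".
location / {
    auth_request /validate;
    proxy_pass   http://127.0.0.1:6081;
}

# Stubbed validator: unconditionally succeed, so validation never
# gets in the way during development.
location = /validate {
    internal;
    return 204;
}
```

Pointing `/validate` at a real validator service later requires no changes to the backend or frontend.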
Open a browser to http://localhost:6080. Click click click, hack hack hack.
`nginx-dev.conf` is set up to pass any requests starting with `/api/` to the backend application. If there are specific paths you wish to forward to the backend application, adjust `nginx-*.conf` to suit.
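Using the development ports above, the split might look like this (a sketch, not the literal contents of `nginx-dev.conf`):

```nginx
# Anything under /api/ goes to the Go backend...
location /api/ {
    proxy_pass http://127.0.0.1:6082;
}

# ...everything else (pages, assets, the live-reload websocket)
# goes to the Observable Framework dev server.
location / {
    proxy_pass http://127.0.0.1:6081;
}
```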
Once you're happy with your application, you may wish to test it locally with each part of the whole in a separate container. We can do this with Docker.
```
               ----------------------------------------------
               |  ---------------                           |      --------------------------
 ----------    |  | nginx proxy |  Observable Framework     |      |  --------------------  |
 | client |<-->|  ---------------  static site served from  |<---->|  | validator server |  |
 ----------    |     the proxy                              |      |  --------------------  |
               |                                            |      --------------------------
               ----------------------------------------------        :8081 validator container
                :8080 ingress container
                                      ^
                                      |
                                      v
               ----------------------------------------------
               |          ----------------------            |
               |          | backend app server |            |
               |          ----------------------            |
               ----------------------------------------------
                :8082 application container
```
- We set up the containers using docker compose. Configuration is in `compose.yaml`.
- `nginx-docker.conf` is configured to listen on port `8080` for inbound requests and pass off application requests to the application server on `8082`, with the validation service listening on port `8081`.
- `make docker.build` will build all containers.
- `make docker.up` will run all containers.
- `make docker.down` will stop everything.
- `make docker.clean` will kill everything.
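A `compose.yaml` for this layout might look roughly like the following. The Dockerfile names are the repository's; the `ingress` and `validator` service names are assumptions (only `backend` is named in this README), so check the real file:

```yaml
services:
  ingress:                     # nginx plus the static Observable build
    build:
      context: .
      dockerfile: Dockerfile.ingress-local
    ports:
      - "8080:8080"            # the only port exposed to the host
  validator:
    build:
      context: .
      dockerfile: Dockerfile.validator-dummy
  backend:
    build:
      context: .
      dockerfile: Dockerfile.backend
```

Only the ingress publishes a host port; the validator and backend are reachable solely over the compose network, mirroring the Cloud Run layout.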
Open a browser to http://localhost:8080. Click click click.
Configuration for Cloud Build can be found in `cloudbuild.yaml`. There is also configuration for the Cloud Run service in `run-service.yaml`. The intended deployment method is a push to a branch, so a Cloud Build trigger should be set up to run the deploy on this event.
New Cloud Run services can be created and updated from Cloud Build using `gcloud run deploy`. However:

- sidecar deployments using `gcloud run deploy` are currently in preview;
- sidecar startup order, which is required for our layout, requires specifying both container dependencies and startup healthcheck probes.

Container dependencies can be specified using `gcloud run deploy`, but healthchecks cannot; they can only be specified via the console, Terraform or from a `.yaml`.
We use the service configuration in `run-service.yaml` to specify the startup probes.
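The relevant pieces of such a configuration look roughly like this. Container names, image paths and probe details below are illustrative, not copied from `run-service.yaml`; the `container-dependencies` annotation orders startup, and each `startupProbe` tells Cloud Run when a sidecar is ready:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ob-app-sidecar          # illustrative service name
spec:
  template:
    metadata:
      annotations:
        # Start the backend and validator before the nginx ingress.
        run.googleapis.com/container-dependencies: '{"ingress": ["backend", "validator"]}'
    spec:
      containers:
        - name: ingress
          image: gcr.io/PROJECT_ID/ingress:COMMIT_SHA
          ports:
            - containerPort: 8080
        - name: backend
          image: gcr.io/PROJECT_ID/backend:COMMIT_SHA
          startupProbe:
            tcpSocket:
              port: 8082
        - name: validator
          image: gcr.io/PROJECT_ID/validator:COMMIT_SHA
          startupProbe:
            tcpSocket:
              port: 8081
```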
We can't specify the service configuration with `gcloud run deploy`, so we have to use `gcloud run services replace` instead to get a new service revision going. This will only create a new revision if the configuration supplied (i.e. from `run-service.yaml`) changes between runs. This is good, as there is no point creating a new Cloud Run revision if nothing has changed. It is bad, however, if we make application changes (i.e. to our images) but not to our service: we will build and upload new images, but the service won't deploy a new revision.
`cloudbuild.yaml` is set to use the `COMMIT_SHA` as the tag for our images. This value is the commit checksum of the push to the repository. It is written into `run-service.yaml` on each deployment, so our service gets a new revision on each deploy, as the configuration has changed.
- Worth noting that using `:latest` won't work; since this tag will never change in `run-service.yaml`, we end up in the same uploaded-new-images-but-no-new-service-revision-deployed basket.
- Also worth noting that we don't get any errors if `gcloud run services replace` didn't start a new revision. You need to check the checksums on the images via `gcloud run revisions describe ...` to see which images are being used.
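The tag-substitution idea can be sketched in a few lines of shell. The file path and image name below are stand-ins; the real substitution step lives in `cloudbuild.yaml`:

```shell
# The template keeps the literal token COMMIT_SHA; each build rewrites it
# with the real checksum, so the rendered config differs on every push and
# "gcloud run services replace" always creates a new revision.
COMMIT_SHA=abc1234   # supplied by Cloud Build on a real push

printf 'image: gcr.io/my-project/backend:COMMIT_SHA\n' > /tmp/run-service.tmpl
sed "s/COMMIT_SHA/${COMMIT_SHA}/" /tmp/run-service.tmpl > /tmp/run-service.yaml
cat /tmp/run-service.yaml   # image: gcr.io/my-project/backend:abc1234
```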
| Filename | Porpoise |
|---|---|
| `Dockerfile.backend` | Builds the container for the backend application. |
| `Dockerfile.ingress-gcp` | Builds the ingress container on GCP. |
| `Dockerfile.ingress-local` | Builds the ingress container for local (Docker) use. |
| `Dockerfile.validator-dummy` | Builds the container for the dummy validator application. |
| `Dockerfile.validator-iap` | Builds the container for the IAP validator application. |
`Dockerfile.ingress-gcp` differs from `Dockerfile.ingress-local` in that they reference (slightly) different nginx configurations (see below).
`Dockerfile.validator-dummy` builds a dummy validator that returns success for every connection. `Dockerfile.validator-iap` uses `cmd/cli/validator-iap.go` to validate connections through IAP.
nginx configuration files are in `./nginx`:

| Filename | Porpoise |
|---|---|
| `nginx-dev.conf` | nginx configuration for local development environment. |
| `nginx-docker.conf` | nginx configuration for local Docker environment. |
| `nginx-gcp.conf` | nginx configuration for GCP environment. |
Docker compose creates a network for the containers. This allows internal applications to refer to each other by service name, as specified in `compose.yaml`. For instance, we can refer to the backend application with the service name `backend`. `nginx-docker.conf` is set up like this.
This differs from GCP, which uses localhost and unique port numbers to refer to services. For instance, we refer to the backend application on server `127.0.0.1`, with the port differentiating services. `nginx-gcp.conf` is set up like this.
The sample application in this repository is ready to be deployed onto Google Cloud with IAP sitting in front of it. Two headers are provided by IAP to backend applications:
- `X-Goog-IAP-JWT-Assertion`: a JWT supplied by IAP that should be verified by the application.
- `X-Goog-Authenticated-User-Email`: the email address of the authenticated user.
The validator application `cmd/cli/validator-iap.go` validates the JWT, and also checks the claim in the JWT for the user against `X-Goog-Authenticated-User-Email`. If both these conditions pass, the validator lets the request through to the backend application.
The backend application should know who the user is, and while it could parse the JWT again, it's easier to just use `X-Goog-Authenticated-User-Email`, since this has been verified by the validator application. From the docs:
> If you use these headers, you must compare them against the identity information from the authenticated JWT header listed above.
Since we don't use IAP in local development, we fake `X-Goog-Authenticated-User-Email` in `nginx-dev.conf` and `nginx-docker.conf`. This also allows easy testing across different accounts locally.
See `.env-sample` for a... sample.
| Name | Value(s) | Notes |
|---|---|---|
| `OBSERVABLE_TELEMETRY_DISABLE` | `true` or `false` | Observable Framework telemetry off or on. |
- Rewrite nginx configuration based on local (Docker) deploy or GCP deploy.