A series of obstacle courses as a way to compare how different languages and frameworks handle structured concurrency, including:
- loser cancellation
- resource management
- efficient thread utilization (i.e. reactive, non-blocking)
- explicit timeouts
- errors causing a race loss
 
A scenario server validates the implementations of 11 scenarios:

1. Race 2 concurrent requests
   - `GET /1`
   - The winner returns a 200 response with a body containing `right`
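The core pattern can be sketched in Python with asyncio. The two HTTP requests are simulated with sleeps here (a real client would make the `GET /1` calls with an HTTP library, which this sketch leaves out):

```python
import asyncio

async def race(*coros):
    # Run all racers concurrently, take the first result, and make
    # sure the losers are cancelled (loser cancellation).
    tasks = [asyncio.create_task(c) for c in coros]
    try:
        done, _ = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        return done.pop().result()
    finally:
        for t in tasks:
            t.cancel()  # no-op for the winner, cancellation for losers

async def fast():
    await asyncio.sleep(0.01)  # stand-in for the quicker GET /1
    return "right"

async def slow():
    await asyncio.sleep(10)    # stand-in for the slower GET /1

print(asyncio.run(race(fast(), slow())))  # → right
```

Because the loser is cancelled, the whole race finishes as soon as the winner does rather than waiting out the slow request.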
2. Race 2 concurrent requests, where one produces a connection error
   - `GET /2`
   - The winner returns a 200 response with a body containing `right`
3. Race 10,000 concurrent requests
   - `GET /3`
   - The winner returns a 200 response with a body containing `right`
4. Race 2 concurrent requests but 1 of them should have a 1 second timeout
   - `GET /4`
   - The winner returns a 200 response with a body containing `right`
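One way to express the per-racer timeout is `asyncio.wait_for`; the requests are again simulated with sleeps, so the timings below are illustrative only:

```python
import asyncio

async def hangs():
    await asyncio.sleep(60)    # stand-in for a GET /4 that never returns
    return "never"

async def responds():
    await asyncio.sleep(0.05)  # stand-in for the winning GET /4
    return "right"

async def main():
    tasks = [
        # One racer gets an explicit 1 second timeout; timing out
        # makes it a loser rather than hanging the whole race.
        asyncio.create_task(asyncio.wait_for(hangs(), timeout=1.0)),
        asyncio.create_task(responds()),
    ]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    return done.pop().result()

print(asyncio.run(main()))  # → right
```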
5. Race 2 concurrent requests where a non-200 response is a loser
   - `GET /5`
   - The winner returns a 200 response with a body containing `right`
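A sketch of "errors cause a race loss": the non-200 response is modelled as an exception, and the race keeps waiting for the first success instead of letting the failure win:

```python
import asyncio

async def race_success(*coros):
    # A failed racer (e.g. a non-200 response, modelled here as an
    # exception) must not win; keep waiting for the first success.
    tasks = set(map(asyncio.create_task, coros))
    errors = []
    while tasks:
        done, tasks = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
        winners = [t for t in done if t.exception() is None]
        errors += [t.exception() for t in done if t.exception() is not None]
        if winners:
            for t in tasks:
                t.cancel()  # remaining racers are losers
            return winners[0].result()
    raise RuntimeError(f"all racers failed: {errors}")

async def bad():
    await asyncio.sleep(0.01)
    raise RuntimeError("non-200 response")  # finishes first, but loses

async def good():
    await asyncio.sleep(0.05)
    return "right"

print(asyncio.run(race_success(bad(), good())))  # → right
```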
6. Race 3 concurrent requests where a non-200 response is a loser
   - `GET /6`
   - The winner returns a 200 response with a body containing `right`
7. Start a request, wait at least 3 seconds then start a second request (hedging)
   - `GET /7`
   - The winner returns a 200 response with a body containing `right`
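A hedging sketch: the second request is only started if the first has not finished within the delay. The `GET /7` calls are simulated, and the demo shortens the 3 second wait to 0.1s so it runs quickly:

```python
import asyncio

calls = []

async def flaky():
    # Stand-in for GET /7: the first request hangs, the hedge
    # request responds promptly.
    calls.append(None)
    if len(calls) == 1:
        await asyncio.sleep(10)
    return "right"

async def hedged(make_request, delay):
    # Start one request; only if it has not finished after `delay`
    # seconds, start a second (hedge) request and take the first
    # response, cancelling the other.
    first = asyncio.create_task(make_request())
    done, _ = await asyncio.wait({first}, timeout=delay)
    if done:
        return first.result()
    hedge = asyncio.create_task(make_request())
    done, pending = await asyncio.wait({first, hedge}, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    return done.pop().result()

print(asyncio.run(hedged(flaky, 0.1)))  # → right
```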
8. Race 2 concurrent requests that "use" a resource which is obtained and released through other requests. The "use" request can return a non-20x response, in which case it is not a winner.
   - `GET /8?open`
   - `GET /8?use=<id obtained from open request>`
   - `GET /8?close=<id obtained from open request>`
   - The winner returns a 200 response with a body containing `right`
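The open/use/close lifecycle maps onto try/finally. The three requests are stubbed out here, with the "use" deliberately failing to show that the close still happens:

```python
import asyncio

closed = []

async def open_resource():
    return "id-123"                # stand-in for GET /8?open

async def use_resource(res_id):
    raise RuntimeError("non-20x")  # stand-in for a failing GET /8?use=<id>

async def close_resource(res_id):
    closed.append(res_id)          # stand-in for GET /8?close=<id>

async def scoped_use():
    # try/finally guarantees the close request is sent even when the
    # use request fails: structured resource management.
    res_id = await open_resource()
    try:
        return await use_resource(res_id)
    finally:
        await close_resource(res_id)

try:
    asyncio.run(scoped_use())
except RuntimeError:
    pass
print(closed)  # → ['id-123']
```

One caveat for a real racer: if the task is cancelled while "use" is in flight, awaits inside the `finally` can themselves be interrupted, so an implementation may need something like `asyncio.shield` around the close call.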
9. Make 10 concurrent requests where 5 return a 200 response with a letter
   - `GET /9`
   - The letters, when assembled in order of when they responded, form the `right` answer
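Collecting successes in completion order fits `asyncio.as_completed`. The ten `GET /9` calls are simulated, with delays chosen so the demo's completion order is fixed:

```python
import asyncio

async def request(letter, delay, ok):
    # Stand-in for one GET /9: five "requests" return a letter,
    # five fail; the delays fix the completion order for the demo.
    await asyncio.sleep(delay)
    if not ok:
        raise RuntimeError("no letter")
    return letter

async def main():
    coros = [
        request("r", 0.01, True),  request("x", 0.02, False),
        request("i", 0.03, True),  request("x", 0.04, False),
        request("g", 0.05, True),  request("x", 0.06, False),
        request("h", 0.07, True),  request("x", 0.08, False),
        request("t", 0.09, True),  request("x", 0.10, False),
    ]
    letters = []
    # as_completed yields results in the order the racers finish
    for fut in asyncio.as_completed(coros):
        try:
            letters.append(await fut)
        except RuntimeError:
            pass  # failed requests contribute nothing
    return "".join(letters)

print(asyncio.run(main()))  # → right
```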
10. This scenario validates that a computationally heavy task can be run in parallel to another task, and then cancelled.
    - Part 1) Make a request and, while the connection is open, perform something computationally heavy (e.g. repeated SHA calculation), then cancel the task when the connection closes
      - `GET /10?{some_id}`
    - Part 2) In parallel to Part 1, every 1 second, make a request with the current process load (0 to 1)
      - `GET /10?{same_id_as_part_1}={load}`
    - The request in Part 2 will respond with a 20x response if it looks like Part 1 was done correctly (in which case you can stop sending load values), otherwise with a 30x response if you should continue sending values, or with a 40x response if something has gone wrong.
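A sketch of the Part 1 cancellation pattern only: the blocking SHA loop runs on a thread (keeping the event loop free for Part 2's reporting requests, which are omitted here), and is cancelled cooperatively via a stop flag when the simulated connection closes. How the load value is measured (e.g. from something like `os.getloadavg()`) is left as an assumption:

```python
import asyncio, hashlib, threading

def busy_work(stop: threading.Event):
    # Computationally heavy loop (repeated SHA-256) that polls a stop
    # flag so it can be cancelled cooperatively.
    digest = b"seed"
    while not stop.is_set():
        digest = hashlib.sha256(digest).digest()
    return digest

async def main():
    stop = threading.Event()
    # Run the blocking work on a thread so the event loop stays free
    # for other tasks (e.g. the Part 2 load-reporting requests).
    worker = asyncio.get_running_loop().run_in_executor(None, busy_work, stop)
    await asyncio.sleep(0.1)  # stand-in for the open GET /10 connection
    stop.set()                # connection closed: cancel the heavy task
    digest = await worker
    return len(digest)

print(asyncio.run(main()))  # → 32
```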
11. This scenario validates that a race where all racers fail is handled correctly. Race a request with another race of 2 requests.
    - `GET /11`
    - The winner returns a 200 response with a body containing `right`
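A sketch of the nested race, with the `GET /11` calls simulated: a race whose racers all fail raises, so the inner race simply loses the outer one:

```python
import asyncio

async def race_success(*coros):
    # First successful result wins; if every racer fails, the race
    # itself raises, so an inner race whose racers all fail simply
    # becomes a losing racer in the outer race.
    tasks = set(map(asyncio.create_task, coros))
    try:
        while tasks:
            done, tasks = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
            for t in done:
                if t.exception() is None:
                    return t.result()
        raise RuntimeError("all racers failed")
    finally:
        for t in tasks:
            t.cancel()

async def failing():
    await asyncio.sleep(0.01)
    raise RuntimeError("hung up")  # stand-in for a failing GET /11

async def winner():
    await asyncio.sleep(0.02)
    return "right"

async def main():
    # Outer race: one request vs. an inner race of two requests
    return await race_success(winner(), race_success(failing(), failing()))

print(asyncio.run(main()))  # → right
```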
The scenario server has a public container image, `ghcr.io/jamesward/easyracer`. If you contribute your client to this repo, use Testcontainers and include automated integration tests.
For local dev you can spin up the server via Docker:

```
docker run -it -p8080:8080 ghcr.io/jamesward/easyracer --debug
```