raft-bare-bones is a minimal Go implementation of the Raft consensus algorithm that favours readability over raw throughput.
Raft is a consensus algorithm for keeping a replicated log consistent across machines - even with crashes and network hiccups.
This repo is a teaching-first implementation: HTTP transport, a small KV app on top, and code you can follow end-to-end without spelunking.
- Focus on understanding: the `core/` package keeps the Raft state machine free of persistence and networking concerns.
- HTTP transport: `RequestVote` and `AppendEntries` are exposed as JSON over HTTP, making it easy to trace traffic with `curl` or any proxy.
- Example application: a minimal key/value FSM shows how to plug real state machines into the consensus engine.
- Tidy logging: structured `slog` output with tint-driven colouring so you can follow roles, terms, and replication progress.
I wanted to implement Raft to check whether I truly understand it. The focus here is on clarity over cleverness:
I have not focused on performance, heavy concurrency, persistence details, or snapshotting.
I have focused on the core algorithm shape and clean interfaces that any application (like a KV store) could use.
The goal is code you can read in one sitting, reason about, and then extend—rather than hundreds of production validations or edge-case guards.
Having said that, I’m happy to accept PRs if you want to take a stab at any missing capabilities. Note: we’ll lean toward simplicity over performance.
- `cmd/rbb` – CLI entry point that wires the node, HTTP server, and KV app together.
- `core` – consensus logic: elections, replication, timers, and log storage interfaces.
- `apps/kv` – example FSM plus HTTP handlers used by the demo cluster.
- `transport/httpserver` / `transport/httpclient` – inbound and outbound HTTP wrappers that satisfy the `core.Transport` interface (a hypothetical sketch of that seam follows this list).
- `logging` – thin layer over `slog` for consistent log formatting.
- `scripts` – helpers such as `run-3node.sh` for spinning up local clusters.
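The `core.Transport` seam is what keeps the HTTP pieces swappable. The snippet below is only a guess at its shape; the method names and message types are hypothetical, not copied from `core`. It is meant to illustrate where an in-memory or gRPC transport could slot in for tests or other deployments.

```go
// Hypothetical sketch of the core.Transport seam; the real interface and
// message types live in the core package and will differ in detail.
package transportsketch

import "context"

// Placeholder RPC messages standing in for the real core types
// (term, candidate/leader ID, log positions, entries, and so on).
type (
	RequestVoteArgs    struct{}
	RequestVoteReply   struct{}
	AppendEntriesArgs  struct{}
	AppendEntriesReply struct{}
)

// Transport is the outbound seam the consensus core uses to reach peers.
// transport/httpclient satisfies it over JSON/HTTP; swapping in another
// implementation leaves the election and replication logic untouched.
type Transport interface {
	RequestVote(ctx context.Context, peer string, args RequestVoteArgs) (RequestVoteReply, error)
	AppendEntries(ctx context.Context, peer string, args AppendEntriesArgs) (AppendEntriesReply, error)
}
```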
Everything ultimately flows through `node.Propose`: encode a command, call `Propose`, and let Raft replicate it. To embed that in your service, wire up three pieces (a condensed sketch follows the list):
- Implement `core.FSM`. Encode your commands into `core.LogEntry.Data` and apply them deterministically in `Apply(index, entry)`.
- Provide a thin store helper. After constructing the `core.Node`, call `node.SetFSM(fsm)` and expose domain-friendly methods that prepare payloads before invoking `node.Propose`.
- Add I/O glue. Reuse the HTTP server (or your transport of choice) to route client calls and translate `ErrNotLeader`/`ErrNoTransport` into helpful responses.

Once those are in place you get replicated, fault-tolerant state for free. See `docs/guides/replicated-counter.md` for a step-by-step tutorial that builds a replicated counter on top of raft-bare-bones.
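For orientation, here is a compressed sketch of those three pieces for a toy key/value store. The exact signatures of `core.FSM`, `core.LogEntry`, `core.Node.Propose`, and the module import path are assumptions here; check the `core` package (and the tutorial above) for the real shapes.

```go
// Sketch only: method signatures and the import path are assumed, not copied
// from the core package.
package kvsketch

import (
	"encoding/json"
	"sync"

	"example.com/raft-bare-bones/core" // placeholder: use the real module path
)

// setCommand is what we encode into core.LogEntry.Data.
type setCommand struct {
	Key   string `json:"key"`
	Value string `json:"value"`
}

// KV is a deterministic FSM: replaying the same committed log always
// rebuilds the same map.
type KV struct {
	mu   sync.RWMutex
	data map[string]string
}

func NewKV() *KV { return &KV{data: make(map[string]string)} }

// Apply is invoked by the node for every committed entry, in log order.
// (Assumed signature: the real core.FSM may differ.)
func (k *KV) Apply(index uint64, entry core.LogEntry) error {
	var cmd setCommand
	if err := json.Unmarshal(entry.Data, &cmd); err != nil {
		return err
	}
	k.mu.Lock()
	defer k.mu.Unlock()
	k.data[cmd.Key] = cmd.Value
	return nil
}

// Get reads local state; it reflects whatever this node has applied so far.
func (k *KV) Get(key string) (string, bool) {
	k.mu.RLock()
	defer k.mu.RUnlock()
	v, ok := k.data[key]
	return v, ok
}

// Set is the thin store helper: encode the command, then let Raft replicate it.
// node.Propose is assumed to accept the encoded payload and to surface
// core.ErrNotLeader on followers, which the HTTP glue can map to a 409.
func Set(node *core.Node, key, value string) error {
	payload, err := json.Marshal(setCommand{Key: key, Value: value})
	if err != nil {
		return err
	}
	return node.Propose(payload)
}
```

Wiring is then a single call after constructing the node, e.g. `node.SetFSM(NewKV())`, with the HTTP glue translating `ErrNotLeader`/`ErrNoTransport` into client-facing errors.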
- Go 1.21+
- `make` (optional, but used in the examples below)
```bash
make build   # builds bin/rbb
make test    # runs go test ./...
```

You can also build directly with `go build ./cmd/rbb` if you prefer not to use `make`.
```bash
make run-3node
```

The script compiles the binary (if needed) and launches three nodes listening on 127.0.0.1:7001-7003. Stop the cluster with Ctrl+C.
To launch nodes manually in separate terminals:
```bash
# terminal 1
go run ./cmd/rbb --id=n1 --http=127.0.0.1:7001 \
  --peers=127.0.0.1:7002,127.0.0.1:7003

# terminal 2
go run ./cmd/rbb --id=n2 --http=127.0.0.1:7002 \
  --peers=127.0.0.1:7001,127.0.0.1:7003

# terminal 3
go run ./cmd/rbb --id=n3 --http=127.0.0.1:7003 \
  --peers=127.0.0.1:7001,127.0.0.1:7002
```

Once the cluster elects a leader, you can write through any node. Followers will respond with HTTP 409 (not leader), so point clients at the leader or retry on 409.
```bash
# Write two keys (adjust the port if a different node is leader)
curl -X POST http://127.0.0.1:7001/kv/consensus -d 'raft'
curl -X POST http://127.0.0.1:7001/kv/hello -d 'world'

# Reads may be served by any node after replication commits
curl http://127.0.0.1:7002/kv/consensus
curl http://127.0.0.1:7003/kv/hello
```

Kill the leader process to watch a new election and log catch-up in real time.
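If you would rather not track the leader by hand, a client can simply walk the node list and treat 409 as "try the next one". A minimal Go sketch, assuming the demo's `/kv/<key>` endpoints and the local ports used above (error handling trimmed for brevity):

```go
// writeKV tries each node in turn and treats HTTP 409 as "not the leader,
// try the next one". Ports match the local 3-node cluster above.
package main

import (
	"fmt"
	"net/http"
	"strings"
)

var nodes = []string{
	"http://127.0.0.1:7001",
	"http://127.0.0.1:7002",
	"http://127.0.0.1:7003",
}

func writeKV(key, value string) error {
	for _, base := range nodes {
		// Content type is a guess; the demo appears to store the raw request body.
		resp, err := http.Post(base+"/kv/"+key, "text/plain", strings.NewReader(value))
		if err != nil {
			continue // node unreachable; try the next one
		}
		resp.Body.Close()
		switch {
		case resp.StatusCode == http.StatusConflict: // 409: follower, not leader
			continue
		case resp.StatusCode < 300:
			return nil // accepted by the leader
		default:
			return fmt.Errorf("unexpected status %d from %s", resp.StatusCode, base)
		}
	}
	return fmt.Errorf("no node accepted the write")
}

func main() {
	if err := writeKV("consensus", "raft"); err != nil {
		fmt.Println("write failed:", err)
	}
}
```

This keeps demo clients leader-agnostic; reads can still go to any node, as in the curl examples above.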
Watch a live demonstration of Raft consensus in action! The demo shows:
- Leader election among 3 nodes
- Write operations (only accepted by the leader)
- Data replication across all nodes
- Fault tolerance when the leader fails
- Consistency guarantees after recovery
You can also run `make run-3node` locally and follow along with the commands in `demo-manual.sh`.
- Use `--log-level debug` for detailed RPC traces, or leave the default `info` to focus on state transitions.
- Every log line is annotated with node ID and current role; look for `ROLE_CHANGED`, `ELECTION_START`, and `COMMIT_INDEX_ADVANCED` to understand cluster behaviour.
- `docs/design.md` – architecture overview plus a walkthrough of how a client write flows through the system.
- `TODO.md` – roadmap of missing Raft features and potential extensions.
- `docs/guides/replicated-counter.md` – beginner-friendly tutorial for building your own replicated application on top of the core node.
- Logs live in memory only; there is no durable storage yet.
- Snapshotting, membership changes, read-index support, and batching/pipelining are not supported yet.
- Cluster configuration is static once the process starts.
This project is a learning scaffold, not production-ready software. If you build on top of it, audit the TODO list first.
Contributions that keep the code approachable are welcome—especially documentation improvements, additional examples, and incremental Raft features that preserve clarity. Open an issue or PR to discuss ideas.
