raft-bare-bones

raft-bare-bones is a minimal Go implementation of the [Raft consensus algorithm](https://raft.github.io/raft.pdf) that favours readability over raw throughput.

Raft is a consensus algorithm for keeping a replicated log consistent across a cluster of machines, even through crashes and network hiccups.

This repo is a teaching-first implementation: HTTP transport, a small KV app on top, and code you can follow end-to-end without spelunking.

Highlights

  • Focus on understanding: the core/ package keeps the Raft state machine free of persistence and networking concerns.
  • HTTP transport: RequestVote and AppendEntries are exposed as JSON over HTTP, making it easy to trace traffic with curl or any proxy.
  • Example application: a minimal key/value FSM shows how to plug real state machines into the consensus engine.
  • Tidy logging: structured slog output with tint-driven colouring so you can follow roles, terms, and replication progress.
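
For a quick feel for that output, here is a standalone slog + tint setup. It is only an illustration of the general approach; the repo's logging package configures its own handler, so check logging/ for the real options.

package main

import (
    "log/slog"
    "os"
    "time"

    "github.com/lmittmann/tint"
)

func main() {
    // Colourised, human-friendly slog handler; the options here are illustrative.
    logger := slog.New(tint.NewHandler(os.Stderr, &tint.Options{
        Level:      slog.LevelDebug,
        TimeFormat: time.Kitchen,
    }))
    // Structured fields in the spirit of the node/role/term annotations
    // described under Observability below.
    logger.Info("ROLE_CHANGED", "node", "n1", "term", 3, "role", "leader")
}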

Why this exists

I wanted to implement Raft to check whether I truly understand it. The focus here is on clarity over cleverness:

I have not focused on performance, heavy concurrency, persistence details, or snapshotting.

I have focused on the core algorithm shape and clean interfaces that any application (like a KV store) could use.

The goal is code you can read in one sitting, reason about, and then extend, rather than something hardened by hundreds of production validations and edge-case guards.

Having said that, I’m happy to accept PRs if you want to take a stab at any missing capabilities. Note: we’ll lean toward simplicity over performance.

Repository Tour

  • cmd/rbb – CLI entry-point that wires the node, HTTP server, and KV app together.
  • core – consensus logic: elections, replication, timers, and log storage interfaces.
  • apps/kv – example FSM plus HTTP handlers used by the demo cluster.
  • transport/httpserver / transport/httpclient – inbound and outbound HTTP wrappers that satisfy the core.Transport interface (a rough sketch of that interface follows this list).
  • logging – thin layer over slog for consistent log formatting.
  • scripts – helpers such as run-3node.sh for spinning up local clusters.
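
To make the core.Transport idea concrete, here is a rough sketch of the kind of interface the transport packages fill in. The method and type shapes below are assumptions for illustration only; the actual definition lives in core/.

package example

import "context"

// Placeholder message types. The real ones live in core/ and carry the full
// Raft fields (terms, candidate/leader IDs, log indexes, entries, and so on).
type RequestVoteRequest struct{ Term uint64 }
type RequestVoteResponse struct {
    Term        uint64
    VoteGranted bool
}
type AppendEntriesRequest struct{ Term uint64 }
type AppendEntriesResponse struct {
    Term    uint64
    Success bool
}

// Transport is an outbound RPC layer the consensus code calls without knowing
// whether the bytes travel over HTTP, gRPC, or an in-memory channel.
type Transport interface {
    RequestVote(ctx context.Context, peer string, req RequestVoteRequest) (RequestVoteResponse, error)
    AppendEntries(ctx context.Context, peer string, req AppendEntriesRequest) (AppendEntriesResponse, error)
}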

Integrate Your Own Application

Everything ultimately flows through node.Propose: encode a command, call Propose, and let Raft replicate it. To embed that in your service, wire up three pieces (a minimal sketch follows the list):

  • Implement core.FSM. Encode your commands into core.LogEntry.Data and apply them deterministically in Apply(index, entry).
  • Provide a thin store helper. After constructing the core.Node, call node.SetFSM(fsm) and expose domain-friendly methods that prepare payloads before invoking node.Propose.
  • Add I/O glue. Reuse the HTTP server (or your transport of choice) to route client calls and translate ErrNotLeader / ErrNoTransport into helpful responses.
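
Here is a toy counter FSM that sketches the first two pieces. The exact signatures of core.FSM, core.LogEntry, and node.Propose live in the repo and may differ slightly from what is assumed below, so treat this as shape, not gospel.

package counter

import (
    "context"
    "encoding/binary"
)

// Counter is a toy FSM: every committed entry adds its payload to a total.
// Apply must be deterministic so all replicas converge on the same state.
type Counter struct {
    total int64
}

// Apply is called once per committed log entry, in log order. The payload is
// whatever the store helper encoded before calling Propose.
func (c *Counter) Apply(index uint64, data []byte) {
    delta, _ := binary.Varint(data)
    c.total += delta
}

// Proposer stands in for the subset of core.Node this helper needs; the real
// Propose signature may differ.
type Proposer interface {
    Propose(ctx context.Context, data []byte) error
}

// Add is the "thin store helper": it encodes a domain command and hands the
// bytes to Raft for replication.
func Add(ctx context.Context, node Proposer, delta int64) error {
    buf := make([]byte, binary.MaxVarintLen64)
    n := binary.PutVarint(buf, delta)
    return node.Propose(ctx, buf[:n])
}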

Once those are in place you get replicated, fault-tolerant state for free. See docs/guides/replicated-counter.md for a step-by-step tutorial that builds a replicated counter on top of raft-bare-bones.

Getting Started

Prerequisites

  • Go 1.21+
  • make (optional, but used in the examples below)

Build and Test

make build   # builds bin/rbb
make test    # runs go test ./...

You can also build directly with go build ./cmd/rbb if you prefer not to use make.

Run a Local Cluster

make run-3node

The script compiles the binary (if needed) and launches three nodes listening on 127.0.0.1:7001-7003. Stop the cluster with Ctrl+C.

To launch nodes manually in separate terminals:

# terminal 1
go run ./cmd/rbb --id=n1 --http=127.0.0.1:7001 \
  --peers=127.0.0.1:7002,127.0.0.1:7003

# terminal 2
go run ./cmd/rbb --id=n2 --http=127.0.0.1:7002 \
  --peers=127.0.0.1:7001,127.0.0.1:7003

# terminal 3
go run ./cmd/rbb --id=n3 --http=127.0.0.1:7003 \
  --peers=127.0.0.1:7001,127.0.0.1:7002

Interact with the KV Store

Once the cluster elects a leader, only the leader accepts writes. Followers respond with HTTP 409 (not leader), so point clients at the leader or retry against another node on 409 (a small Go sketch of that retry loop follows the curl examples below).

# Write two keys (adjust the port if a different node is leader)
curl -X POST http://127.0.0.1:7001/kv/consensus -d 'raft'
curl -X POST http://127.0.0.1:7001/kv/hello -d 'world'

# Reads may be served by any node after replication commits
curl http://127.0.0.1:7002/kv/consensus
curl http://127.0.0.1:7003/kv/hello
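
If you would rather not chase the leader by hand, a client can simply walk the node list and skip any node that answers 409. A rough Go sketch, with paths and ports taken from the demo above and the content type assumed:

package main

import (
    "fmt"
    "net/http"
    "strings"
)

// put tries each node in turn until one accepts the write. Followers answer
// HTTP 409 (not leader) and are skipped; any other failure is returned.
func put(nodes []string, key, value string) error {
    for _, node := range nodes {
        resp, err := http.Post("http://"+node+"/kv/"+key, "text/plain", strings.NewReader(value))
        if err != nil {
            continue // node unreachable; try the next one
        }
        resp.Body.Close()
        if resp.StatusCode == http.StatusConflict {
            continue // not the leader; try the next one
        }
        if resp.StatusCode >= 200 && resp.StatusCode < 300 {
            return nil
        }
        return fmt.Errorf("node %s returned %s for key %q", node, resp.Status, key)
    }
    return fmt.Errorf("no node accepted the write for key %q", key)
}

func main() {
    nodes := []string{"127.0.0.1:7001", "127.0.0.1:7002", "127.0.0.1:7003"}
    if err := put(nodes, "consensus", "raft"); err != nil {
        fmt.Println(err)
    }
}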

Kill the leader process to watch a new election and log catch-up in real time.

Demo

Watch a live demonstration of Raft consensus in action! The demo shows:

  • Leader election among 3 nodes
  • Write operations (only accepted by the leader)
  • Data replication across all nodes
  • Fault tolerance when the leader fails
  • Consistency guarantees after recovery

You can also run make run-3node locally and follow along with the commands in demo-manual.sh.

Observability

  • Use --log-level debug for detailed RPC traces, or stay on the default info level to focus on state transitions.
  • Every log line is annotated with node ID and current role; look for ROLE_CHANGED, ELECTION_START, and COMMIT_INDEX_ADVANCED to understand cluster behaviour.

Documentation

  • docs/design.md – architecture overview plus a walkthrough of how a client write flows through the system.
  • TODO.md – roadmap of missing Raft features and potential extensions.
  • docs/guides/replicated-counter.md – beginner-friendly tutorial for building your own replicated application on top of the core node.

Limitations

  • Logs live in memory only; there is no durable storage yet.
  • Snapshotting, membership changes, read-index support, and batching/pipelining are not supported yet.
  • Cluster configuration is static once the process starts.

This project is a learning scaffold, not production-ready software. If you build on top of it, audit the TODO list first.

Contributing

Contributions that keep the code approachable are welcome—especially documentation improvements, additional examples, and incremental Raft features that preserve clarity. Open an issue or PR to discuss ideas.
