Conversation

VictorLowther
Contributor
Followers can get confused if the leader streams a snapshot that is larger than the size indicated by the snapshot metadata. Fix this in a few different ways:

  1. When the leader issues an install snapshot RPC, only copy the number of bytes the metadata says to. This ensures there is no extra data on the wire even if the snapshot size grows between the time the RPC is started and the time the snapshot is streamed over the network (see the sketch after this list).
  2. On the follower side, discard any buffered bytes left over on the conn after processing the install snapshot RPC.
  3. Also on the follower side, exit the event processing loop after handling an install snapshot RPC. The leader has never attempted to reuse a conn after sending a snapshot, so the follower should do the same thing.
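
A minimal sketch of the clamping idea from item 1, assuming a hypothetical sendSnapshot helper and snapshotMeta type (not the raft library's actual API); the point is simply that io.CopyN bounds the copy to the size recorded in the metadata:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
)

// snapshotMeta stands in for the snapshot metadata; only Size matters here.
type snapshotMeta struct {
	Size int64
}

// sendSnapshot copies at most meta.Size bytes from the snapshot reader to the
// connection, so a snapshot file that grew on disk after the metadata was
// written cannot leak extra bytes onto the wire.
func sendSnapshot(conn io.Writer, snap io.Reader, meta snapshotMeta) error {
	// io.CopyN reports an error (io.EOF included) if fewer than meta.Size
	// bytes were available, so a truncated snapshot is surfaced here as well.
	if _, err := io.CopyN(conn, snap, meta.Size); err != nil {
		return fmt.Errorf("failed streaming snapshot: %w", err)
	}
	return nil
}

func main() {
	// The on-disk data is longer than the metadata says; only the first
	// 8 bytes ("snapshot") should reach the "connection".
	snap := strings.NewReader("snapshot-plus-recycled-tail")
	var conn bytes.Buffer
	if err := sendSnapshot(&conn, snap, snapshotMeta{Size: 8}); err != nil {
		panic(err)
	}
	fmt.Printf("sent %q\n", conn.String()) // sent "snapshot"
}
```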

This PR also includes the changes I made while tracking this issue down: logging the first 100 bytes left over after handling an unknown RPC, and more aggressive buffer management for pooled conns (including poisoning any conn handle returned to the pool).

@VictorLowther VictorLowther requested review from a team as code owners July 1, 2025 02:18
@VictorLowther VictorLowther requested a review from dhiaayachi July 1, 2025 02:18
@VictorLowther VictorLowther force-pushed the snapshot-rpc-error-fixes branch from 5113c49 to 5d2b1e6 on July 8, 2025 20:09
We have observed at customer sites various hard-to-replicate cluster
instabilities where followers will fail to resync with the rest of the
cluster after a restart.  In particular, we get a couple of different
types of errors from Raft:

failed to decode incoming command: error=msgpack decode error [pos 12207]: only encoded map or array can be decoded into a struct

This one generally happens when the follower is far enough behind that
the leader decides to send a snapshot instead.

The second is one that should not be possible:

failed to decode incoming command: error=unknown rpc type 125

where the RPC number seems to change almost at random.

After many rounds of debugging at a remote customer site, it appears
that the root cause of the issue is that the Raft library will send
extra data if the file size of the snapshot on disk is larger than the
snapshot size indicated in the snapshot metadata.  In my particular
use case that can happen because the SnapshotStore we use prefers to
recycle older snapshot files rather than deleting them and creating a
new file.  We do this to make the system more resistant to running out
of disk space.

On the leader side, alleviate this issue by clamping the maximum
amount of snapshot data we will send to the size indicated by the
snapshot metadata.

On the follower side, rework the RPC event loop to exit after handling
an installSnapshot RPC instead of attempting to process the rest of
the bytes remaining in the conn.  This mirrors the leader, which has
never reused a pooled conn after sending a snapshot.
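
A rough sketch of that follower-side control flow, with hypothetical handler functions and an illustrative RPC type constant (the real transport code is structured differently):

```go
package main

import (
	"bufio"
	"io"
	"log"
	"net"
)

// Illustrative RPC type tag; not the library's actual wire constant.
const rpcInstallSnapshot byte = 2

// handleConn reads RPCs off a single connection. After an install snapshot
// RPC it drops anything still buffered and stops using the conn, rather
// than trying to decode whatever trailing bytes are left behind.
func handleConn(conn net.Conn) {
	defer conn.Close()
	r := bufio.NewReader(conn)
	for {
		rpcType, err := r.ReadByte()
		if err != nil {
			if err != io.EOF {
				log.Printf("failed to read rpc type: %v", err)
			}
			return
		}
		if rpcType == rpcInstallSnapshot {
			if err := processInstallSnapshot(r); err != nil {
				log.Printf("install snapshot failed: %v", err)
			}
			// Discard leftover buffered bytes and exit the loop; the
			// leader never reuses this conn after sending a snapshot.
			r.Discard(r.Buffered())
			return
		}
		if err := processCommand(rpcType, r); err != nil {
			log.Printf("failed to decode incoming command: %v", err)
			return
		}
	}
}

// Placeholders for the real RPC handlers.
func processInstallSnapshot(r *bufio.Reader) error { return nil }
func processCommand(t byte, r *bufio.Reader) error { return nil }

func main() {
	leader, follower := net.Pipe()
	go func() {
		// Pretend to be the leader: an install snapshot RPC followed by
		// stray trailing bytes the follower must not try to decode.
		leader.Write([]byte{rpcInstallSnapshot, 'x', 'y', 'z'})
		leader.Close()
	}()
	handleConn(follower)
}
```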

On the leader side, arrange for pooled connections to discard any
outstanding data when a connection is returned to the pool, and
arrange for the system to panic if someone ever attempts to use a conn
after returning it to the pool.
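
A minimal sketch of that pooled-conn hygiene, using a hypothetical pooledConn wrapper and connPool (the transport's real types differ): discard anything still buffered when the conn goes back to the pool, and nil out the returned handle so accidental reuse panics instead of silently corrupting a later RPC.

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"sync"
)

// pooledConn bundles a connection with its buffered reader so the pool can
// scrub it before reuse.
type pooledConn struct {
	conn net.Conn
	r    *bufio.Reader
}

type connPool struct {
	mu    sync.Mutex
	conns []*pooledConn
}

// putConn returns a connection to the pool. Leftover buffered bytes are
// discarded so the next user starts from a clean stream, and the caller's
// handle is poisoned so any later use panics loudly.
func (p *connPool) putConn(pc *pooledConn) {
	// Drop anything still sitting in the read buffer.
	pc.r.Discard(pc.r.Buffered())

	p.mu.Lock()
	p.conns = append(p.conns, &pooledConn{conn: pc.conn, r: pc.r})
	p.mu.Unlock()

	// Poison the handle the caller still holds.
	pc.conn = nil
	pc.r = nil
}

func main() {
	server, client := net.Pipe()
	defer client.Close()

	pool := &connPool{}
	pc := &pooledConn{conn: server, r: bufio.NewReader(server)}
	pool.putConn(pc)

	defer func() {
		if r := recover(); r != nil {
			fmt.Println("reuse after return panicked as intended:", r)
		}
	}()
	pc.r.ReadByte() // the poisoned handle panics on use
}
```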
@VictorLowther VictorLowther force-pushed the snapshot-rpc-error-fixes branch from 5d2b1e6 to c93c886 on July 8, 2025 20:11