Raft batching #7355
Draft: sciascid wants to merge 5 commits into main from raft-batching
Conversation
An easy way to collect batch sizes. For performance testing only. Will be removed.
This is the baseline for performance testing Raft's batching capabilities. The behavior of the batching mechanism in Raft is easier to observe if disk writes are synchronous, i.e., if we write() + fsync() the Raft log, so that producers can easily keep the proposal queue busy. To do so one can set "sync_interval=always". However, that results in disastrous performance: when the leader receives acks for a "big" batch of log entries, the upper layer will write() and fsync() all entries in the batch individually. So this commit disables "sync always" on stream writes. This *should* work in principle because the data is already in the Raft log. Alternatively, one could implement "group commit" for streams, i.e. fsync() only once after processing a batch of entries. For performance testing only at this point.
This commit removes a "pathological" case from the current Raft batching mechanism: if the proposal queue contains more entries than one batch can fit, Raft will send a full batch, followed by a small batch containing the leftovers. However, it was observed that, while the first batch is being stored and sent, clients may well have pushed more proposals onto the proposal queue in the meantime. With this fix the server composes and sends a full batch, then handles the leftovers as follows: if more proposals were pushed onto the proposal queue, the leftovers are carried over to the next iteration, so that they are batched together with the proposals that arrived in the meantime. If there are no more proposals, the leftovers are sent right away. For performance testing only at this point.
This is an attempt to reduce contention between Propose() and
sendAppendEntry(). Change Propose() to acquire a read lock on Raft, and
avoid locking Raft during storeToWAL() (which potentially does IO and
may take a long time). This works as long as sendAppendEntry() is called
from the Raft goroutine only, unless the entry does not need to be
stored to the Raft log. So the rest of the changes enforce the
above requirement:
* Change EntryLeaderTransfer so that it is not stored to the Raft log.
* Push EntryPeerState and EntrySnapshot entries to the proposal queue.
* Make sure EntrySnapshot entries skip the leader check, and make sure
they are not batched together with other entries.
For performance testing only at this point.
Limit batch size based on the configured max_payload.
Changes used to evaluate and improve batching at the Raft level.
These are proof-of-concepts, not necessarily complete or sufficiently tested;
for performance evaluation only!