Spike: direct tx @ executor adapter #2536
@@ -0,0 +1,19 @@
# direct TX
- What format can we expect for the direct TX on the DA? Some wrapper type would be useful to unpack, and to distinguish a direct TX from random bytes. A sketch follows this item.
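One option is a versioned envelope with a magic prefix, so full nodes can cheaply reject namespace blobs that are not direct TXs. A minimal sketch — the `Envelope` type, magic bytes, and field layout are illustrative assumptions, not part of this PR:

```go
package sketch

import (
	"bytes"
	"encoding/binary"
	"errors"
)

// magic is an arbitrary marker chosen for this sketch; blobs without
// it are discarded as random bytes instead of being parsed as txs.
var magic = []byte{0xD1, 0x12, 0xEC, 0x71}

// Envelope wraps a raw transaction posted directly to the DA namespace.
type Envelope struct {
	Version uint8
	Tx      []byte
}

// Marshal encodes magic || version || len(tx) || tx.
func (e Envelope) Marshal() []byte {
	out := make([]byte, 0, len(magic)+1+4+len(e.Tx))
	out = append(out, magic...)
	out = append(out, e.Version)
	out = binary.BigEndian.AppendUint32(out, uint32(len(e.Tx)))
	return append(out, e.Tx...)
}

// Unmarshal rejects anything that does not carry the magic prefix or
// whose declared length does not match the payload.
func Unmarshal(blob []byte) (Envelope, error) {
	header := len(magic) + 1 + 4
	if len(blob) < header || !bytes.Equal(blob[:len(magic)], magic) {
		return Envelope{}, errors.New("not a direct TX blob")
	}
	size := binary.BigEndian.Uint32(blob[len(magic)+1 : header])
	if uint32(len(blob)-header) != size {
		return Envelope{}, errors.New("direct TX length mismatch")
	}
	return Envelope{Version: blob[len(magic)], Tx: blob[header:]}, nil
}
```

The version byte leaves room to evolve the format without breaking older full nodes.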
- Should we always include direct TX although they may be duplicates of mempool TX? Spike: yes, first one wins. See the dedup sketch below.

  > Review comment: Yes, first one wins. Even if there's a duplicate, the duplicate would just fail automatically due to the usual tx replay protections in state machines, like nonces.
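A sketch of first-one-wins at batch-assembly time, assuming byte-identical DA and mempool copies; semantic duplicates with different bytes are left to nonce-style replay protection, as the comment notes:

```go
package sketch

import "crypto/sha256"

// dedupFirstWins keeps the first occurrence of each transaction across
// the given batches. Passing direct TXs first means they win over any
// later mempool copy of the same bytes.
func dedupFirstWins(batches ...[][]byte) [][]byte {
	seen := make(map[[sha256.Size]byte]struct{})
	var out [][]byte
	for _, batch := range batches {
		for _, tx := range batch {
			h := sha256.Sum256(tx)
			if _, dup := seen[h]; dup {
				continue // already included, drop the duplicate
			}
			seen[h] = struct{}{}
			out = append(out, tx)
		}
	}
	return out
}
```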
- Should we fill the block space with direct TX if possible, or reserve space for mempool TX? See the batch-building sketch below.

  > Review comment: IIUC, this case is when there are a lot of direct txs and mempool txs at the same time, so there needs to be a decision on what to do here. We can limit the number of direct txs in each evolve block to some fixed number (which can be made dynamic later) that follows an inclusion window, so that the sequencer's mempool txs are still the ones included in an evolve block the fastest.
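A sketch of the policy the comment suggests: cap direct TXs per block, then give the rest of the byte budget to mempool TXs. `maxDirect` and `maxBytes` are assumed knobs, not existing config:

```go
package sketch

// buildBatch includes at most maxDirect direct TXs first, then mempool
// TXs, stopping at the first tx that would overflow the byte budget
// (greedy prefix). Leftover direct TXs carry over to the next block
// while still inside their inclusion window.
func buildBatch(direct, mempool [][]byte, maxDirect, maxBytes int) [][]byte {
	if len(direct) > maxDirect {
		direct = direct[:maxDirect]
	}
	var batch [][]byte
	size := 0
	for _, group := range [][][]byte{direct, mempool} {
		for _, tx := range group {
			if size+len(tx) > maxBytes {
				return batch
			}
			size += len(tx)
			batch = append(batch, tx)
		}
	}
	return batch
}
```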
- What should we do when a sequencer was not able to add a direct TX within the time window? A window check is sketched below.

  > Review comment: A sequencer following the rules of the network should be able to add a direct tx within the time window. If there are too many direct txs to include in a time window, then everyone can see that, and those extra direct txs would just be ignored. The responsibility of telling a user whether their direct tx will be, or has been, included can be passed on to the frontends.
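The rule from the comment as a deterministic check; since every full node observes the same DA heights, all nodes reach the same verdict about an overdue direct TX. All names here are assumptions:

```go
package sketch

// missedWindow reports whether a direct TX first seen at DA height
// seenAt was not sequenced by seenAt+window. Full nodes can evaluate
// this locally and consistently, with no extra coordination.
func missedWindow(seenAt, window, currentDAHeight uint64) bool {
	return currentDAHeight > seenAt+window
}
```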
- Do we need to trace the TX source channel? There is currently no way to find out if a sequencer adds a direct TX "too early". It can be a duplicate from the mempool as well.

  > Review comment: No, I don't think it's necessary to trace the source channel; maybe it's useful for the sequencer itself, though. The sequencer has preferred execution rights in an evolve block, so it can include whatever transactions it wants, since only valid txs will actually be executed. If the sequencer adds a direct tx X before it's even included in the DA layer, then every full node can see that X was already included, so that's perfectly valid. I do not see how this could be harmful.
- What is a good max size for a sequencer/fallback block? Should this be configurable or set at genesis?

  > Review comment: What do you mean a …
- Do we need to track the sequencer's DA activity? When it goes down with no direct TX on the DA, there are no state changes. When it goes down and direct TXs missed their time window, full nodes switch to fallback mode anyway. A liveness-check sketch follows this item.

  > Review comment: Yes; in order to decide when to enter fallback mode, full nodes need to assess the liveness of the sequencer. The liveness of the sequencer depends on whether it is posting evolve blocks to the DA layer. Gossiping evolve blocks over P2P shouldn't be considered the way to check liveness, since P2P is vulnerable to split-network attacks such as eclipse attacks. Opting to listen over P2P is purely optional for a full node, so we should assume the case of a full node listening only to the DA for evolve blocks.
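A sketch of the DA-only liveness check the comment describes; P2P gossip is deliberately not consulted. The tracker shape and names are assumptions:

```go
package sketch

// livenessTracker decides when a full node enters fallback mode based
// solely on evolve blocks observed on the DA layer; P2P gossip is
// ignored because it is vulnerable to eclipse-style attacks.
type livenessTracker struct {
	lastEvolveDAHeight uint64 // DA height of the last evolve block seen
	window             uint64 // DA heights the sequencer may stay silent
}

// observeEvolveBlock records the DA height of a newly seen evolve block.
func (t *livenessTracker) observeEvolveBlock(daHeight uint64) {
	if daHeight > t.lastEvolveDAHeight {
		t.lastEvolveDAHeight = daHeight
	}
}

// shouldEnterFallback is true once the sequencer has been silent on
// the DA layer for longer than the liveness window.
func (t *livenessTracker) shouldEnterFallback(currentDAHeight uint64) bool {
	return currentDAHeight > t.lastEvolveDAHeight+t.window
}
```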
- How do we restore the chain from recovery mode? => A socially coordinated fork which nominates a new sequencer to continue "normal" functionality. A state-machine sketch of the flow follows this item.

  > Review comment: Assuming by recovery mode you mean fallback mode here: there doesn't necessarily need to be a socially coordinated fork. Say the original sequencer was just experiencing downtime and the rest of the network is in fallback mode. If the original sequencer comes back online, it first needs to sync to the tip of the chain from the network in fallback mode, and it can then signal that it's back online by posting something to the DA layer. The rest of the network's full nodes can see this signal and exit fallback mode. They can then wait, for the same period of time that triggers fallback mode, for the sequencer to start posting evolve blocks to the DA layer. The alternative, if the original sequencer is down forever for some reason (such as its signing keys being lost), is introducing a socially coordinated fork like you suggested.
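The recovery flow from the comment, condensed into a tiny state machine. States, events, and the on-DA "back online" signal are illustrative names, not part of this PR:

```go
package sketch

type mode int

const (
	modeNormal            mode = iota // sequencer posting evolve blocks
	modeFallback                      // sequencer considered down
	modeAwaitingSequencer             // online signal seen on DA, waiting for blocks
)

type event int

const (
	evDirectTxWindowMissed  event = iota // or liveness window elapsed
	evSequencerOnlineSignal              // sequencer posted "back online" to DA
	evEvolveBlockOnDA                    // sequencer produced a block again
	evWaitTimeout                        // signal posted but no block followed in time
)

// next encodes the transitions described in the review comment.
func next(m mode, e event) mode {
	switch {
	case m == modeNormal && e == evDirectTxWindowMissed:
		return modeFallback
	case m == modeFallback && e == evSequencerOnlineSignal:
		return modeAwaitingSequencer
	case m == modeAwaitingSequencer && e == evEvolveBlockOnDA:
		return modeNormal
	case m == modeAwaitingSequencer && e == evWaitTimeout:
		return modeFallback // signaled but never produced blocks
	}
	return m
}
```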
## Smarter sequencer

- Flatten the batches from the mempool to limit memory more efficiently.
- The `Executor.GetTxs` method should have a max byte size to limit the response size, or the limit could be passed via context; see the sketch below.
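For the `GetTxs` point, the context route avoids a breaking interface change. A sketch with assumed names (this is not existing ev-node API):

```go
package sketch

import "context"

type maxBytesKey struct{}

// WithMaxTxBytes attaches a response-size budget for GetTxs to the context.
func WithMaxTxBytes(ctx context.Context, n uint64) context.Context {
	return context.WithValue(ctx, maxBytesKey{}, n)
}

// MaxTxBytes reads the budget back; ok is false if none was set, in
// which case the executor should fall back to its own default.
func MaxTxBytes(ctx context.Context) (n uint64, ok bool) {
	n, ok = ctx.Value(maxBytesKey{}).(uint64)
	return n, ok
}
```

The alternative, an explicit `maxBytes` parameter on `GetTxs`, is more discoverable but changes the interface for every executor implementation.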
@@ -9,6 +9,7 @@ import (
	"github.com/ethereum/go-ethereum/common"

	coreda "github.com/evstack/ev-node/core/da"
	coresequencer "github.com/evstack/ev-node/core/sequencer"
	"github.com/evstack/ev-node/execution/evm" // Import the evm flags package
	"github.com/evstack/ev-node/node"

@@ -19,6 +20,7 @@ import (
	"github.com/evstack/ev-node/pkg/p2p/key"
	"github.com/evstack/ev-node/pkg/store"
	"github.com/evstack/ev-node/sequencers/based"
	"github.com/evstack/ev-node/sequencers/single"

	"github.com/spf13/cobra"
)
@@ -169,11 +171,24 @@ func NewExtendedRunNodeCmd(ctx context.Context) *cobra.Command {
		return fmt.Errorf("failed to create P2P client: %w", err)
	}

	// Create the appropriate DirectTxSequencer based on node type.
	var directTXSeq coresequencer.DirectTxSequencer
	if nodeConfig.Node.Aggregator {
		// Aggregator nodes use the full DirectTxSequencer that can sequence direct transactions.
		directTXSeq = single.NewDirectTxSequencer(sequencer, logger, datastore, 100, nodeConfig.ForcedInclusion) // todo (Alex): what is a good max value
		if err := directTXSeq.Load(ctx); err != nil {
			return fmt.Errorf("failed to load direct tx sequencer: %w", err)
		}
	} else {
		// Full nodes use a specialized DirectTxSequencer that stores direct transactions but doesn't sequence.
		directTXSeq = single.NewFullNodeDirectTxSequencer(sequencer, logger, datastore, 100, nodeConfig.ForcedInclusion)
	}

	// Pass the raw rollDA implementation to StartNode.
	// StartNode might need adjustment if it strictly requires coreda.Client methods.
	// For now, assume it can work with coreda.DA or will be adjusted later.
	// We also need to pass the namespace config for rollDA.
-	return rollcmd.StartNode(logger, cmd, executor, sequencer, rollDA, p2pClient, datastore, nodeConfig, node.NodeOptions{})
+	return rollcmd.StartNode(logger, cmd, executor, directTXSeq, rollDA, p2pClient, datastore, nodeConfig, node.NodeOptions{})
	},
}
Large diffs are not rendered by default.
@@ -14,6 +14,7 @@ import (
	"time"

	goheader "github.com/celestiaorg/go-header"
	"github.com/celestiaorg/go-square/v2/share"
	ds "github.com/ipfs/go-datastore"
	"github.com/libp2p/go-libp2p/core/crypto"
	"github.com/rs/zerolog"

@@ -24,6 +25,7 @@ import (
	coresequencer "github.com/evstack/ev-node/core/sequencer"
	"github.com/evstack/ev-node/pkg/cache"
	"github.com/evstack/ev-node/pkg/config"
	"github.com/evstack/ev-node/pkg/directtx"
	"github.com/evstack/ev-node/pkg/genesis"
	"github.com/evstack/ev-node/pkg/signer"
	storepkg "github.com/evstack/ev-node/pkg/store"
@@ -116,8 +118,9 @@ type Manager struct {
	dataInCh  chan NewDataEvent
	dataStore goheader.Store[*types.Data]

-	headerCache *cache.Cache[types.SignedHeader]
-	dataCache   *cache.Cache[types.Data]
+	headerCache       *cache.Cache[types.SignedHeader]
+	dataCache         *cache.Cache[types.Data]
+	directTXExtractor *directtx.Extractor

	// headerStoreCh is used to notify sync goroutine (HeaderStoreRetrieveLoop) that it needs to retrieve headers from headerStore
	headerStoreCh chan struct{}

@@ -168,6 +171,7 @@ type Manager struct {
	// validatorHasherProvider is used to provide the validator hash for the header.
	// It is used to set the validator hash in the header.
	validatorHasherProvider types.ValidatorHasherProvider
+	fallbackMode            bool
}

// getInitialState tries to load lastState from Store, and if it's not available it reads genesis.
@@ -302,6 +306,7 @@ func NewManager(
	dataStore goheader.Store[*types.Data],
	headerBroadcaster broadcaster[*types.SignedHeader],
	dataBroadcaster broadcaster[*types.Data],
+	directTXExtractor *directtx.Extractor,
	seqMetrics *Metrics,
	gasPrice float64,
	gasMultiplier float64,

@@ -385,6 +390,7 @@ func NewManager(
	lastBatchData:     lastBatchData,
	headerCache:       cache.NewCache[types.SignedHeader](),
	dataCache:         cache.NewCache[types.Data](),
+	directTXExtractor: directTXExtractor,
	retrieveCh:        make(chan struct{}, 1),
	daIncluderCh:      make(chan struct{}, 1),
	logger:            logger,
@@ -543,12 +549,18 @@ func (m *Manager) GetExecutor() coreexecutor.Executor {
	return m.exec
}

+const ( // copied from da/jsonclient/internal
+	defaultGovMaxSquareSize = 64
+	defaultMaxBytes         = defaultGovMaxSquareSize * defaultGovMaxSquareSize * share.ContinuationSparseShareContentSize
+)

> Review comment (on lines +552 to +555): The constants …
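For scale: assuming go-square v2's 512-byte shares (29-byte namespace and 1 info byte per continuation share, values not shown in this diff), `share.ContinuationSparseShareContentSize` is 482 bytes, so `defaultMaxBytes` works out to 64 × 64 × 482 = 1,974,272 bytes, roughly 1.9 MB per batch request.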
func (m *Manager) retrieveBatch(ctx context.Context) (*BatchData, error) {
	m.logger.Debug().Str("chainID", m.genesis.ChainID).Interface("lastBatchData", m.lastBatchData).Msg("Attempting to retrieve next batch")

	req := coresequencer.GetNextBatchRequest{
		Id:            []byte(m.genesis.ChainID),
		LastBatchData: m.lastBatchData,
		MaxBytes:      defaultMaxBytes, // todo (Alex): do we need to reserve some space for headers and other data?
	}

	res, err := m.sequencer.GetNextBatch(ctx, req)
@@ -906,8 +918,17 @@ func (m *Manager) execApplyBlock(ctx context.Context, lastState types.State, hea
	}

	ctx = context.WithValue(ctx, types.HeaderContextKey, header)
+	if m.fallbackMode {
+		ctx = directtx.WithFallbackMode(ctx)
+	}
	newStateRoot, _, err := m.exec.ExecuteTxs(ctx, rawTxs, header.Height(), header.Time(), lastState.AppHash)

	if err != nil {
+		if errors.Is(err, directtx.ErrDirectTXWindowMissed) {
+			// the sequencer failed to include a direct TX in time, either by censoring or through downtime
+			m.fallbackMode = true
+			return types.State{}, err
+		}

> Review comment (on lines +927 to +931): The …

		return types.State{}, fmt.Errorf("failed to execute transactions: %w", err)
	}
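The `pkg/directtx` helpers referenced above are not shown in this page; a plausible shape, purely as a sketch of what the diff implies:

```go
package directtx

import (
	"context"
	"errors"
)

// ErrDirectTXWindowMissed signals that a direct TX passed its inclusion
// window without being sequenced.
var ErrDirectTXWindowMissed = errors.New("direct tx inclusion window missed")

type fallbackKey struct{}

// WithFallbackMode marks the execution context as running in fallback mode.
func WithFallbackMode(ctx context.Context) context.Context {
	return context.WithValue(ctx, fallbackKey{}, true)
}

// IsFallbackMode reports whether fallback mode was set on the context.
func IsFallbackMode(ctx context.Context) bool {
	v, _ := ctx.Value(fallbackKey{}).(bool)
	return v
}
```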
> Review comment: This is an open design question. We could either set up some sort of frontend, like the Optimism portal (https://docs.optimism.io/app-developers/tutorials/transactions/send-tx-from-eth#trigger-the-transaction), that allows users to submit direct TXs, or just bake this functionality into any full node, which could publish a set of txs to the namespace in some format deserializable by the sequencer. A submission sketch follows.
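A sketch of the second option, where any full node publishes direct TXs to the namespace. The `submitter` interface is an assumption standing in for the real DA client (not the actual coreda interface), and `Envelope` is the wrapper from the earlier sketch:

```go
package sketch

import "context"

// submitter stands in for whatever DA client the node already holds;
// this signature is illustrative only.
type submitter interface {
	Submit(ctx context.Context, blobs [][]byte, gasPrice float64, namespace []byte) (ids [][]byte, err error)
}

// submitDirectTxs wraps each raw tx in the envelope format the
// sequencer can deserialize and posts the blobs to the namespace.
func submitDirectTxs(ctx context.Context, da submitter, namespace []byte, txs [][]byte) error {
	blobs := make([][]byte, len(txs))
	for i, tx := range txs {
		blobs[i] = Envelope{Version: 1, Tx: tx}.Marshal()
	}
	_, err := da.Submit(ctx, blobs, -1, namespace) // -1: let the client choose a gas price
	return err
}
```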