19 changes: 19 additions & 0 deletions alex_questions.md
@@ -0,0 +1,19 @@
# direct TX

- What format can we expect for the direct TX on the DA? Some wrapper type would be useful to unpack and to distinguish it from
Member:
This is an open design question. We could either set up some sort of frontend like the Optimism Portal (https://docs.optimism.io/app-developers/tutorials/transactions/send-tx-from-eth#trigger-the-transaction) that allows users to submit direct TXs directly,

or

just bake this functionality into any full node, which could then publish a direct tx (or a set of them) to the namespace in some format deserializable by the sequencer.

random bytes
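
A minimal sketch of what such an envelope could look like, assuming a JSON encoding; the package, type, and field names are placeholders, not the `pkg/directtx` types added in this PR, and the real format (e.g. protobuf) is exactly the open question above:

```go
package directtxsketch

import (
	"encoding/json"
	"fmt"
)

// DirectTxEnvelope is a hypothetical wrapper that lets full nodes and the
// sequencer tell forced-inclusion transactions apart from unrelated blobs
// posted to the same namespace.
type DirectTxEnvelope struct {
	Version uint32   `json:"version"`  // lets the format evolve without breaking older nodes
	ChainID string   `json:"chain_id"` // scopes the txs to one evolve chain
	Txs     [][]byte `json:"txs"`      // raw execution-layer transactions to force-include
}

// UnmarshalDirectTxEnvelope probes a DA blob. Anything that does not decode
// cleanly is treated as random bytes and skipped, mirroring how headers and
// signed data are probed in block/retriever.go.
func UnmarshalDirectTxEnvelope(blob []byte) (*DirectTxEnvelope, error) {
	var env DirectTxEnvelope
	if err := json.Unmarshal(blob, &env); err != nil {
		return nil, fmt.Errorf("not a direct tx envelope: %w", err)
	}
	if env.Version == 0 || len(env.Txs) == 0 {
		return nil, fmt.Errorf("empty or unversioned envelope")
	}
	return &env, nil
}
```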
- Should we always include direct TXs even though they may be duplicates of mempool TXs? Spike: yes, first one wins.
Member:
Yes, first one wins. Even if there's a duplicate, the duplicate would just fail automatically due to the usual tx replay protections in state machines, like nonces.
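
Purely as an illustration of the ordering rule (in practice the later copy simply fails the nonce check, so no explicit dedup is required):

```go
package directtxsketch

import "crypto/sha256"

// dedupFirstWins keeps only the first occurrence of each transaction, so a
// direct TX that duplicates an earlier mempool TX (or vice versa) is dropped.
func dedupFirstWins(txs [][]byte) [][]byte {
	seen := make(map[[sha256.Size]byte]struct{}, len(txs))
	out := make([][]byte, 0, len(txs))
	for _, tx := range txs {
		h := sha256.Sum256(tx)
		if _, ok := seen[h]; ok {
			continue // the later copy loses
		}
		seen[h] = struct{}{}
		out = append(out, tx)
	}
	return out
}
```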

- Should we fill the block space with direct TXs if possible, or reserve space for mempool TXs?
Member:
IIUC, this case is when there are a lot of direct txs and mempool txs at the same time, so there needs to be a decision on what to do here. We can limit the number of direct txs in each evolve block to some fixed number (it can be made dynamic later) that follows an inclusion window, so that the sequencer's mempool txs are still the ones included in an evolve block the fastest.
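
A sketch of that capping policy; the limit and the byte budget are illustrative, and the inclusion-window bookkeeping itself is still open:

```go
// selectBlockTxs fills a block with at most maxDirect direct transactions and
// then hands the remaining byte budget to the sequencer's mempool batch, so
// forced inclusion cannot starve regular txs.
func selectBlockTxs(direct, mempool [][]byte, maxDirect int, maxBytes uint64) [][]byte {
	out := make([][]byte, 0, len(direct)+len(mempool))
	var used uint64

	fits := func(tx []byte) bool { return used+uint64(len(tx)) <= maxBytes }

	for i, tx := range direct {
		if i >= maxDirect || !fits(tx) {
			break // leftover direct txs wait for the next block within their window
		}
		out = append(out, tx)
		used += uint64(len(tx))
	}
	for _, tx := range mempool {
		if !fits(tx) {
			break
		}
		out = append(out, tx)
		used += uint64(len(tx))
	}
	return out
}
```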

- What should we do when a sequencer was not able to add a direct-TX within the time window?
Member:
A sequencer following the rules of the network should be able to add a direct tx within the time window. If there are too many direct txs to include in a time window, then everyone can see that, and those extra direct txs would just be ignored. The responsibility of telling a user whether their direct tx will be included, or has been included, can be passed on to the frontends.

- Do we need to trace the TX source channel? There is currently no way to find out if a sequencer adds a direct TX "to
Member:
No, I don't think it's necessary to trace the source channel; maybe it's useful for the sequencer itself, though.

The sequencer has preferred execution rights in an evolve block, so it can really include whatever transactions it wants, since only valid txs will actually be executed. If the sequencer adds a direct tx X before it's even included in the DA layer, then every full node can see that X was already included, so that's perfectly valid. I don't see how this can be harmful.

early". Can be a duplicate from mempool as well.
- What is a good max size for a sequencer/fallback block? Should this be configurable or set in genesis?
Member:
What do you mean by a good max size? The max size of an evolve block is equal to the max blob size acceptable by the DA layer (minus any fixed serialization overhead). I think this is already set to the DA max blob size in Evolve's current version and is only relevant to the sequencer. Maybe making it configurable is useful in the future, but I don't see how it's related to direct txs. Can you please elaborate?

- Do we need to track the sequencer's DA activity? When it goes down with no direct TX on the DA, there are no state changes.
Member:
Yes, in order to decide when to enter fallback mode, full nodes need to assess the liveness of the sequencer. The liveness of the sequencer depends on whether it's posting evolve blocks on the DA layer.

Gossiping evolve blocks over P2P shouldn't be considered the way to check liveness, since P2P is vulnerable to split-network attacks like eclipse attacks. Listening over P2P is purely optional for a full node, so we should assume the case of a full node only listening to the DA layer for evolve blocks.

When it goes down and direct TXs miss their time window, full nodes switch to fallback mode anyway.
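
A sketch of DA-only liveness tracking on the full-node side; the names and the threshold are assumptions, not part of this PR:

```go
// sequencerLiveness tracks the last DA height at which an evolve block from
// the sequencer was retrieved. Gossip over P2P is deliberately ignored here.
type sequencerLiveness struct {
	lastEvolveBlockDAHeight uint64
	maxSilentDAHeights      uint64 // how long the network tolerates silence
}

func (s *sequencerLiveness) observe(daHeight uint64) {
	if daHeight > s.lastEvolveBlockDAHeight {
		s.lastEvolveBlockDAHeight = daHeight
	}
}

// shouldFallBack is true once the sequencer has been silent on DA for longer
// than the allowed window; direct txs missing their window trigger the same path.
func (s *sequencerLiveness) shouldFallBack(currentDAHeight uint64) bool {
	return currentDAHeight > s.lastEvolveBlockDAHeight+s.maxSilentDAHeights
}
```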
- How do we restore the chain from recovery mode?
Member:
Assuming by recovery mode you mean fallback mode here: there doesn't necessarily need to be a socially coordinated fork. Let's say the original sequencer was just experiencing downtime and the rest of the network is in fallback mode. If the original sequencer comes back online, it first needs to sync to the rest of the network that's in fallback mode to get to the tip of the chain, and it can then signal that it's back online by posting something to the DA layer. The rest of the network's full nodes can see this signal and exit fallback mode. They can then wait for the original fallback-triggering period of time for the sequencer to start posting evolve blocks to the DA layer again.

The alternative, if the original sequencer is down forever for some reason (like its signing keys being lost), is introducing a socially coordinated fork like you suggested:

=> socially coordinated fork which nominates a new sequencer to return to "normal" functionality
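
A sketch of the re-activation handshake described above; the signal format, package name, and verification hook are assumptions:

```go
package fallbacksketch

import "time"

// ReactivationSignal is a hypothetical blob the original sequencer posts to DA
// after syncing to the fallback chain tip, announcing that it is back online.
type ReactivationSignal struct {
	ChainID      string
	ResumeHeight uint64 // chain height the sequencer will build on
	Signature    []byte // signed with the sequencer key from genesis
}

type fallbackState struct {
	inFallback    bool
	graceWindow   time.Duration // same timeout that originally triggered fallback
	graceDeadline time.Time
}

// onReactivationSignal is what a full node might do when it sees a valid signal:
// exit fallback mode and give the sequencer the usual window to post evolve blocks.
func (f *fallbackState) onReactivationSignal(sig ReactivationSignal, verify func(ReactivationSignal) bool) {
	if !f.inFallback || !verify(sig) {
		return
	}
	f.inFallback = false
	f.graceDeadline = time.Now().Add(f.graceWindow)
}
```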

## Smarter sequencer

- Flatten the batches from the mempool to limit memory usage more efficiently
- The `Executor.GetTxs` method should have a max byte size to limit the response size, or the limit could be passed via context
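
Two possible shapes for that are sketched below; neither is the current core executor interface, and all names are illustrative:

```go
package executionsketch

import "context"

// Option 1: make the byte budget part of the interface, so implementations can
// stop pulling txs once the limit is reached instead of returning everything.
type BoundedExecutor interface {
	GetTxs(ctx context.Context, maxBytes uint64) ([][]byte, error)
}

// Option 2: keep the existing signature and pass the limit through the context.
type maxTxBytesKey struct{}

func WithMaxTxBytes(ctx context.Context, n uint64) context.Context {
	return context.WithValue(ctx, maxTxBytesKey{}, n)
}

func MaxTxBytes(ctx context.Context) (uint64, bool) {
	n, ok := ctx.Value(maxTxBytesKey{}).(uint64)
	return n, ok
}
```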
17 changes: 16 additions & 1 deletion apps/evm/based/cmd/run.go
@@ -9,6 +9,7 @@ import (
"github.com/ethereum/go-ethereum/common"

coreda "github.com/evstack/ev-node/core/da"
coresequencer "github.com/evstack/ev-node/core/sequencer"
"github.com/evstack/ev-node/execution/evm" // Import the evm flags package
"github.com/evstack/ev-node/node"

@@ -19,6 +20,7 @@ import (
"github.com/evstack/ev-node/pkg/p2p/key"
"github.com/evstack/ev-node/pkg/store"
"github.com/evstack/ev-node/sequencers/based"
"github.com/evstack/ev-node/sequencers/single"

"github.com/spf13/cobra"
)
@@ -169,11 +171,24 @@ func NewExtendedRunNodeCmd(ctx context.Context) *cobra.Command {
return fmt.Errorf("failed to create P2P client: %w", err)
}

// Create appropriate DirectTxSequencer based on node type
var directTXSeq coresequencer.DirectTxSequencer
if nodeConfig.Node.Aggregator {
// Aggregator nodes use the full DirectTxSequencer that can sequence direct transactions
directTXSeq = single.NewDirectTxSequencer(sequencer, logger, datastore, 100, nodeConfig.ForcedInclusion) // todo (Alex): what is a good max value
Contributor:
medium

The maximum size for the direct transaction sequencer queue is hardcoded to 100. This value should be configurable to allow for tuning in different deployment environments. A // todo comment already indicates that this needs to be addressed.
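
A possible shape for lifting the hardcoded value into configuration; the flag name and wiring below are hypothetical, not part of this PR:

```go
package cmd

import "github.com/spf13/cobra"

// FlagDirectTxQueueSize is a hypothetical flag replacing the hardcoded 100.
const FlagDirectTxQueueSize = "evnode.sequencer.direct-tx-queue-size"

func addDirectTxFlags(cmd *cobra.Command) {
	cmd.Flags().Uint64(FlagDirectTxQueueSize, 100,
		"maximum number of pending direct transactions kept queued by the sequencer")
}

// At wiring time the literal would be read from the flag instead, e.g.:
//   maxQueued, _ := cmd.Flags().GetUint64(FlagDirectTxQueueSize)
//   directTXSeq = single.NewDirectTxSequencer(sequencer, logger, datastore, int(maxQueued), nodeConfig.ForcedInclusion)
```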

if err := directTXSeq.Load(ctx); err != nil {
return fmt.Errorf("failed to load direct tx sequencer: %w", err)
}
} else {
// Full nodes use a specialized DirectTxSequencer that stores direct transactions but doesn't sequence
directTXSeq = single.NewFullNodeDirectTxSequencer(sequencer, logger, datastore, 100, nodeConfig.ForcedInclusion)
}

// Pass the raw rollDA implementation to StartNode.
// StartNode might need adjustment if it strictly requires coreda.Client methods.
// For now, assume it can work with coreda.DA or will be adjusted later.
// We also need to pass the namespace config for rollDA.
return rollcmd.StartNode(logger, cmd, executor, sequencer, rollDA, p2pClient, datastore, nodeConfig, node.NodeOptions{})
return rollcmd.StartNode(logger, cmd, executor, directTXSeq, rollDA, p2pClient, datastore, nodeConfig, node.NodeOptions{})
},
}

291 changes: 146 additions & 145 deletions apps/evm/based/go.mod

Large diffs are not rendered by default.

617 changes: 302 additions & 315 deletions apps/evm/based/go.sum

Large diffs are not rendered by default.

1 change: 0 additions & 1 deletion apps/evm/single/cmd/run.go
@@ -38,7 +38,6 @@ var RunCmd = &cobra.Command{
}

logger := rollcmd.SetupLogger(nodeConfig.Log)

daJrpc, err := jsonrpc.NewClient(context.Background(), logger, nodeConfig.DA.Address, nodeConfig.DA.AuthToken, nodeConfig.DA.Namespace)
if err != nil {
return err
2 changes: 1 addition & 1 deletion apps/evm/single/go.mod
@@ -54,7 +54,7 @@ require (
github.com/buger/goterm v1.0.4 // indirect
github.com/celestiaorg/go-header v0.6.6 // indirect
github.com/celestiaorg/go-libp2p-messenger v0.2.2 // indirect
github.com/celestiaorg/go-square/v2 v2.2.0 // indirect
github.com/celestiaorg/go-square/v2 v2.3.1 // indirect
github.com/cenkalti/backoff/v4 v4.3.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/compose-spec/compose-go/v2 v2.6.0 // indirect
4 changes: 2 additions & 2 deletions apps/evm/single/go.sum
@@ -107,8 +107,8 @@ github.com/celestiaorg/go-header v0.6.6 h1:17GvSXU/w8L1YWHZP4pYm9/4YHA8iy5Ku2wTE
github.com/celestiaorg/go-header v0.6.6/go.mod h1:RdnlTmsyuNerztNiJiQE5G/EGEH+cErhQ83xNjuGcaQ=
github.com/celestiaorg/go-libp2p-messenger v0.2.2 h1:osoUfqjss7vWTIZrrDSy953RjQz+ps/vBFE7bychLEc=
github.com/celestiaorg/go-libp2p-messenger v0.2.2/go.mod h1:oTCRV5TfdO7V/k6nkx7QjQzGrWuJbupv+0o1cgnY2i4=
github.com/celestiaorg/go-square/v2 v2.2.0 h1:zJnUxCYc65S8FgUfVpyG/osDcsnjzo/JSXw/Uwn8zp4=
github.com/celestiaorg/go-square/v2 v2.2.0/go.mod h1:j8kQUqJLYtcvCQMQV6QjEhUdaF7rBTXF74g8LbkR0Co=
github.com/celestiaorg/go-square/v2 v2.3.1 h1:CDdiQ+QkKPOQEcyDPODgP/PbAEzqUcftsohCPcbvsnw=
github.com/celestiaorg/go-square/v2 v2.3.1/go.mod h1:6M2txj0j6dkoE+cgwyG0EqrEPhbZpM2R1lsWEopMIBc=
github.com/celestiaorg/utils v0.1.0 h1:WsP3O8jF7jKRgLNFmlDCwdThwOFMFxg0MnqhkLFVxPo=
github.com/celestiaorg/utils v0.1.0/go.mod h1:vQTh7MHnvpIeCQZ2/Ph+w7K1R2UerDheZbgJEJD2hSU=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
2 changes: 1 addition & 1 deletion apps/testapp/cmd/run.go
@@ -88,7 +88,7 @@ var RunCmd = &cobra.Command{
if err != nil {
return err
}

// Create appropriate DirectTxSequencer based on node type
p2pClient, err := p2p.NewClient(nodeConfig, nodeKey, datastore, logger, p2p.NopMetrics())
if err != nil {
return err
2 changes: 1 addition & 1 deletion apps/testapp/go.mod
@@ -25,7 +25,7 @@ require (
github.com/beorn7/perks v1.0.1 // indirect
github.com/celestiaorg/go-header v0.6.6 // indirect
github.com/celestiaorg/go-libp2p-messenger v0.2.2 // indirect
github.com/celestiaorg/go-square/v2 v2.2.0 // indirect
github.com/celestiaorg/go-square/v2 v2.3.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/containerd/cgroups v1.1.0 // indirect
github.com/coreos/go-systemd/v22 v22.5.0 // indirect
4 changes: 2 additions & 2 deletions apps/testapp/go.sum
@@ -27,8 +27,8 @@ github.com/celestiaorg/go-header v0.6.6 h1:17GvSXU/w8L1YWHZP4pYm9/4YHA8iy5Ku2wTE
github.com/celestiaorg/go-header v0.6.6/go.mod h1:RdnlTmsyuNerztNiJiQE5G/EGEH+cErhQ83xNjuGcaQ=
github.com/celestiaorg/go-libp2p-messenger v0.2.2 h1:osoUfqjss7vWTIZrrDSy953RjQz+ps/vBFE7bychLEc=
github.com/celestiaorg/go-libp2p-messenger v0.2.2/go.mod h1:oTCRV5TfdO7V/k6nkx7QjQzGrWuJbupv+0o1cgnY2i4=
github.com/celestiaorg/go-square/v2 v2.2.0 h1:zJnUxCYc65S8FgUfVpyG/osDcsnjzo/JSXw/Uwn8zp4=
github.com/celestiaorg/go-square/v2 v2.2.0/go.mod h1:j8kQUqJLYtcvCQMQV6QjEhUdaF7rBTXF74g8LbkR0Co=
github.com/celestiaorg/go-square/v2 v2.3.1 h1:CDdiQ+QkKPOQEcyDPODgP/PbAEzqUcftsohCPcbvsnw=
github.com/celestiaorg/go-square/v2 v2.3.1/go.mod h1:6M2txj0j6dkoE+cgwyG0EqrEPhbZpM2R1lsWEopMIBc=
github.com/celestiaorg/utils v0.1.0 h1:WsP3O8jF7jKRgLNFmlDCwdThwOFMFxg0MnqhkLFVxPo=
github.com/celestiaorg/utils v0.1.0/go.mod h1:vQTh7MHnvpIeCQZ2/Ph+w7K1R2UerDheZbgJEJD2hSU=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
25 changes: 23 additions & 2 deletions block/manager.go
@@ -14,6 +14,7 @@ import (
"time"

goheader "github.com/celestiaorg/go-header"
"github.com/celestiaorg/go-square/v2/share"
ds "github.com/ipfs/go-datastore"
"github.com/libp2p/go-libp2p/core/crypto"
"github.com/rs/zerolog"
@@ -24,6 +25,7 @@ import (
coresequencer "github.com/evstack/ev-node/core/sequencer"
"github.com/evstack/ev-node/pkg/cache"
"github.com/evstack/ev-node/pkg/config"
"github.com/evstack/ev-node/pkg/directtx"
"github.com/evstack/ev-node/pkg/genesis"
"github.com/evstack/ev-node/pkg/signer"
storepkg "github.com/evstack/ev-node/pkg/store"
@@ -116,8 +118,9 @@ type Manager struct {
dataInCh chan NewDataEvent
dataStore goheader.Store[*types.Data]

headerCache *cache.Cache[types.SignedHeader]
dataCache *cache.Cache[types.Data]
headerCache *cache.Cache[types.SignedHeader]
dataCache *cache.Cache[types.Data]
directTXExtractor *directtx.Extractor

// headerStoreCh is used to notify sync goroutine (HeaderStoreRetrieveLoop) that it needs to retrieve headers from headerStore
headerStoreCh chan struct{}
@@ -168,6 +171,7 @@ type Manager struct {
// validatorHasherProvider is used to provide the validator hash for the header.
// It is used to set the validator hash in the header.
validatorHasherProvider types.ValidatorHasherProvider
fallbackMode bool
}

// getInitialState tries to load lastState from Store, and if it's not available it reads genesis.
@@ -302,6 +306,7 @@ func NewManager(
dataStore goheader.Store[*types.Data],
headerBroadcaster broadcaster[*types.SignedHeader],
dataBroadcaster broadcaster[*types.Data],
directTXExtractor *directtx.Extractor,
seqMetrics *Metrics,
gasPrice float64,
gasMultiplier float64,
@@ -385,6 +390,7 @@
lastBatchData: lastBatchData,
headerCache: cache.NewCache[types.SignedHeader](),
dataCache: cache.NewCache[types.Data](),
directTXExtractor: directTXExtractor,
retrieveCh: make(chan struct{}, 1),
daIncluderCh: make(chan struct{}, 1),
logger: logger,
@@ -543,12 +549,18 @@ func (m *Manager) GetExecutor() coreexecutor.Executor {
return m.exec
}

const ( // copied from da/jsonclient/internal
defaultGovMaxSquareSize = 64
defaultMaxBytes = defaultGovMaxSquareSize * defaultGovMaxSquareSize * share.ContinuationSparseShareContentSize
)
Comment on lines +552 to +555
Contributor:
medium

The constants defaultGovMaxSquareSize and defaultMaxBytes are copied from another package. This can lead to inconsistencies if the original values in the source package change. To improve maintainability, these constants should be imported from a shared package or the source package should be updated to export them.
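
One possible shape for that suggestion: export the values from the DA client package (the package path and exported names here are assumptions) and reference them from `block/manager.go` instead of re-declaring them:

```go
package jsonclient

import "github.com/celestiaorg/go-square/v2/share"

// Exported once so block/manager.go (and anyone else) can reuse the same limits
// instead of keeping a copy that can silently drift.
const (
	DefaultGovMaxSquareSize = 64
	DefaultMaxBlobBytes     = DefaultGovMaxSquareSize * DefaultGovMaxSquareSize * share.ContinuationSparseShareContentSize
)
```

`block/manager.go` would then drop its local constants and use the exported value for `MaxBytes` in the batch request.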


func (m *Manager) retrieveBatch(ctx context.Context) (*BatchData, error) {
m.logger.Debug().Str("chainID", m.genesis.ChainID).Interface("lastBatchData", m.lastBatchData).Msg("Attempting to retrieve next batch")

req := coresequencer.GetNextBatchRequest{
Id: []byte(m.genesis.ChainID),
LastBatchData: m.lastBatchData,
MaxBytes: defaultMaxBytes, // todo (Alex): do we need to reserve some space for headers and other data?
}

res, err := m.sequencer.GetNextBatch(ctx, req)
@@ -906,8 +918,17 @@ func (m *Manager) execApplyBlock(ctx context.Context, lastState types.State, hea
}

ctx = context.WithValue(ctx, types.HeaderContextKey, header)
if m.fallbackMode {
ctx = directtx.WithFallbackMode(ctx)
}
newStateRoot, _, err := m.exec.ExecuteTxs(ctx, rawTxs, header.Height(), header.Time(), lastState.AppHash)

if err != nil {
if errors.Is(err, directtx.ErrDirectTXWindowMissed) {
// the sequencer failed to include a direct TX within its window, either due to censoring or downtime
m.fallbackMode = true
return types.State{}, err
}
Comment on lines +927 to +931
Contributor:
high

The fallbackMode flag is set to true when ErrDirectTXWindowMissed occurs, but there appears to be no mechanism to reset it to false. This means that once a node enters fallback mode, it could remain in that mode indefinitely. This could have serious implications for the node's behavior and performance. A strategy for exiting fallback mode should be implemented, whether it's automatic after a certain condition is met or requires manual intervention.
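
One illustrative direction for resetting the flag (not implemented in this PR): clear it once a block from the canonical sequencer is applied successfully again, e.g. after the re-activation signal discussed in alex_questions.md. Sketched as an addition to `block/manager.go`, with the trigger condition left open:

```go
// exitFallbackIfRecovered is a sketch of a possible reset path: once the node
// successfully applies a block produced by the canonical sequencer again, it
// leaves fallback mode. The exact trigger is an open design question.
func (m *Manager) exitFallbackIfRecovered(fromCanonicalSequencer bool) {
	if m.fallbackMode && fromCanonicalSequencer {
		m.fallbackMode = false
		m.logger.Info().Msg("sequencer recovered, exiting fallback mode")
	}
}
```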

return types.State{}, fmt.Errorf("failed to execute transactions: %w", err)
}

2 changes: 1 addition & 1 deletion block/publish_block_p2p_test.go
@@ -199,7 +199,6 @@ func setupBlockManager(t *testing.T, ctx context.Context, workDir string, mainKV
dataSyncService, err := evSync.NewDataSyncService(mainKV, nodeConfig, genesisDoc, p2pClient, dataSyncLogger)
require.NoError(t, err)
require.NoError(t, dataSyncService.Start(ctx))

result, err := NewManager(
ctx,
signer,
@@ -214,6 +213,7 @@ func setupBlockManager(t *testing.T, ctx context.Context, workDir string, mainKV
dataSyncService.Store(),
nil,
nil,
nil,
NopMetrics(),
1.,
1.,
2 changes: 1 addition & 1 deletion block/reaper_test.go
@@ -32,7 +32,7 @@ func TestReaper_SubmitTxs_Success(t *testing.T) {
// Prepare transaction and its hash
tx := []byte("tx1")

// Mock interactions for the first SubmitTxs call
// Mock interactions for the first retrieveDirectTXs call
mockExec.On("GetTxs", mock.Anything).Return([][]byte{tx}, nil).Once()
submitReqMatcher := mock.MatchedBy(func(req coresequencer.SubmitBatchTxsRequest) bool {
return string(req.Id) == chainID && len(req.Batch.Transactions) == 1 && string(req.Batch.Transactions[0]) == string(tx)
26 changes: 19 additions & 7 deletions block/retriever.go
@@ -80,15 +80,26 @@ func (m *Manager) processNextDAHeaderAndData(ctx context.Context) error {
return nil
}
m.logger.Debug().Int("n", len(blobsResp.Data)).Uint64("daHeight", daHeight).Msg("retrieved potential blob data")
for _, bz := range blobsResp.Data {
for blobIdx, bz := range blobsResp.Data {
if len(bz) == 0 {
m.logger.Debug().Uint64("daHeight", daHeight).Msg("ignoring nil or empty blob")
continue
}
if m.handlePotentialHeader(ctx, bz, daHeight) {
continue
}
m.handlePotentialData(ctx, bz, daHeight)
if m.handlePotentialData(ctx, bz, daHeight) {
continue
}
if _, err := m.directTXExtractor.Handle(
ctx,
daHeight,
blobsResp.IDs[blobIdx],
bz,
blobsResp.Timestamp,
); err != nil {
return err
}
}
return nil
} else if strings.Contains(fetchErr.Error(), coreda.ErrHeightFromFuture.Error()) {
@@ -158,22 +169,22 @@ func (m *Manager) handlePotentialHeader(ctx context.Context, bz []byte, daHeight
}

// handlePotentialData tries to decode and process a data. No return value.
func (m *Manager) handlePotentialData(ctx context.Context, bz []byte, daHeight uint64) {
func (m *Manager) handlePotentialData(ctx context.Context, bz []byte, daHeight uint64) bool {
var signedData types.SignedData
err := signedData.UnmarshalBinary(bz)
if err != nil {
m.logger.Debug().Err(err).Msg("failed to unmarshal signed data")
return
return false
}
if len(signedData.Txs) == 0 {
m.logger.Debug().Uint64("daHeight", daHeight).Msg("ignoring empty signed data")
return
return false
}

// Early validation to reject junk data
if !m.isValidSignedData(&signedData) {
m.logger.Debug().Uint64("daHeight", daHeight).Msg("invalid data signature")
return
return false
}

dataHashStr := signedData.Data.DACommitment().String()
@@ -183,12 +194,13 @@ func (m *Manager) handlePotentialData(ctx context.Context, bz []byte, daHeight u
if !m.dataCache.IsSeen(dataHashStr) {
select {
case <-ctx.Done():
return
return false
default:
m.logger.Warn().Uint64("daHeight", daHeight).Msg("dataInCh backlog full, dropping signed data")
}
m.dataInCh <- NewDataEvent{&signedData.Data, daHeight}
}
return true
}

// areAllErrorsHeightFromFuture checks if all errors in a joined error are ErrHeightFromFutureStr
4 changes: 3 additions & 1 deletion client/crates/types/README.md
@@ -50,6 +50,7 @@ cargo build
```

The build script will:

1. Check if pre-generated files exist (`src/proto/evnode.v1.*.rs`)
2. If they exist, use them (this is the default behavior)
3. If they don't exist, attempt to generate them from source proto files
@@ -66,7 +67,8 @@ EV_TYPES_FORCE_PROTO_GEN=1 cargo build
make rust-proto-gen
```

**Important**:
**Important**:

- The build process generates both `evnode.v1.messages.rs` and `evnode.v1.services.rs`
- Both files should be committed to ensure users can use the crate without needing to regenerate
- When publishing to crates.io, the pre-generated files are included in the package
2 changes: 1 addition & 1 deletion docs/api/index.md
@@ -8,4 +8,4 @@ title: API Introduction
import spec from '../src/openapi-rpc.json'
</script>

<OAIntroduction :spec="spec" />
<OAIntroduction :spec="spec" />
2 changes: 1 addition & 1 deletion docs/api/operationsByTags/[operationId].md
@@ -13,4 +13,4 @@ const route = useRoute()
const operationId = route.data.params.operationId
</script>

<OAOperation :spec="spec" :operationId="operationId" />
<OAOperation :spec="spec" :operationId="operationId" />
1 change: 0 additions & 1 deletion docs/guides/cometbft-to-evolve.md
@@ -41,7 +41,6 @@ Run the following command to initialize Evolve:
ignite evolve init
```


<!-- TODO: update

## Initialize Evolve CLI Configuration {#initialize-evolve-cli-configuration}