From 5f6ca6a2761721248f5f706b23a91a3a581a0599 Mon Sep 17 00:00:00 2001 From: Andrew Mayorov Date: Tue, 14 Oct 2025 11:57:02 +0200 Subject: [PATCH 1/4] chore: mention `mria` pre-0.8.18 bug in known issues --- en_US/changes/known-issues-5.9.md | 1 + en_US/changes/known-issues-6.0.md | 1 + 2 files changed, 2 insertions(+) diff --git a/en_US/changes/known-issues-5.9.md b/en_US/changes/known-issues-5.9.md index aeb51e40d..ec8ca2232 100644 --- a/en_US/changes/known-issues-5.9.md +++ b/en_US/changes/known-issues-5.9.md @@ -6,3 +6,4 @@ | ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------- | | 5.0.0 | **Node Crash if Linux monotonic clock steps backward**
In certain virtual Linux environments, the operating system is unable to keep the clocks monotonic, which may cause the Erlang VM to exit with the message `OS monotonic time stepped backwards!`. | For such environments, you may set the `+c` flag to `false` in `etc/vm.args`. | | | 5.3.0 | **Limitation in SAML-Based SSO**
EMQX Dashboard supports Single Sign-On based on the Security Assertion Markup Language (SAML) 2.0 standard and integrates with Okta and OneLogin as identity providers. However, the SAML-based SSO currently does not support a certificate signature verification mechanism and is incompatible with Azure Entra ID due to its complexity. | - | | +| 5.1.0 | **Core-replicant cluster changes involving adding core nodes occasionally cause replicants to hang on startup**
During cluster changes involving adding new core nodes, said core nodes could sometimes fail to start replication-related processes that replicants rely on. This in turn caused upgraded or newly added replicant nodes to hang on startup. In Kubernetes deployments, this led to the controller repeatedly restarting replicant pods due to failing readiness probes. Typical upgrade rollouts, such as adding two new cores and two new replicants running a newer EMQX release to an existing 2-core + 2-replicant cluster, could be affected. | If one or more replicant nodes hang on startup after being (re)deployed, consider forcefully restarting the newly added core nodes one after another, until the replicants unblock and complete startup. | Fixed in 6.0.1 | diff --git a/en_US/changes/known-issues-6.0.md b/en_US/changes/known-issues-6.0.md index 275254dbe..50e4b39e6 100644 --- a/en_US/changes/known-issues-6.0.md +++ b/en_US/changes/known-issues-6.0.md @@ -5,3 +5,4 @@ | Since version | Issue | Workaround | Status | | ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------ | | 6.0.0 | **Cannot perform rolling upgrade from cluster running 5.x to 6.0.0 when older bridges are in the configuration**
Clusters that have started running from older EMQX versions that contain the now deprecated `bridges` configuration root will fail to sync their configuration to the new 6.0 nodes, because the latter have dropped for such roots and thus fail to start the corresponding Connectors, Actions and Sources. | Starting from 6.0.1, an RPC call is made to the older node to upgrade the configuration to convert `bridges` into `connectors`, `sources` and `actions`, facilitating rolling upgrades with less manual invervention.
Alternatively, each affected bridge can be updated via HTTP API or CLI to induce a configuration update (e.g., change the description) which will also upgrade the persisted `cluster.hocon` file.
The following Connector/Sources/Actions might still require manual changes before attempting a rolling upgrade:
- GCP PubSub Consumer
- Kafka Consumer
If there are any such sources in the configuration that still contain the `topic_mapping` field, the field must be removed from config and then one "Source + Rule" pair must be created for each entry. | | +| 5.1.0 | **Core-replicant cluster changes involving adding core nodes occasionally cause replicants to hang on startup**
During cluster changes involving adding new core nodes, said core nodes could sometimes fail to start replication-related processes that replicants rely on. This in turn caused upgraded or newly added replicant nodes to hang on startup. In Kubernetes deployments, this led to the controller repeatedly restarting replicant pods due to failing readiness probes. Typical upgrade rollouts, such as adding two new cores and two new replicants running a newer EMQX release to an existing 2-core + 2-replicant cluster, could be affected. | If one or more replicant nodes hang on startup after being (re)deployed, consider forcefully restarting the newly added core nodes one after another, until the replicants unblock and complete startup. | Fixed in 6.0.1 | From 4604d57ddc6bd467975964352afd37b4b3c1566f Mon Sep 17 00:00:00 2001 From: Meggielqk <126552073+Meggielqk@users.noreply.github.com> Date: Fri, 17 Oct 2025 11:45:30 +0800 Subject: [PATCH 2/4] Edit and add zh translations --- en_US/changes/known-issues-5.9.md | 2 +- en_US/changes/known-issues-6.0.md | 5 +++-- zh_CN/changes/known-issues-5.9.md | 9 +++++---- zh_CN/changes/known-issues-6.0.md | 7 ++++--- 4 files changed, 13 insertions(+), 10 deletions(-) diff --git a/en_US/changes/known-issues-5.9.md b/en_US/changes/known-issues-5.9.md index ec8ca2232..967cd69a6 100644 --- a/en_US/changes/known-issues-5.9.md +++ b/en_US/changes/known-issues-5.9.md @@ -6,4 +6,4 @@ | ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------- | | 5.0.0 | **Node Crash if Linux monotonic clock steps backward**
In certain virtual Linux environments, the operating system is unable to keep the clocks monotonic, which may cause the Erlang VM to exit with the message `OS monotonic time stepped backwards!`. | For such environments, you may set the `+c` flag to `false` in `etc/vm.args`. | |
EMQX Dashboard supports Single Sign-On based on the Security Assertion Markup Language (SAML) 2.0 standard and integrates with Okta and OneLogin as identity providers. However, the SAML-based SSO currently does not support a certificate signature verification mechanism and is incompatible with Azure Entra ID due to its complexity. | - | | -| 5.1.0 | **Core-replicant cluster changes involving adding core nodes occasionally cause replicants to hang on startup**
During cluster changes involving adding new core nodes, said core nodes could sometimes fail to start replication-related processes that replicants rely on. This in turn caused upgraded or newly added replicant nodes to hang on startup. In Kubernetes deployments, this led to the controller repeatedly restarting replicant pods due to failing readiness probes. Typical upgrade rollouts, such as adding two new cores and two new replicants running a newer EMQX release to an existing 2-core + 2-replicant cluster, could be affected. | If one or more replicant nodes hang on startup after being (re)deployed, consider forcefully restarting the newly added core nodes one after another, until the replicants unblock and complete startup. | Fixed in 6.0.1 | +| 5.1.0 | **Replicant nodes may hang on startup when new core nodes are added to the cluster**
During cluster changes that involve adding new core nodes, the newly added cores may occasionally fail to start replication-related processes required by replicant nodes. This, in turn, may cause upgraded or newly added replicant nodes to hang during startup.
In Kubernetes deployments, this can lead to the controller repeatedly restarting replicant pods due to failing readiness probes.
This problem typically occurs during upgrade rollouts, for example, when expanding an existing 2-core + 2-replicant cluster by adding two new core nodes and two new replicants running a newer EMQX version. | If one or more replicant nodes hang during startup after being (re)deployed, consider forcefully restarting the newly added core nodes one at a time until the replicants unblock and complete startup. | Fixed in 6.0.1 | diff --git a/en_US/changes/known-issues-6.0.md b/en_US/changes/known-issues-6.0.md index 50e4b39e6..08f4256b0 100644 --- a/en_US/changes/known-issues-6.0.md +++ b/en_US/changes/known-issues-6.0.md @@ -4,5 +4,6 @@ | Since version | Issue | Workaround | Status | | ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------ | -| 6.0.0 | **Cannot perform rolling upgrade from cluster running 5.x to 6.0.0 when older bridges are in the configuration**
Clusters that have started running from older EMQX versions that contain the now deprecated `bridges` configuration root will fail to sync their configuration to the new 6.0 nodes, because the latter have dropped for such roots and thus fail to start the corresponding Connectors, Actions and Sources. | Starting from 6.0.1, an RPC call is made to the older node to upgrade the configuration to convert `bridges` into `connectors`, `sources` and `actions`, facilitating rolling upgrades with less manual invervention.
Alternatively, each affected bridge can be updated via HTTP API or CLI to induce a configuration update (e.g., change the description) which will also upgrade the persisted `cluster.hocon` file.
The following Connector/Sources/Actions might still require manual changes before attempting a rolling upgrade:
- GCP PubSub Consumer
- Kafka Consumer
If there are any such sources in the configuration that still contain the `topic_mapping` field, the field must be removed from config and then one "Source + Rule" pair must be created for each entry. | | -| 5.1.0 | **Core-replicant cluster changes involving adding core nodes occasionally cause replicants to hang on startup**
During cluster changes involving adding new core nodes, said core nodes could sometimes fail to start replication-related processes that replicants rely on. This in turn caused upgraded or newly added replicant nodes to hang on startup. In Kubernetes deployments, this led to the controller repeatedly restarting replicant pods due to failing readiness probes. Typical upgrade rollouts, such as adding two new cores and two new replicants running a newer EMQX release to an existing 2-core + 2-replicant cluster, could be affected. | If one or more replicant nodes hang on startup after being (re)deployed, consider forcefully restarting the newly added core nodes one after another, until the replicants unblock and complete startup. | Fixed in 6.0.1 | +| 6.0.0 | **Cannot perform rolling upgrade from cluster running 5.x to 6.0.0 when older bridges are in the configuration**
Clusters that have started running from older EMQX versions that contain the now deprecated `bridges` configuration root will fail to sync their configuration to the new 6.0 nodes, because the latter have dropped for such roots and thus fail to start the corresponding Connectors, Actions and Sources. | Starting from 6.0.1, an RPC call is made to the older node to upgrade the configuration to convert `bridges` into `connectors`, `sources,` and `actions`, facilitating rolling upgrades with less manual intervention.
Alternatively, each affected bridge can be updated via HTTP API or CLI to induce a configuration update (e.g., change the description), which will also upgrade the persisted `cluster.hocon` file.
The following Connector/Sources/Actions might still require manual changes before attempting a rolling upgrade:
- GCP PubSub Consumer
- Kafka Consumer
If there are any such sources in the configuration that still contain the `topic_mapping` field, the field must be removed from config and then one "Source + Rule" pair must be created for each entry. | | +| 5.1.0 | **Replicant nodes may hang on startup when new core nodes are added to the cluster**
During cluster changes that involve adding new core nodes, the newly added cores may occasionally fail to start replication-related processes required by replicant nodes. This, in turn, may cause upgraded or newly added replicant nodes to hang during startup.
In Kubernetes deployments, this can lead to the controller repeatedly restarting replicant pods due to failing readiness probes.
This problem typically occurs during upgrade rollouts, for example, when expanding an existing 2-core + 2-replicant cluster by adding two new core nodes and two new replicants running a newer EMQX version. | If one or more replicant nodes hang during startup after being (re)deployed, consider forcefully restarting the newly added core nodes one at a time until the replicants unblock and complete startup. | Fixed in 6.0.1 | + diff --git a/zh_CN/changes/known-issues-5.9.md b/zh_CN/changes/known-issues-5.9.md index 9b6e7edd9..e30dc3165 100644 --- a/zh_CN/changes/known-issues-5.9.md +++ b/zh_CN/changes/known-issues-5.9.md @@ -2,7 +2,8 @@ ## e5.9.0 -| 始于版本 | 问题描述 | 解决方法 | 状态 | -| -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---- | -| 5.0.0 | **Linux 单调时钟回调导致 EMQX 节点重启**
在某些虚拟 Linux 环境中,操作系统无法保持时钟的单调性,这可能会导致 Erlang VM 因为错误消息 `OS monotonic time stepped backwards!` 而退出。 | 对于这类环境,可以在 `etc/vm.args` 中将 `+c` 标志设置为 `false`。 | | -| 5.3.0 | **基于 SAML 的单点登录限制**
EMQX Dashboard 支持基于安全断言标记语言(SAML)2.0 标准的单点登录(SSO),并与 Okta 和 OneLogin 作为身份提供商集成。然而,基于 SAML 的 SSO 目前不支持证书签名验证机制,并且由于其复杂性,无法与 Azure Entra ID 兼容。 | - | | +| 始于版本 | 问题描述 | 解决方法 | 状态 | +| -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------------- | +| 5.0.0 | **Linux 单调时钟回拨导致 EMQX 节点重启**
在某些虚拟 Linux 环境中,操作系统无法保持时钟的单调性,这可能会导致 Erlang VM 因为错误消息 `OS monotonic time stepped backwards!` 而退出。 | 对于这类环境,可以在 `etc/vm.args` 中将 `+c` 标志设置为 `false`。 | | +| 5.3.0 | **基于 SAML 的单点登录限制**
EMQX Dashboard 支持基于安全断言标记语言(SAML)2.0 标准的单点登录(SSO),并与 Okta 和 OneLogin 作为身份提供商集成。然而,基于 SAML 的 SSO 目前不支持证书签名验证机制,并且由于其复杂性,无法与 Azure Entra ID 兼容。 | - | | +| 5.1.0 | **新增核心节点时,复制节点可能在启动时卡住**
在涉及新增核心节点的集群变更过程中,新加入的核心节点有时可能无法正确启动复制节点所依赖的复制相关进程,进而可能导致升级后或新添加的复制节点在启动时卡住。
在 Kubernetes 部署中,这种情况会导致复制节点的就绪探针检查失败,控制器因此会不断终止并重启复制节点的 Pod。
该问题通常出现在升级过程中,例如在原有的“两个核心节点 + 两个复制节点”集群基础上,添加两个运行新版 EMQX 的核心节点和两个复制节点时。 | 如果一个或多个复制节点在(重新)部署后启动时卡住,可以尝试依次强制重启新添加的核心节点,直到复制节点解除卡顿并完成启动。 | 已在 6.0.1 中修复 | diff --git a/zh_CN/changes/known-issues-6.0.md index fb3d4c6de..872ce283a 100644 --- a/zh_CN/changes/known-issues-6.0.md +++ b/zh_CN/changes/known-issues-6.0.md @@ -2,6 +2,7 @@ ## 6.0.0 -| 始于版本 | 问题描述 | 解决方法 | 状态 | -| -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ---- | -| 6.0.0 | **当配置中包含旧版桥接(bridges)时,无法从运行 5.x 的集群滚动升级到 6.0.0**
如果集群是从较早版本的 EMQX 启动,并且配置中包含现已弃用的 `bridges` 配置根项,则无法将配置同步到新的 6.0 节点。因为 6.0 版本已移除对该配置根项的支持,导致无法启动相应的连接器(Connector)、动作(Action)和 Source。 | 从 6.0.1 起,系统会通过 RPC 调用旧节点,将配置中的 `bridges` 自动转换为 `connectors`、`sources` 和 `actions`,从而减少手动干预,实现平滑滚动升级。
或者,也可以通过 HTTP API 或 CLI 手动更新每个受影响的桥接配置(例如修改描述字段),以触发配置更新并升级持久化的 `cluster.hocon` 文件。
以下连接器、Source 或动作类型在尝试滚动升级前仍可能需要手动修改:
- GCP PubSub 消费者
- Kafka 消费者
如果这些配置中仍包含 `topic_mapping` 字段,需要手动从配置中移除,并为每个条目创建一个 “Source + 规则” 对。 | | +| 始于版本 | 问题描述 | 解决方法 | 状态 | +| -------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ----------------- | +| 6.0.0 | **当配置中包含旧版桥接(bridges)时,无法从运行 5.x 的集群滚动升级到 6.0.0**
如果集群是从较早版本的 EMQX 启动,并且配置中包含现已弃用的 `bridges` 配置根项,则无法将配置同步到新的 6.0 节点。因为 6.0 版本已移除对该配置根项的支持,导致无法启动相应的连接器(Connector)、动作(Action)和 Source。 | 从 6.0.1 起,系统会通过 RPC 调用旧节点,将配置中的 `bridges` 自动转换为 `connectors`、`sources` 和 `actions`,从而减少手动干预,实现平滑滚动升级。
或者,也可以通过 HTTP API 或 CLI 手动更新每个受影响的桥接配置(例如修改描述字段),以触发配置更新并升级持久化的 `cluster.hocon` 文件。
以下连接器、Source 或动作类型在尝试滚动升级前仍可能需要手动修改:
- GCP PubSub 消费者
- Kafka 消费者
如果这些配置中仍包含 `topic_mapping` 字段,需要手动从配置中移除,并为每个条目创建一个 “Source + 规则” 对。 | | +| 5.1.0 | **新增核心节点时,复制节点可能在启动时卡住**
在涉及新增核心节点的集群变更过程中,新加入的核心节点有时可能无法正确启动复制节点所依赖的复制相关进程,进而可能导致升级后或新添加的复制节点在启动时卡住。
在 Kubernetes 部署中,这种情况会导致复制节点的就绪探针检查失败,控制器因此会不断终止并重启复制节点的 Pod。
该问题通常出现在升级过程中,例如在原有的“两个核心节点 + 两个复制节点”集群基础上,添加两个运行新版 EMQX 的核心节点和两个复制节点时。 | 如果一个或多个复制节点在(重新)部署后启动时卡住,可以尝试依次强制重启新添加的核心节点,直到复制节点解除卡顿并完成启动。 | 已在 6.0.1 中修复 | From 2b4c82a61941816f01db2270cd215886a9d6eb5d Mon Sep 17 00:00:00 2001 From: Meggielqk <126552073+Meggielqk@users.noreply.github.com> Date: Fri, 7 Nov 2025 17:46:36 +0800 Subject: [PATCH 3/4] Update en_US/changes/known-issues-6.0.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- en_US/changes/known-issues-6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/en_US/changes/known-issues-6.0.md index 08f4256b0..ad5b8e8cc 100644 --- a/en_US/changes/known-issues-6.0.md +++ b/en_US/changes/known-issues-6.0.md @@ -4,6 +4,6 @@ | Since version | Issue | Workaround | Status | | ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------ | -| 6.0.0 | **Cannot perform rolling upgrade from cluster running 5.x to 6.0.0 when older bridges are in the configuration**
Clusters that have started running from older EMQX versions that contain the now deprecated `bridges` configuration root will fail to sync their configuration to the new 6.0 nodes, because the latter have dropped for such roots and thus fail to start the corresponding Connectors, Actions and Sources. | Starting from 6.0.1, an RPC call is made to the older node to upgrade the configuration to convert `bridges` into `connectors`, `sources,` and `actions`, facilitating rolling upgrades with less manual intervention.
Alternatively, each affected bridge can be updated via HTTP API or CLI to induce a configuration update (e.g., change the description), which will also upgrade the persisted `cluster.hocon` file.
The following Connector/Sources/Actions might still require manual changes before attempting a rolling upgrade:
- GCP PubSub Consumer
- Kafka Consumer
If there are any such sources in the configuration that still contain the `topic_mapping` field, the field must be removed from config and then one "Source + Rule" pair must be created for each entry. | | +| 6.0.0 | **Cannot perform rolling upgrade from cluster running 5.x to 6.0.0 when older bridges are in the configuration**
Clusters that have started running from older EMQX versions that contain the now deprecated `bridges` configuration root will fail to sync their configuration to the new 6.0 nodes, because the latter have dropped for such roots and thus fail to start the corresponding Connectors, Actions and Sources. | Starting from 6.0.1, an RPC call is made to the older node to upgrade the configuration to convert `bridges` into `connectors`, `sources` and `actions`, facilitating rolling upgrades with less manual intervention.
Alternatively, each affected bridge can be updated via HTTP API or CLI to induce a configuration update (e.g., change the description), which will also upgrade the persisted `cluster.hocon` file.
The following Connector/Sources/Actions might still require manual changes before attempting a rolling upgrade:
- GCP PubSub Consumer
- Kafka Consumer
If there are any such sources in the configuration that still contain the `topic_mapping` field, the field must be removed from config and then one "Source + Rule" pair must be created for each entry. | | | 5.1.0 | **Replicant nodes may hang on startup when new core nodes are added to the cluster**
During cluster changes that involve adding new core nodes, the newly added cores may occasionally fail to start replication-related processes required by replicant nodes. This, in turn, may cause upgraded or newly added replicant nodes to hang during startup.
In Kubernetes deployments, this can lead to the controller repeatedly restarting replicant pods due to failing readiness probes.
This problem typically occurs during upgrade rollouts, for example, when expanding an existing 2-core + 2-replicant cluster by adding two new core nodes and two new replicants running a newer EMQX version. | If one or more replicant nodes hang during startup after being (re)deployed, consider forcefully restarting the newly added core nodes one at a time until the replicants unblock and complete startup. | Fixed in 6.0.1 | From b74d55f7b40876eaf5b086d0cf52f6a151da2154 Mon Sep 17 00:00:00 2001 From: Meggielqk <126552073+Meggielqk@users.noreply.github.com> Date: Fri, 7 Nov 2025 17:48:49 +0800 Subject: [PATCH 4/4] Update en_US/changes/known-issues-6.0.md --- en_US/changes/known-issues-6.0.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/en_US/changes/known-issues-6.0.md b/en_US/changes/known-issues-6.0.md index ad5b8e8cc..fea180ddb 100644 --- a/en_US/changes/known-issues-6.0.md +++ b/en_US/changes/known-issues-6.0.md @@ -4,6 +4,6 @@ | Since version | Issue | Workaround | Status | | ------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------ | -| 6.0.0 | **Cannot perform rolling upgrade from cluster running 5.x to 6.0.0 when older bridges are in the configuration**
Clusters that have started running from older EMQX versions that contain the now deprecated `bridges` configuration root will fail to sync their configuration to the new 6.0 nodes, because the latter have dropped for such roots and thus fail to start the corresponding Connectors, Actions and Sources. | Starting from 6.0.1, an RPC call is made to the older node to upgrade the configuration to convert `bridges` into `connectors`, `sources` and `actions`, facilitating rolling upgrades with less manual intervention.
Alternatively, each affected bridge can be updated via HTTP API or CLI to induce a configuration update (e.g., change the description), which will also upgrade the persisted `cluster.hocon` file.
The following Connector/Sources/Actions might still require manual changes before attempting a rolling upgrade:
- GCP PubSub Consumer
- Kafka Consumer
If there are any such sources in the configuration that still contain the `topic_mapping` field, the field must be removed from config and then one "Source + Rule" pair must be created for each entry. | | +| 6.0.0 | **Cannot perform rolling upgrade from cluster running 5.x to 6.0.0 when older bridges are in the configuration**
Clusters originally deployed on older EMQX versions that still contain the now-deprecated `bridges` configuration root will fail to sync their configuration to the new 6.0 nodes, because the latter have dropped support for such roots and thus fail to start the corresponding Connectors, Actions, and Sources. | Starting from 6.0.1, an RPC call is made to the older node to upgrade the configuration, converting `bridges` into `connectors`, `sources`, and `actions`, which facilitates rolling upgrades with less manual intervention.
Alternatively, each affected bridge can be updated via the HTTP API or CLI to trigger a configuration update (e.g., by changing its description), which will also upgrade the persisted `cluster.hocon` file.
The following Connectors/Sources/Actions might still require manual changes before attempting a rolling upgrade:
- GCP PubSub Consumer
- Kafka Consumer
If the configuration contains any such sources that still have the `topic_mapping` field, the field must be removed from the configuration, and one "Source + Rule" pair must be created for each of its entries. | |
During cluster changes that involve adding new core nodes, the newly added cores may occasionally fail to start replication-related processes required by replicant nodes. This, in turn, may cause upgraded or newly added replicant nodes to hang during startup.
In Kubernetes deployments, this can lead to the controller repeatedly restarting replicant pods due to failing readiness probes.
This problem typically occurs during upgrade rollouts, for example, when expanding an existing 2-core + 2-replicant cluster by adding two new core nodes and two new replicants running a newer EMQX version. | If one or more replicant nodes hang during startup after being (re)deployed, consider forcefully restarting the newly added core nodes one at a time until the replicants unblock and complete startup. | Fixed in 6.0.1 |
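For the monotonic-clock issue in the 5.9 table, the workaround amounts to a single line in `etc/vm.args`. A minimal sketch, assuming a default on-disk layout (the `/opt/emqx` path is an assumption; adjust it to your installation):

```bash
# Append "+c false" to etc/vm.args unless it is already present.
# "+c false" disables Erlang VM time correction, so a backward step of
# the OS monotonic clock no longer crashes the node.
EMQX_HOME=/opt/emqx   # placeholder install path
grep -q '^+c false' "$EMQX_HOME/etc/vm.args" \
  || echo '+c false' >> "$EMQX_HOME/etc/vm.args"
```

The flag is read at boot, so the node must be restarted for it to take effect.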
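For the rolling-upgrade row, "update each affected bridge to trigger a configuration update" can be done by re-submitting an action's own config with a changed description. A hedged sketch assuming API-key authentication and the default Dashboard listener; the action ID `kafka_producer:my_action` is a placeholder, and the exact `/api/v5` endpoint for a given bridge type should be verified against the EMQX HTTP API docs:

```bash
# Fetch the current action config, then PUT it back with a tweaked
# description to force EMQX to rewrite (and thereby upgrade) the
# persisted cluster.hocon. All IDs, ports, and credentials below are
# placeholders.
BASE="http://127.0.0.1:18083/api/v5"
ID="kafka_producer:my_action"

curl -s -u "$API_KEY:$API_SECRET" "$BASE/actions/$ID" -o action.json
# Edit the "description" field in action.json (any harmless change
# works), then push the modified config back:
curl -s -u "$API_KEY:$API_SECRET" -X PUT \
  -H "Content-Type: application/json" \
  -d @action.json \
  "$BASE/actions/$ID"
```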
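And for the replicant-hang workaround ("forcefully restart the newly added core nodes one at a time"), a Kubernetes sketch; the pod names and StatefulSet layout are assumptions for illustration:

```bash
# Delete each newly added core pod in turn and wait for its replacement
# to become Ready before touching the next one. Pod names are placeholders.
for pod in emqx-core-2 emqx-core-3; do
  kubectl delete pod "$pod"
  sleep 10   # give the StatefulSet controller time to recreate the pod
  kubectl wait --for=condition=Ready "pod/$pod" --timeout=300s
done
```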