This PR introduces a new shared secret that is propagated to all the nodes of the Redis cluster. The main idea is to leverage the cluster bus to share a secret among all the nodes, so that the nodes can later authenticate with this secret and send internal commands to each other (see #13740 for more information about internal commands).
The shared secret is chosen as follows:
1. When it starts, each node randomly generates its own internal secret.
2. Each node shares its internal secret in the cluster ping messages.
3. If a node receives a ping message carrying a secret smaller than its current one, it adopts it.
4. Eventually all nodes adopt the minimal secret.
The secret converges as fast as the cluster topology does.
To carry the secret in the ping messages, we leverage the ping extension mechanism. Nodes running an older Redis version will simply ignore those extensions.
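A minimal sketch of the adoption rule (step 3 above) in C; the names `INTERNAL_SECRET_LEN` and `clusterUpdateInternalSecret` are illustrative, not the identifiers used in the patch:

```c
#include <string.h>

#define INTERNAL_SECRET_LEN 64          /* illustrative secret size */

static char my_internal_secret[INTERNAL_SECRET_LEN];

/* Called when a ping extension carrying a peer's secret is parsed.
 * If the received secret compares lower than ours, adopt it. Since
 * every node applies the same rule, all nodes converge on the
 * cluster-wide minimal secret once the information has propagated. */
void clusterUpdateInternalSecret(const char *received_secret) {
    if (memcmp(received_secret, my_internal_secret, INTERNAL_SECRET_LEN) < 0)
        memcpy(my_internal_secret, received_secret, INTERNAL_SECRET_LEN);
}
```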
Specific tests were added to verify that all nodes eventually see the same secret. In addition, a verification was added to the test infra to check the secret in `cluster_config_consistent` and in `assert_cluster_state`.
This commit excludes aux fields from the output of the `cluster nodes` and `cluster replicas` commands.
We may decide to re-introduce them in some form or another in the future, but not in v7.2.
Attempt to harden cluster init-tests by doing two things:
* Retry up to 3 times to join the cluster. CLUSTER MEET is entirely idempotent, so the cluster should stabilize even if we missed a node. (See the sketch after this list.)
* Validate that the connection is actually established, not merely that the node exists in the cluster list. Nodes can exist in the handshake state but might later get dropped.
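For illustration, a hedged sketch of this retry-and-validate logic in C with hiredis (the actual test helpers are written in Tcl; names, timeouts, and the address check here are assumptions):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <hiredis/hiredis.h>

/* Returns 1 if peer_addr (e.g. "127.0.0.1:7001") appears in CLUSTER NODES
 * past the handshake phase and with its link reported as connected. */
static int peerConnected(redisContext *c, const char *peer_addr) {
    redisReply *r = redisCommand(c, "CLUSTER NODES");
    int ok = 0;
    if (r && r->type == REDIS_REPLY_STRING) {
        char *line = strstr(r->str, peer_addr);
        if (line) {
            char *eol = strchr(line, '\n');
            if (eol) *eol = '\0';   /* limit the checks to this node's line */
            if (!strstr(line, "handshake") && strstr(line, "connected")) ok = 1;
        }
    }
    if (r) freeReplyObject(r);
    return ok;
}

/* Retry CLUSTER MEET up to 3 times; the command is idempotent, so
 * re-sending it is safe if a previous attempt was missed. */
static int joinCluster(redisContext *c, const char *ip, int port,
                       const char *peer_addr) {
    for (int attempt = 0; attempt < 3; attempt++) {
        redisReply *r = redisCommand(c, "CLUSTER MEET %s %d", ip, port);
        if (r) freeReplyObject(r);
        for (int i = 0; i < 50; i++) {      /* poll for up to ~5 seconds */
            if (peerConnected(c, peer_addr)) return 1;
            usleep(100000);
        }
    }
    return 0;
}
```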
Introduce Shard IDs to logically group nodes in cluster mode.
1. Added a new "shard_id" field to "cluster nodes" output and nodes.conf after "hostname"
2. Added a new PING extension to propagate "shard_id" (a rough sketch of the payload follows below)
3. Handled upgrade from pre-7.2 releases automatically
4. Refactored PING extension assembling/parsing logic
Behavior of Shard IDs:
Replicas always follow the shard of their reported primary. If a primary updates its shard ID, the replica follows. (This need not hold for cluster v2.) This is not an expected use case.
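For illustration, a rough sketch of what the shard_id PING extension payload (item 2 above) could look like; the field names, types, and framing here are assumptions and may differ from the actual patch:

```c
#include <stdint.h>

#define CLUSTER_NAMELEN 40  /* shard IDs are 40 hex characters, like node IDs */

/* Hypothetical layout of a shard_id ping extension. In practice the
 * extension header (type/length) and the payload may be separate
 * structures; this sketch just shows what information travels. */
typedef struct {
    uint16_t type;    /* extension type tag: "this payload carries a shard_id" */
    uint16_t unused;  /* reserved, keeps the header aligned */
    uint32_t length;  /* total length of the extension, header included */
    char shard_id[CLUSTER_NAMELEN]; /* sender's shard ID */
} shardIdPingExt;
```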
Implement a new cluster shards command, which provides a flexible and extensible API for topology discovery.
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
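A small hiredis sketch of how a topology-aware client might consume the command; it deliberately only counts the top-level shard entries rather than assuming the exact nested reply layout:

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 7000); /* any cluster node */
    if (!c || c->err) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }
    redisReply *r = redisCommand(c, "CLUSTER SHARDS");
    if (r && r->type == REDIS_REPLY_ARRAY) {
        /* Each top-level element describes one shard: its slot ranges and
         * the nodes (primary plus replicas) serving them. */
        printf("cluster has %zu shard(s)\n", r->elements);
    }
    if (r) freeReplyObject(r);
    redisFree(c);
    return 0;
}
```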
This commit implements sharded pubsub, based on shard channels.
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
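A minimal hiredis sketch of the client-facing side, assuming the connected node (127.0.0.1:7000 here) owns the slot of the shard channel; if it does not, the node replies with a MOVED redirect instead:

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *sub = redisConnect("127.0.0.1", 7000);
    redisContext *pub = redisConnect("127.0.0.1", 7000);
    if (!sub || sub->err || !pub || pub->err) return 1;

    /* Subscribe to a shard channel; the first reply confirms the subscription. */
    redisReply *r = redisCommand(sub, "SSUBSCRIBE orders");
    if (r) freeReplyObject(r);

    /* Publish to the same shard channel from another connection; the message
     * is only propagated within the shard that owns the channel's slot. */
    r = redisCommand(pub, "SPUBLISH orders hello");
    if (r) freeReplyObject(r);

    /* Block until the message is delivered to the subscriber. */
    if (redisGetReply(sub, (void **)&r) == REDIS_OK && r) {
        /* Over RESP2 the push is ["smessage", "orders", "hello"]. */
        if (r->type == REDIS_REPLY_ARRAY && r->elements == 3)
            printf("got: %s\n", r->element[2]->str);
        freeReplyObject(r);
    }
    redisFree(sub);
    redisFree(pub);
    return 0;
}
```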
Introduce memory management on cluster link buffers:
* Introduce a new `cluster-link-sendbuf-limit` config that caps memory usage of cluster bus link send buffers.
* Introduce a new `CLUSTER LINKS` command that displays current TCP links to/from peers.
* Introduce a new `mem_cluster_links` field under `INFO` command output, which displays the overall memory usage by all current cluster links.
* Introduce a new `total_cluster_links_buffer_limit_exceeded` field under `CLUSTER INFO` command output, which displays the accumulated count of cluster links freed due to `cluster-link-sendbuf-limit`.
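A hedged hiredis sketch of how an operator-facing tool might exercise these additions (host, port, and the 10mb cap are arbitrary):

```c
#include <stdio.h>
#include <string.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 7000);
    if (!c || c->err) return 1;

    /* Cap each cluster bus link's send buffer; links exceeding the cap are
     * freed (and counted in total_cluster_links_buffer_limit_exceeded). */
    redisReply *r = redisCommand(c, "CONFIG SET cluster-link-sendbuf-limit 10mb");
    if (r) freeReplyObject(r);

    /* One entry per TCP link to/from a peer node. */
    r = redisCommand(c, "CLUSTER LINKS");
    if (r && r->type == REDIS_REPLY_ARRAY)
        printf("active cluster links: %zu\n", r->elements);
    if (r) freeReplyObject(r);

    /* mem_cluster_links reports the total memory used by those links. */
    r = redisCommand(c, "INFO memory");
    if (r && r->type == REDIS_REPLY_STRING) {
        char *field = strstr(r->str, "mem_cluster_links:");
        if (field) printf("%.*s\n", (int)strcspn(field, "\r\n"), field);
    }
    if (r) freeReplyObject(r);
    redisFree(c);
    return 0;
}
```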
Disable replica migration to avoid a race condition where the
migrated-from node turns into a replica.
Long term, this test should probably be improved to handle multiple slots and to accept such automatic migrations, but this is a quick fix to stabilize the CI without dropping the test entirely.
This is work in progress, focusing on two main areas:
* Avoiding race conditions with cluster configuration propagation.
* Ignoring limitations of `redis-cli --cluster fix`, which make it hard to distinguish real errors (e.g. failure to fix) from conditions expected in this test (e.g. nodes not agreeing on the configuration).
The test was trying to wait for the replica to start loading the RDB from the master before killing the master, but it was actually waiting for ROLE to report the "sync" state, which corresponds to REPL_STATE_TRANSFER and starts before the actual loading begins. Now it instead waits for the loading flag to be set.
Besides, the test depended on the previous configuration of the servers, relying on the replica being configured to persist (either RDB or AOF); now this is set explicitly.
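For illustration, the equivalent of the new wait condition in C with hiredis (the real test does this in Tcl): poll INFO persistence until the replica reports loading:1, i.e. it is actually loading the RDB rather than still in the transfer phase that ROLE reports as "sync":

```c
#include <string.h>
#include <unistd.h>
#include <hiredis/hiredis.h>

/* Returns 1 once the replica reports loading:1, 0 on timeout.
 * INFO is one of the few commands allowed while loading. */
static int waitForLoading(redisContext *replica, int max_attempts) {
    for (int i = 0; i < max_attempts; i++) {
        redisReply *r = redisCommand(replica, "INFO persistence");
        int loading = r && r->type == REDIS_REPLY_STRING &&
                      strstr(r->str, "\nloading:1") != NULL;
        if (r) freeReplyObject(r);
        if (loading) return 1;
        usleep(100000);  /* 100 ms between polls */
    }
    return 0;
}
```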