mirror of https://mirror.osredm.com/root/redis.git
296 Commits
Author | SHA1 | Message | Date |
---|---|---|---|
ad9575dd6d |
[Vector sets] RDB IO errors handling (#13978)
This PR adds support for REDISMODULE_OPTIONS_HANDLE_IO_ERRORS, plus tests for a short read and a corrupted RESTORE payload. Note that the comment about async loading support was also removed, since we should already be covered: Vector Sets do not manipulate global data structures, except for the unique ID used to create new vector sets with different IDs. |
|
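For reference, a minimal sketch of opting in on the module side, assuming the standard modules SDK (`RedisModule_SetModuleOptions()` and `RedisModule_IsIOError()` are existing modules-API calls; the module name is illustrative):

```c
#include "redismodule.h"

/* Opt in to IO error handling so that RDB short reads and corrupted
 * payloads are reported to the rdb_load callback instead of aborting
 * the server. */
int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "vset-demo", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* rdb_load callbacks are then expected to check RedisModule_IsIOError()
     * on the RedisModuleIO handle and fail gracefully. */
    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_IO_ERRORS);
    return REDISMODULE_OK;
}
```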
d65102861f
|
Adding AGPLv3 as a license option to Redis! (#13997)
Read more about [the new license option](http://redis.io/blog/agplv3/) and [the Redis 8 release](http://redis.io/blog/redis-8-ga/). |
|
3cdb8c6046
|
Improve replication buffering on replica and fix a related bug (#13904)
With RDB channel replication, we introduced parallel delivery of the replication stream and the RDB to the replica during a full sync. Currently, after the replica loads the RDB and begins streaming the accumulated buffer into the database, it does not read from the master connection during this period. Although streaming the local buffer is generally a fast operation, it can take some time if the buffer is large. This PR introduces buffering during the streaming of the local buffer. One important consideration is ensuring that we consume more than we read during this operation; otherwise, it could continue indefinitely. To guarantee that it eventually completes, we limit the read to at most half of what we consume, e.g., read at most 1 MB once we consume at least 2 MB.

**Additional changes**

**Bug fix** - Currently, when the replica starts draining the accumulated buffer, we call protectClient() for the master client, as we occasionally yield back to the event loop via processEventsWhileBlocked(). This prevents freeing the master client. While we are in this loop, if the replica receives a "replicaof newmaster" command, we call replicaSetMaster(), which expects to free the master client and trigger a new connection attempt. As the client object is protected, its destruction happens asynchronously, yet a new connection attempt to the new master is made immediately. Later, when the replication buffer is drained, we realize the master client was marked as CLOSE_ASAP, and freeing the master client triggers another connection attempt to the new master. In most cases, we realize something is wrong in the replication state machine and abort the second attempt later, so the bug may go undetected. The fix is to not call protectClient() for the master client; instead, we try to detect whether the master client was disconnected during processEventsWhileBlocked() and, if so, break the loop immediately.

**Related improvement:** - Currently, the replication buffer is a linked list of buffers, each 1 MB in size. While consuming the buffer, we process one buffer at a time and check if we need to yield back to `processEventsWhileBlocked()`. However, if `loading-process-events-interval-bytes` is set to less than 1 MB, this approach doesn't handle it well. To improve this, I've modified the code to process 16 KB at a time and check `loading-process-events-interval-bytes` more frequently. This way, depending on the configuration, we may yield back to networking more often. - In replication.c, `disklessLoadingRio` is now set before the call to `emptyData()`. This should not introduce any behavioral change, but it is logically more correct, as emptyData() may yield to networking and we may need to call rioAbort() on disklessLoadingRio. Otherwise, a failure of the main channel may go undetected until a failure on the rdb channel in a corner case.

**Config changes** - The default value of the `loading-process-events-interval-bytes` configuration is lowered from 2MB to 512KB. This configuration is primarily used for testing and controls the frequency of networking during the loading phase, specifically when loading the RDB or applying accumulated buffers during a full sync on the replica side. Before the introduction of RDB channel replication, the 2MB value was sufficient for occasionally yielding to networking, mainly to reply -loading to the clients. However, with RDB channel replication, during a full sync on the replica side (either while loading the RDB or applying the accumulated buffer), we need to yield back to networking more frequently to continue accumulating the replication stream. If this doesn't happen often enough, the replication stream can accumulate on the master side, which is undesirable. To address this, we've decided to lower the default value to 512KB. One concern with frequent yielding to networking is the potential performance impact, as each call to processEventsWhileBlocked() involves 4 syscalls, which could slow down the RDB loading phase. However, benchmarking with various configuration values has shown that using 512KB or higher does not negatively impact RDB loading performance. Based on these results, 512KB is now the default value.

**Test changes** - Added an improved version of a replication test which checks memory usage on the master during full sync. --------- Co-authored-by: Oran Agra <oran@redislabs.com> |
|
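To illustrate the termination guarantee, here is a minimal sketch of the consume/read budget; all names are hypothetical, and only the "read at most half of what we consume" rule comes from the PR:

```c
#include <stddef.h>

typedef struct {
    size_t consumed; /* bytes streamed from the local buffer into the db */
    size_t buffered; /* bytes read from the master while draining */
} drainProgress;

/* Cap reads at half of what has been consumed, e.g. read at most 1 MB
 * once at least 2 MB were consumed, so draining always outpaces
 * buffering and the drain loop is guaranteed to terminate. */
size_t readBudget(const drainProgress *p) {
    size_t cap = p->consumed / 2;
    return p->buffered < cap ? cap - p->buffered : 0;
}
```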
7f5f588232
|
AOF offset info (#13773)
### Background

AOF is often used as an effective data recovery method, but if we have two AOFs from different nodes, it is hard to tell which one has the latest data. Generally, we determine whose data is more up-to-date by reading the latest modification time of the AOF file, but because of replication delay, even if both master and replica write to the AOF at the same time, the data in the master is more up-to-date (there are commands that haven't arrived at the replica yet, or a large number of commands have accumulated on the replica side), so we may make the wrong decision.

### Solution

The replication offset always increments when AOF is enabled, even if there is no replica, so we think the replication offset is a better way to determine which one has more up-to-date data: whichever has a larger offset has newer data. We therefore add the start replication offset info to the AOF manifest, as below.
```
file appendonly.aof.2.base.rdb seq 2 type b
file appendonly.aof.2.incr.aof seq 2 type i startoffset 224
```
And if the AOF file is closed gracefully rather than by a crash, such as by `shutdown`, `kill signal 15` or `config set appendonly no`, we also add the end replication offset, as below.
```
file appendonly.aof.2.base.rdb seq 2 type b
file appendonly.aof.2.incr.aof seq 2 type i startoffset 224 endoffset 532
```

#### Things to pay attention to
- For the BASE AOF, we do not add `startoffset` and `endoffset` info, since we cannot know the starting replication offset of its data, and it would not help us determine which AOF has more up-to-date data.
- For AOFs from old versions, we also don't add `startoffset` and `endoffset` info, since we don't know their starting replication offset either. If we recorded the start offset as 0, we might make the judgment even less accurate. For example, if the master has just rewritten the AOF, its INCR AOF will inevitably be very small, while if the replica has not rewritten its AOF for a long time, its INCR AOF might be much larger; applying the offset comparison would then lead to incorrect decisions, so in this case we still just check the timestamp instead of adding offset info.
- If the last INCR AOF has `startoffset` or `endoffset`, we need to restore `server.master_repl_offset` from them to avoid a rollback of the `startoffset` of the next INCR AOF. If it has `endoffset`, we just use this value as `server.master_repl_offset`, and it is very important to remove this information from the manifest file, so that the next time we load the manifest file we don't see a stale `endoffset`. If it only has `startoffset`, we calculate `server.master_repl_offset` as the `startoffset` plus the file size.

### How to determine which one has more up-to-date data

The AOF with the larger replication offset has the more up-to-date data. The offset is obtained by reading the AOF manifest file for information about **the last INCR AOF**:
1. If the last INCR AOF has an `endoffset` field, we can directly use `endoffset` as the replication offset of the AOF.
2. If there is no `endoffset` (e.g., redis crashed abnormally) but the last INCR AOF has a `startoffset` field, we get the replication offset of the AOF as `startoffset` plus the file size.
3. Finally, if the AOF has neither `startoffset` nor `endoffset` (it may come from an old version, and the new version of redis has not rewritten the AOF yet), we still need to check the modification timestamp of the last INCR AOF.

### TODO

Fix PING causing inconsistency between AOF size and replication offset in a future PR. Because we increment the replication offset when sending PING/REPLCONF to the replica but do not write that data to the AOF file, the starting offset of the AOF file plus its size may be inconsistent with the actual replication offset. |
|
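A hypothetical helper mirroring the three rules above (the struct and function are ours, not the PR's; -1 marks an absent manifest field):

```c
#include <stdint.h>

typedef struct {
    int64_t startoffset; /* -1 if absent (e.g. old-version AOF) */
    int64_t endoffset;   /* -1 if absent (e.g. crash, no graceful close) */
    int64_t file_size;   /* size of the last INCR AOF in bytes */
} incrAofInfo;

/* Derive the replication offset represented by the last INCR AOF. */
int64_t aofReplOffset(const incrAofInfo *a) {
    if (a->endoffset >= 0) return a->endoffset;                    /* rule 1 */
    if (a->startoffset >= 0) return a->startoffset + a->file_size; /* rule 2 */
    return -1; /* rule 3: fall back to the file modification timestamp */
}
```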
870b6bd487
|
Added a shared secret over Redis cluster. (#13763)
The PR introduces a new shared secret that is shared across all the nodes of the Redis cluster. The main idea is to leverage the cluster bus to share a secret between all the nodes, such that later the nodes will be able to authenticate using this secret and send internal commands to each other (see #13740 for more information about internal commands). The shared secret is chosen as follows:
1. Each node, when it starts, randomly generates its own internal secret.
2. Each node shares its internal secret over the cluster ping messages.
3. If a node gets a ping message with a secret smaller than its current secret, it embraces it.
4. Eventually all nodes embrace the minimal secret.
The convergence of the secret is as fast as the convergence of the topology. To extend the ping messages to contain the secret, we leverage the extension mechanism; nodes that run an older Redis version will simply ignore those extensions. Specific tests were added to verify that eventually all nodes see the same secret. In addition, a verification was added to the test infra to verify the secret on `cluster_config_consistent` and in `assert_cluster_state`. |
|
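A sketch of the convergence rule in steps 3 and 4; the secret length and the use of memcmp ordering are assumptions (the PR only says a "smaller" secret wins):

```c
#include <string.h>

#define SECRET_LEN 40 /* illustrative length, not specified in the PR */

/* On every ping carrying the secret extension, adopt the peer's secret
 * if it is smaller; all nodes eventually settle on the minimal secret. */
void maybeAdoptSecret(unsigned char *my_secret, const unsigned char *peer_secret) {
    if (memcmp(peer_secret, my_secret, SECRET_LEN) < 0)
        memcpy(my_secret, peer_secret, SECRET_LEN);
}
```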
73a9b916c9
|
Rdb channel replication (#13732)
This PR is based on: https://github.com/redis/redis/pull/12109 https://github.com/valkey-io/valkey/pull/60 Closes: https://github.com/redis/redis/issues/11678

**Motivation**

During a full sync, when the master is delivering the RDB to the replica, incoming write commands are kept in a replication buffer in order to be sent to the replica once RDB delivery is completed. If RDB delivery takes a long time, it might create memory pressure on the master. Also, once a replica connection accumulates replication data larger than the output buffer limits, the master will kill the replica connection, which may cause a replication failure. The main benefit of rdb channel replication is streaming incoming commands in parallel to the RDB delivery. This approach shifts replication stream buffering to the replica and reduces load on the master. We do this by opening another connection for RDB delivery: the main channel on the replica receives the replication stream while the rdb channel receives the RDB. This feature also helps to reduce the master's main process CPU load. By opening a dedicated connection for the RDB transfer, the bgsave process has access to the new connection and streams the RDB directly to the replicas. Before this change, due to the TLS connection restriction, the bgsave process was writing RDB bytes to a pipe and the main process was forwarding them to the replica. This is no longer necessary; the main process can avoid these expensive socket read/write syscalls, which also means RDB delivery to the replica will be faster. In summary, replication will be faster and the master's performance during full syncs will improve.

**Implementation steps**

1. When the replica connects to the master, it sends 'rdb-channel-repl' as part of the capability exchange to let the master know the replica supports rdb channel replication.
2. When the replica lacks sufficient data for PSYNC, the master sends a +RDBCHANNELSYNC reply with the replica's client id. As the next step, the replica opens a new connection (rdb-channel) and configures it against the master with the appropriate capabilities and requirements. It also sends the given client id back to the master over the rdb channel, so that the master can associate these channels (the initial replica connection will be referred to as the main channel). Then, the replica requests a full sync using the rdb channel.
3. Prior to forking, the master attaches the replica's main channel to the replication backlog to deliver the replication stream starting at the snapshot end offset.
4. The master main process sends the replication stream via the main channel, while the bgsave process sends the RDB directly to the replica via the rdb channel. The replica accumulates the replication stream in a local buffer while the RDB is being loaded into memory.
5. Once the replica completes loading the RDB, it drops the rdb channel and streams the accumulated replication stream into the db. Sync is completed.

**Some details**

- Currently, rdbchannel replication is supported only if `repl-diskless-sync` is enabled on the master. Otherwise, replication happens over a single connection as before.
- On the replica, there is a limit to replication stream buffering. The replica uses a new config, `replica-full-sync-buffer-limit`, to limit the number of bytes to accumulate. If it is not set, the replica inherits the `client-output-buffer-limit <replica>` hard limit config. If we reach this limit, the replica stops accumulating. This is not a failure scenario, though; further accumulation will happen on the master side. Depending on the configured limits on the master, the master may kill the replica connection.

**API changes in INFO output:**

1. New replica state: `send_bulk_and_stream`. Indicates full sync is still in progress for this replica and it is receiving the replication stream and the rdb in parallel. ``` slave0:ip=127.0.0.1,port=5002,state=send_bulk_and_stream,offset=0,lag=0 ``` Replica state changes in steps: - First, the replica sends psync and receives +RDBCHANNELSYNC: `state=wait_bgsave` - After the replica connects with the rdb channel and delivery starts: `state=send_bulk_and_stream` - After full sync: `state=online`
2. On the replica side, replication stream buffering metrics: - replica_full_sync_buffer_size: currently accumulated replication stream data in bytes. - replica_full_sync_buffer_peak: peak number of bytes that this instance accumulated in the lifetime of the process. ``` replica_full_sync_buffer_size:20485 replica_full_sync_buffer_peak:1048560 ```

**API changes in CLIENT LIST**

In `client list` output, rdbchannel clients will have a 'C' flag in addition to the 'S' replica flag: ``` id=11 addr=127.0.0.1:39108 laddr=127.0.0.1:5001 fd=14 name= age=5 idle=5 flags=SC db=0 sub=0 psub=0 ssub=0 multi=-1 watch=0 qbuf=0 qbuf-free=0 argv-mem=0 multi-mem=0 rbs=1024 rbp=0 obl=0 oll=0 omem=0 tot-mem=1920 events=r cmd=psync user=default redir=-1 resp=2 lib-name= lib-ver= io-thread=0 ```

**Config changes:**

- `replica-full-sync-buffer-limit`: controls how much replication data the replica can accumulate during rdbchannel replication. If it is not set, a value of 0 means the replica will inherit the `client-output-buffer-limit <replica>` hard limit config to limit accumulated data (see the sketch below).
- `repl-rdb-channel` is added as a hidden config. This is mostly for testing, as we need to support both rdbchannel replication and the older single connection replication (to keep compatibility with older versions; rdbchannel replication will not be enabled if repl-diskless-sync is not enabled). It affects both the master (not responding to rdb channel requests) and the replica (not declaring the capability).

**Internal API changes:**

Changes introduced to Redis replication: - A new replication capability is added to the replconf command: `capa rdb-channel-repl`. It indicates the replica is capable of rdb channel replication; the replica sends it when it connects to the master along with other capabilities. - If the replica needs a full sync, the master replies `+RDBCHANNELSYNC <client-id>` to the replica's PSYNC request. - When the replica opens the rdbchannel connection, as part of the replconf command it sends `rdb-channel 1` to let the master know this is the rdb channel. It also sends `main-ch-client-id <client-id>` as part of the replconf command so the master can associate the channels.

**Testing:**

As rdbchannel replication is enabled by default, we run the whole test suite with it. However, as we need to support both rdbchannel and single connection replication, we'll be running some tests twice with the `repl-rdb-channel yes/no` config.
**Replica state diagram**

Main channel: RECEIVE_PING_REPLY → (+PONG) SEND_HANDSHAKE → (+OK) RECEIVE_AUTH_REPLY → (+OK) RECEIVE_PORT_REPLY → (+OK) RECEIVE_IP_REPLY → (+OK) RECEIVE_CAPA_REPLY → SEND_PSYNC → (PSYNC, using the cached master) RECEIVE_PSYNC_REPLY. From there: on `+FULLRESYNC` the main channel goes through REPL_TRANSFER to CONNECTED; on `+CONTINUE` it goes directly to CONNECTED; on `+RDBCHANNELSYNC client-id` it enters REPL_TRANSFER and accumulates replication data while the rdb channel runs in parallel.

RDB channel: RDB_CH_SEND_HANDSHAKE → (REPLCONF main-ch-client-id <clientid>) RDB_CH_RECEIVE_AUTH_REPLY → (+OK) RDB_CH_RECEIVE_REPLCONF_REPLY → (+OK) RDB_CH_RECEIVE_FULLRESYNC → (+FULLRESYNC, rdb delivery) RDB_CH_RDB_LOADING. Once loading is done, the replica streams the accumulated replication buffer into memory, and both channels end in CONNECTED.

----- This PR also contains changes and ideas from: https://github.com/valkey-io/valkey/pull/837 https://github.com/valkey-io/valkey/pull/1173 https://github.com/valkey-io/valkey/pull/804 https://github.com/valkey-io/valkey/pull/945 https://github.com/valkey-io/valkey/pull/989 --------- Co-authored-by: Yuan Wang <wangyuancode@163.com> Co-authored-by: debing.sun <debing.sun@redis.com> Co-authored-by: Moti Cohen <moticless@gmail.com> Co-authored-by: naglera <anagler123@gmail.com> Co-authored-by: Amit Nagler <58042354+naglera@users.noreply.github.com> Co-authored-by: Madelyn Olson <madelyneolson@gmail.com> Co-authored-by: Binbin <binloveplay1314@qq.com> Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech> Co-authored-by: Ping Xie <pingxie@outlook.com> Co-authored-by: Ran Shidlansik <ranshid@amazon.com> Co-authored-by: ranshid <88133677+ranshid@users.noreply.github.com> Co-authored-by: xbasel <103044017+xbasel@users.noreply.github.com> |
|
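A minimal sketch of the buffer-limit resolution mentioned under the config changes (parameter names are illustrative; the 0-means-inherit rule is from the PR):

```c
#include <stddef.h>

/* A replica-full-sync-buffer-limit of 0 means the replica inherits the
 * client-output-buffer-limit <replica> hard limit. */
size_t fullSyncBufferLimit(size_t full_sync_buffer_limit,
                           size_t replica_cob_hard_limit) {
    return full_sync_buffer_limit ? full_sync_buffer_limit
                                  : replica_cob_hard_limit;
}
```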
64a40b20d9
|
Async IO Threads (#13695)
## Introduction

Redis introduced IO threads in 6.0, allowing IO threads to handle client request reading, command parsing and reply writing, thereby improving performance. The current IO thread implementation has a few drawbacks. - The main thread is blocked during IO thread read/write operations and must wait for all IO threads to complete their current tasks before it can continue execution. In other words, the entire process is synchronous. This prevents the efficient utilization of multi-core CPUs for parallel processing. - When the number of clients and requests increases moderately, it causes all IO threads to reach full CPU utilization due to the busy-wait mechanism used by the IO threads. This makes it challenging for us to determine which part of Redis has reached its bottleneck. - When IO threads are enabled with TLS and io-threads-do-reads, a disconnection of a connection with pending data may result in it being assigned to multiple IO threads simultaneously. This can cause race conditions and trigger assertion failures. Related issue: redis#12540 Therefore, we designed an asynchronous IO threads solution. The IO threads adopt an event-driven model, with the main thread dedicated to command processing, while the IO threads handle client read and write operations in parallel.

## Implementation

### Overall

As before, we did not change the fact that all client commands must be executed on the main thread, because Redis was originally designed to be single-threaded, and processing commands in a multi-threaded manner would inevitably introduce numerous race and synchronization issues. But now each IO thread has an independent event loop; therefore, IO threads can use a multiplexing approach to handle client read and write operations, eliminating the CPU overhead caused by busy-waiting. The execution process can be briefly described as follows: the main thread assigns clients to IO threads after accepting connections; IO threads notify the main thread when clients finish reading and parsing queries; the main thread processes queries from the IO threads and generates replies; IO threads handle writing replies to clients after receiving the clients list from the main thread, and then continue to handle client read and write events.

### Each IO thread has an independent event loop

We now assign each IO thread its own event loop. This approach eliminates the need for the main thread to perform the costly `epoll_wait` operation for handling connections (except for specific ones). Instead, the main thread processes requests from the IO threads and hands them back once completed, fully offloading read and write events to the IO threads. Additionally, all TLS operations, including handling pending data, have been moved entirely to the IO threads. This resolves the issue where io-threads-do-reads could not be used with TLS.

### Event-notified client queue

To facilitate communication between the IO threads and the main thread, we designed an event-notified client queue. Each IO thread and the main thread have two such queues to store clients waiting to be processed. These queues are also integrated with the event loop to enable handling. We use pthread_mutex to ensure the safety of queue operations, as well as data visibility and ordering. Race conditions are minimized, as each IO thread and the main thread operate on independent queues, avoiding thread suspension due to lock contention. We also implemented an event notifier based on `eventfd` or `pipe` to support event-driven handling (a sketch of such a queue appears below).
### Thread safety

Since the main thread and IO threads can execute in parallel, we must handle data race issues carefully.

**client->flags** The primary tasks of IO threads are reading and writing, i.e. `readQueryFromClient` and `writeToClient`. However, IO threads and the main thread may concurrently modify or access `client->flags`, leading to potential race conditions. To address this, we introduced an io-flags variable to record operations performed by the IO threads, thereby avoiding race conditions on `client->flags`.

**Pause IO thread** In the main thread, we may want to operate on data of the IO threads, e.g. uninstall an event handler, access or operate on the query/output buffer, or resize the event loop; we need a clean and safe context to do that. We pause the IO thread in `IOThreadBeforeSleep`, do some jobs and then resume it. To avoid thread suspension, we use busy waiting to confirm the target status. Besides, we use atomic variables to ensure memory visibility and ordering. We introduce these functions to pause/resume IO threads, as below. ``` pauseIOThread, resumeIOThread pauseAllIOThreads, resumeAllIOThreads pauseIOThreadsRange, resumeIOThreadsRange ``` Testing has shown that `pauseIOThread` is highly efficient, allowing the main thread to execute nearly 200,000 operations per second during stress tests. Similarly, `pauseAllIOThreads` with 8 IO threads can handle up to nearly 56,000 operations per second. But operations performed between pausing and resuming IO threads must be quick; otherwise, they could cause the IO threads to reach full CPU utilization.

**freeClient and freeClientAsync** The main thread may need to terminate a client currently running on an IO thread, for example, due to ACL rule changes, reaching the output buffer limit, or evicting a client. In such cases, we need to pause the IO thread to safely operate on the client.

**maxclients and maxmemory-clients updating** When adjusting `maxclients`, we need to resize the event loop for all IO threads. Similarly, when modifying `maxmemory-clients`, we need to traverse all clients to calculate their memory usage. To ensure safe operations, we pause all IO threads during these adjustments.

**Client info reading** The main thread may need to read a client's fields to generate a descriptive string, such as for the `CLIENT LIST` command or logging purposes. In such cases, we need to pause the IO thread handling that client. If information for all clients needs to be displayed, all IO threads must be paused.

**Tracking redirect** Redis supports the tracking feature and can even send invalidation messages to a connection with a specified ID. But the target client may be running on an IO thread; directly manipulating the client's output buffer is not thread-safe, and the IO thread may not be aware that the client requires a response. In such cases, we pause the IO thread handling the client, modify the output buffer, and install a write event handler to ensure proper handling.

**clientsCron** In the `clientsCron` function, the main thread needs to traverse all clients to perform operations such as timeout checks, verifying whether they have reached the soft output buffer limit, resizing the output/query buffer, or updating memory usage. To safely operate on a client, the IO thread handling that client must be paused. If we were to pause the IO thread for each client individually, the efficiency would be very low. Conversely, pausing all IO threads simultaneously would be costly, especially when there are many IO threads, as clientsCron is invoked relatively frequently. To address this, we adopted a batched approach for pausing IO threads: at most 8 IO threads are paused at a time. The operations mentioned above are only performed on clients running in the paused IO threads, significantly reducing overhead while maintaining safety.

### Observability

In the current design, the main thread always assigns clients to the IO thread with the fewest clients. To clearly observe the number of clients handled by each IO thread, we added a new section to the INFO output. The `INFO THREADS` section shows the client count for each IO thread. ``` # Threads io_thread_0:clients=0 io_thread_1:clients=2 io_thread_2:clients=2 ``` Additionally, in the `CLIENT LIST` output, we also added a field to indicate the thread to which each client is assigned. `id=244 addr=127.0.0.1:41870 laddr=127.0.0.1:6379 ... resp=2 lib-name= lib-ver= io-thread=1`

## Trade-off

### Special Clients

For certain special types of clients, keeping them running on IO threads would result in severe race issues that are difficult to resolve. Therefore, we chose not to offload these clients to the IO threads. For replica, monitor, subscribe, and tracking clients, the main thread may directly write them a reply when conditions are met. Race issues are difficult to resolve, so we have them processed in the main thread. This includes the Lua debug clients as well, since we may operate on the connection directly. For a blocking client, after the IO thread reads and parses a command and hands it over to the main thread, if the client is identified as a blocking type, it remains in the main thread. Once the blocking operation completes and the reply is generated, the client is transferred back to the IO thread to send the reply and wait for event triggers.

### Clients Eviction

To support client eviction, it is necessary to update each client's memory usage promptly during operations such as read, write, or command execution. However, when a client operates on an IO thread, it is not feasible to update the memory usage immediately due to the risk of data races. As a result, memory usage can only be updated either in the main thread while processing commands or in `clientsCron` periodically. The downside of this approach is that updates might experience a delay of up to one second, which could impact the precision of memory management for eviction. To avoid incorrectly evicting clients, we adopted a best-effort compensation solution: when we decide to evict a client, we update its memory usage again before evicting; if the memory used by the client has not decreased, or its memory usage bucket has not changed, then we evict it; otherwise, we do not. However, we have not completely solved this problem. Due to the delay in memory usage updates, it may lead us to make incorrect decisions about the need to evict clients.

### Defragment

In the majority of cases we do NOT use the data from argv directly in the db. 1. key names: we store a copy that we allocate in the main thread, see `sdsdup()` in `dbAdd()`. 2. hash key and value: we store the key as hfield and the value as sds, see `hfieldNew()` and `sdsdup()` in `hashTypeSet()`. 3. other datatypes: they don't even use SDS, so there are no reference issues. But in some cases the data from argv may be retained by the main thread. As a result, during fragmentation cleanup, we need to move allocations from the IO thread's arena to the main thread's arena. We always allocate new memory in the main thread's arena, but the memory released by IO threads may not yet have been reclaimed. This ultimately causes the fragmentation rate to be higher compared to creating and allocating entirely within a single thread. The following cases will lead to memory allocated by an IO thread being kept by the main thread. 1. string related commands: `append`, `getset`, `mset` and `set`. If `tryObjectEncoding()` does not change argv, we keep it directly in the main thread, see the code in `tryObjectEncoding()` (specifically `trimStringObjectIfNeeded()`). 2. blocking related commands: the key names will be kept in `c->db->blocking_keys`. 3. watch command: the key names will be kept in `c->db->watched_keys`. 4. [s]subscribe command: the channel name will be kept in `serverPubSubChannels`. 5. script load command: the script will be kept in `server.lua_scripts`. 6. some module APIs: `RM_RetainString`, `RM_HoldString`. Those issues will be handled in other PRs.

## Testing

### Functional Testing

The commit enabling IO threads has passed all TCL tests, but we made some changes: **Client query buffer**: in the original code, when using a reusable query buffer, ownership of the query buffer would be released after the command was processed. However, with IO threads enabled, the client transitions from an IO thread to the main thread for processing. This causes the ownership release to occur earlier than the command execution. As a result, when IO threads are enabled, the client's information will never indicate that a shared query buffer is in use. Therefore, we skip the corresponding query buffer tests in this case. **Defragment**: add a new defragmentation test to verify the effect of IO threads on defragmentation. **Command delay**: for deferred clients in TCL tests, due to clients being assigned to different threads for execution, delays may occur. To address this, we introduced conditional waiting: the process proceeds to the next step only when the `client list` contains the corresponding commands.

### Sanitizer Testing

The commit passed all TCL tests and reported no errors when compiled with the `-fsanitize=thread` and `-fsanitize=address` options enabled. But we made the following modification: we suppressed the sanitizer warnings for clients with watched keys when updating `client->flags`. We believe IO threads read `client->flags` but never modify it or read the `CLIENT_DIRTY_CAS` bit; only the main thread modifies this bit, so there is no actual data race.

## Others

### IO thread number

In the new multi-threaded design, the main thread is primarily focused on command processing to improve performance. Typically, the main thread does not handle regular client I/O operations but is responsible for clients such as replication and tracking clients. To avoid breaking changes, we still consider the main thread as the first IO thread. When the io-threads configuration is set to a low value (e.g., 2), performance does not show a significant improvement compared to a single-threaded setup for simple commands (such as SET or GET), as the main thread does not consume much CPU for these simple operations. This results in underutilized multi-core capacity. However, for more complex commands, having a low number of IO threads may still be beneficial. Therefore, it's important to adjust the `io-threads` setting based on your own performance tests.
Additionally, you can clearly monitor the CPU utilization of the main thread and IO threads using `top -H -p $redis_pid`. This allows you to easily identify where the bottleneck is. If the IO thread is the bottleneck, increasing the `io-threads` will improve performance. If the main thread is the bottleneck, the overall performance can only be scaled by increasing the number of shards or replicas. --------- Co-authored-by: debing.sun <debing.sun@redis.com> Co-authored-by: oranagra <oran@redislabs.com> |
|
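A sketch of the event-notified client queue described above, assuming Linux `eventfd` (the PR also supports a pipe-based notifier); structure and function names are illustrative, not the ones in the PR:

```c
#include <pthread.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

typedef struct clientNode { struct clientNode *next; void *client; } clientNode;

typedef struct {
    pthread_mutex_t lock;
    clientNode *head, *tail;
    int efd; /* registered in the receiving thread's event loop */
} notifiedQueue;

void queueInit(notifiedQueue *q) {
    pthread_mutex_init(&q->lock, NULL);
    q->head = q->tail = NULL;
    q->efd = eventfd(0, EFD_NONBLOCK);
}

/* Sender side: append a client under the mutex, then wake the receiver's
 * event loop. Each thread pair uses its own queue, so contention on the
 * mutex stays minimal. */
void queuePush(notifiedQueue *q, clientNode *n) {
    pthread_mutex_lock(&q->lock);
    n->next = NULL;
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    pthread_mutex_unlock(&q->lock);
    uint64_t one = 1;
    ssize_t r = write(q->efd, &one, sizeof(one)); /* wake the event loop */
    (void)r; /* EAGAIN just means the receiver is already pending */
}
```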
79fd255828
|
Add Lua VM memory to memory overhead, now that it's part of zmalloc (#13660)
To complement the work done in #13133, which added the script VMs' memory to be counted as part of zmalloc; that means it should also be counted as part of the non-value overhead. This commit contains some refactoring to make variable names and function names less confusing. It also adds a new field named `script.VMs` to the `MEMORY STATS` command. Additionally, scripts and stats are now cleared between tests in external mode (which is related to how this issue was discovered). |
|
6c5e263d7b
|
Temporarily hide the new SFLUSH command by marking it as experimental (#13600)
- Add a new 'EXPERIMENTAL' command flag, which causes the command generator to skip over it and makes the command unavailable for execution (see the sketch below) - Skip experimental tests by default - Move the SFLUSH tests from the old framework to the new one --------- Co-authored-by: YaacovHazan <yaacov.hazan@redislabs.com> |
|
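A minimal sketch of flag-based gating (the flag bit and names here are illustrative, not the ones added by the PR):

```c
#include <stdint.h>

#define CMD_EXPERIMENTAL (1ULL << 40) /* illustrative bit position */

/* The command generator / dispatcher simply skips commands carrying
 * the flag, making them unavailable for execution. */
int commandIsAvailable(uint64_t cmd_flags) {
    return (cmd_flags & CMD_EXPERIMENTAL) == 0;
}
```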
3fcddfb61f
|
testsuite --dump-logs works on servers started before the test (#13500)
So far, ./runtest --dump-logs only worked for servers started within the test proc. Now it also works for servers started outside the test proc scope. The downside is that these logs can be huge if those servers served many tests and not just the failing one, but for some rare failures we'd rather have that than nothing. This feature isn't enabled by default, but it is used by our GH actions. |
|
e750c619b2
|
Fix some test failures caused by key being deleted due to premature expiration (#13453)
1. Fix fuzzer test failure when the key was deleted due to expiration
before sending random traffic for the key.
After HFE, when all fields in a hash are expired, the hash might be
deleted due to expiration.
If the key expired in the middle of the `RESTORE` command and sending random
traffic, the `fuzzer` test would fail in the following code, because 'TYPE
key' returns `none` and then throws an exception, as it cannot be
found in `$commands`
|
|
a331978583
|
Fix external test hang in redis-cli test when run in a certain order (#13423)
When the tests are run against an external server in this order: `--single unit/introspection --single unit/moduleapi/blockonbackground --single integration/redis-cli`, the test would hang when the "ASK redirect test" attempts to create a listening socket (it fails, and then redis-cli itself hangs waiting for a non-responsive socket created by the introspection test). The reasons are: 1. the blockonbackground test includes util.tcl and resets the `::last_port_attempted` variable. 2. the test in introspection didn't close the listening server, so it's still alive. 3. find_available_port doesn't properly detect the busy port; it thinks the port is free even though it's busy (see the sketch below). We fix all 3 of these problems, even though fixing just one would be enough to let the test pass. |
|
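One way to make the free-port check reliable, sketched under the assumption that actually binding the port is acceptable for the test infra (the helper is ours, not the fix's exact code):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Probe a port by binding it: unlike a connect-based heuristic, bind()
 * fails immediately if something is already listening on the port. */
int portIsFree(int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1) return 0;
    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons((unsigned short)port);
    sa.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    int is_free = bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0;
    close(fd);
    return is_free;
}
```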
fa46aa4d85
|
Test infra adjustments for external CI runs (#13421)
- when uploading server logs, make sure they don't overwrite each other. - sort the test units to get consistent order between them (following #13220) - backup and restore the entire server configuration, to protect one unit from config changes another unit performs |
|
69b480cb7a
|
Hide user data from log (#13400)
This PR is based on the commits from PR #11747. In the event of an assertion failure, hide command arguments from the operator. In some cases, private client information can be inadvertently exposed when a redis instance crashes due to an assertion failure. This commit prevents unintentional client info exposure. Operators can still access the hidden data, but they must actively request it. The client info commands themselves remain unchanged. ### Config Add a new config `hide-user-data-from-log` to turn this feature on and off, default off (see the sketch below). --------- Co-authored-by: naglera <anagler123@gmail.com> Co-authored-by: naglera <58042354+naglera@users.noreply.github.com> |
|
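A sketch of the redaction behavior, assuming a boolean mirror of the new hide-user-data-from-log config (function name and placeholder text are illustrative):

```c
#include <stdio.h>

/* In crash reports, print placeholders instead of argument contents
 * when the config is enabled; operators can re-run with it off to
 * actively request the hidden data. */
void logClientArgs(int hide_user_data, int argc, char **argv) {
    for (int i = 0; i < argc; i++)
        printf("argv[%d]: %s\n", i, hide_user_data ? "*redacted*" : argv[i]);
}
```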
a84cc20aef
|
HFE - Fix statistic to count also lazy expired and rename INFO params (#13372)
* INFO command: rename `hashes_with_expiry_fields` to `subexpiry` * INFO command: rename `expired_hash_fields` to `expired_subkeys` * Fix the `expired_subkeys` statistic to also count lazy-expired fields * Remove leftover TODO comments in TCL * Fix a potentially flaky test of RDB load of hash-field-expiration |
|
5a3534f9b5
|
dynamically list test files (#13220)
**Related issue** https://github.com/redis/redis/issues/13219 **Motivation** Currently we have to manually update the all_tests variable when introducing new test files. **Modification** I have modified it to list test files dynamically, but instead of adding all test files, only test files from the following 4 paths are added - unit - unit/type - unit/cluster - integration so that it doesn't deviate too much from what we already do. **Result** - dynamically list test files in the all_tests variable - close issue https://github.com/redis/redis/issues/13219 **Additional information** - Removed the `list-common.tcl` file and added a `generate_largevalue_test_array` proc in `util.tcl`, because `list-common.tcl` is not a test file. - There is an order dependency, so I added code to the "Is a ziplist encoded Hash promoted on big payload?" test that resets hash-max-listpack-value to the default (64). --------- Signed-off-by: jonghoonpark <dev@jonghoonpark.com> Co-authored-by: debing.sun <debing.sun@redis.com> |
|
33fc0fbfae
|
HFE to support AOF and replicas (#13285)
* For the replica's sake, rewrite the commands `H*EXPIRE*`, `HSETF`, `HGETF` to carry absolute unix time in msec (see the sketch below). * On active expiration of a field, propagate HDEL to the replica (`propagateHashFieldDeletion()`). * On lazy expiration, propagate HDEL to the replica (`hashTypeGetValue()` now calls `hashTypeDelete()`; it also takes care to call `propagateHashFieldDeletion()`). * Fix the `H*EXPIRE*` commands such that if they get the flag `LT` and the field doesn't have any expiration, it is considered a valid condition. Note, replicas don't perform any active expiration and should avoid lazy expiration as well; in `hashTypeGetValue()` they don't check expiration (as long as the master didn't request to delete the field, it is valid). TODO: * Attach `dbid` to HASH metadata. See [here](https://github.com/redis/redis/pull/13209#discussion_r1593385850) --------- Co-authored-by: debing.sun <debing.sun@redis.com> |
|
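A minimal sketch of the replica-safe rewrite of relative TTLs to absolute unix time in msec (helper names are ours, not the PR's):

```c
#include <stddef.h>
#include <sys/time.h>

long long nowMs(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

/* Rewriting to an absolute deadline makes AOF replay and replica
 * application deterministic, regardless of when they run. */
long long ttlToAbsoluteMs(long long relative_ttl_ms) {
    return nowMs() + relative_ttl_ms;
}
```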
36c3cec6d1
|
Fix hfe RDB tests by adding FIELDS keyword to hexpire commands (#13277)
FIELDS keyword was added as part of [#13270](https://github.com/redis/redis/pull/13270). It was missing in [#13243](https://github.com/redis/redis/pull/13243) |
|
323be4d699
|
Hfe serialization listpack (#13243)
Add RDB de/serialization for HFE. This PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and `RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data. When the hash RAM encoding is dict, it is saved in the former; when it is listpack, it is saved in the latter. Both formats just add the TTL value for each field after the data that was previously saved, i.e. HASH_METADATA saves the number of entries and, for each entry, key, value and TTL (see the sketch below), whereas listpack is saved as a blob. On read, the usual dict <--> listpack conversion takes place if required. In addition, when reading a hash that was saved as a dict, fields are actively expired if their expiry is due. Currently this also holds for the listpack encoding, but that is supposed to be removed. TODO: remove active expiry on load when loading from the listpack format (unless we decide to keep it). |
|
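A sketch of the RDB_TYPE_HASH_METADATA layout described above: the number of entries, then key, value and TTL per field (`writeLen`/`writeString` are illustrative stand-ins for the real rdb save helpers):

```c
#include <stdint.h>
#include <stdio.h>

typedef struct { const char *key, *value; uint64_t ttl_ms; } hfeField;

static void writeLen(uint64_t v)       { printf("len %llu\n", (unsigned long long)v); }
static void writeString(const char *s) { printf("str %s\n", s); }

void saveHashMetadata(const hfeField *fields, size_t n) {
    writeLen(n);                     /* number of entries */
    for (size_t i = 0; i < n; i++) { /* key, value and TTL per entry */
        writeString(fields[i].key);
        writeString(fields[i].value);
        writeLen(fields[i].ttl_ms);
    }
}
```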
0b34396924
|
Change license from BSD-3 to dual RSALv2+SSPLv1 (#13157)
[Read more about the license change here](https://redis.com/blog/redis-adopts-dual-source-available-licensing/) Live long and prosper 🖖 |
|
3c2ea1ea95
|
Fix watched client test timing issue caused by late close (#13062)
There is a timing issue in the test: the close may arrive late, or in freeClientAsync we free the client asynchronously, which leads to errors in the watching_clients statistics, since we only unwatch all keys when we truly freeClient. Add a wait here to avoid this problem. Also fixed some outdated comments I saw. The test was introduced in #12966. |
|
b3aaa0a136
|
When one shard, sole primary node marks potentially failed replica as FAIL instead of PFAIL (#12824)
Fixes issue where a single primary cannot mark a replica as failed in a single-shard cluster. |
|
8bb9a2895e
|
Address some failures with new tests for improving debug report (#12915)
Fix a daily test failure because alpine doesn't support stack traces and add in an extra assertion related to making sure the stack trace was printed twice. |
|
c85a9b7896
|
Fix delKeysInSlot server events are not executed inside an execution unit (#12745)
This is a follow-up fix to #12733. We need to apply the same changes to delKeysInSlot. Refer to #12733 for more details. This PR contains some other minor cleanups / improvements to the test suite and docs. It uses the postnotifications test module in a cluster mode test which revealed a leak in the test module (fixed). |
|
0270abda82
|
Replace cluster metadata with slot specific dictionaries (#11695)
This is an implementation of https://github.com/redis/redis/issues/10589 that eliminates 16 bytes per entry in cluster mode, which are currently used to create a linked list between entries in the same slot. The main idea is splitting the main dictionary into 16k smaller dictionaries (one per slot), so we can perform all slot-specific operations, such as iteration, without any additional info in the `dictEntry`. For Redis cluster, the expectation is that there will be a larger number of keys, so the fixed overhead of 16k dictionaries will be negligible. The expire dictionary is also split up so that each slot is logically decoupled, so that in subsequent revisions we will be able to atomically flush a slot of data.

## Important changes

* Incremental rehashing - one big change here is that it's not one, but rather up to 16k dictionaries that can be rehashing at the same time. In order to keep track of them, we introduce a separate queue for dictionaries that are rehashing. Also, instead of rehashing a single dictionary, the cron job will now try to rehash as many as it can in 1ms.
* getRandomKey - now needs to not only select a random key from a random bucket, but also needs to select a random dictionary. Fairness is a major concern here, as it's possible that keys can be unevenly distributed across the slots. In order to address this, we introduced a binary index tree. With that data structure we are able to efficiently find a random slot using binary search in O(log^2(slot count)) time.
* Iteration efficiency - when iterating a dictionary with a lot of empty slots, we want to skip them efficiently. We can do this using the same binary index that is used for random key selection; this index allows us to find the slot for a specific key index. For example, if there are 10 keys in slot 0, then we can quickly find the slot that contains the 11th key using binary search on top of the binary index tree.
* scan API - in order to perform a scan across the entire DB, the cursor now needs to save not only the position within the dictionary but also the slot id. In this change we append the slot id to the LSB of the cursor so it can be passed around between client and server (see the sketch below). This has an interesting side effect: you'll now be able to start scanning a specific slot by simply providing the slot id as a cursor value. The plan is to not document this as defined behavior, however. It's also worth noting the SCAN API is now technically incompatible with previous versions, although practically we don't believe it's an issue.
* Checksum calculation optimizations - during command execution, we know that all of the keys are from the same slot (outside of a few notable exceptions such as cross slot scripts and modules). We don't want to compute the checksum multiple times, hence we are relying on the cached slot id in the client during command execution. All operations that access random keys should either pass in the known slot or recompute the slot.
* Slot info in RDB - in order to resize individual dictionaries correctly while loading the RDB, it's not enough to know the total number of keys (of course we could approximate the number of keys per slot, but it won't be precise). To address this issue, we've added additional metadata into the RDB that contains the number of keys in each slot, which can be used as a hint during loading.
* DB size - besides the `DBSIZE` API, we need to know the size of the DB in many places. In order to avoid scanning all dictionaries and summing up their sizes in a loop, we've introduced a new field into `redisDb` that keeps track of `key_count`. This way we can keep the DBSIZE operation O(1). This is also kept for O(1) expires computation as well.

## Performance

This change improves SET performance in cluster mode by ~5%; most of the gains come from not having to maintain linked lists for keys in a slot. Non-cluster mode has the same performance. For workloads that rely on evictions, the performance is similar because of the extra overhead of finding keys to evict. RDB loading performance is slightly reduced, as the slot of each key needs to be computed during the load.

## Interface changes

* Removed `overhead.hashtable.slot-to-keys` from `MEMORY STATS`.
* The Scan API will now require 64 bits to store the cursor, even on 32 bit systems, as the slot information will be stored.
* New RDB version to support the new op code for SLOT information. --------- Co-authored-by: Vitaly Arbuzov <arvit@amazon.com> Co-authored-by: Harkrishn Patro <harkrisp@amazon.com> Co-authored-by: Roshan Khatri <rvkhatri@amazon.com> Co-authored-by: Madelyn Olson <madelyneolson@gmail.com> Co-authored-by: Oran Agra <oran@redislabs.com> |
|
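A sketch of the cursor packing described under the scan API changes (14 bits cover the 16384 slots; the exact packing beyond "slot id in the LSB" is illustrative):

```c
#include <stdint.h>

#define SLOT_BITS 14
#define SLOT_MASK ((1ULL << SLOT_BITS) - 1)

/* The slot id lives in the least significant bits of the 64-bit SCAN
 * cursor, so a bare slot id is itself a valid "start of slot" cursor. */
uint64_t buildCursor(uint64_t dict_pos, uint32_t slot) {
    return (dict_pos << SLOT_BITS) | (slot & SLOT_MASK);
}

uint32_t cursorSlot(uint64_t cursor) { return (uint32_t)(cursor & SLOT_MASK); }
uint64_t cursorPos(uint64_t cursor)  { return cursor >> SLOT_BITS; }
```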
f0c1c730d4
|
test suite: clean server pids after server crashed (#12639)
When a server in the test suite crashes and is restarted by restart_server, we didn't clean its pid from the list. We can see that when the corrupt-dump-fuzzer hangs, it has a long list of servers to clean, but in fact they're all already dead. |
|
2e0f6724e0
|
Stabilization and improvements around aof tests (#12626)
In some tests, the code manually searches for a log message, using tail -1 with a delay of 1 second, which can miss the expected line. Also, because the aof tests use start_server_aof and not start_server, the test name isn't logged into the server log. To fix the above, I made the following changes: - Change start_server_aof to wrap start_server. This adds the created aof server to the servers list, and makes srv() and wait_for_log_messages() available for the tests. - Introduce a new option for start_server, 'wait_ready' - an option to let the caller start the test code without waiting for the server to be ready; useful for tests on a server that is expected to exit on startup. - Create a new start_server_aof_ex. The new proc also accepts options as an argument and makes use of the new 'short_life' option for tests that are expected to exit on startup because of some error in the aof file(s). Because of the above, I had to change many lines and replace every local srv variable (a server config) usage with srv(). |
|
6abb3c4038
|
change log match to line match in tcl sanitizer_errors_from_file. (#12446)
In the tcl foreach loop, the function should compare each line rather than the whole file. |
|
22a29935ff
|
Support TLS service when "tls-cluster" is not enabled and persist both plain and TLS port in nodes.conf (#12233)
Originally, when "tls-cluster" is enabled, `port` is set to the TLS port. In order to support non-TLS clients, `pport` is used to propagate the TCP port across cluster nodes. However, when "tls-cluster" is disabled, `port` is set to the TCP port and `pport` is not used, which means the cluster cannot provide TLS service unless "tls-cluster" is on.
```
typedef struct {
    // ...
    uint16_t port;  /* Latest known clients port (TLS or plain). */
    uint16_t pport; /* Latest known clients plaintext port. Only used
                       if the main clients port is for TLS. */
    // ...
} clusterNode;
```
```
typedef struct {
    // ...
    uint16_t port;  /* TCP base port number. */
    uint16_t pport; /* Sender TCP plaintext port, if base port is TLS */
    // ...
} clusterMsg;
```
This PR renames `port` and `pport` in `clusterNode` to `tcp_port` and `tls_port`, to record both ports no matter whether "tls-cluster" is enabled or disabled. This allows providing TLS service to clients when "tls-cluster" is disabled: when displaying the cluster topology, or giving a `MOVED` error, the server can provide the TLS or TCP port according to the client's connection type, no matter what type of connection the cluster bus is using (see the sketch below). For backwards compatibility, `port` and `pport` in `clusterMsg` are preserved: when "tls-cluster" is enabled, `port` is set to the TLS port and `pport` is set to the TCP port; when "tls-cluster" is disabled, `port` is set to the TCP port and `pport` is set to the TLS port (instead of 0). Also, in the nodes.conf file, a new aux field displaying an extra port is added to complete the persisted info. We may have `tls_port=xxxxx` or `tcp_port=xxxxx` in the aux field, to complete the cluster topology, while the other port is stored in the normal `<ip>:<port>` field. The format is shown below.
```
<node-id> <ip>:<tcp_port>@<cport>,<hostname>,shard-id=...,tls-port=6379 myself,master - 0 0 0 connected 0-1000
```
Or we can switch the positions of the two ports; both can be correctly resolved.
```
<node-id> <ip>:<tls_port>@<cport>,<hostname>,shard-id=...,tcp-port=6379 myself,master - 0 0 0 connected 0-1000
```
|
|
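A sketch of the per-connection port resolution this enables (field names follow the renamed clusterNode fields; the helper itself is illustrative):

```c
#include <stdint.h>

typedef struct { uint16_t tcp_port, tls_port; } nodePorts;

/* Advertise whichever port matches the client's connection type, e.g.
 * in the cluster topology output or in a MOVED redirect. */
uint16_t announcedPort(const nodePorts *n, int client_uses_tls) {
    return client_uses_tls ? n->tls_port : n->tcp_port;
}
```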
73cf0243df
|
Make nodename test more consistent (#12330)
To determine when everything was stable, we couldn't just query the nodenames, since they aren't API-visible by design. Instead, we were using a proxy piece of information, which was bumping the epoch and waiting for everyone to observe that. This works for making sure Node 0 and Node 1 had pinged, and Node 0 and Node 2 had pinged, but it did not guarantee that Node 1 and Node 2 had pinged. Although unlikely, this can cause this failure message. To fix it, I hijacked hostnames and used their propagation validation, since we know that it is stable. I also noticed while stress testing that sometimes the test took almost 4.5 seconds to finish, which is really close to the current 5 second limit of the log check, so I bumped that up as well just to make it a bit more consistent. |
|
070453eef3
|
Cluster human readable nodename feature (#9564)
This PR adds a human readable name to a node in a cluster, visible as part of error logs. This is useful so that admins and operators of Redis clusters have better visibility into failures without having to cross-reference the generated ID with some logical identifier (such as pod ID or EC2 instance ID). This is mentioned in #8948. Specific nodenames can be set by using the variable cluster-announce-human-nodename. The nodename is gossiped using the cluster bus extension in #9530. Co-authored-by: Madelyn Olson <madelyneolson@gmail.com> |
|
f228ec1ea5
|
flushSlavesOutputBuffers should not write to replicas scheduled to drop (#12242)
Writing to replicas scheduled to drop would increase the size of an already large COB (one that has already passed the threshold for disconnection). It could also mean that we attempt to write that data to the socket, and the replica manages to read it, which results in an undesired partial sync (undesired for the test). |
|
42c8c61813
|
Fix some compile warnings and errors when building with gcc-12 or clang (#12035)
This PR fixes the compilation warnings and errors generated by the latest compiler toolchain, and adds a new runner with the latest toolchain for daily CI.

## Fix various compilation warnings and errors

1) jemalloc.c COMPILER: clang-14 with FORTIFY_SOURCE WARNING: ``` src/jemalloc.c:1028:7: warning: suspicious concatenation of string literals in an array initialization; did you mean to separate the elements with a comma? [-Wstring-concatenation] "/etc/malloc.conf", ^ src/jemalloc.c:1027:3: note: place parentheses around the string literal to silence warning "\"name\" of the file referenced by the symbolic link named " ^ ``` REASON: the compiler alerts developers to potential issues with string concatenation that may be missing a comma, just like #9534, which missed a comma. SOLUTION: use `()` to tell the compiler that these two string literals are an intentional continuation (see the sketch below).

2) config.h COMPILER: clang-14 with FORTIFY_SOURCE WARNING: ``` In file included from quicklist.c:36: ./config.h:319:76: warning: attribute declaration must precede definition [-Wignored-attributes] char *strcat(char *restrict dest, const char *restrict src) __attribute__((deprecated("please avoid use of unsafe C functions. prefer use of redis_strlcat instead"))); ``` REASON: enabling _FORTIFY_SOURCE causes the compiler to use `strcpy()` with checks, which results in a deprecated attribute declaration after including <features.h>. SOLUTION: move the deprecated attribute declaration from config.h to fmacro.h, before "#include <features.h>".

3) networking.c COMPILER: GCC-12 WARNING: ``` networking.c: In function ‘addReplyDouble.part.0’: networking.c:876:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=] 876 | dbuf[start] = '$'; | ^ networking.c:868:14: note: at offset -5 into destination object ‘dbuf’ of size 5152 868 | char dbuf[MAX_LONG_DOUBLE_CHARS+32]; | ^ networking.c:876:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=] 876 | dbuf[start] = '$'; | ^ networking.c:868:14: note: at offset -6 into destination object ‘dbuf’ of size 5152 868 | char dbuf[MAX_LONG_DOUBLE_CHARS+32]; ``` REASON: GCC-12 predicts that digits10() may return 9 or 10 through `return 9 + (v >= 1000000000UL)`. SOLUTION: add an assert to let the compiler know the possible length.

4) redis-cli.c & redis-benchmark.c COMPILER: clang-14 with FORTIFY_SOURCE WARNING: ``` redis-benchmark.c:1621:2: warning: embedding a directive within macro arguments has undefined behavior [-Wembedded-directive] #ifdef USE_OPENSSL redis-cli.c:3015:2: warning: embedding a directive within macro arguments has undefined behavior [-Wembedded-directive] #ifdef USE_OPENSSL ``` REASON: when _FORTIFY_SOURCE is enabled, the compiler uses printf() with checks, which is a macro. This may result in the use of directives within the macro arguments, which is undefined behavior. SOLUTION: move the directive-related code out of `printf()`.

5) server.c COMPILER: gcc-13 with FORTIFY_SOURCE WARNING: ``` In function 'lookupCommandLogic', inlined from 'lookupCommandBySdsLogic' at server.c:3139:32: server.c:3102:66: error: '*(robj **)argv' may be used uninitialized [-Werror=maybe-uninitialized] 3102 | struct redisCommand *base_cmd = dictFetchValue(commands, argv[0]->ptr); | ~~~~^~~ ``` REASON: the compiler thinks that the `argc` returned by `sdssplitlen()` could be 0, resulting in an empty array of size 0 being passed to lookupCommandLogic. This should be a false positive; `argc` can't be 0 when the strings are not NULL. SOLUTION: add an assert to let the compiler know that `argc` is positive.

6) sha1.c COMPILER: gcc-12 WARNING: ``` In function ‘SHA1Update’, inlined from ‘SHA1Final’ at sha1.c:195:5: sha1.c:152:13: warning: ‘SHA1Transform’ reading 64 bytes from a region of size 0 [-Wstringop-overread] 152 | SHA1Transform(context->state, &data[i]); | ^ sha1.c:152:13: note: referencing argument 2 of type ‘const unsigned char[64]’ sha1.c: In function ‘SHA1Final’: sha1.c:56:6: note: in a call to function ‘SHA1Transform’ 56 | void SHA1Transform(uint32_t state[5], const unsigned char buffer[64]) | ^ In function ‘SHA1Update’, inlined from ‘SHA1Final’ at sha1.c:198:9: sha1.c:152:13: warning: ‘SHA1Transform’ reading 64 bytes from a region of size 0 [-Wstringop-overread] 152 | SHA1Transform(context->state, &data[i]); | ^ sha1.c:152:13: note: referencing argument 2 of type ‘const unsigned char[64]’ sha1.c: In function ‘SHA1Final’: sha1.c:56:6: note: in a call to function ‘SHA1Transform’ 56 | void SHA1Transform(uint32_t state[5], const unsigned char buffer[64]) ``` REASON: due to the bug [https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80922], when LTO is enabled, gcc-12 does not see `diagnostic ignored "-Wstringop-overread"`, resulting in a warning. SOLUTION: temporarily mark SHA1Update as noinline to avoid compiler warnings due to LTO being enabled, until the above gcc bug is fixed.

7) zmalloc.h COMPILER: GCC-12 WARNING: ``` In function ‘memset’, inlined from ‘moduleCreateContext’ at module.c:877:5, inlined from ‘RM_GetDetachedThreadSafeContext’ at module.c:8410:5: /usr/include/x86_64-linux-gnu/bits/string_fortified.h:59:10: warning: ‘__builtin_memset’ writing 104 bytes into a region of size 0 overflows the destination [-Wstringop-overflow=] 59 | return __builtin___memset_chk (__dest, __ch, __len, ``` REASON: due to the GCC-12 bug [https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96503], GCC-12 cannot see alloc_size, which causes GCC to think that the actual size of memory is 0 when checking with __glibc_objsize0(). SOLUTION: temporarily mark the malloc-related interfaces as `noinline` to avoid compiler warnings due to LTO being enabled, until the above gcc bug is fixed.

## Other changes

1) Fixed: `ps -p [pid]` doesn't output `<defunct>` when using procps 4.x, causing the `replication child dies when parent is killed - diskless` test to fail.
2) Add a new fortify CI with GCC-13 and an ubuntu-lunar docker image. |
|
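A hedged illustration of fix (1) above, using a made-up array rather than the actual jemalloc source: adjacent C string literals merge silently when a comma is missing, and clang's suggested parentheses mark the continuation as intentional.

```c
/* Hypothetical array, for illustration only (not jemalloc's code). */
static const char *help_lines[] = {
    "first entry",
    /* If the comma after "first entry" were missing, the two entries
     * around it would silently merge into one string;
     * -Wstring-concatenation flags exactly that pattern. Parentheses
     * tell the compiler the join below is intentional: */
    ("\"name\" of the file referenced by the symbolic link named "
     "/etc/malloc.conf"),
    "last entry",
};
```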
![]() |
997fa41e99
|
Attempt to solve MacOS CI issues in GH Actions (#12013)
The MacOS CI in github actions often hangs without any logs. GH argues that it's due to resource utilization (running out of disk space, memory, or CPU starvation), and thus the runner is terminated. This PR contains multiple attempts to resolve this:
1. Introduce pause_process instead of a raw SIGSTOP, which waits for the process to actually stop before resuming the test, possibly resolving race conditions in some tests. This was a suspect since one test could result in an infinite loop in that case; in practice this didn't help, but it's still a good idea to keep (a sketch of the idea follows this commit message).
2. Disable the `save` config in many tests that don't need it, specifically ones that use heavy writes and could create large files.
3. Change the `populate` proc to use a short pipeline rather than an infinite one.
4. Use `--clients 1` in the macos CI so that we don't risk running multiple resource-demanding tests in parallel.
5. Allow `--verbose` to be repeated to elevate verbosity and print more info to stdout when a test or a server starts. |
|
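The `pause_process` helper lives in the Tcl test suite; the following is a minimal C rendering of the same idea, assuming the paused process is a child of the caller (an illustration, not the suite's actual code).

```c
/* Send SIGSTOP, then wait until the kernel reports the child as stopped,
 * rather than assuming the signal has already taken effect. */
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>

int pause_child(pid_t pid) {
    if (kill(pid, SIGSTOP) == -1) return -1;
    int status;
    /* WUNTRACED makes waitpid() also return when the child stops. */
    if (waitpid(pid, &status, WUNTRACED) == -1) return -1;
    return WIFSTOPPED(status) ? 0 : -1;
}
```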
![]() |
aa2403ca98
|
Fix redis-cli cluster test timing issue (#11887)
This test fails sporadically:
```
*** [err]: Migrate the last slot away from a node using redis-cli in tests/unit/cluster/cli.tcl
cluster size did not reach a consistent size 4
```
I guess the time (5s) given to wait_for_cluster_size is not enough. The waiting time in our other tests for cluster consistency is usually 50s, so change it to 50s here as well. |
|
![]() |
70b2c4f5fd
|
Fix WAITAOF reply when using last_offset and last_numreplicas (#11917)
WAITAOF was added in #11713 and its return value is an array, but we forgot to handle WAITAOF in last_offset and last_numreplicas, causing WAITAOF to return a WAIT-like reply. Tests were added to validate that case (both WAIT and WAITAOF). This PR also refactored processClientsWaitingReplicas a bit for better maintainability and readability. |
|
![]() |
9344f654c6
|
Implementing the WAITAOF command (issue #10505) (#11713)
Implementing the WAITAOF functionality, which allows the user to block until a specified number of Redises have fsynced all previous write commands to the AOF.

Syntax: `WAITAOF <num_local> <num_replicas> <timeout>`

Response: an array containing two elements: num_local, num_replicas. num_local is always either 0 or 1, representing the local AOF on the master. num_replicas is the number of replicas that acknowledged that the replication offset of the last write was fsynced to the AOF. Returns an error when called on replicas, or when called with a non-zero num_local on a master with AOF disabled; in all other cases the response just contains the number of fsynced copies. (A usage sketch follows this commit message.)

Main changes:
* Added code to keep track of replication offsets that are confirmed to have been fsynced to disk.
* Keep advancing master_repl_offset even when replication is disabled (and there's no replication backlog), but only if there's an AOF enabled. This way we can use this command and its mechanisms even when replication is disabled.
* Extend REPLCONF ACK to `REPLCONF ACK <ofs> FACK <ofs>`; the FACK will be appended only if there's an AOF on the replica, and is ignored by old masters (thus backwards compatible).
* WAIT no longer waits for the replication offset after your last command, but rather the replication offset after your last write (or read command that caused propagation, e.g. lazy expiry).

Unrelated changes:
* The WAIT command respects CLIENT_DENY_BLOCKING (not just CLIENT_MULTI).

Implementation details:
* Add an atomic var named `fsynced_reploff_pending` that's updated (usually by the bio thread) and later copied to the main `fsynced_reploff` variable (only if the AOF base file exists), i.e. during the initial AOF rewrite it will not be used as the fsynced offset, since the AOF base is still missing.
* Replace the close+fsync bio job with a new BIO_CLOSE_AOF (AOF-specific) job that will also update the fsync offset field.
* Handle all AOF jobs (BIO_CLOSE_AOF, BIO_AOF_FSYNC) in the same bio worker thread, to impose ordering on their execution. This solves a race condition where a job could set `fsynced_reploff_pending` to a higher value than another pending fsync job, resulting in indicating an offset for which parts of the data have not yet actually been fsynced. Imposing an ordering on the jobs guarantees that fsync jobs are executed in increasing order of replication offset.
* Drain bio jobs when switching `appendfsync` to "always". This should prevent a write race between updates to `fsynced_reploff_pending` in the main thread (`flushAppendOnlyFile` when set to ALWAYS fsync) and those done in the bio thread.
* Drain the pending fsync when starting over a new AOF, to avoid race conditions with the previous AOF offsets overriding the new one (e.g. after switching to replicate from a new master).
* Make sure to update the fsynced offset at the end of the initial AOF rewrite; a must in case there are no additional writes that trigger a periodic fsync, specifically for a replica that does a full sync.

Limitations: it is possible to write a module or a Lua script that propagates to the AOF but doesn't propagate to the replication stream (see REDISMODULE_ARGV_NO_REPLICAS and luaRedisSetReplCommand). These features are incompatible with the WAITAOF command and can result in two bad cases. The scenario is that the user executes a command that only propagates to the AOF, then immediately issues WAITAOF, and there are no further writes on the replication stream after that:
1. If the last thing that happened on the replication stream is a PING (which increased the replication offset but won't trigger an fsync on the replica), then the client would hang forever (it will wait for a FACK that the replica will never send, since it doesn't trigger any fsyncs).
2. If the last thing that happened is a write command that got propagated properly, then WAITAOF will be released immediately, without waiting for an fsync (since the offset didn't change).

Refactoring:
* Plumbing to allow a bio worker to handle multiple job types. This introduces the infrastructure necessary to allow BIO workers to not have a 1-1 mapping of worker to job type. This allows, in the future, assigning multiple job types to a single worker, either as a performance/resource optimization or as a way of enforcing ordering between specific classes of jobs.

Co-authored-by: Oran Agra <oran@redislabs.com> |
|
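A minimal client-side usage sketch (hiredis), not part of the PR: it assumes a master at 127.0.0.1:6379 with AOF enabled and one AOF-enabled replica, and blocks until both have fsynced the preceding write.

```c
#include <stdio.h>
#include <hiredis/hiredis.h>

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379); /* illustrative host/port */
    if (c == NULL || c->err) return 1;

    freeReplyObject(redisCommand(c, "SET somekey someval"));

    /* WAITAOF <num_local> <num_replicas> <timeout-ms> */
    redisReply *r = redisCommand(c, "WAITAOF 1 1 1000");
    if (r && r->type == REDIS_REPLY_ARRAY && r->elements == 2) {
        /* Reply: [num_local, num_replicas] that fsynced the write. */
        printf("local AOF acks: %lld, replica AOF acks: %lld\n",
               r->element[0]->integer, r->element[1]->integer);
    }
    freeReplyObject(r);
    redisFree(c);
    return 0;
}
```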
![]() |
4ba47d2d21
|
Add reply_schema to command json files (internal for now) (#10273)
Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845.

Since ironing out the details of the reply schema of each and every command can take a long time, we would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch. Meanwhile the changes of this PR are internal: they are part of the repo, but do not affect the produced build.

### Background

In #9656 we added a lot of information about Redis commands, but we are missing information about the replies.

### Motivation

1. Documentation. This is the primary goal.
2. It should be possible, based on the output of COMMAND, to generate client code in typed languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing testsuite, see the "Testing" section).

### Schema

The idea is to supply some sort of schema for the various replies of each command. The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3. Note that the reply structure itself may change depending on the arguments (e.g. `XINFO STREAM`, with and without the `FULL` modifier). We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.

Example for `BZPOPMIN`:
```
"reply_schema": {
    "oneOf": [
        {
            "description": "Timeout reached and no elements were popped.",
            "type": "null"
        },
        {
            "description": "The keyname, popped member, and its score.",
            "type": "array",
            "minItems": 3,
            "maxItems": 3,
            "items": [
                { "description": "Keyname", "type": "string" },
                { "description": "Member", "type": "string" },
                { "description": "Score", "type": "number" }
            ]
        }
    ]
}
```

#### Notes

1. It is ok that some commands' reply structure depends on the arguments; it's the caller's responsibility to know which is the relevant one. This comes after looking at other request-reply systems like OpenAPI, where the reply schema can also be oneOf and the caller is responsible for knowing which schema is the relevant one.
2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply schema for documentation (and possibly to create a fuzzer that validates the replies).
3. For documentation, the description field will include an explanation of the scenario in which the reply is sent, including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one is with `WITHSCORES` and the other is without.
4. For documentation, there will be another optional field "notes" in which we will add a short description of the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat array, for example).

Given the above:
1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/) (given that "description" and "notes" are comprehensive enough).
2. We can generate a client in a strongly typed language (but the return type could be a conceptual `union`, and the caller needs to know which schema is relevant). See the section below for RESP2 support.
3. We can create a fuzzer for RESP3.

### Limitations (because we are using the standard json-schema)

The problem is that Redis' replies are more diverse than what the json format allows. This means that, when we convert the reply to a json (in order to validate the schema against it), we lose information (see the "Testing" section below). The other option would have been to extend the standard json-schema (and json format) to include stuff like sets, bulk-strings, error-strings, etc., but that would mean also extending the schema-validator, and that seemed like too much work, so we decided to compromise.

Examples:
1. We cannot tell the difference between an "array" and a "set".
2. We cannot tell the difference between a simple-string and a bulk-string.
3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems` compares (member,score) tuples and not just the member name.

### Testing

This commit includes some changes inside Redis in order to verify the schemas (existing and future ones) are indeed correct (i.e. describe the actual response of Redis). To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands it executed and their replies. For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with `--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with `--log-req-res --force-resp3`).

You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate `.reqres` files (same dir as the `stdout` files) which contain request-response pairs. These files are later processed by `./utils/req-res-log-validator.py`, which:
1. Goes over the req-res files generated by the redis-servers spawned by the testsuite (see logreqres.c).
2. For each request-response pair, validates the response against the request's reply_schema (obtained from the extended COMMAND DOCS).
3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use the existing redis test suite rather than attempt to write a fuzzer.

#### Notes about RESP2

1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that; it's time to accept RESP3 as the future RESP).
2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3 so that we can validate it, we will need to know how to convert the actual reply to the one expected:
   - numbers and booleans are always strings in RESP2, so the conversion is easy
   - objects (maps) are always a flat array in RESP2
   - others (the nested array in RESP3's `ZRANGE` and others) will need some special per-command handling (so the client will not be totally auto-generated)

Example for ZRANGE:
```
"reply_schema": {
    "anyOf": [
        {
            "description": "A list of member elements",
            "type": "array",
            "uniqueItems": true,
            "items": { "type": "string" }
        },
        {
            "description": "Members and their scores. Returned in case `WITHSCORES` was used.",
            "notes": "In RESP2 this is returned as a flat array",
            "type": "array",
            "uniqueItems": true,
            "items": {
                "type": "array",
                "minItems": 2,
                "maxItems": 2,
                "items": [
                    { "description": "Member", "type": "string" },
                    { "description": "Score", "type": "number" }
                ]
            }
        }
    ]
}
```

### Other changes

1. Some tests that behave differently depending on the RESP are now being tested for both RESPs, regardless of the special log-req-res mode ("Pub/Sub PING" for example).
2. Update the history field of CLIENT LIST.
3. Added basic tests for commands that were not covered at all by the testsuite.

### TODO

- [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g. when `SET` returns NULL the condition is `arguments.get||arguments.condition`, for `OK` the condition is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
- [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
- [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
- [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output of the tests - https://github.com/redis/redis/issues/11897
- [x] (probably a separate PR) add all missing schemas
- [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
- [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to fight with the tcl include mechanism a bit)
- [x] issue: module API - https://github.com/redis/redis/issues/11898
- [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com> Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com> Co-authored-by: Oran Agra <oran@redislabs.com> Co-authored-by: Shaya Potter <shaya@redislabs.com> |
|
![]() |
a7c9e5053a
|
Fix test and improve assert_replication_stream print the whole stream (#11793)
This PR has two parts:
1. Fix a flaky test case: the previous tests set a lot of volatile keys, which injected an unexpected DEL command into the replication stream during the later test, causing it to fail. Add a flushall to avoid it.
2. Improve assert_replication_stream: now it can print the whole stream rather than just the failing line. |
|
![]() |
543e0daa63
|
Make assert_refcount skip the OBJECT REFCOUNT check with needs:debug tag (#11487)
This PR adds `assert_refcount_morethan`, and modifies `assert_refcount` to skip the `OBJECT REFCOUNT` check with the `needs:debug` tag. Use them for all `OBJECT REFCOUNT` calls and also update tests/README to be more specific. The reasoning is that some of these tests could be testing something important, and along the way also add a check for the refcount, and it would be a shame to skip the whole test just because the refcount functionality is missing or blocked. Much like the fact that some redis variants may not support DEBUG, we still want to run the majority of the test for coverage and just skip the digest match. |
|
![]() |
3f8756a06a
|
Fix set with duplicate elements causes sdiff to hang (#11530)
This payload produces a set with duplicate elements (listpack encoding):
```
restore _key 0 "\x14\x25\x25\x00\x00\x00\x0A\x00\x06\x01\x82\x5F\x35\x03\x04\x01\x82\x5F\x31\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x39\x03\x82\x5F\x33\x03\x08\x01\x02\x01\xFF\x0B\x00\x31\xBE\x7D\x41\x01\x03\x5B\xEC"

smembers key
1) "6"
2) "_5"
3) "4"
4) "_1"
5) "_3"  ---> dup
6) "0"
7) "_9"
8) "_3"  ---> dup
9) "8"
10) "2"
```
This kind of set will cause SDIFF to hang: SDIFF generated a broken protocol and left the client hung (ten elements were expected, but only nine were sent due to the duplication). If we set `sanitize-dump-payload` to yes, we will be able to find the duplicate elements and report "ERR Bad data format" (a client-side sketch follows this commit message). Discovered and discussed in #11290.

This PR also improves the prints when the corrupt-dump-fuzzer hangs; it will print the cmds and the payload, for example:
```
Testing integration/corrupt-dump-fuzzer
[TIMEOUT]: clients state report follows.
sock6 => (SPAWNED SERVER) pid:28884
Killing still running Redis server 28884
commands caused test to hang:
SDIFF __key
payload that caused test to hang: "\x14\balabala"
```
Co-authored-by: Oran Agra <oran@redislabs.com> |
|
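A hedged hiredis sketch of the defense described above, assuming a server started with `sanitize-dump-payload yes`; the payload bytes are the ones from the commit message.

```c
/* Sketch: with sanitize-dump-payload enabled, RESTORE rejects the
 * corrupt payload up front instead of creating a set with duplicates. */
#include <stdio.h>
#include <hiredis/hiredis.h>

static const char payload[] =
    "\x14\x25\x25\x00\x00\x00\x0A\x00\x06\x01\x82\x5F\x35\x03\x04\x01"
    "\x82\x5F\x31\x03\x82\x5F\x33\x03\x00\x01\x82\x5F\x39\x03\x82\x5F"
    "\x33\x03\x08\x01\x02\x01\xFF\x0B\x00\x31\xBE\x7D\x41\x01\x03\x5B"
    "\xEC";

int main(void) {
    redisContext *c = redisConnect("127.0.0.1", 6379); /* illustrative */
    if (c == NULL || c->err) return 1;
    /* %b passes binary-safe data as a (pointer, size_t length) pair. */
    redisReply *r = redisCommand(c, "RESTORE %s 0 %b", "_key",
                                 payload, sizeof(payload) - 1);
    if (r && r->type == REDIS_REPLY_ERROR)
        printf("rejected as expected: %s\n", r->str); /* Bad data format */
    freeReplyObject(r);
    redisFree(c);
    return 0;
}
```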
![]() |
4c54528f0f
|
fixes for fork child exit and test: #11463 (#11499)
Fix a few issues with the recent #11463:
* use exitFromChild instead of exit
* the test should ignore defunct processes, since that's what we expect to happen to these child processes when the parent dies
* fix a typo

Co-authored-by: Binbin <binloveplay1314@qq.com> |
|
![]() |
ccaef5c923
|
diskless master, avoid bgsave child hung when fork parent crashes (#11463)
During a diskless sync, if the master main process crashes, the child would have hung in `write`. This fix closes the read fd on the child side, so that if the parent crashes, the child will get a write error and exit (a sketch of the mechanism follows this commit message).

This change also applies to disk-based replication, BGSAVE and AOFRW. In those cases the child wouldn't have hung; it would have just kept running until done, which may be pointless.

There is a certain degree of risk here: in case there's a BGSAVE child that could maybe succeed and the parent dies for some reason, the old code would have let the child keep running, maybe succeed, and avoid data loss. On the other hand, if the parent is restarted, it would have loaded an old rdb file (or none), and then the child could reach the end and rename the rdb file (data conflicting with what the parent has), or race with another BGSAVE child that the new parent started.

Note that I removed a comment saying a write error will be ignored in the child and handled by the parent (this comment was very old and I don't think it is relevant). |
|
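A self-contained sketch of the failure mode and the fix described above (not Redis's actual child code): because the child drops its copy of the pipe's read end, the parent is the only reader left, and a parent crash turns the child's blocked `write` into an immediate EPIPE.

```c
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) == -1) return 1;
    pid_t pid = fork();
    if (pid == -1) return 1;
    if (pid == 0) {               /* child: stands in for the rdb writer */
        signal(SIGPIPE, SIG_IGN); /* get EPIPE from write() instead of dying on SIGPIPE */
        close(fds[0]);            /* the fix: without this, the child itself keeps the
                                   * pipe readable and write() blocks forever */
        char chunk[4096] = {0};
        for (;;) {
            if (write(fds[1], chunk, sizeof(chunk)) == -1) {
                perror("child write"); /* parent is gone: exit instead of hanging */
                _exit(1);
            }
        }
    }
    close(fds[1]);                  /* parent keeps only the read end */
    char buf[4096];
    read(fds[0], buf, sizeof(buf)); /* consume a little... */
    _exit(1);                       /* ...then simulate the parent crashing */
}
```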
![]() |
47c493e070
|
Re-design cluster link send buffer to improve memory management (#11343)
Re-design cluster link send queue to improve memory management |
|
![]() |
663fbd3459
|
Stabilize cluster hostnames tests (#11307)
This PR introduces a couple of changes to improve cluster test stability:
1. Increase the cluster node timeout to 3 seconds, which is similar to the normal cluster tests, but introduce a new mechanism to increase the ping period so that the tests are still fast. This new config is a debug config.
2. Set `cluster-replica-no-failover yes` on a wider array of tests which are sensitive to failovers. Such failovers were occurring on the ARM CI. |
|
![]() |
13d25dd95e
|
Fix crash due to delete entry from compress quicklistNode and wrongly split quicklistNode (#11242)
This PR mainly deals with 2 crashes introduced in #9357, and fixes the QUICKLIST-PACKED-THRESHOLD mess in external test mode.

1. Fix a crash due to deleting an entry from a compressed quicklistNode. When inserting a large element, we need to create a new quicklistNode first, and then delete its previous element; if the node where the deleted element is located is compressed, it will cause a crash. Now add `dont_compress` to quicklistNode; if we want to use a quicklistNode after some operation, we can use this flag like the following:
```c
node->dont_compress = 1; /* Prevent it from being compressed */
some_operation(node); /* This operation might try to compress this node */
some_other_operation(node); /* We can use this node without decompressing it */
node->dont_compress = 0; /* Re-enable compression */
quicklistCompressNode(node);
```
Perhaps in the future we could just disable the current entry from being compressed during the iterator loop, but that would require more work.

2. Fix a crash due to wrongly splitting the quicklist. Before #9357, the offset param of _quicklistSplitNode() could never be negative. Now, when the offset is negative, the split extent will be wrong, as in the following example (see the arithmetic sketch after this commit message):
```c
int orig_start = after ? offset + 1 : 0;
int orig_extent = after ? -1 : offset;
int new_start = after ? 0 : offset;
int new_extent = after ? offset + 1 : -1;
/* offset: -2, after: 1, node->count: 2 */
/* current wrong range: [-1,-1] [0,-1] */
/* correct range: [1,-1] [0, 1] */
```
Because only `_quicklistInsert()` splits the quicklistNode, and only `quicklistInsertAfter()` and `quicklistInsertBefore()` call _quicklistInsert(), only `quicklistReplaceEntry()` and `listTypeInsert()` might trigger this crash. But the iterator of `listTypeInsert()` always goes from head to tail (iter->offset is always positive), so it is not affected. The final conclusion is that this crash only occurs when we insert a large element with a negative index into a list, which affects the `LSET` command and the `RM_ListSet` module API.

3. In external test mode, we need to restore the quicklist packed threshold at the end of the test.

4. Show `node->count` in quicklistRepr().

5. Add a new tcl proc `config_get_set` to support restoring configs in tests. |
|
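To make the range arithmetic above concrete, here is a hedged sketch: it assumes the fix amounts to normalizing a negative offset by node->count before computing the extents (an assumption; the commit does not spell out the exact change), and it reproduces the ranges the message labels as correct.

```c
#include <stdio.h>

int main(void) {
    int node_count = 2, offset = -2, after = 1; /* the commit's example */
    if (offset < 0) offset += node_count;       /* normalize: -2 -> 0 */
    int orig_start  = after ? offset + 1 : 0;   /* 1 */
    int orig_extent = after ? -1 : offset;      /* -1, i.e. to the end */
    int new_start   = after ? 0 : offset;       /* 0 */
    int new_extent  = after ? offset + 1 : -1;  /* 1 */
    /* prints: orig [1,-1] new [0,1], matching the "correct range" */
    printf("orig [%d,%d] new [%d,%d]\n",
           orig_start, orig_extent, new_start, new_extent);
    return 0;
}
```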
![]() |
c0ce97facc
|
fix test Migrate the last slot away from a node using redis-cli (#11221)
When using redis-cli to add a node, there can potentially be a race condition in which all nodes present cluster state ok even though the added node has not yet met all cluster nodes. This adds another utility function to wait until all cluster nodes see the same cluster size. |
|
![]() |
78259826cd
|
Bump codespell from 2.1.0 to 2.2.1 in /.codespell (#11184)
Add a few terms to the white list, and fix a few newly detected typos. |
|
![]() |
4faddf18ca |
Build TLS as a loadable module
* Support BUILD_TLS=module to be loaded as a module via config file or command line, e.g. redis-server --loadmodule redis-tls.so
* Updates to redismodule.h to allow it to be used side by side with server.h by defining REDISMODULE_CORE_MODULE
* Changes to server.h, redismodule.h and module.c to avoid repeated type declarations (gcc 4.8 doesn't like these)
* Add a mechanism for non-ABI-neutral modules (ones that include server.h) to refuse loading if they detect not being built together with redis (release.c)
* Fix the wrong signature of RedisModuleDefragFunc; this could break compilation of a module, but not the ABI
* Move initialization of listeners in server.c to be after loading the modules
* Config TLS after initialization of listeners
* Init cluster after initialization of listeners
* Add the TLS module to CI
* Fix a test suite race condition: now that the listeners are initialized later, it's not sufficient to wait for the PID message in the log; we need to wait for the "Server Initialized" message
* Fix issues with the moduleconfigs test resulting from start_server waiting for "Server Initialized"
* Fix issues with the modules/infra test resulting from an additional module being present

Notes about Sentinel: Sentinel can't really rely on the tls module, since it uses hiredis to initiate connections and depends on OpenSSL (it won't be able to use any other connection modules for that), so it was decided that when TLS is built as a module, sentinel does not support TLS at all. This means that it keeps using redis_tls_ctx and redis_tls_client_ctx directly.

Example code of config in redis-tls.so (may be used in the future):
```c
RedisModuleString *tls_cfg = NULL;

void tlsInfo(RedisModuleInfoCtx *ctx, int for_crash_report) {
    UNUSED(for_crash_report);
    RedisModule_InfoAddSection(ctx, "");
    RedisModule_InfoAddFieldLongLong(ctx, "var", 42);
}

int tlsCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    return RedisModule_ReplyWithString(ctx, argv[1]);
}

RedisModuleString *getStringConfigCommand(const char *name, void *privdata) {
    REDISMODULE_NOT_USED(name);
    REDISMODULE_NOT_USED(privdata);
    return tls_cfg;
}

int setStringConfigCommand(const char *name, RedisModuleString *new, void *privdata, RedisModuleString **err) {
    REDISMODULE_NOT_USED(name);
    REDISMODULE_NOT_USED(err);
    REDISMODULE_NOT_USED(privdata);
    if (tls_cfg) RedisModule_FreeString(NULL, tls_cfg);
    RedisModule_RetainString(NULL, new);
    tls_cfg = new;
    return REDISMODULE_OK;
}

int RedisModule_OnLoad(void *ctx, RedisModuleString **argv, int argc) {
    ....
    if (RedisModule_CreateCommand(ctx,"tls",tlsCommand,"",0,0,0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    if (RedisModule_RegisterStringConfig(ctx, "cfg", "", REDISMODULE_CONFIG_DEFAULT,
        getStringConfigCommand, setStringConfigCommand, NULL, NULL) == REDISMODULE_ERR)
        return REDISMODULE_ERR;

    if (RedisModule_LoadConfigs(ctx) == REDISMODULE_ERR) {
        if (tls_cfg) {
            RedisModule_FreeString(ctx, tls_cfg);
            tls_cfg = NULL;
        }
        return REDISMODULE_ERR;
    }
    ...
}
```
Co-authored-by: zhenwei pi <pizhenwei@bytedance.com> Signed-off-by: zhenwei pi <pizhenwei@bytedance.com> |
|
![]() |
8a4e3bcd8d
|
Cluster test improvements (#10920)
* Restructured testing to allow running cluster tests easily as part of the normal testing |