mirror of https://mirror.osredm.com/root/redis.git
22 Commits
64a40b20d9
Async IO Threads (#13695)
## Introduction

Redis introduced IO threads in 6.0, allowing IO threads to handle client request reading, command parsing and reply writing, thereby improving performance. The current IO thread implementation has a few drawbacks:

- The main thread is blocked during IO thread read/write operations and must wait for all IO threads to complete their current tasks before it can continue execution. In other words, the entire process is synchronous. This prevents the efficient utilization of multi-core CPUs for parallel processing.
- When the number of clients and requests increases moderately, all IO threads reach full CPU utilization due to the busy-wait mechanism they use. This makes it challenging to determine which part of Redis has reached its bottleneck.
- When IO threads are enabled with TLS and io-threads-do-reads, a disconnection of a connection with pending data may result in it being assigned to multiple IO threads simultaneously. This can cause race conditions and trigger assertion failures. Related issue: redis#12540

Therefore, we designed an asynchronous IO threads solution. The IO threads adopt an event-driven model: the main thread is dedicated to command processing, while the IO threads handle client read and write operations in parallel.

## Implementation

### Overall

As before, we did not change the fact that all client commands must be executed on the main thread, because Redis was originally designed to be single-threaded, and processing commands in a multi-threaded manner would inevitably introduce numerous race and synchronization issues. But now each IO thread has an independent event loop, so IO threads can use a multiplexing approach to handle client read and write operations, eliminating the CPU overhead caused by busy-waiting.

The execution process can be briefly described as follows: the main thread assigns clients to IO threads after accepting connections; IO threads notify the main thread when clients finish reading and parsing queries; the main thread processes queries from IO threads and generates replies; IO threads handle writing replies to clients after receiving the clients list from the main thread, and then continue to handle client read and write events.

### Each IO thread has an independent event loop

We now assign each IO thread its own event loop. This approach eliminates the need for the main thread to perform the costly `epoll_wait` operation for handling connections (except for specific ones). Instead, the main thread processes requests from the IO threads and hands them back once completed, fully offloading read and write events to the IO threads.

Additionally, all TLS operations, including handling pending data, have been moved entirely to the IO threads. This resolves the issue where io-threads-do-reads could not be used with TLS.

### Event-notified client queue

To facilitate communication between the IO threads and the main thread, we designed an event-notified client queue. Each IO thread and the main thread have two such queues to store clients waiting to be processed. These queues are also integrated with the event loop to enable handling. We use a pthread mutex to ensure the safety of queue operations, as well as data visibility and ordering; race conditions are minimized, since each IO thread and the main thread operate on independent queues, avoiding thread suspension due to lock contention. We also implemented an event notifier based on `eventfd` or `pipe` to support event-driven handling.
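A minimal sketch of such a queue (illustrative only; names and layout are not the actual Redis implementation): a mutex-protected array paired with an eventfd that the consumer's event loop polls, so handing over a client is just lock, push, unlock, and a wake-up write.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/eventfd.h>
#include <unistd.h>

struct client; /* opaque here */

typedef struct eventNotifiedQueue {
    pthread_mutex_t lock;
    struct client **items;
    size_t len, cap;
    int efd; /* registered for readable events in the consumer's event loop */
} eventNotifiedQueue;

void queuePush(eventNotifiedQueue *q, struct client *c) {
    pthread_mutex_lock(&q->lock);
    if (q->len == q->cap) {
        q->cap = q->cap ? q->cap * 2 : 16;
        q->items = realloc(q->items, q->cap * sizeof(*q->items));
    }
    q->items[q->len++] = c;
    pthread_mutex_unlock(&q->lock);
    uint64_t one = 1;
    ssize_t n = write(q->efd, &one, sizeof(one)); /* wake the consumer */
    (void)n;
}
```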
### Thread safety

Since the main thread and IO threads can execute in parallel, we must handle data race issues carefully.

**client->flags**

The primary tasks of IO threads are reading and writing, i.e. `readQueryFromClient` and `writeToClient`. However, IO threads and the main thread may concurrently modify or access `client->flags`, leading to potential race conditions. To address this, we introduced an io-flags variable to record operations performed by IO threads, thereby avoiding race conditions on `client->flags`.

**Pause IO thread**

In the main thread, we may want to operate on data owned by an IO thread: uninstall an event handler, access or operate on the query/output buffer, or resize the event loop. We need a clean and safe context to do that. We pause the IO thread in `IOThreadBeforeSleep`, do the job, and then resume it. To avoid thread suspension, we use busy waiting to confirm the target status, and we use atomic variables to ensure memory visibility and ordering. We introduce the following functions to pause/resume IO threads:

```
pauseIOThread, resumeIOThread
pauseAllIOThreads, resumeAllIOThreads
pauseIOThreadsRange, resumeIOThreadsRange
```

Testing has shown that `pauseIOThread` is highly efficient, allowing the main thread to execute nearly 200,000 operations per second during stress tests. Similarly, `pauseAllIOThreads` with 8 IO threads can handle up to nearly 56,000 operations per second. But operations performed between pausing and resuming IO threads must be quick; otherwise, they could cause the IO threads to reach full CPU utilization.
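A hedged sketch of the pause/resume handshake described above (the state values, field names, and function bodies are simplified, not the exact Redis symbols):

```c
#include <stdatomic.h>

typedef enum { IO_RUNNING, IO_PAUSE_REQUESTED, IO_PAUSED } ioThreadState;
typedef struct ioThread { _Atomic int state; } ioThread;

/* Main thread: request a pause and busy-wait (no suspension) for the ack. */
void pauseIOThreadSketch(ioThread *t) {
    atomic_store(&t->state, IO_PAUSE_REQUESTED);
    while (atomic_load(&t->state) != IO_PAUSED);
}

void resumeIOThreadSketch(ioThread *t) {
    atomic_store(&t->state, IO_RUNNING);
}

/* IO thread: acknowledge from its before-sleep hook and spin until resumed. */
void ioThreadAckPause(ioThread *t) {
    int expected = IO_PAUSE_REQUESTED;
    if (atomic_compare_exchange_strong(&t->state, &expected, IO_PAUSED)) {
        while (atomic_load(&t->state) == IO_PAUSED);
    }
}
```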
**freeClient and freeClientAsync**

The main thread may need to terminate a client currently running on an IO thread, for example, due to ACL rule changes, reaching the output buffer limit, or evicting a client. In such cases, we need to pause the IO thread to safely operate on the client.

**maxclients and maxmemory-clients updating**

When adjusting `maxclients`, we need to resize the event loop for all IO threads. Similarly, when modifying `maxmemory-clients`, we need to traverse all clients to calculate their memory usage. To ensure safe operations, we pause all IO threads during these adjustments.

**Client info reading**

The main thread may need to read a client's fields to generate a descriptive string, such as for the `CLIENT LIST` command or logging purposes. In such cases, we need to pause the IO thread handling that client. If information for all clients needs to be displayed, all IO threads must be paused.

**Tracking redirect**

Redis supports the tracking feature and can even send invalidation messages to a connection with a specified ID. But the target client may be running on an IO thread; directly manipulating the client's output buffer is not thread-safe, and the IO thread may not be aware that the client requires a response. In such cases, we pause the IO thread handling the client, modify the output buffer, and install a write event handler to ensure proper handling.

**clientsCron**

In the `clientsCron` function, the main thread needs to traverse all clients to perform operations such as timeout checks, verifying whether they have reached the soft output buffer limit, resizing the output/query buffer, or updating memory usage. To safely operate on a client, the IO thread handling that client must be paused. Pausing the IO thread for each client individually would be very inefficient; conversely, pausing all IO threads simultaneously would be costly, especially when there are many IO threads, as clientsCron is invoked relatively frequently. To address this, we adopted a batched approach: at most 8 IO threads are paused at a time, and the operations mentioned above are performed only on clients running in the paused IO threads, significantly reducing overhead while maintaining safety.

### Observability

In the current design, the main thread always assigns clients to the IO thread with the fewest clients. To clearly observe the number of clients handled by each IO thread, we added a new section to the INFO output. The `INFO THREADS` section shows the client count for each IO thread:

```
# Threads
io_thread_0:clients=0
io_thread_1:clients=2
io_thread_2:clients=2
```

Additionally, in the `CLIENT LIST` output, we added a field to indicate the thread to which each client is assigned:

`id=244 addr=127.0.0.1:41870 laddr=127.0.0.1:6379 ... resp=2 lib-name= lib-ver= io-thread=1`

## Trade-off

### Special Clients

For certain special types of clients, keeping them on IO threads would cause severe race issues that are difficult to resolve. Therefore, we chose not to offload these clients to the IO threads.

For replica, monitor, subscribe, and tracking clients, the main thread may directly write them a reply when conditions are met. The race issues this creates are difficult to resolve, so these clients are processed in the main thread. This includes the Lua debug clients as well, since we may operate on their connections directly.

For blocking clients, after the IO thread reads and parses a command and hands it over to the main thread, if the client is identified as a blocking type, it remains in the main thread. Once the blocking operation completes and the reply is generated, the client is transferred back to the IO thread to send the reply and wait for event triggers.

### Clients Eviction

To support client eviction, it is necessary to update each client's memory usage promptly during operations such as read, write, or command execution. However, when a client operates on an IO thread, it is not feasible to update the memory usage immediately due to the risk of data races. As a result, memory usage can only be updated either in the main thread while processing commands or periodically in `clientsCron`. The downside of this approach is that updates might lag by up to one second, which could reduce the precision of memory management for eviction.

To avoid incorrectly evicting clients, we adopted a best-effort compensation solution: when we decide to evict a client, we update its memory usage once more before evicting it. If the memory used by the client has not decreased, or its memory usage bucket has not changed, we evict it; otherwise, we do not. However, we have not completely solved this problem: due to the delay in memory usage updates, we may still make incorrect decisions about which clients need to be evicted.
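A sketch of that compensation check, with hypothetical helper names (this is not the actual patch, just the decision rule described above):

```c
#include <stddef.h>

struct client;
size_t clientLastMemoryUsage(struct client *c);      /* hypothetical: stale value */
size_t recomputeClientMemoryUsage(struct client *c); /* hypothetical: fresh value */
int memUsageBucket(size_t usage);                    /* hypothetical bucket index */

int shouldStillEvict(struct client *c) {
    size_t before = clientLastMemoryUsage(c);
    size_t after = recomputeClientMemoryUsage(c);
    /* Evict only if usage did not decrease, or it stayed in the same
     * memory-usage bucket; otherwise spare the client this round. */
    return after >= before || memUsageBucket(after) == memUsageBucket(before);
}
```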
### Defragment

In the majority of cases, we do NOT use the data from argv directly in the db:

1. Key names: we store a copy that we allocate in the main thread, see `sdsdup()` in `dbAdd()`.
2. Hash keys and values: we store the key as an hfield and the value as an sds, see `hfieldNew()` and `sdsdup()` in `hashTypeSet()`.
3. Other datatypes: they don't even use SDS, so there are no reference issues.

But in some cases the data from argv may be retained by the main thread. As a result, during fragmentation cleanup, we need to move allocations from the IO thread's arena to the main thread's arena. We always allocate new memory in the main thread's arena, but the memory released by IO threads may not yet have been reclaimed. This ultimately causes the fragmentation rate to be higher than when creating and allocating entirely within a single thread. The following cases lead to memory allocated by an IO thread being kept by the main thread:

1. String-related commands: `append`, `getset`, `mset` and `set`. If `tryObjectEncoding()` does not change argv, we keep it directly in the main thread, see the code in `tryObjectEncoding()` (specifically `trimStringObjectIfNeeded()`).
2. Blocking-related commands: the key names are kept in `c->db->blocking_keys`.
3. The watch command: the key names are kept in `c->db->watched_keys`.
4. The [s]subscribe commands: channel names are kept in `serverPubSubChannels`.
5. The script load command: the script is kept in `server.lua_scripts`.
6. Some module APIs: `RM_RetainString`, `RM_HoldString`.

Those issues will be handled in other PRs.

## Testing

### Functional Testing

The commit with IO threads enabled has passed all TCL tests, with a few changes:

**Client query buffer**: in the original code, when using a reusable query buffer, ownership of the query buffer is released after the command is processed. However, with IO threads enabled, the client transitions from an IO thread to the main thread for processing, so the ownership release occurs earlier than the command execution. As a result, when IO threads are enabled, the client's information never indicates that a shared query buffer is in use. We therefore skip the corresponding query buffer tests in this case.

**Defragment**: added a new defragmentation test to verify the effect of IO threads on defragmentation.

**Command delay**: for deferred clients in TCL tests, clients being assigned to different threads for execution may introduce delays. To address this, we introduced conditional waiting: the process proceeds to the next step only when the `client list` contains the corresponding commands.

### Sanitizer Testing

The commit passed all TCL tests and reported no errors when compiled with the `-fsanitize=thread` and `-fsanitize=address` options enabled, with the following modification: we suppressed the sanitizer warnings for clients with watched keys when updating `client->flags`. IO threads read `client->flags` but never modify it or read the `CLIENT_DIRTY_CAS` bit, and only the main thread modifies this bit, so there is no actual data race.

## Others

### IO thread number

In the new multi-threaded design, the main thread is primarily focused on command processing to improve performance. Typically, the main thread does not handle regular client I/O operations but is responsible for clients such as replication and tracking clients. To avoid breaking changes, we still consider the main thread as the first IO thread.

When the io-threads configuration is set to a low value (e.g., 2), performance does not show a significant improvement compared to a single-threaded setup for simple commands (such as SET or GET), as the main thread does not consume much CPU for these simple operations. This results in underutilized multi-core capacity. However, for more complex commands, having a low number of IO threads may still be beneficial. Therefore, it's important to adjust `io-threads` based on your own performance tests.
Additionally, you can clearly monitor the CPU utilization of the main thread and the IO threads using `top -H -p $redis_pid`. This allows you to easily identify where the bottleneck is: if an IO thread is the bottleneck, increasing `io-threads` will improve performance; if the main thread is the bottleneck, overall performance can only be scaled by increasing the number of shards or replicas.

---------

Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: oranagra <oran@redislabs.com>
4e3be944fc
Fix timing issue in blockedclient test (#13071)
We can see that the elapsed time here happens to equal busy_time_limit, causing the test to fail:

```
[err]: RM_Call from blocked client in tests/unit/moduleapi/blockedclient.tcl
Expected '50' to be more than '50' (context: type eval line 26 cmd {assert_morethan [expr [clock clicks -milliseconds]-$start] $busy_time_limit} proc ::test)
```

It is reasonable for the two to be equal, so equality is now accepted here. Note that the earlier `Busy module command` test already used assert_morethan_equal, so this spot was simply missed at the time.
6016973ac0
Fix module assertion crash when timer and timeout are unlocked in the same event loop (#13015)
When we use a timer to unblock a client in a module, if the timer period and the block timeout are very close, they unblock the client in the same event loop, which triggers the assertion. The reason is that moduleBlockedClientTimedOut protects against re-processing, so we don't actually call updateStatsOnUnblock (see #12817), and therefore cannot reset c->duration.

The root cause is that unblockClientOnTimeout() didn't realize that bc had already been unblocked. We add a function to the module API to determine whether bc is still blocked, and use it in unblockClientOnTimeout() to exit early. This is the stack:

```
beforeSleep
blockedBeforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
unblockClient
resetClient -- assertion, crash the server
'c->duration == 0' is not true
```
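A simplified sketch of the added guard (the type layout and predicate name are approximations, not the actual patch):

```c
/* Stand-in declarations for illustration only. */
typedef struct client { int btype; } client;
enum { BLOCKED_MODULE = 1 };
int moduleClientIsBlocked(client *c); /* hypothetical predicate added by the fix */
void unblockClient(client *c);

void unblockClientOnTimeout(client *c) {
    /* If the timer callback already unblocked this module client in the same
     * event loop iteration, there is nothing left to do here. */
    if (c->btype == BLOCKED_MODULE && !moduleClientIsBlocked(c)) return;
    unblockClient(c);
}
```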
74a6e48a3d
Fix module unblock crash due to no timeout_callback (#13017)
The block timeout is passed in the test case, but we do not pass in the timeout_callback, and the server crashes when unblocking. The fix: in moduleBlockedClientTimedOut we now check timeout_callback. This is the stack:

```
beforeSleep
blockedBeforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
replyToBlockedClientTimedOut
moduleBlockedClientTimedOut -- timeout_callback is NULL, invalidFunctionWasCalled
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```
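A simplified sketch of the added check (the real structures and callback signature differ):

```c
typedef struct RedisModuleBlockedClient {
    int (*timeout_callback)(void *ctx, void **argv, int argc); /* simplified */
} RedisModuleBlockedClient;

void moduleBlockedClientTimedOutSketch(RedisModuleBlockedClient *bc,
                                       void **argv, int argc) {
    /* Without this guard, a NULL callback ends up being invoked and the
     * server crashes (invalidFunctionWasCalled). */
    if (bc->timeout_callback == NULL) return;
    void *ctx = NULL; /* context setup elided */
    bc->timeout_callback(ctx, argv, argc);
}
```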
3230199920
Modules: Unblock from within a timer coverage (#12337)
Apart from adding the missing coverage, this PR also adds `blockedBeforeSleep`, which gathers all block-related functions from `beforeSleep`.

The order inside `blockedBeforeSleep` is different: now `handleClientsBlockedOnKeys` (which may unblock clients) is called before `processUnblockedClients` (which handles unblocked clients). It makes sense to have this order. There are no visible effects of the previous wrong ordering, except that some cleanups of a now-unblocked client would have happened in the next `beforeSleep` (they now happen in the current one).

The reason we even got into this is that it triggers an assertion in logresreq.c (breaking the assumption that `unblockClient` is called **before** actually flushing the reply to the socket): `handleClientsBlockedOnKeys` is called, then it calls `moduleUnblockClientOnKey`, which calls `moduleUnblockClient`, which adds the client to `moduleUnblockedClients`; back in `beforeSleep`, we call `handleClientsWithPendingWritesUsingThreads`, which writes the data of buf to the client, so `client->bufpos` becomes 0. On the next `beforeSleep`, we call `moduleHandleBlockedClients`, which calls `unblockClient`, which calls `reqresAppendResponse`, triggering the assert (because `bufpos` is 0) - see https://github.com/redis/redis/pull/12301#discussion_r1226386716
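A sketch of the gathered function; only the relative order of `handleClientsBlockedOnKeys` and `processUnblockedClients` is stated above, so the position of the timeout step here is an assumption:

```c
void handleClientsBlockedOnKeys(void);
void handleBlockedClientsTimeout(void);
void processUnblockedClients(void);

void blockedBeforeSleep(void) {
    handleClientsBlockedOnKeys();  /* may unblock clients waiting on keys */
    handleBlockedClientsTimeout(); /* unblock clients whose timeout expired */
    processUnblockedClients();     /* handles everything unblocked above */
}
```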
6117f28822
Fix WAIT for clients being blocked in a module command (#12220)
So far, clients being blocked and unblocked by a module command would not update the c->woff variable, so WAIT was ineffective and got released without waiting for the command's actions to propagate. This seems to have existed since forever, but not for RM_BlockClientOnKeys.

It is problematic, though, to know whether the module did or didn't propagate anything in that command, so for now, instead of adding an API, we'll just update woff to the latest offset when unblocking. This may cause the client to wait excessively, but that's not that bad.
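A sketch of that workaround (simplified types; in the real code the field is updated inside the unblocking path):

```c
typedef struct client { long long woff; } client; /* simplified */
extern long long master_repl_offset;              /* simplified */

void updateWoffOnUnblock(client *c) {
    /* Advance the client's write offset to the current replication offset so
     * a following WAIT cannot return before the command's effects propagate.
     * This may make WAIT wait a bit longer than strictly necessary. */
    c->woff = master_repl_offset;
}
```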
4ba47d2d21
Add reply_schema to command json files (internal for now) (#10273)
Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845

Since ironing out the details of the reply schema of each and every command can take a long time, we would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch. Meanwhile the changes of this PR are internal: they are part of the repo, but do not affect the produced build.

### Background

In #9656 we added a lot of information about Redis commands, but we are missing information about the replies.

### Motivation

1. Documentation. This is the primary goal.
2. It should be possible, based on the output of COMMAND, to generate client code in typed languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing testsuite, see the "Testing" section).

### Schema

The idea is to supply some sort of schema for the various replies of each command. The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3. Note that the reply structure itself may change depending on the arguments (e.g. `XINFO STREAM`, with and without the `FULL` modifier).

We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema. Example for `BZPOPMIN`:

```
"reply_schema": {
    "oneOf": [
        {
            "description": "Timeout reached and no elements were popped.",
            "type": "null"
        },
        {
            "description": "The keyname, popped member, and its score.",
            "type": "array",
            "minItems": 3,
            "maxItems": 3,
            "items": [
                { "description": "Keyname", "type": "string" },
                { "description": "Member", "type": "string" },
                { "description": "Score", "type": "number" }
            ]
        }
    ]
}
```

#### Notes

1. It is ok that some commands' reply structure depends on the arguments; it's the caller's responsibility to know which is the relevant one. This comes after looking at other request-reply systems like OpenAPI, where the reply schema can also be oneOf and the caller is responsible for knowing which schema is the relevant one.
2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply schema for documentation (and possibly to create a fuzzer that validates the replies).
3. For documentation, the description field will include an explanation of the scenario in which the reply is sent, including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one is with `WITHSCORES` and the other is without.
4. For documentation, there will be another optional field, "notes", in which we will add a short description of the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat array, for example).

Given the above:

1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/) (given that "description" and "notes" are comprehensive enough).
2. We can generate a client in a strongly typed language (but the return type could be a conceptual `union`, and the caller needs to know which schema is relevant). See the section below for RESP2 support.
3. We can create a fuzzer for RESP3.

### Limitations (because we are using the standard json-schema)

The problem is that Redis' replies are more diverse than what the json format allows. This means that, when we convert the reply to json (in order to validate the schema against it), we lose information (see the "Testing" section below).
The other option would have been to extend the standard json-schema (and json format) to include things like sets, bulk-strings, error-strings, etc., but that would mean also extending the schema-validator, and that seemed like too much work, so we decided to compromise.

Examples:

1. We cannot tell the difference between an "array" and a "set".
2. We cannot tell the difference between a simple-string and a bulk-string.
3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems` compares (member,score) tuples and not just the member name.

### Testing

This commit includes some changes inside Redis in order to verify the schemas (existing and future ones) are indeed correct (i.e. describe the actual response of Redis). To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands it executed and their replies. For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with `--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with `--log-req-res --force-resp3`).

You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate `.reqres` files (same dir as the `stdout` files) which contain request-response pairs. These files are later processed by `./utils/req-res-log-validator.py`, which:

1. Goes over the req-res files generated by the redis-servers spawned by the testsuite (see logreqres.c).
2. For each request-response pair, validates the response against the request's reply_schema (obtained from the extended COMMAND DOCS).

In order to get good coverage of the Redis commands, and all their different replies, we chose to use the existing redis test suite, rather than attempt to write a fuzzer.

#### Notes about RESP2

1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that; it's time to accept RESP3 as the future RESP).
2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3 so that we can validate it, we need to know how to convert the actual reply to the one expected:
   - numbers and booleans are always strings in RESP2, so the conversion is easy;
   - objects (maps) are always a flat array in RESP2;
   - others (the nested array in RESP3's `ZRANGE` and others) need some special per-command handling (so the client will not be totally auto-generated).

Example for ZRANGE:

```
"reply_schema": {
    "anyOf": [
        {
            "description": "A list of member elements",
            "type": "array",
            "uniqueItems": true,
            "items": { "type": "string" }
        },
        {
            "description": "Members and their scores. Returned in case `WITHSCORES` was used.",
            "notes": "In RESP2 this is returned as a flat array",
            "type": "array",
            "uniqueItems": true,
            "items": {
                "type": "array",
                "minItems": 2,
                "maxItems": 2,
                "items": [
                    { "description": "Member", "type": "string" },
                    { "description": "Score", "type": "number" }
                ]
            }
        }
    ]
}
```

### Other changes

1. Some tests that behave differently depending on the RESP are now being tested for both RESPs, regardless of the special log-req-res mode ("Pub/Sub PING" for example).
2. Updated the history field of CLIENT LIST.
3. Added basic tests for commands that were not covered at all by the testsuite.

### TODO

- [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args, e.g.
when `SET` returns NULL the condition is `arguments.get||arguments.condition`, for `OK` the condition is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
- [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
- [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
- [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output of the tests - https://github.com/redis/redis/issues/11897
- [x] (probably a separate PR) add all missing schemas
- [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
- [x] move the response transformers to their own file (run regular, cluster, and sentinel tests - need to fight with the tcl including mechanism a bit)
- [x] issue: module API - https://github.com/redis/redis/issues/11898
- [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899

Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Shaya Potter <shaya@redislabs.com>
61baabd8d5
Fix crash on RM_Call with script mode. (#10886)
The PR fixes 2 issues:

### RM_Call crash on script mode

`RM_Call` can potentially be called from a background thread where `server.current_client` is not set. In such a case we get a crash on a `NULL` dereference. The fix is to first check whether `server.current_client` is `NULL`; if it is, we should verify disk errors and the read-only replica restriction just as we do for any normal client (not a master nor AOF).

### RM_Call blocks OOM commands when not needed

Again, `RM_Call` can be executed on a background thread using a `ThreadSafeCtx`. In such a case `server.pre_command_oom_state` can be irrelevant and should not be considered when checking the OOM state; this caused OOM commands to be blocked when not actually needed. In such a case, check the actual used memory (and not the cached value). Notice that in order to know whether the cached value can be used, we check that the ctx that was used on the `RM_Call` is a ThreadSafeCtx. A module writer could potentially abuse the API and use a ThreadSafeCtx on the main thread; we consider this an API misuse.
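A simplified sketch of the first fix (stand-in declarations; `mustObeyClient` here just illustrates the masters/AOF exemption described above):

```c
struct client;
extern struct client *current_client; /* stand-in for server.current_client */
int mustObeyClient(struct client *c); /* masters and AOF clients skip checks */

int shouldRunScriptModeChecks(void) {
    /* NULL current_client (RM_Call from a background thread): run the disk
     * error and read-only replica checks exactly as for a normal client,
     * instead of dereferencing the NULL pointer. */
    if (current_client == NULL) return 1;
    return !mustObeyClient(current_client);
}
```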
df787764e3
Fix regression not aborting transaction on error, and re-edit some error responses (#10612)
1. Disk error and slave count checks didn't flag the transactions or count correctly in command stats (regression from #10372, 7.0 RC3).
2. RM_Call now replies the same way Redis does, in case of a non-existing command or an arity error.
3. RM_WrongArity considers the full command name.
4. Use a lowercase 'u' in "unknown subcommand" (to align with "unknown command").

Followup work of #10127
7d1ad6ca96
Fix RM_Yield bug processing future commands of the current client. (#10573)
RM_Yield was missing a call to protectClient to prevent Redis from processing future commands of the yielding client. Adding tests that fail without this fix.

This would have been complicated to solve, since nested calls to RM_Call used to replace the current_client variable with the module temp client. It looks like it's no longer necessary to do that, since it was added back in #9890 to solve two issues, both already gone:

1. A call to CONFIG SET maxmemory could trigger a module hook calling RM_Call. Although this specific issue is gone, arguably other hooks, like keyspace notifications, can do the same.
2. An assertion in lookupKey that checks the current command of the current client, introduced in #9572 and removed in #10248.
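A sketch of where the missing call lands (protectClient, unprotectClient, and processEventsWhileBlocked are existing Redis helpers, but the exact placement of the pair here is simplified):

```c
struct client;
void protectClient(struct client *c);   /* stop reading/processing its input */
void unprotectClient(struct client *c);
void processEventsWhileBlocked(void);

void yieldSketch(struct client *yielding) {
    protectClient(yielding);
    processEventsWhileBlocked(); /* serve other clients / background tasks */
    unprotectClient(yielding);
}
```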
451531f1c8
Fix RM_Yield bug (#10548)
The bug appeared when using REDISMODULE_YIELD_FLAG_CLIENTS: in that case we would only set the CLIENTS type flag in server.busy_module_yield_flags and then clear that flag when exiting RM_Yield, so we would never call unblockPostponedClients when the context is destroyed. This didn't really have any actual implication, which is why the tests couldn't (and still can't) detect it: the bug only happens when using CLIENTS, but in that case we won't have any clients to un-postpone, i.e. clients will get rejected with a BUSY error rather than being postponed.

Unrelated:
* Adding tests for nested contexts, just in case.
* Avoid nested RM_Yield calls.
f3855a0930
Add new RM_Call flags for script mode, no writes, and error replies. (#10372)
The PR extends RM_Call with 3 new capabilities, using new flags that are given to RM_Call as part of the `fmt` argument. It aims to assist modules that get a list of commands to be executed from the user (not hard-coded as part of the module logic); think of a module that implements a new scripting language...

* `S` - Run the command in script mode. This means it will raise an error if a command that is not allowed inside a script (flagged with the `deny-script` flag) is invoked (like SHUTDOWN). In addition, in script mode, write commands are not allowed if there are not enough good replicas (as configured with `min-replicas-to-write`) and/or a disk error happened.
* `W` - No-writes mode: Redis will reject any command marked with the `write` flag. Again, this can be useful for modules that implement a new scripting language and want to prevent any write commands.
* `E` - Return errors as a RedisModuleCallReply. Today, errors that happen before the command is invoked (like an unknown command or an ACL error) return a NULL reply and set errno. This can omit important information about the failure, and it is also impossible to simply pass the error to the user using RM_ReplyWithCallReply. This new flag allows you to get a RedisModuleCallReply object with the relevant error message and treat it as if it were an error raised by the command invocation.

Tests were added to verify the new code paths. In addition, small refactoring was done to share some code between modules, scripts, and the `processCommand` function:

1. `getAclErrorMessage` was added to `acl.c` to unify the log message extraction from the ACL result.
2. `checkGoodReplicasStatus` was added to `replication.c` to check the status of good replicas. It is used in `scriptVerifyWriteCommandAllow`, `RM_Call`, and `processCommand`.
3. `writeCommandsGetDiskErrorMessage` was added to `server.c` to get the error message on a persistence failure. Again, it is used in `scriptVerifyWriteCommandAllow`, `RM_Call`, and `processCommand`.
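An illustrative module snippet combining the new flags (a sketch, assuming `cmdname_s`, `argv`, and `argc` hold the user-supplied command and its arguments):

```c
#include "redismodule.h"

int ProxyCommand(RedisModuleCtx *ctx, RedisModuleString *cmdname_s,
                 RedisModuleString **argv, size_t argc) {
    const char *cmdname = RedisModule_StringPtrLen(cmdname_s, NULL);
    /* 'v' passes the argument vector; 'S' = script mode, 'W' = no writes,
     * 'E' = return errors as a reply object instead of NULL + errno. */
    RedisModuleCallReply *reply =
        RedisModule_Call(ctx, cmdname, "vSWE", argv, argc);
    if (reply == NULL) {
        /* Failures that still yield no reply end up here (errno is set). */
        return RedisModule_ReplyWithError(ctx, "ERR command failed");
    }
    /* With 'E', even pre-invocation errors arrive as a reply object and can
     * be forwarded to the caller verbatim. */
    RedisModule_ReplyWithCallReply(ctx, reply);
    RedisModule_FreeCallReply(reply);
    return REDISMODULE_OK;
}
```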
fad0b0d2a6
Fix error stats and failed command stats for blocked clients (#10309)
This is followup work for #10278, and a discussion about #10279.

The changes:
- Fix failed_calls in command stats for blocked clients that got an error, including CLIENT UNBLOCK and a module replying with an error from a thread.
- Fix latency stats for XREADGROUP calls that failed with -NOGROUP.

Theory behind which errors should be counted:
- Error stats represent errors returned to the user, so an error handled by a module should not be counted.
- The total error counter should stay the same.
- Command stats represent execution of commands (even with RM_Call); if they fail or get rejected, these calls are counted in commandstats, so failed_calls should be counted too.

Some thoughts about scripts: for scripts it could be different, since they're part of user code, not the infra (not an extension to Redis). We certainly want commandstats to contain all calls and errors. A simple script is like a MULTI-EXEC transaction, so an error inside it should be counted in error stats. A script that replies with an error to the user (using redis.error_reply) should also be counted in error stats. But then the problem is that a plain `return redis.call("SET")` should not be counted twice (once for the SET and once for EVAL), so that's something left to be resolved in #10279.
b099889a3a
Fix and improve module error reply statistics (#10278)
This PR handles several aspects:

1. Calls to RM_ReplyWithError from thread-safe contexts don't violate thread safety.
2. Errors returned from RM_Call to the module aren't counted in the statistics (they might be handled silently by the module).
3. When a module propagates a reply it got from RM_Call to its client, the error statistics are counted.

This is done by:

1. When appending an error reply to the output buffer, we avoid updating the global error statistics; instead we cache that error in a deferred list in the client struct.
2. When creating a RedisModuleCallReply object, the deferred error list is moved from the client into that object.
3. When a module calls RM_ReplyWithCallReply, we copy the deferred replies to the destination client (if that's a real client, that's when the error statistics are updated on the server).

Note about RM_ReplyWithCallReply: if the original reply had an array with errors, and the module replied with just a portion of the original reply rather than the entire reply, the errors are currently not propagated and the error stats will not get updated.

Fix #10180
23325c135f
sub-command support for ACL CAT and COMMAND LIST. redisCommand always stores fullname (#10127)
Summary of changes:

1. Renamed `redisCommand->name` to `redisCommand->declared_name`; it is a const char * for native commands and an SDS for module commands.
2. Store the [sub]command fullname in `redisCommand->fullname` (sds).
3. List subcommands in `ACL CAT`.
4. List subcommands in `COMMAND LIST`.
5. `moduleUnregisterCommands` now also frees the module subcommands.
6. RM_GetCurrentCommandName returns the full command name.

Other changes:

1. Added `addReplyErrorArity` and `addReplyErrorExpireTime`.
2. Removed the `getFullCommandName` function, which is now useless.
3. Some cleanups around `fullname`, since it is now an SDS.
4. Deleted the useless `populateSingleCommand` declaration from server.h.
5. Added tests to cover this change.
6. Added some module unload tests and fixed the leaks.
7. Made error messages uniform: they always contain the full command name, quoted.
8. Fixed some typos.

See the history in #9504, fixes #10124

Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: guybe7 <guy.benoish@redislabs.com>
c4b788230c
Adding module api for processing commands during busy jobs and allow flagging the commands that should be handled at this status (#9963)
Some modules might perform long-running logic in different stages of the Redis lifetime, for example:
* command execution
* RDB loading
* thread-safe context usage

During this long-running logic Redis is not responsive. This PR offers:

1. An API to process events while a busy command is running (`RM_Yield`).
2. A new flag (`ALLOW_BUSY`) to mark the commands that should be handled during busy jobs, which can also be used by modules (`allow-busy`).
3. In slow commands and thread-safe contexts, this flag starts rejecting commands with -BUSY only after `busy-reply-threshold`.
4. During loading (the `rdb_load` callback), events are processed right away (without waiting for `busy-reply-threshold`), but either way, the processing is throttled to the server hz rate.
5. Allow modules to yield to Redis background tasks, but not to client commands.

Also: rename `script-time-limit` to `busy-reply-threshold` (an alias for the pre-7.0 `lua-time-limit`).

Co-authored-by: Oran Agra <oran@redislabs.com>
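An illustrative long-running job that yields periodically (everything other than the RM_Yield call itself is made up):

```c
#include "redismodule.h"

void crunchItems(RedisModuleCtx *ctx, size_t huge_item_count) {
    for (size_t i = 0; i < huge_item_count; i++) {
        /* process_one_item(i); -- hypothetical unit of work */
        if ((i & 0xFFF) == 0) /* roughly every 4096 items */
            RedisModule_Yield(ctx, REDISMODULE_YIELD_FLAG_CLIENTS,
                              "Busy crunching items");
    }
}
```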
cf860df599
Fix module blocked clients RESP version (#9634)
Before this commit, module blocked clients did not carry through the original RESP version, resulting in RESP3 clients receiving unexpected RESP2 replies.
48efc25f74
Handle output buffer limits for Module blocked clients (#8141)
Module blocked clients cache the response in a temporary client. The reply list in that client is affected by the recent fix in #7202, but when the reply was later copied into the real client, it bypassed all the checks for the output buffer limit, which resulted in both responding with a partial response to the client and not disconnecting it at all.
c1b1e8c329
Adds pub/sub channel patterns to ACL (#7993)
Fixes #7923.

This PR appropriates the special `&` symbol (because `@` and `*` are taken), followed by a literal value or pattern, for describing the Pub/Sub channels that an ACL user can interact with. It is similar to the existing key patterns mechanism in function (additive) and implementation (copy-pasta). It also adds the allchannels and resetchannels ACL keywords, naturally.

The default user is given allchannels permissions, whereas new users get whatever is defined by the acl-pubsub-default configuration directive. For backward compatibility in 6.2, the default of this directive is allchannels, but this is likely to be changed to resetchannels in the next major version for stronger default security settings.

Unless allchannels is set for the user, channel access permissions are checked as follows:
* Calls to both PUBLISH and SUBSCRIBE will fail unless a pattern matching the channel name argument(s) exists for the user.
* Calls to PSUBSCRIBE will fail unless the pattern(s) provided as an argument literally exist(s) in the user's list.

Such failures are logged to the ACL log.

Runtime changes to channel permissions for a user with existing subscribing clients cause said clients to disconnect, unless the new permissions permit the connections to continue. Note, however, that PSUBSCRIBErs' patterns are matched literally, so given the change bar:* -> b*, pattern subscribers to bar:* will be disconnected.

Notes/questions:
* UNSUBSCRIBE, PUNSUBSCRIBE and PUBSUB remain unprotected due to lack of reasons for touching them.
d87a0d0286
Unified MULTI, LUA, and RM_Call with respect to blocking commands (#8025)
Blocking commands should not be used with MULTI, LUA, and RM_Call. This is because the caller, who executes the command in this context, expects a reply.

Today, LUA and MULTI have special (and different) treatment for blocking commands:

* LUA - most commands are marked with the no-script flag, which is checked when executing a command from LUA; commands that are not marked (like XREAD) verify that their blocking mode is not used inside LUA (by checking the CLIENT_LUA client flag).
* MULTI - a command that is going to block first verifies that the client is not inside MULTI (by checking the CLIENT_MULTI client flag). If the client is inside MULTI, it returns the result that matches an empty key with no timeout (for example, BLPOP inside MULTI acts as LPOP).

For modules that perform RM_Call with a blocking command, the returned result's type is REDISMODULE_REPLY_UNKNOWN and the caller cannot really know what happened.

Disadvantages of the current state:

* No unified approach; LUA, MULTI, and RM_Call each have a different treatment.
* A module cannot safely execute a blocking command (and get a reply or an error). Though it is true that modules are not like LUA or MULTI and should be smart enough not to execute blocking commands on RM_Call, sometimes you want to execute a command based on client input (for example, if you create a module that provides a new scripting language, like JavaScript or Python). While modules (in module commands) can check for REDISMODULE_CTX_FLAGS_LUA or REDISMODULE_CTX_FLAGS_MULTI to know not to block the client, there is no way to check whether the command came from another module using RM_Call. So there is no way for a module to know not to block another module's RM_Call execution.

This commit adds a way to unify the treatment of blocking clients by introducing a new CLIENT_DENY_BLOCKING client flag. In LUA, MULTI, and RM_Call the new flag is turned on to signify that the client should not be blocked. A blocking command verifies that the flag is turned off before blocking. If a blocking command sees that the CLIENT_DENY_BLOCKING flag is on, it doesn't block and returns the result that matches an empty key with no timeout (as MULTI does today).

The new flag is checked in the following commands:

* List blocking commands: BLPOP, BRPOP, BRPOPLPUSH, BLMOVE
* Zset blocking commands: BZPOPMIN, BZPOPMAX
* Stream blocking commands: XREAD, XREADGROUP
* SUBSCRIBE, PSUBSCRIBE, MONITOR

In addition, the new flag is turned on inside the AOF client; we do not want to block the AOF client to prevent deadlocks and command-ordering issues (and there is also an existing assert in the code that verifies it).

To keep backward compatibility on LUA, all the no-script flags on existing commands were kept untouched. In addition, a LUA special treatment of XREAD and XREADGROUP was kept.

To keep backward compatibility on MULTI (which today allows SUBSCRIBE and PSUBSCRIBE), we added special treatment for those commands to allow executing them in MULTI. The only backward compatibility issue that this PR introduces is that MONITOR is now not allowed inside MULTI.

Tests were added to verify blocking commands are not blocking the client on LUA, MULTI, or RM_Call. Tests were added to verify that a module can check for the CLIENT_DENY_BLOCKING flag.

Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Itamar Haber <itamar@redislabs.com>
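A simplified sketch of the unified check a blocking command performs (stand-in types; the flag's bit value is illustrative, not the real constant):

```c
struct client { unsigned long long flags; };
#define CLIENT_DENY_BLOCKING (1ULL << 41) /* illustrative bit value */
void addReplyNullArray(struct client *c);
void blockClientOnKeys(struct client *c);

void blockingPopSketch(struct client *c) {
    /* ... the keys were empty, so normally we would block ... */
    if (c->flags & CLIENT_DENY_BLOCKING) {
        addReplyNullArray(c); /* same reply as an empty key with no timeout */
        return;
    }
    blockClientOnKeys(c); /* actually block the client */
}
```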
adc3183cd2
Add Module API for version and compatibility checks (#7865)
* Introduce new APIs: RM_GetContextFlagsAll and RM_GetKeyspaceNotificationFlagsAll, which return the full flags mask of each feature. The module writer can check, based on this value, whether the flags they need are supported.
* For each flag, introduce a new value in redismodule.h; this value represents the LAST value and should be there as a reminder to update it when a new value is added. It is also used in the code to calculate the full flags mask (assuming flags are incrementally increasing). In addition, it is stated that the module writer should not use the LAST flag directly, and should use the GetFlagAll APIs instead.
* Introduce a new API: RM_IsSubEventSupported, which returns, for a given event and subevent, whether the subevent is supported.
* Introduce a new macro RMAPI_FUNC_SUPPORTED(func) that returns whether a function API is supported, by comparing it to NULL.
* Introduce a new API: int RM_GetServerVersion(), which returns the current Redis version in the format 0x00MMmmpp, e.g. 0x00060008.
* Changed the unstable version from 999.999.999 to 255.255.255.

Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
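An illustrative compatibility check combining these additions:

```c
#include "redismodule.h"

void logServerVersion(RedisModuleCtx *ctx) {
    /* RMAPI_FUNC_SUPPORTED guards against older servers where the function
     * pointer was never filled in by RedisModule_Init. */
    if (RMAPI_FUNC_SUPPORTED(RedisModule_GetServerVersion)) {
        int v = RedisModule_GetServerVersion(); /* 0x00MMmmpp */
        RedisModule_Log(ctx, "notice", "Redis %d.%d.%d",
                        (v >> 16) & 0xff, (v >> 8) & 0xff, v & 0xff);
    }
}
```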
042189fd87
RM_ThreadSafeContextTryLock a non-blocking method for acquiring GIL (#7738)
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
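An illustrative use of the new non-blocking lock from a module background thread (a sketch, assuming the caller retries on its next iteration):

```c
#include "redismodule.h"

void tryTouchKeyspace(RedisModuleCtx *ctx) {
    if (RedisModule_ThreadSafeContextTryLock(ctx) == REDISMODULE_OK) {
        /* GIL acquired: safe to operate on the keyspace through ctx. */
        RedisModule_ThreadSafeContextUnlock(ctx);
    } else {
        /* Lock is contended; back off and retry later instead of blocking. */
    }
}
```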