Technically, declaring a prototype with an empty parameter list has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently be introducing a breaking change if you are using this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although apparently not a problem for any of the compilers we build on, it feels like the right thing to do is to properly adhere to the C standard and use (void).
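For illustration, the deprecated form versus the conforming one:
```
int foo();      /* parameter-less prototype: deprecated, meaning changes in C2x */
int foo(void);  /* conforming: explicitly takes no arguments */
```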
This PR fixes the compilation warnings and errors generated by the latest
compiler toolchain, and adds a new runner with the latest toolchain for daily CI.
## Fix various compilation warnings and errors
1) jemalloc.c
COMPILER: clang-14 with FORTIFY_SOURCE
WARNING:
```
src/jemalloc.c:1028:7: warning: suspicious concatenation of string literals in an array initialization; did you mean to separate the elements with a comma? [-Wstring-concatenation]
"/etc/malloc.conf",
^
src/jemalloc.c:1027:3: note: place parentheses around the string literal to silence warning
"\"name\" of the file referenced by the symbolic link named "
^
```
REASON: the compiler alerts developers to potential issues with string concatenation
in array initializers that may stem from a missing comma,
just like #9534, which missed one.
SOLUTION: wrap the two literals in `()` to tell the compiler that the multi-line string is intentionally continuous.
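For reference, the shape of the fix (a sketch; the array name is hypothetical and the element abbreviated from the warning above):
```
/* Parentheses mark the two adjacent literals as one intentional element,
 * silencing -Wstring-concatenation without changing the array contents. */
static const char *opt_sources[] = {
    ("\"name\" of the file referenced by the symbolic link named "
     "/etc/malloc.conf"),
};
```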
2) config.h
COMPILER: clang-14 with FORTIFY_SOURCE
WARNING:
```
In file included from quicklist.c:36:
./config.h:319:76: warning: attribute declaration must precede definition [-Wignored-attributes]
char *strcat(char *restrict dest, const char *restrict src) __attribute__((deprecated("please avoid use of unsafe C functions. prefer use of redis_strlcat instead")));
```
REASON: enabling _FORTIFY_SOURCE causes the compiler to use the checked variant of `strcpy()`;
as a result, our attribute-only deprecated declaration ends up coming after the definition pulled in by <features.h>.
SOLUTION: move the deprecated attribute declarations from config.h to fmacro.h, before `#include <features.h>`.
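A minimal sketch of the ordering fix (declaration text taken from the warning above; placement per the solution):
```
/* fmacro.h: declare the deprecation attribute first, so it precedes the
 * fortified definitions pulled in via <features.h> under _FORTIFY_SOURCE. */
char *strcat(char *restrict dest, const char *restrict src)
    __attribute__((deprecated("please avoid use of unsafe C functions. "
                              "prefer use of redis_strlcat instead")));
#include <features.h>
```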
3) networking.c
COMPILER: GCC-12
WARNING:
```
networking.c: In function ‘addReplyDouble.part.0’:
networking.c:876:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
876 | dbuf[start] = '$';
| ^
networking.c:868:14: note: at offset -5 into destination object ‘dbuf’ of size 5152
868 | char dbuf[MAX_LONG_DOUBLE_CHARS+32];
| ^
networking.c:876:21: warning: writing 1 byte into a region of size 0 [-Wstringop-overflow=]
876 | dbuf[start] = '$';
| ^
networking.c:868:14: note: at offset -6 into destination object ‘dbuf’ of size 5152
868 | char dbuf[MAX_LONG_DOUBLE_CHARS+32];
```
REASON: GCC-12 deduces that digits10() may return 9 or 10 through `return 9 + (v >= 1000000000UL)`, and assumes the worst case when checking buffer bounds.
SOLUTION: add an assert to let the compiler know the possible length.
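A sketch of the mechanism (a toy shape, not the committed hunk; the bound is illustrative):
```
/* GCC only narrows digits10()-style code to "9 or 10"; an assert on the
 * result range lets it prove the backwards header write stays in bounds. */
size_t digits = digits10(dlen);
serverAssert(digits >= 1 && digits <= 4);   /* dlen is a small reply length */
size_t start = header_len - digits;         /* provably non-negative now */
dbuf[start] = '$';
```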
4) redis-cli.c & redis-benchmark.c
COMPILER: clang-14 with FORTIFY_SOURCE
WARNING:
```
redis-benchmark.c:1621:2: warning: embedding a directive within macro arguments has undefined behavior [-Wembedded-directive] #ifdef USE_OPENSSL
redis-cli.c:3015:2: warning: embedding a directive within macro arguments has undefined behavior [-Wembedded-directive] #ifdef USE_OPENSSL
```
REASON: when _FORTIFY_SOURCE is enabled, the compiler replaces `printf()` with a
checking macro; expanding preprocessor directives inside that macro's arguments
is undefined behavior.
SOLUTION: move the directive-dependent code out of the `printf()` call.
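A before/after sketch of the pattern (abbreviated, not the exact hunk):
```
/* before: a directive expanded inside the fortified printf macro's
 * argument list - undefined behavior */
printf("Usage: redis-cli ..."
#ifdef USE_OPENSSL
       "  --tls              Establish a secure TLS connection.\n"
#endif
       "");

/* after: resolve the conditional outside the call */
#ifdef USE_OPENSSL
const char *tls_usage = "  --tls              Establish a secure TLS connection.\n";
#else
const char *tls_usage = "";
#endif
printf("Usage: redis-cli ...%s", tls_usage);
```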
5) server.c
COMPILER: gcc-13 with FORTIFY_SOURCE
WARNING:
```
In function 'lookupCommandLogic',
inlined from 'lookupCommandBySdsLogic' at server.c:3139:32:
server.c:3102:66: error: '*(robj **)argv' may be used uninitialized [-Werror=maybe-uninitialized]
3102 | struct redisCommand *base_cmd = dictFetchValue(commands, argv[0]->ptr);
| ~~~~^~~
```
REASON: the compiler thinks that the `argc` returned by `sdssplitlen()` could be 0,
resulting in an empty array of size 0 being passed to lookupCommandLogic.
This should be a false positive: `argc` can't be 0 when the strings are not NULL.
SOLUTION: add an assert to let the compiler know that `argc` is positive.
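The shape of the fix, sketched (context abbreviated from lookupCommandBySdsLogic):
```
int argc;
sds *strings = sdssplitlen(name, sdslen(name), "|", 1, &argc);
if (strings == NULL) return NULL;
/* sdssplitlen() on a non-NULL string yields at least one token; assert it
 * so the compiler stops assuming a zero-length argv can reach the lookup. */
serverAssert(argc > 0);
```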
6) sha1.c
COMPILER: gcc-12
WARNING:
```
In function ‘SHA1Update’,
inlined from ‘SHA1Final’ at sha1.c:195:5:
sha1.c:152:13: warning: ‘SHA1Transform’ reading 64 bytes from a region of size 0 [-Wstringop-overread]
152 | SHA1Transform(context->state, &data[i]);
| ^
sha1.c:152:13: note: referencing argument 2 of type ‘const unsigned char[64]’
sha1.c: In function ‘SHA1Final’:
sha1.c:56:6: note: in a call to function ‘SHA1Transform’
56 | void SHA1Transform(uint32_t state[5], const unsigned char buffer[64])
| ^
In function ‘SHA1Update’,
inlined from ‘SHA1Final’ at sha1.c:198:9:
sha1.c:152:13: warning: ‘SHA1Transform’ reading 64 bytes from a region of size 0 [-Wstringop-overread]
152 | SHA1Transform(context->state, &data[i]);
| ^
sha1.c:152:13: note: referencing argument 2 of type ‘const unsigned char[64]’
sha1.c: In function ‘SHA1Final’:
sha1.c:56:6: note: in a call to function ‘SHA1Transform’
56 | void SHA1Transform(uint32_t state[5], const unsigned char buffer[64])
```
REASON: due to the GCC bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80922, when
LTO is enabled, gcc-12 does not see `diagnostic ignored "-Wstringop-overread"`, resulting in a warning.
SOLUTION: temporarily mark SHA1Update as noinline to avoid the compiler warning caused
by LTO until the above gcc bug is fixed.
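The workaround amounts to a single attribute (sketch; signature as shown in the warning above):
```
/* Keep SHA1Update a real call under LTO so gcc-12 still honors the
 * surrounding "#pragma GCC diagnostic ignored" (see gcc bug 80922). */
__attribute__((noinline))
void SHA1Update(SHA1_CTX *context, const unsigned char *data, uint32_t len);
```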
7) zmalloc.h
COMPILER: GCC-12
WARNING:
```
In function ‘memset’,
inlined from ‘moduleCreateContext’ at module.c:877:5,
inlined from ‘RM_GetDetachedThreadSafeContext’ at module.c:8410:5:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:59:10: warning: ‘__builtin_memset’ writing 104 bytes into a region of size 0 overflows the destination [-Wstringop-overflow=]
59 | return __builtin___memset_chk (__dest, __ch, __len,
```
REASON: due to the GCC-12 bug https://gcc.gnu.org/bugzilla/show_bug.cgi?id=96503,
GCC-12 cannot see alloc_size, which causes it to assume the actual size of the memory
is 0 when checking with __glibc_objsize0().
SOLUTION: temporarily mark the malloc-related interfaces as `noinline` to avoid the compiler
warnings caused by LTO until the above gcc bug is fixed.
## Other changes
1) Fixed an issue where `ps -p [pid]` doesn't output `<defunct>` when using procps 4.x, causing the
`replication child dies when parent is killed - diskless` test to fail.
2) Added a new fortify CI job with GCC-13 and the ubuntu-lunar docker image.
This PR fixes two bugs:
1. When a client's querybuf is pre-allocated for a fat argv, we need to update the
querybuf_peak of the client immediately to completely avoid the unexpected
shrinking of querybuf in the next clientCron (before data arrives to set the peak).
2. The protocol's bulklen does not include `\r\n`, but the allocation and the data we
read do. So in `clientsCronResizeQueryBuffer`, the `resize` or `querybuf_peak`
should add these 2 bytes.
The first bug is likely to hit us on large payloads over slow connections, in which case
transferring the payload can take longer and a cron event will be triggered (specifically
if there are not a lot of clients).
* Add RM_ReplyWithErrorFormat that can support format
Reply with an error created from a printf format and arguments.
If the error code is already passed in the string 'fmt', the error
code provided is used, otherwise the string "-ERR " for the generic
error code is automatically added.
The usage is, for example:
RedisModule_ReplyWithErrorFormat(ctx, "An error: %s", "foo");
RedisModule_ReplyWithErrorFormat(ctx, "-WRONGTYPE Wrong Type: %s", "foo");
The function always returns REDISMODULE_OK.
## Issue
When we build with GCC 12 or later, or Clang 9.0 or later, with `-D_FORTIFY_SOURCE=3`,
we can see the following buffer overflow:
```
=== REDIS BUG REPORT START: Cut & paste starting from here ===
6263:M 06 Apr 2023 08:59:12.915 # Redis 255.255.255 crashed by signal: 6, si_code: -6
6263:M 06 Apr 2023 08:59:12.915 # Crashed running the instruction at: 0x7f03d59efa7c
------ STACK TRACE ------
EIP:
/lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7f03d59efa7c]
Backtrace:
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f03d599b520]
/lib/x86_64-linux-gnu/libc.so.6(pthread_kill+0x12c)[0x7f03d59efa7c]
/lib/x86_64-linux-gnu/libc.so.6(raise+0x16)[0x7f03d599b476]
/lib/x86_64-linux-gnu/libc.so.6(abort+0xd3)[0x7f03d59817f3]
/lib/x86_64-linux-gnu/libc.so.6(+0x896f6)[0x7f03d59e26f6]
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x2a)[0x7f03d5a8f76a]
/lib/x86_64-linux-gnu/libc.so.6(+0x1350c6)[0x7f03d5a8e0c6]
src/redis-server 127.0.0.1:25111(+0xd5e80)[0x557cddd3be80]
src/redis-server 127.0.0.1:25111(feedReplicationBufferWithObject+0x78)[0x557cddd3c768]
src/redis-server 127.0.0.1:25111(replicationFeedSlaves+0x1a4)[0x557cddd3cbc4]
src/redis-server 127.0.0.1:25111(+0x8721a)[0x557cddced21a]
src/redis-server 127.0.0.1:25111(call+0x47a)[0x557cddcf38ea]
src/redis-server 127.0.0.1:25111(processCommand+0xbf4)[0x557cddcf4aa4]
src/redis-server 127.0.0.1:25111(processInputBuffer+0xe6)[0x557cddd22216]
src/redis-server 127.0.0.1:25111(readQueryFromClient+0x3a8)[0x557cddd22898]
src/redis-server 127.0.0.1:25111(+0x1b9134)[0x557cdde1f134]
src/redis-server 127.0.0.1:25111(aeMain+0x119)[0x557cddce5349]
src/redis-server 127.0.0.1:25111(main+0x466)[0x557cddcd6716]
/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f03d5982d90]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f03d5982e40]
src/redis-server 127.0.0.1:25111(_start+0x25)[0x557cddcd7025]
```
The main reason is that when FORTIFY_SOURCE is enabled, GCC or clang will enhance some
common functions, such as `strcpy`, `memcpy`, `fgets`, etc, so that they can detect buffer
overflow errors and stop program execution, thus improving the safety of the program.
We use `zmalloc_usable_size()` everywhere to make use of extra memory in allocated blocks, but that is
an abuse since malloc_usable_size() isn't meant for this kind of use; it is for diagnostics only. That is also why the
behavior is flaky when built with _FORTIFY_SOURCE: the compiler can sense that we reach outside
the allocated block and SIGABRTs.
### Solution
If we need to use the additional memory we got, we need a dummy realloc with the `alloc_size` attribute
and no inlining (see `extend_to_usable`) to let the compiler see the size of memory we intend to use.
This can either be an implicit call inside `z*usable` that returns the size, so that the caller doesn't have any
other worry, or it can be a normal zmalloc call, which means that if the caller wants to use
zmalloc_usable_size it must also use extend_to_usable.
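A minimal sketch of such a dummy realloc, per the description above (details illustrative):
```
/* No-op at runtime: alloc_size(2) tells the compiler that `size` bytes at
 * the returned pointer are valid, and noinline keeps FORTIFY from seeing
 * through the call and reverting to the original (smaller) object size. */
__attribute__((alloc_size(2),noinline))
void *extend_to_usable(void *ptr, size_t size) {
    (void)size;
    return ptr;
}
```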
### Changes
This PR does the following:
1) rename the current z[try]malloc_usable family to z[try]malloc_internal and don't expose them to users outside zmalloc.c,
2) expose a new set of `z[*]_usable` family that use z[*]_internal and `extend_to_usable()` implicitly, the caller gets the
size of the allocation and it is safe to use.
3) go over all the users of `zmalloc_usable_size` and convert them to use the `z[*]_usable` family if possible.
4) in the places where the caller can't use `z[*]_usable` and store the real size, and must still rely on zmalloc_usable_size,
we still make sure that the allocation used `z[*]_usable` (which has a call to `extend_to_usable()`) and ignore the
returned size; this way a later call to `zmalloc_usable_size` is still safe.
[4] was done for module.c and listpack.c; all the other places (sds, reply proto list, replication backlog, client->buf)
are using [3].
Co-authored-by: Oran Agra <oran@redislabs.com>
In #11012, we changed the way command durations were computed to handle the same command being executed multiple times. This commit fixes some misses from that commit.
* Wait commands were not correctly reporting their duration if the timeout was reached.
* Multi/scripts and modules with RM_Call were not properly resetting the duration between inner calls, leading to them reporting cumulative durations.
* When a blocked client is freed, the call and duration are always discarded.
This commit also adds an assert if the duration is not properly reset, potentially indicating that a report to call statistics was missed. The assert may be removed in the future, as it's mainly intended to detect misses in tests.
The reply schema validator is failing since the recent changes to introspection.tcl that use the RESET command; this happens because the test forces RESP3, but the RESET command didn't respect that and switched back to RESP2.
This PR allows clients to send information about the client library to redis
to be displayed in CLIENT LIST and CLIENT INFO.
Currently supports:
`CLIENT [lib-name | lib-ver] <value>`
Client libraries are expected to pipeline these right after AUTH, and ignore
the failure in case they're talking to an older version of redis.
These will be shown in CLIENT LIST and CLIENT INFO as:
* `lib-name` - meant to hold the client library name.
* `lib-ver` - meant to hold the client library version.
The values cannot contain spaces, newlines, or any other wild ASCII characters,
but all other normal chars are accepted, e.g. `.`, `=`, etc. (same as CLIENT NAME).
The RESET command does NOT clear these, but they can be cleared to the
default by sending a command with a blank string.
Co-authored-by: Oran Agra <oran@redislabs.com>
Allow running blocking commands from within a module using `RM_Call`.
Today, when `RM_Call` is used, the fake client that is used to run the command
is marked with the `CLIENT_DENY_BLOCKING` flag. This flag tells the command
that it is not allowed to block the client, and in case it needs to block, it must
fall back to some alternative (either return an error or perform some default behavior).
For example, `BLPOP` falls back to a simple `LPOP` if it is not allowed to block.
All commands must respect the `CLIENT_DENY_BLOCKING` flag (including
module commands). When the command invocation finishes, Redis asserts that
the client was not blocked.
This PR introduces the ability to call blocking commands using `RM_Call` by
passing a callback that will be called when the client gets unblocked.
In order to do that, the user must explicitly allow blocking commands by
passing a new format specifier argument, `K`, to the `RM_Call`
function. This new flag tells Redis that it is allowed to run a blocking command
and block the client. In case the command got blocked, Redis will return a new
type of call reply (`REDISMODULE_REPLY_PROMISE`). This call reply indicates
that the command got blocked and the user can set the on_unblocked handler using
`RM_CallReplyPromiseSetUnblockHandler`.
When the client gets unblocked, it eventually reaches the `processUnblockedClients` function.
This is where we check if the client is a fake module client and, if it is, we call the unblock
callback instead of performing the usual unblock operations.
**Notice**: `RM_CallReplyPromiseSetUnblockHandler` must be called atomically
alongside the command invocation (without releasing the Redis lock in between).
In addition, unlike other CallReply types, the promise call reply must be released
by the module while the Redis GIL is acquired.
The module can abort the execution of the blocking command (if it was not yet
executed) using `RM_CallReplyPromiseAbort`. The API will return `REDISMODULE_OK`
on success and `REDISMODULE_ERR` if the operation was already executed.
**Notice** that in case of a misbehaving module, Abort might finish successfully but the
operation will not really be aborted. This can only happen if the module does not respect
the disconnect callback of the blocked client.
For pure Redis commands this cannot happen.
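For orientation, a compressed sketch of the module-side flow (error handling and the module's own client blocking omitted; treat details as illustrative):
```
/* Runs when the blocked RM_Call completes; executes atomically as a unit. */
static void OnUnblocked(RedisModuleCtx *ctx, RedisModuleCallReply *reply,
                        void *private_data) {
    REDISMODULE_NOT_USED(ctx);
    REDISMODULE_NOT_USED(private_data);
    /* consume `reply` here, then free it while holding the GIL */
    RedisModule_FreeCallReply(reply);
}

static int MyBlockingCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    /* 'K' allows the invoked command to actually block */
    RedisModuleCallReply *rep = RedisModule_Call(ctx, "BLPOP", "Kcc", "mylist", "0");
    if (RedisModule_CallReplyType(rep) == REDISMODULE_REPLY_PROMISE) {
        RedisModule_CallReplyPromiseSetUnblockHandler(rep, OnUnblocked, NULL);
        /* a real module must also block its own client here */
    }
    return REDISMODULE_OK;
}
```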
### Atomicity Guarantees
The API promises that the unblock handler will run atomically as an execution unit.
This means that all the operations performed in the unblock handler will be wrapped
in a multi/exec transaction when replicated to the replica and AOF.
The API does **not** guarantee any other atomicity properties, such as when the unblock
handler will be called. This gives us the flexibility to strengthen the guarantees (or not)
in the future if we decide that we need them.
That said, the implementation **does** provide a better guarantee when performing a
pure Redis blocking command like `BLPOP`. In this case the unblock handler will run
atomically with the operation that got unblocked (for example, in the case of `BLPOP`, the
unblock handler will run atomically with the `LPOP` operation that runs when the command
gets unblocked). This is an implementation detail that might change in the future and the
module writer should not count on it.
### Calling blocking commands while running on script mode (`S`)
`RM_Call` script mode (`S`) was introduced in #0372. It is used for use cases where the
command invoked via `RM_Call` comes from user input and we want to make
sure the user cannot run dangerous commands like `shutdown`. Some commands, such
as `BLPOP`, are marked with the `NO_SCRIPT` flag, which means they are not allowed in
script mode. Those commands are marked with `NO_SCRIPT` just because they are
blocking commands, not because they are dangerous. Now that we can run blocking
commands via RM_Call, there is no real reason not to allow such commands in script mode.
The underlying problem is that the `NO_SCRIPT` flag is abused to also mark some of the
blocking commands (notice that those commands know not to block the client if it is not
allowed to do so, and have fallback logic for such cases. So even if those commands
were not marked with the `NO_SCRIPT` flag, it would not harm Redis, and today we can
already run those commands within multi/exec).
In addition, not all blocking commands are marked with the `NO_SCRIPT` flag; for example,
`blmpop` is not marked and can run from within a script.
These facts show that there is some ambiguity about the meaning of the `NO_SCRIPT`
flag, and it's not fully clear where it should be used.
The PR suggests that blocking commands should not be marked with the `NO_SCRIPT` flag;
those commands should handle the `CLIENT_DENY_BLOCKING` flag and only block when
it's safe (as they already do today). To achieve that, the PR removes the `NO_SCRIPT`
flag from the following commands:
* `blmove`
* `blpop`
* `brpop`
* `brpoplpush`
* `bzpopmax`
* `bzpopmin`
* `wait`
This might be considered a breaking change, as now, in scripts, instead of getting a
`command is not allowed from script` error, the user will get some fallback behavior
based on the command implementation. That said, the change matches the behavior
of scripts and multi/exec with respect to those commands, and allows running them via
`RM_Call` even when script mode is used.
### Additional RedisModule API and changes
* `RM_BlockClientSetPrivateData` - Set private data on the blocked client without the
need to unblock the client. This allows us to set the promise CallReply as the private
data of the blocked client and abort it if the client gets disconnected.
* `RM_BlockClientGetPrivateData` - Return the current private data set on a blocked client.
We need it so we will have access to this private data in the disconnect callback.
* On RM_Call, the returned reply will be added to the auto memory context only if auto
memory is enabled; this allows us to keep the call reply for a longer time than the context
lifetime and does not force an unneeded borrow relationship between the CallReply and
the RedisModuleContext.
This change adds new module callbacks that can override the default password based authentication associated with ACLs. With this, Modules can register auth callbacks through which they can implement their own Authentication logic. When `AUTH` and `HELLO AUTH ...` commands are used, Module based authentication is attempted and then normal password based authentication is attempted if needed.
The new Module APIs added in this PR are - `RM_RegisterCustomAuthCallback`, `RM_BlockClientOnAuth` and `RedisModule_ACLAddLogEntryByUserName`.
Module based authentication will be attempted for all Redis users (created through the ACL SETUSER cmd or through Module APIs) even if the Redis user does not exist at the time of the command. This gives a chance for the Module to create the RedisModule user and then authenticate via the RedisModule API - from the custom auth callback.
For the AUTH command, we will support both variations - `AUTH <username> <password>` and `AUTH <password>`. In case of the `AUTH <password>` variation, the custom auth callbacks are triggered with "default" as the username and the provided password.
### RedisModule_RegisterCustomAuthCallback
```
void RM_RegisterCustomAuthCallback(RedisModuleCtx *ctx, RedisModuleCustomAuthCallback cb) {
```
This API registers a callback to execute prior to normal password-based authentication. Multiple callbacks can be registered across different modules. These callbacks are responsible for either handling the authentication (authenticating the user or explicitly denying) or deferring it to other authentication mechanisms. When a Module is unloaded, all the auth callbacks registered by it are unregistered. The callbacks are attempted in order of most recently registered first, when the AUTH / HELLO (with the AUTH field provided) commands are called. The callbacks are called with a module context along with a username and a password, and are expected to take one of the following actions:
(1) Authenticate - Use the RM_Authenticate* API successfully and return `REDISMODULE_AUTH_HANDLED`. This will immediately end the auth chain as successful and add the OK reply.
(2) Block a client on authentication - Use the `RM_BlockClientOnAuth` API and return `REDISMODULE_AUTH_HANDLED`. Here, the client will be blocked until the `RM_UnblockClient` API is used, which will trigger the auth reply callback (provided earlier through `RM_BlockClientOnAuth`). In this reply callback, the Module should authenticate, deny or skip handling authentication.
(3) Deny Authentication - Return `REDISMODULE_AUTH_HANDLED` without authenticating or blocking the client. Optionally, `err` can be set to a custom error message. This will immediately end the auth chain as unsuccessful and add the ERR reply.
(4) Skip handling Authentication - Return `REDISMODULE_AUTH_NOT_HANDLED` without blocking the client. This will allow the engine to attempt the next custom auth callback.
If none of the callbacks authenticate or deny auth, then password based auth is attempted and will authenticate or add failure logs and reply to the clients accordingly.
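For illustration, a sketch of a callback implementing actions (1), (3) and (4) (the signature follows the description above; `CheckPassword` is a hypothetical helper, and <string.h> plus redismodule.h are assumed):
```
int MyAuthCallback(RedisModuleCtx *ctx, RedisModuleString *username,
                   RedisModuleString *password, RedisModuleString **err) {
    size_t ulen;
    const char *user = RedisModule_StringPtrLen(username, &ulen);
    if (strcmp(user, "module_managed_user") != 0)
        return REDISMODULE_AUTH_NOT_HANDLED;      /* (4) defer to the next mechanism */
    if (CheckPassword(password)) {                /* hypothetical helper */
        RedisModule_AuthenticateClientWithACLUser(ctx, user, ulen,
                                                  NULL, NULL, NULL);
        return REDISMODULE_AUTH_HANDLED;          /* (1) authenticated, OK reply */
    }
    *err = RedisModule_CreateString(ctx, "custom auth failed", 18);
    return REDISMODULE_AUTH_HANDLED;              /* (3) explicit deny, ERR reply */
}
```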
### RedisModule_BlockClientOnAuth
```
RedisModuleBlockedClient *RM_BlockClientOnAuth(RedisModuleCtx *ctx, RedisModuleCustomAuthCallback reply_callback,
void (*free_privdata)(RedisModuleCtx*,void*))
```
This API can only be used from a Module from the custom auth callback. If a client is not in the middle of custom module based authentication, ERROR is returned. Otherwise, the client is blocked and the `RedisModule_BlockedClient` is returned similar to the `RedisModule_BlockClient` API.
### RedisModule_ACLAddLogEntryByUserName
```
int RM_ACLAddLogEntryByUserName(RedisModuleCtx *ctx, RedisModuleString *username, RedisModuleString *object, RedisModuleACLLogEntryReason reason)
```
Adds a new entry in the ACL log with the `username` RedisModuleString provided. This simplifies the Module usage because now, developers do not need to create a Module User just to add an error ACL Log entry. Aside from accepting username (RedisModuleString) instead of a RedisModuleUser, it is the same as the existing `RedisModule_ACLAddLogEntry` API.
### Breaking changes
- HELLO command - Clients can now only set the client name and RESP protocol from the `HELLO` command if they are authenticated. Also, we now finish command arg validation first and return early with an ERR reply if any arg is invalid. This is to avoid mutating the client name / RESP from a command that would have failed on invalid arguments.
### Notable behaviors
- Module unblocking - Now, we will not allow Modules to block the client from inside the context of a reply callback (triggered from the Module unblock flow `moduleHandleBlockedClients`).
---------
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
Replace NBSP character (0xC2 0xA0) with space (0x20).
Looks like that was originally added due to a misconfigured editor, which seems to have been fixed by now.
Implementing the WAITAOF functionality which would allow the user to
block until a specified number of Redises have fsynced all previous write
commands to the AOF.
Syntax: `WAITAOF <num_local> <num_replicas> <timeout>`
Response: Array containing two elements: num_local, num_replicas
num_local is always either 0 or 1, representing the local AOF on the master.
num_replicas is the number of replicas that acknowledged that the replication
offset of the last write has been fsynced to their AOF.
Returns an error when called on replicas, or when called with a non-zero
num_local on a master with AOF disabled; in all other cases the response
just contains the number of fsynced copies.
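For example, on a master with AOF enabled and no replicas attached:
```
127.0.0.1:6379> SET foo bar
OK
127.0.0.1:6379> WAITAOF 1 0 0
1) (integer) 1
2) (integer) 0
```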
Main changes:
* Added code to keep track of replication offsets that are confirmed to have
been fsynced to disk.
* Keep advancing master_repl_offset even when replication is disabled (and
there's no replication backlog), but only if there's an AOF enabled.
This way we can use this command and its mechanisms even when replication
is disabled.
* Extend REPLCONF ACK to `REPLCONF ACK <ofs> FACK <ofs>`; the FACK
will be appended only if there's an AOF on the replica, and is already ignored by
old masters (thus backwards compatible)
* WAIT no longer waits for the replication offset after your last command, but
rather for the replication offset after your last write (or read command that caused
propagation, e.g. lazy expiry).
Unrelated changes:
* WAIT command respects CLIENT_DENY_BLOCKING (not just CLIENT_MULTI)
Implementation details:
* Add an atomic var named `fsynced_reploff_pending` that's updated
(usually by the bio thread) and later copied to the main `fsynced_reploff`
variable (only if the AOF base file exists).
I.e. during the initial AOF rewrite it will not be used as the fsynced offset
since the AOF base is still missing.
* Replace the close+fsync bio job with a new BIO_CLOSE_AOF (AOF specific)
job that will also update the fsync offset field.
* Handle all AOF jobs (BIO_CLOSE_AOF, BIO_AOF_FSYNC) in the same bio
worker thread, to impose ordering on their execution. This solves a
race condition where a job could set `fsynced_reploff_pending` to a higher
value than another pending fsync job, resulting in indicating an offset
for which parts of the data have not yet actually been fsynced.
Imposing an ordering on the jobs guarantees that fsync jobs are executed
in increasing order of replication offset.
* Drain bio jobs when switching `appendfsync` to "always"
This should prevent a write race between updates to `fsynced_reploff_pending`
in the main thread (`flushAppendOnlyFile` when set to ALWAYS fsync), and
those done in the bio thread.
* Drain the pending fsync when starting over a new AOF to avoid race conditions
with the previous AOF offsets overriding the new one (e.g. after switching to
replicate from a new master).
* Make sure to update the fsynced offset at the end of the initial AOF rewrite.
A must in case there are no additional writes that trigger a periodic fsync,
specifically for a replica that does a full sync.
Limitations:
It is possible to write a module or a Lua script that propagates to the AOF but doesn't
propagate to the replication stream (see REDISMODULE_ARGV_NO_REPLICAS and luaRedisSetReplCommand).
These features are incompatible with the WAITAOF command and can result
in two bad cases. The scenario is that the user executes a command that only
propagates to the AOF, then immediately
issues a WAITAOF, and there are no further writes on the replication stream after that.
1. if the last thing that happened on the replication stream is a PING
(which increased the replication offset but won't trigger an fsync on the replica),
then the client would hang forever (waiting for a FACK that the replica will never
send since it doesn't trigger any fsyncs).
2. if the last thing that happened is a write command that got propagated properly,
then WAITAOF will be released immediately, without waiting for an fsync (since
the offset didn't change)
Refactoring:
* Plumbing to allow bio worker to handle multiple job types
This introduces infrastructure necessary to allow BIO workers to
not have a 1-1 mapping of worker to job-type. This allows in the
future to assign multiple job types to a single worker, either as
a performance/resource optimization, or as a way of enforcing
ordering between specific classes of jobs.
Co-authored-by: Oran Agra <oran@redislabs.com>
This bug seems to have been there forever: CLIENT REPLY OFF|SKIP will
mark the client with the CLIENT_REPLY_OFF or CLIENT_REPLY_SKIP flag.
With these flags, prepareClientToWrite, called by addReply*, will
return C_ERR directly, so the client can't receive Pub/Sub
messages or any other push notifications, e.g. client side tracking.
In this PR, we add a CLIENT_PUSHING flag that disables the reply
silencing flags. When adding push replies we set the flag, and after the reply we
clear the flag. Then we add the flag check in prepareClientToWrite.
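The check boils down to one extra condition in prepareClientToWrite (sketch of the logic described above):
```
/* Reply silencing only applies when we are not pushing a message. */
if ((c->flags & (CLIENT_REPLY_OFF|CLIENT_REPLY_SKIP)) &&
    !(c->flags & CLIENT_PUSHING)) return C_ERR;
```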
Fixes #11874
Note, the SUBSCRIBE command response is a bit awkward,
see https://github.com/redis/redis-doc/pull/2327
Co-authored-by: Oran Agra <oran@redislabs.com>
Work in progress towards implementing a reply schema as part of COMMAND DOCS, see #9845
Since ironing the details of the reply schema of each and every command can take a long time, we
would like to merge this PR when the infrastructure is ready, and let this mature in the unstable branch.
Meanwhile the changes of this PR are internal, they are part of the repo, but do not affect the produced build.
### Background
In #9656 we added a lot of information about Redis commands, but we are missing information about the replies
### Motivation
1. Documentation. This is the primary goal.
2. It should be possible, based on the output of COMMAND, to be able to generate client code in typed
languages. In order to do that, we need Redis to tell us, in detail, what each reply looks like.
3. We would like to build a fuzzer that verifies the reply structure (for now we use the existing
testsuite, see the "Testing" section)
### Schema
The idea is to supply some sort of schema for the various replies of each command.
The schema will describe the conceptual structure of the reply (for generated clients), as defined in RESP3.
Note that the reply structure itself may change, depending on the arguments (e.g. `XINFO STREAM`, with
and without the `FULL` modifier)
We decided to use the standard json-schema (see https://json-schema.org/) as the reply-schema.
Example for `BZPOPMIN`:
```
"reply_schema": {
"oneOf": [
{
"description": "Timeout reached and no elements were popped.",
"type": "null"
},
{
"description": "The keyname, popped member, and its score.",
"type": "array",
"minItems": 3,
"maxItems": 3,
"items": [
{
"description": "Keyname",
"type": "string"
},
{
"description": "Member",
"type": "string"
},
{
"description": "Score",
"type": "number"
}
]
}
]
}
```
#### Notes
1. It is ok that some commands' reply structure depends on the arguments, and it's the caller's responsibility
to know which one is relevant. This follows other request-reply systems like OpenAPI,
where the reply schema can also be oneOf and the caller is responsible for knowing which schema is the relevant one.
2. The reply schemas will describe RESP3 replies only. Even though RESP3 is structured, we want to use the reply
schema for documentation (and possibly to create a fuzzer that validates the replies)
3. For documentation, the description field will include an explanation of the scenario in which the reply is sent,
including any relation to arguments. For example, for `ZRANGE`'s two schemas we will need to state that one
is with `WITHSCORES` and the other is without.
4. For documentation, there will be another optional field "notes" in which we will add a short description of
the representation in RESP2, in case it's not trivial (RESP3's `ZRANGE`'s nested array vs. RESP2's flat
array, for example)
Given the above:
1. We can generate the "return" section of all commands in [redis-doc](https://redis.io/commands/)
(given that "description" and "notes" are comprehensive enough)
2. We can generate a client in a strongly typed language (but the return type could be a conceptual
`union` and the caller needs to know which schema is relevant). see the section below for RESP2 support.
3. We can create a fuzzer for RESP3.
### Limitations (because we are using the standard json-schema)
The problem is that Redis' replies are more diverse than what the json format allows. This means that,
when we convert the reply to a json (in order to validate the schema against it), we lose information (see
the "Testing" section below).
The other option would have been to extend the standard json-schema (and json format) to include stuff
like sets, bulk-strings, error-string, etc. but that would mean also extending the schema-validator - and that
seemed like too much work, so we decided to compromise.
Examples:
1. We cannot tell the difference between an "array" and a "set"
2. We cannot tell the difference between simple-string and bulk-string
3. We cannot verify true uniqueness of items in commands like ZRANGE: json-schema doesn't cover the
case of two identical members with different scores (e.g. `[["m1",6],["m1",7]]`) because `uniqueItems`
compares (member,score) tuples and not just the member name.
### Testing
This commit includes some changes inside Redis in order to verify the schemas (existing and future ones)
are indeed correct (i.e. describe the actual response of Redis).
To do that, we added a debugging feature to Redis that causes it to produce a log of all the commands
it executed and their replies.
For that, Redis needs to be compiled with `-DLOG_REQ_RES` and run with
`--req-res-logfile <file> --client-default-resp 3` (the testsuite already does that if you run it with
`--log-req-res --force-resp3`)
You should run the testsuite with the above args (and `--dont-clean`) in order to make Redis generate
`.reqres` files (same dir as the `stdout` files) which contain request-response pairs.
These files are later on processed by `./utils/req-res-log-validator.py` which does:
1. Goes over req-res files, generated by redis-servers, spawned by the testsuite (see logreqres.c)
2. For each request-response pair, it validates the response against the request's reply_schema
(obtained from the extended COMMAND DOCS)
3. In order to get good coverage of the Redis commands, and all their different replies, we chose to use
the existing redis test suite, rather than attempt to write a fuzzer.
#### Notes about RESP2
1. We will not be able to use the testing tool to verify RESP2 replies (we are ok with that, it's time to
accept RESP3 as the future RESP)
2. Since the majority of the test suite is using RESP2, and we want the server to reply with RESP3
so that we can validate it, we will need to know how to convert the actual reply to the one expected.
- number and boolean are always strings in RESP2 so the conversion is easy
- objects (maps) are always a flat array in RESP2
- others (nested array in RESP3's `ZRANGE` and others) will need some special per-command
handling (so the client will not be totally auto-generated)
Example for ZRANGE:
```
"reply_schema": {
"anyOf": [
{
"description": "A list of member elements",
"type": "array",
"uniqueItems": true,
"items": {
"type": "string"
}
},
{
"description": "Members and their scores. Returned in case `WITHSCORES` was used.",
"notes": "In RESP2 this is returned as a flat array",
"type": "array",
"uniqueItems": true,
"items": {
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": [
{
"description": "Member",
"type": "string"
},
{
"description": "Score",
"type": "number"
}
]
}
}
]
}
```
### Other changes
1. Some tests that behave differently depending on the RESP are now being tested for both RESP,
regardless of the special log-req-res mode ("Pub/Sub PING" for example)
2. Update the history field of CLIENT LIST
3. Added basic tests for commands that were not covered at all by the testsuite
### TODO
- [x] (maybe a different PR) add a "condition" field to anyOf/oneOf schemas that refers to args. e.g.
when `SET` return NULL, the condition is `arguments.get||arguments.condition`, for `OK` the condition
is `!arguments.get`, and for `string` the condition is `arguments.get` - https://github.com/redis/redis/issues/11896
- [x] (maybe a different PR) also run `runtest-cluster` in the req-res logging mode
- [x] add the new tests to GH actions (i.e. compile with `-DLOG_REQ_RES`, run the tests, and run the validator)
- [x] (maybe a different PR) figure out a way to warn about (sub)schemas that are uncovered by the output
of the tests - https://github.com/redis/redis/issues/11897
- [x] (probably a separate PR) add all missing schemas
- [x] check why "SDOWN is triggered by misconfigured instance replying with errors" fails with --log-req-res
- [x] move the response transformers to their own file (run both regular, cluster, and sentinel tests - need to
fight with the tcl including mechanism a bit)
- [x] issue: module API - https://github.com/redis/redis/issues/11898
- [x] (probably a separate PR): improve schemas: add `required` to `object`s - https://github.com/redis/redis/issues/11899
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Hanna Fadida <hanna.fadida@redislabs.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Shaya Potter <shaya@redislabs.com>
When no-touch mode is enabled, the client will not touch the LRU/LFU of the
keys it accesses, except when executing the `TOUCH` command.
This allows inspecting or modifying the key-space without affecting eviction.
Changes:
- A command `CLIENT NO-TOUCH ON|OFF` to switch on and off this mode.
- A client flag `#define CLIENT_NOTOUCH (1ULL<<45)`, which can be shown
with `CLIENT INFO`, by the letter "T" in the "flags" field.
- Clear `NO-TOUCH` flag in `clearClientConnectionState`, which is used by `RESET`
command and resetting temp clients used by modules.
- Also clear `NO-EVICT` flag in `clearClientConnectionState`, this might have been an
oversight, spotted by @madolson.
- A test using `DEBUG OBJECT` command to verify that LRU stat is not touched when
no-touch mode is on.
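For example, reads issued between the two calls below leave the keys' LRU/LFU untouched:
```
127.0.0.1:6379> CLIENT NO-TOUCH ON
OK
127.0.0.1:6379> GET somekey
"value"
127.0.0.1:6379> CLIENT NO-TOUCH OFF
OK
```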
Co-authored-by: chentianjie <chentianjie@alibaba-inc.com>
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
Co-authored-by: sundb <sundbcn@gmail.com>
A simple HELLO command to a password protected Redis server replies
with an error that suggests another command. That suggestion omits the protocol version
from the HELLO command arguments, which causes another error.
This PR adds the protocol version to the command suggestion.
We have cases where we print information (might be important but by
no means an error indicator) with the LL_WARNING level.
Demoting these to LL_NOTICE:
- oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
- User requested shutdown...
This is also true for cases where we encounter a rare but normal situation.
Demoting to LL_NOTICE. Examples:
- AOF was enabled but there is already another background operation. An AOF background was scheduled to start when possible.
- Connection with master lost.
Based on yoav-steinberg's https://github.com/redis/redis/pull/10650#issuecomment-1112280554
and yossigo's https://github.com/redis/redis/pull/10650#pullrequestreview-967677676
* Make it clear that current_client is the root client that was called by
an external connection
* add executing_client, which is the client that runs the current command
(can be a module or a script)
* Remove script_caller, which was used for commands that have CLIENT_SCRIPT
to get the client that called the script. In most cases that's the current_client,
and in others (when being called from a module), it could be an intermediate
client when we actually want the original one used by the external connection.
bugfixes:
* RM_Call with C flag should log ACL errors with the requested user rather than
the one used by the original client, this also solves a crash when RM_Call is used
with C flag from a detached thread safe context.
* addACLLogEntry would have logged info about the script_caller, but in case the
script was issued by a module command we actually want the current_client. the
exception is when RM_Call is called from a timer event, in which case we don't
have a current_client.
behavior changes:
* client side tracking for scripts now tracks the keys that are read by the script
instead of the keys that are declared by the caller for EVAL
other changes:
* Log both current_client and executing_client in the crash log.
* remove prepareLuaClient and resetLuaClient, being dead code that was forgotten.
* remove scriptTimeSnapshot and snapshot_time and instead add cmd_time_snapshot
that serves all commands and is reset only when execution nesting starts.
* remove code to propagate CLIENT_FORCE_REPL from the executed command
to the script caller since scripts aren't propagated anyway these days and anyway
this flag wouldn't have had an effect since CLIENT_PREVENT_PROP is added by scriptResetRun.
* fix a module GIL violation issue in afterSleep that was introduced in #10300 (unreleased)
If we have integer scores on the sorted set, we're not using the fastest way
to reply: calling `d2string`, which uses `double2ll` and `ll2string` when it can,
instead of `fpconv_dtoa`.
This results in some 50% performance improvement in certain cases of integer
scores, for both RESP2 and RESP3, and no apparent impact on double scores.
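A sketch of the fast path this enables (helper names as in util.c and fpconv):
```
long long lvalue;
size_t len;
if (double2ll(score, &lvalue))
    len = ll2string(buf, sizeof(buf), lvalue); /* integral score: integer path */
else
    len = fpconv_dtoa(score, buf);             /* general double conversion */
```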
Co-authored-by: Oran Agra <oran@redislabs.com>
Introduce an .is_local method to connection, and implement it for TCP/TLS/
Unix socket; also drop 'int islocalClient(client *c)'. Then we can
hide the detail in the specific connection types.
The upper layer tests whether a connection is local via the abstract method only.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
*TL;DR*
---------------------------------------
Following the discussion over issue [#7551](https://github.com/redis/redis/issues/7551),
we decided to refactor the client blocking code to eliminate some of the code duplications
and to rebuild the infrastructure better for future key blocking cases.
*In this PR*
---------------------------------------
1. reprocess the command once a client becomes unblocked on a key (instead of running
custom code for the unblocked path that's different from the one that would have run if
blocking wasn't needed)
2. eliminate some (now) irrelevant code for handling unblocking lists/zsets/streams etc...
3. modify some tests to intercept the error in cases of error on reprocess after unblock (see
details in the notes section below)
4. replace '$' on the client argv with the current stream id, since once we reprocess the stream
XREAD we need to read from the last msg and not wait for a new msg, in order to prevent an
endless block loop.
5. Added statistics to the info "Clients" section to report:
* `total_blocking_keys` - number of blocking keys
* `total_blocking_keys_on_nokey` - number of blocking keys which have at least 1 client
that would like to be unblocked when the key is deleted.
6. Avoid expiring unblocked key during unblock. Previously we used to lookup the unblocked key
which might have been expired during the lookup. Now we lookup the key using NOTOUCH and
NOEXPIRE to avoid deleting it at this point, so propagating commands in blocked.c is no longer needed.
7. deprecated command flags. We decided to remove the CMD_CALL_STATS and CMD_CALL_SLOWLOG
and make an explicit verification in the call() function in order to decide if stats update should take place.
This should simplify the logic and also mitigate existing issues: for example module calls which are
triggered as part of AOF loading might still report stats even though they are called during AOF loading.
*Behavior changes*
---------------------------------------------------
1. As this implementation prevents writing dedicated code for handling unblocked streams/lists/zsets,
and we now re-process the command once the client is unblocked, some errors will be reported differently.
The old implementation used to issue
``UNBLOCKED the stream key no longer exists``
in the following cases:
- The stream key has been deleted (i.e. calling DEL)
- The stream and group existed but the key type was changed by overriding it (i.e. with the set command)
- The key no longer exists after we swapdb with a db which does not contain this key
- After swapdb when the new db has this key but with a different type.
In the new implementation the reported errors will be the same as if the command was processed after the effect:
**NOGROUP** - in case the key no longer exists, or **WRONGTYPE** in case the key was overridden with a different type.
2. Reprocessing the command means that some checks will be reevaluated once the
client is unblocked.
For example, ACL rules might change since the command originally was executed and
will fail once the client is unblocked.
Another example is OOM condition checks which might enable the command to run and
block but fail the command reprocess once the client is unblocked.
3. One of the changes in this PR is that no command stats are being updated once the
command is blocked (all stats will be updated once the client is unblocked). This implies
that when we have many clients blocked, users will no longer be able to get that information
from the command stats. However the information can still be gathered from the client list.
**Client blocking**
---------------------------------------------------
the blocking on a key will still be triggered the same way as it is done today.
In order to block the current client on a list of keys, the call to
blockForKeys will still need to be made, and it will perform the same as it does today:
* add the client to the list of blocked clients on each key
* keep the key with a matching list node (position in the global blocking clients list for that key)
in the client private blocking key dict.
* flag the client with CLIENT_BLOCKED
* update blocking statistics
* register the client on the timeout table
**Key Unblock**
---------------------------------------------------
Unblocking a specific key will be triggered (same as today) by calling signalKeyAsReady.
The implementation in that part will stay the same as today - adding the key to the global readyList.
The reason to maintain the readyList (as opposed to iterating over all clients blocked on the specific key)
is to keep the signal operation as short as possible, since it is called during command processing.
The main change is that instead of going through a dedicated code path that operates the blocked command
we will just call processPendingCommandsAndResetClient.
**ClientUnblock (keys)**
---------------------------------------------------
1. Unblocking clients on keys will be triggered after the command is
processed and during beforeSleep
2. the general schema is, for each key *k* in the readyList:
```
For each client *c* which is blocked on *k*:
in case either:
1. *k* exists AND the *k* type matches the current client blocking type
OR
2. *k* exists and *c* is blocked on module command
OR
3. *k* does not exists and *c* was blocked with the flag
unblock_on_deleted_key
do:
1. remove the client from the list of clients blocked on this key
2. remove the blocking list node from the client blocking key dict
3. remove the client from the timeout list
4. queue the client on the unblocked_clients list
5. *NEW*: call processCommandAndResetClient(c);
```
*NOTE:* for module blocked clients we will still call the moduleUnblockClientByHandle
which will queue the client for processing in moduleUnblockedClients list.
**Process Unblocked clients**
---------------------------------------------------
The process of all unblocked clients is done in the beforeSleep and no change is planned
in that part.
The general schema will be:
For each client *c* in server.unblocked_clients:
* remove client from the server.unblocked_clients
* set back the client readHandler
* continue processing the pending command and input buffer.
*Some notes regarding the new implementation*
---------------------------------------------------
1. Although it was proposed, it is currently difficult to remove the
read handler from the client while it is blocked.
The reason is that a blocked client should be unblocked when it is
disconnected, or we might consume data into the void.
2. While this PR mainly keeps the current blocking logic as-is, there
might be some future additions to the infrastructure that we would
like to have:
- allow non-preemptive blocking of a client - sometimes we can think
of a new kind of blocking that is expected not to be preempted. For
example, let's imagine we hold some keys on disk and when a command
needs to process them it will block until the keys are uploaded.
In this case we will want the client to not disconnect or be
unblocked until the process is completed (remove the client read
handler, prevent client timeout, disable unblock via the debug command, etc...).
- allow generic blocking based on command declared keys - we might
want to add a hook before command processing to check if any of the
declared keys require the command to block. this way it would be
easier to add new kinds of key-based blocking mechanisms.
Co-authored-by: Oran Agra <oran@redislabs.com>
Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
From https://en.wikipedia.org/wiki/NaN#Display, apart from nan and -nan,
we can also get NAN and even nan(char-sequence) from libc.
In #11482, our conclusion was that we want to normalize it in
Redis to a single nan type, like we already normalized inf.
For this, we also reverted the assert_match part of the test
added in #11506, using assert_equal to validate the changes.
## Issue
During the client input/output buffer processing, the memory usage is
incrementally updated to keep track of clients going beyond a certain
threshold `maxmemory-clients` to be evicted. However, this additional
tracking activity leads to unnecessary CPU cycles wasted when no
client-eviction is required. It is applicable in two cases.
* `maxmemory-clients` is set to `0` which equates to no client eviction
(applicable to all clients)
* `CLIENT NO-EVICT` flag is set to `ON` which equates to a particular
client not applicable for eviction.
## Solution
* Disable client memory usage tracking during the read/write flow when
`maxmemory-clients` is set to `0` or `client no-evict` is `on`.
The memory usage is tracked only during the `clientCron` i.e. it gets
periodically updated.
* Cleanup the clients from the memory usage bucket when client eviction
is disabled.
* When the maxmemory-clients config is enabled or disabled at runtime,
we immediately update the memory usage buckets for all clients (tested:
scanning 80000 clients took some 20ms)
Benchmarks show that this can improve performance by about 5% in
certain situations.
Co-authored-by: Oran Agra <oran@redislabs.com>
As being discussed in #10981 we see a degradation in performance
between v6.2 and v7.0 of Redis on the EVAL command.
After profiling the current unstable branch we can see that we call the
expensive function evalCalcFunctionName twice.
The current "fix" is to basically avoid calling evalCalcFunctionName and
even dictFind(lua_scripts) twice for the same command.
Instead we cache the current script's dictEntry (for both Eval and Functions)
in the current client so we don't have to repeat these calls.
The exception would be when doing an EVAL on a new script that's not yet
in the script cache. In that case we will call evalCalcFunctionName (and even
evalExtractShebangFlags) twice.
Co-authored-by: Oran Agra <oran@redislabs.com>
Introduce socket `shutdown()` into connection type, and use it
on normal socket if a fork is active. This allows us to close
client connections when there are child processes sharing the
file descriptors.
Fixes #10077. The reason is that since the `fork()` child is holding
the file descriptors, the `close` in `unlinkClient -> connClose`
isn't sufficient. The client will not realize that the connection is
disconnected until the child process ends.
Let's try to be conservative and only use shutdown when a fork is active.
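Conceptually the change is the following (illustrative sketch; the wrapper name is assumed, not taken from the diff):
```
/* close() alone leaves the child's duplicate fd keeping the TCP connection
 * alive; shutdown() makes the peer observe the disconnect immediately. */
if (hasActiveChildProcess())
    connShutdown(c->conn);   /* assumed connection-type shutdown wrapper */
connClose(c->conn);
```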
Renamed from "Pause Clients" to "Pause Actions" since the mechanism can pause
several actions in redis, not just clients (e.g. eviction, expiration).
Previously each pause purpose (which has a timeout that's tracked separately from others purposes),
also implicitly dictated what it pauses (reads, writes, eviction, etc). Now it is explicit, and
the actions that are paused (bit flags) are defined separately from the purpose.
- Previously, using the pause-client feature also implicitly meant making the server static:
- Pause replica traffic
- Pause eviction processing
- Pause expire processing
Making the server static is also used for failover and shutdown. This PR internally rebrands the
pause-client API to become a pause-action API. It also simplifies the pauseClients structure
by replacing the pointers array with a static array.
The context of this PR is to add another trigger to pause-client, which will be activated in case
of OOM as a throttling mechanism ([see here](https://github.com/redis/redis/issues/10907)).
In that case we want to pause only clients and eviction actions.
* Fixes build warning when CACHE_LINE_SIZE is already defined
* Fixes wrong CACHE_LINE_SIZE on some FreeBSD systems where it could be set to 128 (e.g. on MIPS)
* Fixes wrong CACHE_LINE_SIZE on Apple M1 (use 128 instead of 64)
A wrong cache line size in that case can cause false sharing of array elements between threads, see #10892
All commands / use cases that heavily rely on double-to-string conversion
(e.g. taking a double-precision floating-point number like 1.5 and returning a string like "1.5")
could benefit from a performance boost by swapping snprintf(buf,len,"%.17g",value) for the
equivalent [fpconv_dtoa](https://github.com/night-shift/fpconv) or any other algorithm that ensures
100% coverage of conversion.
This is a well-studied topic, and projects like MongoDB, RedPanda, and PyTorch leverage libraries
(fmtlib) that use optimized double-to-string conversion underneath.
The positive impact can be substantial. This PR uses the grisu2 approach ( grisu explained on
https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf section 5 ).
test suite changes:
Despite being compatible, in some cases it produces a different result from printf, and some tests
had to be adjusted.
one case is that `%.17g` (which means %e or %f, whichever is shorter) chose to use `5000000000`
instead of 5e+9, which sounds like a bug?
In other cases, we changed TCL to compare numbers instead of strings to ignore minor rounding
issues (`expr 0.8 == 0.79999999999999999`)
Adds a number of user management/ACL validation/command execution functions to improve a
Redis module's ability to enforce ACLs correctly and easily.
* RM_SetContextUser - sets a RedisModuleUser on the context, which RM_Call will use to both
validate ACLs (if requested and set) as well as assign to the client so that scripts executed via
RM_Call will have proper ACL validation.
* RM_SetModuleUserACLString - Enables one to pass an entire ACL string, not just a single OP
and have it applied to the user
* RM_GetModuleUserACLString - returns a stringified version of the user's ACL (same format as dump
and list). Contains an optimization to cache the stringified version until the underlying ACL is modified.
* Slightly re-purpose the "C" flag to RM_Call from just being about ACL check before calling the
command, to actually running the command with the right user, so that it also affects commands
inside EVAL scripts. see #11231
There are many commits in this PR; the detailed changes are described
in each commit message.
### Main changes in this PR
* Fully abstract connection type, and hide connection type specified methods.
Ex, currently TLS class looks like:
```
static ConnectionType CT_TLS = {
/* connection type */
.get_type = connTLSGetType,
/* connection type initialize & finalize & configure */
.init = tlsInit,
.cleanup = tlsCleanup,
.configure = tlsConfigure,
/* ae & accept & listen & error & address handler */
.ae_handler = tlsEventHandler,
.accept_handler = tlsAcceptHandler,
.addr = connTLSAddr,
.listen = connTLSListen,
/* create/close connection */
.conn_create = connCreateTLS,
.conn_create_accepted = connCreateAcceptedTLS,
.close = connTLSClose,
/* connect & accept */
.connect = connTLSConnect,
.blocking_connect = connTLSBlockingConnect,
.accept = connTLSAccept,
/* IO */
.read = connTLSRead,
.write = connTLSWrite,
.writev = connTLSWritev,
.set_write_handler = connTLSSetWriteHandler,
.set_read_handler = connTLSSetReadHandler,
.get_last_error = connTLSGetLastError,
.sync_write = connTLSSyncWrite,
.sync_read = connTLSSyncRead,
.sync_readline = connTLSSyncReadLine,
/* pending data */
.has_pending_data = tlsHasPendingData,
.process_pending_data = tlsProcessPendingData,
/* TLS specified methods */
.get_peer_cert = connTLSGetPeerCert,
};
int RedisRegisterConnectionTypeTLS()
{
return connTypeRegister(&CT_TLS);
}
```
* Also abstract the Unix socket class. The connection framework now looks like:
```
uplayer
|
connection layer
/ | \
TCP Unix TLS
```
* It's possible to build TLS as a shared library (`make BUILD_TLS=module`).
Load the shared library (redis-tls.so) into Redis via the module subsystem,
and Redis starts listening on the TLS port. E.g.:
```
./src/redis-server --tls-port 6379 --port 0 \
--tls-cert-file ./tests/tls/redis.crt \
--tls-key-file ./tests/tls/redis.key \
--tls-ca-cert-file ./tests/tls/ca.crt \
--loadmodule src/redis-tls.so
```
### Interface changes
* RM_GetContextFlags supports a new flag: REDISMODULE_CTX_FLAGS_SERVER_STARTUP
* INFO SERVER includes a list of listeners:
```
listener0:name=tcp,bind=127.0.0.1,port=6380
listener1:name=unix,bind=/run/redis.sock
listener2:name=tls,bind=127.0.0.1,port=6379
```
### Other notes
* Fix wrong signature of RedisModuleDefragFunc, this could break
compilation of a module, but not the ABI
* Some reordering of initialization order in server.c:
* Move initialization of listeners to be after loading the modules
* Config TLS after initialization of listeners
* Init cluster after initialization of listeners
* Sentinel does not support the TLS module or any connection module
since it uses hiredis for outbound connections, so when TLS is built as
a module, sentinel lacks TLS support.
The previous implementation called `snprintf` twice, the second time to 'memcpy' the
output of the first, which could be a very large string.
The new implementation reserves space for the protocol header ahead
of the formatted double, and then prepends the length header in front of it.
Measured improvement for a simple ZADD of some 25%.
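A minimal sketch of the technique (illustrative only, not the exact Redis function): format the value after a reserved prefix area, then write the `$<len>\r\n` bulk header immediately before it, so no second large copy is needed:
```c
#include <stdio.h>
#include <string.h>

#define PREFIX_ROOM 32  /* enough room for "$<digits>\r\n" */

/* Assemble "$<len>\r\n<double>\r\n" inside dbuf (which must be large enough),
 * returning a pointer to the start of the reply and its total length. */
char *format_bulk_double(char *dbuf, size_t dbuf_len, double value, size_t *total_len) {
    /* 1. Format the double after the reserved prefix area. */
    int dlen = snprintf(dbuf + PREFIX_ROOM, dbuf_len - PREFIX_ROOM - 2, "%.17g", value);
    if (dlen <= 0 || (size_t)dlen >= dbuf_len - PREFIX_ROOM - 2) return NULL;
    /* 2. Append the trailing CRLF of the bulk string. */
    memcpy(dbuf + PREFIX_ROOM + dlen, "\r\n", 2);
    /* 3. Write the "$<len>\r\n" header right before the payload. */
    char header[PREFIX_ROOM];
    int hlen = snprintf(header, sizeof(header), "$%d\r\n", dlen);
    char *start = dbuf + PREFIX_ROOM - hlen;
    memcpy(start, header, hlen);
    *total_len = (size_t)hlen + dlen + 2;
    return start;
}
```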
Unix sockets use a different accept handler / listener creation than TCP;
to hide these differences and avoid hard-coding, use a new unix socket
connection type. Also move 'acceptUnixHandler' into unix.c.
The connection framework now looks like the following:
uplayer
|
connection layer
/ | \
TCP Unix TLS
It's possible to build Unix socket support as a shared library, and
load it dynamically. Because TCP and Unix socket don't require any
heavy dependencies or overheads, we build them into Redis statically.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Abstract accept handler for socket&TLS, and add helper function
'connAcceptHandler' to get accept handler by specified type.
Also move acceptTcpHandler into socket.c, and move
acceptTLSHandler into tls.c.
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Abstract a common interface for connection types, so Redis can hide the
implementation and the upper layer only calls the connection API without macros.
uplayer
|
connection layer
/ \
socket TLS
Currently, for both socket and TLS, all the methods of the connection type
are declared as static functions.
This makes it possible to build TLS (or even the socket type) as a shared library,
which Redis will load dynamically in a next step.
Also add helper function connTypeOfCluster() and
connTypeOfReplication() to simplify the code:
link->conn = server.tls_cluster ? connCreateTLS() : connCreateSocket();
-> link->conn = connCreate(connTypeOfCluster());
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
Originally, connPeerToString was designed to get the address info from a
socket only (for both TCP & TLS), and the API 'connPeerToString' was
oriented to operate on an FD, like:
int connPeerToString(connection *conn, char *ip, size_t ip_len, int *port) {
return anetFdToString(conn ? conn->fd : -1, ip, ip_len, port, FD_TO_PEER_NAME);
}
Introduce connAddr and implement .addr method for socket and TLS,
thus the API 'connAddr' and 'connFormatAddr' become oriented to a
connection like:
static inline int connAddr(connection *conn, char *ip, size_t ip_len, int *port, int remote) {
if (conn && conn->type->addr) {
return conn->type->addr(conn, ip, ip_len, port, remote);
}
return -1;
}
Also remove 'FD_TO_PEER_NAME' & 'FD_TO_SOCK_NAME'; use a boolean 'remote'
argument to get the local/remote address of a connection.
With these changes, it's possible to support other connection
types which do not use a socket (e.g., RDMA).
Thanks to Oran for suggestions!
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
replace use of:
sprintf --> snprintf
strcpy/strncpy --> redis_strlcpy
strcat/strncat --> redis_strlcat
**Why are we making this change?**
Much of the code uses unsafe variants or deprecated buffer handling
functions.
While most cases are probably not presenting any issue on the known paths,
programming errors and unterminated strings might lead to potential
buffer overflows which are not covered by tests.
**As part of this PR we change**
1. added an implementation of redis_strlcpy and redis_strlcat based on the strl implementation: https://linux.die.net/man/3/strl (a minimal sketch of the strlcpy contract is shown after this list)
2. change all occurrences of use of sprintf with use of snprintf
3. change occurrences of use of strcpy/strncpy with redis_strlcpy
4. change occurrences of use of strcat/strncat with redis_strlcat
5. change the behavior of ll2string/ull2string/ld2string so that they will always place a null
terminator ('\0') in the first index of the output buffer. this was done in order to make
the use of these functions safer in cases where the user does not check the output
returned by them (for example in rdbRemoveTempFile)
6. added a compiler directive to issue a deprecation error in case a use of
sprintf/strcpy/strcat is found during compilation, which will result in an error at compile time.
However keep in mind that since the deprecation attribute is not supported on all compilers,
this is expected to fail during push workflows.
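For reference, a minimal sketch of the strlcpy-style contract these helpers follow (an assumed shape, not necessarily the exact redis_strlcpy code): the destination is always NUL-terminated when its size is non-zero, and the return value is the length of the source, so truncation can be detected by `retval >= dsize`:
```c
#include <stddef.h>

/* Illustrative strlcpy-style copy: always NUL-terminates when dsize > 0 and
 * returns strlen(src), i.e. the length it tried to create. */
size_t my_strlcpy(char *dst, const char *src, size_t dsize) {
    size_t srclen = 0;
    while (src[srclen] != '\0') srclen++;          /* strlen(src) */
    if (dsize != 0) {
        size_t copy = (srclen >= dsize) ? dsize - 1 : srclen;
        for (size_t i = 0; i < copy; i++) dst[i] = src[i];
        dst[copy] = '\0';                          /* always terminate */
    }
    return srclen;                                 /* >= dsize means truncation */
}
```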
**NOTE:** this is only an initial milestone. We might also consider
using the *_s implementations provided by the C11 extensions (however not
yet widely supported). I would also suggest starting to
look at static code analyzers to track unsafe use cases.
For example LLVM clang checker supports security.insecureAPI.DeprecatedOrUnsafeBufferHandling
which can help locate unsafe function usage.
https://clang.llvm.org/docs/analyzer/checkers.html#security-insecureapi-deprecatedorunsafebufferhandling-c
The main reason not to onboard it at this stage is that the alternative
accepted by clang is to use the C11 extensions, which are not always
supported by stdlib.
Account for sharded pubsub channels' memory consumption in the client memory usage
computation, to accurately evict clients based on the threshold set for `maxmemory-clients`.
When calling CLIENT INFO/LIST, and in various debug prints, Redis is printing
the number of pubsub channels / patterns the client is subscribed to.
With the addition of sharded pubsub, it would be useful to print the number of
shard channels the client is subscribed to as well.
There are a lot of false-sharing cache misses at line 4013, inside the getIOPendingCount function.
The reason is that elements of the io_threads_pending array are accessed on the same cache line from different
threads, so it is better to represent it as an array of structures with fields aligned to the cache line size.
This change should improve performance (in particular, it affects the latency metric; we saw up to 3% improvement).
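A minimal sketch of the fix idea (names and the 64-byte constant are illustrative, not the exact Redis code): pad each per-thread counter to its own cache line so neighbouring array elements are never shared between threads:
```c
#include <stdalign.h>
#include <stdatomic.h>

#define MY_CACHE_LINE_SIZE 64   /* assumed; Redis derives CACHE_LINE_SIZE per platform */
#define IO_THREADS_MAX_NUM 128

/* One counter per cache line: the alignas() raises the struct's alignment to
 * the cache line size, so sizeof(aligned_counter) is a whole cache line. */
typedef struct {
    alignas(MY_CACHE_LINE_SIZE) atomic_ulong value;
} aligned_counter;

static aligned_counter io_threads_pending[IO_THREADS_MAX_NUM];

/* Illustrative accessor mirroring a getIOPendingCount-style read. */
static inline unsigned long get_io_pending_count(int i) {
    return atomic_load_explicit(&io_threads_pending[i].value, memory_order_relaxed);
}
```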
* Fix broken protocol when Redis can't persist to RDB (general commands, not
modules), an excessive newline. Regression of #10372 (7.0 RC3)
* Fix broken protocol when Redis can't persist to AOF (modules and
scripts), missing newline.
* Fix bug in OOM check of EVAL scripts called from RM_Call.
set the cached OOM state for scripts before executing module commands too,
so that it can serve scripts that are executed by modules.
i.e. in the past EVAL executed by RM_Call could have either falsely
failed or falsely succeeded because of a wrong cached OOM state flag.
* Fix bugs with RM_Yield:
1. SHUTDOWN should only accept the NOSAVE mode
2. Avoid eviction during yield command processing.
3. Avoid processing master client commands while yielding from another client
* Add two more checks to RM_Call script mode.
1. READONLY You can't write against a read only replica
2. MASTERDOWN Link with MASTER is down and `replica-serve-stale-data` is set to `no`
* Add new RM_Call flag to let redis automatically refuse `deny-oom` commands
while over the memory limit.
* Add tests to cover various errors from Scripts, Modules, Modules
calling scripts, and Modules calling commands in script mode.
Add tests:
* Looks like the MISCONF error was completely uncovered by the tests,
add tests for it, including from scripts, and modules
* Add tests for NOREPLICAS from scripts
* Add tests for the various errors in module RM_Call, including RM_Call that
calls EVAL, and RM_call in "eval mode". that includes:
NOREPLICAS, READONLY, MASTERDOWN, MISCONF
The amount of `server.stat_net_output_bytes/server.stat_net_input_bytes`
is actually the sum of replication flow and users' data flow.
It may cause confusions like this:
"Why does my server get such a large output_bytes while I am doing nothing? ".
After discussions and revisions, now here is the change about what this
PR brings (final version before merge):
- 2 server variables to count the network bytes during replication,
including fullsync and propagate bytes.
- `server.stat_net_repl_output_bytes`/`server.stat_net_repl_input_bytes`
- 3 info fields to print the input and output of repl bytes and instantaneous
value of total repl bytes.
- `total_net_repl_input_bytes` / `total_net_repl_output_bytes`
- `instantaneous_repl_total_kbps`
- 1 new API `rioCheckType()` to check the type of rio. So we can use this
to distinguish between diskless and diskbased replication
- 2 new counting items to keep network statistics consistent between master
and slave
- rdb portion during diskless replica. in `rdbLoadProgressCallback()`
- first line of the full sync payload. in `readSyncBulkPayload()`
Co-authored-by: Oran Agra <oran@redislabs.com>
* Till now, replicas that were unable to persist would still execute the commands
they got from the master; now they'll panic by default, and we add a new
`replica-ignore-disk-errors` config to change that.
* Till now, when a command failed on a replica or AOF-loading, it only logged a
warning and a stat, we add a new `propagation-error-behavior` config to allow
panicking in that state (may become the default one day)
Note that commands that fail on the replica can either indicate a bug that could
cause data inconsistency between the replica and the master, or they could be
in some cases (specifically in previous versions), a result of a command (e.g. EVAL)
that failed on the master, but still had to be propagated to fail on the replica as well.
Adds the `allow-cross-slot-keys` flag to Eval scripts and Functions to allow
scripts to access keys from multiple slots.
The default behavior is now that they are not allowed to do that (unlike before).
This is a breaking change for 7.0 release candidates (to be part of 7.0.0), but
not for previous redis releases since EVAL without shebang isn't doing this check.
Note that the check is done on both the keys declared by the EVAL / FCALL command
arguments, and also the ones used by the script when making a `redis.call`.
A note about the implementation, there seems to have been some confusion
about allowing access to non local keys. I thought I missed something in our
wider conversation, but Redis scripts do block access to non-local keys.
So the issue was just about cross slots being accessed.
1. Disk error and slave count checks didn't flag the transactions or count correctly in command stats (regression from #10372, 7.0 RC3)
2. RM_Call will reply the same way Redis does, in case of a non-existing command or an arity error
3. RM_WrongArity will consider the full command name
4. Use lowercase 'u' in "unknown subcommand" (to align with "unknown command")
Followup work of #10127
Durability of the database is a big and old topic; in this regard Redis uses AOF to
support it, and the `appendfsync=always` policy is the strictest level, guaranteeing
all data is both written and synced on disk before replying success to the client.
But some cases have been overlooked, and could lead to broken durability.
1. The clearest one is about threaded-io mode
we should also set the client's write handler with `ae_barrier` in
`handleClientsWithPendingWritesUsingThreads`, or the write handler would be
called after the read handler in the next event loop, which means the write command result
could be replied to the client before it is flushed to the AOF.
2. About blocked clients (mostly by modules)
in `beforeSleep()`, `handleClientsBlockedOnKeys()` should be called before
`flushAppendOnlyFile()`, in case the unblocked clients modify data that is not persisted
but still send a reply.
3. When handling `ProcessingEventsWhileBlocked`
normally it takes place when a lua/function/module times out, and we give users a chance
to kill the slow operation, but we should call `flushAppendOnlyFile()` before
`handleClientsWithPendingWrites()`, in case the other clients from the last event loop get
an acknowledgement before the data is persisted (see the sketch after this list).
for instance:
```
in the same event loop
client A executes set foo bar
client B executes eval "for var=1,10000000,1 do end" 0
```
after the script times out, client A will get `OK` but lose the data after a restart (redis killed during
the timeout) if we don't flush the write command to the AOF.
4. A more complex case about `ProcessingEventsWhileBlocked`
a lua timeout inside a transaction, for example
`MULTI; set foo bar; eval "for var=1,10000000,1 do end" 0; EXEC`, then the client will get the set
command's result before the whole transaction is done, which breaks atomicity too.
Fortunately, it's already fixed by #5428 (although that's not its original purpose, just a side
effect : )), but the module timeout should be fixed too.
Cases 1, 2 and 3 are fixed in this commit; the module issue in case 4 needs a followup PR.
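A minimal sketch of the ordering from cases 1-3, using illustrative stand-in functions (not the actual Redis code): flush the AOF before any pending replies are sent, so with appendfsync=always an acknowledged write is always on disk:
```c
#include <stdio.h>

static void handle_clients_blocked_on_keys(void)     { /* unblocked clients may write data */ }
static void flush_append_only_file(void)             { /* write + fsync the AOF */ }
static void handle_clients_with_pending_writes(void) { /* send replies back to clients */ }

/* Illustrative beforeSleep()-like ordering. */
void before_sleep_sketch(void) {
    handle_clients_blocked_on_keys();      /* case 2: produce the writes first... */
    flush_append_only_file();              /* ...then persist them...             */
    handle_clients_with_pending_writes();  /* case 3: ...and only then reply      */
}
```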
To remove `pending_querybuf`, the key point is reusing `querybuf`; it means the master client's `querybuf` is not only used to parse commands, but also as a proxy to sub-replicas.
1. add a new variable `repl_applied` for the master client to record how much data has been applied (propagated via `replicationFeedStreamFromMasterStream()`) but not yet trimmed from `querybuf`.
2. don't sdsrange `querybuf` in `commandProcessed()`; we trim it to `repl_applied` after the whole replication pipeline has been processed, to avoid fragmented `sdsrange` calls. And here are some scenarios where we cannot trim to `qb_pos`:
* we don't receive complete command from master
* master client blocked because of client pause
* IO threads operate read, master client flagged with CLIENT_PENDING_COMMAND
In these scenarios, `qb_pos` points to the part of the current command or the beginning of next command, and the current command is not applied yet, so the `repl_applied` is not equal to `qb_pos`.
Some other notes:
* Do not do big arg optimization on master client, since we can only sdsrange `querybuf` after data sent to replicas.
* Set `qb_pos` and `repl_applied` to 0 when `freeClient` in `replicationCacheMaster`.
* Rewrite `processPendingCommandsAndResetClient` to `processPendingCommandAndInputBuffer`, letting `processInputBuffer` be called successively after `processCommandAndResetClient`.
In a benchmark we noticed we spend a relatively long time updating the client
memory usage, leading to performance degradation.
Before #8687 this was performed in the client's cron and didn't affect performance.
But since introducing client eviction we need to perform this after filling the input
buffers and after processing commands. This also led me to write this code to be
thread safe and perform it in the I/O threads.
It turns out that the main performance issue here is related to atomic operations
being performed while updating the total clients memory usage stats used for client
eviction (`server.stat_clients_type_memory[]`). This update needed to be atomic
because `updateClientMemUsage()` was called from the IO threads.
In this commit I make sure to call `updateClientMemUsage()` only from the main thread.
In case of threaded IO I call it for each client during the "fan-in" phase of the read/write
operation. This also means I could chuck the `updateClientMemUsageBucket()` function
which was called during this phase and embed it into `updateClientMemUsage()`.
Profiling shows this makes `updateClientMemUsage()` (on my x86_64 linux) roughly x4 faster.
Deleting a stream while a client is blocked on XREADGROUP should unblock the client.
The idea is that a client blocked via XREADGROUP is different from
any other blocking type in the sense that it depends on the existence of both
the key and the group. Even if the key is deleted and then revived with XADD,
it won't help any clients blocked on XREADGROUP because the group no longer
exists, so they would fail with -NOGROUP anyway.
The conclusion is that it's better to unblock these clients (with error) upon
the deletion of the key, rather than waiting for the first XADD.
Other changes:
1. Slightly optimize all `serveClientsBlockedOn*` functions by checking `server.blocked_clients_by_type`
2. All `serveClientsBlockedOn*` functions now use a list iterator rather than looking at `listFirst`, relying
on `unblockClient` to delete the head of the list. Before this commit, only `serveClientsBlockedOnStreams`
used to work like that.
3. bugfix: CLIENT UNBLOCK ERROR should work even if the command doesn't have a timeout_callback
(only relevant to module commands)
This PR fixes 2 issues in Lua scripting:
* Server error reply statistics (some errors were counted twice).
* Error code and error strings returning from scripts (error code was missing / misplaced).
## Statistics
A Lua script is considered part of the user application, a sophisticated transaction,
so we want to count an error even if it is handled silently by the script, but when it is
propagated outwards from the script we don't wanna count it twice. On the other hand,
if the script decides to throw an error on its own (using `redis.error_reply`), we wanna
count that too.
Besides, since we count the `calls` in command statistics for the commands the script calls,
we should certainly also count `failed_calls`.
So when a simple `eval "return redis.call('set','x','y')" 0` fails, it should count the failed call
to both SET and EVAL, but the `errorstats` and `total_error_replies` should be counted only once.
The PR changes the error object that is raised on errors. Instead of raising a simple Lua
string, Redis will raise a Lua table in the following format:
```
{
err='<error message (including error code)>',
source='<User source file name>',
line='<line where the error happened>',
ignore_error_stats_update=true/false,
}
```
The `luaPushError` function was modified to construct the new error table as described above.
`luaRaiseError` was renamed to `luaError` and now simply calls `lua_error` to raise
the table on the top of the Lua stack as the error object.
The reason for the rename is that since its functionality changed, in case some Redis branch / fork uses it,
it's better to have a compilation error than a bug.
The `source` and `line` fields are enriched by the error handler (if possible), and
`ignore_error_stats_update` is optional; if it's not present then the default value is `false`.
If `ignore_error_stats_update` is true, the error will not be counted on the error stats.
When parsing a Redis call reply, each error is translated to a Lua table in the format described
above, and the `ignore_error_stats_update` field is set to `true` so we will not count errors
twice (we already counted this error when we invoked the command).
The changes in this PR might have been considered a breaking change for users of the
Lua `pcall` function. Before, the error was a string and now it's a table. To keep backward
compatibility the PR overrides the `pcall` implementation, extracts the error message from
the error table and returns it.
Example of the error stats update:
```
127.0.0.1:6379> lpush l 1
(integer) 2
127.0.0.1:6379> eval "return redis.call('get', 'l')" 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value. script: e471b73f1ef44774987ab00bdf51f21fd9f7974a, on @user_script:1.
127.0.0.1:6379> info Errorstats
# Errorstats
errorstat_WRONGTYPE:count=1
127.0.0.1:6379> info commandstats
# Commandstats
cmdstat_eval:calls=1,usec=341,usec_per_call=341.00,rejected_calls=0,failed_calls=1
cmdstat_info:calls=1,usec=35,usec_per_call=35.00,rejected_calls=0,failed_calls=0
cmdstat_lpush:calls=1,usec=14,usec_per_call=14.00,rejected_calls=0,failed_calls=0
cmdstat_get:calls=1,usec=10,usec_per_call=10.00,rejected_calls=0,failed_calls=1
```
## error message
We can now construct the error message (sent as a reply to the user) from the error table,
so this solves issues where the error message was malformed and the error code appeared
in the middle of the error message:
```diff
127.0.0.1:6379> eval "return redis.call('set','x','y')" 0
-(error) ERR Error running script (call to 71e6319f97b0fe8bdfa1c5df3ce4489946dda479): @user_script:1: OOM command not allowed when used memory > 'maxmemory'.
+(error) OOM command not allowed when used memory > 'maxmemory' @user_script:1. Error running script (call to 71e6319f97b0fe8bdfa1c5df3ce4489946dda479)
```
```diff
127.0.0.1:6379> eval "redis.call('get', 'l')" 0
-(error) ERR Error running script (call to f_8a705cfb9fb09515bfe57ca2bd84a5caee2cbbd1): @user_script:1: WRONGTYPE Operation against a key holding the wrong kind of value
+(error) WRONGTYPE Operation against a key holding the wrong kind of value script: 8a705cfb9fb09515bfe57ca2bd84a5caee2cbbd1, on @user_script:1.
```
Notice that `redis.pcall` was not changed:
```
127.0.0.1:6379> eval "return redis.pcall('get', 'l')" 0
(error) WRONGTYPE Operation against a key holding the wrong kind of value
```
## other notes
Notice that some commands (like GEOADD) change the cmd variable on the client stats, so we
cannot count on it to update the command stats. In order to be able to update those stats correctly
we needed to promote the `realcmd` variable onto the client struct.
Tests were added and modified to verify the changes.
Related PR's: #10279, #10218, #10278, #10309
Co-authored-by: Oran Agra <oran@redislabs.com>
Avoid deferred array reply on genericZrangebyrankCommand() when consumer type is client.
I.e. any ZRANGE / ZREVRANGE (when rank is used).
This was a performance regression introduced in #7844 (v 6.2) mainly affecting pipelined workloads.
Co-authored-by: Oran Agra <oran@redislabs.com>
Avoid sprintf/ll2string on setDeferredAggregateLen()/addReplyLongLongWithPrefix() when we can use shared objects.
In some pipelined workloads this achieves about 10% improvement.
Co-authored-by: Oran Agra <oran@redislabs.com>
There are scenarios where it results in many small objects in the reply list,
such as commands heavily using deferred array replies (`addReplyDeferredLen`).
E.g. what the COMMAND command and CLUSTER SLOTS used to do (see #10056, #7123),
but also in the case of a transaction or a pipeline of commands that each use just one deferred array reply.
With the current code we used to run multiple loops along with multiple calls to `write()` to send data back to
the peer, but by means of `writev()` we can gather those scattered
objects in the reply list, include the static reply buffer as well, and send it all with one system call,
which ought to achieve higher performance.
In the case of TLS, we simply check and concatenate the buffers into one big buffer and send it
away with one call to `connTLSWrite()`; if the total size of all buffers exceeds `NET_MAX_WRITES_PER_EVENT`,
we invoke `connTLSWrite()` multiple times to avoid massive memory copies.
Note that aside from reducing system calls, this change will also reduce the number of
small TCP packets sent.
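A minimal sketch of the gathering idea (illustrative types and names, not the Redis implementation): build one iovec out of the static reply buffer plus the scattered reply-list blocks and hand it to writev(), instead of issuing one write() per block:
```c
#include <stddef.h>
#include <sys/types.h>
#include <sys/uio.h>

#define MAX_IOV 16   /* arbitrary cap for the sketch */

struct reply_block { char *buf; size_t used; struct reply_block *next; };

ssize_t write_reply(int fd, char *static_buf, size_t static_used, struct reply_block *head) {
    struct iovec iov[MAX_IOV];
    int cnt = 0;
    if (static_used > 0) {                       /* static reply buffer first */
        iov[cnt].iov_base = static_buf;
        iov[cnt].iov_len = static_used;
        cnt++;
    }
    for (struct reply_block *b = head; b && cnt < MAX_IOV; b = b->next) {
        if (b->used == 0) continue;              /* skip empty blocks */
        iov[cnt].iov_base = b->buf;
        iov[cnt].iov_len = b->used;
        cnt++;
    }
    if (cnt == 0) return 0;
    return writev(fd, iov, cnt);                 /* one syscall for everything */
}
```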
In the current implementation a simple idle client which serves no traffic still
uses ~17KB of memory. This is mainly due to a fixed-size reply buffer,
currently set to 16KB.
We have encountered some cases in which the server operates in low memory environments.
In such cases a user who wishes to create large connection pools to support potential burst periods
will exhaust a large amount of memory to maintain connected idle clients.
Some users may choose to "sacrifice" performance in order to save memory.
This commit introduces a dynamic mechanism to shrink and expand the client reply buffer based on
a periodically observed peak.
The algorithm works as follows (a minimal sketch of the resize rule follows this list):
1. each time a client reply buffer has been fully written, the last recorded peak is updated:
new peak = MAX( last peak, current written size)
2. during the clients cron we check for each client if the last observed peak was:
a. matching the current buffer size - in which case we expand (resize) the buffer size by 100%
b. less than half the buffer size - in which case we shrink the buffer size by 50%
3. In any case we will **not** resize the buffer in case:
a. the current buffer peak is less than the current buffer usable size and higher than 1/2 the
current buffer usable size
b. the value of (current buffer usable size/2) is less than 1KiB
c. the value of (current buffer usable size*2) is larger than 16KiB
4. the peak value is reset to the current buffer position once every **5** seconds. we maintain a new
field in the client structure (buf_peak_last_reset_time) which is used to keep track of how long has
passed since the last buffer peak reset.
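A minimal sketch of the resize rule described above (illustrative names and constants, not the exact Redis code): expand by 100% when the peak filled the whole buffer, shrink by 50% when the peak stayed below half of it, staying within the [1KiB, 16KiB] bounds:
```c
#include <stddef.h>

#define MIN_REPLY_BUF (1024)        /* 1KiB lower bound  */
#define MAX_REPLY_BUF (16 * 1024)   /* 16KiB upper bound */

size_t next_reply_buf_size(size_t usable_size, size_t observed_peak) {
    if (observed_peak >= usable_size && usable_size * 2 <= MAX_REPLY_BUF)
        return usable_size * 2;     /* peak hit the ceiling: expand */
    if (observed_peak < usable_size / 2 && usable_size / 2 >= MIN_REPLY_BUF)
        return usable_size / 2;     /* peak well below the buffer: shrink */
    return usable_size;             /* otherwise keep the current size */
}
```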
### **Interface changes:**
**CLIENT LIST** - now contains 2 new extra fields:
rbs= < the current size in bytes of the client reply buffer >
rbp=< the current value in bytes of the last observed buffer peak position >
**INFO STATS** - now contains 2 new statistics:
reply_buffer_shrinks = < total number of buffer shrinks performed >
reply_buffer_expands = < total number of buffer expands performed >
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Yoav Steinberg <yoav@redislabs.com>
This is a followup work for #10278, and a discussion about #10279
The changes:
- fix failed_calls in command stats for blocked clients that got an error,
including CLIENT UNBLOCK, and a module replying with an error from a thread.
- fix latency stats for XREADGROUP that failed with -NOGROUP
Theory behind which errors should be counted:
- error stats represents errors returned to the user, so an error handled by a
module should not be counted.
- total error counter should be the same.
- command stats represents execution of commands (even with RM_Call, and if
they fail or get rejected it counts these calls in commandstats, so it should
also count failed_calls)
Some thoughts about Scripts:
for scripts it could be different since they're part of user code, not the infra (not an extension to redis).
we certainly want commandstats to contain all calls and errors.
a simple script is like a multi-exec transaction, so an error inside it should be counted in error stats.
a script that replies with an error to the user (using redis.error_reply) should also be counted in error stats.
but then the problem is that a plain `return redis.call("SET")` should not be counted twice (once for the SET
and once for EVAL),
so that's something left to be resolved in #10279
This PR handles several aspects
1. Calls to RM_ReplyWithError from thread safe contexts don't violate thread safety.
2. Errors returning from RM_Call to the module aren't counted in the statistics (they
might be handled silently by the module)
3. When a module propagates a reply it got from RM_Call to its client, then the error
statistics are counted.
This is done by:
1. When appending an error reply to the output buffer, we avoid updating the global
error statistics, instead we cache that error in a deferred list in the client struct.
2. When creating a RedisModuleCallReply object, the deferred error list is moved from
the client into that object.
3. when a module calls RM_ReplyWithCallReply we copy the deferred replies to the dest
client (if that's a real client, then that's when the error statistics are updated to the server)
Note about RM_ReplyWithCallReply: if the original reply had an array with errors, and the module
replied with just a portion of the original reply, and not the entire reply, the errors are currently not
propagated and the error stats will not get updated.
Fix #10180
Summary of changes:
1. Rename `redisCommand->name` to `redisCommand->declared_name`, it is a
const char * for native commands and SDS for module commands.
2. Store the [sub]command fullname in `redisCommand->fullname` (sds).
3. List subcommands in `ACL CAT`
4. List subcommands in `COMMAND LIST`
5. `moduleUnregisterCommands` now will also free the module subcommands.
6. RM_GetCurrentCommandName returns full command name
Other changes:
1. Add `addReplyErrorArity` and `addReplyErrorExpireTime`
2. Remove `getFullCommandName` function that now is useless.
3. Some cleanups about `fullname` since now it is SDS.
4. Delete `populateSingleCommand` function from server.h that is useless.
5. Added tests to cover this change.
6. Add some module unload tests and fix the leaks
7. Make error messages uniform, make sure they always contain the full command
name and that it's quoted.
8. Fix some typos
see the history in #9504, fixes #10124
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: guybe7 <guy.benoish@redislabs.com>
Some modules might perform a long-running logic in different stages of Redis lifetime, for example:
* command execution
* RDB loading
* thread safe context
During this long-running logic Redis is not responsive.
This PR offers
1. An API to process events while a busy command is running (`RM_Yield`)
2. A new flag (`ALLOW_BUSY`) to mark the commands that should be handled during busy
jobs which can also be used by modules (`allow-busy`)
3. In slow commands and thread safe contexts, this flag will start rejecting commands with -BUSY only
after `busy-reply-threshold`
4. During loading (`rdb_load` callback), it'll process events right away (not wait for `busy-reply-threshold`),
but either way, the processing is throttled to the server hz rate.
5. Allow modules to Yield to redis background tasks, but not to client commands
* rename `script-time-limit` to `busy-reply-threshold` (an alias to the pre-7.0 `lua-time-limit`)
Co-authored-by: Oran Agra <oran@redislabs.com>
1. enable diskless replication by default
2. add a new config named repl-diskless-sync-max-replicas that enables
replication to start before the full repl-diskless-sync-delay was
reached.
3. put replica online sooner on the master (see below)
4. test suite uses repl-diskless-sync-delay of 0 to be faster
5. a few tests that use multiple replica on a pre-populated master, are
now using the new repl-diskless-sync-max-replicas
6. fix possible timing issues in a few cluster tests (see below)
put replica online sooner on the master
----------------------------------------------------
there were two tests that failed because they needed the master to
realize that the replica is online, but the test code was actually only
waiting for the replica to realize it's online, and in diskless it could
have been before the master realized it.
changes include two things:
1. the tests wait on the right thing
2. fixing issues in the master that put the replica online in two steps.
the master used to put the replica as online in 2 steps. the first
step was to mark it as online, and the second step was to enable the
write event (only after getting an ACK), but in fact the first step didn't
contain some of the tasks to put it online (like updating the good slave
count, and sending the module event). this meant that if a test was
waiting to see that the replica is online from the point of view of the
master, and then confirm that the module got an event, or that the
master has enough good replicas, it could fail due to timing issues.
so now the full effect of putting the replica online, happens at once,
and only the part about enabling the writes is delayed till the ACK.
fix cluster tests
--------------------
I added some code to wait for the replica to sync and avoid race
conditions.
later I realized the sentinel and cluster tests were using the original 5
second delay, so I changed it to 0.
this means the other changes are probably not needed, but i suppose
they're still better (avoiding race conditions)
Added a pool for temporary client objects to reuse in module operations.
By reusing temporary clients, we are avoiding expensive createClient()/freeClient()
calls and improving performance of RM_BlockClient() and RM_GetThreadSafeContext() calls.
This commit contains two optimizations:
1 - RM_BlockClient() and RM_GetThreadSafeContext() calls create temporary clients and they are freed in
RM_UnblockClient() and RM_FreeThreadSafeContext() calls respectively. Creating/destroying a client object
takes quite some time. To avoid that, added a pool of temporary clients. The pool expands when more clients are needed.
Also, added a cron function to shrink the pool and free unused clients after some time. The pool starts with zero
clients in it. It does not have a max size and can grow unbounded as we need it. We will keep a minimum of 8
temporary clients in the pool once created, keeping a small number of clients to avoid client allocation costs
if temporary clients are required again after some idle period.
2 - After unblocking a client (RM_UnblockClient()), one byte is written to a pipe to wake up the Redis main thread.
If there are many clients that will be unblocked, each operation requires one write() call, which is quite expensive.
Changed the code to avoid subsequent calls if possible.
There are a few more places that need temporary client objects (e.g RM_Call()). These are now using the same
temporary client pool to make things more centralized.
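A minimal sketch of such a pool (illustrative structure and names, not the Redis code): reuse freed objects instead of allocating a new one each time, and let a periodic task shrink the pool back down to a small minimum:
```c
#include <stdlib.h>

#define POOL_MIN_KEEP 8

typedef struct pooled_client {
    struct pooled_client *next;   /* ...real client fields would live here... */
} pooled_client;

static pooled_client *pool_head = NULL;
static int pool_size = 0;

pooled_client *pool_get(void) {
    if (pool_head) {                               /* reuse an idle object */
        pooled_client *c = pool_head;
        pool_head = c->next;
        pool_size--;
        return c;
    }
    return calloc(1, sizeof(pooled_client));       /* pool empty: allocate */
}

void pool_put(pooled_client *c) {                  /* return instead of free() */
    c->next = pool_head;
    pool_head = c;
    pool_size++;
}

void pool_cron_shrink(void) {                      /* called periodically */
    while (pool_size > POOL_MIN_KEEP) {
        pooled_client *c = pool_head;
        pool_head = c->next;
        pool_size--;
        free(c);
    }
}
```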
The following steps will crash redis-server:
```
[root]# cat crash
PSYNC replicationid -1
SLOWLOG GET
GET key
[root]# nc 127.0.0.1 6379 < crash
```
This one follows #10020, and the crash was reported in #10076.
Other changes about the output info:
1. Print the command with its full name by using `getFullCommandName`; now it will print the right
subcommand name, like `slowlog|get`.
2. Print the full client info by using `catClientInfoString`, the info is also valuable:
The following error commands will crash redis-server:
```
> get|
Error: Server closed the connection
> get|set
Error: Server closed the connection
> get|other
```
The reason is that in #9504, we use `lookupCommandBySds` to find the
container command, and it splits the command (argv[0]) on `|`.
If we input something like `get|other`, after the split `get`
becomes a valid command name, passes the `ERR unknown command`
check, and finally crashes in `addReplySubcommandSyntaxError`.
In this case we do not need to split the command name on `|`;
we just look in the commands dict to find whether `argv[0]` is a
container command.
So this commit introduces a new function called `isContainerCommandBySds`
that returns true if a command name is a container command.
Also with the old code, there is an incorrect error message:
```
> config|get set
(error) ERR Unknown subcommand or wrong number of arguments for 'set'. Try CONFIG|GET HELP.
```
The crash was reported in #10070.
This commit implements sharded pubsub based on shard channels.
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Since #9166 we have an assertion here to make sure replica clients don't write anything to their buffer.
But in reality a replica may attempt to write data to its buffer simply by sending a command on the replication link.
This command in most cases will be rejected since #8868, but it'll still generate an error.
Actually the only valid command to send on a replication link is `REPLCONF ACK`, which generates no response.
We want to keep the design so that replicas can send commands, but we need to avoid any situation where we start
putting data in their response buffers, especially since they aren't used anymore. This PR makes sure to disconnect
a rogue client which generated a write on the replication link that caused something to be written to the response buffer.
To recreate the bug this fixes, simply connect via telnet to a redis server, write sync\r\n, wait for the payload to
be written, and then write any command (valid or invalid), such as ping\r\n, on the telnet connection. It'll crash the server.
To avoid data loss, this commit adds a grace period for lagging replicas to
catch up the replication offset.
Done:
* Wait for replicas when shutdown is triggered by SIGTERM and SIGINT.
* Wait for replicas when shutdown is triggered by the SHUTDOWN command. A new
blocked client type BLOCKED_SHUTDOWN is introduced, allowing multiple clients
to call SHUTDOWN in parallel.
Note that they don't expect a response unless an error happens and shutdown is aborted.
* Log warning for each replica lagging behind when finishing shutdown.
* CLIENT_PAUSE_WRITE while waiting for replicas.
* Configurable grace period 'shutdown-timeout' in seconds (default 10).
* New flags for the SHUTDOWN command:
- NOW disables the grace period for lagging replicas.
- FORCE ignores errors writing the RDB or AOF files which would normally
prevent a shutdown.
- ABORT cancels ongoing shutdown. Can't be combined with other flags.
* New field in the output of the INFO command: 'shutdown_in_milliseconds'. The
value is the remaining maximum time to wait for lagging replicas before
finishing the shutdown. This field is present in the Server section **only**
during shutdown.
Not directly related:
* When shutting down, if there is an AOF saving child, it is killed **even** if AOF
is disabled. This can happen if BGREWRITEAOF is used when AOF is off.
* Client pause now has end time and type (WRITE or ALL) per purpose. The
different pause purposes are *CLIENT PAUSE command*, *failover* and
*shutdown*. If clients are unpaused for one purpose, it doesn't affect client
pause for other purposes. For example, the CLIENT UNPAUSE command doesn't
affect client pause initiated by the failover or shutdown procedures. A completed
failover or a failed shutdown doesn't unpause clients paused by the CLIENT
PAUSE command.
Notes:
* DEBUG RESTART doesn't wait for replicas.
* We already have a warning logged when a replica disconnects. This means that
if any replica connection is lost during the shutdown, it is either logged as
disconnected or as lagging at the time of exit.
Co-authored-by: Oran Agra <oran@redislabs.com>
This is needed in order to ease the deployment of functions for ephemeral cases, where a user
needs to spin up a server with functions pre-loaded.
#### Details:
* Added `--functions-rdb` option to _redis-cli_.
* Functions-only RDB via `REPLCONF rdb-filter-only functions`. This is a placeholder for a space
separated inclusion filter for the RDB. In the future it can be `REPLCONF rdb-filter-only
"functions db:3 key-pattern:user*"`, and a complementing `rdb-filter-exclude` `REPLCONF`
can also be added.
* Handle "slave requirements" specification to RDB saving code so we can use the same RDB
when different slaves express the same requirements (like functions-only) and not share the
RDB when their requirements differ. This is currently just a flags `int`, but can be extended to
a more complex structure with various filter fields.
* make sure to support filters only in diskless replication mode (not to override the persistence file),
we do that by forcing diskless (even if disabled by config)
other changes:
* some refactoring in rdb.c (extract portion of a big function to a sub-function)
* rdb_key_save_delay used in AOFRW too
* sendChildInfo takes the number of updated keys (incremental, rather than absolute)
Co-authored-by: Oran Agra <oran@redislabs.com>
The mess:
Some parts use alsoPropagate for late propagation, others use an immediate one (propagate()),
causing edge cases, ugly/hacky code, and a tendency for bugs.
The basic idea is that all commands are propagated via alsoPropagate (i.e. added to a list) and the
top-most call() is responsible for going over that list and actually propagating them (and wrapping
them in MULTI/EXEC if there's more than one command). This is done in the new function,
propagatePendingCommands.
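A minimal sketch of that flow (illustrative stand-ins, not the actual Redis code): nested call()s only append to a pending list, and the top-most caller flushes it, wrapping it in MULTI/EXEC only when more than one command is pending:
```c
#include <stdio.h>

typedef struct pending_cmd { const char *repr; /* argv, dbid, flags... */ } pending_cmd;

#define MAX_PENDING 128
static pending_cmd *pending[MAX_PENDING];
static int pending_count = 0;
static int call_depth = 0;          /* incremented/decremented around each call() */

/* Stand-in for the code that actually writes to the AOF / replica stream. */
static void propagate_now(const char *repr) { printf("propagate: %s\n", repr); }

void also_propagate_cmd(pending_cmd *c) {       /* what nested call()s do */
    if (pending_count < MAX_PENDING) pending[pending_count++] = c;
}

void propagate_pending_commands(void) {         /* run by the top-most call() only */
    if (call_depth > 0 || pending_count == 0) return;
    int wrap = pending_count > 1;               /* wrap in MULTI/EXEC only if >1 */
    if (wrap) propagate_now("MULTI");
    for (int i = 0; i < pending_count; i++) propagate_now(pending[i]->repr);
    if (wrap) propagate_now("EXEC");
    pending_count = 0;
}
```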
Callers to propagatePendingCommands:
1. top-most call() (we want all nested call()s to add to the also_propagate array and just the top-most
one to propagate them) - via `afterCommand`
2. handleClientsBlockedOnKeys: it is out of call() context and it may propagate stuff - via `afterCommand`.
3. handleClientsBlockedOnKeys edge case: if the looked-up key is already expired, we will propagate the
expire but will not unblock any client so `afterCommand` isn't called. in that case, we have to propagate
the deletion explicitly.
4. cron stuff: active-expire and eviction may also propagate stuff
5. modules: the module API allows to propagate stuff from just about anywhere (timers, keyspace notifications,
threads). I could have tried to catch all the out-of-call-context places but it seemed easier to handle it in one
place: when we free the context. in the spirit of what was done in call(), only the top-most freeing of a module
context may cause propagation.
6. modules: when using a thread-safe ctx it's not clear when/if the ctx will be freed. we do know that the module
must lock the GIL before calling RM_Replicate/RM_Call so we propagate the pending commands when
releasing the GIL.
A "known limitation", which were actually a bug, was fixed because of this commit (see propagate.tcl):
When using a mix of RM_Call with `!` and RM_Replicate, the command would propagate out-of-order:
first all the commands from RM_Call, and then the ones from RM_Replicate
Another thing worth mentioning is that if, in the past, a client would issue a MULTI/EXEC with just one
write command the server would blindly propagate the MULTI/EXEC too, even though it's redundant.
not anymore.
This commit renames propagate() to propagateNow() in order to cause conflicts in pending PRs.
propagatePendingCommands is the only caller of propagateNow, which is now a static, internal helper function.
Optimizations:
1. alsoPropagate will not add stuff to also_propagate if there's no AOF and no replicas
2. alsoPropagate reallocs also_propagate exponentially, to save calls to memmove
Bugfixes:
1. CONFIG SET can create evictions, sending notifications which can cause to dirty++ with modules.
we need to prevent it from propagating to AOF/replicas
2. We need to set current_client in RM_Call. buggy scenario:
- CONFIG SET maxmemory, eviction notifications, module hook calls RM_Call
- assertion in lookupKey crashes, because current_client has CONFIG SET, which isn't CMD_WRITE
3. minor: in eviction, call propagateDeletion after notification, like active-expire and all commands
(we always send a notification before propagating the command)
## background
Till now CONFIG SET was blocked during loading.
(In the not so distant past, GET was disallowed too)
We recently (not released yet) added an async-loading mode, see #9323,
and during that time it'll serve CONFIG SET and any other command.
And now we realized (#9770) that some configs, and commands are dangerous
during async-loading.
## changes
* Allow most CONFIG SET during loading (both on async-loading and normal loading)
* Allow CONFIG REWRITE and CONFIG RESETSTAT during loading
* Block a few config during loading (`appendonly`, `repl-diskless-load`, and `dir`)
* Block a few commands during loading (list below)
## the blocked commands:
* SAVE - obviously we don't wanna start a foreground save during loading 8-)
* BGSAVE - we don't mind to schedule one, but we don't wanna fork now
* BGREWRITEAOF - we don't mind to schedule one, but we don't wanna fork now
* MODULE - we obviously don't wanna unload a module during replication / rdb loading
(MODULE HELP and MODULE LIST are not blocked)
* SYNC / PSYNC - we're in the middle of RDB loading from master, must not allow sync
requests now.
* REPLICAOF / SLAVEOF - we're in the middle of replicating, maybe it makes sense to let
the user abort it, but they couldn't do that so far; i don't wanna take any risk of bugs due to odd state.
* CLUSTER - only allow [HELP, SLOTS, NODES, INFO, MYID, LINKS, KEYSLOT, COUNTKEYSINSLOT,
GETKEYSINSLOT, RESET, REPLICAS, COUNT_FAILURE_REPORTS], for others, preserve the status quo
## other fixes
* processEventsWhileBlocked had an issue when being nested; this could happen with a busy script
during async loading (new), but also with a busy script during AOF loading (old). this led to a crash in
the scenario described in #6988
Block sensitive configs and commands by default.
* `enable-protected-configs` - block modification of configs with the new `PROTECTED_CONFIG` flag.
Currently we add this flag to `dbfilename`, and `dir` configs,
all of which are non-mutable configs that can set a file redis will write to.
* `enable-debug-command` - block the `DEBUG` command
* `enable-module-command` - block the `MODULE` command
These have a default value set to `no`, so that these features are not
exposed by default to client connections, and can only be set by modifying the config file.
Users can change each of these to either `yes` (allow all access), or `local` (allow access from
local TCP connections and unix domain connections)
Note that this is a **breaking change** (specifically the part about MODULE command being disabled by default).
I.e. we don't consider DEBUG command being blocked as an issue (people shouldn't have been using it),
and the few configs we protected are unlikely to have been set at runtime anyway.
On the other hand, it's reasonable to assume that users who use modules load them from the config file anyway.
Note that's the whole point of this PR, for redis to be more secure by default and reduce the attack surface on
innocent users, so secure defaults will necessarily mean a breaking change.
Delete the hardcoded command table and replace it with an auto-generated table, based
on a JSON file that describes the commands (each command must have a JSON file).
These JSON files are the SSOT of everything there is to know about Redis commands,
and it is reflected fully in COMMAND INFO.
These JSON files are used to generate commands.c (using a python script), which is then
committed to the repo and compiled.
The purpose is:
* Clients and proxies will be able to get much more info from redis, instead of relying on hard coded logic.
* drop the dependency between Redis-user and the commands.json in redis-doc.
* delete help.h and have redis-cli learn everything it needs to know just by issuing COMMAND (will be
done in a separate PR)
* redis.io should stop using commands.json and learn everything from Redis (ultimately one of the release
artifacts should be a large JSON, containing all the information about all of the commands, which will be
generated from COMMAND's reply)
* the byproduct of this is:
* module commands will be able to provide that info and possibly be more of a first-class citizens
* in theory, one may be able to generate a redis client library for a strictly typed language, by using this info.
### Interface changes
#### COMMAND INFO's reply change (and arg-less COMMAND)
Before this commit the reply at index 7 contained the key-specs list
and reply at index 8 contained the sub-commands list (Both unreleased).
Now, reply at index 7 is a map of:
- summary - short command description
- since - debut version
- group - command group
- complexity - complexity string
- doc-flags - flags used for documentation (e.g. "deprecated")
- deprecated-since - if deprecated, from which version?
- replaced-by - if deprecated, which command replaced it?
- history - a list of (version, what-changed) tuples
- hints - a list of strings, meant to provide hints for clients/proxies. see https://github.com/redis/redis/issues/9876
- arguments - an array of arguments. each element is a map, with the possibility of nesting (sub-arguments)
- key-specs - an array of keys specs (already in unstable, just changed location)
- subcommands - a list of sub-commands (already in unstable, just changed location)
- reply-schema - will be added in the future (see https://github.com/redis/redis/issues/9845)
more details on these can be found in https://github.com/redis/redis-doc/pull/1697
only the first three fields are mandatory
#### API changes (unreleased API obviously)
now they take RedisModuleCommand opaque pointer instead of looking up the command by name
- RM_CreateSubcommand
- RM_AddCommandKeySpec
- RM_SetCommandKeySpecBeginSearchIndex
- RM_SetCommandKeySpecBeginSearchKeyword
- RM_SetCommandKeySpecFindKeysRange
- RM_SetCommandKeySpecFindKeysKeynum
Currently, we did not add module API to provide additional information about their commands because
we couldn't agree on how the API should look like, see https://github.com/redis/redis/issues/9944.
### Somehow related changes
1. Literals should be in uppercase while placeholders in lowercase. Now all the GEO* commands
will be documented with M|KM|FT|MI and can take both lowercase and uppercase
### Unrelated changes
1. Bugfix: no_mandatory_keys was absent in COMMAND's reply
2. expose CMD_MODULE as "module" via COMMAND
3. have a dedicated uint64 for ACL categories (instead of having them in the same uint64 as command flags)
Co-authored-by: Itamar Haber <itamar@garantiadata.com>
The script unit is a new unit located in script.c.
Its purpose is to provide an API for functions (and eval)
to interact with Redis. Interaction includes mostly
executing commands, but also functionalities like calling
Redis back on long scripts or checking if the script was killed.
The interaction is done using a scriptRunCtx object that
needs to be created by the user and initialized using scriptPrepareForRun.
Detailed list of functionalities exposed by the unit:
1. Calling commands (including all the validation checks such as
acl, cluster, read only run, ...)
2. Set Resp
3. Set Replication method (AOF/REPLICATION/NONE)
4. Call Redis back on long-running scripts to allow Redis to reply
to clients and perform script kill
The commit introduces the new unit and uses it in the eval commands to
interact with Redis.
The following variables were renamed:
1. lua_caller -> script_caller
2. lua_time_limit -> script_time_limit
3. lua_timedout -> script_timedout
4. lua_oom -> script_oom
5. lua_disable_deny_script -> script_disable_deny_script
6. in_eval -> in_script
The following variables were moved to lctx under eval.c
1. lua
2. lua_client
3. lua_cur_script
4. lua_scripts
5. lua_scripts_mem
6. lua_replicate_commands
7. lua_write_dirty
8. lua_random_dirty
9. lua_multi_emitted
10. lua_repl
11. lua_kill
12. lua_time_start
13. lua_time_snapshot
This commit is at low risk of introducing any issues as it
is just moving variables around and not changing any logic.
* Fix CLIENT KILL killing all clients when given id 0 or with skipme
CLIENT KILL with an ID argument should only kill the client with the provided ID. In the old code,
CLIENT KILL with id 0 would kill all the connected clients.
Co-authored-by: Ofir Luzon <ofirluzon@gmail.com>
Some people complain that QUIT is missing from the help/command table,
not appearing in the COMMAND command, command stats, ACL, etc.,
and instead there's a hack in processCommand with a comment that looks outdated.
Note that it is [documented](https://redis.io/commands/quit).
At the same time, HOST: and POST are in the command table although these are not real commands.
They would appear in the COMMAND command, and even in commandstats.
Other changes:
1. Initialize the static logged_time variable in securityWarningCommand
2. Add a `no-auth` flag to RESET so it can always be executed.
With dynamically growing argc (#9528), it is necessary to initialize
argv_len. Normally createClient() handles that, but in the case of a
module shared_client, this needs to be done explicitly.
This also addresses an issue with rewriteClientCommandArgument() which
doesn't properly handle the case where the new element extends beyond
argc but not beyond argv_len.
- Added sanitizer support. `address`, `undefined` and `thread` sanitizers are available.
- To build Redis with desired sanitizer : `make SANITIZER=undefined`
- There were some sanitizer findings, cleaned up codebase
- Added tests with address and undefined behavior sanitizers to daily CI.
- Added tests with address sanitizer to the per-PR CI (smoke out mem leaks sooner).
Basically, there are three types of issues :
**1- Unaligned load/store** : Most probably, this issue may cause a crash on a platform that
does not support unaligned access. Redis does unaligned access only on supported platforms.
**2- Signed integer overflow.** Although signed overflow issues can be problematic from time to time
and change how the compiler generates code, the current findings are mostly about signed shift or simple
addition overflow. For most platforms Redis can be compiled for, this wouldn't cause any issue
as far as I can tell (checked generated code on godbolt.org).
**3- Minor leak** (redis-cli), **use-after-free** (just before calling exit());
UB means nothing is guaranteed and it's risky to reason about program behavior, but I don't think any
of the fixes here are worth backporting. As sanitizers are now part of the CI, preventing new issues
will be the real benefit.
## Background
For a Redis master, each replica uses one copy of the replication buffer; that is a big waste of memory,
more replicas mean more waste, and allocating/freeing memory for every reply list also costs a lot.
If we set client-output-buffer-limit small and write traffic is heavy, the master may disconnect from
replicas and can't finish synchronization with a replica. If we set client-output-buffer-limit big, the
master may go OOM when there are many replicas that each keep a lot of memory.
Because the replication buffers of different replica clients are the same, one simple idea is that
all replicas use only one replication buffer, which effectively saves memory.
Since replication backlog content is the same as replicas' output buffer, now we
can discard replication backlog memory and use global shared replication buffer
to implement replication backlog mechanism.
## Implementation
I create one global "replication buffer" which contains content of replication stream.
The structure of "replication buffer" is similar to the reply list that exists in every client.
But the node of list is `replBufBlock`, which has `id, repl_offset, refcount` fields.
```c
/* Replication buffer blocks is the list of replBufBlock.
*
* +--------------+ +--------------+ +--------------+
* | refcount = 1 | ... | refcount = 0 | ... | refcount = 2 |
* +--------------+ +--------------+ +--------------+
* | / \
* | / \
* | / \
* Repl Backlog Replia_A Replia_B
*
* Each replica or replication backlog increments only the refcount of the
* 'ref_repl_buf_node' which it points to. So when replica walks to the next
* node, it should first increase the next node's refcount, and when we trim
* the replication buffer nodes, we remove node always from the head node which
* refcount is 0. If the refcount of the head node is not 0, we must stop
* trimming and never iterate the next node. */
/* Similar with 'clientReplyBlock', it is used for shared buffers between
* all replica clients and replication backlog. */
typedef struct replBufBlock {
int refcount; /* Number of replicas or repl backlog using. */
long long id; /* The unique incremental number. */
long long repl_offset; /* Start replication offset of the block. */
size_t size, used;
char buf[];
} replBufBlock;
```
So now when we feed the replication stream into the replication backlog and all replicas, we only need
to feed the stream into the replication buffer (`feedReplicationBuffer`). In this function, we set some fields of
the replication backlog and replicas to references of the global replication buffer blocks. We also
need to check the replicas' output buffer limit, freeing them if they exceed `client-output-buffer-limit`, and trim
the replication backlog if it exceeds `repl-backlog-size`.
When sending a reply to replicas, we also need to iterate the replication buffer blocks and send their
content; when one block is fully sent to a replica, we decrease the current node's refcount and
increase the next node's refcount, and then free the blocks whose refcount is 0 from the
head of the replication buffer blocks.
Since we now use a linked list to manage the replication backlog, it may cost much time to iterate
all the linked list nodes to find the corresponding replication buffer node. So we create a rax tree to
store some nodes as an index, but to avoid the rax tree occupying too much memory, we record
one node per 64 for the index.
Currently, to make partial resynchronization possible as much as we can, we always keep the replication
backlog as the last reference to the replication buffer blocks, so the backlog size may exceed our setting
when slow replicas reference a vast number of replication buffer blocks; this doesn't increase
memory usage since they share the replication buffer. To avoid freezing the server while freeing unreferenced
replication buffer blocks when we need to trim the backlog for exceeding its size setting,
we trim the backlog incrementally (free 64 blocks per call for now), and make it faster in
`beforeSleep` (free 640 blocks).
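A minimal sketch of the incremental trimming, assuming the per-call limits mentioned above (the helper and constants are illustrative):
```c
#define TRIM_BLOCKS_PER_CALL     64   /* normal paths */
#define TRIM_BLOCKS_BEFORE_SLEEP 640  /* beforeSleep() catches up faster */

/* Free at most 'max_blocks' unreferenced blocks from the head of the shared
 * block list, so trimming a huge backlog never freezes the server in one go. */
static void incrementalTrimReplBuffer(list *repl_buffer_blocks, size_t max_blocks) {
    size_t trimmed = 0;
    listNode *head;
    while (trimmed < max_blocks && (head = listFirst(repl_buffer_blocks)) != NULL) {
        replBufBlock *o = listNodeValue(head);
        if (o->refcount != 0) break;   /* still referenced by the backlog or a replica */
        listDelNode(repl_buffer_blocks, head);
        trimmed++;
    }
}
```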
### Other changes
- `mem_total_replication_buffers`: we add this field to the INFO command; it is the total
memory used by replication buffers.
- `mem_clients_slaves`: now, even if a replica is slow to replicate and its output buffer memory
is not 0, this field may still be 0, since the replication backlog and the replicas share one global
replication buffer. Only when the replication buffer memory exceeds the configured repl backlog size do
we consider the excess as the replicas' memory; otherwise we regard the replication buffer memory as
the consumption of the repl backlog (see the sketch after this list).
- Key eviction
Since all replicas and the replication backlog share the global replication buffer, we consider only the
part exceeding the backlog size as the extra, separate consumption of the replicas.
Because we trim the backlog incrementally in the background, the backlog size may exceed our
setting when slow replicas that reference vast replication buffer blocks disconnect.
To avoid a massive eviction loop, we don't count this delayed-freed replication backlog into
used memory even if there are no replicas, i.e. we still regard this memory as the replicas' memory.
- `client-output-buffer-limit` check for replica clients
It doesn't make sense to set the replica clients' output buffer limit lower than the repl-backlog-size
config (partial sync would succeed and then the replica would get disconnected). Such a configuration is
ignored (the value of repl-backlog-size is used instead). This doesn't have memory consumption
implications since the replica client shares the backlog buffers' memory.
- Drop the replication backlog after loading data if needed
We always create a replication backlog if the server is a master; we need it because we put DELs into
it when loading expired keys from the RDB. But if the RDB doesn't have replication info, or there is no RDB
at all, it is not possible to support partial resynchronization anyway, so we drop the backlog to avoid
its extra memory.
- Multi IO threads
Since all replicas and the replication backlog use the global replication buffer, if I/O threads are enabled
we must let the main thread handle sending the output buffer to all replicas, to guarantee thread-safe
data access. Before this change, other IO threads could handle sending the output buffers of all replicas.
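For the `mem_clients_slaves` item above, the accounting boils down to attributing only the excess over the configured backlog size to the replicas; a minimal sketch (names illustrative):
```c
/* Only the part of the shared replication buffer that exceeds the configured
 * repl-backlog-size is reported as replica memory; the rest is considered
 * backlog memory. */
static size_t memClientsSlaves(size_t repl_buffer_mem, size_t repl_backlog_size) {
    return repl_buffer_mem > repl_backlog_size ?
           repl_buffer_mem - repl_backlog_size : 0;
}
```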
## Other optimizations
This solution also resolves some other problems:
- When replicas are disconnected from the master for exceeding the output buffer limit, releasing the
replicas' output buffer could freeze the server if a big `client-output-buffer-limit` was set for replicas;
now it no longer causes freezing.
- This implementation mitigates the reply-list copy cost (which also freezes the server) when one replica
has a huge reply buffer and another replica copies that buffer for full synchronization; now we just copy
the reference info, which is very lightweight.
- If we set the replication backlog size big, copying the replication backlog into a replica's
output buffer could also take much time. This commit eliminates that problem.
- Resizing the replication backlog no longer empties its current content.
## Intro
The purpose is to allow having different flags/ACL categories for
subcommands (Example: CONFIG GET is ok-loading but CONFIG SET isn't)
We create a small command table for every command that has subcommands
and each subcommand has its own flags, etc. (same as a "regular" command)
This commit also unites the Redis and the Sentinel command tables
## Affected commands
CONFIG
Used to have "admin ok-loading ok-stale no-script"
Changes:
1. Dropped "ok-loading" in all except GET (this doesn't change behavior since
there were checks in the code doing that)
XINFO
Used to have "read-only random"
Changes:
1. Dropped "random" in all except CONSUMERS
XGROUP
Used to have "write use-memory"
Changes:
1. Dropped "use-memory" in all except CREATE and CREATECONSUMER
COMMAND
No changes.
MEMORY
Used to have "random read-only"
Changes:
1. Dropped "random" in PURGE and USAGE
ACL
Used to have "admin no-script ok-loading ok-stale"
Changes:
1. Dropped "admin" in WHOAMI, GENPASS, and CAT
LATENCY
No changes.
MODULE
No changes.
SLOWLOG
Used to have "admin random ok-loading ok-stale"
Changes:
1. Dropped "random" in RESET
OBJECT
Used to have "read-only random"
Changes:
1. Dropped "random" in ENCODING and REFCOUNT
SCRIPT
Used to have "may-replicate no-script"
Changes:
1. Dropped "may-replicate" in all except FLUSH and LOAD
CLIENT
Used to have "admin no-script random ok-loading ok-stale"
Changes:
1. Dropped "random" in all except INFO and LIST
2. Dropped "admin" in ID, TRACKING, CACHING, GETREDIR, INFO, SETNAME, GETNAME, and REPLY
STRALGO
No changes.
PUBSUB
No changes.
CLUSTER
Changes:
1. Dropped "admin in countkeysinslots, getkeysinslot, info, nodes, keyslot, myid, and slots
SENTINEL
No changes.
(note that DEBUG also fits, but we decided not to convert it since it's for
debugging and anyway undocumented)
## New sub-command
This commit adds another element to the per-command output of COMMAND,
describing the list of subcommands, if any (in the same structure as "regular" commands)
Also, it adds a new subcommand:
```
COMMAND LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|PATTERN <pattern>)]
```
which returns the list of all commands (unless filtered), excluding subcommands.
## Module API
A new module API, RM_CreateSubcommand, was added in order to allow
module writers to define subcommands.
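For illustration, here is a minimal sketch of how a module might use the new API (signatures as I understand them from the 7.0 module API; check redismodule.h for the exact prototypes, and all module/command names below are hypothetical):
```c
#include "redismodule.h"

/* Subcommand handler with its own flags, independent of the container command. */
static int MyConfGetCmd(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    return RedisModule_ReplyWithSimpleString(ctx, "OK");
}

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "mymod", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* The container command itself has no handler of its own. */
    if (RedisModule_CreateCommand(ctx, "mymod.conf", NULL, "", 0, 0, 0) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    RedisModuleCommand *parent = RedisModule_GetCommand(ctx, "mymod.conf");
    /* Attach a subcommand with its own flags. */
    if (RedisModule_CreateSubcommand(parent, "get", MyConfGetCmd, "readonly", 0, 0, 0)
        == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    return REDISMODULE_OK;
}
```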
## ACL changes:
1. Now, that each subcommand is actually a command, each has its own ACL id.
2. The old mechanism of allowed_subcommands is redundant
(blocking/allowing a subcommand is the same as blocking/allowing a regular command),
but we had to keep it, to support the widespread usage of allowed_subcommands
to block commands with certain args, that aren't subcommands (e.g. "-select +select|0").
3. I have renamed allowed_subcommands to allowed_firstargs to emphasize the difference.
4. Because subcommands are commands in ACL too, you can now use "-" to block subcommands
(e.g. "+client -client|kill"), which wasn't possible in the past.
5. It is also possible to use the allowed_firstargs mechanism together with subcommands.
For example: `+config -config|set +config|set|loglevel` will block all CONFIG SET except
for setting the log level.
6. All of the ACL changes above required some amount of refactoring.
## Misc
1. There are two approaches: either each subcommand has its own function, or all
subcommands use the same function and determine what to do according to argv[0].
For now, I took the former approach only with CONFIG and COMMAND,
while the other commands use the latter approach (for a smaller blamelog diff).
2. Deleted memoryGetKeys: it is no longer needed because MEMORY USAGE now uses the "range" key spec.
3. Bugfix: GETNAME was missing from CLIENT's help message.
4. Sentinel and Redis now use the same table, with the same function pointers.
Some commands have a different implementation in Sentinel, so we redirect
them (these are ROLE, PUBLISH, and INFO).
5. Command stats now show the stats per subcommand (e.g. instead of stats just
for "config" you will have stats for "config|set", "config|get", etc.)
6. It is now possible to use COMMAND directly on subcommands:
COMMAND INFO CONFIG|GET (the pipe syntax was inspired by ACL, and
can be used in the functions lookupCommandBySds and lookupCommandByCString)
7. STRALGO is now a container command (has "help")
## Breaking changes:
1. Command stats now show the stats per subcommand (see (5) above)
This change sets a low limit on multi-bulk and bulk length in the
protocol for unauthenticated connections, so that they can't easily
cause redis to allocate massive amounts of memory by sending just a few
characters on the network.
The new limits are 10 arguments of 16KB each (instead of 1M arguments of 512MB each).
Remove the hard-coded multi-bulk limit (it was 1,048,576); the new limit is INT_MAX.
When a client sends an m-bulk that's larger than 1024, we initially only allocate
the argv array for 1024 arguments, and gradually grow that allocation as arguments
are received.
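A minimal sketch of the gradual growth, with hypothetical field names (`argv_alloc` here stands in for whatever the client struct actually uses):
```c
#define INITIAL_ARGV_ALLOC 1024

/* Illustrative: append a parsed argument, doubling the argv allocation on
 * demand instead of trusting the announced multi-bulk count up front. */
static void appendParsedArg(client *c, robj *arg) {
    if (c->argv == NULL) {
        c->argv_alloc = (c->multibulklen < INITIAL_ARGV_ALLOC) ?
                         c->multibulklen : INITIAL_ARGV_ALLOC;
        c->argv = zmalloc(sizeof(robj *) * c->argv_alloc);
    } else if (c->argc == c->argv_alloc) {
        c->argv_alloc *= 2;                                   /* grow on demand */
        c->argv = zrealloc(c->argv, sizeof(robj *) * c->argv_alloc);
    }
    c->argv[c->argc++] = arg;
}
```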
Fixing CI test issues introduced in #8687
- valgrind warnings in readQueryFromClient when the client was freed by processInputBuffer
- adding DEBUG pause-cron so tests are not time dependent
- skipping a test that depends on socket buffers / events and is not compatible with TLS
- making sure the client got subscribed by not using a deferring client
### Description
A mechanism for disconnecting clients when the total memory used by all connected clients goes above a
configured limit. This prevents eviction or OOM caused by the memory accumulated across all clients.
It's a complementary mechanism to `client-output-buffer-limit`: instead of looking at a single client
and only its output buffers, it takes into account all memory used by all clients.
#### Design
The general design is as follows:
* We track the memory usage of each client, taking into account all memory used by the
client (query buffer, output buffer, parsed arguments, etc...). This is kept up to date
after reading from the socket, after processing commands and after writing to the socket.
* Based on the used memory we sort all clients into buckets. Each bucket contains all
clients using up to twice the memory of the clients in the bucket below it, for example
clients using up to 1MB, up to 2MB, up to 4MB, ...
* Before processing a command and before sleep we check if we're over the configured
limit. If we are, we start disconnecting clients from the larger buckets downwards until we're
under the limit.
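A minimal sketch of the bucket mapping, assuming the 64KB-to-4GB doubling grouping listed further below (the constants and helper name are illustrative):
```c
#include <stddef.h>

#define BUCKET_MIN_LOG 16   /* 64KB is the upper bound of the first bucket */
#define BUCKET_COUNT   18   /* <64K, 64K..128K, ..., 2G..4G, >=4G */

static int clientMemUsageBucketIdx(size_t mem) {
    size_t upper = (size_t)1 << BUCKET_MIN_LOG;
    int idx = 0;
    while (idx < BUCKET_COUNT - 1 && mem >= upper) {
        upper <<= 1;   /* each bucket holds clients using up to twice the previous bound */
        idx++;
    }
    return idx;
}
```
Eviction can then walk the buckets from the highest index downwards until the total is back under `maxmemory-clients`.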
#### Config
`maxmemory-clients` is the maximum memory all clients are allowed to consume; above this threshold
we disconnect clients.
This config can either be set to 0 (meaning no limit), a size in bytes (possibly with MB/GB
suffix), or as a percentage of `maxmemory` by using the `%` suffix (e.g. setting it to `10%`
would mean 10% of `maxmemory`).
#### Important code changes
* During the development I encountered yet more situations where our io-threads access
global vars, and needed to fix them. I also had to handle keeping the clients sorted into the
memory buckets (which are global) while their memory usage changes in the io-thread.
To achieve this I decided to simplify how we check if we're in an io-thread and make it
much more explicit. I removed the `CLIENT_PENDING_READ` flag used for checking
if the client is in an io-thread (it wasn't used for anything else) and just use the global
`io_threads_op` variable the same way to check during writes.
* I optimized the cleanup of the client from the `clients_pending_read` list on client freeing.
We now store a pointer in the `client` struct to this list so we don't need to search in it
(`pending_read_list_node`).
* Added `evicted_clients` stat to `INFO` command.
* Added a `CLIENT NO-EVICT ON|OFF` subcommand to exclude a specific client from the
client eviction mechanism. Added a corresponding 'e' flag in the client info string.
* Added `multi-mem` field in the client info string to show how much memory is used up
by buffered multi commands.
* Client `tot-mem` now accounts for buffered multi-commands, pubsub patterns and
channels (partially), tracking prefixes (partially).
* CLIENT_CLOSE_ASAP flag is now handled in a new `beforeNextClient()` function so
clients will be disconnected between processing different clients and not only before sleep.
This new function can be used in the future for work we want to do outside the command
processing loop but don't want to wait for all clients to be processed before we get to it.
Specifically I wanted to handle output-buffer-limit related closing before we process client
eviction in case the two race with each other.
* Added a `DEBUG CLIENT-EVICTION` command to print out info about the client eviction
buckets.
* Each client now holds a pointer to the client eviction memory usage bucket it belongs to
and listNode to itself in that bucket for quick removal.
* Global `io_threads_op` variable now can contain a `IO_THREADS_OP_IDLE` value
indicating no io-threading is currently being executed.
* In order to track the memory used by each client in real time we can't rely on updating
these stats in `clientsCron()` alone anymore. So now I call `updateClientMemUsage()`
(used to be `clientsCronTrackClientsMemUsage()`) after command processing, after
writing data to pubsub clients, after writing the output buffer and after reading from the
socket (and maybe other places too). The function is written to be fast.
* Clients are evicted if needed (with appropriate log line) in `beforeSleep()` and before
processing a command (before performing oom-checks and key-eviction).
* All clients' memory usage buckets are grouped as follows:
  * All clients using less than 64KB.
  * 64KB..128KB
  * 128KB..256KB
  * ...
  * 2GB..4GB
  * All clients using 4GB and up.
* Added client-eviction.tcl with a bunch of tests for the new mechanism.
* Extended maxmemory.tcl to test the interaction between maxmemory and
maxmemory-clients settings.
* Added an option to flag a numeric configuration variable as a "percent"; this means that
if we encounter a '%' after the number in the config file (or in a config set command) we
consider it valid. Such a number is stored internally as a negative value. This way an
integer value can be interpreted as either a percent (negative) or an absolute value (positive).
This is useful, for example, when some numeric configuration can optionally be set to a percentage
of something else.
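A minimal sketch of that convention (the helper name is illustrative): percentages are stored negated, so one integer covers both meanings.
```c
/* A positive value is absolute bytes; a negative value means a percentage of
 * maxmemory, e.g. -10 encodes "10%". */
static unsigned long long effectiveMaxmemoryClients(long long configured,
                                                    unsigned long long maxmemory) {
    if (configured >= 0) return (unsigned long long)configured;
    return maxmemory / 100 * (unsigned long long)(-configured);
}
```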
Co-authored-by: Oran Agra <oran@redislabs.com>
A write request may be paused unexpectedly because `server.client_pause_end_time` is stale.
**Recreate this:**
redis-cli -p 6379
127.0.0.1:6379> client pause 500000000 write
OK
127.0.0.1:6379> client unpause
OK
127.0.0.1:6379> client pause 10000 write
OK
127.0.0.1:6379> set key value
The write request `set key value` is paused until the timeout of 500000000 milliseconds is reached.
**Fix:**
Reset `server.client_pause_end_time` to 0 in `unpauseClients`.
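A minimal sketch of the fix (structure simplified; the real function does more than this, and field/constant names are as I recall them from the tree):
```c
void unpauseClients(void) {
    server.client_pause_type = CLIENT_PAUSE_OFF;
    server.client_pause_end_time = 0;   /* the fix: drop the stale deadline */
}
```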
When a replica is paused, it does not apply any commands, even commands that come from its master. If we feed these non-applied commands to the replication stream, the replication offset becomes wrong, and data would be lost after a failover (since the replica's `master_repl_offset` grows but the commands are not applied).
To fix it, here are the changes:
* Don't update replica's replication offset or propagate commands to sub-replicas when it's paused in `commandProcessed`.
* Show `slave_read_repl_offset` in info reply.
* Add an assert to make sure the master client is never blocked unless paused or blocked by a module (some modules may use blocking to do background (parallel) processing and forward the original blocking module command to the replica; it's not a good way but it can work, so the assert excludes modules for now, but someday in the future all modules should rewrite blocking commands to propagate like `BLPOP` does).