After #13816, we added defragmentation support for moduleDict, which
significantly increased global data size.
As a result, the defragmentation tests for non-global data were
affected.
Now we move the creation of the global data to just before the global data
test, so that it does not interfere with the other tests.
Also fixed the simple key test failure caused by forgetting to reset the stats.
# Problem
Some Redis modules need to call `CONFIG GET/SET` commands. The server may be
run with `rename-command CONFIG ""` (or something similar), which leaves
the module unable to access the config.
# Solution
Added new API functions for use by modules:
```
RedisModuleConfigIterator* RedisModule_GetConfigIterator(RedisModuleCtx *ctx, const char *pattern);
void RedisModule_ReleaseConfigIterator(RedisModuleCtx *ctx, RedisModuleConfigIterator *iter);
const char *RedisModule_ConfigIteratorNext(RedisModuleConfigIterator *iter);
int RedisModule_GetConfigType(const char *name, RedisModuleConfigType *res);
int RedisModule_GetBoolConfig(RedisModuleCtx *ctx, const char *name, int *res);
int RedisModule_GetConfig(RedisModuleCtx *ctx, const char *name, RedisModuleString **res);
int RedisModule_GetEnumConfig(RedisModuleCtx *ctx, const char *name, RedisModuleString **res);
int RedisModule_GetNumericConfig(RedisModuleCtx *ctx, const char *name, long long *res);
int RedisModule_SetBoolConfig(RedisModuleCtx *ctx, const char *name, int value, RedisModuleString **err);
int RedisModule_SetConfig(RedisModuleCtx *ctx, const char *name, RedisModuleString *value, RedisModuleString **err);
int RedisModule_SetEnumConfig(RedisModuleCtx *ctx, const char *name, RedisModuleString *value, RedisModuleString **err);
int RedisModule_SetNumericConfig(RedisModuleCtx *ctx, const char *name, long long value, RedisModuleString **err);
```
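A rough usage sketch of the iterator and getters (based only on the signatures listed above; the pattern, log text, and return-code handling are illustrative and not taken from the PR):
```c
RedisModuleConfigIterator *iter = RedisModule_GetConfigIterator(ctx, "maxmemory*");
const char *name;
while ((name = RedisModule_ConfigIteratorNext(iter)) != NULL) {
    RedisModuleString *value;
    if (RedisModule_GetConfig(ctx, name, &value) == REDISMODULE_OK) {
        RedisModule_Log(ctx, "notice", "%s = %s", name,
                        RedisModule_StringPtrLen(value, NULL));
        RedisModule_FreeString(ctx, value);
    }
}
RedisModule_ReleaseConfigIterator(ctx, iter);
```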
## Implementation
The work is mostly done inside `config.c`, as I didn't want to expose the
config dict outside of it. That means each of these module functions has
a corresponding function in `config.c` that actually does the job. For
example, `RedisModule_SetEnumConfig` calls `moduleSetEnumConfig`, which is
implemented in `config.c`.
## Notes
Also refactored the `configSetCommand` and `restoreBackupConfig` functions,
for the following reasons:
- The code and logic in `configSetCommand` are now much clearer. The only
caveat is the removal, in favour of code clarity, of an optimization that
skipped apply functions that had already run.
- Both functions needlessly separated the logic for module configs and
normal configs, whereas no such separation is needed. This also had the
side effect of removing some allocations.
- `restoreBackupConfig` now has a clearer interface and can be reused with
ease. One of the places I reused it is the individual
`moduleSet*Config` functions, each of which needs the restoration
functionality, but for a single config only.
## Future
Additionally, a couple of considerations were made for potentially
extending the API in the future:
- If need be, an API for atomically setting multiple config values can be
added - `RedisModule_SetConfigsTransactionStart/End` or similar that can
be put around `RedisModule_Set*Config` calls.
- If performance is an issue, an API
`RedisModule_GetConfigIteratorNextWithTypehint` or similar may be added
in order not to incur the additional cost of calling
`RedisModule_GetConfigType`.
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
## Summary and detailed design for new stream commands
## XDELEX
### Syntax
```
XDELEX key [KEEPREF | DELREF | ACKED] IDS numids id [id ...]
```
### Description
The `XDELEX` command extends the Redis Streams `XDEL` command, offering
enhanced control over message entry deletion with respect to consumer
groups. It accepts optional `DELREF` or `ACKED` parameters to modify its
behavior:
- **KEEPREF:** Deletes the specified entries from the stream, but
preserves existing references to these entries in all consumer groups'
PEL. This behavior is similar to XDEL.
- **DELREF:** Deletes the specified entries from the stream and also
removes all references to these entries from all consumer groups'
pending entry lists, effectively cleaning up all traces of the messages.
- **ACKED:** Only trims entries that were read and acknowledged by all
consumer groups.
**Note:** The `IDS` block can appear at any position in the command,
consistent with other commands.
### Reply
Array reply, for each `id`:
- `-1`: No such `id` exists in the provided stream `key`.
- `1`: Entry was deleted from the stream.
- `2`: Entry was not deleted because there are still dangling references
(`ACKED` option).
## XACKDEL
### Syntax
```
XACKDEL key group [KEEPREF | DELREF | ACKED] IDS numids id [id ...]
```
### Description
The `XACKDEL` command combines `XACK` and `XDEL` functionalities in
Redis Streams. It acknowledges specified message IDs in the given
consumer group and attempts to delete corresponding stream entries. It
accepts optional `DELREF` or `ACKED` parameters:
- **KEEPREF:** Acknowledges the messages in the specified consumer group
and deletes the entries from the stream, but preserves existing
references to these entries in all consumer groups' PEL.
- **DELREF:** Acknowledges the messages in the specified consumer group,
deletes the entries from the stream, and also removes all references to
these entries from all consumer groups' pending entry lists, effectively
cleaning up all traces of the messages.
- **ACKED:** Acknowledges the messages in the specified consumer group
and only trims entries that were read and acknowledged by all consumer
groups.
### Reply
Array reply, for each `id`:
- `-1`: No such `id` exists in the provided stream `key`.
- `1`: Entry was acknowledged and deleted from the stream.
- `2`: Entry was acknowledged but not deleted because there are still
dangling references (`ACKED` option).
# Redis Streams Commands Extension
## XTRIM
### Syntax
```
XTRIM key <MAXLEN | MINID> [= | ~] threshold [LIMIT count] [KEEPREF | DELREF | ACKED]
```
### Description
The `XTRIM` command trims a stream by removing entries based on
specified criteria, extended to include optional `DELREF` or `ACKED`
parameters for consumer group handling:
- **KEEPREF:** Trims the stream according to the specified strategy
(MAXLEN or MINID) regardless of whether entries are referenced by any
consumer groups, but preserves existing references to these entries in
all consumer groups' PEL.
- **DELREF:** Trims the stream according to the specified strategy and
also removes all references to the trimmed entries from all consumer
groups' PEL.
- **ACKED:** Only trims entries that were read and acknowledged by all
consumer groups.
### Reply
No change.
## XADD
### Syntax
```
XADD key [NOMKSTREAM] [<MAXLEN | MINID> [= | ~] threshold [LIMIT count]] [KEEPREF | DELREF | ACKED] <* | id> field value [field value ...]
```
### Description
The `XADD` command appends a new entry to a stream and optionally trims
it in the same operation, extended to include optional `DELREF` or
`ACKED` parameters for trimming behavior:
- **KEEPREF:** When trimming, removes entries from the stream according
to the specified strategy (MAXLEN or MINID), regardless of whether they
are referenced by any consumer groups, but preserves existing references
to these entries in all consumer groups' PEL.
- **DELREF:** When trimming, removes entries from the stream according
to the specified strategy and also removes all references to these
entries from all consumer groups' PEL.
- **ACKED:** When trimming, only removes entries that were read and
acknowledged by all consumer groups. Note that if the number of
still-referenced entries is bigger than MAXLEN, we still stop trimming
there, so the stream may remain longer than MAXLEN.
### Reply
No change.
## Key implementation
Since we currently have no simple way to track the association between
an entry and consumer groups without iterating over all groups, we
introduce two mechanisms to establish this link. This allows us to
determine whether an entry has been seen by all consumer groups, and to
identify which groups are referencing it. With these links, we can break
the association when the entry is either acknowledged or deleted.
1) Added reference tracking between stream messages and consumer groups
using `cgroups_ref`.
The cgroups_ref is implemented as a rax that maps stream message IDs to
lists of consumer groups that reference those messages, and each streamNACK
stores its corresponding node of this list, so that the reference for the
corresponding group can be removed during `ACK`.
In this way, we can determine whether an entry has been seen but not yet
acknowledged.
2) Store a cached minimum last_id in the stream structure.
The reason for doing this is that an entry may never have been seen by a
consumer group at all. In this case, we consider the entry not consumed
either, so if the `ACKED` option is given, we cannot directly delete this
entry.
When a consumer group updates its last_id, we don’t immediately update
the cached minimum last_id. Instead, we check whether the group’s
previous last_id was equal to the current minimum, or whether the new
last_id is smaller than the current minimum (when using `XGROUP SETID`).
If either is true, we mark the cached minimum last_id as invalid, and
defer the actual update until the next time it’s needed.
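A minimal sketch of this lazy invalidation, with made-up struct and field names (the real implementation lives in the stream code and may differ):
```c
/* Illustrative only: struct, field, and function names are hypothetical. */
typedef struct exampleStreamMeta {
    streamID min_last_id;      /* cached minimum last_id across all groups */
    int min_last_id_valid;     /* 0 = recompute on next use */
} exampleStreamMeta;

/* Called when a consumer group moves its last_id. */
void exampleOnGroupLastIdChange(exampleStreamMeta *m, streamID *prev_id,
                                streamID *new_id) {
    if (streamCompareID(prev_id, &m->min_last_id) == 0 ||   /* was the minimum */
        streamCompareID(new_id, &m->min_last_id) < 0)       /* moved below it  */
    {
        m->min_last_id_valid = 0;  /* defer recomputation until it's needed */
    }
}
```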
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: moticless <moticless@github.com>
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Slavomir Kaslev <slavomir.kaslev@gmail.com>
Co-authored-by: Yuan Wang <yuan.wang@redis.com>
This PR addresses a potential misalignment issue when using `va_args`.
Without this fix,
[argument](9a9aa921bc/src/module.c (L6249-L6264))
values may occasionally become incorrect due to stack alignment
inconsistencies.
## Description
Memory sanitizer (MSAN) is used to detect use-of-uninitialized memory
issues. While Address Sanitizer catches a wide range of memory safety
issues, it doesn't specifically detect uninitialized memory usage.
Therefore, Memory Sanitizer complements Address Sanitizer. This PR adds
MSAN run to the daily build, with the possibility of incorporating it
into the ci.yml workflow in the future if needed.
The changes in the source files fix false-positive issues and should not
introduce any runtime implications.
Note: Valgrind performs checks similar to both ASAN and MSAN, but the
sanitizers run significantly faster.
## Limitations
- Memory sanitizer is only supported by Clang.
- MSAN documentation states that all dependencies, including the
standard library, must be compiled with MSAN. However, it also mentions
there are interceptors for common libc functions, so compiling the
standard library with the MSAN flag is not strictly necessary.
Therefore, we are not compiling libc with MSAN.
---------
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
### Background
The program runs normally in standalone mode, but migrating to cluster
mode may cause errors, because some cross-slot commands cannot run in
cluster mode. We should provide a way to detect this issue while still
running in standalone mode, and expose a metric that indicates whether
incompatible commands are being used.
### Solution
To avoid a performance impact, we introduce a new config,
`cluster-compatibility-sample-ratio`, which defines the sampling ratio
(0-100) for checking command compatibility in cluster mode. When a
command is executed, it is sampled at the specified ratio to determine
whether it complies with Redis cluster constraints, such as cross-slot
restrictions.
A new metric is exposed: `cluster_incompatible_ops` in `info stats`
output.
The following operations are considered incompatible:
- Cross-slot command
If a command has keys that map to multiple slots, it is incompatible.
- `swap, copy, move, select` commands
These commands can involve multiple databases; multiple databases are not
allowed in cluster mode, so they are not compatible.
- Module command with the `no-cluster` flag
If a module command has the `no-cluster` flag, loading the module fails
when cluster mode is enabled, so this is incompatible.
- Script/function with the `no-cluster` flag
Similar to module commands, if we declare `no-cluster` in the shebang of a
script/function, we also cannot run it in cluster mode.
- `sort` command with a BY/GET pattern
When the `sort` command has a `BY`/`GET` pattern option, the pattern's
slot must equal the slot of the keys, otherwise it is incompatible in
cluster mode.
- Script/function commands that access keys whose slots differ from the
declared keys
For script/function commands, we check not only the slot of the declared
keys but also the slot of the keys actually accessed; if they differ, we
consider it incompatible.
**Besides**, commands like `keys, scan, flushall, script/function
flush`, that in standalone mode iterate over all data to perform the
operation, are only valid for the server that executes the command in
cluster mode and are not broadcasted. However, this does not lead to
errors, so we do not consider them as incompatible commands.
### Performance impact test
**cross-slot test**
Below are the test commands and results. When using MSET with 8 keys,
performance drops by approximately 3%.
**single-key test**
Single-key commands show a 1-2% performance drop, likely due to the
overhead of the sampling function.
After https://github.com/redis/redis/pull/13816, we added a new API to
defrag a RedisModuleDict.
Currently, we only support incremental defragmentation of the dictionary
itself, but the defragmentation of values is still not incremental. If
the values are very large, it could lead to significant blocking.
Therefore, in this PR, we have added incremental defragmentation for the
values.
The main change is to `RedisModuleDefragDictValueCallback`: we modified
the return value of this callback.
When the callback returns 1, we save the key of the current unfinished
node in `seekTo`, and the next time we enter, we continue defragmenting
this node.
When the return value is 0, we proceed to the next node.
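A hedged sketch of such a value callback (the exact parameter list is assumed here, and `MyValue` is a made-up type): return 1 to be resumed on the same node later, 0 to move on to the next node.
```c
typedef struct MyValue {
    void **chunks;
    size_t numchunks;
    size_t defrag_cursor;   /* progress inside this value across defrag cycles */
} MyValue;

static int defragMyDictValueCB(RedisModuleDefragCtx *ctx, void **value,
                               unsigned char *key, size_t keylen) {
    (void)key; (void)keylen;
    MyValue *v = *value;
    for (; v->defrag_cursor < v->numchunks; v->defrag_cursor++) {
        void *moved = RedisModule_DefragAlloc(ctx, v->chunks[v->defrag_cursor]);
        if (moved) v->chunks[v->defrag_cursor] = moved;
        if (RedisModule_DefragShouldStop(ctx))
            return 1;   /* unfinished: seekTo will point back at this key */
    }
    v->defrag_cursor = 0;
    return 0;           /* done with this value, proceed to the next node */
}
```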
## Test
Each dictionary in the global dict originally contained only 10 strings;
it has now been changed to a nested dictionary, where each dictionary has
10 sub-dictionaries and each sub-dictionary contains 10 strings. This led
to a corresponding reduction in the defragmentation time observed by the
other tests.
Therefore, the other tests have been modified to always wait for
defragmentation to be turned off before the test begins, and to start it
only after creating fragmentation, ensuring that they always run for a
full defragmentation cycle.
---------
Co-authored-by: ephraimfeldblum <ephraim.feldblum@redis.com>
1) Allow the callback to be NULL for RM_DefragRedisModuleDict(),
because the dictionary may store only keys without values.
2) Reduce the system calls made by RM_DefragShouldStop().
The API now checks thresholds before performing a time check: it only
checks the time once there are more than 512 defrag hits or more than
1024 defrag misses.
3) Added defragmentation statistics for dictionary items to cover the
code associated with RM_DefragRedisModuleDict().
4) Removed `module_ctx` from the `defragModuleCtx` struct; it can be
replaced by a temporary variable.
---------
Co-authored-by: oranagra <oran@redislabs.com>
1) Fix a bug where an incorrect endtime was passed to the module.
This bug was found by @ShooterIT.
After #13814, all endtimes are monotonic time, and we should no
longer convert them to a relative ustime.
Added assertions to prevent endtime from being much larger than the
current time.
2) Fix a race in test `Reduce defrag CPU usage when module data can't be
defragged`
---------
Co-authored-by: ShooterIT <wangyuancode@163.com>
After #13815, we introduced incremental defragmentation of module global
data.
Now we add a new module API, `RM_DefragRedisModuleDict`, to incrementally
defragment a `RedisModuleDict`.
This PR adds a new API and a new defrag callback:
```c
RedisModuleDict *RM_DefragRedisModuleDict(RedisModuleDefragCtx *ctx, RedisModuleDict *dict, RedisModuleDefragDictValueCallback valueCB, RedisModuleString **seekTo);
typedef void *(*RedisModuleDefragDictValueCallback)(RedisModuleDefragCtx *ctx, void *data, unsigned char *key, size_t keylen);
```
Usage:
```c
RedisModuleString *seekTo = NULL;
RedisModuleDict *dict = RedisModule_CreateDict(ctx);
... populate the dict code ...
/* Defragment a dictionary completely */
do {
RedisModuleDict *new = RedisModule_DefragRedisModuleDict(ctx, dict, defragGlobalDictValueCB, &seekTo);
if (new != NULL) {
dict = new;
}
} while (seekTo);
```
---------
Co-authored-by: ShooterIT <wangyuancode@163.com>
Co-authored-by: oranagra <oran@redislabs.com>
## Description
Currently, when performing defragmentation on non-key data within the
module, we cannot process the defragmentation incrementally. This
limitation affects the efficiency and flexibility of defragmentation in
certain scenarios.
The primary goal of this PR is to introduce support for incremental
defragmentation of global module data.
## Interface Change
New module API: `RegisterDefragFunc2`.
This is a more advanced version of `RM_RegisterDefragFunc`: it takes a
new callback type that has a return value. The callback can use
`RM_DefragShouldStop` and return 1 to indicate that it should be called
again later, or 0 to indicate that it is done.
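As a rough illustration (the callback signature is assumed from the description above, and `defragSomeGlobalData` is a hypothetical helper), a global-data defrag callback under the new API could look like:
```c
static int defragSomeGlobalData(void);   /* hypothetical incremental step */

static int myGlobalDefragCB(RedisModuleDefragCtx *ctx) {
    while (defragSomeGlobalData()) {
        if (RedisModule_DefragShouldStop(ctx))
            return 1;   /* not done yet: call me again in a later cycle */
    }
    return 0;           /* finished */
}

/* In RedisModule_OnLoad: */
RedisModule_RegisterDefragFunc2(ctx, myGlobalDefragCB);
```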
## Note
The `RegisterDefragFunc` API remains available.
---------
Co-authored-by: ShooterIT <wangyuancode@163.com>
Co-authored-by: oranagra <oran@redislabs.com>
# PR: Add Mechanism for Internal Commands and Connections in Redis
This PR introduces a mechanism to handle **internal commands and
connections** in Redis. It includes enhancements for command
registration, internal authentication, and observability.
## Key Features
1. **Internal Command Flag**:
- Introduced a new **module command registration flag**: `internal`.
- Commands marked with `internal` can only be executed by **internal
connections**, AOF loading flows, and master-replica connections.
- For any other connection, these commands will appear as non-existent.
2. **Support for internal authentication added to `AUTH`**:
- Used by specifying the special username `internal connection` together
with the correct internal password, i.e.: `AUTH "internal connection"
<internal_secret>`.
- No user-defined ACL username can have this name, since spaces are not
allowed in the ACL parser.
- Allows connections to authenticate as **internal connections**.
- Authenticated internal connections can execute internal commands
successfully.
3. **Module API for Internal Secret**:
- Added the `RedisModule_GetInternalSecret()` API, that exposes the
internal secret that should be used as the password for the new `AUTH
"internal connection" <password>` command.
- This API enables the modules to authenticate against other shards as
local connections.
## Notes on Behavior
- **ACL validation**:
- Commands dispatched by internal connections bypass ACL validation, to
give the caller full access regardless of the user with which it is
connected.
- **Command Visibility**:
- Internal commands **do not appear** in `COMMAND <subcommand>` and
`MONITOR` for non-internal connections.
- Internal commands **are logged** in the slow log, latency report and
commands' statistics to maintain observability.
- **`RM_Call()` Updates**:
- **Non-internal connections**:
- Cannot execute internal commands when the command is sent with the `C`
flag (otherwise they can).
- Internal connections bypass ACL validations (i.e., run as the
unrestricted user).
- **Internal commands' success**:
- Internal commands succeed upon being sent from either an internal
connection (i.e., authenticated via the new `AUTH "internal connection"
<internal_secret>` API), an AOF loading process, or from a master via
the replication link.
Any other connection that attempts to execute an internal command fails
with an `unknown command` error.
- **`CLIENT LIST` flags**:
- Added the `I` flag, to indicate that the connection is internal.
- **Lua Scripts**:
- Prevented internal commands from being executed via Lua scripts.
---------
Co-authored-by: Meir Shpilraien <meir@redis.com>
This PR adds a flag to the `RM_GetContextFlags` module-API function that
indicates whether the context may execute debug commands, according to
Redis's standards.
This PR introduces API to query Expiration time of hash fields.
# New `RedisModule_HashFieldMinExpire()`
For a given hash, retrieves the minimum expiration time across all
fields. If no fields have an expiration, or if the key is not a hash, it
returns `REDISMODULE_NO_EXPIRE` (-1).
```
mstime_t RM_HashFieldMinExpire(RedisModuleKey *hash);
```
# Extension to `RedisModule_HashGet()`
Adds a new flag, `REDISMODULE_HASH_EXPIRE_TIME`, to retrieve the
expiration time of a specific hash field. If the field does not exist or
has no expiration, returns `REDISMODULE_NO_EXPIRE`. It is fully
backward-compatible (RM_HashGet retains its original behavior unless the
new flag is used).
Example:
```
mstime_t expiry1, expiry2;
RedisModule_HashGet(mykey, REDISMODULE_HASH_EXPIRE_TIME, "field1", &expiry1, NULL);
RedisModule_HashGet(mykey, REDISMODULE_HASH_EXPIRE_TIME, "field1", &expiry1, "field2", &expiry2, NULL);
```
This PR introduces a new API function to the Redis Module API:
```
int RedisModule_ACLCheckKeyPrefixPermissions(RedisModuleUser *user, RedisModuleString *prefix, int flags);
```
Purpose:
The function checks whether a given user has access permissions to any
key that matches a specific prefix. This validation is based on the
user's ACL permissions and the specified flags.
Note: this prefix-based API may fail to detect access for prefixes that
are not covered by any individual pattern but are collectively covered by
several patterns. For example, the prefix `ID-*` is not fully included in
the pattern `ID-[0]*` and is not fully included in the pattern `ID-[^0]*`,
but it is fully included in the set of patterns `{ID-[0]*, ID-[^0]*}`.
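An illustrative call (the flag constant is assumed to be the same one used by the existing `RedisModule_ACLCheckKeyPermissions`; the prefix is made up): check whether `user` may read keys under a given prefix.
```c
RedisModuleString *prefix = RedisModule_CreateString(ctx, "orders:", 7);
if (RedisModule_ACLCheckKeyPrefixPermissions(user, prefix,
        REDISMODULE_CMD_KEY_ACCESS) == REDISMODULE_OK) {
    /* the user's ACL rules cover read access for this prefix */
} else {
    /* denied, or the prefix is not provably covered (see the note above) */
}
RedisModule_FreeString(ctx, prefix);
```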
PR #10285 introduced support for modules to register four types of
configurations (Bool, Numeric, String, and Enum), accessible through the
Redis config file and the CONFIG command.
With this PR, it is possible to register configuration parameters
without automatically prefixing the parameter names. This provides
greater flexibility in configuration naming, enabling, for instance,
either `bf-initial-size` or `initial-size` to be defined in the module
without the automatic `<MODULE-NAME>.` prefix. In addition, it is also
possible to create a single additional alias via the same API. This
brings us another step closer to integrating modules into the Redis core.
**Example:** Register a configuration parameter `bf-initial-size` with
an alias `initial-size`, without the automatic module name prefix, using
the new `REDISMODULE_CONFIG_UNPREFIXED` flag:
```
RedisModule_RegisterBoolConfig(ctx, "bf-initial-size|initial-size", default_val, optflags | REDISMODULE_CONFIG_UNPREFIXED, getfn, setfn, applyfn, privdata);
```
# API changes
Related functions that now support unprefixed configuration flag
(`REDISMODULE_CONFIG_UNPREFIXED`) along with optional alias:
```
RedisModule_RegisterBoolConfig
RedisModule_RegisterEnumConfig
RedisModule_RegisterNumericConfig
RedisModule_RegisterStringConfig
```
# Implementation Details:
`config.c`: When loading the server configuration, the function
`loadServerConfigFromString()` collects all unknown configurations
into the `module_configs_queue` dictionary. These may include valid module
configurations or invalid ones. They will be validated later by
`loadModuleConfigs()` against the configurations declared by the loaded
module(s).
`module.c`: The `ModuleConfig` structure has been modified to now store:
(1) the full configuration name, (2) the alias, and (3) the unprefixed
flag status, ensuring that configurations retain their original
registration format when triggered in notifications.
Added error printout:
This change introduces an error printout for unresolved configurations,
detailing each unresolved parameter detected during startup. The last
line in the output existed prior to this change and has been retained
because existing systems rely on it:
```
595011:M 18 Nov 2024 08:26:23.616 # Unresolved Configuration(s) Detected:
595011:M 18 Nov 2024 08:26:23.616 # >>> 'bf-initiel-size 8'
595011:M 18 Nov 2024 08:26:23.616 # >>> 'search-sizex 32'
595011:M 18 Nov 2024 08:26:23.616 # Module Configuration detected without loadmodule directive or no ApplyConfig call: aborting
```
# Backward Compatibility:
Existing modules will function without modification, as the new
functionality only applies if REDISMODULE_CONFIG_UNPREFIXED is
explicitly set.
# Module vs. Core API Conflict Behavior
The new API allows a loading module to register a duplicate of the same
configuration name or the same alias, just as Redis core configuration
allows (i.e., the user sets the same config twice with different values,
and the two entries are actually the same config). Unlike Redis core,
given a name and its alias, it does not allow both configurations to be
present on load. Supporting that would require modifying the
`module_configs_queue` data structure to reflect the order of loading
and, later on during `loadModuleConfigs()`, resolving pairs of names and
aliases to decide which one is applied last. Relaxing this limitation can
be deferred to a future update if necessary, but for now we raise an
error in this case.
Fix for https://github.com/redis/redis/issues/13650:
providing an invalid config to a module with a datatype caused a crash
when Redis tried to unload the module due to the invalid config.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
If `hide-user-data-from-log` config is enabled, we don't print client
argv in the crashlog to avoid leaking user info.
Though, debugging a crash becomes harder as we don't see the command
arguments causing the crash.
With this PR, we'll be printing command tokens to the log. Since command
tokens are defined in the JSON schema for each command, we can use this
data to find the tokens in the client argv.
e.g.
`SET key value GET EX 10` ---> we'll print `SET * * GET EX *` in the
log.
Modules should introduce their command structure via
`RM_SetCommandInfo()`.
Then, on a crash, we'll be able to know module command tokens.
The PR extends `RedisModule_OpenKey`'s flags to include
`REDISMODULE_OPEN_KEY_ACCESS_EXPIRED`, which allows access to expired
keys.
It also allows access to expired subkeys (currently relevant only for
hash fields), affecting `RM_HashGet` and `RM_Scan`.
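A minimal sketch of the new flag in use (assuming `keyname` is a `RedisModuleString` and the key holds a hash):
```c
RedisModuleKey *key = RedisModule_OpenKey(ctx, keyname,
        REDISMODULE_READ | REDISMODULE_OPEN_KEY_ACCESS_EXPIRED);
if (RedisModule_KeyType(key) == REDISMODULE_KEYTYPE_HASH) {
    /* RM_HashGet / RM_Scan can now also observe logically expired fields */
}
RedisModule_CloseKey(key);
```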
All the existing defrag allocation APIs expect to get a value and replace it, after which the old value must no longer be touched.
In some cases a value might be shared between multiple keys; in such cases we cannot simply replace
it when the defrag callback is called.
To support such use cases, the PR adds two new APIs to the defrag API:
1. `RM_DefragAllocRaw` - allocate memory of a given size.
2. `RM_DefragFreeRaw` - free the given pointer.
These APIs avoid using tcache, so they operate just like `RM_DefragAlloc`, but they allow the user to split
the allocation and the free into two stages and to control when each happens.
In addition, the PR adds a new API that lets the module receive notifications when defrag starts and ends: `RM_RegisterDefragCallbacks`.
These callbacks are registered like `RM_RegisterDefragFunc`, but they are promised to be called at the start
and at the end of the defrag process.
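A hedged sketch of the intended use (the exact parameter lists are assumed, and `SharedObj` with its bookkeeping is made up): relocate a value shared by several keys now, and free the old block only once the last reference has been repointed.
```c
typedef struct SharedObj { long refcount; /* ... payload ... */ } SharedObj;

/* Called for the first key that reaches the shared value during defrag. */
SharedObj *defragSharedObj(RedisModuleDefragCtx *ctx, SharedObj *old) {
    SharedObj *copy = RedisModule_DefragAllocRaw(ctx, sizeof(*old));
    *copy = *old;   /* relocate contents; other keys still point at `old` */
    /* ... record an old->copy mapping so the remaining keys get repointed ... */
    return copy;
}

/* Called once every key that referenced `old` has been repointed. */
void releaseOldSharedObj(RedisModuleDefragCtx *ctx, SharedObj *old) {
    RedisModule_DefragFreeRaw(ctx, old);
}
```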
In certain situations, we might generate a large number of propagations
(e.g., MULTI/EXEC, a Lua script, or a single command generating tons of
propagations) within one event loop.
During the process of propagating to a replica, if the replica is
disconnected(marked as CLIENT_CLOSE_ASAP) due to exceeding the output
buffer limit, we should remove its reference to the global replication
buffer to avoid the global replication buffer being unable to be
properly trimmed due to being referenced.
---------
Co-authored-by: oranagra <oran@redislabs.com>
Sometimes it's useful to compute a key's cluster slot in a module.
This API function is just like the command CLUSTER KEYSLOT (but faster).
A "reverse" API is also added:
`RedisModule_ClusterCanonicalKeyNameInSlot`. Given a slot, it returns a
short string that we can call a canonical key for the slot.
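An illustrative snippet (return types assumed; `keyname` is a `RedisModuleString`):
```c
unsigned int slot = RedisModule_ClusterKeySlot(keyname);
const char *canonical = RedisModule_ClusterCanonicalKeyNameInSlot(slot);
RedisModule_Log(ctx, "notice", "key hashes to slot %u (canonical key: '%s')",
                slot, canonical);
```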
A block timeout is passed in the test case, but we do not pass in a
timeout_callback, and this crashes when unblocking. To handle this case,
moduleBlockedClientTimedOut now checks whether timeout_callback is set.
Here is the stack:
```
beforeSleep
blockedBeforeSleep
handleBlockedClientsTimeout
checkBlockedClientTimeout
unblockClientOnTimeout
replyToBlockedClientTimedOut
moduleBlockedClientTimedOut
-- timeout_callback is NULL, invalidFunctionWasCalled
bc->timeout_callback(&ctx,(void**)c->argv,c->argc);
```
Modules may want to handle allocation failures gracefully. Adding
RM_TryCalloc() and RM_TryRealloc() for it.
RM_TryAlloc() was added before:
https://github.com/redis/redis/pull/10541
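A minimal sketch of graceful handling (signatures assumed to mirror `RM_TryAlloc`, i.e. returning NULL instead of aborting on OOM):
```c
size_t count = 1024 * 1024;
long long *slots = RedisModule_TryCalloc(count, sizeof(long long));
if (slots == NULL)
    return RedisModule_ReplyWithError(ctx, "ERR not enough memory");

/* Later, attempt to grow the array; on failure keep using the old block. */
long long *bigger = RedisModule_TryRealloc(slots, 2 * count * sizeof(long long));
if (bigger != NULL) slots = bigger;
```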
Fix #12785 and other race condition issues.
See the following isolated comments.
The following report was obtained using a `SANITIZER=thread` build.
```sh
make SANITIZER=thread
./runtest-moduleapi --config io-threads 4 --config io-threads-do-reads yes --accurate
```
1. Fixed thread-safe issue in RM_UnblockClient()
Related discussion:
https://github.com/redis/redis/pull/12817#issuecomment-1831181220
* When blocking a client in a module using `RM_BlockClientOnKeys()` or
`RM_BlockClientOnKeysWithFlags()`
with a timeout_callback, calling RM_UnblockClient() in module threads
can lead to race conditions
in `updateStatsOnUnblock()`.
- Introduced:
Version: 6.2
PR: #7491
- Touch:
`server.stat_numcommands`, `cmd->latency_histogram`, `server.slowlog`,
and `server.latency_events`
- Harm Level: High
Potentially corrupts the memory data of `cmd->latency_histogram`,
`server.slowlog`, and `server.latency_events`
- Solution:
Differentiate whether the call to moduleBlockedClientTimedOut() comes
from the module or the main thread.
Since we can't know if RM_UnblockClient() comes from module threads, we
always assume it does and
let `updateStatsOnUnblock()` asynchronously update the unblock status.
* When an error reply is issued in timeout_callback(), ctx is not
thread-safe, which eventually leads to race conditions in `afterErrorReply`.
- Introduced:
Version: 6.2
PR: #8217
- Touch
`server.stat_total_error_replies`, `server.errors`,
- Harm Level: High
Potentially corrupts the memory data of `server.errors`
- Solution:
Make the ctx in `timeout_callback()` with `REDISMODULE_CTX_THREAD_SAFE`,
and asynchronously reply errors to the client.
2. Made RM_Reply*() family API thread-safe
Related discussion:
https://github.com/redis/redis/pull/12817#discussion_r1408707239
Call chain: `RM_Reply*()` -> `_addReplyToBufferOrList()` -> touch
server.current_client
- Introduced:
Version: 7.2.0
PR: #12326
- Harm Level: None
Since the module fake client won't have the `CLIENT_PUSHING` flag, even
if we touch server.current_client,
we can still exit after `c->flags & CLIENT_PUSHING`.
- Solution
Checking `c->flags & CLIENT_PUSHING` earlier.
3. Made freeClient() thread-safe
Fix #12785
- Introduced:
Version: 4.0
Commit:
3fcf959e60
- Harm Level: Moderate
* Triggers an assertion
It happens when the module thread calls freeClient while the io-thread
is in progress;
this just triggers an assertion and doesn't cause any race conditions.
* Touch `server.current_client`, `server.stat_clients_type_memory`, and
`clientMemUsageBucket->clients`.
It happens between the main thread and the module threads, may cause
data corruption.
1. Erroneously resets `server.current_client` to NULL, but theoretically
this won't happen,
because the module has already reset `server.current_client` to the old
value before entering freeClient.
2. corrupts `clientMemUsageBucket->clients` in
updateClientMemUsageAndBucket().
3. Causes server.stat_clients_type_memory memory statistics to be
inaccurate.
- Solution:
* No longer counts memory usage on fake clients, to avoid updating
`server.stat_clients_type_memory` in freeClient.
* No longer resetting `server.current_client` in unlinkClient, because
the fake client won't be evicted or disconnected in the mid of the
process.
* Only assert `io_threads_op == IO_THREADS_OP_IDLE` if c is
not a fake client.
4. Fixed free client args without GIL
Related discussion:
https://github.com/redis/redis/pull/12817#discussion_r1408706695
When freeing retained strings in the module thread (refcount decr), or
using them in some way (refcount incr), we should do so while holding
the GIL,
otherwise, they might be simultaneously freed while the main thread is
processing the unblock client state.
- Introduced:
Version: 6.2.0
PR: #8141
- Harm Level: Low
Trigger assertion or double free or memory leak.
- Solution:
Documenting that module API users need to ensure any access to these
retained strings is done with the GIL locked
5. Fix adding fake client to server.clients_pending_write
It will incorrectly log the memory usage for the fake client.
Related discussion:
https://github.com/redis/redis/pull/12817#issuecomment-1851899163
- Introduced:
Version: 4.0
Commit:
9b01b64430
- Harm Level: None
Only result in NOP
- Solution:
* Don't add fake client into server.clients_pending_write
* Add c->conn assertion for updateClientMemUsageAndBucket() and
updateClientMemoryUsage() to avoid same
issue in the future.
So now it is the responsibility of the callers of both of them to
avoid passing in a fake client.
6. Fix calling RM_BlockedClientMeasureTimeStart() and
RM_BlockedClientMeasureTimeEnd() without GIL
- Introduced:
Version: 6.2
PR: #7491
- Harm Level: Low
Causes inaccuracies in command latency histogram and slow logs, but does
not corrupt memory.
- Solution:
Module API users who know that non-thread-safe APIs will be used in
multi-threaded code need to take responsibility for protecting them with
their own locks instead of the GIL, as using the GIL is too expensive.
### Other issue
1. RM_Yield is not thread-safe, fixed via #12905.
### Summarize
1. Fix thread-safe issues for `RM_UnblockClient()`, `freeClient()` and
`RM_Yield`, potentially preventing memory corruption, data disorder, or
assertion.
2. Updated docs and module test to clarify module API users'
responsibility for locking non-thread-safe APIs in multi-threading, such
as RM_BlockedClientMeasureTimeStart/End(), RM_FreeString(),
RM_RetainString(), and RM_HoldString().
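As a small illustration of that documented responsibility (a sketch, not code from this PR), a module thread releasing a retained string would hold the GIL around the refcount change:
```c
void moduleThreadReleaseString(RedisModuleCtx *thread_safe_ctx,
                               RedisModuleString *str) {
    RedisModule_ThreadSafeContextLock(thread_safe_ctx);
    RedisModule_FreeString(thread_safe_ctx, str);  /* refcount change is now safe */
    RedisModule_ThreadSafeContextUnlock(thread_safe_ctx);
}
```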
### About backporting to 7.2
1. The implementation of (1) is not too satisfying; I would like to get
more eyes on it.
2. (2) and (3) can be safely backported.
3. (4) and (6) just modify the module tests and update the
documentation, so no backport is needed.
4. (5) is harmless, no need for a backport.
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
This change is trying to make two failure modes a bit easier to deep dive:
1. If a serverPanic or serverAssert occurs during the info (or module)
printing, it will recursively panic, which is a lot of fun as it will
just keep recursively printing. It will eventually stack overflow, but
will generate a lot of text in the process.
2. When a segfault happens during the segfault handler, no information
is communicated other than it happened. This can be problematic because
`info` may help diagnose the real issue, but without fixing the
recursive crash it might be hard to get at that info.
This is a follow-up fix to #12733. We need to apply the same changes to
delKeysInSlot. Refer to #12733 for more details.
This PR contains some other minor cleanups / improvements to the test
suite and docs.
It uses the postnotifications test module in a cluster mode test which
revealed a leak in the test module (fixed).
Warning:
```
postnotifications.c:216:77: warning: format specifies type 'long' but the argument has type 'uint64_t' (aka 'unsigned long long') [-Wformat]
RedisModule_Log(ctx, "warning", "Got an unexpected subevent '%ld'", subevent);
~~~ ^~~~~~~~
%llu
```
CI:
https://github.com/redis/redis/actions/runs/6937308713/job/18871124342#step:6:115
## Other
Add `CFLAGS=-Werror` flag for module CI.
---------
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
When we register a notification or server event in RedisModule_OnLoad,
but RedisModule_OnLoad eventually fails, triggering the notification or
server event
will cause the server to crash.
If the loading fails on a later stage of moduleLoad, we do call
moduleUnload
which handles all un-registration, but when it fails on the
RedisModule_OnLoad
call, we only un-register several specific things and these were
missing:
- moduleUnsubscribeNotifications
- moduleUnregisterFilters
- moduleUnsubscribeAllServerEvents
Refactored the code to reuse the code from moduleUnload.
Fixes #12808.
Redis 7.2 (#9406) introduced a new modules event, `RedisModuleEvent_Key`.
This new event allows the module to read the key data just before it is removed
from the database (either deleted, expired, evicted, or overwritten).
When the key was removed from the database by active expire or eviction,
the new event was not called as part of an execution unit. This can cause an
issue if the module registers a post notification job inside the event. That job would
not be executed atomically with the expiration/eviction operation and would not be
replicated inside a MULTI/EXEC. Moreover, the post notification job would be executed
right after the event, where it is still not safe to perform any write operation; this
violates the promise that the post notification job will be called atomically with the
operation that triggered it and **only when it is safe to write**.
This PR fixes the issue by wrapping each expiration/eviction of a key with an execution
unit. This makes sure the entire operation will run atomically and all the post notification
jobs will be executed at the end where it is safe to write.
Tests were modified to verify the fix.
This PR adds a new Module API, `int RM_AddACLCategory(RedisModuleCtx *ctx, const char *category_name)`, to add a new ACL command category.
Here, we initialize the ACLCommandCategories array by allocating space for 64 categories and duplicate the 21 default categories from the predefined array 'ACLDefaultCommandCategories' into the ACLCommandCategories array while ACL initialization. Valid ACL category names can only contain alphanumeric characters, underscores, and dashes.
The API when called, checks for the onload flag, category name validity, and for duplicate category name if present. If the conditions are satisfied, the API adds the new category to the trailing end of the ACLCommandCategories array and assigns the acl_categories flag bit according to the index at which the category is added.
If any error is encountered, errno is set accordingly by the API.
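A usage sketch (command name and category are illustrative; attaching the category via `RedisModule_SetCommandACLCategories` is an assumption, not part of this PR):
```c
/* In RedisModule_OnLoad: */
if (RedisModule_AddACLCategory(ctx, "search") != REDISMODULE_OK)
    return REDISMODULE_ERR;   /* invalid/duplicate name, or called outside OnLoad */

if (RedisModule_CreateCommand(ctx, "search.query", SearchQuery_Cmd,
                              "readonly", 0, 0, 0) == REDISMODULE_ERR)
    return REDISMODULE_ERR;
RedisModuleCommand *cmd = RedisModule_GetCommand(ctx, "search.query");
RedisModule_SetCommandACLCategories(cmd, "search");
```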
---------
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Blocking RM_Call was introduced in #11568. It allows a module to perform
blocking commands and get the reply asynchronously. If the command gets
blocked, a special promise CallReply is returned that allows setting the
unblock handler. The unblock handler is called when the command invocation
finishes, and it gets the command's real reply as input.
The issue was that the real CallReply was created using a stack allocated
RedisModuleCtx which is no longer available after the unblock handler finishes.
So if the module keeps the CallReply after the unblock handler finished, the
CallReply holds a pointer to invalid memory and will try to access it when the
CallReply will be released.
The solution is to create the CallReply with a NULL context, making it
totally detached so it can be freed whenever the module wants.
A test was added to cover this case; running the test with valgrind before the
fix shows the use-after-free error. With the fix, there are no valgrind errors.
Unrelated: added a missing `$rd close` in many tests in that file.
Apart from adding the missing coverage, this PR also adds `blockedBeforeSleep`,
which gathers all block-related functions from `beforeSleep`.
The order inside `blockedBeforeSleep` is different: now `handleClientsBlockedOnKeys`
(which may unblock clients) is called before `processUnblockedClients` (which handles
unblocked clients).
It makes sense to have this order.
There are no visible effects of the wrong ordering, except that some cleanups of the now-unblocked
client would have happened in the next `beforeSleep` (they now happen in the current one).
The reason we even got into it is that it triggers an assertion in logreqres.c (breaking
the assumption that `unblockClient` is called **before** actually flushing the reply to the socket):
`handleClientsBlockedOnKeys` is called, then it calls `moduleUnblockClientOnKey`, which calls
`moduleUnblockClient`, which adds the client to `moduleUnblockedClients`; back in `beforeSleep`,
we call `handleClientsWithPendingWritesUsingThreads`, which writes the data of buf to the client, so
`client->bufpos` becomes 0.
On the next `beforeSleep`, we call `moduleHandleBlockedClients`, which calls `unblockClient`,
which calls `reqresAppendResponse`, triggering the assert. (because the `bufpos` is 0) - see https://github.com/redis/redis/pull/12301#discussion_r1226386716
When a connection that's subscribed to a channel emits PUBLISH inside MULTI-EXEC,
the push notification messes up the EXEC response.
e.g. MULTI, PING, PUBLISH foo bar, PING, EXEC
the EXEC's response will contain: PONG, {message foo bar}, 1, and the second PONG
will be delivered outside the EXEC's response.
Additionally, this PR changes the order of responses in case of a plain PUBLISH (when
the current client also subscribed to it), by delivering the push after the command's
response instead of before it.
This also affects modules calling RM_PublishMessage in a similar way, so that we don't
run the risk of getting that push mixed together with the module command's response.
Adds API
- RedisModule_CommandFilterGetClientId()
Includes addition to commandfilter test module to validate that it works
by performing the same command from 2 different clients
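A minimal sketch of the API inside a command filter (filter registration shown for context; the filter body is illustrative):
```c
void MyCommandFilter(RedisModuleCommandFilterCtx *fctx) {
    unsigned long long id = RedisModule_CommandFilterGetClientId(fctx);
    /* e.g. tag, rewrite, or skip commands coming from a specific client id */
    (void)id;
}

/* In RedisModule_OnLoad: */
RedisModule_RegisterCommandFilter(ctx, MyCommandFilter, 0);
```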
So far, clients being blocked and unblocked by a module command would not
update the c->woff variable, and so WAIT was ineffective and got released
without waiting for the command's actions to propagate.
This seems to have existed since forever, but not for RM_BlockClientOnKeys.
It is problematic, though, to know whether the module did or didn't propagate
anything in that command, so for now, instead of adding an API, we'll
just update the woff to the latest offset when unblocking; this may
cause the client to wait excessively, but that's not that bad.
When `RM_ZsetAdd()`/`RM_ZsetIncrby()`/`RM_StreamAdd()` fails, if a new key happens to
be created using `moduleCreateEmptyKey()`, we should clean up the empty key.
## Test
1) Add new module commands(`zset.add` and `zset.incrby`) to cover `RM_ZsetAdd()`/`RM_ZsetIncrby()`.
2) Add a large-memory test to cover `RM_StreamAdd()`.
Technically, declaring a prototype with an empty declaration has been deprecated since the early days of C, but we never got a warning for it. C2x will apparently be introducing a breaking change if you are using this type of declarator, so Clang 15 has started issuing a warning with -pedantic. Although not apparently a problem for any of the compilers we build on, it feels like the right thing is to properly adhere to the C standard and use (void).
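For illustration (with a made-up function name), the change amounts to:
```c
/* Before: declares a function with an unspecified parameter list
 * (deprecated, and a breaking change under C2x). */
int moduleInitDone();

/* After: explicitly takes no arguments. */
int moduleInitDone(void);
```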
* Add RM_ReplyWithErrorFormat, which supports a format string.
Reply with an error created from a printf format and arguments.
If the error code is already passed in the string 'fmt', the error
code provided is used, otherwise the string "-ERR " for the generic
error code is automatically added.
The usage is, for example:
RedisModule_ReplyWithErrorFormat(ctx, "An error: %s", "foo");
RedisModule_ReplyWithErrorFormat(ctx, "-WRONGTYPE Wrong Type: %s", "foo");
The function always returns REDISMODULE_OK.
Add `RM_RdbLoad()` and `RM_RdbSave()` to load/save RDB files from the module API.
In our use case, we have our clustering implementation as a module. As part of this
implementation, the module needs to trigger RDB save operation at specific points.
Also, this module delivers RDB files to other nodes (not using Redis' replication).
When a node receives an RDB file, it should be able to load the RDB. Currently,
there is no module API to save/load RDB files.
This PR adds four new APIs:
```c
RedisModuleRdbStream *RM_RdbStreamCreateFromFile(const char *filename);
void RM_RdbStreamFree(RedisModuleRdbStream *stream);
int RM_RdbLoad(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
int RM_RdbSave(RedisModuleCtx *ctx, RedisModuleRdbStream *stream, int flags);
```
The first step is to create a `RedisModuleRdbStream` object. This PR provides a function to
create RedisModuleRdbStream from the filename. (You can load/save RDB with the filename).
In the future, this API can be extended if needed:
e.g., `RM_RdbStreamCreateFromFd()`, `RM_RdbStreamCreateFromSocket()` to save/load
RDB from an `fd` or a `socket`.
Usage:
```c
/* Save RDB */
RedisModuleRdbStream *stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
RedisModule_RdbSave(ctx, stream, 0);
RedisModule_RdbStreamFree(stream);
/* Load RDB */
stream = RedisModule_RdbStreamCreateFromFile("example.rdb");
RedisModule_RdbLoad(ctx, stream, 0);
RedisModule_RdbStreamFree(stream);
```