The bug was introduced in #13558.
When merging dense HLL structures, `hllDenseCompress` writes to the
wrong location, so the result ends up being zero. The unit tests didn't
cover this case.
This PR
+ fixes the bug
+ adds `PFDEBUG SIMD (ON|OFF)` for unit tests
+ adds a new TCL test to cover the cases
Synchronized from https://github.com/valkey-io/valkey/pull/1293
---------
Signed-off-by: Xuyang Wang <xuyangwang@link.cuhk.edu.cn>
Co-authored-by: debing.sun <debing.sun@redis.com>
This PR eliminates unnecessary creation and destruction of hfield
objects, ensuring only required updates or insertions are performed.
This reduces overhead and improves performance by streamlining field
management in hash dictionaries, particularly in scenarios involving
frequent updates, like the benchmarks in:
-
[memtier_benchmark-100Kkeys-load-hash-50-fields-with-100B-values](https://github.com/redis/redis-benchmarks-specification/blob/main/redis_benchmarks_specification/test-suites/memtier_benchmark-100Kkeys-load-hash-50-fields-with-100B-values.yml)
-
[memtier_benchmark-10Mkeys-load-hash-5-fields-with-100B-values-pipeline-10](https://github.com/redis/redis-benchmarks-specification/blob/main/redis_benchmarks_specification/test-suites/memtier_benchmark-10Mkeys-load-hash-5-fields-with-100B-values-pipeline-10.yml)
To test it, we can focus on the hfield-related tests:
```
tclsh tests/test_helper.tcl --single unit/type/hash-field-expire
tclsh tests/test_helper.tcl --single unit/type/hash
tclsh tests/test_helper.tcl --dump-logs --single unit/other
```
Extra check on full CI:
- [x] https://github.com/filipecosta90/redis/actions/runs/12225788759
## microbenchmark results
16.7% improvement (drop in time) in dictAddNonExistingRaw vs dictAddRaw
```
make REDIS_CFLAGS="-g -fno-omit-frame-pointer -O3 -DREDIS_TEST" -j
$ ./src/redis-server test dict --accurate
(...)
Inserting via dictAddRaw() non existing: 5000000 items in 2592 ms
(...)
Inserting via dictAddNonExistingRaw() non existing: 5000000 items in 2160 ms
```
8% improvement (drop in time) in find (non existing) and adding via
`dictGetHash()+dictFindWithHash()+dictAddNonExistingRaw()` vs
`dictFind()+dictAddRaw()`
```
make REDIS_CFLAGS="-g -fno-omit-frame-pointer -O3 -DREDIS_TEST" -j
$ ./src/redis-server test dict --accurate
(...)
Find() and inserting via dictFind()+dictAddRaw() non existing: 5000000 items in 2983 ms
Find() and inserting via dictGetHash()+dictFindWithHash()+dictAddNonExistingRaw() non existing: 5000000 items in 2740 ms
```
## benchmark results
To benchmark:
```
pip3 install redis-benchmarks-specification==0.1.250
taskset -c 0 ./src/redis-server --save '' --protected-mode no --daemonize yes
redis-benchmarks-spec-client-runner --tests-regexp ".*load-hash.*" --flushall_on_every_test_start --flushall_on_every_test_end --cpuset_start_pos 2 --override-memtier-test-time 60
```
Improvements on achievable throughput in:
test | ops/sec unstable (59953d2df6) | ops/sec this PR (24af7190fd) | % change
-- | -- | -- | --
memtier_benchmark-1key-load-hash-1K-fields-with-5B-values | 4097 | 5032 | 22.8%
memtier_benchmark-100Kkeys-load-hash-50-fields-with-100B-values | 37658 | 44688 | 18.7%
memtier_benchmark-100Kkeys-load-hash-50-fields-with-1000B-values | 14736 | 17350 | 17.7%
memtier_benchmark-1Mkeys-load-hash-5-fields-with-1000B-values-pipeline-10 | 131848 | 143485 | 8.8%
memtier_benchmark-1Mkeys-load-hash-hmset-5-fields-with-1000B-values | 82071 | 85681 | 4.4%
memtier_benchmark-1Mkeys-load-hash-5-fields-with-1000B-values | 82882 | 86336 | 4.2%
memtier_benchmark-10Mkeys-load-hash-5-fields-with-100B-values-pipeline-10 | 262502 | 273376 | 4.1%
memtier_benchmark-10Kkeys-load-hash-50-fields-with-10000B-values | 2821 | 2936 | 4.1%
---------
Co-authored-by: Moti Cohen <moticless@gmail.com>
- Add an empty-string test for the new API
`RedisModule_ACLCheckKeyPrefixPermissions`.
- Fix the order of checks in `(pattern[patternLen - 1] != '*' ||
patternLen == 0)`: the `patternLen == 0` check must come first to avoid
an out-of-bounds read.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
This PR introduces API to query Expiration time of hash fields.
# New `RedisModule_HashFieldMinExpire()`
For a given hash, retrieves the minimum expiration time across all
fields. If no fields have expiration, or if the key is not a hash, it
returns `REDISMODULE_NO_EXPIRE` (-1).
```
mstime_t RM_HashFieldMinExpire(RedisModuleKey *hash);
```
# Extension to `RedisModule_HashGet()`
Adds a new flag, `REDISMODULE_HASH_EXPIRE_TIME`, to retrieve the
expiration time of a specific hash field. If the field does not exist or
has no expiration, returns `REDISMODULE_NO_EXPIRE`. It is fully
backward-compatible (RM_HashGet retains its original behavior unless the
new flag is used).
Example:
```
mstime_t expiry1, expiry2;
RedisModule_HashGet(mykey, REDISMODULE_HASH_EXPIRE_TIME, "field1", &expiry1, NULL);
RedisModule_HashGet(mykey, REDISMODULE_HASH_EXPIRE_TIME, "field1", &expiry1, "field2", &expiry2, NULL);
```
This PR focuses on refining the listpack encoding/decoding functions
and optimizing the reply handling mechanisms related to them.
Each commit has the measured improvement up until the last accumulated
improvement of 16.3% on
[memtier_benchmark-1key-list-100-elements-lrange-all-elements-pipeline-10](https://github.com/redis/redis-benchmarks-specification/blob/main/redis_benchmarks_specification/test-suites/memtier_benchmark-1key-list-100-elements-lrange-all-elements-pipeline-10.yml)
benchmark.
Connection mode | CE Baseline (Nov 14th) 701f06657d | CE PR #13652 | CE PR vs CE Unstable
-- | -- | -- | --
TCP | 155696 | 178874 | 14.9%
Unix socket | 169743 | 197428 | 16.3%
To test it, we can focus on the replybufsize tests:
```
tclsh tests/test_helper.tcl --single unit/replybufsize
```
### Commit details:
- 2e58d048fd +
29c6c86c6b: Eliminate an indirect memory access in
lpCurrentEncodedSizeBytes and avoid passing p* to it entirely; add an
lpNextWithBytes helper function and optimize addListListpackRangeReply.
**- Improvement of 3.1%, from 168969.88 ops/sec to 174239.75 ops/sec**
- af52aacff8 Refactor lpDecodeBacklen for
loop-based decoding, improving readability and branch efficiency.
**- NO CHANGE. REVERTED in 09f6680ba0d0b5acabca537c651008f0c8ec061b**
- 048bfe4eda +
03e8ff3af7: reduce condition checks in
_addReplyToBuffer, inline it, avoid entering it when there are already
entries in the reply list, and check whether the reply length exceeds
the available buffer space before calling _addReplyToBuffer.
**- accumulated Improvement of 12.4%, from 168969.88 ops/sec to
189726.81 ops/sec**
- 9a63d4d6a9fa946505e31ecce4c7796845fc022c: always update the buf_peak
on _addReplyToBufferOrList
**- accumulated Improvement of 14.2%, from 168969.88 ops/sec to 193887
ops/sec**
- b544ade67628a1feaf714d6cfd114930e0c7670b: Introduce
lpEncodeBacklenBytes to avoid any indirect memory access on previous
usage of lpEncodeBacklen(NULL,...). inline lpEncodeBacklenBytes().
**- accumulated Improvement of 16.3%, from 168969.88 ops/sec to
197427.70 ops/sec**
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
This pull request optimizes the `dictFind` function by adding software
prefetching and branch prediction hints to improve cache efficiency and
reduce memory latency.
It introduces 2 prefetch hints (read/write) that become no-ops if the
compiler does not support them.
Baseline profiling with Intel VTune indicated that dictFind was
significantly back-end bound, with memory latency accounting for 59.6%
of clockticks, with frequent stalls from DRAM-bound operations due to
cache misses during hash table lookups.

---------
Co-authored-by: Yuan Wang <wangyuancode@163.com>
In https://github.com/redis/redis/pull/13495, we introduced a feature to
reply -LOADING while flushing a large db on a replica.
While `_dictClear()` is in progress, it calls a callback for every 65k
items and we yield back to eventloop to reply -LOADING.
This change has made some tests unstable, as those tests don't expect a
new -LOADING reply.
One observation: inside `_dictClear()`, we call the callback even if the
db has only a few keys. Most tests run with a small number of keys, so
every replication and cluster test would have to handle a potential
-LOADING reply.
This PR changes this behavior and skips calling the callback when
`i == 0` to stabilize replication tests.
The callback will be called after the first 65k items. Most tests use
fewer than 65k keys, so they won't get a -LOADING reply.
This PR introduces a new API function to the Redis Module API:
```
int RedisModule_ACLCheckKeyPrefixPermissions(RedisModuleUser *user, RedisModuleString *prefix, int flags);
```
Purpose:
The function checks if a given user has access permissions to any key
that match a specific prefix. This validation is based on the user’s ACL
permissions and the specified flags.
Note, this prefix-based API may fail to detect prefixes that are
individually uncovered but collectively covered by the patterns. For
example, the prefix `ID-*` is not fully included in the pattern
`ID-[0]*`, nor in the pattern `ID-[^0]*`, but it is fully included in
the set of patterns `{ID-[0]*, ID-[^0]*}`.
1. Ubuntu Lunar reached end of life on January 25, 2024, so upgrade the
Ubuntu version to plucky in the `test-ubuntu-jemalloc-fortify` action to
pass the daily CI.
2. The macOS 12 environment is deprecated, so upgrade macos-12 to
macos-13 in the daily CI.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
### Summary
By profiling a 1KiB 100% GET use-case under high pipelining, we can see
that addReplyBulk and its inner calls take 8.30% of the CPU cycles. This
PR reduces the CPU time spent in addReplyBulk by 2.2 to 4 percentage
points. Specifically for GET use-cases, we saw an improvement of 2.7% to
9.1% in the achievable ops/sec.
### Improvement
By reducing the duplicate work we can improve by around 2.7% on sds
encoded strings, and around 9% on int encoded strings. This PR does the
following:
- Avoid a duplicate sdslen() call in addReplyBulk() for sds-encoded
objects
- Avoid a duplicate sdigits10() call for int-encoded objects in
addReplyBulk()
- Avoid the final "\r\n" addReplyProto() call for the OBJ_ENCODING_INT
type in addReplyBulk()
Altogether, these improvements result in the following gains in
achievable ops/sec:
Encoding | unstable (commit 9906daf5c9) | this PR | % improvement
-- | -- | -- | --
1KiB Values string SDS encoded | 1478081.88 | 1517635.38 | 2.7%
Values string "1" OBJ_ENCODING_INT | 1521139.36 | 1658876.59 | 9.1%
### CPU Time: Total of addReplyBulk
Encoding | unstable (commit 9906daf5c9) | this PR | reduction of CPU Time: Total
-- | -- | -- | --
1KiB Values string SDS encoded | 8.30% | 6.10% | 2.2%
Values string "1" OBJ_ENCODING_INT | 7.20% | 3.20% | 4.0%
### To reproduce
Run redis with unix socket enabled
```
taskset -c 0 /root/redis/src/redis-server --unixsocket /tmp/1.socket --save '' --enable-debug-command local
```
#### 1KiB Values string SDS encoded
Load data
```
taskset -c 2-5 memtier_benchmark --ratio 1:0 -n allkeys --key-pattern P:P --key-maximum 1000000 --hide-histogram --pipeline 10 -S /tmp/1.socket
```
Benchmark
```
taskset -c 2-6 memtier_benchmark --ratio 0:1 -c 1 -t 5 --test-time 60 --hide-histogram -d 1000 --pipeline 500 -S /tmp/1.socket --key-maximum 1000000 --json-out-file results.json
```
#### Values string "1" OBJ_ENCODING_INT
Load data
```
$ taskset -c 2-5 memtier_benchmark --command "SET __key__ 1" -n allkeys --command-key-pattern P --key-maximum 1000000 --hide-histogram -c 1 -t 1 --pipeline 100 -S /tmp/1.socket
# confirm we have the expected reply and format
$ redis-cli get memtier-1
"1"
$ redis-cli debug object memtier-1
Value at:0x7f14cec57570 refcount:2147483647 encoding:int serializedlength:2 lru:2861503 lru_seconds_idle:8
```
Benchmark
```
taskset -c 2-6 memtier_benchmark --ratio 0:1 -c 1 -t 5 --test-time 60 --hide-histogram -d 1000 --pipeline 500 -S /tmp/1.socket --key-maximum 1000000 --json-out-file results.json
```
Starting from https://github.com/redis/redis/pull/13133, we allocate a
jemalloc thread cache and use it for lua vm.
On certain cases, like `script flush` or `function flush` command, we
free the existing thread cache and create a new one.
Though, for `function flush`, we were not actually destroying the
existing thread cache itself. Each call creates a new thread cache in
jemalloc, and we leak the previous thread cache instances. Jemalloc
allows at most 4096 thread cache instances. If we reach this limit,
Redis prints the "Failed creating the lua jemalloc tcache" log and
aborts.
There are other cases that can cause this memory leak, including
replication scenarios when emptyData() is called.
The implication is that it looks like redis `used_memory` is low, but
`allocator_allocated` and RSS remain high.
Co-authored-by: debing.sun <debing.sun@redis.com>
PR #10285 introduced support for modules to register four types of
configurations — Bool, Numeric, String, and Enum — accessible through
the Redis config file and the CONFIG command.
With this PR, it becomes possible to register configuration parameters
without automatically prefixing the parameter names. This provides
greater flexibility in configuration naming, enabling, for instance,
either `bf-initial-size` or `initial-size` to be defined in the module
without the automatic `<MODULE-NAME>.` prefix. In addition, it is also
possible to create a single additional alias via the same API. This
brings us another step closer to integrating modules into the Redis
core.
**Example:** Register a configuration parameter `bf-initial-size` with
an alias `initial-size` without the automatic module name prefix, set
with new `REDISMODULE_CONFIG_UNPREFIXED` flag:
```
RedisModule_RegisterBoolConfig(ctx, "bf-initial-size|initial-size", default_val, optflags | REDISMODULE_CONFIG_UNPREFIXED, getfn, setfn, applyfn, privdata);
```
# API changes
Related functions that now support unprefixed configuration flag
(`REDISMODULE_CONFIG_UNPREFIXED`) along with optional alias:
```
RedisModule_RegisterBoolConfig
RedisModule_RegisterEnumConfig
RedisModule_RegisterNumericConfig
RedisModule_RegisterStringConfig
```
# Implementation Details:
`config.c`: On load server configuration, at function
`loadServerConfigFromString()`, it collects all unknown configurations
into `module_configs_queue` dictionary. These may include valid module
configurations or invalid ones. They will be validated later by
`loadModuleConfigs()` against the configurations declared by the loaded
module(s).
`module.c`: The `ModuleConfig` structure has been modified to now store:
(1) the full configuration name, (2) the alias, and (3) the unprefixed
flag status, ensuring that configurations retain their original
registration format when triggered in notifications.
Added error printout:
This change introduces an error printout for unresolved configurations,
detailing each unresolved parameter detected during startup. The last
line in the output existed prior to this change and has been retained
because systems rely on it:
```
595011:M 18 Nov 2024 08:26:23.616 # Unresolved Configuration(s) Detected:
595011:M 18 Nov 2024 08:26:23.616 # >>> 'bf-initiel-size 8'
595011:M 18 Nov 2024 08:26:23.616 # >>> 'search-sizex 32'
595011:M 18 Nov 2024 08:26:23.616 # Module Configuration detected without loadmodule directive or no ApplyConfig call: aborting
```
# Backward Compatibility:
Existing modules will function without modification, as the new
functionality only applies if REDISMODULE_CONFIG_UNPREFIXED is
explicitly set.
# Module vs. Core API Conflict Behavior
The new API allows to modules loading duplication of same configuration
name or same configuration alias, just like redis core configuration
allows (i.e. the users sets two configs with a different value, but
these two configs are actually the same one). Unlike redis core, given a
name and its alias, it doesn't allow have both configuration on load. To
implement it, it is required to modify DS `module_configs_queue` to
reflect the order of their loading and later on, during
`loadModuleConfigs()`, resolve pairs of names and aliases and which one
is the last one to apply. "Relaxing" this limitation can be deferred to
a future update if necessary, but for now, we error in this case.
To complement the work done in #13133:
it added the script VMs' memory to be counted as part of zmalloc, but
that means it should also be counted as part of the non-value overhead.
This commit contains some refactoring to make variable names and
function names less confusing.
It also adds a new field named `script.VMs` to the `MEMORY STATS`
command.
Additionally, it clears scripts and stats between tests in external mode
(which is related to how this issue was discovered).
Fix for https://github.com/redis/redis/issues/13650:
providing an invalid config to a module with a datatype caused a crash
when Redis tried to unload the module due to the invalid config.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
Inspired by https://github.com/redis/redis/pull/12730
Before this PR, we allocated new memory to store the user command
arguments; however, if the size of the current `c->argv` is larger than
what the current command needs, we can reuse the previously allocated
argv and avoid allocating new memory for the current command.
We free `c->argv` in the client cron when the client has been idle for 2
seconds.
---------
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Improve the performance of crc64 for large batches by processing a
large number of bytes in parallel and combining the results.
---------
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Josiah Carlson <josiah.carlson@gmail.com>
If `hide-user-data-from-log` config is enabled, we don't print client
argv in the crashlog to avoid leaking user info.
Though, debugging a crash becomes harder as we don't see the command
arguments causing the crash.
With this PR, we'll print command tokens to the log. Since we have
command tokens defined in the JSON schema for each command, we can use
this data to find the tokens in the client argv.
e.g.
`SET key value GET EX 10` ---> we'll print `SET * * GET EX *` in the
log.
Modules should declare their command structure via
`RM_SetCommandInfo()`.
Then, on a crash, we'll be able to know the module command's tokens.
This PR replaces old .../topics/... links with current links,
specifically for the modules-api-ref.md file and the new automation that
Paolo Lazzari is working on. A few of the topics links have redirects,
but some don't. Best to use updated links.
`lpCompare()` in `quicklistCompare()` calls `lpGet()` again, which is
wasteful.
The change results in a boost for all commands that use
`quicklistCompare()`, including `LINSERT`, `LPOS`, and `LREM`.
This PR adds a new section to the `INFO` command output, called
`keysizes`. This section provides detailed statistics on the
distribution of key sizes for each data type (strings, lists, sets,
hashes and zsets) within the dataset. The distribution is tracked using
a base-2 logarithmic histogram.
# Motivation
Currently, Redis lacks a built-in feature to track key sizes and item
sizes per data type at a granular level. Understanding the distribution
of key sizes is critical for monitoring memory usage and optimizing
performance, particularly in large datasets. This enhancement will allow
users to inspect the size distribution of keys directly from the `INFO`
command, assisting with performance analysis and capacity planning.
# Changes
New Section in `INFO` Command: A new section called `keysizes` has been
added to the `INFO` command output. This section reports a per-database,
per-type histogram of key sizes. It provides insights into how many keys
fall into specific size ranges (represented in powers of 2).
**Example output:**
```
127.0.0.1:6379> INFO keysizes
# Keysizes
db0_distrib_strings_sizes:1=19,2=655,512=100899,1K=31,2K=29,4K=23,8K=16,16K=3,32K=2
db0_distrib_lists_items:1=5784492,32=3558,64=1047,128=676,256=533,512=218,4K=1,8K=42
db0_distrib_sets_items:1=735564=50612,8=21462,64=1365,128=974,2K=292,4K=154,8K=89,
db0_distrib_hashes_items:2=1,4=544,32=141169,64=207329,128=4349,256=136226,1K=1
```
## Future Use Cases:
The key size distribution is collected per slot as well, laying the
groundwork for future enhancements related to Redis Cluster.
The crash happens whenever the user isn't accessible; for example, it
isn't set for the context (when the context is temporary), or in some
other cases like `notifyKeyspaceEvent`. To properly check ACL
compliance, we need the user name and the user object to invoke other
APIs. However, that is not possible if it crashes, and it is impossible
to work around in the code since we don't know (and **shouldn't
know**!) when the user is available and when it is not.
From 7.4, Redis allows `GET` options in cluster mode when the pattern maps to
the same slot as the key, but the `GET #` pattern, which represents the key
itself, was missed.
This commit resolves it; bug report: #13607.
---------
Co-authored-by: Yuan Wang <yuan.wang@redis.com>
Nowadays, the popcnt instruction is supported by almost all x86
machines. It is used to calculate the Hamming weight and can bring a
significant performance boost to the Redis BITCOUNT command.
---------
Signed-off-by: hanhui365(hanhui@hygon.cn)
Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: oranagra <oran@redislabs.com>
Co-authored-by: Nugine <nugine@foxmail.com>
After running tests locally, a file named `.rediscli_history_test` is
left behind; it is not in the `.gitignore` file, so git considers the
code base changed. That is a little annoying, so this commit cleans up
the temporary file.
We should delete `.rediscli_history_test` at the end, since the second
server tests also write to it; correspondingly, the
`set ::env(REDISCLI_HISTFILE) ".rediscli_history_test"` is placed at the
beginning.
Maybe we could also add this file to `.gitignore`.
- Add a new 'EXPERIMENTAL' command flag, which causes the command
generator to skip over it and makes the command unavailable for
execution
- Skip experimental tests by default
- Move the SFLUSH tests from the old framework to the new one
---------
Co-authored-by: YaacovHazan <yaacov.hazan@redislabs.com>
1. `dbRandomKey`: excessive call to `dbFindExpires` (it will always
return 1 if `allvolatile`, and it is called inside `expireIfNeeded`
anyway)
2. Add `deleteKeyAndPropagate` that is used by both expiry/eviction
3. Change the order of calls in `expireIfNeeded` to save redundant calls
to `keyIsExpired`
4. `expireIfNeeded`: move `OBJ_STATIC_REFCOUNT` to
`deleteKeyAndPropagate`
5. `performEvictions` now uses `deleteEvictedKeyAndPropagate`
6. active-expire: moved `postExecutionUnitOperations` inside
`activeExpireCycleTryExpire`
7. `activeExpireCycleTryExpire`: less indentation + expire a key if `now
== t`
8. rename `lazy_expire_disabled` to `allow_access_expired`
This PR introduces a new `SFLUSH` command in cluster mode that allows
partial flushing of nodes based on specified slot ranges. The current
implementation is designed to flush all slots of a shard, but future
extensions could allow for more granular flushing.
**Command Usage:**
`SFLUSH <start-slot> <end-slot> [<start-slot> <end-slot>]* [SYNC|ASYNC]`
This command removes all data from the specified slots, either
synchronously or asynchronously depending on the optional SYNC/ASYNC
argument.
**Functionality:**
The current implementation of `SFLUSH` verifies that the provided slot
ranges are valid and cover all of the node's slots before proceeding. If
slots are partially or incorrectly specified, the command fails and
returns an error, ensuring that all slots of a node are fully covered
before the flush proceeds.
The function supports both synchronous (default) and asynchronous
flushing. In addition, if possible, SFLUSH SYNC will be run as blocking
ASYNC as an optimization.
This PR is based on valkey-io/valkey#829
Previously, ZUNION and ZUNIONSTORE commands used a temporary accumulator dict
and at the end copied it as-is to dstzset->dict. This PR removes accumulator and directly
stores into dstzset->dict, eliminating the extra copy.
Co-authored-by: Rayacoo <zisong.cw@alibaba-inc.com>
Test 1 - give more time for expiration
Test 2 - Evaluate expiration time boundaries [+1,+2] before setting expiration [+1]
Test 3 - Avoid race on test HFEs propagated to replica
The PR extends `RedisModule_OpenKey`'s flags with
`REDISMODULE_OPEN_KEY_ACCESS_EXPIRED`, which allows access to expired
keys.
It also allows access to expired subkeys (currently relevant only for
hash fields) and affects `RM_HashGet` and `RM_Scan`.
This PR introduces the installation of the `musl`-based version of Rust,
in order to support alpine-based runtime environments (Rust is used by
[RedisJSON](https://github.com/RedisJSON/RedisJSON)).
Instead of adding runtime logic to decide which prefix/shared object to
use when building the reply, we can simply use an inline method to avoid
the runtime overhead of condition checks while keeping the code change
small.
Preliminary data shows improvements on commands that heavily rely on
bulk/mbulk replies (for example, LRANGE).
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
Fixes #8825
We're using the fast_float library[1] in our (compiled-in)
fast_float_strtod implementation for faster and more portable parsing
of 64-bit floating-point decimal strings.
The single file fast_float.h is an amalgamation of the entire library,
which can be (re)generated with the amalgamate.py script (from the
fast_float repository) via the command:
```
python3 ./script/amalgamate.py --license=MIT > $REDIS_SRC/deps/fast_float/fast_float.h
```
[1]: https://github.com/fastfloat/fast_float
The used commit from fast_float library was the one from
https://github.com/fastfloat/fast_float/releases/tag/v3.10.1
---------
Co-authored-by: fcostaoliveira <filipe@redis.com>
Similar to #13530 , applied to HSCAN and ZSCAN in case of listpack
encoding.
**Preliminary benchmark results showcase an improvement of 108% on the
achievable ops/sec for ZSCAN and 65% for HSCAN**.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>