PR https://github.com/redis/redis/pull/13916 introduced a regression:
by overriding the `CFLAGS` and `LDFLAGS` variables for all of the
dependencies, hiredis and fast_float lost some of their compiler/linker
flags.
This PR makes it possible to pass additional CFLAGS/LDFLAGS to hiredis
without overriding its own, since it has a somewhat more complex
Makefile. As for fast_float, passing CFLAGS/LDFLAGS from the outside no
longer breaks the expected behavior.
The CI build step was changed so that macOS is now built with TLS, to
catch such errors in the future.
In pipeline mode, especially with TLS, two IO threads may perform worse
than a single thread. One reason is that the IO threads and the main
thread cannot process in parallel. Now, an IO thread delivers clients
to the main thread once its pending client list holds more than 16
clients, instead of finishing all of its clients first. This lets the
IO threads and the main thread work in parallel as much as possible.
IO threads may also issue unnecessary notifications to the main thread.
The notification is based on an eventfd, and the read(2) and write(2)
system calls on it are costly. While a thread is running, it can check
the pending client list in `beforeSleep`, so with this commit, if both
the main thread and the IO thread are running, they hand off the client
without a notification, and the transferred clients are processed in
`beforeSleep`.
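A minimal sketch of these two ideas (the names and shared state below
are illustrative, not the actual Redis code):
```c
#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>

#define DELIVER_THRESHOLD 16  /* hand off once this many clients queue up */

/* Illustrative stand-ins for the real shared state. */
extern _Atomic int main_thread_running; /* nonzero while main thread works */
extern int wakeup_eventfd;              /* eventfd shared with main thread */

/* Called by an IO thread after moving a batch of up to DELIVER_THRESHOLD
 * processed clients onto the shared pending list. */
static void deliverPendingClients(void) {
    if (!atomic_load(&main_thread_running)) {
        /* The main thread may be blocked in its event loop: pay for one
         * write(2) on the eventfd to wake it up. */
        uint64_t one = 1;
        if (write(wakeup_eventfd, &one, sizeof(one)) == -1) {
            /* Best effort: the main thread will still find the clients
             * on its next beforeSleep pass. */
        }
    }
    /* Otherwise skip the syscall entirely: the running main thread will
     * drain the pending list in beforeSleep anyway. */
}
```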
Hi all, this PR fixes two things:
1. An assertion that prevented RDB loading from recovering when there
was a quantization type mismatch (with a regression test).
2. Two code paths that just returned NULL without proper cleanup during
RDB loading.
The idea of packing the key (`sds`), value (`robj`), and optionally the
TTL into a single struct in memory was mentioned a few times in the
past by the community, in various flavors. This approach improves
memory efficiency, reduces pointer dereferences for faster lookups, and
simplifies expiration management by keeping all relevant data in one
place. This change goes along with setting the keyspace's dict to
no_value=1, saving a considerable amount of memory.
Two more motivations that align well with this unification are:
- Preparing the groundwork for replacing the scan-based EXPIRE
implementation and evaluating instead the new `ebuckets` data structure
that was introduced as part of the [Hash Field Expiration
feature](https://redis.io/blog/hash-field-expiration-architecture-and-benchmarks/).
Using this data structure requires embedding the ExpireMeta structure
within each object.
- Considering the replacement of dict with a more space-efficient
open-addressing hash table that might rely on keeping a single pointer
to the object.
Before this PR, I POC'ed a variant of an open-addressing hash table and
was surprised to find that dict with no_value could actually provide a
good balance between performance, memory efficiency, and simplicity.
This realization prompted separating the unification step from the
evaluation of a new hash table, to avoid introducing too many changes
at once and to evaluate its impact independently before considering a
replacement of the existing hash table. In an earlier
[commit](https://github.com/redis/redis/pull/13683) I extended the dict
no_value optimization (which avoids keeping a dictEntry where possible)
to also apply to objects with even addresses in memory. Combining it
with this unification saves a considerable amount of memory for the
keyspace.
# kvobj
This PR adopts Valkey’s
[packing](3eb8314be6)
layout and logic for key, value, and TTL. However, unlike Valkey's
implementation, which retained a common `robj` throughout the project,
this PR distinguishes between the general-purpose, overused `robj` and
the new `kvobj`, which embeds both the key and the value and is used by
the keyspace. Conceptually, `robj` serves as a base class, while
`kvobj` acts as a derived class.
Two new flags are introduced into the Redis object, `iskvobj` and
`expirable`:
```
struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:LRU_BITS;
    unsigned iskvobj : 1;    /* new flag */
    unsigned expirable : 1;  /* new flag */
    unsigned refcount : 30;  /* modified: 32 bits -> 30 bits */
    void *ptr;
};

typedef struct redisObject robj;
typedef struct redisObject kvobj;
```
When the `iskvobj` flag is set, the object also includes the key, which
is appended to the end of the object. If the `expirable` flag is set,
an additional 8 bytes are added to the object. If the object is of type
string, and the string is rather short, then it is embedded as well.
As a result, all keys in the keyspace are promoted to be of type
`kvobj`. This term attempts to align with the existing Redis object,
robj, and the kvstore data structure.
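For illustration, a hedged sketch of an accessor over this layout; the
exact field placement in the PR may differ (here the 8-byte expire time
is assumed to sit immediately after the fixed `robj` header when
`expirable` is set, with the embedded key following it):
```c
#include <string.h>

/* Illustrative accessor: returns the absolute expire time in ms, or -1
 * if the key has no TTL. Uses memcpy because the trailing bytes are not
 * necessarily aligned for a direct 64-bit load. */
static long long kvobjGetExpire(const kvobj *kv) {
    long long when;
    if (!kv->expirable) return -1;
    memcpy(&when, (const char *)(kv + 1), sizeof(when));
    return when;
}
```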
# EXPIRE Implementation
As `kvobj` embeds the expiration time as well, looking up an expiration
time is now an O(1) operation. The EXPIRE hash table is now set to
`no_value` mode, directly referencing `kvobj` entries, which in turn
saves memory.
Next, I plan to evaluate replacing the EXPIRE implementation with the
[ebuckets](https://github.com/redis/redis/blob/unstable/src/ebuckets.h)
data structure, which would eliminate keyspace scans for expired keys.
This requires embedding `ExpireMeta` within each `kvobj` of each key
with expiration. In such an implementation, the `expirable` flag will
be shifted to indicate whether `ExpireMeta` is attached.
# Implementation notes
## Manipulating keyspace (find, modify, insert)
Initially, unifying the key and value into a single object and storing
it in dict with `no_value` optimization seemed like a quick win.
However, it (quickly) became clear that this change required deeper
modifications to how keys are manipulated. The challenge was handling
cases where the dictEntry is opted out due to the no_value
optimization. In such cases, many of the APIs that return a dictEntry
from a lookup become insufficient, as the returned pointer might just
be the key itself. To address this, a new-old approach was adopted:
returning a "link" to the looked-up key's `dictEntry` instead of the
`dictEntry` itself. The term "link" was already somewhat available in
the dict API, and it aligns well with the new dictEntLink declaration:
```
typedef dictEntry **dictEntLink;
```
This PR introduces two new dict API functions that leverage the link
returned from a search:
```
dictEntLink dictFindLink(dict *d, const void *key, dictEntLink *bucket);
void dictSetKeyAtLink(dict *d, void *key, dictEntLink *link, int newItem);
```
After calling `link = dictFindLink(...)`, any necessary update must be
performed immediately afterward by calling `dictSetKeyAtLink()`,
without any intervening operations on the given dict; otherwise, the
`dictEntLink` may become invalid. Example:
```
/* Replace existing key */
link = dictFindLink(d, key, &bucket);
// ... Do something, but don't modify the dict ...
// assert(link != NULL);
dictSetKeyAtLink(d, kv, &link, 0);

/* Add a new value (if there is no space for the new key, the dict will
   be expanded and the bucket will be looked up again.) */
link = dictFindLink(d, key, &bucket);
// ... Do something, but don't modify the dict ...
// assert(link == NULL);
dictSetKeyAtLink(d, kv, &bucket, 1);
```
## dict.h
- The dict API had become cluttered with many unused functions. I have
removed these from dict.h.
- Additionally, APIs specifically related to hash maps (no_value=0),
primarily those handling key-value access, have been gathered and
isolated.
- Entirely removed internal functions ending with `*ByHash()` that were
originally added as optimizations and are not required any more.
- A few other legacy dict functions were adapted at the API level to
work with the dictEntLink term as well.
- Simplified and generalized an optimization related to comparing the
lengths of string keys.
## Hash Field Expiration
Until now, each hash object with expiration on fields needed to
maintain a reference to its key name (the name of the hash object), so
that if it were active-expired, the key name could still be resolved
for the notification's sake. Now there is no need anymore.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
## Description
Memory sanitizer (MSAN) is used to detect use-of-uninitialized memory
issues. While Address Sanitizer catches a wide range of memory safety
issues, it doesn't specifically detect uninitialized memory usage.
Therefore, Memory Sanitizer complements Address Sanitizer. This PR
adds an MSAN run to the daily build, with the possibility of
incorporating it into the ci.yml workflow in the future if needed.
Changes in the source files fix false-positive issues and should not
introduce any runtime implications.
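For context, a common shape for such false-positive fixes (illustrative
of the technique, not necessarily the exact change in this PR) is to
mark memory that MSAN cannot see being written, e.g. by a syscall
wrapper, as initialized, using clang's public sanitizer interface. The
`redis_msan_unpoison` macro name here is hypothetical:
```c
/* Compile-time detection of MSAN; __msan_unpoison() comes from clang's
 * sanitizer interface and marks a byte range as initialized. */
#if defined(__has_feature)
#  if __has_feature(memory_sanitizer)
#    include <sanitizer/msan_interface.h>
#    define redis_msan_unpoison(p, n) __msan_unpoison((p), (n))
#  endif
#endif
#ifndef redis_msan_unpoison
#  define redis_msan_unpoison(p, n) ((void)(p), (void)(n))
#endif
```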
Note: Valgrind performs checks similar to both ASAN and MSAN, but the
sanitizers run significantly faster.
## Limitations
- Memory sanitizer is only supported by Clang.
- MSAN documentation states that all dependencies, including the
standard library, must be compiled with MSAN. However, it also mentions
there are interceptors for common libc functions, so compiling the
standard library with the MSAN flag is not strictly necessary.
Therefore, we are not compiling libc with MSAN.
---------
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
We can reclaim the page cache memory used by the AOF file after
loading, since we won't read the AOF again; this corresponds to
https://github.com/redis/redis/pull/11248
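The mechanism (as in #11248) is to advise the kernel that the file's
cached pages are no longer needed. A hedged sketch, with an
illustrative function name:
```c
#include <fcntl.h>

/* Ask the kernel to drop the page cache for a file region we will not
 * read again. Best effort: on platforms without POSIX_FADV_DONTNEED
 * this is a no-op. */
static int reclaimFilePageCache(int fd, off_t offset, off_t len) {
#if defined(POSIX_FADV_DONTNEED)
    return posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
#else
    (void)fd; (void)offset; (void)len;
    return 0;
#endif
}
```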
In a test after loading a 9.5GB AOF, this PR uses much less
`buff/cache` than unstable.
**Unstable**
```
$ free -m
               total        used        free      shared  buff/cache   available
Mem:           31293       16181        4562          13       10958       15111
Swap:              0           0           0
```
**This PR**
```
$ free -m
               total        used        free      shared  buff/cache   available
Mem:           31293       15391       15854          13         439       15902
Swap:              0           0           0
```
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
The PR aims to improve the README's usability for new users as well as
for developers looking to go in depth.
Key improvements include:
- **Structure & Navigation:**
  - Introduces a detailed Table of Contents for easier navigation.
  - Improved overall organization of sections.
- **Content:**
  - Expanded "What is Redis?" with a section on "Key use cases"
  - Expanded the "Why choose Redis?" section
  - New "Getting started" section, including Redis starter projects and
    an ordering of sections based on the desired use for new users
  - Changes to the "Redis data types, processing engines, and
    capabilities" section for better readability and consistency
  - Formatted markdown code blocks to specify their language
There are several issues with maintaining histogram counters.
Ideally, the hooks would be placed in the low-level data-type
implementations. However, this logic is triggered in various contexts
and doesn't always map directly to a stored DB key. As a result, the
hooks sit closer to the high-level command layer. It's a bit messy, but
the right way to ensure the histogram counters behave correctly is
through broad test coverage.
* Fix inaccuracies around deletion scenarios.
* Fix inaccuracies around module calls. Added corresponding tests.
* The info-keysizes.tcl test has been extended to operate on meaningful
datasets.
* Validate histogram correctness in edge cases involving collection
deletions.
* Add a new macro, debugServerAssert(), effective only if compiled with
DEBUG_ASSERTIONS (see the sketch below).
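A minimal sketch of the macro's intent (the actual definition in the PR
may differ): it expands to a real server assertion only in builds
compiled with DEBUG_ASSERTIONS, and costs nothing otherwise.
```c
#ifdef DEBUG_ASSERTIONS
#define debugServerAssert(cond) serverAssert(cond)
#else
#define debugServerAssert(cond) ((void)0)
#endif
```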
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
Now that we have the RDB channel (https://github.com/redis/redis/pull/13732),
the child process can transfer the RDB in the background instead of it
being handled by the main thread. So when redis-cli gets an RDB from
the server, we can adopt this approach to reduce the main thread's
load.
---------
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
This PR adds support for REDISMODULE_OPTIONS_HANDLE_IO_ERRORS, and
tests for short reads and corrupted RESTORE payloads.
Please note that I also removed the comment about async loading
support, since we should already be covered: there is no manipulation
of global data structures in Vector Sets, except for the unique ID used
to create new vector sets with different IDs.
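For reference, a module opts in to this behavior via the standard
module API; a minimal sketch of an OnLoad doing so (module name and
version are placeholders):
```c
#include "redismodule.h"

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "mymodule", 1, REDISMODULE_APIVER_1) == REDISMODULE_ERR)
        return REDISMODULE_ERR;
    /* Tell Redis this module handles RDB I/O errors itself, instead of
     * the server aborting on short reads / corrupted payloads. */
    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_IO_ERRORS);
    return REDISMODULE_OK;
}
```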
Closes #13973
This PR fixes two bugs.
1) `overhead_hashtable_lut` isn't updated correctly.
This bug was introduced by https://github.com/redis/redis/pull/12913
We only updated `overhead_hashtable_lut` at the beginning and end of
rehashing, but we forgot to update it when a dict is emptied or
released.
This PR introduces a new `bucketChanged` callback to track changes in
the bucket size.
The `rehashingStarted` and `rehashingCompleted` callbacks are no longer
responsible for bucket changes; these are handled entirely by
`bucketChanged`. This also avoids having to register three callbacks to
track bucket-size changes; now only one is needed.
In most cases it is triggered together with `rehashingStarted` or
`rehashingCompleted`, except when a dict is being emptied or released:
in these cases, even if the dict is not rehashing, we still need to
subtract its current size.
On the other hand, `overhead_hashtable_lut` duplicated `bucket_count`,
so we remove `overhead_hashtable_lut` and use `bucket_count` instead.
Note that this bug only happens in cluster mode, because we don't use
KVSTORE_FREE_EMPTY_DICTS without cluster mode.
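The accounting idea, reduced to a sketch (types and callback signature
are illustrative, not the exact ones in the PR): every bucket-size
change, whether from rehashing, emptying, or releasing a dict, flows
through one hook that adjusts a single counter.
```c
typedef struct kvstoreStats {
    long long bucket_count; /* replaces overhead_hashtable_lut */
} kvstoreStats;

/* Illustrative callback: delta is positive when a bucket table is
 * allocated (e.g. rehash start) and negative when one is freed (rehash
 * end, or a dict being emptied/released while not rehashing). */
static void onBucketChanged(kvstoreStats *stats, long long delta) {
    stats->bucket_count += delta;
}
```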
2) The size of `dict_size_index` was counted twice in memory usage.
`dict_size_index` is created at startup, so its memory usage is already
counted in `used_memory_startup`.
However, when counting the overhead, we repeated the calculation, which
could cause the reported overhead to exceed the total memory usage.
---------
Co-authored-by: Yuan Wang <yuan.wang@redis.com>
The log message incorrectly referred to the expected state as
`RECEIVE_PSYNC`,
while it should be `RECEIVE_PSYNC_REPLY`. This aligns the log with the
actual state check.
From a flame graph, we can see that `ERR_clear_error` costs a lot of
CPU in TLS mode. Some calls to `ERR_clear_error` are duplicates: in
`tlsHandleEvent` we call `ERR_clear_error`, but we also call it when
reading and writing, so the extra call is not necessary.
In benchmarks, this commit brings a 2-3% performance improvement.
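An illustrative pattern of the change (not the exact Redis code): clear
OpenSSL's error queue once, immediately before the SSL call whose error
reporting depends on a clean queue, rather than again in the
surrounding event handler.
```c
#include <openssl/err.h>
#include <openssl/ssl.h>

/* Read from a TLS connection. ERR_clear_error() is needed so that a
 * subsequent SSL_get_error() reflects only this call; clearing here
 * makes an additional call in the event handler redundant. */
static int tlsRead(SSL *ssl, void *buf, int len) {
    ERR_clear_error();
    return SSL_read(ssl, buf, len);
}
```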
This PR fixes an issue in the CI test for client-output-buffer-limit,
which was causing an infinite loop when running on macOS 15.4.
### Problem
This test starts two clients, R and R1:
```
R1 subscribe foo
R publish foo bar
```
When R executes `PUBLISH foo bar`, the server first stores the message
`bar` in R1's reply buffer (`buf`). Only when the space in `buf` is
insufficient does it call `_addReplyProtoToList`.
Inside this function, `closeClientOnOutputBufferLimitReached` is
invoked to check whether client R1's output buffer has reached its
configured limit.
On macOS 15.4, because the server writes to the client at high speed,
R1's `buf` never gets full. As a result,
`closeClientOnOutputBufferLimitReached` is never triggered, so the test
never exits and falls into an infinite loop.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
This PR replaces cJSON with a home-made parser designed for the kind of
access pattern the FILTER option of VSIM performs on JSON objects. The
main points here are:
* cJSON forces us to parse the whole JSON and create a graph of cJSON
objects, and then we need an O(N) seek to find the right field.
* The cJSON object associated with the value is not in the same format
the expr.c virtual machine uses, so we needed a conversion function
doing more allocation and work.
* Right now we only support top-level fields in the JSON object, so a
full parser is not needed.
With all these things in mind, and after carefully profiling the old
code, I realized that a specialized parser, able to parse JSON in a
zero-allocation fashion and to actually parse only the value associated
with our key, would be much more efficient. Moreover, after this
change, Vector Sets' dependencies on external code drop to zero, and
the line count is 3000 lines smaller. The new LOC count is 4200, making
Vector Sets easily the smallest full-featured implementation of a
vector store available.
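To make the approach concrete, here is a hedged, self-contained sketch
of a zero-allocation "skipping" lookup over a top-level JSON object
(illustrative code, not the parser shipped in this PR):
```c
#include <string.h>

/* Skip a JSON string: `p` points at the opening quote; returns the char
 * after the closing quote, or NULL on truncated input. */
static const char *skipString(const char *p) {
    for (p++; *p; p++) {
        if (*p == '\\' && p[1]) { p++; continue; }
        if (*p == '"') return p + 1;
    }
    return NULL;
}

/* Return a pointer to the value of top-level key `field` in `json`, or
 * NULL if not found. No allocations: nested containers are skipped by
 * tracking bracket depth only, never parsed. */
static const char *seekField(const char *json, const char *field) {
    size_t flen = strlen(field);
    const char *p = strchr(json, '{');
    if (!p) return NULL;
    p++;
    while (*p) {
        while (*p == ' ' || *p == '\t' || *p == '\n' || *p == '\r' || *p == ',') p++;
        if (*p != '"') return NULL;           /* end of object or malformed */
        const char *kstart = p + 1;
        if (!(p = skipString(p))) return NULL;
        size_t klen = (size_t)(p - kstart) - 1;
        while (*p == ' ' || *p == '\t') p++;
        if (*p++ != ':') return NULL;
        while (*p == ' ' || *p == '\t') p++;
        if (klen == flen && memcmp(kstart, field, flen) == 0)
            return p;                         /* found: points at the value */
        /* Not our key: skip the value without parsing it. */
        if (*p == '"') {
            if (!(p = skipString(p))) return NULL;
        } else {
            int depth = 0;
            while (*p) {
                if (*p == '"') { if (!(p = skipString(p))) return NULL; continue; }
                if (*p == '{' || *p == '[') depth++;
                else if (*p == '}' || *p == ']') {
                    if (depth-- == 0) return NULL; /* object ended: not found */
                } else if (*p == ',' && depth == 0) break;
                p++;
            }
        }
    }
    return NULL;
}
```
For example, `seekField("{\"age\": 41, \"name\": \"x\"}", "name")`
returns a pointer to the opening quote of `"x"`.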
# Speedup achieved
In a dataset with JSON objects with 30 fields and 1 million elements,
the following query shows a 3.5x speedup:
```
vsim vectors:million ele ele943903 FILTER ".field29 > 1000 and .field15 < 50"
```
Please note that we get a **3.5x speedup** in the VSIM command itself,
which means the actual JSON parsing speedup is significantly greater
than that. However, in Redis land, under my past kingdom of many years
ago, the rule was that an improvement should produce speedups that are
*user facing*. This PR definitely qualifies.
What is interesting is that even with a JSON containing a single
element, the speedup is about 70%, so we are faster even in the worst
case.
# Further info
Note that the new skipping parser may happily process JSON objects that
are not perfectly valid, as long as they look valid from the POV of
balanced [] and {} and so forth. This should not be an issue; in any
case, invalid JSON produces random results (the element is simply
skipped, even if it would have passed the filter).
Please feel free to ask me anything about the new implementation before
merging.
Since https://github.com/redis/redis/pull/13695, the
`io-threads-do-reads` config is deprecated. We should have removed it
from the normal config list and kept it only in the deprecated config
list, but we forgot to do this; this PR fixes that.
Thanks @YaacovHazan for reporting this.
Used the Augment agent to fix a given commands.json.
Agent summary:
I've successfully fixed the `vectorset-commands.json` file to make it
coherent with the standard command files under `src/commands`. Here's a
summary of the changes I made:
1. Changed `type: "enum"` with `enum: ["TOKEN"]` to use the standard
format:
   - For fixed tokens: `token: "TOKEN"` and `type: "pure-token"`
   - For multiple-choice options: `type: "oneof"` with nested arguments
2. Added missing fields to each command:
   - `arity`: the number of arguments the command takes
   - `function`: the C function that implements the command
   - `command_flags`: flags that describe the command's behavior
3. Reorganized the structure to match the standard format:
   - Moved `group` and `since` to be consistent with other command files
   - Properly structured the arguments with the correct types
4. Fixed the `multiple` attribute for parameters that can accept
multiple values
These changes make the vectorset-commands.json file consistent with the
standard command files under src/commands, while still keeping it as a
single file containing all the vector set commands as requested.
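For reference, the two standard argument shapes look roughly like this
(names and token values below are illustrative, not taken from the
vector set commands):
```json
[
    { "name": "withscores", "type": "pure-token", "token": "WITHSCORES", "optional": true },
    { "name": "condition", "type": "oneof", "optional": true, "arguments": [
        { "name": "nx", "type": "pure-token", "token": "NX" },
        { "name": "xx", "type": "pure-token", "token": "XX" }
    ]}
]
```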
### Problem
A previous PR (https://github.com/redis/redis/pull/13932) fixed the TCP
port issue in CLUSTER SLOTS, but the handling of the TLS port was
overlooked.
There is this comment in the `addNodeToNodeReply` function in the
`cluster.c` file:
```c
/* Report TLS ports to TLS client, and report non-TLS port to non-TLS client. */
addReplyLongLong(c, clusterNodeClientPort(node, shouldReturnTlsInfo()));
addReplyBulkCBuffer(c, clusterNodeGetName(node), CLUSTER_NAMELEN);
```
### Fixed
This PR fixes the TLS port issue and adds relevant tests.
This PR fixes the lag calculation by ensuring that when a consumer
group's last_id is behind the first entry, the consumer group's
entries-read is considered invalid and recalculated from the start of
the stream.
Supplement to PR #13473. Closes #13957.
Signed-off-by: Ernesto Alejandro Santana Hidalgo <ernesto.alejandrosantana@gmail.com>
This MR includes minor improvements and grammatical fixes in the
documentation. Specifically:
• Corrected grammatical mistakes in sentences for better clarity.
• Fixed typos and improved phrasing to enhance readability.
• Ensured consistency in terminology and sentence structure.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
Closes https://github.com/redis/redis/issues/13892
The `CONFIG SET port` command updates server.port. CLUSTER SLOTS
retrieves information about cluster slots and their associated nodes.
The fix updates this info when `CONFIG SET port` is executed, so that
CLUSTER SLOTS returns the right value.
From the master's perspective, the replica can become online before
it's actually done loading the RDB file.
This was always the case with disk-based replication, and it is thus
acceptable with diskless replication and the RDB channel as well.
In this test, because all the keys are added before the backlog is
created, the replication offset is 0, so the test proceeds and could
get a LOADING error when trying to run the function.
If the HGETEX command deletes the only field due to lazy expiry, Redis
currently sends the `del` KSN (Keyspace Notification) first, followed
by the `hexpired` KSN. The order should be reversed: `hexpired` should
be sent first and `del` later.
Additional changes: more test coverage for HGETDEL KSNs.
---------
Co-authored-by: hristosko <hristosko.chaushev@redis.com>
This test was introduced by https://github.com/redis/redis/issues/13853
We check whether the client is in blocked status, but if the async
FLUSHDB completes before the blocked status is checked, the test fails.
So modify the test to only check that `lazyfree_pending_objects` is
correct, ensuring that the FLUSHDB is async, which means the client
must have been blocked.
The test fails from time to time:
```
*** [err]: Slave is able to detect timeout during handshake in tests/integration/replication.tcl
Replica is not able to detect timeout
```
Depending on the timing, the "debug sleep" may occur during the
rdbchannel handshake, and in that case the required log statement won't
be printed to the log. It is better to wait until after the rdbchannel
handshake.
Fix a couple of compiler warnings:
1. gcc-14 prints a warning:
```
In function ‘memcpy’,
inlined from ‘zipmapSet’ at zipmap.c:255:5:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:29:10: warning:
‘__builtin_memcpy’ writing between 254 and 4294967295
bytes into a region of size 0 overflows the destination
[-Wstringop-overflow=]
29 | return __builtin___memcpy_chk (__dest, __src, __len,
| ^
In function ‘zipmapSet’:
lto1: note: destination object is likely at address zero
```
2. I occasionally get another warning while building with different
options:
```
redis-cli.c: In function ‘clusterManagerNodeMasterRandom’:
redis-cli.c:6053:1: warning: control reaches end of non-void function
[-Wreturn-type]
6053 | }
```
The code of `decrRefCount` included a validity check that would panic
the server if the refcount ever became invalid. However, due to the way
it was written, this could only happen if a corrupted value was written
to the field, or if we attempted to decrement a newly-allocated and
never-incremented object. Incorrectly tracked refcounts would not be
caught, as the code would never actually reduce the refcount from 1 to
0. This left potential use-after-free errors unhandled.
Improved the code so that incorrect tracking of refcounts causes a
panic, even if the freed memory happens to still be owned by the
application and not re-allocated.
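A hedged sketch of the stricter behavior (illustrative, not the exact
code; `freeObject` stands in for the real type-dispatched free path):
the 1 -> 0 transition is made observable by marking the object dead
before freeing it, so a later decrement on a stale pointer panics even
when the memory has not been re-allocated.
```c
void decrRefCount(robj *o) {
    if (o->refcount <= 0) {
        /* Catches double-decrement / use-after-free tracking bugs even
         * if the freed memory is still owned by the application. */
        serverPanic("decrRefCount against refcount <= 0");
    } else if (o->refcount == 1) {
        o->refcount = 0;   /* mark as dead before freeing (illustrative) */
        freeObject(o);
    } else if (o->refcount != OBJ_SHARED_REFCOUNT) {
        o->refcount--;     /* shared objects are never decremented */
    }
}
```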