## Description
Memory Sanitizer (MSAN) is used to detect use of uninitialized memory.
While Address Sanitizer (ASAN) catches a wide range of memory-safety
issues, it does not specifically detect uninitialized memory usage, so
MSAN complements it. This PR adds an MSAN run to the daily build, with
the possibility of incorporating it into the ci.yml workflow in the
future if needed.
The changes in source files fix false positives and should not
introduce any runtime implications.
Note: Valgrind performs checks similar to both ASAN and MSAN, but the
sanitizers run significantly faster.
## Limitations
- Memory Sanitizer is supported only by Clang.
- The MSAN documentation states that all dependencies, including the
standard library, must be compiled with MSAN. However, it also mentions
that there are interceptors for common libc functions, so compiling the
standard library with the MSAN flag is not strictly necessary.
Therefore, we are not compiling libc with MSAN.
---------
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
We can reclaim the page cache memory used by the AOF file after loading,
since we never read the AOF again; this corresponds to
https://github.com/redis/redis/pull/11248
In a test after loading a 9.5 GB AOF, this PR uses much less
`buff/cache` than unstable.
**Unstable**
```
$ free -m
total used free shared buff/cache available
Mem: 31293 16181 4562 13 10958 15111
Swap: 0 0 0
```
**This PR**
```
$ free -m
total used free shared buff/cache available
Mem: 31293 15391 15854 13 439 15902
Swap: 0 0 0
```
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
The PR aims to improve the README's usability for new users as well as
for developers looking to go in depth.
Key improvements include:
- **Structure & Navigation:**
- Introduces a detailed Table of Contents for easier navigation.
- Improved overall organization of sections.
- **Content:**
- Expanded "What is Redis?" with section for "Key use cases"
- Expanded "Why choose Redis?" section
- New "Getting started" section, including Redis starter projects and
ordering of sections based on desired use for new users
- Changes to "Redis data types, processing engines, and capabilities"
section for better readability and consistency
- Formatting markdown blocks to specify language
There are several issues with maintaining histogram counters.
Ideally, the hooks would be placed in the low-level datatype
implementations. However, this logic is triggered in various contexts
and doesn’t always map directly to a stored DB key. As a result, the
hooks sit closer to the high-level commands layer. It’s a bit messy, but
the right way to ensure histogram counters behave correctly is through
broad test coverage.
* Fix inaccuracies around deletion scenarios.
* Fix inaccuracies around module calls. Added corresponding tests.
* The info-keysizes.tcl test has been extended to operate on meaningful
datasets.
* Validate histogram correctness in edge cases involving collection
deletions.
* Add a new macro, debugServerAssert(). Effective only if compiled with
DEBUG_ASSERTIONS.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
Now that we have the RDB channel from
https://github.com/redis/redis/pull/13732, the child process can
transfer the RDB in the background instead of it being handled by the
main thread. So when redis-cli fetches an RDB from the server, we can
use the same mechanism to reduce the load on the main thread.
---------
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
This PR adds support for REDISMODULE_OPTIONS_HANDLE_IO_ERRORS,
along with tests for short reads and corrupted RESTORE payloads.
Please note that I also removed the comment about async loading support,
since we should already be covered: Vector Sets do not manipulate global
data structures, except for the unique ID used to create new vector sets
with different IDs.
Close #13973
This PR fixes two bugs.
1) `overhead_hashtable_lut` isn't updated correctly.
This bug was introduced by https://github.com/redis/redis/pull/12913
We only update `overhead_hashtable_lut` at the beginning and end of
rehashing, but we forgot to update it when a dict is emptied or
released.
This PR introduces a new `bucketChanged` callback to track changes in
the bucket size.
The `rehashingStarted` and `rehashingCompleted` callbacks are no longer
responsible for bucket changes; these are now handled entirely by
`bucketChanged`. This also avoids having to register three callbacks
just to track the bucket size; now only one is needed.
In most cases it is triggered together with `rehashingStarted` or
`rehashingCompleted`, except when a dict is being emptied or released:
in these cases, even if the dict is not rehashing, we still need to
subtract its current size.
Also, since `overhead_hashtable_lut` duplicated `bucket_count`, we
removed `overhead_hashtable_lut` and use `bucket_count` instead.
Note that this bug only happens in cluster mode, because we don't use
KVSTORE_FREE_EMPTY_DICTS without cluster mode.
2) The size of `dict_size_index` was counted twice in memory usage.
`dict_size_index` is created at startup, so its memory usage is already
counted in `used_memory_startup`.
However, when counting the overhead we repeated the calculation, which
could cause the reported overhead to exceed the total memory usage.
---------
Co-authored-by: Yuan Wang <yuan.wang@redis.com>
The log message incorrectly referred to the expected state as
`RECEIVE_PSYNC`,
while it should be `RECEIVE_PSYNC_REPLY`. This aligns the log with the
actual state check.
From a flame graph, we can see that `ERR_clear_error` costs a lot of CPU
in TLS mode. Some of the `ERR_clear_error` calls are duplicates: in
`tlsHandleEvent` we call `ERR_clear_error`, but we also call it when
reading and writing, so the extra call is unnecessary.
Benchmarks show this commit brings a 2-3% performance improvement.
This PR fixes an issue in the CI test for client-output-buffer-limit,
which was causing an infinite loop when running on macOS 15.4.
### Problem
This test starts two clients, R and R1:
```
R1 subscribe foo
R publish foo bar
```
When R executes `PUBLISH foo bar`, the server first stores the message
`bar` in R1's static reply buffer. Only when the space in that buffer is
insufficient does it call `_addReplyProtoToList`.
Inside this function, `closeClientOnOutputBufferLimitReached` is invoked
to check whether client R1's output buffer has reached its configured
limit.
On macOS 15.4, because the server writes to the client at high speed,
R1's buffer never gets full. As a result,
`closeClientOnOutputBufferLimitReached` is never triggered, so the test
can never observe the limit being hit and falls into an infinite loop.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
This PR replaces cJSON with a home-made parser designed for the kind of
access pattern the FILTER option of VSIM performs on JSON objects. The
main points here are:
* cJSON forces us to parse the whole JSON and build a graph of cJSON
objects, and then we still need an O(N) scan to find the right field.
* The cJSON object associated with a value is not in the same format as
the expr.c virtual machine's values, so we needed a conversion function
doing more allocation and work.
* Right now we only support top-level fields in the JSON object, so a
full parser is not needed.
With all these things in mind, and after carefully profiling the old
code, I realized that a specialized parser able to parse JSON in a
zero-allocation fashion and only actually parse the value associated to
our key would be much more efficient. Moreover, after this change, the
dependencies of Vector Sets to external code drops to zero, and the
count of lines of code is 3000 lines less. The new line count with LOC
is 4200, making Vector Sets easily the smallest full featured
implementation of a Vector store available.
# Speedup achieved
In a dataset with JSON objects with 30 fields, 1 million elements, the
following query shows a 3.5x speedup:
```
vsim vectors:million ele ele943903 FILTER ".field29 > 1000 and .field15 < 50"
```
Please note that we get a **3.5x speedup** in the VSIM command itself,
which means the actual JSON parsing speedup is significantly greater
than that. However, in Redis land, under my past kingdom of many years
ago, the rule was that an improvement should produce speedups that are
*user facing*. This PR definitely qualifies.
What is interesting is that even with a JSON object containing a single
field, the speedup is about 70%, so we are faster even in the worst case.
# Further info
Note that the new skipping parser may happily process JSON objects that
are not perfectly valid, as long as they look valid from the POV of
balancing [] and {} and so forth. This should not be an issue; in any
case, invalid JSON just produces arbitrary results (the element is
simply skipped, even if it would have passed the filter).
Please feel free to ask me anything about the new implementation before
merging.
Since https://github.com/redis/redis/pull/13695, the
`io-threads-do-reads` config is deprecated, so we should remove it from
the normal config list and keep it only in the deprecated config list.
We forgot to do this; this PR fixes it.
Thanks @YaacovHazan for reporting this.
Used the Augment agent to fix a given commands.json.
Agent summary:
I've successfully fixed the `vectorset-commands.json` file to make it
coherent with the standard command files under `src/commands`. Here's a
summary of the changes I made:
1. Changed `type: "enum"` with `enum: ["TOKEN"]` to use the standard
format:
   - For fixed tokens: `token: "TOKEN"` and `type: "pure-token"`
   - For multiple-choice options: `type: "oneof"` with nested arguments
2. Added missing fields to each command:
   - `arity`: the number of arguments the command takes
   - `function`: the C function that implements the command
   - `command_flags`: flags that describe the command's behavior
3. Reorganized the structure to match the standard format:
   - Moved `group` and `since` to be consistent with other command files
   - Properly structured the arguments with the correct types
4. Fixed the `multiple` attribute for parameters that can accept
multiple values
These changes make the vectorset-commands.json file consistent with the
standard command files under src/commands, while still keeping it as a
single file containing all the vector set commands as requested.
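As an illustration of the token change, a fixed token previously declared with `type: "enum"` would look roughly like this in the standard format (the `WITHSCORES` argument is purely illustrative, not taken from the actual file):

```json
{
    "name": "withscores",
    "token": "WITHSCORES",
    "type": "pure-token"
}
```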
### Problem
A previous PR (https://github.com/redis/redis/pull/13932) fixed the TCP
port issue in CLUSTER SLOTS, but it seems the handling of the TLS port
was overlooked.
There is this comment in the `addNodeToNodeReply` function in the
`cluster.c` file:
```c
/* Report TLS ports to TLS client, and report non-TLS port to non-TLS client. */
addReplyLongLong(c, clusterNodeClientPort(node, shouldReturnTlsInfo()));
addReplyBulkCBuffer(c, clusterNodeGetName(node), CLUSTER_NAMELEN);
```
### Fixed
This PR fixes the TLS port issue and adds relevant tests.
This PR fixes the lag calculation by ensuring that when a consumer
group's last_id is behind the first entry, the consumer group's
entries-read is considered invalid and recalculated from the start of
the stream.
Supplement to PR #13473. Close #13957
Signed-off-by: Ernesto Alejandro Santana Hidalgo <ernesto.alejandrosantana@gmail.com>
This PR includes minor improvements and grammatical fixes in the
documentation. Specifically:
• Corrected grammatical mistakes in sentences for better clarity.
• Fixed typos and improved phrasing to enhance readability.
• Ensured consistency in terminology and sentence structure.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
Close https://github.com/redis/redis/issues/13892
`CONFIG SET port` updates server.port. CLUSTER SLOTS retrieves
information about cluster slots and their associated nodes. The fix
updates this information when `CONFIG SET port` is executed, so CLUSTER
SLOTS returns the right value.
From the master's perspective, the replica can become online before it
has actually finished loading the RDB file.
This was always the case with disk-based replication, and thus it is
also expected with diskless and RDB-channel replication.
In this test, because all the keys are added before the backlog is
created, the replication offset is 0, so the test proceeds and could get
a LOADING error when trying to run the function.
If the HGETEX command deletes the only field due to lazy expiry, Redis
currently sends the `del` KSN (Keyspace Notification) first, followed by
the `hexpired` KSN. The order should be reversed: `hexpired` should be
sent first and `del` later.
Additional changes: more test coverage for HGETDEL KSNs.
---------
Co-authored-by: hristosko <hristosko.chaushev@redis.com>
This test was introduced by https://github.com/redis/redis/issues/13853
We check whether the client is in the blocked state, but if the async
FLUSHDB completes before the check, the test fails.
So modify the test to only verify that `lazyfree_pending_objects` is
correct, which ensures that FLUSHDB is async, i.e. that the client must
have been blocked.
The test fails from time to time:
```
*** [err]: Slave is able to detect timeout during handshake in tests/integration/replication.tcl
Replica is not able to detect timeout
```
Depending on the timing, the "debug sleep" may occur during the
rdbchannel handshake, in which case the required log statement won't be
printed. Better to wait until after the rdbchannel handshake.
Fix a couple of compiler warnings
1. gcc-14 prints a warning:
```
In function ‘memcpy’,
inlined from ‘zipmapSet’ at zipmap.c:255:5:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:29:10: warning:
‘__builtin_memcpy’ writing between 254 and 4294967295
bytes into a region of size 0 overflows the destination
[-Wstringop-overflow=]
29 | return __builtin___memcpy_chk (__dest, __src, __len,
| ^
In function ‘zipmapSet’:
lto1: note: destination object is likely at address zero
```
2. I occasionally get another warning while building with different
options:
```
redis-cli.c: In function ‘clusterManagerNodeMasterRandom’:
redis-cli.c:6053:1: warning: control reaches end of non-void function
[-Wreturn-type]
6053 | }
```
The code of 'decrRefCount' included a validity check that would panic
the server if the refcount ever became invalid. However, due to the way
it was written, this could only happen if a corrupted value was written
to the field, or we attempted to decrement a newly-allocated and
never-incremented object. Incorrectly-tracked refcounts would not be
caught, as the code would never actually reduce the refcount from 1 to
0. This left potential use-after-free errors unhandled.
Improved the code so that incorrect tracking of refcounts causes a
panic, even if the freed memory happens to still be owned by the
application and not re-allocated.
With RDB channel replication, we introduced parallel delivery of the
replication stream and the RDB to the replica during a full sync.
Currently, after the replica loads the RDB and begins streaming the
accumulated buffer into the database, it does not read from the master
connection during this period. Although streaming the local buffer is
generally fast, it can take some time if the buffer is large. This PR
introduces reading from the master connection while the local buffer is
being streamed. One important consideration is ensuring that we consume
more than we read during this operation; otherwise, draining could go on
indefinitely. To guarantee that it eventually completes, we limit the
read to at most half of what we consume, e.g. read at most 1 MB once we
have consumed at least 2 MB.
**Additional changes**
**Bug fix**
- Currently, when the replica starts draining the accumulated buffer, we
call protectClient() for the master client, since we occasionally yield
back to the event loop via processEventsWhileBlocked(); this prevents
the master client from being freed. While we are in this loop, if the
replica receives a "replicaof newmaster" command, we call
replicaSetMaster(), which expects to free the master client and trigger
a new connection attempt. As the client object is protected, its
destruction happens asynchronously, yet a new connection attempt to the
new master is made immediately. Later, when the replication buffer is
drained, we realize the master client was marked CLOSE_ASAP, and freeing
it triggers another connection attempt to the new master. In most cases,
we notice that something is wrong in the replication state machine and
abort the second attempt later, so the bug may go undetected. The fix is
to stop calling protectClient() for the master client and instead detect
whether the master client was disconnected during
processEventsWhileBlocked(), breaking the loop immediately if so.
**Related improvement:**
- Currently, the replication buffer is a linked list of buffers, each of
which is 1 MB in size. While consuming the buffer, we process one buffer
at a time and check if we need to yield back to
`processEventsWhileBlocked()`. However, if
`loading-process-events-interval-bytes` is set to less than 1 MB, this
approach doesn't handle it well. To improve this, I've modified the code
to process 16KB at a time and check
`loading-process-events-interval-bytes` more frequently. This way,
depending on the configuration, we may yield back to networking more
often.
- In replication.c, `disklessLoadingRio` is now set before the call to
`emptyData()`. This should not introduce any behavioral change, but it
is logically more correct, as emptyData() may yield to networking and we
may need to call rioAbort() on disklessLoadingRio. Otherwise, in a
corner case, a failure of the main channel could go undetected until a
failure on the RDB channel.
**Config changes**
- The default value of the `loading-process-events-interval-bytes`
configuration is lowered from 2MB to 512KB. This configuration is
primarily used for testing and controls the frequency of networking
during the loading phase, specifically when loading the RDB or applying
the accumulated buffer during a full sync on the replica side.
Before the introduction of RDB channel replication, the 2MB value was
sufficient for occasionally yielding to networking, mainly to reply
-loading to the clients. However, with RDB channel replication, during a
full sync on the replica side (either while loading the RDB or applying
the accumulated buffer), we need to yield back to networking more
frequently to continue accumulating the replication stream. If this
doesn’t happen often enough, the replication stream can accumulate on
the master side, which is undesirable.
To address this, we’ve decided to lower the default value to 512KB. One
concern with frequent yielding to networking is the potential
performance impact, as each call to processEventsWhileBlocked() involves
4 syscalls, which could slow down the RDB loading phase. However,
benchmarking with various configuration values has shown that using
512KB or higher does not negatively impact RDB loading performance.
Based on these results, 512KB is now selected as the default value.
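Expressed as a redis.conf line, the new default (512KB = 524288 bytes) would read:

```
loading-process-events-interval-bytes 524288
```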
**Test changes**
- Added improved version of a replication test which checks memory usage
on master during full sync.
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
The vector-sets module is a part of Redis Core and is available by
default, just like any other data type in Redis.
As a result, when building Redis from source, the vector-sets module is
also compiled as part of the Redis binary and loaded at server start-up
(internal module).
This new data type is added as a preview feature and currently doesn't
support all of Redis's capabilities, such as:
* 32-bit builds
* C99 (requires C11 stdatomic)
* Short reads from RDB aren't handled and might lead to a memory leak
* AOF rewrite (when aof-use-rdb-preamble is off)
* Active defrag
* others?