If the HGETEX command deletes the only field due to lazy expiry, Redis
currently sends the `del` KSN (Keyspace Notification) first, followed by
the `hexpired` KSN. The order should be reversed: `hexpired` should be
sent first and `del` after it.
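A minimal sketch of the corrected emission order, assuming the Redis internals `notifyKeyspaceEvent()`, `dbDelete()`, and `hashTypeLength()`; the placement and surrounding condition are illustrative, not the literal patch:
```
/* Hypothetical fragment: a field was just removed by lazy expiry.
 * The field-level event is emitted first ... */
notifyKeyspaceEvent(NOTIFY_HASH, "hexpired", key, db->id);
/* ... and only if that was the last field does the key-level "del"
 * follow, after the key is actually deleted. */
if (hashTypeLength(o, 0) == 0) {
    dbDelete(db, key);
    notifyKeyspaceEvent(NOTIFY_GENERIC, "del", key, db->id);
}
```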
Additional changes: more test coverage for HGETDEL KSN
---------
Co-authored-by: hristosko <hristosko.chaushev@redis.com>
This test was introduced by https://github.com/redis/redis/issues/13853
The test checks whether the client is in the blocked state, but if the
async flushdb completes before that check runs, the test fails.
So modify the test to only assert that `lazyfree_pending_objects` is
correct, which ensures that the flushdb was async, i.e. that the client
must have been blocked.
This test fails from time to time:
```
*** [err]: Slave is able to detect timeout during handshake in tests/integration/replication.tcl
Replica is not able to detect timeout
```
Depending on the timing, "debug sleep" may occur during the rdbchannel
handshake, in which case the required log statement won't be printed to
the log. Better to wait until after the rdbchannel handshake.
Fix a couple of compiler warnings
1. gcc-14 prints a warning:
```
In function ‘memcpy’,
inlined from ‘zipmapSet’ at zipmap.c:255:5:
/usr/include/x86_64-linux-gnu/bits/string_fortified.h:29:10: warning:
‘__builtin_memcpy’ writing between 254 and 4294967295
bytes into a region of size 0 overflows the destination
[-Wstringop-overflow=]
29 | return __builtin___memcpy_chk (__dest, __src, __len,
| ^
In function ‘zipmapSet’:
lto1: note: destination object is likely at address zero
```
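For context, a common way to address this class of `-Wstringop-overflow` false positives, where LTO concludes the destination "is likely at address zero", is to make the non-NULL invariant explicit so the optimizer's value-range analysis no longer derives a zero-sized destination. A standalone sketch of the technique, not the actual zipmap change:
```
#include <assert.h>
#include <string.h>

/* The analyzer cannot prove 'p' is a valid buffer; asserting the
 * invariant narrows the value ranges it reasons about, which typically
 * silences the fortified-memcpy warning. */
void set_entry(unsigned char *p, const unsigned char *val, size_t len) {
    assert(p != NULL);
    memcpy(p, val, len);
}
```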
2. I occasionally get another warning while building with different
options:
```
redis-cli.c: In function ‘clusterManagerNodeMasterRandom’:
redis-cli.c:6053:1: warning: control reaches end of non-void function
[-Wreturn-type]
6053 | }
```
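The usual remedy is an explicit return on the path the compiler believes can fall through (or marking it unreachable); a generic standalone sketch, not the redis-cli change itself:
```
#include <stddef.h>

typedef struct node { struct node *next; int is_master; } node;

/* The caller guarantees the list contains a master, but the compiler
 * cannot see that contract, so the fall-through path must still return
 * a value to satisfy -Wreturn-type. */
node *first_master(node *head) {
    for (node *n = head; n != NULL; n = n->next)
        if (n->is_master) return n;
    return NULL; /* unreachable by contract */
}
```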
The code of 'decrRefCount' included a validity check that would panic
the server if the refcount ever became invalid. However, due to the way
it was written, this could only happen if a corrupted value was written
to the field, or we attempted to decrement a newly-allocated and
never-incremented object. Incorrectly-tracked refcounts would not be
caught, as the code would never actually reduce the refcount from 1 to
0. This left potential use-after-free errors unhandled.
Improved the code so that incorrect tracking of refcounts causes a
panic, even if the freed memory happens to still be owned by the
application and not re-allocated.
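A standalone sketch of the strengthened check under simplified assumptions (a bare refcounted object with Redis-like shared-object semantics); names and the exact panic path are illustrative:
```
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define SHARED_REFCOUNT INT_MAX /* shared objects are never freed */

typedef struct obj { int refcount; } obj;

void decr_ref_count(obj *o) {
    if (o->refcount == SHARED_REFCOUNT) return;
    /* Panic on any non-positive refcount: this also catches a second
     * decrement on an already-freed object whose memory still belongs
     * to the application and has not been recycled yet. */
    if (o->refcount <= 0) {
        fprintf(stderr, "decr_ref_count against refcount <= 0\n");
        abort();
    }
    if (--o->refcount == 0) free(o);
}
```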
With RDB channel replication, we introduced parallel delivery of the
replication stream and the RDB to the replica during a full sync.
Currently, after the replica loads the RDB and begins streaming the
accumulated buffer into the database, it does not read from the master
connection during this period. Although streaming the local buffer is
generally a fast operation, it can take some time if the buffer is
large. This PR introduces reading from the master connection (buffering
the incoming stream) while the local buffer is being streamed. One
important consideration is ensuring that we consume more than we read
during this operation; otherwise, it could run indefinitely. To
guarantee that it will eventually complete, we limit the read to at most
half of what we consume, e.g. read at most 1 MB once we consume at least
2 MB.
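A tiny standalone model of why the consumed/2 read budget guarantees termination; the sizes match the example above, the loop itself is illustrative:
```
#include <stdio.h>
#include <stddef.h>

int main(void) {
    size_t backlog = 8u * 1024 * 1024;    /* locally accumulated bytes */
    const size_t step = 2u * 1024 * 1024; /* consumed per iteration    */
    int iterations = 0;

    while (backlog > 0) {
        size_t consumed = backlog < step ? backlog : step;
        backlog -= consumed;
        backlog += consumed / 2; /* worst case: read budget fully used */
        iterations++;
    }
    printf("drained after %d iterations\n", iterations);
    return 0;
}
```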
**Additional changes**
**Bug fix**
- Currently, when the replica starts draining the accumulated buffer, we
call protectClient() for the master client, as we occasionally yield
back to the event loop via processEventsWhileBlocked(); this prevents
the master client from being freed. While we are in this loop, if the
replica receives a "replicaof newmaster" command, we call
replicaSetMaster(), which expects to free the master client and trigger
a new connection attempt. As the client object is protected, its
destruction happens asynchronously, yet a new connection attempt to the
new master is made immediately. Later, when the replication buffer is
drained, we notice the master client was marked as CLOSE_ASAP, and
freeing it triggers another connection attempt to the new master. In
most cases, we detect that something is wrong in the replication state
machine and abort the second attempt, so the bug may go undetected. The
fix is to not call protectClient() for the master client, and instead
detect whether the master client was disconnected during
processEventsWhileBlocked() and, if so, break out of the loop
immediately (see the sketch below).
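A hedged sketch of the fix's shape; `server.master`, `CLIENT_CLOSE_ASAP`, and `processEventsWhileBlocked()` are real Redis internals, while the drain helpers are hypothetical stand-ins:
```
/* Illustrative drain loop: instead of protectClient(server.master),
 * check after every yield whether the master client went away or was
 * scheduled for close, and stop draining if so. */
while (replBufferHasData()) {            /* hypothetical */
    replConsumeBufferChunk();            /* hypothetical */
    processEventsWhileBlocked();
    if (server.master == NULL ||
        (server.master->flags & CLIENT_CLOSE_ASAP)) break;
}
```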
**Related improvement:**
- Currently, the replication buffer is a linked list of buffers, each of
which is 1 MB in size. While consuming the buffer, we process one buffer
at a time and then check whether we need to yield back to
`processEventsWhileBlocked()`. However, if
`loading-process-events-interval-bytes` is set to less than 1 MB, this
approach doesn't honor it. To improve this, I've modified the code to
process 16KB at a time and check
`loading-process-events-interval-bytes` more frequently. This way,
depending on the configuration, we may yield back to networking more
often (see the sketch after this list).
- In replication.c, `disklessLoadingRio` is now set before the call to
`emptyData()`. This change should not introduce any behavioral change,
but it is logically more correct: emptyData() may yield to networking,
and we may need to call rioAbort() on disklessLoadingRio. Otherwise, a
failure of the main channel could go undetected until a failure on the
rdb channel in a corner case.
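A hedged sketch of the 16KB slicing mentioned in the first item above; `processEventsWhileBlocked()` and the `loading_process_events_interval_bytes` setting are real, the buffer helpers are hypothetical:
```
/* Illustrative: consume each 1 MB list node in 16KB slices so the
 * configured interval is honored even when it is smaller than a node. */
#define REPL_BUF_SLICE (16 * 1024)

size_t since_yield = 0;
while (nodeBytesRemaining(node) > 0) {                    /* hypothetical */
    since_yield += consumeFromNode(node, REPL_BUF_SLICE); /* hypothetical */
    if (since_yield >= server.loading_process_events_interval_bytes) {
        processEventsWhileBlocked();
        since_yield = 0;
    }
}
```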
**Config changes**
- The default value of the `loading-process-events-interval-bytes`
configuration is being lowered from 2MB to 512KB. This configuration is
primarily used for testing and controls the frequency of networking
during the loading phase, specifically when loading the RDB or applying
accumulated buffers during a full sync on the replica side.
Before the introduction of RDB channel replication, the 2MB value was
sufficient for occasionally yielding to networking, mainly to reply
`-LOADING` to the clients. However, with RDB channel replication, during a
full sync on the replica side (either while loading the RDB or applying
the accumulated buffer), we need to yield back to networking more
frequently to continue accumulating the replication stream. If this
doesn’t happen often enough, the replication stream can accumulate on
the master side, which is undesirable.
To address this, we’ve decided to lower the default value to 512KB. One
concern with frequent yielding to networking is the potential
performance impact, as each call to processEventsWhileBlocked() involves
4 syscalls, which could slow down the RDB loading phase. However,
benchmarking with various configuration values has shown that using
512KB or higher does not negatively impact RDB loading performance.
Based on these results, 512KB is now selected as the default value.
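For reference, the value remains tunable like other size configs; assuming the usual CONFIG SET syntax for memory values, restoring the old default would look like:
```
127.0.0.1:6379> CONFIG SET loading-process-events-interval-bytes 2mb
OK
```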
**Test changes**
- Added improved version of a replication test which checks memory usage
on master during full sync.
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
The vector-sets module is a part of Redis Core and is available by
default, just like any other data type in Redis.
As a result, when building Redis from source, the vector-sets module
is also compiled as part of the Redis binary and loaded at server
start-up (internal module).
This new data type is added as a preview feature and currently doesn't
support all Redis capabilities, such as:
* 32-bit builds
* C99 (requires C11 stdatomic)
* Short reads from RDB aren't handled and might lead to a memory leak
* AOF rewrite (when aof-use-rdb-preamble is off)
* active defrag
* others?
Before https://github.com/redis/redis/pull/13732, replicas were brought
online immediately after the master wrote the last bytes of the RDB file
to the socket. This behavior remains unchanged if rdbchannel replication
is not used. However, with rdbchannel replication, the replica is
brought online after receiving the first ack, which the replica sends
once the rdb is loaded.
To align the behavior, this reverts that change so the replica is put
online once bgsave is done.
Additional changes:
- INFO field `mem_total_replication_buffers` will also contain
`server.repl_full_sync_buffer.mem_used` which shows accumulated
replication stream during rdbchannel replication on replica side.
- Deleted debug-level logging from some replication tests. These tests
generate thousands of keys, and debug logging may produce per-key log
lines in some cases.
Close https://github.com/redis/redis/issues/13868
This bug was introduced by https://github.com/redis/redis/pull/13468
## Issue
To maintain compatibility with older versions that do not support
shardid, when a replica passes a shardid, we also update the master’s
shardid accordingly.
However, when both the master and replica support shardid, an issue
arises: at one moment, the master may pass a shardid, causing us to
update both the master and all its replicas to match the master's
shardid. But if the replica later passes a different shardid, we would
then update the master's shardid again, leading to continuous changes in
shardid.
## Solution
Regardless of the situation, we always ensure that the replica’s shardid
remains consistent with the master’s shardid.
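A hedged sketch of the invariant; `clusterNode`, its `slaveof` and `shard_id` fields, and `CLUSTER_NAMELEN` follow cluster.h naming, but the helper and its call sites are illustrative:
```
#include <string.h>

/* Illustrative: once a node is known to be a replica, its shard_id is
 * always forced to follow its master's, never the other way around. */
void ensureReplicaShardIdMatchesMaster(clusterNode *replica) {
    if (replica->slaveof == NULL) return;
    memcpy(replica->shard_id, replica->slaveof->shard_id, CLUSTER_NAMELEN);
}
```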
When the `restore foo 0 $encoded freq 100` command and `set freq [r
object freq foo]` run in different minute timestamps (i.e., when
server.unixtime/60 changes between these operations), the assertion may
fail due to the LFU decay.
This PR updates the “RESTORE can set LFU” test to verify the actual freq
value based on minute timestamps.
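The race comes from Redis's minute-granularity LFU decay; a standalone sketch of the rule the test now accounts for, assuming `lfu-decay-time` is 1 (the default) and using an illustrative helper:
```
#include <stdio.h>

/* With lfu-decay-time=1, the LFU counter loses one point per elapsed
 * minute boundary (server.unixtime/60). */
unsigned decayed_freq(unsigned freq, unsigned long set_min,
                      unsigned long read_min) {
    unsigned long elapsed = read_min - set_min;
    return elapsed >= freq ? 0 : freq - (unsigned)elapsed;
}

int main(void) {
    /* RESTORE ... FREQ 100 in one minute, OBJECT FREQ in the next:
     * the test must accept 99 as well as 100. */
    printf("%u\n", decayed_freq(100, 1000, 1001)); /* prints 99 */
    return 0;
}
```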
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
This way we don't need to mess with node->value at a later time,
where an explicit lock would be required. Now we have:
1. Prepare context (neighbors).
2. Commit, and set the associated value.
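A hedged sketch of the two-phase shape, as opaque prototypes; the names are illustrative and not the vector-sets HNSW API:
```
typedef struct hnsw hnsw;             /* opaque index, illustrative */
typedef struct insert_ctx insert_ctx; /* neighbors chosen in phase 1 */

/* Phase 1: select neighbors and build the insertion context. */
insert_ctx *hnsw_prepare_insert(hnsw *index, const float *vector);

/* Phase 2: link the node and set its associated value in one step, so
 * node->value never needs a separate, locked update later. */
void hnsw_commit_insert(hnsw *index, insert_ctx *ctx, void *value);
```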
This PR is based on: https://github.com/valkey-io/valkey/pull/1801
[SoftlyRaining](https://github.com/SoftlyRaining) was hunting for defrag
bugs with Jim and found a couple of improvements to make. Jim pointed
out that in several of the callbacks, if the encoding were to change,
the callback simply returned without ever driving `cursor` to 0, meaning
the scan would keep no-op visiting that item without making any
progress. Type and encoding can change while the defrag scan is in
progress if the value is mutated or replaced by something else with the
same key.
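A hedged sketch of the callback fix's shape; `robj`, `OBJ_HASH`, and `OBJ_ENCODING_HT` are real Redis names, while the callback signature is illustrative:
```
/* Illustrative: if the value's type/encoding no longer matches what
 * this callback expects, finish the item by driving the cursor to 0
 * instead of returning and revisiting it forever. */
static void defragLaterHashStep(robj *ob, unsigned long *cursor) {
    if (ob->type != OBJ_HASH || ob->encoding != OBJ_ENCODING_HT) {
        *cursor = 0; /* nothing more to do for this item */
        return;
    }
    /* ... incremental defrag work that advances *cursor ... */
}
```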
---------
Signed-off-by: Rain Valentine <rsg000@gmail.com>
Co-authored-by: Rain Valentine <rsg000@gmail.com>
We pass our aborting allocation function to the HNSW lib; the only
other reason for it to fail is pthread mutex locking failing, but this
is also practically impossible AFAIK on modern systems, and if it does
happen (due to a kernel resources shortage), aborting is still the best
thing to do: otherwise we would have to report that we could not
complete the operation for some reason, which is not uniform with
everything else Redis does. In Redis, under normal conditions, writes
must succeed if they are semantically correct, or the server crashes on
OOM.
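A standalone sketch of an aborting allocator of the kind passed to the HNSW lib, mirroring zmalloc's OOM behavior; names are illustrative:
```
#include <stdio.h>
#include <stdlib.h>

/* Allocation either succeeds or the process dies: callers never need
 * an error path, matching how writes behave elsewhere in Redis. */
void *abortingAlloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "Fatal: out of memory allocating %zu bytes\n", size);
        abort();
    }
    return p;
}
```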
When the diskless load configuration is set to on-empty-db, we retain a
pointer to the function library context. When emptyData() is called, it
frees this function library context pointer, leading to a use-after-free
situation.
I refactored the code to ensure that emptyData() is called first, and
the valid pointer to the function library context is retrieved
afterwards. The refactoring should not introduce any runtime
implications.
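The shape of the reordering, as a hedged two-line fragment; `emptyData()` exists in replication.c, while the libctx getter is named per the description and may differ in the source:
```
/* Empty the dataset first (this frees the old function library
 * context), and only then take the pointer to the now-current one. */
emptyData(-1, empty_db_flags, replicationEmptyDbCallback);
functionsLibCtx *libctx = functionsLibCtxGetCurrent(); /* valid now */
```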
Bug introduced by https://github.com/redis/redis/pull/13495 (Redis 8.0)
Co-authored-by: Oran Agra <oran@redislabs.com>
This fixes an error that occurs in the job
[test-valgrind-no-malloc-usable-size-test](https://github.com/redis/redis/actions/runs/13912357739/job/38929051397)
of the Daily workflow:
```
*** [err]: HEXPIREAT - Set time and then get TTL (listpackex) in tests/unit/type/hash-field-expire.tcl
Expected '999' to be between to '1000' and '2000' (context: type eval line 6 cmd {assert_range [r hpttl myhash FIELDS 1 field1] 1000 2000} proc ::test)
```
In #13505, we changed the code to use the string value of the key
rather than the integer value on the stack, but we have a test in
unit/moduleapi/keyspace_events that uses a keyspace notification hook to
modify the value with RM_StringDMA, which can cause this value to be
released before it is used. The reason it hadn't happened so far is that
we were using shared integers, so releasing the object doesn't free it.
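A general hedged pattern for this class of bug (not necessarily the exact fix that landed): pin the value with a reference across the notification, so a hook rewriting it via RM_StringDMA cannot free it underneath the caller:
```
/* Illustrative: the notification may run module hooks that mutate or
 * replace the value, so keep our own reference while we still use it. */
incrRefCount(val);
notifyKeyspaceEvent(NOTIFY_STRING, "set", key, db->id);
/* ... safe to keep using val here ... */
decrRefCount(val);
```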
First, when we do `raxSeek()` and then call `raxNext()`, we get the
`RAX_ITER_JUST_SEEKED` flag and return success directly.
We always set the node defrag callback after `raxSeek()`, which means
that when we break from defragmentation, the first node that comes in
again will never be defragged.
In this PR, we save the cursor as the next node to be processed, rather
than the last node completed.
This way the saved node is defragged when we resume, so it is not
skipped (see the sketch below).
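A hedged sketch of the resume logic; the rax iterator calls (`raxStart`, `raxSeek`, `raxNext`, `raxStop`) are the real API, while the cursor bookkeeping helpers are hypothetical:
```
raxIterator ri;
raxStart(&ri, rt);
/* Because of RAX_ITER_JUST_SEEKED, the first raxNext() after this seek
 * yields the sought key itself, so the saved cursor must point at the
 * next node to process, not the last one completed. */
raxSeek(&ri, ">=", saved_key, saved_key_len);
while (raxNext(&ri)) {
    if (shouldYield()) {                /* hypothetical time check    */
        saveCursor(ri.key, ri.key_len); /* resume will defrag this    */
        break;
    }
    defragRaxNode(&ri);                 /* hypothetical per-node work */
}
raxStop(&ri);
```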
---------
Co-authored-by: oranagra <oran@redislabs.com>