Fix an assertion that triggers when a busy script (timeout) signals ready keys (e.g. via LPUSH),
and then an arbitrary client's `allow-busy` command steps into `handleClientsBlockedOnKeys`
and tries to wake up clients blocked on keys (e.g. BLPOP).
Reproduction process:
1. start a redis server with AOF enabled:
`./redis-server --appendonly yes`
2. exec blpop:
`127.0.0.1:6379> blpop a 0`
3. from another client, call a busy script that pushes to the blocked key:
`127.0.0.1:6379> eval "redis.call('lpush','a','b') while(1) do end" 0`
4. from a new client, call an allow-busy command like AUTH:
`127.0.0.1:6379> auth a`
BTW, this issue also breaks the atomicity of scripts.
This bug has been around for many years; old versions only have the
atomicity problem, while only 7.0/7.2 have the assertion problem.
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 8226f39fb2)
Process loss of slot ownership in cluster bus
When a node no longer owns a slot, it clears the bit corresponding
to the slot in the cluster bus messages. The receiving nodes
currently don't record the fact that the sender stopped claiming
a slot until some other node in the cluster starts claiming the slot.
This can cause a slot to go missing during slot migration when subjected
to an inopportune race with the addition of new shards or a failover.
This fix forces the receiving nodes to process the loss of ownership
to avoid spreading wrong information.
(cherry picked from commit 1190f25ca7)
Test `trim on SET with big value` (introduced in #11817) fails under mac M1 with the libc mem_allocator.
The reason is that malloc(33000) will allocate 65536 bytes (> 42000).
This test still passes under Ubuntu with the libc mem_allocator.
```
*** [err]: trim on SET with big value in tests/unit/type/string.tcl
Expected [r memory usage key] < 42000 (context: type source line 471 file /Users/iospack/data/redis_fork/tests/unit/type/string.tcl cmd {assert {[r memory usage key] < 42000}} proc ::test)
```
simple test under mac m1 with libc mem_allocator:
```c
void *p = zmalloc(33000);
printf("malloc size: %zu\n", zmalloc_size(p));
/* output:
   malloc size: 65536 */
```
(cherry picked from commit 3fba3ccd96)
This is a partial cherry-pick from Redis 7.2
## Fix various compilation warnings and errors
5) server.c
COMPILER: gcc-13 with FORTIFY_SOURCE
WARNING:
```
In function 'lookupCommandLogic',
inlined from 'lookupCommandBySdsLogic' at server.c:3139:32:
server.c:3102:66: error: '*(robj **)argv' may be used uninitialized [-Werror=maybe-uninitialized]
3102 | struct redisCommand *base_cmd = dictFetchValue(commands, argv[0]->ptr);
| ~~~~^~~
```
REASON: The compiler thinks that the `argc` returned by `sdssplitlen()` could be 0,
resulting in an empty array of size 0 being passed to lookupCommandLogic.
This should be a false positive: `argc` can't be 0 when the strings are not NULL.
SOLUTION: add an assert to let the compiler know that `argc` is positive.
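For illustration, a hedged, excerpt-style sketch of what that looks like
(abridged from lookupCommandBySdsLogic; the surrounding code is simplified):
```c
int argc;
sds *strings = sdssplitlen(s, sdslen(s), "|", 1, &argc);
if (strings == NULL) return NULL;
/* argc can't be 0 for a non-NULL input, but the compiler can't prove it;
 * the assert lets it see that argv[0] below is initialized. */
serverAssert(argc > 0);
/* ... build argv[argc] from strings and call lookupCommandLogic ... */
```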
## Other changes
1) Fixed: `ps -p [pid]` doesn't output `<defunct>` when using procps 4.x, causing the
`replication child dies when parent is killed - diskless` test to fail.
(cherry picked from commit 42c8c61813)
When getKeysUsingKeySpecs processes a command with more than one key-spec,
and is called with a total of more than 256 keys, it'll call getKeysPrepareResult again,
but since numkeys isn't updated, getKeysPrepareResult will not bother to copy key
names from the old result (leaving these slots uninitialized). Furthermore, it did not
consider the keys it already found when allocating more space.
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit b7559d9f3b)
## Issue:
When a dict has a long chain, or the length of the chain is longer than
the number of samples, we will never be able to sample the elements
at the end of the chain using dictGetSomeKeys().
This could mean that SRANDMEMBER can hang in an endless loop.
The most severe case is the pathological one where someone uses SCAN+DEL
or SSCAN+SREM, creating an unevenly distributed dict.
This was amplified by the recent change in #11692, which prevented
down-sizing rehashing while there is a fork.
## Solution
1. Before, we would stop sampling when we reached the maximum number
of samples, even if there was more data after the current chain.
Now when we reach the maximum we use the reservoir sampling
algorithm to fairly sample the end of the chain that could not otherwise
be sampled (see the sketch below).
2. Fix the rehashing code so that, just as it allows rehashing for up-sizing
during a fork when the ratio is extreme, it allows it for down-sizing as well.
Issue was introduced (or became more severe) by #11692
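For reference, a self-contained sketch of the reservoir sampling idea applied
to a chain (the types and names are illustrative, not the actual dict.c code):
```c
#include <stdlib.h>

typedef struct entry { struct entry *next; } entry;

/* Collect up to `count` samples from a chain of arbitrary length. Once the
 * reservoir is full, element number `seen` replaces a random slot with
 * probability count/(seen+1), so every element of the chain ends up being
 * picked with equal probability. `samples` must hold `count` pointers. */
static int sampleChain(entry *head, entry **samples, int count) {
    int seen = 0;
    for (entry *e = head; e != NULL; e = e->next, seen++) {
        if (seen < count) {
            samples[seen] = e;             /* still filling the reservoir */
        } else {
            int j = rand() % (seen + 1);   /* uniform index in [0, seen] */
            if (j < count) samples[j] = e; /* evict a previous sample */
        }
    }
    return seen < count ? seen : count;    /* number of samples collected */
}
```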
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit b00a235186)
So far, clients being blocked and unblocked by a module command would
not update the c->woff variable, so WAIT was ineffective and got released
without waiting for the command's actions to propagate.
This seems to have existed since forever, but not for RM_BlockClientOnKeys.
It is problematic though to know whether the module did or didn't propagate
anything in that command, so for now, instead of adding an API, we'll
just update the woff to the latest offset when unblocking; this will
cause the client to possibly wait excessively, but that's not that bad.
(cherry picked from commit 6117f28822)
When the master reboots from RDB, if the rsi in the RDB is valid we should not free the replication backlog, even if master_repl_offset or repl-offset is 0.
If the master doesn't send any data to replicas, master_repl_offset stays 0, which is a valid value.
A clear example:
1. start a master and apply some write commands; the master's master_repl_offset is 0 since it has no replicas.
2. stop write commands on the master, start another instance and replicaof the master, triggering a FULLRESYNC.
3. the master's master_repl_offset is still 0 (set a large number for repl-ping-replica-period); do BGSAVE and restart the master.
4. the master loads master_repl_offset from the RDB's rsi and it's still 0; we should make sure the replica can partially resync with the master.
(cherry picked from commit b0dd7b3245)
* Fix integer overflows due to using wrong integer size.
* Add assertions / panic when overflow still happens.
* Delete dead code to avoid the need to maintain it.
* Some changes are not because of bugs, but rather paranoia.
* Improve cmsgpack and cjson test coverage.
Co-authored-by: Yossi Gottlieb <yossigo@gmail.com>
A new test fails on valgrind because strtold("+inf") under valgrind returns a non-inf result;
the same thing is done in incr.tcl.
(cherry picked from commit c3b7bde914)
The corrupt dump fuzzer uncovered a valgrind warning saying:
```
==76370== Argument 'size' of function malloc has a fishy (possibly negative) value: -3744781444216323815
```
This allocation would have failed (returning NULL) and been handled properly by redis (even before this change), but we also want to silence the valgrind warnings (which check that casting to ssize_t produces a non-negative value).
The solution I opted for is to explicitly fail these allocations (returning NULL) before even reaching `malloc` (which would have failed and returned NULL too).
The implication is that we will not be able to support a single allocation of more than 2GB on a 32bit system (which I don't think is a realistic scenario);
i.e. I do think we could be facing cases where redis consumes more than 2GB on a 32bit system, but not in a single allocation.
The byproduct of this is that I dropped the overflow assertions, since these will now lead to the same OOM panic we have for failed allocations.
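A minimal sketch of the guard, under the assumption that it sits in a
zmalloc-style try-wrapper (the wrapper name here is hypothetical):
```c
#include <stdint.h>
#include <stdlib.h>

/* Fail sizes that would be negative when cast to ssize_t before malloc is
 * ever reached; such requests could only fail anyway, and rejecting them
 * up front silences valgrind's "fishy value" warning. */
void *ztrymalloc_sketch(size_t size) {
    if (size >= SIZE_MAX / 2) return NULL; /* negative as ssize_t (conservative) */
    return malloc(size);
}
```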
(cherry picked from commit 599e59ebc5)
The existing logic for killing pub-sub clients did not handle the `allchannels`
permission correctly. For example, if you:
`ACL SETUSER foo allchannels`
have a client authenticate as the user `foo` and subscribe to a channel, and then:
`ACL SETUSER foo resetchannels`
the subscribed client would not be disconnected, though new clients under that user
would be blocked from subscribing to any channels.
This was caused by an incomplete optimization in `ACLKillPubsubClientsIfNeeded`
checking whether the new channel permissions were a strict superset of the old ones.
(cherry picked from commit f38aa6bfb7)
Check the validity of the value before performing the create operation;
this prevents new data from being generated even if the request fails to execute.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: chendianqiang <chendianqiang@meituan.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
(cherry picked from commit bc7fe41e58)
In #11666, we added a while loop that splits a big reply
node into multiple nodes. The update of tail->repl_offset could
be wrong: before #11666, we would have created at most
one new reply node, while now we may create multiple nodes for
a big reply node.
When creating more than one node, the tail->repl_offset
of all the nodes except the last one was incorrect, because we
update master_repl_offset at the beginning, and then use it to
update the tail->repl_offset. This would have led to an assertion
during PSYNC; a test was added to validate that case.
Besides that, the calculation of size was adjusted to fix
tests that failed due to a combination of a very low backlog size
and some thresholds being violated because of the relatively
high overhead of replBufBlock. So now, if backlog size / 16 is too
small, we take PROTO_REPLY_CHUNK_BYTES instead (see the sketch below).
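That calculation amounts to something like the following standalone sketch
(the constant's value matches Redis' server.h; the helper name is hypothetical):
```c
#define PROTO_REPLY_CHUNK_BYTES (16*1024) /* same value Redis uses */

/* Cap a single replication-buffer node at backlog/16, but never below the
 * normal reply chunk size, so a tiny backlog can't shrink nodes absurdly. */
static unsigned long long replNodeSizeLimit(unsigned long long repl_backlog_size) {
    unsigned long long limit = repl_backlog_size / 16;
    return limit > PROTO_REPLY_CHUNK_BYTES ? limit : PROTO_REPLY_CHUNK_BYTES;
}
```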
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 7997874f4d)
This can happen when a key almost equal or larger than the
client output buffer limit of the replica is written.
Example:
1. DB is empty
2. Backlog size is 1 MB
3. Client output buffer limit is 2 MB
4. Client writes a 3 MB key
5. The shared replication buffer will have a single node which contains
the key written above, and it exceeds the backlog size.
At this point the client output buffer usage calculation will report the
replica buffer to be 3 MB (or more) even after sending all the data to
the replica.
The primary drops the replica connection for exceeding the limits,
the replica reconnects and successfully executes partial sync but the
primary will drop the connection again because the buffer usage is still
3 MB. This happens over and over.
To mitigate the problem, this fix limits the maximum size of a single
backlog node to be (repl_backlog_size/16). This way a single node can't
exceed the limits of the COB (the COB has to be larger than the
backlog).
It also means that if the backlog has some excessive data it can't trim,
it would be at most about 6% overuse.
Other notes:
1. a loop was added in feedReplicationBuffer which caused a massive LOC
change due to indentation; the actual changes are just the `min(max` and the loop.
2. an unrelated change in an existing test to speed up a server termination which took 10 seconds.
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 7be7834e65)
This bug seems to have been there forever: CLIENT REPLY OFF|SKIP
marks the client with the CLIENT_REPLY_OFF or CLIENT_REPLY_SKIP flag.
With these flags, prepareClientToWrite, called by addReply*, returns
C_ERR directly, so the client can't receive Pub/Sub
messages or any other push notifications, e.g. client-side tracking.
In this PR, we add a CLIENT_PUSHING flag that disables the reply
silencing flags: when adding push replies we set the flag, and after the
reply we clear it. A check for the flag is then added in prepareClientToWrite.
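A hedged, excerpt-style sketch of that flag dance (the flag and function
names are from the commit text; the surrounding details are simplified):
```c
/* Emitting a push reply: mark the client as pushing so the reply is not
 * swallowed by CLIENT REPLY OFF|SKIP silencing, then clear the mark. */
c->flags |= CLIENT_PUSHING;
addReplyPushLen(c, 3);
addReplyBulkCBuffer(c, "message", 7);
/* ... channel and payload ... */
c->flags &= ~CLIENT_PUSHING;

/* In prepareClientToWrite (sketch): the silencing flags only apply when
 * we are not in the middle of a push reply. */
if ((c->flags & (CLIENT_REPLY_OFF | CLIENT_REPLY_SKIP)) &&
    !(c->flags & CLIENT_PUSHING)) return C_ERR;
```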
Fixes #11874
Note, the SUBSCRIBE command response is a bit awkward,
see https://github.com/redis/redis-doc/pull/2327
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 416842e6c0)
If a command gets an OOM response and then we set maxmemory to zero
to disable the limit, server.pre_command_oom_state never gets updated
and stays true. Since RM_Call(), when invoked with the "respect deny-oom"
flag, checks server.pre_command_oom_state, all such calls will fail with OOM.
Added a server.maxmemory check in RM_Call() so the deny-oom flag is processed
only if maxmemory is configured.
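A standalone sketch of the added condition (struct and helper names are
illustrative of the description above, not the exact module.c code):
```c
/* Deny-oom should only be honored when a memory limit is configured:
 * with maxmemory == 0 there is no limit, so the cached (possibly stale)
 * OOM state must be ignored. */
struct server_state { unsigned long long maxmemory; int pre_command_oom_state; };

static int shouldRejectForOOM(const struct server_state *server, int respect_deny_oom) {
    if (!respect_deny_oom) return 0;
    if (server->maxmemory == 0) return 0;   /* no limit configured */
    return server->pre_command_oom_state;   /* OOM state cached before the command */
}
```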
(cherry picked from commit 18920813a9)
Authenticated users can use string matching commands with a
specially crafted pattern to trigger a denial-of-service attack on Redis,
causing it to hang and consume 100% CPU time.
Avoid calling unwatchAllKeys when running touchAllWatchedKeysInDb (it was unnecessary).
Doing so can potentially lead to a use-after-free and memory corruption when the next-entry
pointer held by the watched-keys iterator is freed while unwatching all keys of a specific client.
Found with address sanitizer; a test was added, though it will not always fail (it depends
on the random dict hashing seed).
Problem introduced in #9829 (Redis 7.0).
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 18017df7c1)
This test case covers an edge scenario: when a writable replica has AOF
enabled, actively expired keys which were created on the writable replica should
propagate to the AOF file; some versions might crash on this (fixed by #11615).
For details, please refer to #11778.
(cherry picked from commit 40659c3424)
Starting from Redis 7.0 (#9890) we started wrapping everything a command
propagates with MULTI/EXEC. The problem is that both SCAN and RANDOMKEY can
lazy-expire arbitrary keys (similar behavior to active-expire), and put DELs in a transaction.
Fix: when these commands are called without a parent exec-unit (e.g. not in EVAL or
MULTI), we avoid wrapping their DELs in a transaction (for the same reasons active-expire
and eviction avoid a transaction).
This PR adds a per-command flag that indicates that the command may touch arbitrary
keys (not the ones in the arguments), and uses that flag to avoid the MULTI-EXEC.
For now, this flag is internal, since we're considering other solutions for the future.
Note for cluster mode: if SCAN/RANDOMKEY is inside EVAL/MULTI it can still cause the
same situation (as it always did), but it won't cause a CROSSSLOT because replicas and AOF
do not perform slot checks.
The problem with the above is mainly for 3rd-party ecosystem tools that propagate commands
from master to master, or feed an AOF file into a master with redis-cli.
This PR aims to fix the regression in redis 7.0, and we opened #11792 to try to handle the
bigger problem with lazy expire better for another release.
(cherry picked from commit fd82bccd0e)
Currently, while a sharded pubsub message publish tries to propagate the message across the cluster, a NULL check is missing for clusterLink. clusterLink could be NULL if the link's memory usage goes beyond the configured threshold cluster-link-sendbuf-limit and the server terminates the link.
This change introduces two things:
* Avoids a crash on the publishing node when a message is sent to a node whose link is NULL.
* Adds a debugging tool, CLUSTERLINK KILL, to terminate the clusterLink between two nodes.
(cherry picked from commit fd3975684a)
In #7875 (Redis 6.2), we changed the sds alloc to be the usable allocation
size in order to:
> reduce the need for realloc calls by making the sds implicitly take over
the internal fragmentation
This change was applied to most sds functions, excluding `sdsRemoveFreeSpace` and
`sdsResize`. The reason is that in some places (e.g. clientsCronResizeQueryBuffer)
we call sdsRemoveFreeSpace when we see excessive free space and want to trim it,
so if we don't trim it exactly to size, the caller may still see excessive free space and
call it again and again.
However, this resulted in some excessive calls to realloc, even when there's no need
and it's gonna be a no-op (e.g. when reducing a 15 byte allocation to 13).
It turns out that a realloc call with jemalloc can be expensive even if it ends up
doing nothing, so this PR adds a check using `je_nallocx`, which is cheap, to avoid
the realloc call.
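Concretely, the check looks roughly like this, assuming a je_-prefixed
jemalloc build as Redis uses (the helper name is hypothetical):
```c
#include <stdlib.h>
#include <jemalloc/jemalloc.h>

/* je_nallocx() reports the real size jemalloc would hand out for a request
 * without allocating anything. If the new length maps to the same size
 * class as the current allocation, the realloc would be a no-op, so we
 * keep the pointer and skip the (potentially expensive) call. */
static void *shrinkIfWorthwhile(void *ptr, size_t curr_alloc_size, size_t new_size) {
    if (je_nallocx(new_size, 0) == curr_alloc_size)
        return ptr;                     /* same bucket: nothing to gain */
    return je_realloc(ptr, new_size);   /* different bucket: actually shrink */
}
```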
In addition, this PR unifies sdsResize and sdsRemoveFreeSpace into common
code. The difference between them was that sdsResize would avoid using SDS_TYPE_5,
since it wants to keep the string ready to be resized again, while sdsRemoveFreeSpace
would permit SDS_TYPE_5 and get optimal memory consumption.
Now both methods take a `would_regrow` argument that makes this more explicit.
The only actual impact of that is that clientsCronResizeQueryBuffer calls both sdsResize
and sdsRemoveFreeSpace in different cases, and we now prevent the use of SDS_TYPE_5 in both.
The new test that was added to cover this concern used to pass before this PR as well,
this PR is just a performance optimization and cleanup.
Benchmark:
`redis-benchmark -c 100 -t set -d 512 -P 10 -n 100000000`
on i7-9850H with jemalloc, shows improvement from 1021k ops/sec to 1067k (average of 3 runs).
some 4.5% improvement.
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 46393f9819)
According to the source code, the commands can be executed with only a key name,
and no GET/SET/INCR operation arguments.
Change the docs to reflect that by marking these arguments as optional.
Also add tests.
(cherry picked from commit fea9bbbe0f)
Authenticated users issuing specially crafted SETRANGE and SORT(_RO)
commands can trigger an integer overflow, resulting in Redis attempting
to allocate impossible amounts of memory and aborting with an OOM panic.
Related to the hang reported in #11671
Currently, redis can disconnect a client due to reaching the output buffer limit;
it'll also avoid feeding that output buffer with more data, but it will keep
running the loop in the command (despite the client already being marked for
disconnection).
This PR is an attempt to mitigate the problem for commands that are easy
to abuse, namely: KEYS, HRANDFIELD, SRANDMEMBER, ZRANDMEMBER.
The RAND family of commands can take a negative COUNT argument (which is not
bound to the number of elements in the key), so it's enough to create a key
with one field, and then these commands can be used to hang redis.
For KEYS the caller can use the existing keyspace in redis (if big enough).
Any value in the range [0, 1) turns into 0 when being cast from double to long long. This change rounds up instead of down for values that can't be stored precisely as long doubles.
(cherry picked from commit eef29b68a2)
TLDR: solve a problem introduced in Redis 7.0.6 (#11541) with
RM_CommandFilterArgInsert being called from scripts, which can
lead to memory corruption.
Libc realloc can return the same pointer even if the size was changed. The code in
freeLuaRedisArgv had an assumption that if the pointer didn't change, then the
allocation didn't change, and the cache can still be reused.
However, if rewriteClientCommandArgument or RM_CommandFilterArgInsert were
used, it could be that we realloc'ed the argv array and the pointer didn't change; then
a subsequent command executed from Lua could use that argv cache, reaching
beyond its size.
This was actually only possible with modules, since the decision to realloc was based
on argc, rather than argv_len.
(cherry picked from commit c8052122a2)
Fixes a regression introduced by #11552 in 7.0.6
that causes replies in the GEO commands to contain garbage when the
result is a very small distance (less than 1).
Includes a test to confirm that with junk in the buffer we now reply properly.
(cherry picked from commit d7b4c9175e)
In the replica, the key expired before the master's `INCR` arrived, so INCR
created a new key in the replica and the test failed.
```
*** [err]: Replication of an expired key does not delete the expired key in tests/integration/replication-4.tcl
Expected '0' to be equal to '1' (context: type eval line 13 cmd {assert_equal 0 [$slave exists k]} proc ::test)
```
This test is very likely to produce a false positive if `wait_for_ofs_sync`
takes longer than the expiration time, so give it a few more chances.
The test was introduced in #9572.
(cherry picked from commit 06b577aad0)
There is a race condition in the test:
```
*** [err]: redis-cli --cluster add-node with cluster-port in tests/unit/cluster/cli.tcl
Expected '5' to be equal to '4' {assert_equal 5 [CI 0 cluster_known_nodes]} proc ::test)
```
When using the cli to add a node, there can potentially be a race condition
in which all nodes present cluster state ok even though the added
node did not yet meet all cluster nodes.
This comment and the fix were taken from #11221. Also apply it in several
other similar places.
(cherry picked from commit a549b78c48)
When using the cli to add a node, there can potentially be a race condition in
which all nodes present cluster state ok even though the added node
did not yet meet all cluster nodes.
This adds another utility function to wait until all cluster nodes see the same cluster size.
(cherry picked from commit c0ce97facc)
A timing issue like this was reported in freebsd daily CI:
```
*** [err]: Sanity test push cmd after resharding in tests/unit/cluster/cli.tcl
Expected 'CLUSTERDOWN The cluster is down' to match '*MOVED*'
```
We additionally wait for each node to reach a consensus on the cluster
state in wait_for_condition, to avoid the cluster-down error.
The fix is just like #10495; quoting madolson's comment:
Cluster check just verifies that the config state is self-consistent;
waiting for cluster_state to be okay is an independent check that all
the nodes actually believe each other are healthy.
At the same time I noticed that unit/moduleapi/cluster.tcl has the exact
same test, which may have the same problem, so it was also modified.
(cherry picked from commit 5ce64ab010)
Clang Address Sanitizer runs started reporting unknown-crash on these
tests due to the memcheck; disable the memcheck to avoid that noise.
(cherry picked from commit 18ff6a3269)
This test sets the master ping interval to 1 hour, in order to avoid
pings in the replication stream incrementing the replication offset;
however, it didn't increase repl-timeout, so on slow machines
where the test took more than 60 seconds, the replicas would drop
and reconnect.
```
*** [err]: PSYNC2: Partial resync after restart using RDB aux fields in tests/integration/psync2.tcl
Replica didn't partial sync
```
The test would detect 4 additional partial syncs where it expected
only one.
(cherry picked from commit b0250b4508)
The kill above is sometimes successful and sometimes already too late.
The PING in the psync wrong offset test got rejected by bgsaveerr because
lastbgsave_status is C_ERR.
In theory, using diskless can avoid PING being affected, because when
the replica is dropped, we will kill the child with SIGUSR1, and this
will not affect lastbgsave_status.
Anyway, this kill is not particularly needed here, and dropping it
is the best option, since we do have waitForBgsave; just let it
take care of the bgsave. No need for fast termination.
(cherry picked from commit e7144693e2)
## Issue
During the client input/output buffer processing, the memory usage is
incrementally updated to keep track of clients going beyond a certain
threshold, `maxmemory-clients`, to be evicted. However, this additional
tracking activity leads to unnecessary wasted CPU cycles when no
client eviction is required. This applies in two cases:
* `maxmemory-clients` is set to `0`, which equates to no client eviction
(applicable to all clients)
* `CLIENT NO-EVICT` flag is set to `ON`, which makes a particular
client not applicable for eviction.
## Solution
* Disable client memory usage tracking during the read/write flow when
`maxmemory-clients` is set to `0` or `client no-evict` is `on` (see the
sketch after this list). The memory usage is tracked only during
`clientCron`, i.e. it gets updated periodically.
* Clean up the clients from the memory usage bucket when client eviction
is disabled.
* When the maxmemory-clients config is enabled or disabled at runtime,
we immediately update the memory usage buckets for all clients (tested:
scanning 80000 clients took some 20ms).
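The gating condition amounts to roughly the following standalone sketch
(the flag bit and struct types here are illustrative stubs):
```c
#define CLIENT_NO_EVICT (1ULL << 43) /* illustrative flag bit */

struct client_stub { unsigned long long flags; };
struct server_stub { unsigned long long maxmemory_clients; };

/* Per-command memory tracking is skipped when eviction can't apply:
 * either it's globally disabled, or this client has opted out. */
static int clientEvictionAllowed(const struct server_stub *srv,
                                 const struct client_stub *c) {
    return srv->maxmemory_clients != 0 && !(c->flags & CLIENT_NO_EVICT);
}
```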
Benchmarks showed that this can improve performance by about 5% in
certain situations.
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit c0267b3fa5)
This mechanism aims to reduce calls to malloc and free when
preparing the arguments the script sends to redis commands.
This mechanism was originally implemented in 48c49c4
and 4f68655, and was removed in #10220 (thinking it's not needed
and has no impact), but it now turns out that was wrong: it
indeed provides some 5% performance improvement.
The implementation is a little bit simplistic: it assumes consecutive
calls use the same size at the same arg index. But that's arguably
sufficient, since it's only aimed at caching very small things.
We could even consider always pre-allocating args to the full
LUA_CMD_OBJCACHE_MAX_LEN (64 bytes) rather than the right size for the argument;
that would increase the chance they'll be able to be re-used.
But in some way this is already happening, since we're using
sdsalloc, which in turn uses s_malloc_usable and takes ownership
of the full size of the allocation, so we are padded to the allocator
bucket size.
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: sundb <sundbcn@gmail.com>
(cherry picked from commit 2d80cd7840)
GEODIST used snprintf("%.4f") for the reply in addReplyDoubleDistance,
which was slow. This PR optimizes it without breaking compatibility by following
the approach of ll2string, with some changes to match the use case of distance
and precision: we multiply the value by 10000, format it as an integer, and then add
a decimal point. This achieves about a 35% increase in achievable ops/sec.
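An illustration of the fixed-point idea (the real code hand-rolls the digit
conversion like ll2string rather than calling snprintf, but the scaling is
the same):
```c
#include <stdio.h>

/* Scale the distance to an integer with 4 decimal digits of precision,
 * then only integer formatting remains; distances are never negative. */
static void formatDistance(char *buf, size_t buflen, double d) {
    long long scaled = (long long)(d * 10000.0 + 0.5); /* round, keep 4 decimals */
    snprintf(buf, buflen, "%lld.%04lld", scaled / 10000, scaled % 10000);
}
```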
Co-authored-by: Oran Agra <oran@redislabs.com>
(cherry picked from commit 61c85a2b20)
The cluster-announce-port / cluster-announce-bus-port / cluster-announce-tls-port configs should take effect at runtime.
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
(cherry picked from commit 25ffa79b64)
Both functions and EVAL are marked as "no-monitor", since we want to explicitly feed the script command into MONITOR before the commands generated by the script. Note that we want this behavior generally, so that commands can redact arguments before being added to the monitor.
(cherry picked from commit d136bf2830)
In moduleFireServerEvent we change the real client's DB to 0 in freeClient when the event is REDISMODULE_EVENT_CLIENT_CHANGE.
This results in a crash if the client is blocked on a key in a DB other than 0.
The DB change is not necessary even for a module client, as we set its DB to 0 in both createClient and moduleReleaseTempClient.
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
(cherry picked from commit e4eb18b303)
Fix a few issues with the recent #11463
* use exitFromChild instead of exit
* test should ignore defunct processes, since that's what we expect to
happen to these child processes when the parent dies.
* fix typo
Co-authored-by: Binbin <binloveplay1314@qq.com>
(cherry picked from commit 4c54528f0f)
During a diskless sync, if the master main process crashes, the child would
have hung in `write`. This fix closes the read fd on the child side, so that if the
parent crashes, the child will get a write error and exit.
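A sketch of the mechanism (fd handling simplified; the helper name is
hypothetical):
```c
#include <signal.h>
#include <unistd.h>

/* After fork() the child inherits both ends of the parent<->child pipe.
 * If the child keeps its copy of the read end open, the pipe still has a
 * reader even when the parent dies, so the child's write() blocks forever.
 * Closing the read end in the child means a dead parent leaves no readers,
 * and write() fails with EPIPE so the child can notice and exit. */
static void childDropReadEnd(int pipe_fds[2]) {
    signal(SIGPIPE, SIG_IGN); /* turn SIGPIPE into an EPIPE error from write() */
    close(pipe_fds[0]);       /* the child only writes */
}
```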
This change also fixes disk-based replication, BGSAVE and AOFRW.
In those cases the child wouldn't have hung; it would have just kept
running until done, which may be pointless.
There is a certain degree of risk here: in case there's a BGSAVE child that could
maybe succeed and the parent dies for some reason, the old code would have let
the child keep running, and it may have succeeded and avoided data loss.
On the other hand, if the parent is restarted, it would have loaded an old rdb file
(or none), and then the child could reach the end and rename the rdb file (data
conflicting with what the parent has), or also have a race with another BGSAVE
child that the new parent started.
Note that I removed a comment saying a write error will be ignored in the child
and handled by the parent (this comment was very old and I don't think it's relevant).
(cherry picked from commit ccaef5c923)