Close #13973
This PR fixes two bugs.
1) `overhead_hashtable_lut` isn't updated correctly
This bug was introduced by https://github.com/redis/redis/pull/12913
We only update `overhead_hashtable_lut` at the beginning and end of
rehashing, but we forgot to update it when a dict is emptied or
released.
This PR introduces a new `bucketChanged` callback to track changes in
bucket size.
The `rehashingStarted` and `rehashingCompleted` callbacks are no longer
responsible for bucket changes; these are handled entirely by
`bucketChanged`. This also means we only need to register one callback
to track bucket-size changes instead of three.
In most cases it is triggered together with `rehashingStarted` or
`rehashingCompleted`, except when a dict is emptied or released; in
those cases, even though the dict is not rehashing, we still need to
subtract its current size.
On the other hand, `overhead_hashtable_lut` duplicated `bucket_count`,
so we remove `overhead_hashtable_lut` and use `bucket_count` instead.
Note that this bug only happens in cluster mode, because we don't use
KVSTORE_FREE_EMPTY_DICTS without cluster mode.
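Below is a minimal standalone sketch of the single-callback bookkeeping (hypothetical names and deltas, not the kvstore internals): every bucket-count change, whether from rehashing, emptying, or releasing a dict, flows through one place.
```c
#include <stdio.h>

static long long bucket_count; /* replaces overhead_hashtable_lut */

/* The single callback: delta is positive when buckets are allocated
 * and negative when they are freed. */
static void bucketChanged(long long delta) {
    bucket_count += delta;
}

int main(void) {
    bucketChanged(+16); /* dict created with 16 buckets */
    bucketChanged(+32); /* rehashing started: target table allocated */
    bucketChanged(-16); /* rehashing completed: source table freed */
    bucketChanged(-32); /* dict emptied: even though it is not rehashing,
                         * its current size must still be subtracted */
    printf("bucket_count = %lld\n", bucket_count); /* prints 0 */
    return 0;
}
```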
2) The size of `dict_size_index` was counted twice in memory usage.
`dict_size_index` is created at startup, so its memory usage has been
counted into `used_memory_startup`.
However, when counting the overhead we calculated it again, which could
cause the overhead to exceed the total memory usage.
---------
Co-authored-by: Yuan Wang <yuan.wang@redis.com>
### Problem
A previous PR (https://github.com/redis/redis/pull/13932) fixed the TCP
port issue in CLUSTER SLOTS, but it seems the handling of the TLS port
was overlooked.
There is this comment in the `addNodeToNodeReply` function in the
`cluster.c` file:
```c
/* Report TLS ports to TLS client, and report non-TLS port to non-TLS client. */
addReplyLongLong(c, clusterNodeClientPort(node, shouldReturnTlsInfo()));
addReplyBulkCBuffer(c, clusterNodeGetName(node), CLUSTER_NAMELEN);
```
### Fixed
This PR fixes the TLS port issue and adds relevant tests.
This PR fixes the lag calculation by ensuring that when the consumer group's last_id
is behind the first entry, the consumer group's entries_read is considered
invalid and recalculated from the start of the stream.
Supplement to PR #13473
Close #13957
Signed-off-by: Ernesto Alejandro Santana Hidalgo <ernesto.alejandrosantana@gmail.com>
Close https://github.com/redis/redis/issues/13892
`CONFIG SET port` updates `server.port`. `CLUSTER SLOTS` retrieves
information about cluster slots and their associated nodes. The fix
updates this info when `CONFIG SET port` is executed, so `CLUSTER SLOTS`
returns the right value.
The bug mentioned in
[#13862](https://github.com/redis/redis/issues/13862) has been fixed.
---------
Signed-off-by: li-benson <1260437731@qq.com>
Signed-off-by: youngmore1024 <youngmore1024@outlook.com>
Co-authored-by: youngmore1024 <youngmore1024@outlook.com>
After https://github.com/redis/redis/pull/13167, when a client calls
the `FLUSHDB` command, we still empty the database asynchronously, and
the client is blocked until the lazyfree completes.
1) If another client calls the `SLAVEOF` command during this time, the
server will unblock all blocked clients, including those blocked by the
lazyfree. However, when unblocking a lazyfree-blocked client, we forgot
to call `updateStatsOnUnblock()`, which ultimately triggered the
following assertion.
2) If a client blocked by Lazyfree is unblocked midway, and at this
point the `bio_comp_list` has already received the completion
notification for the bio, we might end up processing a client that has
already been unblocked in `flushallSyncBgDone()`. Therefore, we need to
filter it out.
---------
Co-authored-by: oranagra <oran@redislabs.com>
Close #13628
This PR changes the behavior of the special `+` ID of the XREAD
command. It now uses `streamLastValidID` to find the last entry instead
of the `last_id` field of the stream object.
This PR adds test for the issue.
**Notes**
The initial idea of updating `last_id` while executing XDEL seems to be
wrong: `last_id` is used to store the last generated ID, not the ID of
the last entry.
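A standalone illustration of that distinction (this models, rather than uses, the real `streamLastValidID()`): after XADD 1-1, XADD 2-1, XDEL 2-1, `last_id` is still 2-1 (the last generated ID), while the last existing entry is 1-1.
```c
#include <stdio.h>

typedef struct { unsigned long long ms, seq; int deleted; } Entry;

/* Models streamLastValidID(): scan back to the last non-deleted entry. */
static const Entry *lastValidEntry(const Entry *e, int n) {
    for (int i = n - 1; i >= 0; i--)
        if (!e[i].deleted) return &e[i];
    return NULL;
}

int main(void) {
    Entry entries[] = {{1, 1, 0}, {2, 1, 1 /* XDEL'ed */}};
    /* last_id would still be 2-1 here; `+` should resolve to 1-1: */
    const Entry *last = lastValidEntry(entries, 2);
    if (last) printf("XREAD + resolves to %llu-%llu\n", last->ms, last->seq);
    return 0;
}
```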
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
Co-authored-by: guybe7 <guy.benoish@redislabs.com>
Starting from https://github.com/redis/redis/pull/13133, we allocate a
jemalloc thread cache and use it for the Lua VM.
In certain cases, like the `script flush` or `function flush` command,
we free the existing thread cache and create a new one.
However, for `function flush`, we were not actually destroying the
existing thread cache itself: each call creates a new thread cache in
jemalloc, and we leak the previous thread cache instance. Jemalloc
allows a maximum of 4096 thread cache instances; if we reach this
limit, Redis prints the "Failed creating the lua jemalloc tcache" log
and aborts.
There are other cases that can cause this memory leak, including
replication scenarios when emptyData() is called.
The implication is that Redis' `used_memory` looks low, while
`allocator_allocated` and RSS remain high.
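A hedged sketch of the fix using jemalloc's documented `mallctl()` "tcache.create"/"tcache.destroy" controls (the surrounding globals are made up, and Redis actually calls the `je_`-prefixed symbols): destroy the old cache before creating a new one.
```c
#include <jemalloc/jemalloc.h>
#include <stddef.h>

static unsigned lua_tcache;   /* assumed global holding the tcache id */
static int lua_tcache_valid;

/* Destroy the previous thread cache before creating a new one, instead
 * of leaking one of the 4096 available instances per flush. */
static int resetLuaTcache(void) {
    if (lua_tcache_valid) {
        mallctl("tcache.destroy", NULL, NULL,
                &lua_tcache, sizeof(lua_tcache));
        lua_tcache_valid = 0;
    }
    size_t sz = sizeof(lua_tcache);
    if (mallctl("tcache.create", &lua_tcache, &sz, NULL, 0) != 0)
        return -1; /* the "Failed creating the lua jemalloc tcache" path */
    lua_tcache_valid = 1;
    return 0;
}

int main(void) { return resetLuaTcache() == 0 ? 0 : 1; }
```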
Co-authored-by: debing.sun <debing.sun@redis.com>
Fix for https://github.com/redis/redis/issues/13650
Providing an invalid config to a module with a datatype caused a crash
when Redis tried to unload the module due to the invalid config.
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
From 7.4, Redis allows `GET` options in cluster mode when the pattern maps to
the same slot as the key, but the `GET #` pattern, which represents the key
itself, was missed. This commit resolves it; bug report #13607.
---------
Co-authored-by: Yuan Wang <yuan.wang@redis.com>
Test 1 - give more time for expiration
Test 2 - Evaluate expiration time boundaries [+1,+2] before setting expiration [+1]
Test 3 - Avoid race on test HFEs propagated to replica
If the hash previously had HFEs (hash-fields with expiration) but later no longer
does, the key ref in the hash might become outdated after a MOVE, COPY,
RENAME or RESTORE operation. These commands maintain the key ref only
if HFEs are present. That is, we can only be sure that the key ref is valid
as long as the hash has HFEs.
## Description
When the `XTRIM` command trims a stream, it does not update the
maximal tombstone (`max_deleted_entry_id`). This leads to an issue where
the lag calculation incorrectly assumes that there are no tombstones
after the consumer group's last_id, resulting in an inaccurate lag.
The reason XTRIM doesn't need to update the maximal tombstone is that it
always trims from the beginning of the stream. This means that it
consistently changes the position of the first entry, leading to the
following scenarios:
1) First entry trimmed after maximal tombstone:
If the first entry is trimmed to a position after the maximal tombstone,
all tombstones will be before the first entry, so they won't affect the
consumer group's lag.
2) First entry trimmed before maximal tombstone:
If the first entry is trimmed to a position before the maximal
tombstone, the maximal tombstone will not be updated.
## Solution
Therefore, this PR optimizes the lag calculation by ensuring that when
both the consumer group's last_id and the maximal tombstone are behind
the first entry, the consumer group's lag is always equal to the number
of remaining elements in the stream.
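A standalone sketch of that rule (hypothetical field names, modeled after the stream bookkeeping described above): when both the group's last delivered ID and the maximal tombstone precede the first entry, every remaining entry is unread, so the lag is simply the stream length.
```c
#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t ms, seq; } streamID;

static int idcmp(streamID a, streamID b) {
    if (a.ms != b.ms) return a.ms < b.ms ? -1 : 1;
    if (a.seq != b.seq) return a.seq < b.seq ? -1 : 1;
    return 0;
}

/* Returns the group's lag given the stream counters described above. */
static uint64_t groupLag(streamID first_id, uint64_t length,
                         uint64_t entries_added, uint64_t entries_read,
                         streamID last_delivered, streamID max_tombstone) {
    if (idcmp(last_delivered, first_id) < 0 &&
        idcmp(max_tombstone, first_id) < 0) {
        return length; /* nothing delivered remains: all entries unread */
    }
    return entries_added - entries_read; /* the usual formula */
}

int main(void) {
    streamID first = {5, 0}, delivered = {2, 0}, tomb = {3, 0};
    /* 10 entries ever added, the first 5 trimmed away, 2 delivered. */
    printf("lag = %llu\n", (unsigned long long)
           groupLag(first, 5, 10, 2, delivered, tomb)); /* lag = 5 */
    return 0;
}
```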
Supplement to PR https://github.com/redis/redis/pull/13338
Hash field expiration is optimized to avoid frequently updating the global HFE DS
for each field deletion. Eventually, active expiration will run and update or remove
the hash from the global HFE DS gracefully. Nevertheless, the "subexpiry" statistic
might report a wrong number of hashes with HFEs to the user if HDEL deletes
the last field with expiration in a hash (while more fields without expiration remain).
Following this change, if HDEL deletes the last field with expiration in the hash, we
take care to remove the hash from the global HFE DS as well.
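A standalone model of the added step (counters invented for illustration; this is not the Redis code): detaching the hash from the global HFE DS as soon as its last expiring field is HDELed keeps the "subexpiry" gauge accurate.
```c
#include <stdio.h>

/* Hash tracks how many of its fields still carry a TTL; "subexpiry"
 * counts hashes registered in the global HFE DS. */
typedef struct { int expiring_fields; int in_global_hfe_ds; } Hash;

static int subexpiry; /* hashes currently registered in the global DS */

static void hdelField(Hash *h, int field_had_ttl) {
    if (field_had_ttl && --h->expiring_fields == 0 && h->in_global_hfe_ds) {
        h->in_global_hfe_ds = 0; /* the fix: detach the hash right away */
        subexpiry--;
    }
}

int main(void) {
    Hash h = {.expiring_fields = 1, .in_global_hfe_ds = 1};
    subexpiry = 1;
    hdelField(&h, 1); /* HDEL removes the last expiring field */
    printf("subexpiry = %d\n", subexpiry); /* 0: no stale registration */
    return 0;
}
```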
This PR is based on the commits from PR
https://github.com/valkey-io/valkey/pull/52.
Ref: https://github.com/redis/redis/pull/12760
Close https://github.com/redis/redis/issues/13401
This PR will replace https://github.com/redis/redis/pull/13449
Fixes compatibility of Redis cluster (7.2, extensions enabled by
default) with older Redis cluster (< 7.0, extensions not handled).
With some of the extensions enabled by default in version 7.2, new nodes
running 7.2 and above start sending out a larger clusterbus message
payload that includes the ping extensions. This caused an incompatibility
with nodes running engine versions < 7.0. Old nodes (< 7.0) receiving
the payload from new nodes (>= 7.2) would observe a payload length
(totlen) greater than the estimated length (estlen), perform an early
exit, and not process the message.
This fix does the following things:
1. Always set `CLUSTERMSG_FLAG0_EXT_DATA`, because during the meet
phase we do not know whether the connected node supports ext data; we
need to make sure that it knows, and that it sends back its ext data if
it has any.
2. If another node does not support ext data, we will not send it ext
data, to avoid handshake failure due to an incorrect payload length.
Note: a successful `PING`/`PONG` is required before a given node is
marked as `CLUSTERMSG_FLAG0_EXT_DATA`, and only then will extension
messages be sent to it. This could cause a slight delay in receiving
the extension message(s).
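A standalone model of the send-side rule (names and sizes made up; only `CLUSTERMSG_FLAG0_EXT_DATA` comes from the codebase): always advertise ext-data support, but only attach extension payload for peers that have proved they understand it.
```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MSG_FLAG_EXT_DATA (1 << 0) /* stands in for CLUSTERMSG_FLAG0_EXT_DATA */

typedef struct { bool supports_ext; } Node; /* learned from a PING/PONG */

static size_t buildPing(const Node *peer, unsigned *flags_out) {
    size_t totlen = 100;             /* made-up base header size */
    *flags_out = MSG_FLAG_EXT_DATA;  /* 1. always advertise support */
    if (peer->supports_ext)
        totlen += 40;                /* 2. extensions only for peers that
                                      *    proved they understand them */
    return totlen;
}

int main(void) {
    Node oldnode = {false}, newnode = {true};
    unsigned flags;
    printf("to old node: %zu bytes\n", buildPing(&oldnode, &flags)); /* 100 */
    printf("to new node: %zu bytes\n", buildPing(&newnode, &flags)); /* 140 */
    return 0;
}
```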
---------
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
---------
Signed-off-by: Harkrishn Patro <harkrisp@amazon.com>
Co-authored-by: Harkrishn Patro <harkrisp@amazon.com>
Fix #13337
This PR fixes two bugs that caused lag calculation errors.
1. When the latest tombstone is before the first entry, the tombstone
may still be after the last ID of the consumer group.
2. When a tombstone is after the last ID of the consumer group, the
group's counter is invalid; we should calculate entries_read using
estimates.
[exception]: Executing test client: ERR FAILOVER target replica is not
online.. ERR FAILOVER target replica is not online.
while executing
"$node_0 failover to $node_1_host $node_1_port"
("uplevel" body line 16)
invoked from within
"uplevel 1 $code"
(procedure "test" line 58)
invoked from within
"test {failover command to specific replica works} {
[err]: client evicted due to percentage of maxmemory in
tests/unit/client-eviction.tcl
Expected 33622 >= 220200 && 33622 < 440401 (context: type eval line 17
cmd {assert {$tot_mem >= $n && $tot_mem < $maxmemory_clients_actual}}
proc ::test)
When the tests are run against an external server in this order:
`--single unit/introspection --single unit/moduleapi/blockonbackground
--single integration/redis-cli`
the test would hang when the "ASK redirect test" test attempts to create
a listening socket (it fails, and then redis-cli itself hangs waiting
for a non-responsive socket created by the introspection test).
The reasons are:
1. the blockonbackground test includes util.tcl and resets the
`::last_port_attempted` variable
2. the test in introspection didn't close the listening server, so it's
still alive
3. find_available_port doesn't properly detect the busy port, and
thinks the port is free even though it's busy
This PR fixes all 3 of these problems, even though fixing just one
would be enough to let the test pass.
Nowadays we do not trigger Lua GC after loading a Lua script. This means
that when a large number of scripts are loaded, such as when functions
are propagated from the master to the replica, if the Lua scripts are
never touched on the replica, the garbage might remain there
indefinitely.
Before this PR, we would share a gc_count between scripts and functions.
This means that, under certain circumstances, the GC trigger for scripts
and functions was not fair.
For example, loading a large number of scripts followed by a small
number of functions could result in the functions triggering GC.
In this PR, we assign a unique `gc_count` to each of them, so the GC
triggers between them will no longer affect each other.
On the other hand, this PR brings a regression for the script loading
commands (`FUNCTION LOAD` and `SCRIPT LOAD`), but they are not a hot
path, so we can ignore it; it will be replaced by
https://github.com/redis/redis/pull/13375 in the future.
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
If `config resetstat` is executed and a defrag is started after it,
`total_active_defrag_time` will not be 0.
When we start the defrag again, we will skip the following steps:
1. waiting for the defrag to start (total_active_defrag_time is equal
to 0)
2. waiting for the test to complete (active_defrag_running is equal to 0)
which results in the test failing.
---------
Co-authored-by: oranagra <oran@redislabs.com>
### Issue
The current implementation of the `FUNCTION FLUSH` command uses
`lua_unref()` to unreference script closures in the Lua VM. However,
invoking `lua_unref()` during lazy free (`ASYNC` argument) is risky
since it is not thread-safe.
Another issue is that using `lua_unref()` to unreference references does
not trigger GC. This can leave a significant amount of garbage in the
Lua VM, which may never be cleaned up if not properly GCed.
### Solution
The proposed solution is to completely rebuild the engines, resulting in
a brand new Lua VM.
---------
Co-authored-by: meir <meir@redis.com>
* INFO command : rename `hashes_with_expiry_fields` to `subexpiry`
* INFO command : rename `expired_hash_fields` to `expired_subkeys`
* Fix the `expired_subkeys` statistic to also count lazily expired fields
* Remove leftover TODO comments in TCL
* Fix a potentially flaky test of RDB load of hash-field-expiration
If we run FLUSHALL when the 'save' config is set and there's a fork
child doing BGSAVE, there's a chance the child has already finished and
the parent process is unaware of it. In that case the child will not get
the kill signal and will finish successfully, while the parent process
thinks it killed the child and resets the dirty counter to 0; the
backgroundSaveDoneHandlerDisk method can then set the dirty counter to a
negative value.
getKeysUsingKeySpecs had the range check AFTER the allocation of the
keys buffer, which could lead to an OOM panic when invalid arguments
that cause an overflow are provided.
The allocated memory is only used after the range check, so there's no
risk of buffer overrun.
The OOM panic can happen on 32-bit builds, or on 64-bit builds running
on systems with less than 4GB of RAM, and is reachable via COMMAND
GETKEYSANDFLAGS and ACL key name validation.
There was a wrong preliminary assumption that we could optionally
provide a vector of arguments larger than the count.
This error-prone approach led to an actual error in this case. This PR
enforces that the vector of arguments matches the count.
Also fixed a flaky HRANDFIELD test.
In certain situations, we might generate a large number of propagates
(e.g., multi/exec, Lua script, or a single command generating tons of
propagations) within an event loop.
During the process of propagating to a replica, if the replica is
disconnected (marked as CLIENT_CLOSE_ASAP) due to exceeding the output
buffer limit, we should remove its reference to the global replication
buffer, to avoid the global replication buffer being unable to be
properly trimmed while still referenced.
---------
Co-authored-by: oranagra <oran@redislabs.com>
The H(P)EXPIREAT command might delete fields if the absolute time is in the
past. Those HDELs need to be propagated as well.
In general, as we need to propagate the H(P)EXPIRE(AT) command to the replica,
each field mentioned in the command should be categorized into one of four
options:
1. Managed to update field’s expiration time - propagate it to replica as part
of the HPEXPIREAT command.
2. Deleted the field because the time is in the past - propagate also HDEL command
to delete the field and remove the field from the propagated HPEXPIREAT.
3. Condition not met for the field - Remove the field from the propagated
HPEXPIREAT command.
4. Field does not exist - Remove the field from the propagated HPEXPIREAT command.
If none of the provided fields match option number 1, then avoid propagating
the HPEXPIREAT command to the replica altogether.
This approach is aligned with the EXPIRE command:
if a given key has already expired, then DEL is propagated instead of the
EXPIRE command. If the condition is not met, the command is rejected. Otherwise,
the EXPIRE command is propagated for the given key.
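A standalone model of the four-way classification (enum and names made up): only fields matching option 1 stay in the propagated HPEXPIREAT, past-time deletions additionally emit HDEL, and if nothing matches option 1 the HPEXPIREAT is skipped entirely.
```c
#include <stdio.h>

typedef enum { UPDATED, DELETED, COND_NOT_MET, NO_FIELD } Outcome;

int main(void) {
    const char *fields[] = {"f1", "f2", "f3", "f4"};
    Outcome out[] = {DELETED, COND_NOT_MET, NO_FIELD, UPDATED};
    int kept = 0;
    for (int i = 0; i < 4; i++) {
        if (out[i] == UPDATED) {
            kept++;
            printf("keep %s in the propagated HPEXPIREAT\n", fields[i]);
        } else if (out[i] == DELETED) {
            printf("propagate HDEL %s (time already in the past)\n", fields[i]);
        } /* COND_NOT_MET / NO_FIELD: silently dropped from HPEXPIREAT */
    }
    if (!kept) printf("no field updated: skip propagating HPEXPIREAT\n");
    return 0;
}
```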
Considerations for the selected implementation of HRANDFIELD with the HFE
feature:
HRANDFIELD might access any of the fields in the hash, and some of them
might be expired. So the implementation of HRANDFIELD along with HFEs
could take one of two options:
1. Expire hash-fields before diving into handling HRANDFIELD.
2. Refine HRANDFIELD cases to deal with expired fields.
Regarding the first option: as a reference, the RANDOMKEY command also
declares O(1) complexity, yet might be stuck in a very long (but not infinite)
loop trying to find non-expired keys. Furthermore, RANDOMKEY also evicts expired
keys along the way, even though it is categorized as a read-only command. Note
that the case of HRANDFIELD is more lightweight than RANDOMKEY, since
HFE has much more effective and aggressive active expiration for fields behind
the scenes.
The second option introduces additional implementation complexity to HRANDFIELD.
We could further refine the HRANDFIELD cases to differentiate between scenarios
with many expired fields versus few expired fields, and adjust based on the
percentage of expired fields. However, this approach could still lead to long
loops or necessitate expiring fields before selecting them, while for the
"lightweight" cases a lightweight expiration is expected anyway.
Considering the pros and cons, the fact that HRANDFIELD is an infrequent
command (particularly with HFEs), and the fact that we have effective active
expiration for hash fields behind the scenes, it is better to keep it simple
and choose option number 1.
Other changes:
* Don't mark the command dirty from the internal hashTypeExpire(). It
caused the read-only HRANDFIELD command to be accidentally propagated
(this flag should be indicated at a higher level, by the command functions).
* Align `hashTypeExpireIfNeeded()` and `hashTypeGetValue()` with the
`expireIfNeeded()` logic of the keyspace.
Currently, HFE commands reply with an empty array if the key does not
exist, even though a non-existing key and an empty key are the same thing:
it means the fields given in the command do not exist in the (empty) key.
So, replying with an array of 'no field' error codes (-2) suits Redis
logic better; similarly, `hmget` returns an array of nulls if the key
does not exist.
After this PR:
```
127.0.0.1:6379> hpersist missingkey fields 2 a b
1) (integer) -2
2) (integer) -2
```
When a hash field expires, we now send a new `hexpired` notification.
This mainly covers the following three cases:
1. When a field is expired by active expiration.
2. When a field is expired by lazy expiration.
3. When the user uses the `h(p)expire(at)` command and a field expires
during the command, the user also gets a `hexpired` notification.
## Improvement
1. Now, if more than one field expires during an hmget command, we only
send a single `hexpired` notification.
2. When a field with TTL is deleted by commands like hdel without
updating the global DS, active expire will not send a notification.
---------
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
Co-authored-by: Moti Cohen <moti.cohen@redis.com>
Reserve 2 bits out of hash-field expiration time (`EB_EXPIRE_TIME_MAX`)
for possible future lightweight indexing/categorizing of fields. It can
be achieved by hacking HFE as follows:
```
HPEXPIREAT key [ 2^47 + USER_INDEX ] FIELDS numfields field [field …]
```
Redis will also need to expose some kind of `HEXPIRESCAN` and
`HEXPIRECOUNT` commands for this idea; that is yet to be better defined.
The `HFE_MAX_ABS_TIME_MSEC` constraint must be enforced only at the API
level. Internally, the expiration time can be up to `EB_EXPIRE_TIME_MAX`
for future readiness.
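A small sketch of the constant layout implied above (the concrete values are assumptions based on this description, not the committed definitions):
```c
#include <stdio.h>

/* Assumed from the description: 48-bit internal expiry, with the top
 * 2 bits reserved at the API level. */
#define EB_EXPIRE_TIME_MAX    ((1ULL << 48) - 1) /* internal limit */
#define HFE_MAX_ABS_TIME_MSEC ((1ULL << 46) - 1) /* API-level limit */

int main(void) {
    /* The reserved bits sit above the API limit but below the internal
     * one, leaving room for hacks like 2^47 + USER_INDEX. */
    printf("reserved bits: %d\n",
           __builtin_ctzll((EB_EXPIRE_TIME_MAX + 1) /
                           (HFE_MAX_ABS_TIME_MSEC + 1))); /* prints 2 */
    return 0;
}
```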
We need to be careful if this is called by modules, since the modules
API allows opening and closing key handles; we don't want to invalidate
a handle underneath.
* hashTypeExists(), hashTypeGetValueObject() - will return the logical
state of the field. A flag will indicate noExpire.
* RM_HashGet() - Will get NULL if the field expired. Fields won’t be
deleted.
* RM_ScanKey() - might return 0 items if all fields got expired. Fields
won’t be deleted.
* RM_HashSet() - If set, then override the expired field. If delete, we
can either delete it or leave it to active expiration. XX/NX - logically
correct (verify with tests).
Nice to have (not implemented):
* RedisModule_CloseKey() - We could locally active-expire up to 100 items.
Note:
Length will be wrong for modules, just like for Redis (expired fields
are counted).
In the old test, we gave `hexpire` a very short expire time, which
caused the field to be deleted by the time the `hpersist` command was
executed. As a result, the `hpersist` command couldn't emit a
`hpersist` notification, causing the test to get stuck.
Failing CI:
https://github.com/redis/redis/actions/runs/9342175887/job/25709886471
1. Don't allow the HEXPIRE/HEXPIREAT/HPEXPIRE/HPEXPIREAT commands to
accept negative expire parameters.
2. Remove dead code reported by Coverity:
when `unit` is not `UNIT_SECONDS`, the second `if (expire > (long long)
EB_EXPIRE_TIME_MAX)` check is dead code.
```c
# t_hash.c
2988 /* Check expire overflow */
cond_at_most: Condition expire > 281474976710655LL, taking false branch. Now the value of expire is at most 281474976710655.
2989 if (expire > (long long) EB_EXPIRE_TIME_MAX) {
2990 addReplyErrorExpireTime(c);
2991 return;
2992 }
2994 if (unit == UNIT_SECONDS) {
2995 if (expire > (long long) EB_EXPIRE_TIME_MAX / 1000) {
2996 addReplyErrorExpireTime(c);
2997 return;
2998 }
2999 expire *= 1000;
3000 } else {
at_most: At condition expire > 281474976710655LL, the value of expire must be at most 281474976710655.
dead_error_condition: The condition expire > 281474976710655LL cannot be true.
3001 if (expire > (long long) EB_EXPIRE_TIME_MAX) {
CID 494223: (#1 of 1): Logically dead code (DEADCODE)
dead_error_begin: Execution cannot reach this statement: addReplyErrorExpireTime(c);.
3002 addReplyErrorExpireTime(c);
3003 return;
3004 }
3005 }
```
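One possible shape of the corrected checks (a sketch, not necessarily the committed code): reject negative values up front and keep a single reachable overflow comparison per path.
```c
if (expire < 0) {
    addReplyErrorExpireTime(c); /* negative expire rejected up front */
    return;
}
if (unit == UNIT_SECONDS) {
    if (expire > (long long) EB_EXPIRE_TIME_MAX / 1000) {
        addReplyErrorExpireTime(c);
        return;
    }
    expire *= 1000;
} else if (expire > (long long) EB_EXPIRE_TIME_MAX) {
    /* only the msec path can reach this, so the branch is live */
    addReplyErrorExpireTime(c);
    return;
}
```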
---------
Co-authored-by: Ozan Tezcan <ozantezcan@gmail.com>
RM_ScanKey() was overlooked while introducing hash field expiration.
An assert is triggered when it is called on a hash key with
OBJ_ENCODING_LISTPACK_EX encoding.
I've changed the code to handle the listpackex encoding properly.
The crash happens when the user that triggered the permission changes
should itself be affected (and should eventually be disconnected).
To handle such a scenario, we use the `CLIENT_CLOSE_AFTER_COMMAND` flag.
This commit encapsulates all the places that should be handled in the
same way in `deauthenticateAndCloseClient`.
Also:
* bugfix: during the ACL LOAD we ignore clients that are marked as
`CLIENT MASTER`
**Related issue**
https://github.com/redis/redis/issues/13219
**Motivation**
Currently we have to manually update the all_tests variable when
introducing new test files.
**Modification**
I have modified it to list test files dynamically. Instead of adding
all test files, it only adds test files from the following 4 paths:
- unit
- unit/type
- unit/cluster
- integration
so that it doesn't deviate too much from what we already do
**Result**
- dynamically list test files to all_tests variable
- close issue https://github.com/redis/redis/issues/13219
**Additional information**
- removed `list-common.tcl` file and added
`generate_largevalue_test_array` proc in `util.tcl`. because
`list-common.tcl` is not a test file
- There is an order dependency, so I added code to the "Is a ziplist
encoded Hash promoted on big payload?" test that resets
hash-max-listpack-value to the default (64).
---------
Signed-off-by: jonghoonpark <dev@jonghoonpark.com>
Co-authored-by: debing.sun <debing.sun@redis.com>
## Background
This PR introduces support for field-level expiration in Redis hashes. Previously, Redis supported expiration only at the key level, but this enhancement allows setting expiration times for individual fields within a hash.
## New commands
* HEXPIRE
* HEXPIREAT
* HEXPIRETIME
* HPERSIST
* HPEXPIRE
* HPEXPIREAT
* HPEXPIRETIME
* HPTTL
* HTTL
## Short example
from @moticless
```sh
127.0.0.1:6379> hset myhash f1 v1 f2 v2 f3 v3
(integer) 3
127.0.0.1:6379> hpexpire myhash 10000 NX fields 2 f2 f3
1) (integer) 1
2) (integer) 1
127.0.0.1:6379> hpttl myhash fields 3 f1 f2 f3
1) (integer) -1
2) (integer) 9997
3) (integer) 9997
127.0.0.1:6379> hgetall myhash
1) "f3"
2) "v3"
3) "f2"
4) "v2"
5) "f1"
6) "v1"
... after 10 seconds ...
127.0.0.1:6379> hgetall myhash
1) "f1"
2) "v1"
127.0.0.1:6379>
```
## Expiration strategy
1. Active expiration
Redis periodically performs active expiration and deletion of hash keys that contain expired fields, with a maximum attempt limit.
2. Lazy expiration
When a client touches fields within a hash, Redis checks whether the fields are expired. If a field is expired, it is deleted. However, we do not delete expired fields during a traversal; we implicitly skip over them (see the sketch below).
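A standalone model of the lazy strategy (not the Redis internals): a read treats an expired field as missing and a traversal skips expired fields rather than deleting them.
```c
#include <stdio.h>
#include <string.h>
#include <time.h>

typedef struct { const char *name; long long expire_at_ms; } Field; /* 0 = no TTL */

static long long now_ms(void) { return (long long)time(NULL) * 1000; }

static int fieldIsExpired(const Field *f) {
    return f->expire_at_ms != 0 && f->expire_at_ms <= now_ms();
}

static const Field *hashGet(const Field *f, int n, const char *name) {
    for (int i = 0; i < n; i++) {
        if (fieldIsExpired(&f[i])) continue; /* skip; delete lazily later */
        if (strcmp(f[i].name, name) == 0) return &f[i];
    }
    return NULL;
}

int main(void) {
    Field h[] = {{"f1", 0}, {"f2", 1 /* long expired */}};
    printf("f1: %s\n", hashGet(h, 2, "f1") ? "found" : "missing");
    printf("f2: %s\n", hashGet(h, 2, "f2") ? "found" : "missing");
    return 0;
}
```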
## RDB changes
Add two new RDB types: `RDB_TYPE_HASH_METADATA` and `RDB_TYPE_HASH_LISTPACK_EX`.
## Notification
1. Add a `hpersist` notification for the `HPERSIST` command.
2. Add a `hexpire` notification for the `HEXPIRE`, `HEXPIREAT`, `HPEXPIRE` and `HPEXPIREAT` commands.
## Internal
1. Add a new data structure, `ebuckets`, used to store TTLs and keys, enabling quick retrieval of keys based on TTL.
2. Add a new sds-like data structure, `mstr`, used to store a string together with a TTL.
This work was done by @moticless, @tezc, @ronen-kalish and @sundb; I just released it.
* For the replica's sake, rewrite the `H*EXPIRE*`, `HSETF` and `HGETF`
commands to carry absolute unix time in msec.
* On active expiration of a field, propagate HDEL to the replica
(`propagateHashFieldDeletion()`).
* On lazy expiration, propagate HDEL to the replica
(`hashTypeGetValue()` now calls `hashTypeDelete()`; it also takes care
to call `propagateHashFieldDeletion()`).
* Fix the `H*EXPIRE*` commands such that getting the `LT` flag for a
field that has no expiration is considered a valid condition.
Note: replicas don't perform any active expiration and should avoid lazy
expiration; on replicas, `hashTypeGetValue()` doesn't check expiration
(as long as the master didn't request to delete the field, it is valid).
TODO:
* Attach `dbid` to HASH metadata. See
[here](https://github.com/redis/redis/pull/13209#discussion_r1593385850)
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
In the last step of hscan, while replying to the client, we assume all
items in the result list are keys, which are mstr instances. However,
there might be values among them, which are sds instances.
Added a check to avoid calling mstrlen() on value objects (a sketch
follows the reproduction below).
To reproduce:
```
127.0.0.1:6379> hset myhash1 a 11111111111111111111111111111111111111111111111111111111111111111
(integer) 0
127.0.0.1:6379> hscan myhash1 0
1) "0"
2) 1) "a"
2) "11111111111111111111111111111111111111111111111111111111111111111\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
```