VRANDMEMBER had a bug when exactly two elements were present in the
vector set: we selected a fixed number of random paths to take, and this
would always lead to the same element. This PR should be kindly
back-ported to Redis 8.x.
Hello, this is a patch that improves vector sets in two ways:
1. It makes the RDB format compatible with big endian machines: yeah,
they are practically non-existent nowadays, but it is still better to be
correct. The behavior remains unchanged on little endian systems; it
only changes what happens on big endian systems, so that they load and
emit the exact same format produced by little endian ones. The
implementation was *already largely safe* but for one detail (see the
sketch after this list).
2. More importantly, this PR saves each node's worst link score / index
in a backward compatible way, also introducing versioning information
for the serialized node encoding, which could be useful in the future.
With this information, which in the past was not saved because of a
programming error (mine), there is no longer any need to compute the
worst link info at runtime when loading data. This results in a speed
improvement of about 30% when loading data from disk / RESTORE. Saving
performance is unaffected.
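To illustrate the endianness point, here is a minimal sketch of the idea
(the helper name is hypothetical, not the actual vector sets code):
values are always stored on disk in little endian order, and byte-swapped
at load time only on big endian hosts.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: load a little-endian float from a serialized
 * buffer. On big endian hosts the bytes are swapped, so that both
 * architectures read and emit the exact same on-disk format. */
static float load_le_float(const unsigned char *buf) {
    uint32_t u;
    memcpy(&u, buf, sizeof(u));
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    u = __builtin_bswap32(u); /* Little endian on disk -> host order. */
#endif
    float f;
    memcpy(&f, &u, sizeof(f));
    return f;
}
```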
The patch was tested with care to be sure that data produced with old
vector sets implementations are loaded without issues (that is, the
backward compatibility was hand-tested). The new code is tested by the
persistence test already in the test suite, so no new test was added.
The SHA256 checksums for Rust 1.88.0 were incorrect, causing checksum
verification failures during installation. Updated with the correct
official checksums from https://static.rust-lang.org/dist/:
- x86_64-unknown-linux-gnu: 7b5437c1d18a174faae253a18eac22c32288dccfc09ff78d5ee99b7467e21bca
- x86_64-unknown-linux-musl: 200bcf3b5d574caededba78c9ea9d27e7afc5c6df4154ed0551879859be328e1
- aarch64-unknown-linux-gnu: d5decc46123eb888f809f2ee3b118d13586a37ffad38afaefe56aa7139481d34
- aarch64-unknown-linux-musl: f8b3a158f9e5e8cc82e4d92500dd2738ac7d8b5e66e0f18330408856235dec35
This PR introduces "IN" overloading for strings in Vector Sets VSIM
FILTER expressions.
Now it is possible to do something like:
"foo" IN "foobar"
IN continues to work as usual if the second operand is an array,
checking for membership of the left operand.
Ping @rowantrollope who requested this feature. I'm evaluating whether
to add glob matching functionality via the `=~` operator, but I probably
need to do an optimization round on our glob matching function first.
Glob matching can be slower; at the same time the complexity of the
greedy search in the graph remains unchanged, so it may be a good idea
to have it.
Case insensitive search will likely not be added, however, since this
would require handling Unicode, which is kinda outside the scope of
Redis filters. The user is still able to perform `"foo" in "foobar" ||
"FOO" in "foobar"` at least.
These changes improve the Vector Sets tests a bit:
* DB9 is used instead of the target DB. After a successful test the DB
is left empty.
* If the replica is not available, the replication tests are skipped
with just a warning instead of an error.
* Other refactoring stuff.
This PR introduces the initial configuration infrastructure for
vector-sets, along with a new option:
`vset-force-single-threaded-execution`. When enabled, it applies the
`NOTHREAD` flag to VSIM and disables the `CAS` option for VADD, thereby
enforcing single-threaded execution.
Note: This mode is not optimized for single-threaded performance.
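For example, assuming the option is exposed like any other boolean
Redis configuration parameter:
CONFIG SET vset-force-single-threaded-execution yes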
---------
Co-authored-by: GuyAv46 <47632673+GuyAv46@users.noreply.github.com>
Co-authored-by: debing.sun <debing.sun@redis.com>
Vector Sets deserialization was not designed to resist corrupted data,
assuming that a good checksum would mean everything is fine. However,
Redis allows the user to request extra protection via a specific
configuration option.
This commit makes the implementation more resistant, at the cost of some
slowdown. It also fixes an unrelated serialization bug (with no memory
corruption effects): the worst index / distance was not serialized,
which could lower the quality of a graph after links are replaced. I'll
address the serialization issues in a new PR that will focus on that
aspect alone (already work in progress).
The net result is that, as long as the serialization of the worst
index/distance is missing (always, for now), loading vector sets is 100%
slower, that is, 2 times the loading time we had before. Once that info
is added, loading will be just 10/15% slower, that is, only the cost of
the new sanity checks.
It may be worth exporting to modules whether the advanced sanity checks
are needed or not. Anyway, most of the slowdown in this patch comes from
having to recompute the worst neighbor, since the detection of
duplicated and non reciprocal links was heavily optimized with
probabilistic algorithms.
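To give an idea of the kind of sanity check added (the function below is
a hypothetical sketch, not the actual implementation): indexes read from
the payload are validated before use, instead of being trusted just
because the checksum matched.

```c
#include <stdint.h>

/* Hypothetical sketch of a deserialization sanity check: a neighbor
 * index read from the payload is rejected if it points outside the
 * loaded nodes, or links a node to itself. */
static int neighbor_is_sane(uint64_t neighbor_idx, uint64_t num_nodes,
                            uint64_t node_idx) {
    if (neighbor_idx >= num_nodes) return 0; /* Out of range: corrupted. */
    if (neighbor_idx == node_idx) return 0;  /* Self link: corrupted. */
    return 1;
}
```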
---------
Co-authored-by: debing.sun <debing.sun@redis.com>
Hi, as described, this implements WITHATTRIBS, a feature requested by a
few users, and indeed needed.
This was first requested by @rowantrollope, but I was not sure how to
make it work with RESP2 and RESP3 in a clean way; hopefully that's it.
The patch includes tests and documentation updates.
Hi all, this PR fixes two things:
1. An assertion that prevented RDB loading from recovering if there was
a quantization type mismatch (with regression test).
2. Two code paths that just returned NULL without proper cleanup during
RDB loading.
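The cleanup issue is the classic pattern sketched below (purely
illustrative, not the actual vector sets code): on a partial failure,
everything allocated so far must be released before returning NULL.

```c
#include <stdlib.h>

/* Hypothetical sketch: error handling during a load. free(NULL) is a
 * no-op, so a single error exit can release whatever was allocated. */
static void *load_object(void) {
    char *buf = NULL, *aux = NULL;
    if ((buf = malloc(128)) == NULL) goto err;
    if ((aux = malloc(64)) == NULL) goto err;
    /* ... populate buf/aux from the serialized payload;
     * on any parse error, also jump to err ... */
    free(aux);
    return buf;
err:
    free(buf);
    free(aux);
    return NULL;
}
```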
This PR adds support for REDISMODULE_OPTIONS_HANDLE_IO_ERRORS, plus
tests for short reads and corrupted RESTORE payloads.
Please note that I also removed the comment about async loading support,
since we should already be covered: there is no manipulation of global
data structures in Vector Sets, except for the unique ID used to create
new vector sets with different IDs.
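For context, this is roughly the opt-in pattern involved (a minimal
sketch using the public module API; the load logic and the callback name
are illustrative):

```c
#include "redismodule.h"

int RedisModule_OnLoad(RedisModuleCtx *ctx, RedisModuleString **argv,
                       int argc) {
    REDISMODULE_NOT_USED(argv);
    REDISMODULE_NOT_USED(argc);
    if (RedisModule_Init(ctx, "vectorset", 1, REDISMODULE_APIVER_1) ==
        REDISMODULE_ERR) return REDISMODULE_ERR;
    /* Tell Redis that this module checks I/O errors itself, so short
     * reads can be handled gracefully instead of aborting the server. */
    RedisModule_SetModuleOptions(ctx, REDISMODULE_OPTIONS_HANDLE_IO_ERRORS);
    return REDISMODULE_OK;
}

/* In the rdb_load callback, every read must then be verified: */
void *VectorSetRdbLoad(RedisModuleIO *io, int encver) {
    REDISMODULE_NOT_USED(encver);
    uint64_t count = RedisModule_LoadUnsigned(io);
    if (RedisModule_IsIOError(io)) return NULL; /* Short read: bail out. */
    /* ... load 'count' elements, checking after each read ... */
    (void)count;
    return NULL; /* Sketch only. */
}
```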
This PR replaces cJSON with a home-made parser designed for the kind of
access pattern the FILTER option of VSIM performs on JSON objects. The
main points here are:
* cJSON forces us to parse the whole JSON and create a graph of cJSON
objects, and then we need to seek in O(N) to find the right field.
* The cJSON object associated with the value is not in the same format
the expr.c virtual machine uses, so we needed a conversion function
doing more allocations and work.
* Right now we only support top level fields in the JSON object, so a
full parser is not needed.
With all these things in mind, and after carefully profiling the old
code, I realized that a specialized parser, able to process JSON in a
zero-allocation fashion and to actually parse only the value associated
with our key, would be much more efficient. Moreover, after this change,
the dependencies of Vector Sets on external code drop to zero, and the
code is 3000 lines smaller. The new line count, as measured with LOC, is
4200, making Vector Sets easily the smallest full featured
implementation of a vector store available.
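To give the idea, here is a minimal sketch of such a skipping parser
(illustrative only, and simpler than the real thing: for instance it
does not handle escaped characters inside keys): it scans just the top
level, compares keys in place, and skips values by balancing brackets,
without a single allocation.

```c
#include <string.h>

/* Skip JSON whitespace. */
static const char *skip_ws(const char *p) {
    while (*p == ' ' || *p == '\t' || *p == '\n' || *p == '\r') p++;
    return p;
}

/* Skip a JSON string: p points at the opening '"'. Returns the
 * position right after the closing '"'. */
static const char *skip_string(const char *p) {
    p++;
    while (*p && *p != '"') {
        if (*p == '\\' && p[1]) p++; /* Jump over escaped characters. */
        p++;
    }
    return *p ? p+1 : p;
}

/* Skip any JSON value without materializing it: containers are
 * skipped by balancing [] and {}, strings as above, scalars by
 * advancing up to the next comma or closing brace. */
static const char *skip_value(const char *p) {
    p = skip_ws(p);
    if (*p == '"') return skip_string(p);
    if (*p == '{' || *p == '[') {
        int depth = 0;
        while (*p) {
            if (*p == '"') { p = skip_string(p); continue; }
            if (*p == '{' || *p == '[') depth++;
            if (*p == '}' || *p == ']') {
                if (--depth == 0) return p+1;
            }
            p++;
        }
        return p;
    }
    while (*p && *p != ',' && *p != '}') p++; /* Numbers, true/false/null. */
    return p;
}

/* Return a pointer to the value of 'key' at the top level of the
 * JSON object, or NULL if not found. Zero allocations performed. */
static const char *find_field(const char *json, const char *key) {
    size_t klen = strlen(key);
    const char *p = skip_ws(json);
    if (*p != '{') return NULL;
    p = skip_ws(p+1);
    while (*p == '"') {
        const char *kstart = p+1;
        const char *kend = skip_string(p)-1; /* At the closing '"'. */
        p = skip_ws(kend+1);
        if (*p != ':') return NULL;
        p = skip_ws(p+1);
        if ((size_t)(kend-kstart) == klen && !memcmp(kstart, key, klen))
            return p; /* Found: the value starts here. */
        p = skip_ws(skip_value(p));
        if (*p == ',') p = skip_ws(p+1);
    }
    return NULL;
}
```

Only the value returned by a function like find_field() then needs to be
actually parsed into the representation the expression evaluator uses.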
# Speedup achieved
In a dataset of 1 million elements whose JSON objects have 30 fields,
the following query shows a 3.5x speedup:
vsim vectors:million ele ele943903 FILTER ".field29 > 1000 and .field15 < 50"
Please note that we get **3.5x speedup** in the VSIM command itself.
This means that the actual JSON parsing speedup is significantly greater
than that. However, in Redis land, under my past kingdom of many years
ago, the rule was that an improvement would produce speedups that are
*user facing*. This PR definitely qualifies.
What is interesting is that even with a JSON object containing a single
element, the speedup is about 70%, so we are faster even in the worst
case.
# Further info
Note that the new skipping parser may happily process JSON objects that
are not perfectly valid, as long as they look valid from the POV of
balancing [] and {} and so forth. This should not be an issue. Anyway,
invalid JSON produces unpredictable results (the element is skipped
altogether even if it would pass the filter).
Please feel free to ask me anything about the new implementation before
merging.
Used the Augment agent to fix a given commands.json file.
Agent summary:
I've successfully fixed the `vectorset-commands.json` file to make it
coherent with the standard command files under `src/commands`. Here's a
summary of the changes I made:
1. Changed `type: "enum"` with `enum: ["TOKEN"]` to use the standard
format (see the example after this list):
   - For fixed tokens: `token: "TOKEN"` and `type: "pure-token"`
   - For multiple choice options: `type: "oneof"` with nested arguments
2. Added missing fields to each command:
   - `arity`: the number of arguments the command takes
   - `function`: the C function that implements the command
   - `command_flags`: flags that describe the command's behavior
3. Reorganized the structure to match the standard format:
   - Moved `group` and `since` to be consistent with other command files
   - Properly structured the arguments with the correct types
4. Fixed the `multiple` attribute for parameters that can accept
multiple values
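For reference, a fixed-token argument in the standard format looks
roughly like this (a hypothetical fragment, not copied from the actual
file):

```json
{
    "name": "withscores",
    "token": "WITHSCORES",
    "type": "pure-token",
    "optional": true
}
```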
These changes make the vectorset-commands.json file consistent with the
standard command files under src/commands, while still keeping it as a
single file containing all the vector set commands as requested.
The vector-sets module is a part of Redis Core and is available by default,
just like any other data type in Redis.
As a result, when building Redis from the source, the vector-sets module
is also compiled as part of the Redis binary and loaded at server start-up.
This new data type, added as a preview, currently doesn't support all
the capabilities available in Redis, such as:
* 32-bit OS support
* C99 builds
* Short reads (which might end with a memory leak)
* AOF rewrite
* Defrag
This PR introduces the installation of the `musl`-based version of Rust,
in order to support Alpine-based runtime environments (Rust is used by
[RedisJSON](https://github.com/RedisJSON/RedisJSON)).
A new BUILD_WITH_MODULES flag was added to the Makefile to control
building the module directory.
The new module directory includes a general Makefile that iterates over
each module, fetches a specific version, and builds it.
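For example, a build including the modules could then be launched like
the following (assuming the usual yes/no Makefile flag convention):
make BUILD_WITH_MODULES=yes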
Co-authored-by: YaacovHazan <yaacov.hazan@redislabs.com>