Commit Graph

4016 Commits

Author SHA1 Message Date
antirez 0f72257049 Geo: fix GEOHASH return value for consistency.
The same issue observed in #3551 by gnethercutt is now also fixed for
GEOHASH, matching what the original PR did.
2016-12-20 10:20:13 +01:00
antirez 913070a9e8 Geo: fix edge case return values for uniformity.
There were two cases outlined in issue #3512 and PR #3551 where
the Geo API returned unexpected results: empty strings where NULL
replies were expected, or a single null reply where an array was
expected. This violates the Redis principle that replies for existing
and non-existing keys or elements should be indistinguishable in structure.

This is technically an API breakage, so it will be merged only into 4.0
and listed in the changelog among the breaking changes, even if it is
not very likely that actual code will be affected, hopefully: with the
past behavior callers basically had to account for *both* possibilities,
while the new behavior is always one of the two, in a consistent way.
2016-12-20 10:12:38 +01:00
Justin Carvalho 7c64e88963 Fix missing brackets around encoding variable in ZIP_DECODE_LENGTH macro 2016-12-19 17:37:41 -05:00
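The one-line fix above is an instance of a general C macro-hygiene rule: arguments used inside a macro body must be wrapped in parentheses, otherwise an expression passed as the argument can expand with the wrong precedence. A minimal illustrative sketch (hypothetical macros, not the actual ZIP_DECODE_LENGTH):

    #include <stdio.h>

    /* Hypothetical macros, for illustration only: the "bad" variant drops
     * the parentheses around its argument, the "good" one keeps them. */
    #define DOUBLE_BAD(x)  (x * 2)
    #define DOUBLE_GOOD(x) ((x) * 2)

    int main(void) {
        int a = 1, b = 2;
        printf("%d\n", DOUBLE_BAD(a + b));  /* expands to (a + b * 2) = 5 */
        printf("%d\n", DOUBLE_GOOD(a + b)); /* expands to ((a + b) * 2) = 6 */
        return 0;
    }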
antirez 074383f850 Remove first version of ASCII wave, later discarded. 2016-12-19 16:45:18 +01:00
antirez 06bfeb482d Only show Redis logo if logging to stdout / TTY.
You can still force the logo in the normal logs.
For motivations, check issue #3112. For me the reason is that the logo
is actually nice to have in interactive sessions, but inside the logs it
kind of loses its usefulness, except for letting users recognize
restarts easily: for this reason the new startup sequence shows a
one-liner ASCII "wave" so that there is still a bit of visual clue.

Startup logging was modified in order to log events in more obvious
ways, and to log more events. Also certain important pieces of
information are now easier to parse/grep since they are printed in
field=value style.

The option --always-show-logo in redis.conf was added, defaulting to no.
2016-12-19 16:41:47 +01:00
antirez 90a6f7fc98 adjustOpenFilesLimit() comment made hopefully more clear. 2016-12-19 08:53:29 +01:00
Salvatore Sanfilippo 2988889db1 Merge pull request #3603 from oranagra/adjustOpenFilesLimit_overflow
fix unsigned int overflow in adjustOpenFilesLimit
2016-12-19 08:48:44 +01:00
Salvatore Sanfilippo ce9e36eb01 Merge pull request #3605 from hylepo/unstable
Fixing typo in the usage of redis-benchmark
2016-12-19 08:20:01 +01:00
Salvatore Sanfilippo 6cf1a325d6 Merge pull request #3643 from andyli028/unstable
Modify MIN->MAX
2016-12-19 08:19:10 +01:00
antirez 8e390a62ad Hopefully improve code comments for issue #3616.
This commit also contains other changes in order to conform the code to
the Redis core style, specifically: 80 chars max per line, and short
conditionals kept on a single line:

    if (that) do_this();
2016-12-16 17:48:38 +01:00
Salvatore Sanfilippo ca4ca5073e Merge pull request #3616 from oranagra/stop_aofrw_before_rdbload
CoW improvement, stop AOFRW before flushing and parsing slave RDB
2016-12-16 17:43:20 +01:00
Salvatore Sanfilippo 151af73118 Merge pull request #3661 from itamarhaber/module-doc2
Corrects a couple of omissions in the modules docs
2016-12-16 16:53:13 +01:00
antirez 87538cb7fe Switch PFCOUNT to LogLog-Beta algorithm.
The new algorithm provides the same speed with a smaller error for
cardinalities in the range 0-100k. Before switching, the new and old
algorithm behavior was studied in detail in the context of
issue #3677. You can find a few graphs and motivations there.
2016-12-16 11:07:30 +01:00
antirez 0224be8811 Use llroundl() before converting loglog-beta output to integer.
Otherwise for small cardinalities the algorithm will output something
like, for example, 4.99 for a cardinality of 5, which would be converted
to 4, producing a huge relative error.
2016-12-16 11:07:30 +01:00
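The rounding issue described above is easy to reproduce in isolation: a plain cast truncates toward zero, while llroundl() rounds to the nearest integer. A minimal sketch (the 4.99 value is just the example from the message, not actual estimator output):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        long double estimate = 4.99L;                /* estimator output near the true value 5 */
        long long truncated = (long long) estimate;  /* 4: a 20% relative error */
        long long rounded   = llroundl(estimate);    /* 5: nearest integer */
        printf("truncated=%lld rounded=%lld\n", truncated, rounded);
        return 0;
    }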
Harish Murthy c55e3fbae5 LogLog-Beta Algorithm support within HLL
Config option to use LogLog-Beta Algorithm for Cardinality
2016-12-16 11:07:30 +01:00
Salvatore Sanfilippo 5ad2a94a16 Merge pull request #3686 from dvirsky/fix_lowlevel_zrange
fixed stop condition in RM_ZsetRangeNext and RM_ZsetRangePrev
2016-12-16 09:20:47 +01:00
antirez d634c36253 ziplist.c explanation of format improved a bit. 2016-12-16 09:04:57 +01:00
antirez ac61f90625 DEBUG: new "ziplist" subcommand added. Dumps a ziplist on stdout.
The commit improves ziplistRepr() and adds a new debugging subcommand so
that we can trigger the dump directly from the Redis API.
This command capability was used while investigating issue #3684.
2016-12-16 09:02:50 +01:00
Dvir Volk 7f9b9512b8 fixed stop condition in RM_ZsetRangeNext and RM_ZsetRangePrev 2016-12-15 00:07:20 +02:00
antirez b53e73e159 MIGRATE: Remove upfront ttl initialization.
After the fix for #3673 the ttl var is always initialized inside the
loop itself, so the early initialization is not needed.

Variables declaration also moved to a more local scope.
2016-12-14 12:43:55 +01:00
Salvatore Sanfilippo c9f0456d81 Merge pull request #3673 from badboy/reset-ttl-on-migrating
Reset the ttl for additional keys
2016-12-14 12:41:00 +01:00
antirez b6f871cf42 Writable slaves expires: fix leak in key tracking.
We need to use a dictionary type that frees the key, since we copy the
keys into the dictionary we use to track expires created on the slave
side.
2016-12-13 16:27:13 +01:00
antirez d1adc85aa6 INFO: show num of slave-expires keys tracked. 2016-12-13 16:02:29 +01:00
antirez 5b9ba26403 Fix misspelled "created" in expire.c 2016-12-13 12:21:15 +01:00
antirez 04542cff92 Replication: fix the infamous key leakage of writable slaves + EXPIRE.
BACKGROUND AND USE CASE

Redis slaves are normally read only, however they support a "writable"
mode which is very handy when scaling reads on slaves that actually
need write operations in order to access data. For instance imagine
having slaves replicating certain Sets keys from the master. When
accessing the data on the slave, we want to perform intersections between
such Sets values. However we don't want to compute the intersection each
time: caching the intersection for some time is often a good idea.

To do so, it is possible to set up a slave as a writable slave, and
perform the intersection on the slave side, perhaps setting a TTL on the
resulting key so that it will expire after some time.

THE BUG

Problem: in order to have consistent replication, expiring keys in
Redis replication is up to the master, which synthesizes DEL operations
to send in the replication stream. However slaves logically expire keys
by hiding them from read attempts by clients, so that if the master did
not promptly send a DEL, the client still sees logically expired keys
as non-existing.

Because slaves don't actively expire keys by actually evicting them, but
just mask them from the point of view of read operations, if a key is
created in a writable slave and an expire is set, the key will be leaked
forever:

1. No DEL will be received from the master, which does not know about
such a key at all.

2. No eviction will be performed by the slave, since eviction must stay
disabled there: expiration is up to the master, otherwise consistency of
data is lost.

THE FIX

In order to fix the problem, the slave should be able to tag, in some
way, keys that were created on the slave side and have an expire set.

My solution involves a single additional dictionary, created by the
writable slave only if needed. The dictionary is keyed by the key names
that we need to track: all the keys that are set with an expire directly
by a client writing to the slave are tracked.

The value in the dictionary is a bitmap of all the DBs where such a key
name needs to be tracked, so that we can use a single dictionary to track
keys in all the DBs used by the slave (this actually limits the solution
to the first 64 DBs, but the Redis default is to use 16 DBs).

This solution has a small complexity and CPU cost, which is actually
zero when the feature is not used. The slave-side eviction is
encapsulated in code which is not coupled with the rest of the Redis
core, except for the hook used to track the keys.

TODO

I'm doing the first smoke tests to see if the feature works as expected:
so far so good. Unit tests should be added before merging into the
4.0 branch.
2016-12-13 10:59:54 +01:00
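A minimal sketch of the DB bitmap idea described above, with hypothetical helper names (the real code stores this value in a Redis dict keyed by key name): one bit per DB in a 64-bit integer, which is why the approach covers only the first 64 DBs.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helpers, not the actual Redis functions. The value
     * associated with a tracked key name is a bitmap with one bit per DB. */
    static uint64_t trackKeyInDb(uint64_t dbs, int dbid) {
        if (dbid >= 64) return dbs;      /* outside the supported range */
        return dbs | (1ULL << dbid);
    }

    static int keyTrackedInDb(uint64_t dbs, int dbid) {
        return dbid < 64 && (dbs & (1ULL << dbid)) != 0;
    }

    int main(void) {
        uint64_t dbs = 0;                /* value for one key name */
        dbs = trackKeyInDb(dbs, 0);      /* expire set by a client in DB 0... */
        dbs = trackKeyInDb(dbs, 9);      /* ...and in DB 9 */
        printf("db0=%d db1=%d db9=%d\n",
            keyTrackedInDb(dbs, 0), keyTrackedInDb(dbs, 1), keyTrackedInDb(dbs, 9));
        return 0;
    }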
Yossi Gottlieb b6ab4d04b6 Fix redis-cli rare crash.
This happens if the server (mysteriously) returns an unexpected response
to the COMMAND command.
2016-12-12 20:18:40 +02:00
Jan-Erik Rediger 2a32f0371e Reset the ttl for additional keys
Before, if a previous key had a TTL set but the current one didn't, the
TTL was reused, resulting in wrong expirations being set.

This behaviour was experienced when `MigrateDefaultPipeline` in
redis-trib was set to >1.

Fixes #3655
2016-12-08 14:27:21 +01:00
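The bug pattern behind this fix is general: state that is computed per key must be reset at the top of each loop iteration, otherwise the previous key's value leaks into the next one. A minimal sketch with hypothetical names, not the actual MIGRATE code:

    #include <stdio.h>

    /* Hypothetical per-key TTL lookup: only the first key has a TTL. */
    static long long lookupExpire(int key_index) {
        return key_index == 0 ? 5000 : -1;   /* -1 means "no TTL" */
    }

    int main(void) {
        for (int j = 0; j < 3; j++) {
            long long ttl = 0;               /* reset for every key: this is the fix */
            long long expire = lookupExpire(j);
            if (expire != -1) ttl = expire;
            printf("key %d -> ttl %lld\n", j, ttl);
        }
        return 0;
    }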
wangshaonan 2d91fce970 Add '\n' to MEMORY DOCTOR command output message when num_reports
is 0 or empty is 1
2016-12-06 03:11:27 +00:00
itamar 94fe98666c Corrects a couple of omissions in the modules docs 2016-12-05 18:34:38 +02:00
antirez 16cce320c4 Modules: types doc updated to new API. 2016-12-05 14:40:51 +01:00
antirez 37b6e16ae1 Modules: API doc updated (auto generated). 2016-12-05 14:40:43 +01:00
antirez 3c85a88888 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2016-12-05 14:17:11 +01:00
antirez 001138aec3 Geo: fix computation of bounding box.
A bug was reported in the context of issue #3631. The root cause of the
bug was that certain neighbor boxes were zeroed after the "inside the
bounding box or not" check, simply because the bounding box computation
function was wrong.

A few debugging messages were enhanced and moved to other parts of the
code. A check to avoid steps=0 was added, but it is unrelated to this
issue and I did not verify it was an actual bug in practice.
2016-12-05 14:02:32 +01:00
Itamar Haber 5dc4fe1529 Verify pairs are provided after subcommands
Fixes https://github.com/antirez/redis/issues/3639
2016-12-02 18:19:36 +02:00
antirez 434e6b2da3 PSYNC2: Do not accept WAIT in slave instances.
No longer makes sense since writable slaves only do local writes now:
writes are no longer passed to sub-slaves in the stream.
2016-12-02 10:21:20 +01:00
Chris Lamb 6eb0c52d4c src/rdb.c: Correct "whenver" -> "whenever" typo. 2016-12-01 13:16:30 +01:00
Yossi Gottlieb 5f5b4f1508 Fix typo in RedisModuleTypeMethods declaration. 2016-11-30 22:05:59 +02:00
Salvatore Sanfilippo 3c4fe59e09 Merge pull request #3648 from dvirsky/fix_reply_crash
fix memory corruption on RM_FreeCallReply
2016-11-30 11:21:10 +01:00
antirez 71e8d15e49 Modules: change type registration API to use a struct of methods. 2016-11-30 11:14:01 +01:00
Dvir Volk 8521cde570 fix memory corruption on RM_FreeCallReply 2016-11-30 11:49:49 +02:00
antirez 6eb720ff2d PSYNC2: Minor memory leak reading -NOMASTERLINK master reply fixed. 2016-11-29 10:25:00 +01:00
andyli 8abf9729f0 Modify MIN->MAX 2016-11-29 16:34:41 +08:00
antirez eab865a0a1 PSYNC2: stop sending newlines to sub-slaves when master is down.
This actually includes two changes:

1) No newlines are sent to keep the master-slave link up when the
upstream master is down. Doing this is dangerous because the sub-slave
has often received the replication protocol up to half a command, so it
can't receive newlines without desyncing the replication link, even with
the code in place to cancel out the bytes that PSYNC2 was using.
Moreover this is probably also not needed/sane, because the slave can
keep serving requests anyway, and because if it's configured not to
serve stale data, it's actually a good idea to break the link.

2) When a +CONTINUE with a different ID is received, we now break
connection with the sub-slaves: they need to be notified as well. This
was part of the original specification but for some reason it was not
implemented in the code, and was later found as a PSYNC2 bug during
integration testing.
2016-11-28 17:54:04 +01:00
antirez 790310d894 Better protocol errors logging. 2016-11-25 10:55:16 +01:00
antirez e09e31b12e PSYNC2: on transient error jump to error, not write_error. 2016-11-24 15:48:18 +01:00
antirez 1f55170b9c Modules: fix client blocking calls access to invalid struct field.
We already have a reference to the client pointer; there is no need to
access the already freed structure.

Close #3634.
2016-11-24 11:05:19 +01:00
antirez 5b7d42fff3 PSYNC2: bugfixing pre release.
1. Master replication offset was cleared after switching configuration
to some other slave, since it was assumed you can't PSYNC after a
switch. This is not the case anymore, and when we successfully PSYNC we
need to keep our offset untouched.

2. Secondary replication ID was not reset to "000..." pattern at
startup.

3. A master in an error state, replying -LOADING or other transient
errors, forced the slave to discard the cached master and perform a full
resync. This is now fixed.

4. Better logging of what's happening on failed PSYNCs.
2016-11-23 17:36:45 +01:00
Salvatore Sanfilippo 5b83fa482c Merge pull request #3612 from deep011/unstable
fix a possible bug for 'replconf getack'
2016-11-18 10:45:09 +01:00
antirez 8fb3ad2444 Merge branch 'psync2' into unstable 2016-11-17 09:37:03 +01:00
oranagra e3a61950a2 when a slave loads an RDB, stop an AOFRW fork before flushing the db and parsing the rdb file, to avoid a CoW disaster. 2016-11-16 21:30:59 +02:00
antirez cfdb3a2214 Cluster: handle zero bytes at the end of nodes.conf. 2016-11-16 14:13:18 +01:00
deep011 13a92a5bb1 fix a possible bug for 'replconf getack' 2016-11-16 11:04:33 +08:00
hylepo dbb6cb442a Update redis-benchmark.c
Fixing typo in the usage of redis-benchmark
2016-11-11 10:33:48 +08:00
oranagra a1a07225b3 fix unsigned int overflow in adjustOpenFilesLimit 2016-11-10 16:59:52 +02:00
antirez 28c96d73b2 PSYNC2: Save replication ID/offset on RDB file.
This means that stopping a slave and restarting it will still make it
able to PSYNC with the master. Moreover the master itself will retain
its ID/offset, in case it gets turned into a slave, or in case a slave
tries to PSYNC with it with an exactly up-to-date offset (otherwise there
is no backlog).

This change was possible thanks to PSYNC v2 that makes saving the current
replication state much simpler.
2016-11-10 12:35:29 +01:00
antirez 4e5e366ed2 PSYNC2: Wrap debugging code with if(0) 2016-11-09 15:37:15 +01:00
antirez 2669fb8364 PSYNC2: different improvements to Redis replication.
The gist of the changes is that now, partial resynchronizations between
slaves and masters (without the need of a full resync with RDB transfer
and so forth), work in a number of cases where they were impossible
in the past. For instance:

1. When a slave is promoted to master, the slaves of the old master can
partially resynchronize with the new master.

2. Chained slaves (slaves of slaves) can be moved to replicate from other
slaves or the master itself, without requiring a full resync.

3. The master itself, after being turned into a slave, is able to
partially resynchronize with the new master, when it joins replication
again.

In order to obtain this, the following main changes were made:

* Slaves also take a replication backlog, not just masters.

* Same stream replication for all the slaves and sub-slaves. The
replication stream is identical from the top level master to its slaves
and is also the same from the slaves to their sub-slaves and so forth.
This means that if a slave is later promoted to master, it has the
same replication backlog, and can partially resynchronize with its
slaves (that were previously slaves of the old master).

* A given replication history is no longer identified by the `runid` of
a Redis node. There is instead a `replication ID` which changes every
time the instance has a new history no longer coherent with the past
one. So, for example, slaves publish the same replication history as
their master, however when they are turned into masters, they publish
a new replication ID, but still remember the old ID, so that they are
able to partially resynchronize with slaves of the old master (up to a
given offset).

* The replication protocol was slightly modified so that a new extended
+CONTINUE reply from the master is able to inform the slave of a
replication ID change.

* REPLCONF CAPA is used in order to notify masters that a slave is able
to understand the new +CONTINUE reply.

* The RDB file was extended with an auxiliary field that is able to
select a given DB after loading in the slave, so that the slave can
continue receiving the replication stream from the point it was
disconnected without requiring the master to insert "SELECT" statements.
This is useful in order to guarantee the "same stream" property, because
the slave must be able to accumulate an identical backlog.

* Slave pings to sub-slaves are now sent in a special form, when the
top-level master is disconnected, in order not to interfere with the
replication stream. We just use out-of-band "\n" bytes as in other parts
of the Redis protocol.

An old design document is available here:

https://gist.github.com/antirez/ae068f95c0d084891305

However the implementation is not identical to the description because
during the work to implement it, different changes were needed in order
to make things work well.
2016-11-09 15:37:15 +01:00
antirez 18d32c7e1c redis-cli typo fixed: perferences -> preferences.
Thanks to @qiaodaimadelaowang for signaling the issue.
Close #3585.
2016-11-02 15:15:49 +01:00
Salvatore Sanfilippo fa2dc4b60c Merge pull request #3514 from charsyam/feature/simple-refactoring
Simple change just using slaves instead of server.slaves
2016-11-02 11:04:52 +01:00
Salvatore Sanfilippo 25811bc983 Merge pull request #3547 from yyoshiki41/refactor/redis-trib
Refactor redis-trib.rb
2016-11-02 11:02:32 +01:00
Salvatore Sanfilippo b3e707339d Merge pull request #3575 from deep011/unstable
fix a bug for quicklistDup() function
2016-11-02 11:00:24 +01:00
Dvir Volk ec8fd6e5e4 fixed sizeof in allocating io RedisModuleCtx* 2016-10-31 18:48:16 +02:00
Salvatore Sanfilippo 77b1abf185 Merge pull request #3565 from sunheehnus/bitfield-fix-highest_write_offset
bitops.c/bitfieldCommand: update higest_write_offset with check
2016-10-31 15:40:46 +01:00
Salvatore Sanfilippo f48ca5581e Merge pull request #3573 from jybaek/module-io-context
Add missing fclose()
2016-10-31 15:36:38 +01:00
Guy Benoish 8b070b5d12 Fixed wrong sizeof(client) in object.c 2016-10-31 15:08:17 +02:00
deep 7f1bb22ef3 fix a bug for quicklistDup() function 2016-10-28 19:47:29 +08:00
jybaek a06d59b583 Add missing fclose() 2016-10-28 10:42:54 +09:00
sunhe 949a274817 bitops.c/bitfieldCommand: update higest_write_offset with check 2016-10-22 01:54:46 +08:00
antirez f39e7d4d7e Remove "Hey!" warning... 2016-10-19 10:43:40 +02:00
antirez a9f50a389b Better target MacOS on __atomic macros conditional compilation. 2016-10-17 16:41:39 +02:00
Pedro Melo 2000abc86f Fixes compilation on MacOS 10.8.5, Clang tags/Apple/clang-421.0.57
Redis fails to compile on MacOS 10.8.5 with Clang 4, version 421.0.57
(based on LLVM 3.1svn).

When compiling zmalloc.c, we get these warnings:

        CC zmalloc.o
    zmalloc.c:109:5: warning: implicit declaration of function '__atomic_add_fetch' is invalid in C99 [-Wimplicit-function-declaration]
        update_zmalloc_stat_alloc(zmalloc_size(ptr));
        ^
    zmalloc.c:75:9: note: expanded from macro 'update_zmalloc_stat_alloc'
            atomicIncr(used_memory,__n,used_memory_mutex); \
            ^
    ./atomicvar.h:57:37: note: expanded from macro 'atomicIncr'
    #define atomicIncr(var,count,mutex) __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)
                                        ^
    zmalloc.c:145:5: warning: implicit declaration of function '__atomic_sub_fetch' is invalid in C99 [-Wimplicit-function-declaration]
        update_zmalloc_stat_free(oldsize);
        ^
    zmalloc.c:85:9: note: expanded from macro 'update_zmalloc_stat_free'
            atomicDecr(used_memory,__n,used_memory_mutex); \
            ^
    ./atomicvar.h:58:37: note: expanded from macro 'atomicDecr'
    #define atomicDecr(var,count,mutex) __atomic_sub_fetch(&var,(count),__ATOMIC_RELAXED)
                                        ^
    zmalloc.c:205:9: warning: implicit declaration of function '__atomic_load_n' is invalid in C99 [-Wimplicit-function-declaration]
            atomicGet(used_memory,um,used_memory_mutex);
            ^
    ./atomicvar.h:60:14: note: expanded from macro 'atomicGet'
        dstvar = __atomic_load_n(&var,__ATOMIC_RELAXED); \
                 ^
    3 warnings generated.

Also on lazyfree.c:

        CC lazyfree.o
    lazyfree.c:68:13: warning: implicit declaration of function '__atomic_add_fetch' is invalid in C99 [-Wimplicit-function-declaration]
                atomicIncr(lazyfree_objects,1,lazyfree_objects_mutex);
                ^
    ./atomicvar.h:57:37: note: expanded from macro 'atomicIncr'
    #define atomicIncr(var,count,mutex) __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)
                                        ^
    lazyfree.c:111:5: warning: implicit declaration of function '__atomic_sub_fetch' is invalid in C99 [-Wimplicit-function-declaration]
        atomicDecr(lazyfree_objects,1,lazyfree_objects_mutex);
        ^
    ./atomicvar.h:58:37: note: expanded from macro 'atomicDecr'
    #define atomicDecr(var,count,mutex) __atomic_sub_fetch(&var,(count),__ATOMIC_RELAXED)
                                        ^
    2 warnings generated.

Then in the linking stage:

        LINK redis-server
    Undefined symbols for architecture x86_64:
      "___atomic_add_fetch", referenced from:
          _zmalloc in zmalloc.o
          _zcalloc in zmalloc.o
          _zrealloc in zmalloc.o
          _dbAsyncDelete in lazyfree.o
          _emptyDbAsync in lazyfree.o
          _slotToKeyFlushAsync in lazyfree.o
      "___atomic_load_n", referenced from:
          _zmalloc_used_memory in zmalloc.o
          _zmalloc_get_fragmentation_ratio in zmalloc.o
      "___atomic_sub_fetch", referenced from:
          _zrealloc in zmalloc.o
          _zfree in zmalloc.o
          _lazyfreeFreeObjectFromBioThread in lazyfree.o
          _lazyfreeFreeDatabaseFromBioThread in lazyfree.o
          _lazyfreeFreeSlotsMapFromBioThread in lazyfree.o
    ld: symbol(s) not found for architecture x86_64
    clang: error: linker command failed with exit code 1 (use -v to see invocation)
    make[1]: *** [redis-server] Error 1
    make: *** [all] Error 2

With this patch, the compilation is successful, with no warnings.

Running `make test` we get an almost clean bill of health. Tests pass
with one exception:

    [err]: Check for memory leaks (pid 52793) in tests/unit/dump.tcl
    [err]: Check for memory leaks (pid 53103) in tests/unit/auth.tcl
    [err]: Check for memory leaks (pid 53117) in tests/unit/auth.tcl
    [err]: Check for memory leaks (pid 53131) in tests/unit/protocol.tcl
    [err]: Check for memory leaks (pid 53145) in tests/unit/protocol.tcl
    [ok]: Check for memory leaks (pid 53160)
    [err]: Check for memory leaks (pid 53175) in tests/unit/scan.tcl
    [ok]: Check for memory leaks (pid 53189)
    [err]: Check for memory leaks (pid 53221) in tests/unit/type/incr.tcl
    .
    .
    .

Full debug log (289MB, uncompressed) available at
https://dl.dropboxusercontent.com/u/75548/logs/redis-debug-log-macos-10.8.5.log.xz

Most if not all of the memory leak tests fail. They are the only ones
that fail, and I believe they are not related to this patch: the memory
leak detector just does not work properly on 10.8.5.

Signed-off-by: Pedro Melo <melo@simplicidade.org>
2016-10-17 14:58:23 +01:00
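The failure above comes down to the __atomic builtins not existing on older compilers, so the atomic macros need a conditional-compilation fallback. An illustrative sketch with hypothetical macro names (not the actual atomicvar.h), assuming a mutex-based fallback when the builtins are missing:

    #include <pthread.h>

    /* Use the GCC/Clang __atomic builtins when the compiler advertises them,
     * otherwise fall back to a mutex-protected increment. Hypothetical macro,
     * shown only to illustrate the kind of guard involved. */
    #if defined(__ATOMIC_RELAXED)
    #define myAtomicIncr(var,count,mutex) __atomic_add_fetch(&var,(count),__ATOMIC_RELAXED)
    #else
    #define myAtomicIncr(var,count,mutex) do { \
        pthread_mutex_lock(&mutex); \
        var += (count); \
        pthread_mutex_unlock(&mutex); \
    } while(0)
    #endif

    static long long used_memory = 0;
    static pthread_mutex_t used_memory_mutex = PTHREAD_MUTEX_INITIALIZER;

    int main(void) {
        myAtomicIncr(used_memory, 16, used_memory_mutex);
        return (int) used_memory;    /* 16 on either code path */
    }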
antirez c7a4e694ad SWAPDB command.
This new command swaps two Redis databases, so that immediately all the
clients connected to a given DB will see the data of the other DB, and
the other way around. Example:

    SWAPDB 0 1

This will swap DB 0 with DB 1. All the clients connected with DB 0 will
immediately see the new data, exactly like all the clients connected
with DB 1 will see the data that was formerly of DB 0.

MOTIVATION AND HISTORY
---

The command was recently requested by Pedro Melo, but it was suggested
in the past multiple times, and always refused by me.

The reason why it was asked: Imagine you have clients operating in DB 0.
At the same time, you create a new version of the dataset in DB 1.
When the new version of the dataset is available, you immediately want
to swap the two views, so that the clients will transparently use the
new version of the data. At the same time you'll likely destroy the
DB 1 dataset (that contains the old data) and start to build a new
version, to repeat the process.

This is an interesting pattern, but the reason why I always opposed
implementing it was that FLUSHDB was a blocking command in Redis before
the Redis 4.0 improvements. Now we have FLUSHDB ASYNC that releases the
old data in O(1) from the point of view of the client, to reclaim memory
incrementally in a different thread.

At this point, the pattern can really be supported without latency
spikes, so I'm providing this implementation for the users to comment on.
In case a very compelling argument is made against this new command, it
may be removed.

BEHAVIOR WITH BLOCKING OPERATIONS
---

If a client is blocked on a list in a given DB, after the swap it will
still be blocked on the same DB ID, since this is the most logical thing
to do: if I was blocked waiting for a push to list "foo", even after the
swap I still want an LPUSH reaching key "foo" in the same DB in order
to unblock me.

However an interesting thing happens when a client is, for instance,
blocked waiting for new elements in list "foo" of DB 0, and then DB
0 and DB 1 are swapped with SWAPDB while DB 1 happens to have a list
called "foo" containing elements. When this happens, this
implementation correctly unblocks the client.

It is possible that there are subtle corner cases that are not covered
in the implementation, but since the command is self-contained from the
POV of the implementation and the Redis core, it cannot cause anything
bad if not used.

Tests and documentation are yet to be provided.
2016-10-14 15:28:04 +02:00
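Conceptually the swap can be as cheap as exchanging the per-DB data pointers rather than copying any key, which is why it avoids the latency problem discussed above. A minimal sketch with a hypothetical, stripped-down db struct (the real redisDb has more fields, and the real command also handles blocked clients):

    #include <stdio.h>

    /* Hypothetical, stripped-down per-DB structure, for illustration only. */
    typedef struct db {
        void *dict;      /* main keyspace */
        void *expires;   /* keys with a TTL */
        int id;          /* DB index seen by clients, stays the same */
    } db;

    /* Swap the data of two DBs while keeping their numeric IDs in place, so
     * clients selected on a given ID transparently see the other dataset. */
    static void swapDbData(db *a, db *b) {
        void *tmp;
        tmp = a->dict;    a->dict = b->dict;       b->dict = tmp;
        tmp = a->expires; a->expires = b->expires; b->expires = tmp;
    }

    int main(void) {
        db d0 = { "dataset A", "expires A", 0 };
        db d1 = { "dataset B", "expires B", 1 };
        swapDbData(&d0, &d1);
        printf("DB %d now holds %s, DB %d now holds %s\n",
            d0.id, (char *)d0.dict, d1.id, (char *)d1.dict);
        return 0;
    }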
antirez a3b3ca7c21 Modules: use RedisModule_AbortBlock() in the example. 2016-10-13 17:00:45 +02:00
antirez 95c17c0cb2 Modules: AbortBlock() API implemented. 2016-10-13 16:57:40 +02:00
antirez 58601c8f7d Modules: blocking API documented. 2016-10-13 16:57:28 +02:00
antirez 553aa0e259 module.c: trim comment to 80 cols. 2016-10-13 12:48:36 +02:00
antirez 870274bea8 Example modules: remove warnings about types and not used args. 2016-10-13 12:43:18 +02:00
yyoshiki41 16f65068b0 Refactor redis-trib.rb 2016-10-10 01:13:20 +09:00
antirez 7dde8bf3ab Modules: blocking command example added. 2016-10-07 16:35:06 +02:00
antirez 34599691b3 Modules: fixes to the blocking commands API: examples now works. 2016-10-07 16:34:40 +02:00
antirez f156038db8 Modules: RM_Milliseconds() API added. 2016-10-07 16:34:19 +02:00
antirez ffb00fbcbe Modules: blocking commands WIP: API exported, a first example. 2016-10-07 13:48:14 +02:00
antirez 3aa816e61a Modules: introduce warning suppression macro for unused args. 2016-10-07 13:10:31 +02:00
antirez 3879923db8 Enable warning in example modules Makefile. 2016-10-07 13:07:13 +02:00
antirez 8fadfe52a2 Module: API to block clients with threading support.
Just a draft to align the main ideas, never executed code. Compiles.
2016-10-07 11:55:35 +02:00
antirez a5998d1fda Fix typos in GetContextFromIO API declaration. 2016-10-06 18:26:04 +02:00
antirez 799208de85 Fix name of misspelled function. 2016-10-06 17:10:47 +02:00
antirez 152c1b6802 Module: Ability to get context from IO context.
It was noted by @dvirsky that it is not possible to use string functions
when writing the AOF file. This is sometimes critical since the command
rewriting may need to be built in the context of the AOF callback, and
without access to a context, given the limited types that the AOF
production functions accept, this can be an issue.

Moreover there are other needs that we can't anticipate regarding the
ability to use Redis Modules APIs using the context in order to build
representations to emit AOF / RDB.

Because of this a new API was added that allows the user to get a
temporary context from the IO context. The context is automatically
released, if obtained, when the RDB / AOF callback returns.

Calling the function multiple times always returns the same context,
since it is invalid to have more than a single context.
2016-10-06 17:09:26 +02:00
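A minimal sketch of how a module-type AOF rewrite callback might use the new API, assuming the usual callback signature and the RedisModule_GetContextFromIO, RedisModule_CreateStringPrintf and RedisModule_EmitAOF calls from the surrounding commits; MyTypeAofRewrite and MYTYPE.SET are hypothetical names:

    #include "redismodule.h"

    /* Hypothetical AOF rewrite callback for a module type that stores a
     * counter. The IO-derived context makes it possible to build
     * RedisModuleString objects while emitting AOF commands. */
    void MyTypeAofRewrite(RedisModuleIO *aof, RedisModuleString *key, void *value) {
        long long counter = *(long long *)value;
        /* Temporary context tied to the IO context (see the note above
         * about its lifetime). */
        RedisModuleCtx *ctx = RedisModule_GetContextFromIO(aof);
        RedisModuleString *val = RedisModule_CreateStringPrintf(ctx, "%lld", counter);
        RedisModule_EmitAOF(aof, "MYTYPE.SET", "ss", key, val);
    }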
antirez 72279e3ea4 Copyright notice added to module.c. 2016-10-06 08:48:21 +02:00
antirez 3dc84c5300 Modules: API to save/load single precision floating point numbers.
When double precision is not needed, taking 2x the space in the
serialization is not good.
2016-10-03 00:08:35 +02:00
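A minimal sketch of what the single-precision variants might look like in a type's RDB callbacks, assuming the RedisModule_SaveFloat / RedisModule_LoadFloat pair added here; the callback names are hypothetical:

    #include "redismodule.h"

    /* Hypothetical RDB callbacks for a module type that stores one float:
     * single precision halves the serialized size compared to SaveDouble
     * when the extra precision is not needed. */
    void MyTypeRdbSave(RedisModuleIO *rdb, void *value) {
        RedisModule_SaveFloat(rdb, *(float *)value);
    }

    void *MyTypeRdbLoad(RedisModuleIO *rdb, int encver) {
        (void) encver;   /* single encoding version in this sketch */
        float *f = RedisModule_Alloc(sizeof(*f));
        *f = RedisModule_LoadFloat(rdb);
        return f;
    }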
antirez a1b1fd4f39 Modules: API to log from module I/O callbacks. 2016-10-02 16:51:37 +02:00
antirez 4674efdee2 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2016-10-02 16:50:37 +02:00
antirez 0d9febf6a0 Add compiler optimizations to example module makefile. 2016-10-02 11:01:36 +02:00
antirez 6782e774f1 debug.c: include dlfcn.h regardless of BACKTRACE support. 2016-09-27 00:29:47 +02:00
antirez 2564031a15 Merge branch 'unstable' of github.com:/antirez/redis into unstable 2016-09-26 09:10:52 +02:00
antirez 6d9f8e2462 Security: CONFIG SET client-output-buffer-limit overflow fixed.
This commit fixes a vulnerability reported by Cory Duplantis
of Cisco Talos, see TALOS-2016-0206 for reference.

CONFIG SET client-output-buffer-limit accepts as client class "master"
which is actually only used to implement CLIENT KILL. The "master" class
has ID 3. What happens is that the global structure:

    server.client_obuf_limits[class]

Is accessed with class = 3. However it is a 3-element array, so writing
the 4th element means writing up to 24 bytes of memory *after* the end
of the array, since the structure is defined as:

    typedef struct clientBufferLimitsConfig {
        unsigned long long hard_limit_bytes;
        unsigned long long soft_limit_bytes;
        time_t soft_limit_seconds;
    } clientBufferLimitsConfig;

EVALUATION OF IMPACT:

Checking what's past the boundaries of the array in the global
'server' structure, we find AOF state fields:

    clientBufferLimitsConfig client_obuf_limits[CLIENT_TYPE_OBUF_COUNT];
    /* AOF persistence */
    int aof_state;                  /* AOF_(ON|OFF|WAIT_REWRITE) */
    int aof_fsync;                  /* Kind of fsync() policy */
    char *aof_filename;             /* Name of the AOF file */
    int aof_no_fsync_on_rewrite;    /* Don't fsync if a rewrite is in prog. */
    int aof_rewrite_perc;           /* Rewrite AOF if % growth is > M and... */
    off_t aof_rewrite_min_size;     /* the AOF file is at least N bytes. */
    off_t aof_rewrite_base_size;    /* AOF size on latest startup or rewrite. */
    off_t aof_current_size;         /* AOF current size. */

Writing to most of these fields should be harmless and only cause
problems in Redis persistence that should not escalate to security
problems. However, unfortunately, writing to "aof_filename" could
potentially be a security issue depending on the access pattern.

Searching for "aof.filename" accesses in the source code returns many different
usages of the field, including using it as input for open(), logging to the
Redis log file or syslog, and calling the rename() syscall.

It looks possible that attacks could lead at least to information
disclosure of the state and data inside Redis. However note that the
attacker must already have access to the server. But, worse than that,
it looks possible that being able to change the AOF filename can be used
to mount more powerful attacks: like overwriting random files with AOF
data (easily a potential security issue as demonstrated here:
http://antirez.com/news/96), or even more subtle attacks where the
AOF filename is changed to a path where a malicious AOF file is loaded
in order to exploit other potential issues when the AOF parser is fed
with untrusted input (no such issue is currently known).

The fix checks the places where the 'master' class is specified in
order to access configuration data structures, and returns an error in
these cases.

WHO IS AT RISK?

The "master" client class was introduced in Redis in Jul 28 2015.
Every Redis instance released past this date is not vulnerable
while all the releases after this date are. Notably:

    Redis 3.0.x is NOT vulnerable.
    Redis 3.2.x IS vulnerable.
    Redis unstable is vulnerable.

In order for the instance to be at risk, at least one of the following
conditions must be true:

    1. The attacker can access Redis remotely and is able to send
       the CONFIG SET command (often banned in managed Redis instances).

    2. The attacker is able to control the "redis.conf" file and
       can wait or trigger a server restart.

The problem was fixed 26th September 2016 in all the releases affected.
2016-09-26 08:47:52 +02:00
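The shape of the fix is a classic validity check before indexing a fixed-size array: reject class names that are valid elsewhere (such as "master" for CLIENT KILL) but have no slot in the limits array. A minimal sketch with hypothetical names, not the actual CONFIG SET code:

    #include <stdio.h>
    #include <string.h>

    #define OBUF_CLASS_COUNT 3   /* normal, slave, pubsub: what the limits array covers */

    /* Hypothetical lookup: map a class name to an index into the 3-element
     * limits array, returning -1 for classes without a limits slot. */
    static int getObufClassByName(const char *name) {
        const char *classes[OBUF_CLASS_COUNT] = { "normal", "slave", "pubsub" };
        for (int j = 0; j < OBUF_CLASS_COUNT; j++)
            if (strcmp(name, classes[j]) == 0) return j;
        return -1;   /* caller must reply with an error instead of indexing */
    }

    int main(void) {
        printf("slave -> %d, master -> %d\n",
            getObufClassByName("slave"), getObufClassByName("master"));
        return 0;
    }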
charsyam ca6fc4f031 Simple change just using slaves instead of server.slaves 2016-09-24 15:53:57 +09:00
Dvir Volk a91650fc57 added RM_CreateStringPrintf 2016-09-21 12:30:38 +03:00
antirez 670586715a dict.c: fix dictGenericDelete() return ASAP condition.
Recently we moved the "return ASAP" condition for the Delete() function
from checking .size to checking .used, which is smarter. However, while
testing the first table alone is always enough to ensure the dict is
totally empty when we test the .size field, testing .used requires
checking both T0 and T1, since a rehashing could be in progress.
2016-09-20 17:22:30 +02:00
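The core of the fix fits in one condition: with .size it was enough to look at the first table, but with .used an empty dict can only be detected by checking both tables, since a rehash in progress may have moved all remaining entries into the second one. A minimal sketch with a hypothetical, trimmed-down dict layout:

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical trimmed-down layout: two hash tables because of
     * incremental rehashing (T0 and T1). */
    typedef struct hashTable { size_t size, used; } hashTable;
    typedef struct dict { hashTable ht[2]; } dict;

    /* Return-ASAP check for a delete: the dict is empty only if *both*
     * tables hold zero elements. */
    static int dictIsEmpty(const dict *d) {
        return d->ht[0].used == 0 && d->ht[1].used == 0;
    }

    int main(void) {
        dict d = { { { 8, 0 }, { 16, 3 } } };   /* rehash in progress: keys live in ht[1] */
        printf("empty=%d\n", dictIsEmpty(&d));  /* 0: correctly reported as non-empty */
        return 0;
    }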
antirez e9d861ec69 Clear child data when opening the pipes.
This is important both to reset the magic to 0, so that it will not
match if the structure is not explicitly set, and to initialize other
things we may add like counters and such.
2016-09-19 14:11:17 +02:00