/*
 * Copyright (C) 2016-2017 Red Hat, Inc.
 * Copyright (C) 2005 Anthony Liguori <anthony@codemonkey.ws>
 *
 * Network Block Device Server Side
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; under version 2 of the License.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, see <http://www.gnu.org/licenses/>.
 */

#include "qemu/osdep.h"
#include "qapi/error.h"
#include "trace.h"
#include "nbd-internal.h"

static int system_errno_to_nbd_errno(int err)
{
    switch (err) {
    case 0:
        return NBD_SUCCESS;
    case EPERM:
    case EROFS:
        return NBD_EPERM;
    case EIO:
        return NBD_EIO;
    case ENOMEM:
        return NBD_ENOMEM;
#ifdef EDQUOT
    case EDQUOT:
#endif
    case EFBIG:
    case ENOSPC:
        return NBD_ENOSPC;
    case ESHUTDOWN:
        return NBD_ESHUTDOWN;
    case EINVAL:
    default:
        return NBD_EINVAL;
    }
}

/* Definitions for opaque data types */

typedef struct NBDRequestData NBDRequestData;

struct NBDRequestData {
    QSIMPLEQ_ENTRY(NBDRequestData) entry;
    NBDClient *client;
    uint8_t *data;
    bool complete;
};

struct NBDExport {
    int refcount;
    void (*close)(NBDExport *exp);

    BlockBackend *blk;
    char *name;
    char *description;
    off_t dev_offset;
    off_t size;
    uint16_t nbdflags;
    QTAILQ_HEAD(, NBDClient) clients;
    QTAILQ_ENTRY(NBDExport) next;

    AioContext *ctx;

    BlockBackend *eject_notifier_blk;
    Notifier eject_notifier;
};

static QTAILQ_HEAD(, NBDExport) exports = QTAILQ_HEAD_INITIALIZER(exports);

struct NBDClient {
    int refcount;
    void (*close_fn)(NBDClient *client, bool negotiated);

    NBDExport *exp;
    QCryptoTLSCreds *tlscreds;
    char *tlsaclname;
    QIOChannelSocket *sioc; /* The underlying data channel */
    QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */

    Coroutine *recv_coroutine;

    CoMutex send_lock;
    Coroutine *send_coroutine;

    QTAILQ_ENTRY(NBDClient) next;
    int nb_requests;
    bool closing;
};

/* That's all folks */

static void nbd_client_receive_next_request(NBDClient *client);

/* Basic flow for negotiation

   Server         Client
   Negotiate

   or

   Server         Client
   Negotiate #1
                  Option
   Negotiate #2

   ----

   followed by

   Server         Client
                  Request
   Response
                  Request
   Response
                  ...
   ...
                  Request (type == 2)

*/

/* Send a reply header, including length, but no payload.
 * Return -errno on error, 0 on success. */
static int nbd_negotiate_send_rep_len(QIOChannel *ioc, uint32_t type,
                                      uint32_t opt, uint32_t len, Error **errp)
{
    uint64_t magic;

    trace_nbd_negotiate_send_rep_len(opt, nbd_opt_lookup(opt),
                                     type, nbd_rep_lookup(type), len);

    assert(len < NBD_MAX_BUFFER_SIZE);
    magic = cpu_to_be64(NBD_REP_MAGIC);
    if (nbd_write(ioc, &magic, sizeof(magic), errp) < 0) {
        error_prepend(errp, "write failed (rep magic): ");
        return -EINVAL;
    }

    opt = cpu_to_be32(opt);
    if (nbd_write(ioc, &opt, sizeof(opt), errp) < 0) {
        error_prepend(errp, "write failed (rep opt): ");
        return -EINVAL;
    }

    type = cpu_to_be32(type);
    if (nbd_write(ioc, &type, sizeof(type), errp) < 0) {
        error_prepend(errp, "write failed (rep type): ");
        return -EINVAL;
    }

    len = cpu_to_be32(len);
    if (nbd_write(ioc, &len, sizeof(len), errp) < 0) {
        error_prepend(errp, "write failed (rep data length): ");
        return -EINVAL;
    }
    return 0;
}

/* Send a reply header with default 0 length.
 * Return -errno on error, 0 on success. */
static int nbd_negotiate_send_rep(QIOChannel *ioc, uint32_t type, uint32_t opt,
                                  Error **errp)
{
    return nbd_negotiate_send_rep_len(ioc, type, opt, 0, errp);
}

/* Send an error reply.
 * Return -errno on error, 0 on success. */
static int GCC_FMT_ATTR(5, 6)
nbd_negotiate_send_rep_err(QIOChannel *ioc, uint32_t type,
                           uint32_t opt, Error **errp, const char *fmt, ...)
{
    va_list va;
    char *msg;
    int ret;
    size_t len;

    va_start(va, fmt);
    msg = g_strdup_vprintf(fmt, va);
    va_end(va);
    len = strlen(msg);
    assert(len < 4096);
    trace_nbd_negotiate_send_rep_err(msg);
    ret = nbd_negotiate_send_rep_len(ioc, type, opt, len, errp);
    if (ret < 0) {
        goto out;
    }
    if (nbd_write(ioc, msg, len, errp) < 0) {
        error_prepend(errp, "write failed (error message): ");
        ret = -EIO;
    } else {
        ret = 0;
    }

out:
    g_free(msg);
    return ret;
}

/* Send a single NBD_REP_SERVER reply to NBD_OPT_LIST, including payload.
 * Return -errno on error, 0 on success. */
static int nbd_negotiate_send_rep_list(QIOChannel *ioc, NBDExport *exp,
                                       Error **errp)
{
    size_t name_len, desc_len;
    uint32_t len;
    const char *name = exp->name ? exp->name : "";
    const char *desc = exp->description ? exp->description : "";
    int ret;

    trace_nbd_negotiate_send_rep_list(name, desc);
    name_len = strlen(name);
    desc_len = strlen(desc);
    len = name_len + desc_len + sizeof(len);
    ret = nbd_negotiate_send_rep_len(ioc, NBD_REP_SERVER, NBD_OPT_LIST, len,
                                     errp);
    if (ret < 0) {
        return ret;
    }

    len = cpu_to_be32(name_len);
    if (nbd_write(ioc, &len, sizeof(len), errp) < 0) {
        error_prepend(errp, "write failed (name length): ");
        return -EINVAL;
    }

    if (nbd_write(ioc, name, name_len, errp) < 0) {
        error_prepend(errp, "write failed (name buffer): ");
        return -EINVAL;
    }

    if (nbd_write(ioc, desc, desc_len, errp) < 0) {
        error_prepend(errp, "write failed (description buffer): ");
        return -EINVAL;
    }

    return 0;
}

/* Process the NBD_OPT_LIST command, with a potential series of replies.
 * Return -errno on error, 0 on success. */
static int nbd_negotiate_handle_list(NBDClient *client, uint32_t length,
                                     Error **errp)
{
    NBDExport *exp;

    if (length) {
        if (nbd_drop(client->ioc, length, errp) < 0) {
            return -EIO;
        }
        return nbd_negotiate_send_rep_err(client->ioc,
                                          NBD_REP_ERR_INVALID, NBD_OPT_LIST,
                                          errp,
                                          "OPT_LIST should not have length");
    }

    /* For each export, send a NBD_REP_SERVER reply. */
    QTAILQ_FOREACH(exp, &exports, next) {
        if (nbd_negotiate_send_rep_list(client->ioc, exp, errp)) {
            return -EINVAL;
        }
    }

    /* Finish with a NBD_REP_ACK. */
    return nbd_negotiate_send_rep(client->ioc, NBD_REP_ACK, NBD_OPT_LIST, errp);
}

/* Send a reply to NBD_OPT_EXPORT_NAME.
 * Return -errno on error, 0 on success. */
static int nbd_negotiate_handle_export_name(NBDClient *client, uint32_t length,
                                            uint16_t myflags, bool no_zeroes,
                                            Error **errp)
{
    char name[NBD_MAX_NAME_SIZE + 1];
    char buf[NBD_REPLY_EXPORT_NAME_SIZE] = "";
    size_t len;
    int ret;

    /* Client sends:
        [20 ..  xx]   export name (length bytes)
       Server replies:
        [ 0 ..   7]   size
        [ 8 ..   9]   export flags
        [10 .. 133]   reserved     (0) [unless no_zeroes]
     */
    trace_nbd_negotiate_handle_export_name();
    if (length >= sizeof(name)) {
        error_setg(errp, "Bad length received");
        return -EINVAL;
    }
    if (nbd_read(client->ioc, name, length, errp) < 0) {
        error_prepend(errp, "read failed: ");
        return -EINVAL;
    }
    name[length] = '\0';

    trace_nbd_negotiate_handle_export_name_request(name);

    client->exp = nbd_export_find(name);
    if (!client->exp) {
        error_setg(errp, "export not found");
        return -EINVAL;
    }

    trace_nbd_negotiate_new_style_size_flags(client->exp->size,
                                             client->exp->nbdflags | myflags);
    stq_be_p(buf, client->exp->size);
    stw_be_p(buf + 8, client->exp->nbdflags | myflags);
    len = no_zeroes ? 10 : sizeof(buf);
    ret = nbd_write(client->ioc, buf, len, errp);
    if (ret < 0) {
        error_prepend(errp, "write failed: ");
        return ret;
    }

    QTAILQ_INSERT_TAIL(&client->exp->clients, client, next);
    nbd_export_get(client->exp);

    return 0;
}

/* Send a single NBD_REP_INFO, with a buffer @buf of @length bytes.
 * The buffer does NOT include the info type prefix.
 * Return -errno on error, 0 if ready to send more. */
static int nbd_negotiate_send_info(NBDClient *client, uint32_t opt,
                                   uint16_t info, uint32_t length, void *buf,
                                   Error **errp)
{
    int rc;

    trace_nbd_negotiate_send_info(info, nbd_info_lookup(info), length);
    rc = nbd_negotiate_send_rep_len(client->ioc, NBD_REP_INFO, opt,
                                    sizeof(info) + length, errp);
    if (rc < 0) {
        return rc;
    }
    cpu_to_be16s(&info);
    if (nbd_write(client->ioc, &info, sizeof(info), errp) < 0) {
        return -EIO;
    }
    if (nbd_write(client->ioc, buf, length, errp) < 0) {
        return -EIO;
    }
    return 0;
}
}
|
|
|
|
|
|
|
|
/* Handle NBD_OPT_INFO and NBD_OPT_GO.
|
|
|
|
* Return -errno on error, 0 if ready for next option, and 1 to move
|
|
|
|
* into transmission phase. */
|
|
|
|
static int nbd_negotiate_handle_info(NBDClient *client, uint32_t length,
|
|
|
|
uint32_t opt, uint16_t myflags,
|
|
|
|
Error **errp)
|
|
|
|
{
|
|
|
|
int rc;
|
|
|
|
char name[NBD_MAX_NAME_SIZE + 1];
|
|
|
|
NBDExport *exp;
|
|
|
|
uint16_t requests;
|
|
|
|
uint16_t request;
|
|
|
|
uint32_t namelen;
|
|
|
|
bool sendname = false;
|
nbd: Implement NBD_INFO_BLOCK_SIZE on server
The upstream NBD Protocol has defined a new extension to allow
the server to advertise block sizes to the client, as well as
a way for the client to inform the server that it intends to
obey block sizes.
Thanks to a recent fix (commit df7b97ff), our real minimum
transfer size is always 1 (the block layer takes care of
read-modify-write on our behalf), but we're still more efficient
if we advertise 512 when the client supports it, as follows:
- OPT_INFO, but no NBD_INFO_BLOCK_SIZE: advertise 512, then
fail with NBD_REP_ERR_BLOCK_SIZE_REQD; client is free to try
something else since we don't disconnect
- OPT_INFO with NBD_INFO_BLOCK_SIZE: advertise 512
- OPT_GO, but no NBD_INFO_BLOCK_SIZE: advertise 1
- OPT_GO with NBD_INFO_BLOCK_SIZE: advertise 512
We can also advertise the optimum block size (presumably the
cluster size, when exporting a qcow2 file), and our absolute
maximum transfer size of 32M, to help newer clients avoid
EINVAL failures or abrupt disconnects on oversize requests.
We do not reject clients for using the older NBD_OPT_EXPORT_NAME;
we are no worse off for those clients than we used to be.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170707203049.534-9-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-08 04:30:48 +08:00
|
|
|
bool blocksize = false;
|
|
|
|
uint32_t sizes[3];
|
2017-07-08 04:30:46 +08:00
|
|
|
char buf[sizeof(uint64_t) + sizeof(uint16_t)];
|
|
|
|
const char *msg;
|
|
|
|
|
|
|
|
/* Client sends:
|
|
|
|
4 bytes: L, name length (can be 0)
|
|
|
|
L bytes: export name
|
|
|
|
2 bytes: N, number of requests (can be 0)
|
|
|
|
N * 2 bytes: N requests
|
|
|
|
*/
|
|
|
|
if (length < sizeof(namelen) + sizeof(requests)) {
|
|
|
|
msg = "overall request too short";
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
if (nbd_read(client->ioc, &namelen, sizeof(namelen), errp) < 0) {
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
be32_to_cpus(&namelen);
|
|
|
|
length -= sizeof(namelen);
|
|
|
|
if (namelen > length - sizeof(requests) || (length - namelen) % 2) {
|
|
|
|
msg = "name length is incorrect";
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
if (nbd_read(client->ioc, name, namelen, errp) < 0) {
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
name[namelen] = '\0';
|
|
|
|
length -= namelen;
|
|
|
|
trace_nbd_negotiate_handle_export_name_request(name);
|
|
|
|
|
|
|
|
if (nbd_read(client->ioc, &requests, sizeof(requests), errp) < 0) {
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
be16_to_cpus(&requests);
|
|
|
|
length -= sizeof(requests);
|
|
|
|
trace_nbd_negotiate_handle_info_requests(requests);
|
|
|
|
if (requests != length / sizeof(request)) {
|
|
|
|
msg = "incorrect number of requests for overall length";
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
while (requests--) {
|
|
|
|
if (nbd_read(client->ioc, &request, sizeof(request), errp) < 0) {
|
|
|
|
return -EIO;
|
|
|
|
}
|
|
|
|
be16_to_cpus(&request);
|
|
|
|
length -= sizeof(request);
|
|
|
|
trace_nbd_negotiate_handle_info_request(request,
|
|
|
|
nbd_info_lookup(request));
|
nbd: Implement NBD_INFO_BLOCK_SIZE on server
The upstream NBD Protocol has defined a new extension to allow
the server to advertise block sizes to the client, as well as
a way for the client to inform the server that it intends to
obey block sizes.
Thanks to a recent fix (commit df7b97ff), our real minimum
transfer size is always 1 (the block layer takes care of
read-modify-write on our behalf), but we're still more efficient
if we advertise 512 when the client supports it, as follows:
- OPT_INFO, but no NBD_INFO_BLOCK_SIZE: advertise 512, then
fail with NBD_REP_ERR_BLOCK_SIZE_REQD; client is free to try
something else since we don't disconnect
- OPT_INFO with NBD_INFO_BLOCK_SIZE: advertise 512
- OPT_GO, but no NBD_INFO_BLOCK_SIZE: advertise 1
- OPT_GO with NBD_INFO_BLOCK_SIZE: advertise 512
We can also advertise the optimum block size (presumably the
cluster size, when exporting a qcow2 file), and our absolute
maximum transfer size of 32M, to help newer clients avoid
EINVAL failures or abrupt disconnects on oversize requests.
We do not reject clients for using the older NBD_OPT_EXPORT_NAME;
we are no worse off for those clients than we used to be.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170707203049.534-9-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-08 04:30:48 +08:00
|
|
|
/* We care about NBD_INFO_NAME and NBD_INFO_BLOCK_SIZE;
|
|
|
|
* everything else is either a request we don't know or
|
|
|
|
* something we send regardless of request */
|
|
|
|
switch (request) {
|
|
|
|
case NBD_INFO_NAME:
|
2017-07-08 04:30:46 +08:00
|
|
|
sendname = true;
|
nbd: Implement NBD_INFO_BLOCK_SIZE on server
The upstream NBD Protocol has defined a new extension to allow
the server to advertise block sizes to the client, as well as
a way for the client to inform the server that it intends to
obey block sizes.
Thanks to a recent fix (commit df7b97ff), our real minimum
transfer size is always 1 (the block layer takes care of
read-modify-write on our behalf), but we're still more efficient
if we advertise 512 when the client supports it, as follows:
- OPT_INFO, but no NBD_INFO_BLOCK_SIZE: advertise 512, then
fail with NBD_REP_ERR_BLOCK_SIZE_REQD; client is free to try
something else since we don't disconnect
- OPT_INFO with NBD_INFO_BLOCK_SIZE: advertise 512
- OPT_GO, but no NBD_INFO_BLOCK_SIZE: advertise 1
- OPT_GO with NBD_INFO_BLOCK_SIZE: advertise 512
We can also advertise the optimum block size (presumably the
cluster size, when exporting a qcow2 file), and our absolute
maximum transfer size of 32M, to help newer clients avoid
EINVAL failures or abrupt disconnects on oversize requests.
We do not reject clients for using the older NBD_OPT_EXPORT_NAME;
we are no worse off for those clients than we used to be.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170707203049.534-9-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-08 04:30:48 +08:00
|
|
|
break;
|
|
|
|
case NBD_INFO_BLOCK_SIZE:
|
|
|
|
blocksize = true;
|
|
|
|
break;
|
2017-07-08 04:30:46 +08:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
exp = nbd_export_find(name);
|
|
|
|
if (!exp) {
|
|
|
|
return nbd_negotiate_send_rep_err(client->ioc, NBD_REP_ERR_UNKNOWN,
|
|
|
|
opt, errp, "export '%s' not present",
|
|
|
|
name);
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Don't bother sending NBD_INFO_NAME unless client requested it */
|
|
|
|
if (sendname) {
|
|
|
|
rc = nbd_negotiate_send_info(client, opt, NBD_INFO_NAME, length, name,
|
|
|
|
errp);
|
|
|
|
if (rc < 0) {
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Send NBD_INFO_DESCRIPTION only if available, regardless of
|
|
|
|
* client request */
|
|
|
|
if (exp->description) {
|
|
|
|
size_t len = strlen(exp->description);
|
|
|
|
|
|
|
|
rc = nbd_negotiate_send_info(client, opt, NBD_INFO_DESCRIPTION,
|
|
|
|
len, exp->description, errp);
|
|
|
|
if (rc < 0) {
|
|
|
|
return rc;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
nbd: Implement NBD_INFO_BLOCK_SIZE on server
The upstream NBD Protocol has defined a new extension to allow
the server to advertise block sizes to the client, as well as
a way for the client to inform the server that it intends to
obey block sizes.
Thanks to a recent fix (commit df7b97ff), our real minimum
transfer size is always 1 (the block layer takes care of
read-modify-write on our behalf), but we're still more efficient
if we advertise 512 when the client supports it, as follows:
- OPT_INFO, but no NBD_INFO_BLOCK_SIZE: advertise 512, then
fail with NBD_REP_ERR_BLOCK_SIZE_REQD; client is free to try
something else since we don't disconnect
- OPT_INFO with NBD_INFO_BLOCK_SIZE: advertise 512
- OPT_GO, but no NBD_INFO_BLOCK_SIZE: advertise 1
- OPT_GO with NBD_INFO_BLOCK_SIZE: advertise 512
We can also advertise the optimum block size (presumably the
cluster size, when exporting a qcow2 file), and our absolute
maximum transfer size of 32M, to help newer clients avoid
EINVAL failures or abrupt disconnects on oversize requests.
We do not reject clients for using the older NBD_OPT_EXPORT_NAME;
we are no worse off for those clients than we used to be.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170707203049.534-9-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-07-08 04:30:48 +08:00
|
|
|
    /* Send NBD_INFO_BLOCK_SIZE always, but tweak the minimum size
     * according to whether the client requested it, and according to
     * whether this is OPT_INFO or OPT_GO. */
    /* minimum - 1 for back-compat, or 512 if client is new enough.
     * TODO: consult blk_bs(blk)->bl.request_alignment? */
    sizes[0] = (opt == NBD_OPT_INFO || blocksize) ? BDRV_SECTOR_SIZE : 1;
    /* preferred - Hard-code to 4096 for now.
     * TODO: is blk_bs(blk)->bl.opt_transfer appropriate? */
    sizes[1] = 4096;
    /* maximum - At most 32M, but smaller as appropriate. */
    sizes[2] = MIN(blk_get_max_transfer(exp->blk), NBD_MAX_BUFFER_SIZE);
    trace_nbd_negotiate_handle_info_block_size(sizes[0], sizes[1], sizes[2]);
    cpu_to_be32s(&sizes[0]);
    cpu_to_be32s(&sizes[1]);
    cpu_to_be32s(&sizes[2]);
    rc = nbd_negotiate_send_info(client, opt, NBD_INFO_BLOCK_SIZE,
                                 sizeof(sizes), sizes, errp);
    if (rc < 0) {
        return rc;
    }

    /* Send NBD_INFO_EXPORT always */
    trace_nbd_negotiate_new_style_size_flags(exp->size,
                                             exp->nbdflags | myflags);
    stq_be_p(buf, exp->size);
    stw_be_p(buf + 8, exp->nbdflags | myflags);
    rc = nbd_negotiate_send_info(client, opt, NBD_INFO_EXPORT,
                                 sizeof(buf), buf, errp);
    if (rc < 0) {
        return rc;
    }

    /* If the client is just asking for NBD_OPT_INFO, but forgot to
     * request block sizes, return an error.
     * TODO: consult blk_bs(blk)->request_align, and only error if it
     * is not 1? */
    if (opt == NBD_OPT_INFO && !blocksize) {
        return nbd_negotiate_send_rep_err(client->ioc,
                                          NBD_REP_ERR_BLOCK_SIZE_REQD, opt,
                                          errp,
                                          "request NBD_INFO_BLOCK_SIZE to "
                                          "use this export");
    }

    /* Final reply */
    rc = nbd_negotiate_send_rep(client->ioc, NBD_REP_ACK, opt, errp);
    if (rc < 0) {
        return rc;
    }

    if (opt == NBD_OPT_GO) {
        client->exp = exp;
        QTAILQ_INSERT_TAIL(&client->exp->clients, client, next);
        nbd_export_get(client->exp);
        rc = 1;
    }
    return rc;

 invalid:
    if (nbd_drop(client->ioc, length, errp) < 0) {
        return -EIO;
    }
    return nbd_negotiate_send_rep_err(client->ioc, NBD_REP_ERR_INVALID, opt,
                                      errp, "%s", msg);
}

/* Handle NBD_OPT_STARTTLS. Return NULL to drop connection, or else the
 * new channel for all further (now-encrypted) communication. */
static QIOChannel *nbd_negotiate_handle_starttls(NBDClient *client,
                                                 uint32_t length,
                                                 Error **errp)
{
    QIOChannel *ioc;
    QIOChannelTLS *tioc;
    struct NBDTLSHandshakeData data = { 0 };

    trace_nbd_negotiate_handle_starttls();
    ioc = client->ioc;
    if (length) {
        if (nbd_drop(ioc, length, errp) < 0) {
            return NULL;
        }
        nbd_negotiate_send_rep_err(ioc, NBD_REP_ERR_INVALID, NBD_OPT_STARTTLS,
                                   errp,
                                   "OPT_STARTTLS should not have length");
        return NULL;
    }

    if (nbd_negotiate_send_rep(client->ioc, NBD_REP_ACK,
                               NBD_OPT_STARTTLS, errp) < 0) {
        return NULL;
    }

    tioc = qio_channel_tls_new_server(ioc,
                                      client->tlscreds,
                                      client->tlsaclname,
                                      errp);
    if (!tioc) {
        return NULL;
    }

    qio_channel_set_name(QIO_CHANNEL(tioc), "nbd-server-tls");
    trace_nbd_negotiate_handle_starttls_handshake();
    data.loop = g_main_loop_new(g_main_context_default(), FALSE);
    qio_channel_tls_handshake(tioc,
                              nbd_tls_handshake,
                              &data,
                              NULL);

    if (!data.complete) {
        g_main_loop_run(data.loop);
    }
    g_main_loop_unref(data.loop);
    if (data.error) {
        object_unref(OBJECT(tioc));
        error_propagate(errp, data.error);
        return NULL;
    }

    return QIO_CHANNEL(tioc);
}

/* nbd_negotiate_options
 * Process all NBD_OPT_* client option commands, during fixed newstyle
 * negotiation.
 * Return:
 * -errno  on error, errp is set
 * 0       on successful negotiation, errp is not set
 * 1       if client sent NBD_OPT_ABORT, i.e. on valid disconnect,
 *         errp is not set
 */
static int nbd_negotiate_options(NBDClient *client, uint16_t myflags,
                                 Error **errp)
{
    uint32_t flags;
    bool fixedNewstyle = false;
    bool no_zeroes = false;

    /* Client sends:
        [ 0 ..   3]   client flags

       Then we loop until NBD_OPT_EXPORT_NAME or NBD_OPT_GO:
        [ 0 ..   7]   NBD_OPTS_MAGIC
        [ 8 ..  11]   NBD option
        [12 ..  15]   Data length
        ...           Rest of request

        [ 0 ..   7]   NBD_OPTS_MAGIC
        [ 8 ..  11]   Second NBD option
        [12 ..  15]   Data length
        ...           Rest of request
    */

    if (nbd_read(client->ioc, &flags, sizeof(flags), errp) < 0) {
        error_prepend(errp, "read failed: ");
        return -EIO;
    }
    be32_to_cpus(&flags);
    trace_nbd_negotiate_options_flags(flags);
    if (flags & NBD_FLAG_C_FIXED_NEWSTYLE) {
        fixedNewstyle = true;
        flags &= ~NBD_FLAG_C_FIXED_NEWSTYLE;
    }
    if (flags & NBD_FLAG_C_NO_ZEROES) {
        no_zeroes = true;
        flags &= ~NBD_FLAG_C_NO_ZEROES;
    }
    if (flags != 0) {
        error_setg(errp, "Unknown client flags 0x%" PRIx32 " received", flags);
        return -EINVAL;
    }

    while (1) {
        int ret;
        uint32_t option, length;
        uint64_t magic;

        if (nbd_read(client->ioc, &magic, sizeof(magic), errp) < 0) {
            error_prepend(errp, "read failed: ");
            return -EINVAL;
        }
        magic = be64_to_cpu(magic);
        trace_nbd_negotiate_options_check_magic(magic);
        if (magic != NBD_OPTS_MAGIC) {
            error_setg(errp, "Bad magic received");
            return -EINVAL;
        }

        if (nbd_read(client->ioc, &option,
                     sizeof(option), errp) < 0) {
            error_prepend(errp, "read failed: ");
            return -EINVAL;
        }
        option = be32_to_cpu(option);

        if (nbd_read(client->ioc, &length, sizeof(length), errp) < 0) {
            error_prepend(errp, "read failed: ");
            return -EINVAL;
        }
        length = be32_to_cpu(length);

        trace_nbd_negotiate_options_check_option(option,
                                                 nbd_opt_lookup(option));
        if (client->tlscreds &&
            client->ioc == (QIOChannel *)client->sioc) {
            QIOChannel *tioc;
            if (!fixedNewstyle) {
                error_setg(errp, "Unsupported option 0x%" PRIx32, option);
                return -EINVAL;
            }
            switch (option) {
            case NBD_OPT_STARTTLS:
                tioc = nbd_negotiate_handle_starttls(client, length, errp);
                if (!tioc) {
                    return -EIO;
                }
                object_unref(OBJECT(client->ioc));
                client->ioc = QIO_CHANNEL(tioc);
                break;

            case NBD_OPT_EXPORT_NAME:
                /* No way to return an error to client, so drop connection */
                error_setg(errp, "Option 0x%x not permitted before TLS",
                           option);
                return -EINVAL;

            default:
                if (nbd_drop(client->ioc, length, errp) < 0) {
                    return -EIO;
                }
                ret = nbd_negotiate_send_rep_err(client->ioc,
                                                 NBD_REP_ERR_TLS_REQD,
                                                 option, errp,
                                                 "Option 0x%" PRIx32
                                                 " not permitted before TLS",
                                                 option);
                if (ret < 0) {
                    return ret;
                }
                /* Let the client keep trying, unless they asked to
                 * quit. In this mode, we've already sent an error, so
                 * we can't ack the abort. */
                if (option == NBD_OPT_ABORT) {
                    return 1;
                }
                break;
            }
        } else if (fixedNewstyle) {
            switch (option) {
            case NBD_OPT_LIST:
                ret = nbd_negotiate_handle_list(client, length, errp);
                if (ret < 0) {
                    return ret;
                }
                break;

            case NBD_OPT_ABORT:
                /* NBD spec says we must try to reply before
                 * disconnecting, but that we must also tolerate
                 * guests that don't wait for our reply. */
                nbd_negotiate_send_rep(client->ioc, NBD_REP_ACK, option, NULL);
                return 1;

            case NBD_OPT_EXPORT_NAME:
                return nbd_negotiate_handle_export_name(client, length,
                                                        myflags, no_zeroes,
                                                        errp);

            case NBD_OPT_INFO:
            case NBD_OPT_GO:
                ret = nbd_negotiate_handle_info(client, length, option,
                                                myflags, errp);
                if (ret == 1) {
                    assert(option == NBD_OPT_GO);
                    return 0;
                }
                if (ret) {
                    return ret;
                }
                break;

            case NBD_OPT_STARTTLS:
                if (nbd_drop(client->ioc, length, errp) < 0) {
                    return -EIO;
                }
                if (client->tlscreds) {
                    ret = nbd_negotiate_send_rep_err(client->ioc,
                                                     NBD_REP_ERR_INVALID,
                                                     option, errp,
                                                     "TLS already enabled");
                } else {
                    ret = nbd_negotiate_send_rep_err(client->ioc,
                                                     NBD_REP_ERR_POLICY,
                                                     option, errp,
                                                     "TLS not configured");
                }
                if (ret < 0) {
                    return ret;
                }
                break;
            default:
                if (nbd_drop(client->ioc, length, errp) < 0) {
                    return -EIO;
                }
                ret = nbd_negotiate_send_rep_err(client->ioc,
                                                 NBD_REP_ERR_UNSUP,
                                                 option, errp,
                                                 "Unsupported option 0x%"
                                                 PRIx32 " (%s)", option,
                                                 nbd_opt_lookup(option));
                if (ret < 0) {
                    return ret;
                }
                break;
            }
        } else {
            /*
             * If broken new-style we should drop the connection
             * for anything except NBD_OPT_EXPORT_NAME
             */
            switch (option) {
            case NBD_OPT_EXPORT_NAME:
                return nbd_negotiate_handle_export_name(client, length,
                                                        myflags, no_zeroes,
                                                        errp);

            default:
                error_setg(errp, "Unsupported option 0x%" PRIx32 " (%s)",
                           option, nbd_opt_lookup(option));
                return -EINVAL;
            }
        }
    }
}

/* nbd_negotiate
 * Return:
 * -errno  on error, errp is set
 * 0       on successful negotiation, errp is not set
 * 1       if client sent NBD_OPT_ABORT, i.e. on valid disconnect,
 *         errp is not set
 */
static coroutine_fn int nbd_negotiate(NBDClient *client, Error **errp)
{
    char buf[NBD_OLDSTYLE_NEGOTIATE_SIZE] = "";
    int ret;
    const uint16_t myflags = (NBD_FLAG_HAS_FLAGS | NBD_FLAG_SEND_TRIM |
                              NBD_FLAG_SEND_FLUSH | NBD_FLAG_SEND_FUA |
                              NBD_FLAG_SEND_WRITE_ZEROES);
    bool oldStyle;

    /* Old style negotiation header, no room for options
        [ 0 ..   7]   passwd       ("NBDMAGIC")
        [ 8 ..  15]   magic        (NBD_CLIENT_MAGIC)
        [16 ..  23]   size
        [24 ..  27]   export flags (zero-extended)
        [28 .. 151]   reserved     (0)

       New style negotiation header, client can send options
        [ 0 ..   7]   passwd       ("NBDMAGIC")
        [ 8 ..  15]   magic        (NBD_OPTS_MAGIC)
        [16 ..  17]   server flags (0)
        ....options sent, ending in NBD_OPT_EXPORT_NAME or NBD_OPT_GO....
     */

    qio_channel_set_blocking(client->ioc, false, NULL);

    trace_nbd_negotiate_begin();
    memcpy(buf, "NBDMAGIC", 8);

    oldStyle = client->exp != NULL && !client->tlscreds;
    if (oldStyle) {
        trace_nbd_negotiate_old_style(client->exp->size,
                                      client->exp->nbdflags | myflags);
        stq_be_p(buf + 8, NBD_CLIENT_MAGIC);
        stq_be_p(buf + 16, client->exp->size);
        stl_be_p(buf + 24, client->exp->nbdflags | myflags);

        if (nbd_write(client->ioc, buf, sizeof(buf), errp) < 0) {
            error_prepend(errp, "write failed: ");
            return -EINVAL;
        }
    } else {
        stq_be_p(buf + 8, NBD_OPTS_MAGIC);
        stw_be_p(buf + 16, NBD_FLAG_FIXED_NEWSTYLE | NBD_FLAG_NO_ZEROES);

        if (nbd_write(client->ioc, buf, 18, errp) < 0) {
            error_prepend(errp, "write failed: ");
            return -EINVAL;
        }
        ret = nbd_negotiate_options(client, myflags, errp);
        if (ret != 0) {
            if (ret < 0) {
                error_prepend(errp, "option negotiation failed: ");
            }
            return ret;
        }
    }

    trace_nbd_negotiate_success();

    return 0;
}

static int nbd_receive_request(QIOChannel *ioc, NBDRequest *request,
                               Error **errp)
{
    uint8_t buf[NBD_REQUEST_SIZE];
    uint32_t magic;
    int ret;

    ret = nbd_read(ioc, buf, sizeof(buf), errp);
    if (ret < 0) {
        return ret;
    }

    /* Request
       [ 0 ..  3]   magic   (NBD_REQUEST_MAGIC)
       [ 4 ..  5]   flags   (NBD_CMD_FLAG_FUA, ...)
       [ 6 ..  7]   type    (NBD_CMD_READ, ...)
       [ 8 .. 15]   handle
       [16 .. 23]   from
       [24 .. 27]   len
     */

    magic = ldl_be_p(buf);
    request->flags  = lduw_be_p(buf + 4);
    request->type   = lduw_be_p(buf + 6);
    request->handle = ldq_be_p(buf + 8);
    request->from   = ldq_be_p(buf + 16);
    request->len    = ldl_be_p(buf + 24);

    trace_nbd_receive_request(magic, request->flags, request->type,
                              request->from, request->len);

    if (magic != NBD_REQUEST_MAGIC) {
        error_setg(errp, "invalid magic (got 0x%" PRIx32 ")", magic);
        return -EINVAL;
    }
    return 0;
}

#define MAX_NBD_REQUESTS 16

void nbd_client_get(NBDClient *client)
{
    client->refcount++;
}

void nbd_client_put(NBDClient *client)
{
    if (--client->refcount == 0) {
        /* The last reference should be dropped by client->close,
         * which is called by client_close.
         */
        assert(client->closing);

        qio_channel_detach_aio_context(client->ioc);
        object_unref(OBJECT(client->sioc));
        object_unref(OBJECT(client->ioc));
        if (client->tlscreds) {
            object_unref(OBJECT(client->tlscreds));
        }
        g_free(client->tlsaclname);
        if (client->exp) {
            QTAILQ_REMOVE(&client->exp->clients, client, next);
            nbd_export_put(client->exp);
        }
        g_free(client);
    }
}

nbd: Fix regression on resiliency to port scan
Back in qemu 2.5, qemu-nbd was immune to port probes (a transient
server would not quit, regardless of how many probe connections
came and went, until a connection actually negotiated). But we
broke that in commit ee7d7aa when removing the return value to
nbd_client_new(), although that patch also introduced a bug causing
an assertion failure on a client that fails negotiation. We then
made it worse during refactoring in commit 1a6245a (a segfault
before we could even assert); the (masked) assertion was cleaned
up in d3780c2 (still in 2.6), and just recently we finally fixed
the segfault ("nbd: Fully intialize client in case of failed
negotiation"). But that still means that ever since we added
TLS support to qemu-nbd, we have been vulnerable to an ill-timed
port-scan being able to cause a denial of service by taking down
qemu-nbd before a real client has a chance to connect.
Since negotiation is now handled asynchronously via coroutines,
we no longer have a synchronous point of return by re-adding a
return value to nbd_client_new(). So this patch instead wires
things up to pass the negotiation status through the close_fn
callback function.
Simple test across two terminals:
$ qemu-nbd -f raw -p 30001 file
$ nmap 127.0.0.1 -p 30001 && \
qemu-io -c 'r 0 512' -f raw nbd://localhost:30001
Note that this patch does not change what constitutes successful
negotiation (thus, a client must enter transmission phase before
that client can be considered as a reason to terminate the server
when the connection ends). Perhaps we may want to tweak things
in a later patch to also treat a client that uses NBD_OPT_ABORT
as being a 'successful' negotiation (the client correctly talked
the NBD protocol, and informed us it was not going to use our
export after all), but that's a discussion for another day.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1451614
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170608222617.20376-1-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2017-06-09 06:26:17 +08:00

static void client_close(NBDClient *client, bool negotiated)
{
    if (client->closing) {
        return;
    }

    client->closing = true;

    /* Force requests to finish.  They will drop their own references,
     * then we'll close the socket and free the NBDClient.
     */
    qio_channel_shutdown(client->ioc, QIO_CHANNEL_SHUTDOWN_BOTH,
                         NULL);

    /* Also tell the client, so that they release their reference. */
    if (client->close_fn) {
        client->close_fn(client, negotiated);
    }
}

static NBDRequestData *nbd_request_get(NBDClient *client)
{
    NBDRequestData *req;

    assert(client->nb_requests <= MAX_NBD_REQUESTS - 1);
    client->nb_requests++;

    req = g_new0(NBDRequestData, 1);
    nbd_client_get(client);
    req->client = client;
    return req;
}

static void nbd_request_put(NBDRequestData *req)
{
    NBDClient *client = req->client;

    if (req->data) {
        qemu_vfree(req->data);
    }
    g_free(req);

    client->nb_requests--;
    nbd_client_receive_next_request(client);

    nbd_client_put(client);
}

static void blk_aio_attached(AioContext *ctx, void *opaque)
{
    NBDExport *exp = opaque;
    NBDClient *client;

    trace_nbd_blk_aio_attached(exp->name, ctx);

    exp->ctx = ctx;

    QTAILQ_FOREACH(client, &exp->clients, next) {
        qio_channel_attach_aio_context(client->ioc, ctx);
        if (client->recv_coroutine) {
            aio_co_schedule(ctx, client->recv_coroutine);
        }
        if (client->send_coroutine) {
            aio_co_schedule(ctx, client->send_coroutine);
        }
    }
}

static void blk_aio_detach(void *opaque)
{
    NBDExport *exp = opaque;
    NBDClient *client;

    trace_nbd_blk_aio_detach(exp->name, exp->ctx);

    QTAILQ_FOREACH(client, &exp->clients, next) {
        qio_channel_detach_aio_context(client->ioc);
    }

    exp->ctx = NULL;
}

static void nbd_eject_notifier(Notifier *n, void *data)
{
    NBDExport *exp = container_of(n, NBDExport, eject_notifier);
    nbd_export_close(exp);
}

NBDExport *nbd_export_new(BlockDriverState *bs, off_t dev_offset, off_t size,
                          uint16_t nbdflags, void (*close)(NBDExport *),
                          bool writethrough, BlockBackend *on_eject_blk,
                          Error **errp)
{
    AioContext *ctx;
    BlockBackend *blk;
    NBDExport *exp = g_new0(NBDExport, 1);
    uint64_t perm;
    int ret;

    /*
     * NBD exports are used for non-shared storage migration.  Make sure
     * that BDRV_O_INACTIVE is cleared and the image is ready for write
     * access since the export could be available before migration handover.
     */
    ctx = bdrv_get_aio_context(bs);
    aio_context_acquire(ctx);
    bdrv_invalidate_cache(bs, NULL);
    aio_context_release(ctx);

    /* Don't allow resize while the NBD server is running, otherwise we don't
     * care what happens with the node. */
    perm = BLK_PERM_CONSISTENT_READ;
    if ((nbdflags & NBD_FLAG_READ_ONLY) == 0) {
        perm |= BLK_PERM_WRITE;
    }
    blk = blk_new(perm, BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
                        BLK_PERM_WRITE | BLK_PERM_GRAPH_MOD);
    ret = blk_insert_bs(blk, bs, errp);
    if (ret < 0) {
        goto fail;
    }
    blk_set_enable_write_cache(blk, !writethrough);

    exp->refcount = 1;
    QTAILQ_INIT(&exp->clients);
    exp->blk = blk;
    exp->dev_offset = dev_offset;
    exp->nbdflags = nbdflags;
    exp->size = size < 0 ? blk_getlength(blk) : size;
    if (exp->size < 0) {
        error_setg_errno(errp, -exp->size,
                         "Failed to determine the NBD export's length");
        goto fail;
    }
    exp->size -= exp->size % BDRV_SECTOR_SIZE;

    exp->close = close;
    exp->ctx = blk_get_aio_context(blk);
    blk_add_aio_context_notifier(blk, blk_aio_attached, blk_aio_detach, exp);

    if (on_eject_blk) {
        blk_ref(on_eject_blk);
        exp->eject_notifier_blk = on_eject_blk;
        exp->eject_notifier.notify = nbd_eject_notifier;
        blk_add_remove_bs_notifier(on_eject_blk, &exp->eject_notifier);
    }
    return exp;

fail:
    blk_unref(blk);
    g_free(exp);
    return NULL;
}

NBDExport *nbd_export_find(const char *name)
{
    NBDExport *exp;
    QTAILQ_FOREACH(exp, &exports, next) {
        if (strcmp(name, exp->name) == 0) {
            return exp;
        }
    }

    return NULL;
}

void nbd_export_set_name(NBDExport *exp, const char *name)
{
    if (exp->name == name) {
        return;
    }

    nbd_export_get(exp);
    if (exp->name != NULL) {
        g_free(exp->name);
        exp->name = NULL;
        QTAILQ_REMOVE(&exports, exp, next);
        nbd_export_put(exp);
    }
    if (name != NULL) {
        nbd_export_get(exp);
        exp->name = g_strdup(name);
        QTAILQ_INSERT_TAIL(&exports, exp, next);
    }
    nbd_export_put(exp);
}

void nbd_export_set_description(NBDExport *exp, const char *description)
{
    g_free(exp->description);
    exp->description = g_strdup(description);
}

void nbd_export_close(NBDExport *exp)
{
    NBDClient *client, *next;

    nbd_export_get(exp);
    QTAILQ_FOREACH_SAFE(client, &exp->clients, next, next) {
        client_close(client, true);
    }
    nbd_export_set_name(exp, NULL);
    nbd_export_set_description(exp, NULL);
    nbd_export_put(exp);
}

void nbd_export_get(NBDExport *exp)
{
    assert(exp->refcount > 0);
    exp->refcount++;
}

void nbd_export_put(NBDExport *exp)
{
    assert(exp->refcount > 0);
    if (exp->refcount == 1) {
        nbd_export_close(exp);
    }

    if (--exp->refcount == 0) {
        assert(exp->name == NULL);
        assert(exp->description == NULL);

        if (exp->close) {
            exp->close(exp);
        }

        if (exp->blk) {
            if (exp->eject_notifier_blk) {
                notifier_remove(&exp->eject_notifier);
                blk_unref(exp->eject_notifier_blk);
            }
            blk_remove_aio_context_notifier(exp->blk, blk_aio_attached,
                                            blk_aio_detach, exp);
            blk_unref(exp->blk);
            exp->blk = NULL;
        }

        g_free(exp);
    }
}

BlockBackend *nbd_export_get_blockdev(NBDExport *exp)
{
    return exp->blk;
}

void nbd_export_close_all(void)
{
    NBDExport *exp, *next;

    QTAILQ_FOREACH_SAFE(exp, &exports, next, next) {
        nbd_export_close(exp);
    }
}

static int coroutine_fn nbd_co_send_iov(NBDClient *client, struct iovec *iov,
                                        unsigned niov, Error **errp)
{
    int ret;

    g_assert(qemu_in_coroutine());
    qemu_co_mutex_lock(&client->send_lock);
    client->send_coroutine = qemu_coroutine_self();

    ret = qio_channel_writev_all(client->ioc, iov, niov, errp) < 0 ? -EIO : 0;

    client->send_coroutine = NULL;
    qemu_co_mutex_unlock(&client->send_lock);

    return ret;
}

static inline void set_be_simple_reply(NBDSimpleReply *reply, uint64_t error,
                                       uint64_t handle)
{
    stl_be_p(&reply->magic, NBD_SIMPLE_REPLY_MAGIC);
    stl_be_p(&reply->error, error);
    stq_be_p(&reply->handle, handle);
}

static int nbd_co_send_simple_reply(NBDClient *client,
                                    uint64_t handle,
                                    uint32_t error,
                                    void *data,
                                    size_t len,
                                    Error **errp)
{
    NBDSimpleReply reply;
    int nbd_err = system_errno_to_nbd_errno(error);
    struct iovec iov[] = {
        {.iov_base = &reply, .iov_len = sizeof(reply)},
        {.iov_base = data, .iov_len = len}
    };

    trace_nbd_co_send_simple_reply(handle, nbd_err, nbd_err_lookup(nbd_err),
                                   len);
    set_be_simple_reply(&reply, nbd_err, handle);

    return nbd_co_send_iov(client, iov, len ? 2 : 1, errp);
}

/* nbd_co_receive_request
 * Collect a client request.  Return 0 if request looks valid, -EIO to drop
 * connection right away, and any other negative value to report an error to
 * the client (although the caller may still need to disconnect after reporting
 * the error).
 */
static int nbd_co_receive_request(NBDRequestData *req, NBDRequest *request,
                                  Error **errp)
{
    NBDClient *client = req->client;

    g_assert(qemu_in_coroutine());
    assert(client->recv_coroutine == qemu_coroutine_self());
    if (nbd_receive_request(client->ioc, request, errp) < 0) {
        return -EIO;
    }

    trace_nbd_co_receive_request_decode_type(request->handle, request->type,
                                             nbd_cmd_lookup(request->type));

    if (request->type != NBD_CMD_WRITE) {
        /* No payload, we are ready to read the next request. */
        req->complete = true;
    }

    if (request->type == NBD_CMD_DISC) {
        /* Special case: we're going to disconnect without a reply,
         * whether or not flags, from, or len are bogus */
        return -EIO;
    }

    /* Check for sanity in the parameters, part 1.  Defer as many
     * checks as possible until after reading any NBD_CMD_WRITE
     * payload, so we can try and keep the connection alive. */
    if ((request->from + request->len) < request->from) {
        error_setg(errp,
                   "integer overflow detected, you're probably being attacked");
        return -EINVAL;
    }

    if (request->type == NBD_CMD_READ || request->type == NBD_CMD_WRITE) {
        if (request->len > NBD_MAX_BUFFER_SIZE) {
            error_setg(errp, "len (%" PRIu32 ") is larger than max len (%u)",
                       request->len, NBD_MAX_BUFFER_SIZE);
            return -EINVAL;
        }

        req->data = blk_try_blockalign(client->exp->blk, request->len);
        if (req->data == NULL) {
            error_setg(errp, "No memory");
            return -ENOMEM;
        }
    }
    if (request->type == NBD_CMD_WRITE) {
        if (nbd_read(client->ioc, req->data, request->len, errp) < 0) {
            error_prepend(errp, "reading from socket failed: ");
            return -EIO;
        }
        req->complete = true;

        trace_nbd_co_receive_request_payload_received(request->handle,
                                                      request->len);
    }

    /* Sanity checks, part 2. */
    if (request->from + request->len > client->exp->size) {
        error_setg(errp, "operation past EOF; From: %" PRIu64 ", Len: %" PRIu32
                   ", Size: %" PRIu64, request->from, request->len,
                   (uint64_t)client->exp->size);
        return request->type == NBD_CMD_WRITE ? -ENOSPC : -EINVAL;
    }
    if (request->flags & ~(NBD_CMD_FLAG_FUA | NBD_CMD_FLAG_NO_HOLE)) {
        error_setg(errp, "unsupported flags (got 0x%x)", request->flags);
        return -EINVAL;
    }
    if (request->type != NBD_CMD_WRITE_ZEROES &&
        (request->flags & NBD_CMD_FLAG_NO_HOLE)) {
        error_setg(errp, "unexpected flags (got 0x%x)", request->flags);
        return -EINVAL;
    }

    return 0;
}

/* Owns a reference to the NBDClient passed as opaque.  */
static coroutine_fn void nbd_trip(void *opaque)
{
    NBDClient *client = opaque;
    NBDExport *exp = client->exp;
    NBDRequestData *req;
    NBDRequest request = { 0 };    /* GCC thinks it can be used uninitialized */
    int ret;
    int flags;
    int reply_data_len = 0;
    Error *local_err = NULL;

    trace_nbd_trip();
    if (client->closing) {
        nbd_client_put(client);
        return;
    }

    req = nbd_request_get(client);
    ret = nbd_co_receive_request(req, &request, &local_err);
    client->recv_coroutine = NULL;
    nbd_client_receive_next_request(client);
    if (ret == -EIO) {
        goto disconnect;
    }

    if (ret < 0) {
        goto reply;
    }

    if (client->closing) {
        /*
         * The client may be closed when we are blocked in
         * nbd_co_receive_request()
         */
        goto done;
    }

    switch (request.type) {
    case NBD_CMD_READ:
        /* XXX: NBD Protocol only documents use of FUA with WRITE */
        if (request.flags & NBD_CMD_FLAG_FUA) {
            ret = blk_co_flush(exp->blk);
            if (ret < 0) {
                error_setg_errno(&local_err, -ret, "flush failed");
                break;
            }
        }

        ret = blk_pread(exp->blk, request.from + exp->dev_offset,
                        req->data, request.len);
        if (ret < 0) {
            error_setg_errno(&local_err, -ret, "reading from file failed");
            break;
        }

        reply_data_len = request.len;

        break;
    case NBD_CMD_WRITE:
        if (exp->nbdflags & NBD_FLAG_READ_ONLY) {
            ret = -EROFS;
            break;
        }

        flags = 0;
        if (request.flags & NBD_CMD_FLAG_FUA) {
            flags |= BDRV_REQ_FUA;
        }
        ret = blk_pwrite(exp->blk, request.from + exp->dev_offset,
                         req->data, request.len, flags);
        if (ret < 0) {
            error_setg_errno(&local_err, -ret, "writing to file failed");
        }

        break;
    case NBD_CMD_WRITE_ZEROES:
        if (exp->nbdflags & NBD_FLAG_READ_ONLY) {
            error_setg(&local_err, "Server is read-only, return error");
            ret = -EROFS;
            break;
        }

        flags = 0;
        if (request.flags & NBD_CMD_FLAG_FUA) {
            flags |= BDRV_REQ_FUA;
        }
        if (!(request.flags & NBD_CMD_FLAG_NO_HOLE)) {
            flags |= BDRV_REQ_MAY_UNMAP;
        }
        ret = blk_pwrite_zeroes(exp->blk, request.from + exp->dev_offset,
                                request.len, flags);
        if (ret < 0) {
            error_setg_errno(&local_err, -ret, "writing to file failed");
        }

        break;
    case NBD_CMD_DISC:
        /* unreachable, thanks to special case in nbd_co_receive_request() */
        abort();

    case NBD_CMD_FLUSH:
        ret = blk_co_flush(exp->blk);
        if (ret < 0) {
            error_setg_errno(&local_err, -ret, "flush failed");
        }

        break;
    case NBD_CMD_TRIM:
        ret = blk_co_pdiscard(exp->blk, request.from + exp->dev_offset,
                              request.len);
        if (ret < 0) {
            error_setg_errno(&local_err, -ret, "discard failed");
        }

        break;
    default:
        error_setg(&local_err, "invalid request type (%" PRIu32 ") received",
                   request.type);
        ret = -EINVAL;
    }

reply:
    if (local_err) {
        /* If we get here, local_err was not a fatal error, and should be sent
         * to the client. */
        error_report_err(local_err);
        local_err = NULL;
    }

    if (nbd_co_send_simple_reply(req->client, request.handle,
                                 ret < 0 ? -ret : 0,
                                 req->data, reply_data_len, &local_err) < 0)
    {
        error_prepend(&local_err, "Failed to send reply: ");
        goto disconnect;
    }

    /* We must disconnect after NBD_CMD_WRITE if we did not
     * read the payload.
     */
    if (!req->complete) {
        error_setg(&local_err, "Request handling failed in intermediate state");
        goto disconnect;
    }

done:
    nbd_request_put(req);
    nbd_client_put(client);
    return;

disconnect:
    if (local_err) {
        error_reportf_err(local_err, "Disconnect client, due to: ");
    }
    nbd_request_put(req);
    client_close(client, true);
    nbd_client_put(client);
}

static void nbd_client_receive_next_request(NBDClient *client)
{
    if (!client->recv_coroutine && client->nb_requests < MAX_NBD_REQUESTS) {
        nbd_client_get(client);
        client->recv_coroutine = qemu_coroutine_create(nbd_trip, client);
        aio_co_schedule(client->exp->ctx, client->recv_coroutine);
    }
}
|
|
|
|
|
2016-01-14 16:41:03 +08:00
|
|
|
static coroutine_fn void nbd_co_client_start(void *opaque)
|
|
|
|
{
|
2017-06-02 23:01:46 +08:00
|
|
|
NBDClient *client = opaque;
|
2016-01-14 16:41:03 +08:00
|
|
|
NBDExport *exp = client->exp;
|
2017-07-07 23:29:11 +08:00
|
|
|
Error *local_err = NULL;
|
2016-01-14 16:41:03 +08:00
|
|
|
|
|
|
|
if (exp) {
|
|
|
|
nbd_export_get(exp);
|
2017-05-27 11:04:21 +08:00
|
|
|
QTAILQ_INSERT_TAIL(&exp->clients, client, next);
|
2016-01-14 16:41:03 +08:00
|
|
|
}
|
2017-05-27 11:04:21 +08:00
|
|
|
qemu_co_mutex_init(&client->send_lock);
|
|
|
|
|
2017-07-07 23:29:11 +08:00
|
|
|
if (nbd_negotiate(client, &local_err)) {
|
|
|
|
if (local_err) {
|
|
|
|
error_report_err(local_err);
|
|
|
|
}
|
2017-06-09 06:26:17 +08:00
|
|
|
client_close(client, false);
|
2017-06-02 23:01:46 +08:00
|
|
|
return;
|
2016-01-14 16:41:03 +08:00
|
|
|
}
|
2017-02-13 21:52:24 +08:00
|
|
|
|
|
|
|
nbd_client_receive_next_request(client);
|
2016-01-14 16:41:03 +08:00
|
|
|
}
|
|
|
|
|
2017-06-09 06:26:17 +08:00
|
|
|
/*
|
|
|
|
* Create a new client listener on the given export @exp, using the
|
|
|
|
* given channel @sioc. Begin servicing it in a coroutine. When the
|
|
|
|
* connection closes, call @close_fn with an indication of whether the
|
|
|
|
* client completed negotiation.
|
|
|
|
*/
|
2016-02-11 02:41:04 +08:00
|
|
|
void nbd_client_new(NBDExport *exp,
|
|
|
|
QIOChannelSocket *sioc,
|
2016-02-11 02:41:11 +08:00
|
|
|
QCryptoTLSCreds *tlscreds,
|
|
|
|
const char *tlsaclname,
|
2017-06-09 06:26:17 +08:00
|
|
|
void (*close_fn)(NBDClient *, bool))
|
2011-09-19 20:03:37 +08:00
|
|
|
{
|
2011-09-19 20:33:23 +08:00
|
|
|
NBDClient *client;
|
2017-06-02 23:01:46 +08:00
|
|
|
Coroutine *co;
|
2016-01-14 16:41:03 +08:00
|
|
|
|
2017-10-07 07:49:16 +08:00
|
|
|
client = g_new0(NBDClient, 1);
|
2011-09-19 20:33:23 +08:00
|
|
|
client->refcount = 1;
|
|
|
|
client->exp = exp;
|
2016-02-11 02:41:11 +08:00
|
|
|
client->tlscreds = tlscreds;
|
|
|
|
if (tlscreds) {
|
|
|
|
object_ref(OBJECT(client->tlscreds));
|
|
|
|
}
|
|
|
|
client->tlsaclname = g_strdup(tlsaclname);
|
2016-02-11 02:41:04 +08:00
|
|
|
client->sioc = sioc;
|
|
|
|
object_ref(OBJECT(client->sioc));
|
|
|
|
client->ioc = QIO_CHANNEL(sioc);
|
|
|
|
object_ref(OBJECT(client->ioc));
|
2017-06-09 06:26:17 +08:00
|
|
|
client->close_fn = close_fn;
|
2012-09-18 19:26:25 +08:00
|
|
|
|
2017-06-02 23:01:46 +08:00
|
|
|
co = qemu_coroutine_create(nbd_co_client_start, client);
|
|
|
|
qemu_coroutine_enter(co);
|
2011-09-19 20:03:37 +08:00
|
|
|
}
|