- Dec 14, 2022
Markus Armbruster authored

The has_FOO for pointer-valued FOO are redundant, except for arrays. They are also a nuisance to work with. Recent commit "qapi: Start to elide redundant has_FOO in generated C" provided the means to elide them step by step. This is the step for qapi/block*.json. Said commit explains the transformation in more detail.

There is one instance of the invariant violation mentioned there: qcow2_signal_corruption() passes false, "" when node_name is an empty string. Take care to pass NULL then. The previous two commits cleaned up two more.

Additionally, helper bdrv_latency_histogram_stats() loses its output parameters and returns a value instead.

Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Hanna Reitz <hreitz@redhat.com>
Cc: qemu-block@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20221104160712.3005652-11-armbru@redhat.com>
[Fixes for #ifndef LIBRBD_SUPPORTS_ENCRYPTION and MacOS squashed in]

- May 12, 2022

Eric Blake authored

According to the NBD spec, a server that advertises NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will not see any cache inconsistencies: when properly separated by a single flush, actions performed by one client will be visible to another client, regardless of which client did the flush.

We always satisfy these conditions in qemu - even when we support multiple clients, ALL clients go through a single point of reference into the block layer, with no local caching. The effect of one client is instantly visible to the next client. Even if our backend were a network device, we argue that any multi-path caching effects that would cause inconsistencies in back-to-back actions not seeing the effect of previous actions would be a bug in that backend, and not the fault of caching in qemu. As such, it is safe to unconditionally advertise CAN_MULTI_CONN for any qemu NBD server situation that supports parallel clients.

Note, however, that we don't want to advertise CAN_MULTI_CONN when we know that a second client cannot connect (for historical reasons, qemu-nbd defaults to a single connection while nbd-server-add and QMP commands default to unlimited connections; but we already have existing means to let either style of NBD server creation alter those defaults). This is visible by no longer advertising MULTI_CONN for 'qemu-nbd -r' without -e, as in the iotest nbd-qemu-allocation.

The harder part of this patch is setting up an iotest to demonstrate behavior of multiple NBD clients to a single server. It might be possible with parallel qemu-io processes, but I found it easier to do in python with the help of libnbd, and help from Nir and Vladimir in writing the test.

Signed-off-by: Eric Blake <eblake@redhat.com>
Suggested-by: Nir Soffer <nsoffer@redhat.com>
Suggested-by: Vladimir Sementsov-Ogievskiy <v.sementsov-og@mail.ru>
Message-Id: <20220512004924.417153-3-eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

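A sketch of the user-visible effect (node name and socket path invented for illustration; max-connections is the existing nbd-server-start knob mentioned above): a QMP-started server defaults to unlimited connections, so the export below now advertises MULTI_CONN even though it is writable, while passing "max-connections": 1 instead would suppress the bit:

  { "execute": "nbd-server-start",
    "arguments": { "addr": { "type": "unix",
                             "data": { "path": "/tmp/nbd.sock" } } } }
  { "execute": "block-export-add",
    "arguments": { "type": "nbd", "id": "exp0",
                   "node-name": "disk0", "writable": true } }
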
Eric Blake authored

The next patch wants to adjust whether the NBD server code advertises MULTI_CONN based on whether it is known if the server limits to exactly one client. For a server started by QMP, this information is obtained through nbd_server_start (which can support more than one export); but for qemu-nbd (which supports exactly one export), it is controlled only by the command-line option -e/--shared. Since we already have a hook function used by qemu-nbd, it's easiest to just alter its signature to fit our needs.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20220512004924.417153-2-eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

- Apr 26, 2022

Vladimir Sementsov-Ogievskiy authored

The current logic of relying on a search through the backing chain is neither safe nor convenient. Sometimes it leads to unnecessary extra bitmap copying. Also, we are going to add a "snapshot-access" driver, to access some snapshot state through NBD. That driver is not formally a filter, and of course it is not a COW format driver, so searching through the backing chain will not work for it. Instead of widening the workaround of bitmap searching, let's extend the interface so that the user can select the bitmap precisely.

Note that the check for bitmap active status is not copied to the new API; I don't see a reason for it, and the user should understand the risks. And anyway, a bitmap from another node is unrelated to this export being read-only or read-write.

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
Message-Id: <20220314213226.362217-3-v.sementsov-og@mail.ru>
[eblake: Adjust S-o-b to Vladimir's new email, with permission]
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>

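A hypothetical use of the extended interface (all names invented): the export reads through a "snapshot-access" node while the bitmap is selected precisely by the node that owns it, with no backing-chain search involved:

  { "execute": "block-export-add",
    "arguments": { "type": "nbd",
                   "id": "exp0",
                   "node-name": "snap-access0",
                   "name": "backup",
                   "bitmaps": [ { "node": "disk0", "name": "bitmap0" } ] } }
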
- Mar 07, 2022

Daniel P. Berrangé authored

The TLS usage for NBD was restricted to IP sockets because validating x509 certificates requires knowledge of the hostname that the client is connecting to. TLS does not have to use x509 certificates, though, as PSK (pre-shared keys) provides an alternative credential option. These have no requirement for a hostname and can thus be trivially used for UNIX sockets. Furthermore, with the ability to override the default hostname for TLS validation in the previous patch, it is now also valid to want to use x509 certificates with FD passing and UNIX sockets.

Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Message-Id: <20220304193610.3293146-6-berrange@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>

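A sketch of what this enables (paths invented; assumes the pre-existing tls-creds-psk object type): PSK credentials need no hostname, so the NBD server can combine TLS with a UNIX socket:

  { "execute": "object-add",
    "arguments": { "qom-type": "tls-creds-psk",
                   "id": "tls0",
                   "endpoint": "server",
                   "dir": "/etc/qemu/psk" } }
  { "execute": "nbd-server-start",
    "arguments": { "addr": { "type": "unix",
                             "data": { "path": "/tmp/nbd.sock" } },
                   "tls-creds": "tls0" } }
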
- Jun 29, 2021

Philippe Mathieu-Daudé authored

Avoid accessing QCryptoTLSCreds internals by using the qcrypto_tls_creds_check_endpoint() helper.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

- Feb 12, 2021

Eric Blake authored

Our default of a backlog of 1 connection is rather puny; it gets in the way when we are explicitly allowing multiple clients (such as qemu-nbd -e N [--shared], or nbd-server-start with its default "max-connections":0 for unlimited), but is even a problem when we stick to qemu-nbd's default of only 1 active client but use -t [--persistent] where a second client can start using the server once the first finishes. While the effects are less noticeable on TCP sockets (since the client can poll() to learn when the server is ready again), it is definitely observable on Unix sockets, where on Linux, a client will fail with EAGAIN and no recourse but to sleep an arbitrary amount of time before retrying if the server backlog is already full.

Since QMP nbd-server-start is always persistent, it now always requests a backlog of SOMAXCONN; meanwhile, qemu-nbd will request SOMAXCONN if persistent, otherwise its backlog should be based on the expected number of clients.

See https://bugzilla.redhat.com/1925045 for a demonstration of where our low backlog prevents libnbd from connecting as many parallel clients as it wants.

Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org
Message-Id: <20210209152759.209074-2-eblake@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>

- Oct 30, 2020

Eric Blake authored

Since 'block-export-add' is new to 5.2, we can still tweak the interface; there, allowing 'bitmaps':['str'] is nicer than 'bitmap':'str'. This wires up the qapi and qemu-nbd changes to permit passing multiple bitmaps as distinct metadata contexts that the NBD client may request, but the actual support for more than one will require a further patch to the server.

Note that there are no changes made to the existing deprecated 'nbd-server-add' command; this required splitting the QAPI type BlockExportOptionsNbd, which fortunately does not affect QMP introspection.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20201027050556.269064-5-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>

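A sketch of the tweaked interface (names invented): each listed bitmap is meant to become a separate dirty-bitmap metadata context that an NBD client can request:

  { "execute": "block-export-add",
    "arguments": { "type": "nbd",
                   "id": "exp0",
                   "node-name": "disk0",
                   "name": "drive0",
                   "bitmaps": [ "bitmap0", "bitmap1" ] } }
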
Eric Blake authored

Rather than open-coding the translation from the deprecated NbdServerAddOptions type to the preferred BlockExportOptionsNbd, it's better to utilize QAPI_CLONE_MEMBERS. This solves a couple of issues: first, if we do any more refactoring of the base type (which an upcoming patch plans to do), we don't have to revisit the open-coding. Second, our assignment to arg->name is fishy: the generated QAPI code for qapi_free_NbdServerAddOptions does not visit arg->name if arg->has_name is false, but if it DID visit it, we would have introduced a double-free situation when arg is finally freed.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20201027050556.269064-4-eblake@redhat.com>

- Oct 02, 2020

Kevin Wolf authored

There is no real reason any more why nbd_export_new() and nbd_export_create() should be separate functions. The latter only performs a few checks before it calls the former. What makes the current state stand out is that it's the only function in BlockExportDriver that is not a static function inside nbd/server.c, but a small wrapper in blockdev-nbd.c that then calls back into nbd/server.c for the real functionality. Move all the checks to nbd/server.c and make the resulting function static to improve readability.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-27-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

The 'writable' option is a basic option that will probably be applicable to most if not all export types that we will implement. Move it from NBD to the generic BlockExport layer.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-26-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

Every export type will need a BlockBackend, so creating it centrally in blk_exp_add() instead of the .create driver callback avoids duplication.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-24-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

Implement a new QMP command block-export-del and make nbd-server-remove a wrapper around it.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-21-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

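Minimal usage sketch (export id invented); the command takes the block export id rather than the NBD export name, and an optional 'mode' argument distinguishes a graceful removal from a forced one:

  { "execute": "block-export-del",
    "arguments": { "id": "exp0" } }
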
Kevin Wolf authored

The reference owned by the user/monitor that is created when adding the export and dropped when removing it was tied to the 'exports' list in nbd/server.c. Every block export will have a user reference, so move it to the block export level and tie it to the 'block_exports' list in block/export/export.c instead. This is necessary for introducing a QMP command for removing exports.

Note that exports are present in block_exports even after the user has requested shutdown. This is different from NBD's exports, where exports are immediately removed on a shutdown request, even if they are still in the process of shutting down. In order to avoid that the user still interacts with an export that is shutting down (and possibly removes it a second time), we need to remember if the user actually still owns it.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-20-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

We'll need an id to identify block exports in monitor commands. This adds one. Note that this is different from the 'name' option in the NBD server, which is the externally visible export name. While block export ids need to be unique in the whole process, export names must be unique only for the same server. Different export types or (potentially in the future) multiple NBD servers can have the same export name externally, but still need different block export ids internally.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-19-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

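To illustrate the distinction (all names invented): 'id' is the process-wide handle that monitor commands operate on, while 'name' is only what NBD clients see:

  { "execute": "block-export-add",
    "arguments": { "type": "nbd",
                   "id": "export-internal-0",
                   "node-name": "disk0",
                   "name": "data" } }
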
Kevin Wolf authored

This adds a function to shut down all block exports, and another one to shut down the block exports of a single type. The latter is used for now when stopping the NBD server. As soon as we implement support for multiple NBD servers, we'll need a per-server list of exports and it will be replaced by a function using that.

As a side effect, the BlockExport layer has a list tracking all existing exports now. closed_exports loses its only user and can go away.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-18-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

Instead of letting the driver allocate and return the BlockExport object, allocate it already in blk_exp_add() and pass it. This allows us to initialise the generic part before calling into the driver so that the driver can just use these values instead of having to parse the options a second time. For symmetry, move freeing the BlockExport to blk_exp_unref().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-17-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

Every block export needs a block node to export, so add a 'node-name' option to BlockExportOptions and remove the replaced option 'device' from BlockExportOptionsNbd. To maintain compatibility in nbd-server-add, BlockExportOptionsNbd needs to be wrapped by a new type NbdServerAddOptions that adds 'device' back, because nbd-server-add doesn't use the BlockExportOptions base type at all (so even without changing it to a 'node-name' option in block-export-add, this compatibility code would be necessary).

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-16-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

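A sketch of the resulting split (names invented): block-export-add addresses the node directly through the new base-type option, while the legacy command keeps accepting a BlockBackend name through the NbdServerAddOptions wrapper:

  { "execute": "block-export-add",
    "arguments": { "type": "nbd", "id": "exp0",
                   "node-name": "disk0-node" } }
  { "execute": "nbd-server-add",
    "arguments": { "device": "disk0" } }
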
Kevin Wolf authored

Having a refcount makes sense for all types of block exports. It is also a prerequisite for keeping a list of all exports at the BlockExport level.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-14-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

With this change, NBD exports are now only created through the BlockExport interface. This allows us finally to move things from the NBD layer to the BlockExport layer if they make sense for other export types, too. blk_exp_add() returns only a weak reference, so the explicit nbd_export_put() goes away.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-12-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

The export close callback is unused by the built-in NBD server. qemu-nbd uses it only during shutdown to wait for the unrefed export to actually go away. It can just use nbd_export_close_all() instead and do without the callback. This removes the close callback from nbd_export_new() and makes both callers of it more similar.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-11-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

qemu-nbd allows use of writethrough cache modes, which mean that write requests made through NBD will cause a flush before they complete. Expose the same functionality in block-export-add.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-10-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

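A usage sketch (names invented), assuming the option keeps the qemu-nbd semantics described above of flushing each completed write:

  { "execute": "block-export-add",
    "arguments": { "type": "nbd",
                   "id": "exp0",
                   "node-name": "disk0",
                   "writable": true,
                   "writethrough": true } }
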
Kevin Wolf authored

This is a QMP equivalent of qemu-nbd's --shared option, limiting the maximum number of clients that can attach at the same time.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-9-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

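A sketch of the new parameter (socket path invented), the QMP counterpart of qemu-nbd --shared=5:

  { "execute": "nbd-server-start",
    "arguments": { "addr": { "type": "unix",
                             "data": { "path": "/tmp/nbd.sock" } },
                   "max-connections": 5 } }
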
Kevin Wolf authored

nbd-server-add tries to be convenient and adds two questionable features that we don't want to share in block-export-add, even for NBD exports:

1. When requesting a writable export of a read-only device, the export is silently downgraded to read-only. This should be an error in the context of block-export-add.

2. When using a BlockBackend name, unplugging the device from the guest will automatically stop the NBD server, too. This may sometimes be what you want, but it could also be very surprising. Let's keep things explicit with block-export-add. If the user wants to stop the export, they should tell us so.

Move these things into the nbd-server-add QMP command handler so that they apply only there. See the contrast sketched below.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-8-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

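Concretely (names invented): against a read-only node, the first command silently downgrades the export to read-only, while the second now reports an error instead:

  { "execute": "nbd-server-add",
    "arguments": { "device": "cd0", "writable": true } }
  { "execute": "block-export-add",
    "arguments": { "type": "nbd", "id": "exp0",
                   "node-name": "cd0-node", "writable": true } }
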
Kevin Wolf authored

Instead of implementing qemu-nbd --offset in the NBD code, just put a raw block node with the requested offset on top of the user image and rely on that doing the job. Not only does this simplify the nbd_export_new() interface and bring it closer to the set of options that the nbd-server-add QMP command offers, it also eliminates a potential source of bugs in the NBD code, which previously had to add the offset manually in all relevant places.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-7-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

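The same stacking is expressible over QMP (node names, path, and numbers invented; assumes the raw driver's offset/size runtime options): a raw node exposes a window into the file, and the export simply points at that node:

  { "execute": "blockdev-add",
    "arguments": { "driver": "file", "node-name": "file0",
                   "filename": "/var/images/disk.img" } }
  { "execute": "blockdev-add",
    "arguments": { "driver": "raw", "node-name": "window0",
                   "file": "file0",
                   "offset": 1048576, "size": 10485760 } }
  { "execute": "block-export-add",
    "arguments": { "type": "nbd", "id": "exp0",
                   "node-name": "window0" } }
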
Kevin Wolf authored

We want to have a common set of commands for all types of block exports. Currently, this is only NBD, but we're going to add more types. This patch adds the basic BlockExport and BlockExportDriver structs and a QMP command block-export-add that creates a new export based on the given BlockExportOptions. qmp_nbd_server_add() becomes a wrapper around qmp_block_export_add().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20200924152717.287415-5-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

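A sketch of the command in the shape it reaches by the end of this series (names invented; the id, node-name, and writable options shown here are added by later patches in the log above): a 'type' discriminator selects the export driver so that further types can slot in:

  { "execute": "block-export-add",
    "arguments": { "type": "nbd",
                   "id": "exp0",
                   "node-name": "disk0",
                   "name": "drive0",
                   "writable": false } }
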
Kevin Wolf authored

The name BlockExport will be used for the struct containing the runtime state of block exports, so change the name of export creation options.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-4-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

Move all block export related types and commands from block-core to the new QAPI module block-export.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20200924152717.287415-3-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

- Mar 06, 2020

Kevin Wolf authored

Move the arguments of nbd-server-add to a new struct BlockExportNbd and convert the command to 'boxed': true. This makes it easier to share code with the storage daemon.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200224143008.13362-11-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Kevin Wolf authored

Add a --nbd-server option to qemu-storage-daemon to start the built-in NBD server right away. It maps the arguments for nbd-server-start to the command line, with the exception that it uses SocketAddress instead of SocketAddressLegacy: new interfaces shouldn't use legacy types, and the additional nesting would be nasty on the command line.

Example (only with required options):

  --nbd-server addr.type=inet,addr.host=localhost,addr.port=10809

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200224143008.13362-10-kwolf@redhat.com>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

- Feb 05, 2020

Eric Blake authored

Allow blockdevs to match the feature already present in qemu-nbd -D. Enhance iotest 223 to cover it.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20191114024635.11363-5-eblake@redhat.com>

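That is, the QMP command gains the free-form description string that qemu-nbd -D already offered, which NBD clients can see when listing exports; a sketch with invented names:

  { "execute": "nbd-server-add",
    "arguments": { "device": "disk0",
                   "name": "drive0",
                   "description": "scratch disk, safe to wipe" } }
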
- Nov 18, 2019

Eric Blake authored

Qemu as server currently won't accept export names larger than 256 bytes, nor create dirty bitmap names longer than 1023 bytes, so most uses of qemu as client or server have no reason to get anywhere near the NBD spec maximum of a 4k limit per string. However, we weren't actually enforcing things, ignoring when the remote side violates the protocol on input, and also having several code paths where we send oversize strings on output (for example, qemu-nbd --description could easily send more than 4k). Tighten things up as follows:

client:
- Perform bounds check on export name and dirty bitmap request prior to handing it to server
- Validate that copied server replies are not too long (ignoring NBD_INFO_* replies that are not copied is not too bad)

server:
- Perform bounds check on export name and description prior to advertising it to client
- Reject client name or metadata query that is too long
- Adjust things to allow full 4k name limit rather than previous 256 byte limit

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20191114024635.11363-4-eblake@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

- Sep 24, 2019

Eric Blake authored

When iothreads are in use, the failure to grab the aio context results in an assertion failure when trying to unlock things during blk_unref, when trying to unlock a mutex that was not locked. In short, all calls to nbd_export_put need to be done while within the correct aio context. But since nbd_export_put can recursively reach itself via nbd_export_close, and recursively grabbing the context would deadlock, we can't do the context grab directly in those functions, but must do so in their callers.

Hoist the use of the correct aio_context from nbd_export_new() to its caller qmp_nbd_server_add(). Then tweak qmp_nbd_server_remove(), nbd_eject_notifier(), and nbd_export_close_all() to grab the right context, so that all callers during qemu now own the context before nbd_export_put() can call blk_unref(). Remaining uses in qemu-nbd don't matter (since that use case does not support iothreads).

Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190917023917.32226-1-eblake@redhat.com>
Reviewed-by: Sergio Lopez <slp@redhat.com>

- Sep 05, 2019

Eric Blake authored

When creating a read-only image, we are still advertising support for TRIM and WRITE_ZEROES to the client, even though the client should not be issuing those commands. But seeing this requires looking across multiple functions: all callers to nbd_export_new() passed a single flag based solely on whether the export allows writes. Later, we then pass a constant set of flags to nbd_negotiate_options() (namely, the set of flags which we always support, at least for writable images), which is then further dynamically modified with NBD_FLAG_SEND_DF based on client requests for structured options. Finally, when processing NBD_OPT_EXPORT_NAME or NBD_OPT_GO we bitwise-or the original caller's flag with the runtime set of flags we've built up over several functions.

Let's refactor things to instead compute a baseline of flags as soon as possible, which gets shared between multiple clients, in nbd_export_new(), and change the signature for the callers to pass in a simpler bool rather than having to figure out flags. We can then get rid of the 'myflags' parameter to various functions, and instead refer to client for everything we need (we still have to perform a bitwise-OR for NBD_FLAG_SEND_DF during NBD_OPT_EXPORT_NAME and NBD_OPT_GO, but it's easier to see what is being computed).

This lets us quit advertising senseless flags for read-only images, as well as making the next patch for exposing FAST_ZERO support easier to write.

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190823143726.27062-2-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
[eblake: improve commit message, update iotest 223]

Eric Blake authored

The NBD specification defines NBD_FLAG_CAN_MULTI_CONN, which can be advertised when the server promises cache consistency between simultaneous clients (basically, rules that determine what FUA and flush from one client are able to guarantee for reads from another client). When we don't permit simultaneous clients (such as qemu-nbd without -e), the bit makes no sense; and for writable images, we probably have a lot more work before we can declare that actions from one client are cache-consistent with actions from another. But for read-only images, where flush isn't changing any data, we might as well advertise multi-conn support. What's more, advertisement of the bit makes it easier for clients to determine if 'qemu-nbd -e' was in use, where a second connection will succeed rather than hang until the first client goes away.

This patch affects qemu as server in advertising the bit. We may want to consider patches to qemu as client to attempt parallel connections for higher throughput by spreading the load over those connections when a server advertises multi-conn, but for now sticking to one connection per nbd:// BDS is okay.

See also: https://bugzilla.redhat.com/1708300

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190815185024.7010-1-eblake@redhat.com>
[eblake: tweak blockdev-nbd.c to not request shared when writable, fix iotest 233]
Reviewed-by: John Snow <jsnow@redhat.com>

- Sep 03, 2019

Juan Quintela authored

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>

- Aug 16, 2019

Markus Armbruster authored

In my "build everything" tree, changing sysemu/sysemu.h triggers a recompile of some 5400 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). Almost a third of its inclusions are actually superfluous. Delete them. Downgrade two more to qapi/qapi-types-run-state.h, and move one from char/serial.h to char/serial.c.

hw/semihosting/config.c, monitor/monitor.c, qdev-monitor.c, and stubs/semihost.c define variables declared in sysemu/sysemu.h without including it. The compiler is cool with that, but include it anyway.

This doesn't reduce actual use much, as it's still included into widely included headers. The next commit will tackle that.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Message-Id: <20190812052359.30071-27-armbru@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

- Mar 06, 2019

Daniel P. Berrangé authored

As with the previous patch to qemu-nbd, the nbd-server-start QMP command also needs to be able to specify authorization when enabling TLS encryption.

First the client must create a QAuthZ object instance using the 'object-add' command:

  { 'execute': 'object-add',
    'arguments': { 'qom-type': 'authz-list',
                   'id': 'authz0',
                   'parameters': { 'policy': 'deny',
                                   'rules': [ { 'match': '*CN=fred',
                                                'policy': 'allow' } ] } } }

They can then reference this in the new 'tls-authz' parameter when executing the 'nbd-server-start' command:

  { 'execute': 'nbd-server-start',
    'arguments': { 'addr': { 'type': 'inet',
                             'host': '127.0.0.1',
                             'port': '9000' },
                   'tls-creds': 'tls0',
                   'tls-authz': 'authz0' } }

Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20190227162035.18543-3-berrange@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>

- Jan 21, 2019

Eric Blake authored

We only had two callers to nbd_export_new; qemu-nbd.c always passed a valid offset/length pair (because it already checked the file length, to ensure that offset was in bounds), while blockdev-nbd.c always passed 0/-1. Then nbd_export_new reduces the size to a multiple of BDRV_SECTOR_SIZE (which can only happen when offset is not sector-aligned, since bdrv_getlength() currently rounds up; someday, it would be nice to have byte-accurate lengths - but not today).

However, I'm finding it easier to work with the code if we are consistent on having both callers pass in a valid length, and just assert that things are sane in nbd_export_new, meaning that no negative values were passed, and that offset+size does not exceed 63 bits (as that really is a fundamental limit to later operations, whether we use off_t or uint64_t).

Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20190117193658.16413-6-eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

- Jan 14, 2019

Eric Blake authored

We only have one caller that wants to export a bitmap name, which it does right after creation of the export. But there is still a brief window of time where an NBD client could see the export but not the dirty bitmap, which a robust client would have to interpret as meaning the entire image should be treated as dirty. Better is to eliminate the window entirely, by inlining nbd_export_bitmap() into nbd_export_new(), and refusing to create the export in the first place if the requested bitmap can't be located.

We also no longer need logic for setting a different bitmap name compared to the bitmap being exported.

Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20190111194720.15671-8-eblake@redhat.com>