  1. Sep 25, 2023
    • nbd/server: Refactor handling of command sanity checks · 8db7e2d6
      Eric Blake authored
      
      Upcoming additions to support NBD 64-bit effect lengths will add a new
      command flag NBD_CMD_FLAG_PAYLOAD_LEN that needs to be considered in
      our sanity checks of the client's messages (that is, more commands
      than just CMD_WRITE can carry a client payload when extended
      headers are in effect).  But before we can start to support that, it
      is easier to first refactor the existing scattering of if statements
      over open-coded combinations of request->type into a single switch
      statement over all command types that sets witness variables, followed
      by straight-line processing based on those witnesses.  No semantic
      change is intended.
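      As an illustration only (the witness names and checks below are made
      up for this description, not the actual code in nbd/server.c), the
      refactored shape looks roughly like:

        bool payload_ok = false;                  /* may carry a payload */
        uint16_t valid_flags = NBD_CMD_FLAG_FUA;  /* flags accepted here */

        switch (request->type) {
        case NBD_CMD_WRITE:
            payload_ok = true;
            break;
        case NBD_CMD_READ:
            valid_flags |= NBD_CMD_FLAG_DF;
            break;
        default:
            break;
        }

        /* straight-line checks driven by the witnesses set above */
        if (request->flags & ~valid_flags) {
            return -EINVAL;
        }
        if (payload_ok && request->len > NBD_MAX_BUFFER_SIZE) {
            return -EINVAL;
        }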
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-ID: <20230829175826.377251-24-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      8db7e2d6
    • nbd: Prepare for 64-bit request effect lengths · b2578459
      Eric Blake authored
      
      Widen the length field of NBDRequest to 64-bits, although we can
      assert that all current uses are still under 32 bits: either because
      of NBD_MAX_BUFFER_SIZE which is even smaller (and where size_t can
      still be appropriate, even on 32-bit platforms), or because nothing
      ever puts us into NBD_MODE_EXTENDED yet (and while future patches will
      allow larger transactions, the lengths in play here are still capped
      at 32-bit).  There are no semantic changes, other than a typo fix in a
      couple of error messages.
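      For illustration, the shape of the change is roughly (field layout
      abbreviated; the authoritative definition is in include/block/nbd.h):

        typedef struct NBDRequest {
            uint64_t cookie;
            uint64_t from;   /* request offset */
            uint64_t len;    /* effect length, widened from uint32_t */
            uint16_t flags;  /* NBD_CMD_FLAG_* */
            uint16_t type;   /* NBD_CMD_* */
        } NBDRequest;

        /* current callers can still assert the 32-bit cap, e.g.: */
        assert(request->len <= NBD_MAX_BUFFER_SIZE);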
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-ID: <20230829175826.377251-23-eblake@redhat.com>
      [eblake: fix assertion bug in nbd_co_send_simple_reply]
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      b2578459
  2. Sep 22, 2023
    • nbd: Add types for extended headers · d95ffb6f
      Eric Blake authored
      
      Add the constants and structs necessary for later patches to start
      implementing the NBD_OPT_EXTENDED_HEADERS extension in both the client
      and server, matching recent upstream nbd.git (through commit
      e6f3b94a934).  This patch does not change any existing behavior, but
      merely sets the stage for upcoming patches.
      
      This patch does not change the status quo that neither the client nor
      server use a packed-struct representation for the request header.
      While most of the patch adds new types, there is also some churn for
      renaming the existing NBDExtent to NBDExtent32 to contrast it with
      NBDExtent64, which I thought was a nicer name than NBDExtentExt.
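      Roughly, the renamed and new extent types look like this (a sketch;
      see include/block/nbd.h for the authoritative definitions):

        typedef struct NBDExtent32 {   /* formerly NBDExtent */
            uint32_t length;
            uint32_t flags;            /* NBD_STATE_* */
        } QEMU_PACKED NBDExtent32;

        typedef struct NBDExtent64 {   /* new, for extended headers */
            uint64_t length;
            uint64_t flags;            /* NBD_STATE_* */
        } QEMU_PACKED NBDExtent64;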
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      Message-ID: <20230829175826.377251-22-eblake@redhat.com>
      d95ffb6f
    • nbd/client: Pass mode through to nbd_send_request · 297365b4
      Eric Blake authored
      
      Once the 64-bit headers extension is enabled, the data layout we send
      over the wire for a client request depends on the mode negotiated with
      the server.  Rather than adding a parameter to nbd_send_request, we
      can add a member to struct NBDRequest, since it already does not
      reflect on-wire format.  Some callers initialize it directly; many
      others rely on a common initialization point during
      nbd_co_send_request().  At this point, there is no semantic change.
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      Message-ID: <20230829175826.377251-21-eblake@redhat.com>
      297365b4
    • nbd: Replace bool structured_reply with mode enum · ac132d05
      Eric Blake authored
      
      The upcoming patches for 64-bit extensions require various points in
      the protocol to make decisions based on what was negotiated.  While we
      could easily add a 'bool extended_headers' alongside the existing
      'bool structured_reply', this does not scale well if more modes are
      added in the future.  Better is to expose the mode enum added in the
      recent commit bfe04d0a for wider use in the code base.
      
      Where the code previously checked for structured_reply being set or
      clear, it now prefers checking for an inequality; this works because
      the modes form a continuum of increasing abilities, and allows us to
      touch fewer places if we ever insert other modes in the middle of the
      enum.  There should be no semantic change in this patch.
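      To illustrate the idiom (a sketch, not the literal diff):

        /* before: if (client->structured_reply) ... */
        /* after: anything at least as capable as structured qualifies */
        if (client->mode >= NBD_MODE_STRUCTURED) {
            /* send chunked replies */
        }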
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-ID: <20230829175826.377251-20-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      ac132d05
  3. Sep 08, 2023
    • misc/other: spelling fixes · 0a19d879
      Michael Tokarev authored
      
      Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      0a19d879
    • io: follow coroutine AioContext in qio_channel_yield() · 06e0f098
      Stefan Hajnoczi authored
      
      The ongoing QEMU multi-queue block layer effort makes it possible for multiple
      threads to process I/O in parallel. The nbd block driver is not compatible with
      the multi-queue block layer yet because QIOChannel cannot be used easily from
      coroutines running in multiple threads. This series changes the QIOChannel API
      to make that possible.
      
      In the current API, calling qio_channel_attach_aio_context() sets the
      AioContext where qio_channel_yield() installs an fd handler prior to yielding:
      
        qio_channel_attach_aio_context(ioc, my_ctx);
        ...
        qio_channel_yield(ioc); // my_ctx is used here
        ...
        qio_channel_detach_aio_context(ioc);
      
      This API design has limitations: reading and writing must be done in the same
      AioContext and moving between AioContexts involves a cumbersome sequence of API
      calls that is not suitable for doing on a per-request basis.
      
      There is no fundamental reason why a QIOChannel needs to run within the
      same AioContext every time qio_channel_yield() is called. QIOChannel
      only uses the AioContext while inside qio_channel_yield(). The rest of
      the time, QIOChannel is independent of any AioContext.
      
      In the new API, qio_channel_yield() queries the AioContext from the current
      coroutine using qemu_coroutine_get_aio_context(). There is no need to
      explicitly attach/detach AioContexts anymore and
      qio_channel_attach_aio_context() and qio_channel_detach_aio_context() are gone.
      One coroutine can read from the QIOChannel while another coroutine writes from
      a different AioContext.
      
      This API change allows the nbd block driver to use QIOChannel from any thread.
      It's important to keep in mind that the block driver already synchronizes
      QIOChannel access and ensures that two coroutines never read simultaneously or
      write simultaneously.
      
      This patch updates all users of qio_channel_attach_aio_context() to the
      new API. Most conversions are simple, but vhost-user-server requires a
      new qemu_coroutine_yield() call to quiesce the vu_client_trip()
      coroutine when not attached to any AioContext.
      
      While the API has become simpler, there is one wart: QIOChannel has a
      special case for the iohandler AioContext (used for handlers that must not run
      in nested event loops). I didn't find an elegant way to preserve that behavior, so
      I added a new API called qio_channel_set_follow_coroutine_ctx(ioc, true|false)
      for opting in to the new AioContext model. By default QIOChannel uses the
      iohandler AioContext. Code that formerly called
      qio_channel_attach_aio_context() now calls
      qio_channel_set_follow_coroutine_ctx(ioc, true) once after the QIOChannel is
      created.
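      A sketch of the resulting usage, assuming ioc is an already-created
      QIOChannel and error handling is omitted:

        /* opt in once, right after the channel is created */
        qio_channel_set_follow_coroutine_ctx(ioc, true);

        /* later, from a coroutine in any thread: the fd handler is
         * installed in that coroutine's own AioContext */
        qio_channel_yield(ioc, G_IO_IN);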
      
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Acked-by: Daniel P. Berrangé <berrange@redhat.com>
      Message-ID: <20230830224802.493686-5-stefanha@redhat.com>
      [eblake: also fix migration/rdma.c]
      Signed-off-by: Eric Blake <eblake@redhat.com>
      06e0f098
    • nbd: drop unused nbd_start_negotiate() aio_context argument · 078c8ada
      Stefan Hajnoczi authored
      
      aio_context is always NULL, so drop it.
      
      Suggested-by: Fabiano Rosas <farosas@suse.de>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Message-ID: <20230830224802.493686-3-stefanha@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
      078c8ada
    • nbd: drop unused nbd_receive_negotiate() aio_context argument · b84ca91c
      Stefan Hajnoczi authored
      
      aio_context is always NULL, so drop it.
      
      Suggested-by: Fabiano Rosas <farosas@suse.de>
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
      Message-ID: <20230830224802.493686-2-stefanha@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
      b84ca91c
  4. Jul 19, 2023
    • nbd: Use enum for various negotiation modes · bfe04d0a
      Eric Blake authored
      
      Deciphering the hard-coded list of integer return values from
      nbd_start_negotiate() will only get more confusing when adding support
      for 64-bit extended headers.  Better is to name things in an enum.
      Although the function in question is private to client.c, putting the
      enum in a public header and including an enum-to-string conversion
      will allow its use in more places in upcoming patches.
      
      The enum is intentionally laid out so that operators like <= can be
      used to group multiple modes with similar characteristics, and where
      the least powerful mode has value 0, even though this patch does not
      exploit that.  No semantic change intended.
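      The enum ends up approximately like this (a sketch; the exact names
      live in include/block/nbd.h):

        typedef enum NBDMode {
            NBD_MODE_OLDSTYLE,     /* oldstyle negotiation */
            NBD_MODE_EXPORT_NAME,  /* newstyle, but only NBD_OPT_EXPORT_NAME */
            NBD_MODE_SIMPLE,       /* newstyle, simple replies only */
            NBD_MODE_STRUCTURED,   /* newstyle with structured replies */
            NBD_MODE_EXTENDED,     /* newstyle with 64-bit extended headers */
        } NBDMode;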
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-ID: <20230608135653.2918540-9-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      bfe04d0a
    • nbd/client: Add safety check on chunk payload length · 70fa99f4
      Eric Blake authored
      
      Our existing use of structured replies either reads into a qiov capped
      at 32M (NBD_CMD_READ) or caps allocation to 1000 bytes (see
      NBD_MAX_MALLOC_PAYLOAD in block/nbd.c).  But the existing length
      checks are rather late; if we encounter a buggy (or malicious) server
      that sends a super-large payload length, we should drop the connection
      right then rather than assuming the layer on top will be careful.
      This becomes more important when we permit 64-bit lengths, which have
      even more potential for attempted denial-of-service abuse.
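      A minimal sketch of the kind of early check intended (the bound and
      names are illustrative, not the actual code in block/nbd.c):

        if (chunk->length > NBD_MAX_BUFFER_SIZE) {
            error_setg(errp, "server sent chunk payload of %" PRIu32
                       " bytes, exceeding our limit", chunk->length);
            return -EINVAL;
        }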
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      Message-ID: <20230608135653.2918540-8-eblake@redhat.com>
      70fa99f4
    • nbd: s/handle/cookie/ to match NBD spec · 22efd811
      Eric Blake authored
      Externally, libnbd exposed the 64-bit opaque marker for each client
      NBD packet as the "cookie", because it was less confusing when
      contrasted with 'struct nbd_handle *' holding all libnbd state.  It
      also avoids confusion between the noun 'handle' as a way to identify a
      packet and the verb 'handle' for reacting to things like signals.
      Upstream NBD changed their spec to favor the name "cookie" based on
      libnbd's recommendations[1], so we can do likewise.
      
      [1] https://github.com/NetworkBlockDevice/nbd/commit/ca4392eb2b
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-ID: <20230608135653.2918540-6-eblake@redhat.com>
      [eblake: typo fix]
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      22efd811
    • nbd/server: Refactor to pass full request around · 66d4f4fe
      Eric Blake authored
      
      Part of NBD's 64-bit headers extension involves passing the client's
      requested offset back as part of the reply header (one reason stated
      for this change: converting absolute offsets stored in
      NBD_REPLY_TYPE_OFFSET_DATA to relative offsets within the buffer is
      easier if the absolute offset of the buffer is also available).  This
      is a refactoring patch to pass the full request around the reply
      stack, rather than just the handle, so that later patches can then
      access request->from when extended headers are active.  Meanwhile,
      this patch enables us to now assert that simple replies are only
      attempted when appropriate, and otherwise has no semantic change.
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      Message-ID: <20230608135653.2918540-5-eblake@redhat.com>
      66d4f4fe
    • nbd/server: Prepare for alternate-size headers · a7c8ed36
      Eric Blake authored
      Upstream NBD now documents[1] an extension that supports 64-bit effect
      lengths in requests.  As part of that extension, the size of the reply
      headers will change in order to permit a 64-bit length in the reply
      for symmetry[2].  Additionally, while the reply header is currently 16
      bytes for a simple reply and 20 bytes for a structured reply, with the
      extension enabled there will be only one extended reply header, of 32
      bytes, with both structured and extended modes sending identical
      payloads for chunked replies.
      
      Since we are already wired up to use iovecs, it is easiest to allow
      for this change in header size by splitting each structured reply
      across multiple iovecs, one for the header (which will become wider in
      a future patch according to client negotiation), and the other(s) for
      the chunk payload, and removing the header from the payload struct
      definitions.  Rename the affected functions with s/structured/chunk/
      to make it obvious that the code will be reused in extended mode.
      
      Interestingly, the client side code never utilized the packed types,
      so only the server code needs to be updated.
      
      [1] https://github.com/NetworkBlockDevice/nbd/blob/extension-ext-header/doc/proto.md
          as of NBD commit e6f3b94a934
      
      [2] Note that on the surface, this is because some future server might
      permit a 4G+ NBD_CMD_READ and need to reply with that much data in one
      transaction.  But even though the extended reply length is widened to
      64 bits, for now the NBD spec is clear that servers will not reply
      with more than a maximum payload bounded by the 32-bit
      NBD_INFO_BLOCK_SIZE field; allowing a client and server to mutually
      agree to transactions larger than 4G would require yet another
      extension.
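      A sketch of the iovec split (hdr, payload and the helper name are
      stand-ins, not the exact server code):

        /* iov[0]: reply header, whose width will depend on the negotiated
         * mode; iov[1]: the chunk payload, no longer carrying a header
         * baked into the payload struct */
        struct iovec iov[] = {
            {.iov_base = &hdr, .iov_len = sizeof(hdr)},
            {.iov_base = payload, .iov_len = payload_len},
        };

        return nbd_co_send_iov(client, iov, 2, errp);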
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-ID: <20230608135653.2918540-4-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      a7c8ed36
    • nbd/client: Use smarter assert · f47b6eab
      Eric Blake authored
      
      Assigning strlen() to a uint32_t and then asserting that it isn't too
      large doesn't catch the case of an input string 4G in length.
      Thankfully, the incoming strings can never be that large: if the
      export name or query is reflecting a string the client got from the
      server, we already guarantee that we dropped the NBD connection if the
      server sent more than 32M in a single reply to our NBD_OPT_* request;
      if the export name is coming from qemu, nbd_receive_negotiate()
      asserted that strlen(info->name) <= NBD_MAX_STRING_SIZE; and
      similarly, a query string via x->dirty_bitmap coming from the user was
      bounds-checked in either qemu-nbd or by the limitations of QMP.
      Still, it doesn't hurt to be more explicit in how we write our
      assertions to not have to analyze whether inadvertent wraparound is
      possible.
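      For illustration (query stands in for the export name or bitmap
      string), the difference is between:

        /* old: narrowed before the check, so 4G + a little wraps around */
        uint32_t len = strlen(query);
        assert(len <= NBD_MAX_STRING_SIZE);

      and the more explicit:

        /* new: check the untruncated size_t first, then narrow safely */
        assert(strlen(query) <= NBD_MAX_STRING_SIZE);
        uint32_t len = strlen(query);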
      
      Fixes: 93676c88 ("nbd: Don't send oversize strings", v4.2.0)
      Reported-by: Dr. David Alan Gilbert <dave@treblig.org>
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      Message-ID: <20230608135653.2918540-2-eblake@redhat.com>
      f47b6eab
  6. May 19, 2023
    • nbd/server: Fix drained_poll to wake coroutine in right AioContext · 7c1f51bf
      Kevin Wolf authored
      
      nbd_drained_poll() generally runs in the main thread, not whatever
      iothread the NBD server coroutine is meant to run in, so it can't
      directly reenter the coroutines to wake them up.
      
      The code seems to have the right intention: it specifies the correct
      AioContext when it calls qemu_aio_coroutine_enter(). However, this
      function doesn't schedule the coroutine to run in that AioContext; it
      assumes it is already being called in the home thread of the AioContext.
      
      To fix this, add a new thread-safe qio_channel_wake_read() that can be
      called in the main thread to wake up the coroutine in its AioContext,
      and use this in nbd_drained_poll().
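      A hedged sketch of the resulting wakeup path (field names only
      approximate nbd/server.c):

        /* runs in the main thread during drained_poll */
        if (client->recv_coroutine) {
            /* thread-safe: wakes the coroutine in its own AioContext
             * instead of re-entering it from the wrong thread */
            qio_channel_wake_read(client->ioc);
        }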
      
      Cc: qemu-stable@nongnu.org
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Message-Id: <20230517152834.277483-3-kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      7c1f51bf
  10. Mar 27, 2023
    • nbd/server: push pending frames after sending reply · bd2cd4a4
      Florian Westphal authored
      
      qemu-nbd doesn't set TCP_NODELAY on the tcp socket.
      
      The kernel waits for more data and avoids transmitting small packets.
      Without TLS this is barely noticeable, but with TLS this really shows.
      
      Booting a VM via qemu-nbd on localhost (with tls) takes more than
      2 minutes on my system.  tcpdump shows frequent wait periods, where no
      packets get sent for a 40ms period.
      
      Add explicit (un)corking when processing (and responding to) requests.
      Setting "TCP_CORK, &zero" after an earlier "TCP_CORK, &one" flushes the
      pending data.
      
      VM Boot time:
      main:    no tls:  23s, with tls: 2m45s
      patched: no tls:  14s, with tls: 15s
      
      VM Boot time, qemu-nbd via network (same lan):
      main:    no tls:  18s, with tls: 1m50s
      patched: no tls:  17s, with tls: 18s
      
      Future optimization: if we could detect if there is another pending
      request we could defer the uncork operation because more data would be
      appended.
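      Assuming the QIOChannel cork helper is what gets used, the pattern is
      roughly (a sketch, not the literal diff):

        qio_channel_set_cork(client->ioc, true);   /* TCP_CORK, &one */

        /* ... handle the request and queue the reply ... */

        qio_channel_set_cork(client->ioc, false);  /* TCP_CORK, &zero:
                                                      flush pending data */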
      
      Signed-off-by: Florian Westphal <fw@strlen.de>
      Message-Id: <20230324104720.2498-1-fw@strlen.de>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      bd2cd4a4
  11. Jan 20, 2023
    • include/block: Untangle inclusion loops · e2c1c34f
      Markus Armbruster authored
      
      We have two inclusion loops:
      
             block/block.h
          -> block/block-global-state.h
          -> block/block-common.h
          -> block/blockjob.h
          -> block/block.h
      
             block/block.h
          -> block/block-io.h
          -> block/block-common.h
          -> block/blockjob.h
          -> block/block.h
      
      I believe these go back to Emanuele's reorganization of the block API,
      merged a few months ago in commit d7e2fe4a.
      
      Fortunately, breaking them is merely a matter of deleting unnecessary
      includes from headers, and adding them back in places where they are
      now missing.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20221221133551.3967339-2-armbru@redhat.com>
      e2c1c34f
    • coroutine: Split qemu/coroutine-core.h off qemu/coroutine.h · 68ba85ce
      Markus Armbruster authored
      
      qemu/coroutine.h and qemu/lockable.h include each other.
      
      They need each other only in macro expansions, so we could simply drop
      both inclusions to break the loop, and add suitable includes to files
      that expand the macros.
      
      Instead, move a part of qemu/coroutine.h to new qemu/coroutine-core.h
      so that qemu/coroutine-core.h doesn't need qemu/lockable.h, and
      qemu/lockable.h only needs qemu/coroutine-core.h.  Result:
      qemu/coroutine.h includes qemu/lockable.h includes
      qemu/coroutine-core.h.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20221221131435.3851212-5-armbru@redhat.com>
      [Semantic rebase conflict with 7c10cb38 "accel/tcg: Add debuginfo
      support" resolved]
      68ba85ce
  14. Dec 13, 2022
    • nbd/server: Clean up abuse of BlockExportOptionsNbd member @arg · 8461b4d6
      Markus Armbruster authored
      
      block-export-add argument @name defaults to the value of argument
      @node-name.
      
      nbd_export_create() implements this by copying @node_name to @name.
      It leaves @has_name false, violating the "has_name ==
      !!name" invariant.  Unclean.  Falls apart when we elide
      @has_name (next commit): then QAPI frees the same value twice,
      once for @node_name and once for @name.  iotest 307 duly explodes.
      
      Goes back to commit c62d24e9 "blockdev-nbd: Boxed argument type for
      nbd-server-add" (v5.0.0).  Got moved from qmp_nbd_server_add() to
      nbd_export_create() (commit 56ee8626), then copied back (commit
      b6076afc).  Commit 8675cbd6 "nbd: Utilize QAPI_CLONE for type
      conversion" (v5.2.0) cleaned up the copy in qmp_nbd_server_add()
      noting
      
          Second, our assignment to arg->name is fishy: the generated QAPI code
          for qapi_free_NbdServerAddOptions does not visit arg->name if
          arg->has_name is false, but if it DID visit it, we would have
          introduced a double-free situation when arg is finally freed.
      
      Exactly.  However, the copy in nbd_export_create() remained dirty.
      
      Clean it up.  Since the value stored in member @name is not actually
      used outside this function, use a local variable instead of modifying
      the QAPI object.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Cc: Eric Blake <eblake@redhat.com>
      Cc: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      Cc: qemu-block@nongnu.org
      Message-Id: <20221104160712.3005652-10-armbru@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
      8461b4d6
  17. May 12, 2022
    • nbd/server: Allow MULTI_CONN for shared writable exports · 58a6fdcc
      Eric Blake authored
      
      According to the NBD spec, a server that advertises
      NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will
      not see any cache inconsistencies: when properly separated by a single
      flush, actions performed by one client will be visible to another
      client, regardless of which client did the flush.
      
      We always satisfy these conditions in qemu - even when we support
      multiple clients, ALL clients go through a single point of reference
      into the block layer, with no local caching.  The effect of one client
      is instantly visible to the next client.  Even if our backend were a
      network device, we argue that any multi-path caching effects that
      would cause inconsistencies in back-to-back actions not seeing the
      effect of previous actions would be a bug in that backend, and not the
      fault of caching in qemu.  As such, it is safe to unconditionally
      advertise CAN_MULTI_CONN for any qemu NBD server situation that
      supports parallel clients.
      
      Note, however, that we don't want to advertise CAN_MULTI_CONN when we
      know that a second client cannot connect (for historical reasons,
      qemu-nbd defaults to a single connection while nbd-server-add and QMP
      commands default to unlimited connections; but we already have
      existing means to let either style of NBD server creation alter those
      defaults).  This is visible by no longer advertising MULTI_CONN for
      'qemu-nbd -r' without -e, as in the iotest nbd-qemu-allocation.
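      A sketch of the advertising decision (names and condition are
      illustrative only):

        /* advertise only when a second simultaneous client is allowed */
        if (max_connections != 1) {
            myflags |= NBD_FLAG_CAN_MULTI_CONN;
        }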
      
      The harder part of this patch is setting up an iotest to demonstrate
      behavior of multiple NBD clients to a single server.  It might be
      possible with parallel qemu-io processes, but I found it easier to do
      in python with the help of libnbd, and help from Nir and Vladimir in
      writing the test.
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Suggested-by: Nir Soffer <nsoffer@redhat.com>
      Suggested-by: Vladimir Sementsov-Ogievskiy <v.sementsov-og@mail.ru>
      Message-Id: <20220512004924.417153-3-eblake@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      58a6fdcc
  18. Apr 26, 2022
    • qapi: nbd-export: allow select bitmaps by node/name pair · e5fb29d5
      Vladimir Sementsov-Ogievskiy authored
      
      The current logic of relying on a search through the backing chain is
      neither safe nor convenient.
      
      Sometimes it leads to the need for extra bitmap copying. Also, we are
      going to add a "snapshot-access" driver, to access some snapshot state
      through NBD. This driver is not formally a filter, and of course
      it's not a COW format driver. So, searching through the backing chain
      will not work. Instead of widening the workaround of bitmap searching,
      let's extend the interface so that the user can select the bitmap
      precisely.
      
      Note that checking for the bitmap's active status is not carried over
      to the new API; I don't see a reason for it, and the user should
      understand the risks. Anyway, a bitmap from another node is unrelated
      to this export being read-only or read-write.
      
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
      Message-Id: <20220314213226.362217-3-v.sementsov-og@mail.ru>
      [eblake: Adjust S-o-b to Vladimir's new email, with permission]
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
      e5fb29d5