  1. Feb 26, 2020
    • nbd: Fix regression with multiple meta contexts · 73e064cc
      Eric Blake authored
      
      Detected by a hang in the libnbd testsuite.  If a client requests
      multiple meta contexts (both base:allocation and qemu:dirty-bitmap:x)
      at the same time, our attempt to silence a false-positive warning
      about a potential uninitialized variable introduced botched logic: we
      were short-circuiting the second context, and never sending the
      NBD_REPLY_FLAG_DONE.  Combining two 'if' statements into one
      'if/else' in bdf200a5 was wrong (I'm a bit embarrassed that such a
      change was my initial suggestion after the v1 patch, and that I then
      did not review the v2 patch that actually got committed). Revert
      that, and instead silence
      the false positive warning by replacing 'return ret' with 'return 0'
      (the value it always has at that point in the code, even though it
      eluded the deduction abilities of the robot that reported the false
      positive).
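
      The shape of the bug and of the fix, as a hedged C sketch (the stub
      helpers are hypothetical; the real logic is the meta-context reply
      path in nbd/server.c):

        #include <stdbool.h>

        static int send_base_allocation(void) { return 0; }  /* stub */
        static int send_dirty_bitmap(void)    { return 0; }  /* stub */

        /* Broken (bdf200a5): 'else if' short-circuits the second context
         * when the client negotiated both, so its final reply carrying
         * NBD_REPLY_FLAG_DONE never goes out and the client hangs. */
        static int reply_broken(bool base, bool bitmap)
        {
            int ret = 0;

            if (base) {
                ret = send_base_allocation();
            } else if (bitmap) {        /* unreachable when both are set */
                ret = send_dirty_bitmap();
            }
            return ret;
        }

        /* Fixed: two independent 'if's; and since ret is necessarily 0
         * at the end, 'return 0' silences the false-positive warning. */
        static int reply_fixed(bool base, bool bitmap)
        {
            int ret = 0;

            if (base) {
                ret = send_base_allocation();
                if (ret < 0) {
                    return ret;
                }
            }
            if (bitmap) {
                ret = send_dirty_bitmap();
                if (ret < 0) {
                    return ret;
                }
            }
            return 0;
        }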
      
      Fixes: bdf200a5
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20200206173832.130004-1-eblake@redhat.com>
      Reviewed-by: Laurent Vivier <laurent@vivier.eu>
  2. Nov 18, 2019
    • nbd: Don't send oversize strings · 93676c88
      Eric Blake authored
      
      Qemu as server currently won't accept export names larger than 256
      bytes, nor create dirty bitmap names longer than 1023 bytes, so most
      uses of qemu as client or server have no reason to get anywhere near
      the NBD spec maximum of a 4k limit per string.
      
      However, we weren't actually enforcing these limits: we ignored
      cases where the remote side violated the protocol on input, and we
      had several code paths that could send oversize strings on output
      (for example, qemu-nbd --description could easily send more than
      4k).  Tighten things up as follows (see the sketch after the list):
      
      client:
      - Perform bounds check on export name and dirty bitmap request prior
        to handing it to server
      - Validate that copied server replies are not too long (NBD_INFO_*
        replies that are not copied can safely be ignored)
      server:
      - Perform bounds check on export name and description prior to
        advertising it to client
      - Reject client name or metadata query that is too long
      - Adjust things to allow full 4k name limit rather than previous
        256 byte limit
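
      A minimal sketch of the kind of bounds check involved, assuming the
      NBD_MAX_STRING_SIZE macro (4096) that this patch introduces; the
      real checks are spread across nbd/client.c and nbd/server.c:

        #include "qemu/osdep.h"
        #include "qapi/error.h"
        #include "block/nbd.h"   /* assumed home of NBD_MAX_STRING_SIZE */

        /* Reject an oversize name before it goes out on the wire. */
        static int check_name_len(const char *name, const char *what,
                                  Error **errp)
        {
            if (strlen(name) > NBD_MAX_STRING_SIZE) {
                error_setg(errp, "%s too long for NBD protocol", what);
                return -EINVAL;
            }
            return 0;
        }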
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20191114024635.11363-4-eblake@redhat.com>
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd/server: Prefer heap over stack for parsing client names · 9d7ab222
      Eric Blake authored
      
      As long as we limit NBD names to 256 bytes (the bare minimum
      permitted by the standard), stack allocation works for parsing a
      name received
      from the client.  But as mentioned in a comment, we eventually want to
      permit up to the 4k maximum of the NBD standard, which is too large
      for stack allocation; so switch everything in the server to use heap
      allocation.  For now, there is no change in actually supported name
      length.
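
      A hedged sketch of the shape of the change (helper names loosely
      follow nbd/server.c, but details differ):

        #include "qemu/osdep.h"
        #include "qapi/error.h"
        #include "block/nbd.h"
        #include "io/channel.h"

        /* Read a client-supplied name into a heap buffer instead of a
         * fixed char name[256 + 1] on the stack. */
        static int read_client_name(QIOChannel *ioc, uint32_t name_len,
                                    Error **errp)
        {
            g_autofree char *name = g_malloc(name_len + 1);

            if (nbd_read(ioc, name, name_len, "export name", errp) < 0) {
                return -EIO;
            }
            name[name_len] = '\0';
            /* ... parse name; g_autofree releases it on every return. */
            return 0;
        }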
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20191114024635.11363-2-eblake@redhat.com>
      [eblake: fix uninit variable compile failure]
      Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  3. Sep 24, 2019
    • nbd: Grab aio context lock in more places · 61bc846d
      Eric Blake authored
      
      When iothreads are in use, failure to grab the aio context results
      in an assertion failure during blk_unref, which tries to unlock a
      mutex that was not locked.  In short, all calls to nbd_export_put
      need to be made while within the correct aio context.  But since
      nbd_export_put can recursively reach itself via nbd_export_close,
      and recursively grabbing the context would deadlock, we can't grab
      the context directly in those functions, but must do so in their
      callers.
      
      Hoist the use of the correct aio_context from nbd_export_new() to its
      caller qmp_nbd_server_add().  Then tweak qmp_nbd_server_remove(),
      nbd_eject_notifier(), and nbd_export_close_all() to grab the right
      context, so that all callers within qemu now own the context before
      nbd_export_put() can call blk_unref().
      
      Remaining uses in qemu-nbd don't matter (since that use case does not
      support iothreads).
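
      A hedged sketch of the caller-side pattern this patch establishes
      (not the literal diff):

        #include "qemu/osdep.h"
        #include "block/aio.h"
        #include "block/nbd.h"
        #include "sysemu/block-backend.h"

        /* Own the export's AioContext around anything that can reach
         * blk_unref() via nbd_export_put(). */
        static void remove_export_locked(NBDExport *exp, BlockBackend *blk)
        {
            AioContext *ctx = blk_get_aio_context(blk);

            aio_context_acquire(ctx);
            nbd_export_put(exp);    /* may recurse via nbd_export_close() */
            aio_context_release(ctx);
        }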
      
      Suggested-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190917023917.32226-1-eblake@redhat.com>
      Reviewed-by: Sergio Lopez <slp@redhat.com>
    • nbd/server: attach client channel to the export's AioContext · b4961249
      Sergio Lopez authored
      On creation, the export's AioContext is set to the same one as the
      BlockBackend, while the AioContext in the client QIOChannel is left
      untouched.
      
      As a result, when using data-plane, nbd_client_receive_next_request()
      schedules coroutines in the IOThread AioContext, while the client's
      QIOChannel is serviced from the main_loop, potentially triggering the
      assertion at qio_channel_restart_[read|write].
      
      To fix this, as soon as we have the export corresponding to the
      client, we call qio_channel_attach_aio_context() to attach the
      QIOChannel context to the export's AioContext. This matches the
      logic in blk_aio_attached().
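
      A hedged sketch of the attach step (placement in the server's
      option-negotiation code may differ):

        #include "qemu/osdep.h"
        #include "io/channel.h"
        #include "sysemu/block-backend.h"

        /* Once the client has named its export, service its channel from
         * the export's (possibly IOThread) AioContext. */
        static void attach_client_channel(QIOChannel *ioc, BlockBackend *blk)
        {
            qio_channel_attach_aio_context(ioc, blk_get_aio_context(blk));
        }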
      
      RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1748253
      
      
      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Message-Id: <20190912110032.26395-1-slp@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • nbd/client: Add hint when TLS is missing · 1b5c15ce
      Eric Blake authored
      
      I received an off-list report of failure to connect to an NBD server
      expecting an x509 certificate, when the client was attempting something
      similar to this command line:
      
      $ ./x86_64-softmmu/qemu-system-x86_64 -name 'blah' -machine q35 -nodefaults \
        -object tls-creds-x509,id=tls0,endpoint=client,dir=$path_to_certs \
        -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0,addr=0x6 \
        -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=nbd:localhost:9000,werror=stop,rerror=stop,tls-creds=tls0 \
        -device scsi-hd,id=image1,drive=drive_image1,bootindex=0
      qemu-system-x86_64: -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=nbd:localhost:9000,werror=stop,rerror=stop,tls-creds=tls0: TLS negotiation required before option 7 (go)
      server reported: Option 0x7 not permitted before TLS
      
      The problem?  As specified, -drive is trying to pass tls-creds to the
      raw format driver instead of the nbd protocol driver, but before we
      get to the point where we can detect that raw doesn't know what to do
      with tls-creds, the nbd driver has already failed because the server
      complained.  The fix to the broken command line?  Pass
      '...,file.tls-creds=tls0' to ensure the tls-creds option is handed to
      nbd, not raw.  But since the error message was rather cryptic, I'm
      trying to improve the error message.
      
      With this patch, the error message adds a line:
      
      qemu-system-x86_64: -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=nbd:localhost:9000,werror=stop,rerror=stop,tls-creds=tls0: TLS negotiation required before option 7 (go)
      Did you forget a valid tls-creds?
      server reported: Option 0x7 not permitted before TLS
      
      And with luck, someone grepping for that error message will find this
      commit message and figure out their command line mistake.  Sadly, the
      only mention of file.tls-creds in our docs relates to an --image-opts
      use of PSK encryption with qemu-img as the client, rather than x509
      certificate encryption with qemu-kvm as the client.
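
      A hedged sketch of how the hint is attached, using QEMU's error API
      (exact wording and placement in nbd/client.c may differ):

        #include "qemu/osdep.h"
        #include "qapi/error.h"

        static void report_tls_required(uint32_t option, const char *name,
                                        Error **errp)
        {
            error_setg(errp, "TLS negotiation required before option %"
                       PRIu32 " (%s)", option, name);
            error_append_hint(errp, "Did you forget a valid tls-creds?\n");
        }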
      
      CC: Tingting Mao <timao@redhat.com>
      CC: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190907172055.26870-1-eblake@redhat.com>
      [eblake: squash in iotest 233 fix]
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
  4. Sep 05, 2019
    • nbd: Implement server use of NBD FAST_ZERO · b491dbb7
      Eric Blake authored
      
      The server side is fairly straightforward: we can always advertise
      support for detection of fast zero, and implement it by mapping the
      request to the block layer BDRV_REQ_NO_FALLBACK.
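
      A hedged sketch of the mapping (NBD_CMD_FLAG_FAST_ZERO comes from
      this series; the real code sits in the server's request handler):

        #include "qemu/osdep.h"
        #include "block/block.h"
        #include "block/nbd.h"
        #include "sysemu/block-backend.h"

        static int do_write_zeroes(BlockBackend *blk, uint64_t offset,
                                   uint32_t bytes, uint16_t nbd_flags)
        {
            int flags = 0;

            if (nbd_flags & NBD_CMD_FLAG_FAST_ZERO) {
                flags |= BDRV_REQ_NO_FALLBACK;  /* fail fast, no fallback */
            }
            return blk_pwrite_zeroes(blk, offset, bytes, flags);
        }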
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190823143726.27062-5-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [eblake: update iotests 223, 233]
    • nbd: Prepare for NBD_CMD_FLAG_FAST_ZERO · 0a479545
      Eric Blake authored
      
      Commit fe0480d6 and friends added BDRV_REQ_NO_FALLBACK as a way to
      avoid wasting time on a preliminary write-zero request that will
      later be rewritten by actual data, if it is known that the
      write-zero request will use a slow fallback; but in doing so, it
      could not optimize for NBD.  The NBD specification is now
      considering an extension that will allow passing on those
      semantics.  This patch adds the new protocol bits and teaches
      'qemu-nbd --list' to recognize the flag, as well as the new errno
      value possible when using it; upcoming patches will improve the
      client to use the feature when present, and the server to advertise
      support for it.
      
      The NBD spec recommends (but does not require) that ENOTSUP be
      avoided for all failures except a failed fast zero (the only place
      the spec makes avoiding ENOTSUP mandatory is a write-zeroes request
      when fast zero is supported but was not requested; the questionable
      use would be returning ENOTSUP for other actions, like a normal
      write request).  However, clients that get an unexpected ENOTSUP
      will either already be treating it the same as EINVAL, or may
      appreciate the extra bit of information.  We were equally loose in
      returning EOVERFLOW in more situations than recommended by the
      spec, so if it turns out to be a problem in practice, a later patch
      can tighten handling for both error codes.
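
      For reference, a hedged sketch of the new constants (values follow
      the NBD spec extension; double-check include/block/nbd.h):

        /* Transmission flag: export advertises fast-zero support. */
        #define NBD_FLAG_SEND_FAST_ZERO  (1 << 11)

        /* Command flag on NBD_CMD_WRITE_ZEROES: fail with NBD_ENOTSUP
         * rather than falling back to a slow write of zeroes. */
        #define NBD_CMD_FLAG_FAST_ZERO   (1 << 4)

        /* Wire errno: operation not supported. */
        #define NBD_ENOTSUP 95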
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190823143726.27062-3-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [eblake: tweak commit message, also handle EOPNOTSUPP]
    • nbd: Improve per-export flag handling in server · dbb38caa
      Eric Blake authored
      
      When creating a read-only image, we are still advertising support for
      TRIM and WRITE_ZEROES to the client, even though the client should not
      be issuing those commands.  But seeing this requires looking across
      multiple functions:
      
      All callers to nbd_export_new() passed a single flag based solely on
      whether the export allows writes.  Later, we then pass a constant set
      of flags to nbd_negotiate_options() (namely, the set of flags which we
      always support, at least for writable images), which is then further
      dynamically modified with NBD_FLAG_SEND_DF based on client
      negotiation of structured replies.  Finally, when processing
      NBD_OPT_EXPORT_NAME or NBD_OPT_GO we bitwise-or the original
      caller's flag with the
      runtime set of flags we've built up over several functions.
      
      Let's refactor things to instead compute a baseline of flags as
      soon as possible, in nbd_export_new(), where it gets shared between
      multiple clients, and change the signature for the callers to pass
      in a simpler bool rather than having to figure out flags.  We can
      then get rid of the 'myflags' parameter to various functions, and
      instead refer to the client for everything we need (we still have
      to perform a bitwise-OR for NBD_FLAG_SEND_DF during
      NBD_OPT_EXPORT_NAME and NBD_OPT_GO, but it's easier to see what is
      being computed).
      This lets us quit advertising senseless flags for read-only images, as
      well as making the next patch for exposing FAST_ZERO support easier to
      write.
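
      A hedged sketch of the baseline computation (the exact flag set is
      illustrative; see nbd_export_new() for the real one):

        #include "qemu/osdep.h"
        #include "block/nbd.h"

        static uint16_t export_flags(bool readonly)
        {
            uint16_t flags = NBD_FLAG_HAS_FLAGS | NBD_FLAG_SEND_FLUSH |
                             NBD_FLAG_SEND_CACHE;

            if (readonly) {
                flags |= NBD_FLAG_READ_ONLY;  /* and no TRIM/WRITE_ZEROES */
            } else {
                flags |= NBD_FLAG_SEND_TRIM | NBD_FLAG_SEND_WRITE_ZEROES |
                         NBD_FLAG_SEND_FUA;
            }
            return flags;
        }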
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190823143726.27062-2-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [eblake: improve commit message, update iotest 223]
    • nbd: Tolerate more errors to structured reply request · 5de47735
      Eric Blake authored
      
      A server may have a reason to reject a request for structured replies,
      beyond just not recognizing them as a valid request; similarly, it may
      have a reason for rejecting a request for a meta context.  It doesn't
      hurt us to continue talking to such a server; otherwise 'qemu-nbd
      --list' of such a server fails to display all available details about
      the export.
      
      Encountered when temporarily tweaking nbdkit to reply with
      NBD_REP_ERR_POLICY.  Present since structured reply support was first
      added (commit d795299b reused starttls handling, but starttls is
      different in that we can't fall back to other behavior on any error).
      
      Note that for an unencrypted client trying to connect to a server that
      requires encryption, this defers the point of failure to when we
      finally execute a strict command (such as NBD_OPT_GO or NBD_OPT_LIST),
      now that the intermediate NBD_OPT_STRUCTURED_REPLY does not diagnose
      NBD_REP_ERR_TLS_REQD as fatal; but as the protocol eventually gets us
      to a command where we can't continue onwards, the changed error
      message doesn't cause any security concerns.
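
      In spirit, the client-side change looks like this hedged sketch
      (request_simple_option is a hypothetical stand-in for the helper in
      nbd/client.c):

        #include "qemu/osdep.h"
        #include "qapi/error.h"
        #include "block/nbd.h"

        /* Returns 1 if the option was granted, 0 if the server refused
         * it (any NBD_REP_ERR_*), negative on hard failure. */
        extern int request_simple_option(int opt, Error **errp);

        /* A refusal is no longer fatal; just proceed without the
         * feature. */
        static bool negotiate_structured_reply(Error **errp)
        {
            return request_simple_option(NBD_OPT_STRUCTURED_REPLY, errp) > 0;
        }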
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190824172813.29720-3-eblake@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      [eblake: fix iotest 233]
    • nbd: Use g_autofree in a few places · df18c04e
      Eric Blake authored
      
      Thanks to our recent move to use glib's g_autofree, I can join the
      bandwagon.  Getting rid of gotos is fun ;)
      
      There are probably more places where we could register cleanup
      functions and get rid of more gotos; this patch just focuses on the
      labels that existed merely to call g_free.
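
      A hedged sketch of the transformation (hypothetical parse function,
      not one of the actual call sites):

        #include "qemu/osdep.h"

        static bool do_work(const char *s) { return s && *s; }  /* stub */

        /* Before: a label that exists only to free memory. */
        static int parse_before(const char *input)
        {
            char *copy = g_strdup(input);
            int ret = -1;

            if (!do_work(copy)) {
                goto out;
            }
            ret = 0;
        out:
            g_free(copy);
            return ret;
        }

        /* After: g_autofree releases 'copy' on every return path. */
        static int parse_after(const char *input)
        {
            g_autofree char *copy = g_strdup(input);

            return do_work(copy) ? 0 : -1;
        }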
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190824172813.29720-2-eblake@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd: Advertise multi-conn for shared read-only connections · 61cc8724
      Eric Blake authored
      The NBD specification defines NBD_FLAG_CAN_MULTI_CONN, which can be
      advertised when the server promises cache consistency between
      simultaneous clients (basically, rules that determine what FUA and
      flush from one client are able to guarantee for reads from another
      client).  When we don't permit simultaneous clients (such as qemu-nbd
      without -e), the bit makes no sense; and for writable images, we
      probably have a lot more work before we can declare that actions from
      one client are cache-consistent with actions from another.  But for
      read-only images, where flush isn't changing any data, we might as
      well advertise multi-conn support.  What's more, advertisement of the
      bit makes it easier for clients to determine if 'qemu-nbd -e' was in
      use, where a second connection will succeed rather than hang until the
      first client goes away.
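
      A hedged sketch of the advertisement rule (parameter names are
      illustrative):

        #include "qemu/osdep.h"
        #include "block/nbd.h"

        /* Advertise multi-conn only when parallel clients are allowed
         * and the image is read-only (flush cannot change data). */
        static uint16_t maybe_multi_conn(uint16_t flags, bool readonly,
                                         int shared)
        {
            if (readonly && shared > 1) {
                flags |= NBD_FLAG_CAN_MULTI_CONN;
            }
            return flags;
        }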
      
      This patch affects qemu as server in advertising the bit.  We may want
      to consider patches to qemu as client to attempt parallel connections
      for higher throughput by spreading the load over those connections
      when a server advertises multi-conn, but for now sticking to one
      connection per nbd:// BDS is okay.
      
      See also: https://bugzilla.redhat.com/1708300
      
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190815185024.7010-1-eblake@redhat.com>
      [eblake: tweak blockdev-nbd.c to not request shared when writable,
      fix iotest 233]
      Reviewed-by: John Snow <jsnow@redhat.com>
  5. Jun 04, 2019
    • block: Add BlockBackend.ctx · d861ab3a
      Kevin Wolf authored
      
      This adds a new parameter to blk_new() which requires its callers to
      declare from which AioContext this BlockBackend is going to be used (or
      the locks of which AioContext need to be taken anyway).
      
      The given context is only stored and kept up to date when changing
      AioContexts. Actually applying the stored AioContext to the root node
      is saved for another commit.
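
      A hedged sketch of the resulting signature and a typical main-loop
      caller:

        #include "qemu/osdep.h"
        #include "qemu/main-loop.h"
        #include "sysemu/block-backend.h"

        /* New signature:
         *   blk_new(AioContext *ctx, uint64_t perm, uint64_t shared); */
        static BlockBackend *new_main_loop_backend(uint64_t perm,
                                                   uint64_t shared)
        {
            return blk_new(qemu_get_aio_context(), perm, shared);
        }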
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • nbd-server: Call blk_set_allow_aio_context_change() · 45e92a90
      Kevin Wolf authored
      
      The NBD server uses an AioContext notifier, so it can tolerate that its
      BlockBackend is switched to a different AioContext. Before we start
      actually calling bdrv_try_set_aio_context(), which checks for
      consistency, outside of test cases, we need to make sure that the NBD
      server actually allows this.
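
      A hedged sketch of the call, made once right after the server
      creates its BlockBackend:

        #include "qemu/osdep.h"
        #include "sysemu/block-backend.h"

        static void nbd_server_setup_example(BlockBackend *blk)
        {
            /* The NBD server's notifier copes with context moves. */
            blk_set_allow_aio_context_change(blk, true);
        }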
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
  6. Apr 08, 2019
    • nbd/client: Fix error message for server with unusable sizing · e53f88df
      Eric Blake authored
      
      Add a missing space to the error message used when giving up on a
      server that insists on an alignment which renders the last few bytes
      of the export unreadable.
      
      Fixes: 3add3ab7
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190404145226.32649-1-eblake@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
    • nbd/server: Don't fail NBD_OPT_INFO for byte-aligned sources · 099fbcd6
      Eric Blake authored
      
      In commit 0c1d50bd, I added a couple of TODO comments about whether we
      consult bl.request_alignment when responding to NBD_OPT_INFO. At the
      time, qemu as server was hard-coding an advertised alignment of 512 to
      clients that promised to obey constraints, and there was no function
      for getting at a device's preferred alignment. But in hindsight,
      advertising 512 when the block device prefers 1 caused other
      compliance problems, and commit b0245d64 changed one of the two TODO
      comments to advertise a more accurate alignment. Time to fix the other
      TODO.  Doesn't really impact qemu as client (our normal client doesn't
      use NBD_OPT_INFO, and qemu-nbd --list promises to obey block sizes),
      but it might prove useful to other clients.
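
      A hedged sketch of the change in spirit (blk_get_request_alignment()
      comes from earlier in this series):

        #include "qemu/osdep.h"
        #include "sysemu/block-backend.h"

        /* Advertise the device's real request alignment in the
         * NBD_INFO_BLOCK_SIZE reply rather than a hard-coded 512. */
        static uint32_t advertised_min_block(BlockBackend *blk)
        {
            return blk_get_request_alignment(blk);
        }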
      
      Fixes: b0245d64
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190403030526.12258-4-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd/server: Trace client noncompliance on unaligned requests · 6e280648
      Eric Blake authored
      
      We've recently added traces for clients to flag server non-compliance;
      let's do the same for servers to flag client non-compliance. According
      to the spec, if the client requests NBD_INFO_BLOCK_SIZE, it is
      promising to send all requests aligned to those boundaries.  Of
      course, if the client does not request NBD_INFO_BLOCK_SIZE, then it
      made no promises so we shouldn't flag anything; and because we are
      willing to handle clients that made no promises (the spec allows us to
      use NBD_REP_ERR_BLOCK_SIZE_REQD if we had been unwilling), we already
      have to handle unaligned requests (which the block layer already does
      on our behalf).  So even though the spec allows us to return EINVAL
      for clients that promised to behave, it's easier to always answer
      unaligned requests.  Still, flagging non-compliance can be useful in
      debugging a client that is trying to be maximally portable.
      
      Qemu as client used to have one spot where it sent non-compliant
      requests: if the server sends an unaligned reply to
      NBD_CMD_BLOCK_STATUS, and the client was iterating over the entire
      disk, the next request would start at that unaligned point; this was
      fixed in commit a39286dd when the client was taught to work around
      server non-compliance; but is equally fixed if the server is patched
      to not send unaligned replies in the first place (yes, qemu 4.0 as
      server still has a few such bugs, although they will be patched in
      4.1). Fortunately, I did not find any more spots where qemu as client
      was non-compliant. I was able to test the patch by using the following
      hack to convince qemu-io to run various unaligned commands, coupled
      with serving 512-byte alignment by intentionally omitting '-f raw' on
      the server while viewing server traces.
      
      | diff --git i/nbd/client.c w/nbd/client.c
      | index 427980bdd22..1858b2aac35 100644
      | --- i/nbd/client.c
      | +++ w/nbd/client.c
      | @@ -449,6 +449,7 @@ static int nbd_opt_info_or_go(QIOChannel *ioc, uint32_t opt,
      |                  nbd_send_opt_abort(ioc);
      |                  return -1;
      |              }
      | +            info->min_block = 1;//hack
      |              if (!is_power_of_2(info->min_block)) {
      |                  error_setg(errp, "server minimum block size %" PRIu32
      |                             " is not a power of two", info->min_block);
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190403030526.12258-3-eblake@redhat.com>
      [eblake: address minor review nits]
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd/server: Fix blockstatus trace · 2178a569
      Eric Blake authored
      
      Don't increment remaining_bytes until we know that we will actually
      be including the current block status extent in the reply;
      otherwise, the traced value will be oversized by the length of the
      next block status extent, which did not get sent because it instead
      ended the loop.
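
      A hedged sketch of the reordering (hypothetical helpers standing in
      for the extent walk in nbd/server.c):

        #include <stdbool.h>
        #include <stdint.h>

        extern uint32_t next_extent_length(void);
        extern bool reply_is_full(uint32_t extent_len);
        extern void queue_extent(uint32_t extent_len);

        static uint32_t traced_bytes(uint32_t request_bytes)
        {
            uint32_t sent = 0;   /* counts only extents actually sent */

            while (sent < request_bytes) {
                uint32_t len = next_extent_length();

                if (reply_is_full(len)) {
                    break;       /* this extent ends the loop unsent */
                }
                queue_extent(len);
                sent += len;     /* incremented only after the decision */
            }
            return sent;
        }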
      
      Fixes: fb7afc79
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190403030526.12258-2-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  7. Apr 01, 2019
    • nbd/server: Advertise actual minimum block size · b0245d64
      Eric Blake authored
      
      Both NBD_CMD_BLOCK_STATUS and structured NBD_CMD_READ will split their
      reply according to bdrv_block_status() boundaries. If the block device
      has a request_alignment smaller than 512, but we advertise a block
      alignment of 512 to the client, then this can result in the server
      reply violating client expectations by reporting a smaller region of
      the export than what the client is permitted to address (although this
      is less of an issue for qemu 4.0 clients, given recent client patches
      to overlook our non-compliance at EOF).  Since it's always better to
      be strict in what we send, it is worth advertising the actual minimum
      block limit rather than blindly rounding it up to 512.
      
      Note that this patch is not foolproof - it is still possible to
      provoke non-compliant server behavior using:
      
      $ qemu-nbd --image-opts driver=blkdebug,align=512,image.driver=file,image.filename=/path/to/non-aligned-file
      
      That is arguably a bug in the blkdebug driver (it should never pass
      back block status smaller than its alignment, even if it has to make
      multiple bdrv_block_status calls and determine the
      least-common-denominator status among the group to return). It may
      also be possible to observe issues with a backing layer with smaller
      alignment than the active layer, although so far I have been unable to
      write a reliable iotest for that scenario (but again, an issue like
      that could be argued to be a bug in the block layer, or something
      where we need a flag to bdrv_block_status() to state whether the
      result must be aligned to the current layer's limits or can be
      subdivided for accuracy when chasing backing files).
      
      Anyways, as blkdebug is not normally used, and as this patch makes our
      server more interoperable with qemu 3.1 clients, it is worth applying
      now, even while we still work on a larger patch series for the 4.1
      timeframe to have byte-accurate file lengths.
      
      Note that the iotests output changes - for 223 and 233, we can see the
      server's better granularity advertisement; and for 241, the three test
      cases have the following effects:
      - natural alignment: the server's smaller alignment is now advertised,
      and the hole reported at EOF is now the right result; we've gotten rid
      of the server's non-compliance
      - forced server alignment: the server still advertises 512 bytes, but
      still sends a mid-sector hole. This is still a server compliance bug,
      which needs to be fixed in the block layer in a later patch; output
      does not change because the client is already being tolerant of the
      non-compliance
      - forced client alignment: the server's smaller alignment means that
      the client now sees the server's status change mid-sector without any
      protocol violations, but the fact that the map shows an unaligned
      mid-sector hole is evidence of the block layer problems with aligned
      block status, to be fixed in a later patch
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190329042750.14704-7-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [eblake: rebase to enhanced iotest 241 coverage]
    • nbd/client: Reject inaccessible tail of inconsistent server · 3add3ab7
      Eric Blake authored
      
      The NBD spec suggests that a server should never advertise a size
      inconsistent with its minimum block alignment, as that tail is
      effectively inaccessible to a compliant client obeying those block
      constraints.  Since we have a habit of rounding sizes up rather
      than truncating (to avoid losing the last few bytes of user input),
      and we cannot access the tail when the server advertises bogus
      block sizing, abort the connection to alert the server's author to
      fix the bug.  Rejecting such servers also matches what we already
      did for a min_block that was not a power of 2 or that was larger
      than max_block.
      
      Does not impact either qemu (which always sends properly aligned
      sizes) or nbdkit (which does not send minimum block requirements yet);
      so this is mostly aimed at new NBD server implementations, and ensures
      that the rest of our code can assume the size is aligned.
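
      A hedged sketch of the new client-side sanity check (the real code
      sits with the other block-size validation in nbd/client.c):

        #include "qemu/osdep.h"
        #include "qapi/error.h"

        static int check_sizing(uint64_t size, uint32_t min_block,
                                Error **errp)
        {
            /* A size that is not a multiple of the advertised minimum
             * block leaves an unreachable tail: treat as a server bug. */
            if (size % min_block) {
                error_setg(errp, "server size %" PRIu64 " is not a"
                           " multiple of advertised minimum block %"
                           PRIu32, size, min_block);
                return -EINVAL;
            }
            return 0;
        }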
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20190330155704.24191-1-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  8. Mar 06, 2019
    • qemu-nbd: add support for authorization of TLS clients · b25e12da
      Daniel P. Berrangé authored
      
      Currently any client which can complete the TLS handshake is able
      to use the NBD server. The server admin can turn on the
      'verify-peer' option for the x509 creds to require the client to
      provide an x509 certificate. This means the client will have to
      acquire a certificate from the CA before it is permitted to use the
      NBD server. This is still a fairly low bar to cross.
      
      This adds a '--tls-authz OBJECT-ID' option to the qemu-nbd command which
      takes the ID of a previously added 'QAuthZ' object instance. This will
      be used to validate the client's x509 distinguished name. Clients
      failing the authorization check will not be permitted to use the NBD
      server.
      
      For example, to set up authorization that only allows connection
      from a client whose x509 certificate distinguished name is
      
         CN=laptop.example.com,O=Example Org,L=London,ST=London,C=GB
      
      escape the commas in the name and use:
      
        qemu-nbd --object tls-creds-x509,id=tls0,dir=/home/berrange/qemutls,\
                          endpoint=server,verify-peer=yes \
                 --object 'authz-simple,id=auth0,identity=CN=laptop.example.com,,\
                           O=Example Org,,L=London,,ST=London,,C=GB' \
                 --tls-creds tls0 \
                 --tls-authz auth0 \
                 ....other qemu-nbd args...
      
      NB: a real shell command line would not have leading whitespace after
      the line continuation, it is just included here for clarity.
      
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
      Message-Id: <20190227162035.18543-2-berrange@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      [eblake: split long line in --help text, tweak 233 to show that whitespace
      after ,, in identity= portion is actually okay]
      Signed-off-by: Eric Blake <eblake@redhat.com>
  9. Jan 21, 2019
    • nbd/client: Work around 3.0 bug for listing meta contexts · 7c6f5ddc
      Eric Blake authored
      
      Commit 3d068aff forgot to advertise available qemu: contexts
      when the client requests a list with 0 queries. Furthermore,
      3.0 shipped with a qemu-img hack of x-dirty-bitmap (commit
      216ee365) that _silently_ acts as though the entire image is
      clean if a requested bitmap is not present.  Both bugs have
      been recently fixed, so that a modern qemu server gives full
      context output right away, and the client refuses a
      connection if a requested x-dirty-bitmap was not found.
      
      Still, it is likely that there will be users that have to
      work with a mix of old and new qemu versions, depending on
      which features get backported where, at which point being
      able to rely on 'qemu-nbd --list' output to know for sure
      whether a given NBD export has the desired dirty bitmap is
      much nicer than blindly connecting and risking that the
      entire image may appear clean.  We can make our --list code
      smart enough to work around buggy servers by tracking
      whether we've seen any qemu: replies in the original 0-query
      list; if not, repeat with a single query on "qemu:" (which
      may still have no replies, but then we know for sure we
      didn't trip up on the server bug).
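
      A hedged sketch of the workaround (list_meta_contexts is a
      hypothetical stand-in for the --list plumbing in nbd/client.c):

        #include <stdbool.h>

        /* Runs NBD_OPT_LIST_META_CONTEXT with the given query (NULL
         * means zero queries) and reports whether any "qemu:" context
         * was seen. */
        extern int list_meta_contexts(const char *query, bool *saw_qemu);

        static int list_with_workaround(void)
        {
            bool saw_qemu = false;
            int ret = list_meta_contexts(NULL, &saw_qemu);

            if (ret == 0 && !saw_qemu) {
                /* Retry with one query; an old server's empty answer to
                 * zero queries is then distinguishable from "none". */
                ret = list_meta_contexts("qemu:", &saw_qemu);
            }
            return ret;
        }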
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20190117193658.16413-21-eblake@redhat.com>