  1. Dec 21, 2021
  2. Nov 22, 2021
    • nbd/server: Simplify zero and trim · e3557422
      Eric Blake authored
      
      Now that the block layer supports 64-bit operations (see commit
      2800637a and friends, new to v6.2), we no longer have to self-fragment
      requests larger than 2G, reverting the workaround added in 890cbccb
      ("nbd: Fix large trim/zero requests", v5.1.0).
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20211117170230.1128262-3-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd/server: Don't complain on certain client disconnects · 1644ccce
      Eric Blake authored
      When a client disconnects abruptly but has no pending requests (for
      example, when using nbdsh without calling h.shutdown), we used to
      output the following message:
      
      $ qemu-nbd -f raw file
      $ nbdsh -u 'nbd://localhost:10809' -c 'h.trim(1,0)'
      qemu-nbd: Disconnect client, due to: Failed to read request: Unexpected end-of-file before all bytes were read
      
      Then in commit f148ae7d, we refactored nbd_receive_request() to use
      nbd_read_eof(); when this returns 0, we regressed into tracing
      uninitialized memory (if tracing is enabled) and reporting a less
      specific error:
      
      qemu-nbd: Disconnect client, due to: Request handling failed in intermediate state
      
      Note that with Unix sockets, we have yet another error message,
      unchanged by the 6.0 regression:
      
      $ qemu-nbd -k /tmp/sock -f raw file
      $ nbdsh -u 'nbd+unix:///?socket=/tmp/sock' -c 'h.trim(1,0)'
      qemu-nbd: Disconnect client, due to: Failed to send reply: Unable to write to socket: Broken pipe
      
      But in all cases, the error message goes away if the client performs a
      soft shutdown by using NBD_CMD_DISC, rather than a hard shutdown by
      abrupt disconnect:
      
      $ nbdsh -u 'nbd://localhost:10809' -c 'h.trim(1,0)' -c 'h.shutdown()'
      
      This patch fixes things to avoid uninitialized memory, and in general
      avoids warning about a client that does a hard shutdown when not in
      the middle of a packet.  A client that aborts mid-request, or that
      does not read the server's full reply, can still result in warnings,
      but those are indeed much more unusual situations.
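
      The gist of the fix, as a sketch (the helper and type names are
      illustrative, not the actual qemu-nbd code): distinguish a clean EOF
      between requests from an EOF in the middle of one, and only warn
      about the latter.

          /* read_request() is assumed to return 1 on success, 0 on a clean
           * EOF before any byte of a new request, and -errno otherwise. */
          static int serve_one_request(Client *client)
          {
              int ret = read_request(client);

              if (ret == 0) {
                  /* Hard disconnect between requests: drop it silently. */
                  return -EIO;
              }
              if (ret < 0) {
                  /* EOF (or error) mid-request is still worth a warning. */
                  warn_report("Failed to read request: %s", strerror(-ret));
                  return ret;
              }
              return handle_request(client);
          }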
      
      CC: qemu-stable@nongnu.org
      Fixes: f148ae7d ("nbd/server: Quiesce coroutines on context switch", v6.0.0)
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [eblake: defer unrelated typo fixes to later patch]
      Message-Id: <20211117170230.1128262-2-eblake@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
  3. Nov 16, 2021
  4. Sep 29, 2021
    • block/nbd: drop connection_co · 4ddb5d2f
      Vladimir Sementsov-Ogievskiy authored
      
      OK, that's a big rewrite of the logic.
      
      Pre-patch we have an always-running coroutine, connection_co. It does
      the reply receiving and the reconnecting, and it leads to a lot of
      difficult and unobvious code around drained sections and context
      switching. We also abuse the bs->in_flight counter, which is
      increased for connection_co and temporarily decreased at the points
      where we want to allow a drained section to begin. One of these
      places is even in another file: nbd_read_eof() in nbd/client.c.
      
      We also cancel the reconnect, and the requests waiting for reconnect,
      on drain begin, which is not correct. This patch fixes that as well.
      
      Let's finally drop this always-running coroutine and go another way:
      do both the reconnecting and the receiving in the request coroutines.
      
      The detailed list of changes is below (in the order of the diff hunks).
      
      1. Receiving coroutines are woken directly from nbd_channel_error,
         when we change s->state.
      
      2. nbd_co_establish_connection_cancel(): we don't have drain_begin
         now, and in nbd_teardown_connection() all requests should already
         be finished (and reconnect is done from a request). So
         nbd_co_establish_connection_cancel() is called from
         nbd_cancel_in_flight() (to cancel the request that is doing
         nbd_co_establish_connection()) and from reconnect_delay_timer_cb()
         (previously we didn't need it, as the reconnect delay should only
         cancel active requests, not the reconnection itself). But now the
         reconnection itself is done in a separate thread (we now call
         nbd_client_connection_enable_retry() in nbd_open()), and we need
         to cancel the requests that wait in nbd_co_establish_connection().
      
      2A. We now receive headers in the request coroutine. But we also have
         to dispatch replies for other pending requests. So
         nbd_connection_entry() is turned into nbd_receive_replies(), which
         does the reply dispatching while it receives other requests'
         headers, and returns when it receives the requested header.
      
      3. All the old stuff around drained sections and context switching is
         dropped. In detail:
         - we don't need to move connection_co to the new AioContext, as we
           don't have connection_co anymore
         - we don't have a fake "request" for connection_co (the extra
           in_flight increase), so we don't need to care about it in
           drain_begin/end
         - we don't stop reconnection during a drained section anymore. This
           means that drain_begin may wait for a long time (up to
           reconnect_delay). But that's an improvement and more correct
           behavior; see below [*]
      
      4. In nbd_teardown_connection() we don't have to wait for
         connection_co, as it is dropped. And the cleanup for s->ioc and
         nbd_yank is moved here from the removed connection_co.
      
      5. In nbd_co_do_establish_connection() we now should handle
         NBD_CLIENT_CONNECTING_NOWAIT: if a new request comes while we are
         in NBD_CLIENT_CONNECTING_NOWAIT, it should still call
         nbd_co_establish_connection() (who knows, maybe the connection was
         already established by another thread in the background). But we
         shouldn't wait: if nbd_co_establish_connection() can't return a
         new channel immediately, the request should fail (we are in the
         NBD_CLIENT_CONNECTING_NOWAIT state).
      
      6. nbd_reconnect_attempt() is simplified: it's now easier to wait for
         other requests in the caller, so here we just assert that fact.
         Also, the delay timer is now initialized here: we can easily detect
         the first attempt and start the timer.
      
      7. nbd_co_reconnect_loop() is dropped; we don't need it. Reconnect
         retries are fully handled by the thread (nbd/client-connection.c),
         the delay timer is initialized in nbd_reconnect_attempt(), and we
         don't have to bother with s->drained and friends.
         nbd_reconnect_attempt() is now called from nbd_co_send_request().
      
      8. nbd_connection_entry is dropped: reconnection is now handled by
         nbd_co_send_request(), and receiving replies is now handled by
         nbd_receive_replies(); everything is handled from the request
         coroutines.
      
      9. So, welcome the new nbd_receive_replies(), called from the request
         coroutine, that receives the reply header instead of
         nbd_connection_entry(). As with sending requests, only one
         coroutine may receive at a time, so we introduce receive_mutex,
         which is locked around nbd_receive_reply(). It also protects some
         related fields. Still, a full audit of thread-safety in the nbd
         driver remains a separate task. The new function waits for a reply
         with the specified handle to be received, and works rather simply
         (see the sketch after this list):
      
         Under the mutex:
           - if the current handle is 0, receive by hand. If another handle
             is received, switch to that request's coroutine, release the
             mutex and yield. Otherwise return success
           - if the current handle == the requested handle, we are done
           - otherwise, release the mutex and yield
      
      10. In nbd_co_send_request() we now do nbd_reconnect_attempt() if
          needed. Also, while waiting in the free_sema queue we now wait
          for one of two conditions:
          - connectED, in_flight < MAX_NBD_REQUESTS (so we can start a new
            request)
          - connectING, in_flight == 0, so we can call
            nbd_reconnect_attempt()
          This logic is protected by s->send_mutex.
      
          Also, on failure we no longer have to care about the removed
          s->connection_co.
      
      11. nbd_co_do_receive_one_chunk(): now, instead of yielding and
          waiting for s->connection_co, we just call the new
          nbd_receive_replies().
      
      12. nbd_co_receive_one_chunk(): the place where s->reply.handle
          becomes 0, which means that handling of the whole reply is
          finished. Here we need to wake one of the coroutines sleeping in
          nbd_receive_replies(). If none are sleeping, do nothing. That's
          another behavior change: we no longer have an endless recv() in
          the idle time. It may be considered a drawback; if so, it may be
          fixed later.
      
      13. nbd_reply_chunk_iter_receive(): don't care about the removed
          connection_co, just ping the in_flight waiters.
      
      14. Don't create connection_co; instead, enable retry in the
          connection thread (we don't have our own reconnect loop anymore).
      
      15. We now need to add a nbd_co_establish_connection_cancel() call in
          nbd_cancel_in_flight(), to cancel the request that is doing a
          connection attempt.
      
      [*] OK, now we don't cancel reconnect on drain begin. That's correct:
          the reconnect feature leads to the possibility of long-running
          requests (up to the reconnect delay). Still, drain begin is not a
          reason to kill long requests; we should wait for them.
      
          This also means that we can again reproduce the deadlock
          described in 8c517de2.
          Why we are OK with it:
          1. Now this is not an absolutely dead deadlock: the vm is
             unfrozen after the reconnect delay. Actually, 8c517de2 fixed a
             bug in the NBD logic that was not described in 8c517de2 and
             led to a forever deadlock. The problem was that nobody woke
             the free_sema queue, but drain_begin can't finish until there
             is a request in the free_sema queue. Now we have a reconnect
             delay timer that works well.
          2. It's not a problem of the NBD driver but of the ide code,
             because it does drain_begin under the global mutex; the
             problem doesn't reproduce when using scsi instead of ide.
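
      Below is the sketch referenced in item 9: illustrative pseudo-C for
      the receive-side locking (the type, field and helper names such as
      read_reply_header() and wake_waiter() are assumptions, not the
      actual driver code):

          /* Wait until the reply header for `handle` has been received. */
          static int coroutine_fn receive_replies_sketch(NBDState *s,
                                                         uint64_t handle)
          {
              for (;;) {
                  qemu_co_mutex_lock(&s->receive_mutex);

                  if (s->reply_handle == handle) {
                      /* Our header already arrived (item 9, second bullet). */
                      qemu_co_mutex_unlock(&s->receive_mutex);
                      return 0;
                  }

                  if (s->reply_handle == 0) {
                      /* Nobody is receiving: read the next header ourselves. */
                      if (read_reply_header(s) < 0) {
                          qemu_co_mutex_unlock(&s->receive_mutex);
                          return -EIO;
                      }
                      if (s->reply_handle == handle) {
                          qemu_co_mutex_unlock(&s->receive_mutex);
                          return 0;
                      }
                      /* It belongs to another request: wake its coroutine
                       * (items 9 and 12), then go back to waiting. */
                      wake_waiter(s, s->reply_handle);
                  }

                  /* Someone else is receiving: sleep until woken (item 12). */
                  qemu_co_mutex_unlock(&s->receive_mutex);
                  qemu_coroutine_yield();
              }
          }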
      
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20210902103805.25686-5-vsementsov@virtuozzo.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      [eblake: grammar and comment tweaks]
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • nbd/client-connection: nbd_co_establish_connection(): fix non set errp · f7ca4aad
      Vladimir Sementsov-Ogievskiy authored
      
      When we don't have a connection and blocking is false, we return NULL
      but don't set errp. That's wrong.
      
      We have two paths for calling nbd_co_establish_connection():
      
      1. nbd_open() -> nbd_do_establish_connection() -> ...
        but that will never set blocking=false
      
      2. nbd_reconnect_attempt() -> nbd_co_do_establish_connection() -> ...
        but that uses errp=NULL
      
      So we are safe with our wrong errp policy in
      nbd_co_establish_connection(). Still, let's fix it.
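
      A sketch of the fix being described (the surrounding function and
      helper names are assumptions; only the error_setg() idea is the
      point):

          /* Non-blocking path: fail with errp set instead of a bare NULL. */
          if (!blocking && !connection_ready(conn)) {
              error_setg(errp, "No connection at the moment");
              return NULL;
          }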
      
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20210906190654.183421-2-vsementsov@virtuozzo.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • nbd/server: Allow LIST_META_CONTEXT without STRUCTURED_REPLY · da24597d
      Eric Blake authored
      The NBD protocol just relaxed the requirements on
      NBD_OPT_LIST_META_CONTEXT:
      
      https://github.com/NetworkBlockDevice/nbd/commit/13a4e33a87
      
      Since listing is not stateful (unlike SET_META_CONTEXT), we don't care
      if a client asks for meta contexts without first requesting structured
      replies.  Well-behaved clients will still ask for structured reply
      first (if for no other reason than for back-compat to older servers),
      but that's no reason to avoid this change.
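
      Sketched, the relaxed server-side check looks something like this
      (illustrative helper names; structured_reply is assumed to be the
      per-client negotiation flag):

          /* Structured replies are still required for SET_META_CONTEXT,
           * but no longer for the stateless LIST_META_CONTEXT. */
          if (opt == NBD_OPT_SET_META_CONTEXT && !client->structured_reply) {
              return reject_option(client, NBD_REP_ERR_INVALID,
                                   "structured replies not negotiated");
          }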
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20210907173505.1499709-1-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  5. Jul 26, 2021
  6. Jun 18, 2021
  7. Jun 02, 2021
  8. Mar 08, 2021
    • nbd: server: Report holes for raw images · 0da98568
      Nir Soffer authored
      When querying image extents for a raw image, qemu-nbd reports holes
      as zero:
      
      $ qemu-nbd -t -r -f raw empty-6g.raw
      
      $ qemu-img map --output json nbd://localhost
      [{ "start": 0, "length": 6442450944, "depth": 0, "zero": true, "data": true, "offset": 0}]
      
      $ qemu-img map --output json empty-6g.raw
      [{ "start": 0, "length": 6442450944, "depth": 0, "zero": true, "data": false, "offset": 0}]
      
      Turns out that qemu-img map reports a hole based on BDRV_BLOCK_DATA, but
      nbd server reports a hole based on BDRV_BLOCK_ALLOCATED.
      
      The NBD protocol says:
      
          NBD_STATE_HOLE (bit 0): if set, the block represents a hole (and
          future writes to that area may cause fragmentation or encounter an
          NBD_ENOSPC error); if clear, the block is allocated or the server
          could not otherwise determine its status.
      
      qemu-img manual says:
      
          whether the sectors contain actual data or not (boolean field data;
          if false, the sectors are either unallocated or stored as
          optimized all-zero clusters);
      
      To me, data=false looks compatible with NBD_STATE_HOLE. From the
      user's point of view, getting the same results from qemu-nbd and
      qemu-img is more important than being more correct about allocation
      status.
      
      Changing the nbd server to report holes based on BDRV_BLOCK_DATA
      makes the qemu-nbd results compatible with qemu-img map:
      
      $ qemu-img map --output json nbd://localhost
      [{ "start": 0, "length": 6442450944, "depth": 0, "zero": true, "data": false, "offset": 0}]
      
      Signed-off-by: Nir Soffer <nsoffer@redhat.com>
      Message-Id: <20210219160752.1826830-1-nsoffer@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
  9. Jan 20, 2021
    • nbd/server: Quiesce coroutines on context switch · f148ae7d
      Sergio Lopez authored
      When switching between AIO contexts we need to make sure that both
      recv_coroutine and send_coroutine are not scheduled to run. Otherwise,
      QEMU may crash while attaching the new context with an error like
      this one:
      
      aio_co_schedule: Co-routine was already scheduled in 'aio_co_schedule'
      
      To achieve this we need a local implementation of
      'qio_channel_readv_all_eof' named 'nbd_read_eof' (a trick already done
      by 'nbd/client.c') that allows us to interrupt the operation and to
      know when recv_coroutine is yielding.
      
      With this in place, we delegate detaching the AIO context to the
      owning context with a BH ('nbd_aio_detach_bh') scheduled using
      'aio_wait_bh_oneshot'. This BH signals that we need to quiesce the
      channel by setting 'client->quiescing' to 'true', and either waits for
      the coroutine to finish using AIO_WAIT_WHILE or, if it's yielding in
      'nbd_read_eof', actively enters the coroutine to interrupt it.
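
      A rough sketch of the quiescing step described above (simplified
      pseudo-C; field names like read_yielding are assumptions based on
      this description, not the exact patch):

          static void detach_bh_sketch(void *opaque)
          {
              NBDClient *client = opaque;

              client->quiescing = true;

              if (client->recv_coroutine) {
                  if (client->read_yielding) {
                      /* Parked inside nbd_read_eof(): enter it so it can
                       * notice quiescing and yield back to us. */
                      qemu_coroutine_enter(client->recv_coroutine);
                  } else {
                      /* Otherwise just wait for the coroutine to finish. */
                      AIO_WAIT_WHILE(client->ctx,
                                     client->recv_coroutine != NULL);
                  }
              }
          }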
      
      RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1900326
      
      Signed-off-by: Sergio Lopez <slp@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20201214170519.223781-4-slp@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
  10. Nov 17, 2020
  11. Nov 16, 2020
    • nbd: Silence Coverity false positive · c0b21f2e
      Eric Blake authored
      
      Coverity noticed (CID 1436125) that we check the return value of
      nbd_extent_array_add in most places, but not at the end of
      bitmap_to_extents().  The return value exists to break loops before a
      future iteration, so there is nothing to check if we are already done
      iterating.  Adding a cast to void, plus a comment why, pacifies
      Coverity.
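
      The pattern in question, sketched (argument list abbreviated and
      illustrative):

          /* Coverity: the return value only matters when another iteration
           * could follow; this is the final add, so ignore it explicitly. */
          (void) nbd_extent_array_add(ea, length, flags);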
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20201111163510.713855-1-eblake@redhat.com>
      [eblake: Prefer cast to void over odd && usage]
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
  12. Oct 30, 2020
    • nbd: Add 'qemu-nbd -A' to expose allocation depth · dbc7b014
      Eric Blake authored
      
      Allow the server to expose an additional metacontext to be requested
      by savvy clients.  qemu-nbd adds a new option -A to expose the
      qemu:allocation-depth metacontext through NBD_CMD_BLOCK_STATUS; this
      can also be set via QMP when using block-export-add.
      
      qemu as client is hacked into viewing the key aspects of this new
      context by abusing the already-experimental x-dirty-bitmap option to
      collapse all depths greater than 2, which results in a tri-state value
      visible in the output of 'qemu-img map --output=json' (yes, that means
      x-dirty-bitmap is now a bit of a misnomer, but I didn't feel like
      renaming it as it would introduce a needless break of back-compat,
      even though we make no compat guarantees with x- members):
      
      unallocated (depth 0) => "zero":false, "data":true
      local (depth 1)       => "zero":false, "data":false
      backing (depth 2+)    => "zero":true,  "data":true
      
      libnbd as client is probably a nicer way to get at the information
      without having to decipher such hacks in qemu as client. ;)
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20201027050556.269064-11-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd: Add new qemu:allocation-depth metadata context · 71719cd5
      Eric Blake authored
      
      'qemu-img map' provides a way to determine which extents of an image
      come from the top layer vs. inherited from a backing chain.  This is
      useful information worth exposing over NBD.  There is a proposal to
      add a QMP command block-dirty-bitmap-populate which can create a
      dirty bitmap that reflects allocation information, at which point the
      qemu:dirty-bitmap:NAME metadata context could expose that information
      via a temporary bitmap.  But we can shorten the effort by adding a
      new qemu:allocation-depth metadata context that does the same thing
      without an intermediate bitmap (this patch does not eliminate the
      need for that proposal, as it will have other uses as well).
      
      While documenting things, remember that although the NBD protocol has
      NBD_OPT_SET_META_CONTEXT, the rest of its documentation refers to
      'metadata context', which is a more apt description of what is
      actually being used by NBD_CMD_BLOCK_STATUS: the user is requesting
      metadata by passing one or more context names.  So I also touched up
      some existing wording to prefer the term 'metadata context' where it
      makes sense.
      
      Note that this patch does not actually enable any way to request a
      server to enable this context; that will come in the next patch.
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20201027050556.269064-10-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd: Allow export of multiple bitmaps for one device · 3b1f244c
      Eric Blake authored
      
      With this, 'qemu-nbd -B b0 -B b1 -f qcow2 img.qcow2' can let you sniff
      out multiple bitmaps from one server.  qemu-img as client can still
      only read one bitmap per client connection, but other NBD clients
      (hello libnbd) can now read multiple bitmaps in a single pass.
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20201027050556.269064-8-eblake@redhat.com>
    • nbd: Refactor counting of metadata contexts · 47ec485e
      Eric Blake authored
      
      Rather than open-code the count of negotiated contexts at several
      sites, embed it directly into the struct.  This will make it easier
      for upcoming commits to support even more simultaneous contexts.
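
      A minimal sketch of the refactoring direction (the struct and field
      names are illustrative, not the exact patch):

          typedef struct NBDMetaContexts {
              bool base_allocation;   /* base:allocation negotiated? */
              bool bitmap;            /* qemu:dirty-bitmap:... negotiated? */
              size_t count;           /* total negotiated contexts, kept in
                                       * sync instead of recomputed per site */
          } NBDMetaContexts;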
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20201027050556.269064-7-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • nbd: Simplify qemu bitmap context name · 02e87e3b
      Eric Blake authored
      
      Each dirty bitmap already knows its name; by reducing the scope of the
      places where we construct "qemu:dirty-bitmap:NAME" strings, tracking
      the name is more localized, and there are fewer per-export fields to
      worry about.  This in turn will make it easier for an upcoming patch
      to export more than one bitmap at once.
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20201027050556.269064-6-eblake@redhat.com>
    • nbd: Update qapi to support exporting multiple bitmaps · cbad81ce
      Eric Blake authored
      
      Since 'block-export-add' is new to 5.2, we can still tweak the
      interface; there, allowing 'bitmaps':['str'] is nicer than
      'bitmap':'str'.  This wires up the qapi and qemu-nbd changes to permit
      passing multiple bitmaps as distinct metadata contexts that the NBD
      client may request, but the actual support for more than one will
      require a further patch to the server.
      
      Note that there are no changes made to the existing deprecated
      'nbd-server-add' command; this required splitting the QAPI type
      BlockExportOptionsNbd, which fortunately does not affect QMP
      introspection.
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20201027050556.269064-5-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Peter Krempa <pkrempa@redhat.com>
  13. Oct 23, 2020
    • block/export: add iothread and fixed-iothread options · f51d23c8
      Stefan Hajnoczi authored
      
      Make it possible to specify the iothread where the export will run. By
      default the block node can be moved to other AioContexts later and the
      export will follow. The fixed-iothread option forces strict behavior
      that prevents changing AioContext while the export is active. See the
      QAPI docs for details.
      
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-id: 20200929125516.186715-5-stefanha@redhat.com
      [Fix stray '#' character in block-export.json and add missing "(since:
      5.2)" as suggested by Eric Blake.
      --Stefan]
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
    • block: move block exports to libblockdev · cbc20bfb
      Stefan Hajnoczi authored
      
      Block exports are used by softmmu, qemu-storage-daemon, and qemu-nbd.
      They are not used by other programs and are not otherwise needed in
      libblock.
      
      Undo the recent move of blockdev-nbd.c from blockdev_ss into block_ss.
      Since bdrv_close_all() (libblock) calls blk_exp_close_all()
      (libblockdev), a stub function is required.
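
      The stub in question can be as small as this (a sketch; the file name
      and exact location in the tree are assumptions):

          /* stubs/blk-exp-close-all.c: linked when libblockdev is absent. */
          #include "qemu/osdep.h"
          #include "block/export.h"

          void blk_exp_close_all(void)
          {
              /* nothing to do: no block exports without libblockdev */
          }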
      
      Make qemu-nbd.c use signal handling utility functions instead of
      duplicating the code. This helps because os-posix.c is in libblockdev
      and it depends on a qemu_system_killed() symbol that qemu-nbd.c lacks.
      Once we use the signal handling utility functions we also end up
      providing the necessary symbol.
      
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Message-id: 20200929125516.186715-4-stefanha@redhat.com
      [Fixed s/ndb/nbd/ typo in commit description as suggested by Eric Blake
      --Stefan]
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
  14. Oct 09, 2020
    • nbd: Simplify meta-context parsing · ebd57062
      Eric Blake authored
      
      We had a premature optimization of trying to read as little from the
      wire as possible while handling NBD_OPT_SET_META_CONTEXT in phases.
      But in reality, we HAVE to read the entire string from the client
      before we can get to the next command, and it is easier to just read
      it all at once than it is to read it in pieces.  And once we do that,
      several functions end up no longer performing I/O, so they can drop
      length and errp parameters, and just return a bool instead of
      modifying through a pointer.
      
      Our iotests still pass; I also checked that libnbd's testsuite (which
      covers more corner cases of odd meta context requests) still passes.
      There are cases where the sequence of trace messages produced differs
      (for example, when no bitmap is exported, a query for "qemu:" now
      produces two trace lines instead of one), but trace points are for
      debug and have no effect on what the client sees.
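
      A sketch of the "read it all at once" approach (read_full() is a
      hypothetical helper; the real server uses its own option-reading
      functions):

          /* Read a length-prefixed NBD string in one go instead of phases. */
          static char *read_nbd_string(QIOChannel *ioc, Error **errp)
          {
              uint32_t len;
              char *buf;

              if (read_full(ioc, &len, sizeof(len), errp) < 0) {
                  return NULL;
              }
              len = be32_to_cpu(len);
              buf = g_malloc(len + 1);
              if (read_full(ioc, buf, len, errp) < 0) {
                  g_free(buf);
                  return NULL;
              }
              buf[len] = '\0';
              return buf;
          }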
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20200930121105.667049-4-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      [eblake: enhance commit message]
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • nbd/server: Reject embedded NUL in NBD strings · d1e2c3e7
      Eric Blake authored
      
      The NBD spec is clear that any string sent from the client must not
      contain embedded NUL characters.  If the client passes "a\0", we
      should reject that option request rather than act on "a".
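
      A sketch of the kind of check involved (placement and surrounding
      code are assumptions):

          /* Reject any client-supplied string with an embedded NUL byte. */
          if (memchr(name, '\0', namelen) != NULL) {
              error_setg(errp, "NUL byte embedded in client-supplied string");
              return -EINVAL;
          }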
      
      Testing this is not possible with a compliant client, but I was able
      to use gdb to coerce libnbd into temporarily behaving as such a
      client.
      
      Signed-off-by: Eric Blake <eblake@redhat.com>
      Message-Id: <20200930121105.667049-3-eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
    • nbd: silence maybe-uninitialized warnings · bbc35fc2
      Christian Borntraeger authored
      
      gcc 10 from Fedora 32 gives me:
      
      Compiling C object libblock.fa.p/nbd_server.c.o
      ../nbd/server.c: In function ‘nbd_co_client_start’:
      ../nbd/server.c:625:14: error: ‘namelen’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
        625 |         rc = nbd_negotiate_send_info(client, NBD_INFO_NAME, namelen, name,
            |              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        626 |                                      errp);
            |                                      ~~~~~
      ../nbd/server.c:564:14: note: ‘namelen’ was declared here
        564 |     uint32_t namelen;
            |              ^~~~~~~
      cc1: all warnings being treated as errors
      
      As I cannot see how this can happen, let us silence the warning.
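
      The usual way to pacify this warning is a redundant initialization,
      presumably along these lines (a sketch of the fix, not necessarily
      the exact hunk):

          uint32_t namelen = 0;   /* explicitly initialized only to silence
                                   * gcc's maybe-uninitialized warning */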
      
      Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
      Message-Id: <20200930155859.303148-3-borntraeger@de.ibm.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Eric Blake <eblake@redhat.com>
  15. Oct 02, 2020