  1. Dec 28, 2021
    • blockjob: drop BlockJob.blk field · 985cac8f
      Vladimir Sementsov-Ogievskiy authored
      
      It is unused now (except for permission handling)[*]. The only
      reasonable user of it was the block-stream job, which was recently
      updated to use its own blk. Other block jobs prefer to use their own
      source-node-related objects.
      
      So, the arguments for dropping the field are:

       - block jobs prefer not to use it
       - block jobs usually have more than one node to operate on, and it is
         better to handle them symmetrically (for example, keeping both
         source and target blk's in the job-specific state structure)
      
      *: BlockJob.blk is used to keep some permissions. We simply move these
      permissions to the block-job child that is created in
      block_job_create() together with blk.
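
      A minimal sketch of the idea, assuming block_job_add_bdrv() keeps its
      usual signature (the child name and error path are illustrative only):

        /* In block_job_create(): attach the main node as a job child that
         * carries the permissions formerly held through BlockJob.blk. */
        if (block_job_add_bdrv(job, "main node", bs, perm, shared_perm,
                               errp) < 0) {
            goto fail;   /* hypothetical error path in block_job_create() */
        }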
      
      In mirror, we simply should not care anymore about restoring the state
      of blk. Most probably this code could have been dropped long ago, after
      the bs->job pointer was dropped. Now it finally goes away together with
      BlockJob.blk itself.
      
      The iotest 141 output is updated, as the "bdrv_has_blk(bs)" check in
      qmp_blockdev_del() no longer fails (we don't have a blk now). Still,
      the new error message looks even better.
      
      In iotest 283 we need to add a job ID, otherwise "Invalid job ID" now
      happens earlier than the permission check (as permissions moved from
      blk to the block-job node).
      
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Nikita Lapshin <nikita.lapshin@virtuozzo.com>
    • blockjob: implement and use block_job_get_aio_context · df9a3165
      Vladimir Sementsov-Ogievskiy authored
      
      We are going to drop BlockJob.blk, so let's retrieve the block job's
      AioContext from the underlying job instead of from the main node.
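
      A minimal sketch of what such a helper could look like, assuming the
      embedded Job keeps its aio_context field up to date:

        AioContext *block_job_get_aio_context(BlockJob *job)
        {
            return job->job.aio_context;
        }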
      
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Nikita Lapshin <nikita.lapshin@virtuozzo.com>
  2. Jun 25, 2021
  3. May 04, 2021
    • ratelimit: protect with a mutex · 4951967d
      Paolo Bonzini authored
      
      Right now, rate limiting is protected by the AioContext mutex, which is
      taken for example both by the block jobs and by qmp_block_job_set_speed
      (via find_block_job).
      
      We would like to remove the dependency of block layer code on the
      AioContext mutex, since most drivers and the core I/O code are already
      not relying on it.  However, there is no existing lock that can easily
      be taken by both ratelimit_set_speed and ratelimit_calculate_delay,
      especially because the latter might run in coroutine context (and
      therefore under a CoMutex) but the former will not.
      
      Since concurrent calls to ratelimit_calculate_delay are not possible,
      one idea could be to use a seqlock to get a snapshot of slice_ns and
      slice_quota.  But for now keep it simple, and just add a mutex to the
      RateLimit struct; block jobs are generally not performance critical to
      the point of optimizing the clock cycles spent in synchronization.
      
      This also requires the introduction of init/destroy functions, so
      add them to the two users of ratelimit.h.
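
      A rough sketch of the change described above (field names are
      illustrative, not necessarily the exact QEMU code):

        typedef struct {
            QemuMutex lock;
            int64_t slice_start_time;
            int64_t slice_end_time;
            uint64_t slice_quota;
            uint64_t slice_ns;
            uint64_t dispatched;
        } RateLimit;

        static inline void ratelimit_init(RateLimit *limit)
        {
            qemu_mutex_init(&limit->lock);
        }

        static inline void ratelimit_destroy(RateLimit *limit)
        {
            qemu_mutex_destroy(&limit->lock);
        }

      With this, ratelimit_set_speed() and ratelimit_calculate_delay() can
      take limit->lock around their updates instead of relying on the
      AioContext lock.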
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. Apr 30, 2021
  5. Mar 08, 2021
  6. Feb 15, 2021
    • blockjob: Fix crash with IOthread when block commit after snapshot · 076d467a
      Michael Qiu authored
      
      Currently, if the guest has workloads, the I/O thread will acquire the
      aio_context lock before doing io_submit, which leads to a segmentation
      fault when doing a block commit after a snapshot. Like below:
      
      Program received signal SIGSEGV, Segmentation fault.
      
      [Switching to Thread 0x7f7c7d91f700 (LWP 99907)]
      0x00005576d0f65aab in bdrv_mirror_top_pwritev at ../block/mirror.c:1437
      1437    ../block/mirror.c: No such file or directory.
      (gdb) p s->job
      $17 = (MirrorBlockJob *) 0x0
      (gdb) p s->stop
      $18 = false
      
      Call trace of IO thread:
      0  0x00005576d0f65aab in bdrv_mirror_top_pwritev at ../block/mirror.c:1437
      1  0x00005576d0f7f3ab in bdrv_driver_pwritev at ../block/io.c:1174
      2  0x00005576d0f8139d in bdrv_aligned_pwritev at ../block/io.c:1988
      3  0x00005576d0f81b65 in bdrv_co_pwritev_part at ../block/io.c:2156
      4  0x00005576d0f8e6b7 in blk_do_pwritev_part at ../block/block-backend.c:1260
      5  0x00005576d0f8e84d in blk_aio_write_entry at ../block/block-backend.c:1476
      ...
      
      Switch to qemu main thread:
      0  0x00007f903be704ed in __lll_lock_wait at
      /lib/../lib64/libpthread.so.0
      1  0x00007f903be6bde6 in _L_lock_941 at /lib/../lib64/libpthread.so.0
      2  0x00007f903be6bcdf in pthread_mutex_lock at
      /lib/../lib64/libpthread.so.0
      3  0x0000564b21456889 in qemu_mutex_lock_impl at
      ../util/qemu-thread-posix.c:79
      4  0x0000564b213af8a5 in block_job_add_bdrv at ../blockjob.c:224
      5  0x0000564b213b00ad in block_job_create at ../blockjob.c:440
      6  0x0000564b21357c0a in mirror_start_job at ../block/mirror.c:1622
      7  0x0000564b2135a9af in commit_active_start at ../block/mirror.c:1867
      8  0x0000564b2133d132 in qmp_block_commit at ../blockdev.c:2768
      9  0x0000564b2141fef3 in qmp_marshal_block_commit at
      qapi/qapi-commands-block-core.c:346
      10 0x0000564b214503c9 in do_qmp_dispatch_bh at
      ../qapi/qmp-dispatch.c:110
      11 0x0000564b21451996 in aio_bh_poll at ../util/async.c:164
      12 0x0000564b2146018e in aio_dispatch at ../util/aio-posix.c:381
      13 0x0000564b2145187e in aio_ctx_dispatch at ../util/async.c:306
      14 0x00007f9040239049 in g_main_context_dispatch at
      /lib/../lib64/libglib-2.0.so.0
      15 0x0000564b21447368 in main_loop_wait at ../util/main-loop.c:232
      16 0x0000564b21447368 in main_loop_wait at ../util/main-loop.c:255
      17 0x0000564b21447368 in main_loop_wait at ../util/main-loop.c:531
      18 0x0000564b212304e1 in qemu_main_loop at ../softmmu/runstate.c:721
      19 0x0000564b20f7975e in main at ../softmmu/main.c:50
      
      In the I/O thread, when bdrv_mirror_top_pwritev() runs, the job is NULL
      and the stop field is false. This means the MirrorBDSOpaque "s" object
      has not been initialized yet; it is initialized by block_job_create(),
      but that initialization is stuck acquiring the lock.
      
      In this situation, the I/O thread reaches bdrv_mirror_top_pwritev(),
      which means that the mirror-top node is already inserted into the block
      graph, but its bs->opaque->job is not initialized yet.
      
      The root cause is that the QEMU main thread releases and re-acquires
      the AioContext lock while it is supposed to hold it; the I/O thread
      grabs the lock during that released window, and the crash occurs.
      
      Actually, in this situation, job->job.aio_context is not equal to
      qemu_get_aio_context() but is the same as bs->aio_context; thus, there
      is no need to release the lock, because bdrv_root_attach_child() will
      not change the context.
      
      This patch fixes the issue.
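
      A rough sketch of the resulting logic in block_job_add_bdrv() (treat
      the exact code as illustrative, not the merged patch):

        /* Only drop the job's AioContext lock if attaching the child may
         * actually have to move bs into a different context; otherwise
         * keep it held so the I/O thread cannot run before s->job is set. */
        bool need_context_ops = bdrv_get_aio_context(bs) != job->job.aio_context;

        if (need_context_ops && job->job.aio_context != qemu_get_aio_context()) {
            aio_context_release(job->job.aio_context);
        }
        c = bdrv_root_attach_child(bs, name, &child_job, 0,
                                   job->job.aio_context, perm, shared_perm,
                                   job, errp);
        if (need_context_ops && job->job.aio_context != qemu_get_aio_context()) {
            aio_context_acquire(job->job.aio_context);
        }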
      
      Fixes: 132ada80 "block: Adjust AioContexts when attaching nodes"
      
      Signed-off-by: Michael Qiu <qiudayu@huayun.com>
      Message-Id: <20210203024059.52683-1-08005325@163.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  7. Jan 26, 2021
  8. Sep 23, 2020
    • qemu/atomic.h: rename atomic_ to qatomic_ · d73415a3
      Stefan Hajnoczi authored
      
      clang's C11 atomic_fetch_*() functions only take a pointer to a C11
      atomic type as their argument. QEMU uses plain types (int, etc.), and
      this causes a compiler error when QEMU code calls these functions in a
      source file that also includes <stdatomic.h> via a system header file:
      
        $ CC=clang CXX=clang++ ./configure ... && make
        ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)
      
      Avoid using atomic_*() names in QEMU's atomic.h since that namespace is
      used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h
      and <stdatomic.h> can co-exist. I checked /usr/include on my machine and
      searched GitHub for existing "qatomic_" users but there seem to be none.
      
      This patch was generated using:
      
        $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
          sort -u >/tmp/changed_identifiers
        $ for identifier in $(</tmp/changed_identifiers); do
              sed -i "s%\<$identifier\>%q$identifier%g" \
                  $(git grep -I -l "\<$identifier\>")
          done
      
      I manually fixed line-wrap issues and misaligned rST tables.
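
      For illustration, a typical call site changes like this (hedged
      example, not taken from a particular file):

        /* before */
        atomic_set(&s->quit, true);
        running = !atomic_read(&s->quit);

        /* after the mechanical rename */
        qatomic_set(&s->quit, true);
        running = !qatomic_read(&s->quit);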
      
      Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Acked-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20200923105646.47864-1-stefanha@redhat.com>
  9. May 18, 2020
  10. May 05, 2020
  11. Mar 11, 2020
  12. Dec 18, 2019
  13. Sep 16, 2019
  14. Sep 10, 2019
  15. Aug 16, 2019
    • Include qemu/main-loop.h less · db725815
      Markus Armbruster authored
      
      In my "build everything" tree, changing qemu/main-loop.h triggers a
      recompile of some 5600 out of 6600 objects (not counting tests and
      objects that don't depend on qemu/osdep.h).  It includes block/aio.h,
      which in turn includes qemu/event_notifier.h, qemu/notify.h,
      qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h,
      qemu/thread.h, qemu/timer.h, and a few more.
      
      Include qemu/main-loop.h only where it's needed.  Touching it now
      recompiles only some 1700 objects.  For block/aio.h and
      qemu/event_notifier.h, these numbers drop from 5600 to 2800.  For the
      others, they shrink only slightly.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20190812052359.30071-21-armbru@redhat.com>
      Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    • block-backend: Queue requests while drained · cf312932
      Kevin Wolf authored
      
      This fixes devices like IDE that can still start new requests from I/O
      handlers in the CPU thread while the block backend is drained.
      
      The basic assumption is that in a drain section, no new requests should
      be allowed through a BlockBackend (blk_drained_begin/end don't exist,
      we get drain sections only on the node level). However, there are two
      special cases where requests should not be queued:
      
      1. Block jobs: We already make sure that block jobs are paused in a
         drain section, so they won't start new requests. However, if the
         drain_begin is called on the job's BlockBackend first, it can happen
         that we deadlock because the job stays busy until it reaches a pause
         point - which it can't if its requests aren't processed any more.
      
         The proper solution here would be to make all requests go through
         the job's filter node instead of using a BlockBackend. For now, just
         disabling request queuing on the job BlockBackend is simpler (see
         the sketch after this list).

      2. In test cases where making requests through bdrv_* would be
         cumbersome because we'd need a BdrvChild. Since we already have the
         functionality to disable request queuing from case 1, use it in
         tests, too, for convenience.
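
      As a rough usage sketch for case 1, assuming the helper introduced here
      is blk_set_disable_request_queuing() (details illustrative):

        BlockBackend *blk = blk_new(job->job.aio_context, perm, shared_perm);

        /* Opt the job's BlockBackend out of request queuing so the job can
         * still drain itself to a pause point. */
        blk_set_disable_request_queuing(blk, true);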
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
  16. Jul 19, 2019
    • block: Do not poll in bdrv_do_drained_end() · e037c09c
      Hanna Reitz authored
      
      We should never poll anywhere in bdrv_do_drained_end() (including its
      recursive callees like bdrv_drain_invoke()), because it does not cope
      well with graph changes.  In fact, it has been written based on the
      postulation that no graph changes will happen in it.
      
      Instead, the callers that want to poll must poll, i.e. all currently
      globally available wrappers: bdrv_drained_end(),
      bdrv_subtree_drained_end(), bdrv_unapply_subtree_drain(), and
      bdrv_drain_all_end().  Graph changes there do not matter.
      
      They can poll simply by passing a pointer to a drained_end_counter and
      wait until it reaches 0.
      
      This patch also adds a non-polling global wrapper for
      bdrv_do_drained_end() that takes a drained_end_counter pointer.  We need
      such a variant because now no function called anywhere from
      bdrv_do_drained_end() must poll.  This includes
      BdrvChildRole.drained_end(), which already must not poll according to
      its interface documentation, but bdrv_child_cb_drained_end() just
      violates that by invoking bdrv_drained_end() (which does poll).
      Therefore, BdrvChildRole.drained_end() must take a *drained_end_counter
      parameter, which bdrv_child_cb_drained_end() can pass on to the new
      bdrv_drained_end_no_poll() function.
      
      Note that we now have a pattern of all drained_end-related functions
      either polling or receiving a *drained_end_counter to let the caller
      poll based on that.
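
      The caller-side pattern then looks roughly like this (a sketch, not the
      exact code):

        int drained_end_counter = 0;

        /* Non-polling variant: every finished drained_end callback
         * decrements the counter again. */
        bdrv_drained_end_no_poll(bs, &drained_end_counter);

        /* Polling wrappers wait until all pending callbacks are done. */
        BDRV_POLL_WHILE(bs, atomic_read(&drained_end_counter) > 0);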
      
      A problem with a single poll loop is that when the drained section in
      bdrv_set_aio_context_ignore() ends, some nodes in the subgraph may be in
      the old contexts, while others are in the new context already.  To let
      the collective poll in bdrv_drained_end() work correctly, we must not
      hold a lock to the old context, so that the old context can make
      progress in case it is different from the current context.
      
      (In the process, remove the comment saying that the current context is
      always the old context, because it is wrong.)
      
      In all other places, all nodes in a subtree must be in the same context,
      so we can just poll that.  The exception of course is
      bdrv_drain_all_end(), but that always runs in the main context, so we
      can just poll NULL (like bdrv_drain_all_begin() does).
      
      Signed-off-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  17. Jun 18, 2019
  18. Jun 12, 2019
    • Include qemu-common.h exactly where needed · a8d25326
      Markus Armbruster authored
      
      No header includes qemu-common.h after this commit, as prescribed by
      qemu-common.h's file comment.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20190523143508.25387-5-armbru@redhat.com>
      [Rebased with conflicts resolved automatically, except for
      include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c
      block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c
      target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h
      target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h
      target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h
      target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and
      net/tap-bsd.c fixed up]
  19. Jun 04, 2019
    • block: Adjust AioContexts when attaching nodes · 132ada80
      Kevin Wolf authored
      
      So far, we only made sure that updating the AioContext of a node
      affected the whole subtree. However, if a node is newly attached to a
      new parent, we also need to make sure that both the subtree of the node
      and the parent are in the same AioContext. The attach operation
      therefore tries to move the new child node to the parent's AioContext
      and returns an error if this isn't possible.
      
      BlockBackends now actually apply their AioContext to their root node.
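
      Sketched as the idea only (exact call site and error handling are
      illustrative):

        /* When attaching child_bs to a parent that lives in ctx, first try
         * to move the child's subtree into that same context. */
        if (bdrv_get_aio_context(child_bs) != ctx) {
            if (bdrv_try_set_aio_context(child_bs, ctx, errp) < 0) {
                return NULL;   /* attaching fails if contexts can't match */
            }
        }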
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • block: Add BlockBackend.ctx · d861ab3a
      Kevin Wolf authored
      
      This adds a new parameter to blk_new() which requires its callers to
      declare from which AioContext this BlockBackend is going to be used (or
      the locks of which AioContext need to be taken anyway).
      
      The given context is only stored and kept up to date when changing
      AioContexts. Actually applying the stored AioContext to the root node
      is saved for another commit.
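
      A hedged usage sketch of the new signature (permission constants chosen
      for illustration):

        /* Callers now declare up front which AioContext the BlockBackend
         * will be used from; here, the main loop context. */
        BlockBackend *blk = blk_new(qemu_get_aio_context(),
                                    BLK_PERM_ALL, BLK_PERM_ALL);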
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  20. May 28, 2019
    • block: Make bdrv_root_attach_child() unref child_bs on failure · b441dc71
      Alberto Garcia authored
      
      A consequence of the previous patch is that bdrv_attach_child()
      transfers the reference to child_bs from the caller to parent_bs,
      which will drop it on bdrv_close() or when someone calls
      bdrv_unref_child().
      
      But this only happens when bdrv_attach_child() succeeds. If it fails
      then the caller is responsible for dropping the reference to child_bs.
      
      This patch makes bdrv_attach_child() take the reference also when there
      is an error, freeing the caller from having to do it.
      
      A similar situation happens with bdrv_root_attach_child(), so the
      changes on this patch affect both functions.
      
      Signed-off-by: Alberto Garcia <berto@igalia.com>
      Message-id: 20dfb3d9ccec559cdd1a9690146abad5d204a186.1557754872.git.berto@igalia.com
      [mreitz: Removed now superfluous BdrvChild * variable in
               bdrv_open_child()]
      Signed-off-by: Max Reitz <mreitz@redhat.com>
  21. May 20, 2019
    • blockjob: Remove AioContext notifiers · 657e1203
      Kevin Wolf authored
      
      The notifiers made sure that the job is quiesced and that the
      job->aio_context field is updated. The first part is unnecessary today
      since bdrv_set_aio_context_ignore() drains the block node, and this
      means draining the block job, too. The second part can be done in the
      .set_aio_ctx callback of the block job BdrvChildRole.
      
      The notifiers were problematic because they poll the AioContext while
      the graph is in an inconsistent state with some nodes already in the new
      context, but others still in the old context. So removing the notifiers
      not only simplifies the code, but actually makes the code safer.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • blockjob: Propagate AioContext change to all job nodes · 9ff7f0df
      Kevin Wolf authored
      
      Block jobs require that all of the nodes the job is using are in the
      same AioContext. Therefore all BdrvChild objects of the job propagate
      .(can_)set_aio_context to all other job nodes, so that the switch is
      checked and performed consistently even if both nodes are in different
      subtrees.
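
      Roughly, the propagation looks like this (a sketch, not necessarily the
      exact code):

        static void child_job_set_aio_ctx(BdrvChild *c, AioContext *ctx,
                                          GSList **ignore)
        {
            BlockJob *job = c->opaque;
            GSList *l;

            for (l = job->nodes; l; l = l->next) {
                BdrvChild *sibling = l->data;
                if (g_slist_find(*ignore, sibling)) {
                    continue;
                }
                *ignore = g_slist_prepend(*ignore, sibling);
                bdrv_set_aio_context_ignore(sibling->bs, ctx, ignore);
            }

            job->job.aio_context = ctx;
        }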
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  22. Mar 19, 2019
  23. Sep 25, 2018
    • block: Use a single global AioWait · cfe29d82
      Kevin Wolf authored
      
      When draining a block node, we recurse to its parent and for subtree
      drains also to its children. A single AIO_WAIT_WHILE() is then used to
      wait for bdrv_drain_poll() to become true, which depends on all of the
      nodes we recursed to. However, if the respective child or parent becomes
      quiescent and calls bdrv_wakeup(), only the AioWait of the child/parent
      is checked, while AIO_WAIT_WHILE() depends on the AioWait of the
      original node.
      
      Fix this by using a single AioWait for all callers of AIO_WAIT_WHILE().
      
      This may mean that the draining thread gets a few more unnecessary
      wakeups because an unrelated operation got completed, but we already
      wake it up when something _could_ have changed rather than only if it
      has certainly changed.
      
      Apart from that, drain is a slow path anyway. In theory it would be
      possible to use wakeups more selectively and still correctly, but the
      gains are likely not worth the additional complexity. In fact, this
      patch is a nice simplification for some places in the code.
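
      For reference, the waiting pattern used by drain looks roughly like
      this (the condition shown is illustrative):

        AioContext *ctx = bdrv_get_aio_context(bs);

        /* The condition is re-evaluated after every wakeup; with this patch,
         * any wakeup hits the one global AioWait, so it is enough to kick
         * the waiter no matter which node became quiescent. */
        AIO_WAIT_WHILE(ctx, bdrv_drain_poll(bs, false, NULL, false));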
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • blockjob: Lie better in child_job_drained_poll() · b5a7a057
      Kevin Wolf authored
      
      Block jobs claim in .drained_poll() that they are in a quiescent state
      as soon as job->deferred_to_main_loop is true. This is obviously wrong:
      they still have a completion BH to run. We only get away with this
      because commit 91af091f added an unconditional aio_poll(false) to the
      drain functions, but this is bypassing the regular drain mechanisms.

      However, just removing this and reporting that the job is still active
      doesn't work either: the completion callbacks themselves call drain
      functions (directly, or indirectly via bdrv_reopen), so they would then
      deadlock.

      As a better lie, report that the job is active as long as the BH is
      pending, but falsely call it quiescent from the point in the BH where
      the completion callback is called. At this point, nested drain calls
      won't deadlock because they ignore the job, and outer drains will wait
      for the job to really reach a quiescent state because the callback is
      already running.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • blockjob: Wake up BDS when job becomes idle · 34dc97b9
      Kevin Wolf authored
      
      In the context of draining a BDS, the .drained_poll callback of block
      jobs is called. If this returns true (i.e. there is still some activity
      pending), the drain operation may call aio_poll() with blocking=true to
      wait for completion.
      
      As soon as the pending activity is completed and the job finally arrives
      in a quiescent state (i.e. its coroutine either yields with busy=false
      or terminates), the block job must notify the aio_poll() loop to wake
      up, otherwise we get a deadlock if both are running in different
      threads.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
  24. Aug 28, 2018
  25. Jun 18, 2018
    • block: Really pause block jobs on drain · 89bd0305
      Kevin Wolf authored
      
      We already requested that block jobs be paused in .bdrv_drained_begin,
      but no guarantee was made that the job was actually inactive at the
      point where bdrv_drained_begin() returned.
      
      This introduces a new callback BdrvChildRole.bdrv_drained_poll() and
      uses it to make bdrv_drain_poll() consider block jobs using the node to
      be drained.
      
      For the test case to work as expected, we have to switch from
      block_job_sleep_ns() to qemu_co_sleep_ns() so that the test job is even
      considered active and must be waited for when draining the node.
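
      A simplified sketch of such a callback for the job child (the real code
      has a few more checks and may let the job driver refine the answer):

        static bool child_job_drained_poll(BdrvChild *c)
        {
            BlockJob *bjob = c->opaque;
            Job *job = &bjob->job;

            /* An inactive or completed job has no requests in flight */
            if (!job->busy || job_is_completed(job)) {
                return false;
            }

            /* Otherwise, assume it has not fully stopped yet */
            return true;
        }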
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  26. May 23, 2018
    • blockjob: Remove BlockJob.driver · 9f6bb4c0
      Kevin Wolf authored
      
      BlockJob.driver is redundant with Job.driver and only used in very few
      places any more. Remove it.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • job: Move progress fields to Job · 30a5c887
      Kevin Wolf authored
      
      BlockJob has fields .offset and .len, which are actually misnomers today
      because they are no longer tied to block device sizes, but just progress
      counters. As such they make a lot of sense in generic Jobs.
      
      This patch moves the fields to Job and renames them to .progress_current
      and .progress_total to describe their function better.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • job: Add job_transition_to_ready() · 2e1795b5
      Kevin Wolf authored
      
      The transition to the READY state was still performed in the BlockJob
      layer, in the same function that sent the BLOCK_JOB_READY QMP event.
      
      This patch brings the state transition to the Job layer and implements
      the QMP event using a notifier called from the Job layer, like we
      already do for other events related to state transitions.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • job: Add job_is_ready() · df956ae2
      Kevin Wolf authored
      
      Instead of having a 'bool ready' in BlockJob, add a function that
      derives its value from the job status.
      
      At the same time, this fixes the behaviour to match what the QAPI
      documentation promises for query-block-job: 'true if the job may be
      completed'. When the ready flag was introduced in commit ef6dbf1e,
      the flag never had to be reset to match the description because after
      being ready, the jobs would immediately complete and disappear.
      
      Job transactions and manual job finalisation were introduced only
      later. With these changes, jobs may stay around even after having
      completed (and they are not ready to be completed a second time);
      however, those patches forgot to reset the ready flag.
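
      Conceptually (simplified; the real function checks every status
      explicitly):

        bool job_is_ready(Job *job)
        {
            return job->status == JOB_STATUS_READY;
        }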
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>