  Mar 22, 2017
    • blockjob: add devops to blockjob backends · 600ac6a0
      John Snow authored
      
      This lets us hook into drained_begin and drained_end requests from the
      backend level, which is particularly useful for making sure that all
      jobs associated with a particular node (whether the source or the target)
      receive a drain request.
      
      Suggested-by: Kevin Wolf <kwolf@redhat.com>
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Message-id: 20170316212351.13797-4-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
    • blockjob: add block_job_start_shim · e3796a24
      John Snow authored
      
      The purpose of this shim is to allow us to pause pre-started jobs.
      The purpose of *that* is to allow us to buffer a pause request that
      will take effect before the job ever does any work, allowing us to
      create jobs during a quiescent state (under which they will be
      automatically paused) and then resume them after the critical
      section, in either order:
      
      (1) -block_job_start
          -block_job_resume (via e.g. drained_end)
      
      (2) -block_job_resume (via e.g. drained_end)
          -block_job_start
      
      The problem that requires a startup wrapper is that a job must start
      in the busy=true state only on its first entry; all subsequent
      entries require busy to be false, and the toggling of this state is
      otherwise handled during existing pause and yield points.
      
      The wrapper simply lets us mandate that a job "starts," sets busy to
      true, and then immediately pauses only if necessary. We could avoid
      requiring a wrapper, but then every job would need to do this
      itself, so it has been factored out here.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Message-id: 20170316212351.13797-2-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
    • blockjob: avoid recursive AioContext locking · d79df2a2
      Paolo Bonzini authored
      
      Streaming or any other block job hangs when performed on a block device
      that has a non-default iothread.  This happens because the AioContext
      is acquired twice by block_job_defer_to_main_loop_bh and then released
      only once by BDRV_POLL_WHILE.  (Insert rants on recursive mutexes, which
      unfortunately are a temporary but necessary evil for iothreads at the
      moment).
      
      Luckily, the reason for the double acquisition is simple; the function
      acquires the AioContext for both the job iothread and the BDS iothread,
      in case the BDS iothread was changed while the job was running.  It
      is therefore enough to skip the second acquisition when the two
      AioContexts are one and the same.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Message-id: 1490118490-5597-1-git-send-email-pbonzini@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
  Nov 15, 2016
    • blockjob: add block_job_start · 5ccac6f1
      John Snow authored
      
      Instead of automatically starting jobs at creation time via
      backup_start et al, we'd like to return a job object pointer that
      can be started manually at a later point in time.
      
      For now, add the block_job_start mechanism and start the jobs
      automatically as we have been doing, with conversions job-by-job coming
      in later patches.
      
      Of note: cancellation of unstarted jobs will perform all the normal
      cleanup as if the job had started, particularly abort and clean. The
      only difference is that we will not emit any events, because the job
      never actually started.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-id: 1478587839-9834-5-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
    • blockjob: add .clean property · e8a40bf7
      John Snow authored
      
      Cleaning up after we have deferred to the main thread but before the
      transaction has converged can be dangerous and result in deadlocks
      if the job cleanup invokes any BH polling loops.
      
      A job may attempt to begin cleaning up, but may induce another job to
      enter its cleanup routine. The second job, part of our same transaction,
      will block waiting for the first job to finish, so neither job may now
      make progress.
      
      To rectify this, allow jobs to register a cleanup operation that
      will always run, regardless of whether the job was in a transaction
      and regardless of whether the transaction job group completed
      successfully.
      
      Move sensitive cleanup to this callback instead; it is guaranteed
      to run only after the transaction has converged, which removes
      sensitive timing constraints from said cleanup.
      
      Furthermore, in future patches these cleanup operations will be performed
      regardless of whether or not we actually started the job. Therefore,
      cleanup callbacks should essentially confine themselves to undoing create
      operations, e.g. setup actions taken in what is now backup_start.
      
      Reported-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-id: 1478587839-9834-3-git-send-email-jsnow@redhat.com
      Signed-off-by: Jeff Cody <jcody@redhat.com>
    • blockjob: fix dead pointer in txn list · 1e93b9fb
      Vladimir Sementsov-Ogievskiy authored
      
      Though it is not intended to be reached through normal circumstances,
      if we do not gracefully deconstruct the transaction QLIST, we may wind
      up with stale pointers in the list.
      
      The rest of this series attempts to address the underlying issues,
      but this should fix list inconsistencies.
      
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Tested-by: John Snow <jsnow@redhat.com>
      Reviewed-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-id: 1478587839-9834-2-git-send-email-jsnow@redhat.com
      [Rewrote commit message. --js]
      Signed-off-by: John Snow <jsnow@redhat.com>
      Signed-off-by: Jeff Cody <jcody@redhat.com>
  May 19, 2016
    • blockjob: Don't set iostatus of target · 81e254dc
      Kevin Wolf authored
      
      When block job errors were introduced, we assigned the iostatus of the
      target BDS "just in case". The field has never been accessible for the
      user because the target isn't listed in query-block.
      
      Before we can allow the user to have a second BlockBackend on the
      target, we need to clean this up. If anything, we would want to set the
      iostatus for the internal BB of the job (which we can always do later),
      but certainly not for a separate BB which the job doesn't even use.
      
      As a nice side effect, this gets rid of another bs->blk use.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>