  1. Oct 07, 2021
    • mirror: Do not clear .cancelled · a640fa0e
      Hanna Reitz authored
      
      Clearing .cancelled before leaving the main loop when the job has been
      soft-cancelled is no longer necessary since job_is_cancelled() only
      returns true for jobs that have been force-cancelled.
      
      Therefore, this only makes a difference in places that call
      job_cancel_requested().  In block/mirror.c, this is done only before
      .cancelled was cleared.
      
      In job.c, there are two callers:
      - job_completed_txn_abort() asserts that .cancelled is true, so keeping
        it true will not affect this place.
      
      - job_complete() refuses to let a job complete that has .cancelled set.
        It is correct to refuse to let the user invoke job-complete on mirror
        jobs that have already been soft-cancelled.
      
      With this change, there are no places that reset .cancelled to false and
      so we can be sure that .force_cancel can only be true if .cancelled is
      true as well.  Assert this in job_is_cancelled().
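
      [A minimal sketch of the resulting predicate, abridged from job.c;
      not the verbatim patch:

          bool job_is_cancelled(Job *job)
          {
              /* force_cancel may only be true if cancelled is true, too */
              assert(!job->force_cancel || job->cancelled);

              return job->force_cancel;
          }
      ]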
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-13-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: Add job_cancel_requested() · 08b83bff
      Hanna Reitz authored
      Most callers of job_is_cancelled() actually want to know whether the job
      is on its way to immediate termination.  For example, we refuse to pause
      jobs that are cancelled; but this only makes sense for jobs that are
      actually cancelled.
      
      A mirror job that is cancelled during READY with force=false should
      absolutely be allowed to pause.  This "cancellation" (which is actually
      a kind of completion) may take an indefinite amount of time, and so
      should behave like any job during normal operation.  For example, with
      on-target-error=stop, the job should stop on write errors.  (In
      contrast, force-cancelled jobs should not get write errors, as they
      should just terminate and not do further I/O.)
      
      Therefore, redefine job_is_cancelled() to only return true for jobs that
      are force-cancelled (which as of HEAD^ means any job that interprets the
      cancellation request as a request for immediate termination), and add
      job_cancel_requested() as the general variant, which returns true for
      any jobs which have been requested to be cancelled, whether it be
      immediately or after an arbitrarily long completion phase.
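
      [Sketched, the two predicates then differ only in which flag they
      read (the Job fields are simplified here):

          /* True only for jobs on their way to immediate termination */
          bool job_is_cancelled(Job *job)
          {
              return job->force_cancel;
          }

          /* True for any job that has been requested to cancel, whether
           * immediately or after an arbitrarily long completion phase */
          bool job_cancel_requested(Job *job)
          {
              return job->cancelled;
          }
      ]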
      
      Finally, here is a justification for how different job_is_cancelled()
      invocations are treated by this patch:
      
      - block/mirror.c (mirror_run()):
        - The first invocation is a while loop that should loop until the job
          has been cancelled or scheduled for completion.  What kind of cancel
          does not matter, only the fact that the job is supposed to end.
      
        - The second invocation wants to know whether the job has been
          soft-cancelled.  Calling job_cancel_requested() is a bit too broad,
          but if the job were force-cancelled, we should leave the main loop
          as soon as possible anyway, so this should not matter here.
      
        - The last two invocations already check force_cancel, so they should
          continue to use job_is_cancelled().
      
      - block/backup.c, block/commit.c, block/stream.c, anything in tests/:
        These jobs know only force-cancel, so there is no difference between
        job_is_cancelled() and job_cancel_requested().  We can continue using
        job_is_cancelled().
      
      - job.c:
        - job_pause_point(), job_yield(), job_sleep_ns(): Only force-cancelled
          jobs should be prevented from being paused.  Continue using job_is_cancelled().
      
        - job_update_rc(), job_finalize_single(), job_finish_sync(): These
          functions are all called after the job has left its main loop.  The
          mirror job (the only job that can be soft-cancelled) will clear
          .cancelled before leaving the main loop if it has been
          soft-cancelled.  Therefore, these functions will observe .cancelled
          to be true only if the job has been force-cancelled.  We can
          continue to use job_is_cancelled().
          (Furthermore, conceptually, a soft-cancelled mirror job should not
          report to have been cancelled.  It should report completion (see
          also the block-job-cancel QAPI documentation).  Therefore, it makes
          sense for these functions not to distinguish between a
          soft-cancelled mirror job and a job that has completed as normal.)
      
        - job_completed_txn_abort(): All jobs other than @job have been
          force-cancelled.  job_is_cancelled() must be true for them.
          Regarding @job itself: job_completed_txn_abort() is mostly called
          when the job's return value is not 0.  A soft-cancelled mirror has a
          return value of 0, and so will not end up here then.
          However, job_cancel() invokes job_completed_txn_abort() if the job
          has been deferred to the main loop, which is mostly the case for
          completed jobs (which skip the assertion), but is not guaranteed.
          To be safe, use job_cancel_requested() in this assertion.
      
        - job_complete(): This is the function eventually invoked by the user
          (through qmp_block_job_complete() or qmp_job_complete(), or
          job_complete_sync(), which comes from qemu-img).  The intention here
          is to prevent a user from invoking job-complete after the job has
          been cancelled.  This should also apply to soft cancelling: After a
          mirror job has been soft-cancelled, the user should not be able to
          decide otherwise and have it complete as normal (i.e. pivoting to
          the target).
      
        - job_cancel(): Both functions are equivalent (see comment there), but
          we want to use job_is_cancelled(), because this shows that we call
          job_completed_txn_abort() only for force-cancelled jobs.  (As
          explained for job_update_rc(), soft-cancelled jobs should be treated
          as if they have completed as normal.)
      
      Buglink: https://gitlab.com/qemu-project/qemu/-/issues/462
      
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-9-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: Do not soft-cancel after a job is done · 401dd096
      Hanna Reitz authored
      
      The only job that supports a soft cancel mode is the mirror job, and in
      such a case it resets its .cancelled field before it leaves its .run()
      function, so it does not really count as cancelled.
      
      However, it is possible to cancel the job after .run() returns and
      before job_exit() (which is run in the main loop) is executed.  Then,
      .cancelled would still be true and the job would count as cancelled.
      This does not seem to be in the interest of the mirror job, so adjust
      job_cancel_async() to not set .cancelled in such a case, and
      job_cancel() to not invoke job_completed_txn_abort().
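
      [A sketch of the adjusted tail of job_cancel_async(), simplified
      from the actual patch:

          /* Ignore soft cancel requests after the job's .run() has
           * returned and the job only awaits job_exit() in the main loop */
          if (force || !job->deferred_to_main_loop) {
              job->cancelled = true;
              /* To prevent 'force == false' overriding a previous
               * 'force == true' */
              job->force_cancel |= force;
          }
      ]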
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-8-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • jobs: Give Job.force_cancel more meaning · 73895f38
      Hanna Reitz authored
      
      We largely have two cancel modes for jobs:
      
      First, there is actual cancelling.  The job is terminated as soon as
      possible, without trying to reach a consistent result.
      
      Second, we have cancelling mirror in the READY state.  Technically, the
      job is not really cancelled; this is just a different completion mode.
      The job can still run for an indefinite amount of time while it tries
      to reach a consistent result.
      
      We want to be able to clearly distinguish which cancel mode a job is in
      (when it has been cancelled).  We can use Job.force_cancel for this,
      but right now it only reflects cancel requests from the user with
      force=true; clearly, jobs that do not even distinguish between
      force=false and force=true are effectively always force-cancelled.
      
      So this patch has Job.force_cancel signify whether the job will
      terminate as soon as possible (force_cancel=true) or whether it will
      effectively remain running despite being "cancelled"
      (force_cancel=false).
      
      To this end, we let jobs that provide JobDriver.cancel() tell the
      generic job code whether they will terminate as soon as possible or not,
      and for jobs that do not provide that method we assume they will.
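
      [Sketched, the generic code now lets the driver report the mode;
      simplified from job_cancel_async():

          if (job->driver->cancel) {
              /* The driver reports whether the job will terminate as
               * soon as possible */
              force = job->driver->cancel(job, force);
          } else {
              /* No .cancel() method: behaves as if force-cancelled */
              force = true;
          }
          job->force_cancel |= force;
      ]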
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-Id: <20211006151940.214590-7-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: @force parameter for job_cancel_sync() · 4cfb3f05
      Hanna Reitz authored
      Callers should be able to specify whether they want job_cancel_sync() to
      force-cancel the job or not.
      
      In fact, almost all invocations do not care about consistency of the
      result and just want the job to terminate as soon as possible, so they
      should pass force=true.  The replication block driver is the exception,
      specifically the active commit job it runs.
      
      As for job_cancel_sync_all(), all callers want it to force-cancel all
      jobs, because that is the point of it: To cancel all remaining jobs as
      quickly as possible (generally on process termination).  So make it
      invoke job_cancel_sync() with force=true.
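
      [Sketch of the resulting call sites; the variable names here are
      illustrative:

          /* Callers now choose whether to force-cancel */
          int job_cancel_sync(Job *job, bool force);

          /* Almost everyone: terminate as soon as possible */
          job_cancel_sync(job, true);

          /* block/replication.c, for its active commit job: wait for a
           * consistent result */
          job_cancel_sync(commit_job, false);
      ]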
      
      This changes some iotest outputs, because quitting qemu while a mirror
      job is active will now lead to it being cancelled instead of completed,
      which is what we want.  (Cancelling a READY mirror job with force=false
      may take an indefinite amount of time, which we do not want when
      quitting.  If users want consistent results, they must have all jobs be
      done before they quit qemu.)
      
      Buglink: https://gitlab.com/qemu-project/qemu/-/issues/462
      
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-6-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: Force-cancel jobs in a failed transaction · 1d4a43e9
      Hanna Reitz authored
      
      When a transaction is aborted, no result matters, and so all jobs within
      should be force-cancelled.
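
      [The change is essentially a one-liner in job_completed_txn_abort(),
      sketched:

          /* No result in an aborted transaction matters, so cancel the
           * sibling jobs with force=true */
          job_cancel_async(other_job, true /* force */);
      ]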
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-5-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: Context changes in job_completed_txn_abort() · d4311314
      Hanna Reitz authored
      
      Finalizing the job may cause its AioContext to change.  This is noted by
      job_exit(), which points at job_txn_apply() to take this fact into
      account.
      
      However, job_completed() does not necessarily invoke job_txn_apply()
      (through job_completed_txn_success()), but potentially also
      job_completed_txn_abort().  The latter stores the context in a local
      variable, and so always acquires the same context at its end that it has
      released in the beginning -- which may be a different context from the
      one that job_exit() releases at its end.  If it is different, qemu
      aborts ("qemu_mutex_unlock_impl: Operation not permitted").
      
      Drop the local @outer_ctx variable from job_completed_txn_abort(), and
      instead re-acquire the actual job's context at the end of the function,
      so job_exit() will release the same.
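
      [Sketched, the end of job_completed_txn_abort() now reads:

          /* The job's AioContext may have changed during finalization,
           * so re-read it; job_exit() will then release this same one */
          aio_context_acquire(job->aio_context);
      ]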
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-2-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  2. Jan 26, 2021
    • job: call job_enter from job_pause · 3ee1483b
      Vladimir Sementsov-Ogievskiy authored
      
      If the main job coroutine has called job_yield() (while some background
      process is in progress), we should give it a chance to call
      job_pause_point().  It will be used in backup, when it is moved on to
      async block-copy.

      Note that job_user_pause() is not enough: we also want to handle
      child_job_drained_begin(), which calls job_pause().

      Still, if the job is already in job_do_yield() in job_pause_point(), we
      should not enter it.

      The output of iotest 109 is modified: on stop we do bdrv_drain_all(),
      which now triggers the job pause immediately (and the pause after READY
      is STANDBY).
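
      [A sketch of the resulting job_pause(), simplified from job.c:

          void job_pause(Job *job)
          {
              job->pause_count++;
              if (!job->paused) {
                  /* Kick the coroutine so it reaches job_pause_point()
                   * even while yielded in a background operation */
                  job_enter(job);
              }
          }
      ]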
      
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20210116214705.822267-10-vsementsov@virtuozzo.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>
  3. Aug 21, 2020
    • trace: switch position of headers to what Meson requires · 243af022
      Paolo Bonzini authored
      
      Meson doesn't enjoy the same flexibility we have with Make in choosing
      the include path.  In particular the tracing headers are using
      $(build_root)/$(<D).
      
      In order to keep the include directives unchanged,
      the simplest solution is to generate headers with patterns like
      "trace/trace-audio.h" and place forwarding headers in the source tree
      such that for example "audio/trace.h" includes "trace/trace-audio.h".
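
      [For example, the forwarding header placed in the source tree is a
      one-line include, sketched:

          /* audio/trace.h */
          #include "trace/trace-audio.h"
      ]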
      
      This patch is too ugly to be applied to the Makefiles now.  It's only
      a way to separate the changes to the tracing header files from the
      Meson rewrite of the tracing logic.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  4. Apr 07, 2020
    • job: take each job's lock individually in job_txn_apply · b660a84b
      Stefan Reiter authored
      
      All callers of job_txn_apply hold a single job's lock, but different
      jobs within a transaction can have different contexts; thus, we need to
      lock each one individually before applying the callback function.
      
      Similar to job_completed_txn_abort, this also requires releasing the
      caller's context before and reacquiring it after, to avoid recursive
      locks which might break AIO_WAIT_WHILE in the callback.  This is safe,
      since existing code would already have to take this into account;
      otherwise, job_completed_txn_abort would already have been broken.
      
      This also brings to light a different issue: When a callback function
      in job_txn_apply moves its job to a different AIO context, callers will
      try to release the wrong lock (now that we re-acquire the lock
      correctly; previously it would just continue with the old lock, leaving
      the job unlocked for the rest of the return path).  Fix this by not
      caching the job's context.
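
      [A sketch of the resulting loop, simplified from the actual patch
      (transaction reference counting omitted):

          static int job_txn_apply(Job *job, int fn(Job *))
          {
              AioContext *inner_ctx;
              Job *other_job;
              int rc = 0;

              /* Release the caller's lock so fn() may run
               * AIO_WAIT_WHILE() without recursive locking */
              aio_context_release(job->aio_context);

              QLIST_FOREACH(other_job, &job->txn->jobs, txn_list) {
                  inner_ctx = other_job->aio_context;
                  aio_context_acquire(inner_ctx);
                  rc = fn(other_job);
                  aio_context_release(inner_ctx);
                  if (rc) {
                      break;
                  }
              }

              /* Re-read the context: fn() may have moved the job */
              aio_context_acquire(job->aio_context);

              return rc;
          }
      ]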
      
      This is only necessary for qmp_block_job_finalize, qmp_job_finalize and
      job_exit, since all other callers go through job_exit.
      
      One test needed adapting, since it calls job_finalize directly, so it
      manually needs to acquire the correct context.
      
      Signed-off-by: Stefan Reiter <s.reiter@proxmox.com>
      Message-Id: <20200407115651.69472-2-s.reiter@proxmox.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  5. Jun 12, 2019
    • Include qemu-common.h exactly where needed · a8d25326
      Markus Armbruster authored
      
      No header includes qemu-common.h after this commit, as prescribed by
      qemu-common.h's file comment.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20190523143508.25387-5-armbru@redhat.com>
      [Rebased with conflicts resolved automatically, except for
      include/hw/arm/xlnx-zynqmp.h hw/arm/nrf51_soc.c hw/arm/msf2-soc.c
      block/qcow2-refcount.c block/qcow2-cluster.c block/qcow2-cache.c
      target/arm/cpu.h target/lm32/cpu.h target/m68k/cpu.h target/mips/cpu.h
      target/moxie/cpu.h target/nios2/cpu.h target/openrisc/cpu.h
      target/riscv/cpu.h target/tilegx/cpu.h target/tricore/cpu.h
      target/unicore32/cpu.h target/xtensa/cpu.h; bsd-user/main.c and
      net/tap-bsd.c fixed up]
  6. May 10, 2019
    • blockjob: Fix coroutine thread after AioContext change · 13726123
      Kevin Wolf authored
      
      Commit 463e0be1 ('blockjob: add AioContext attached callback') tried to
      make block jobs robust against AioContext changes of their main node,
      but it never made sure that the job coroutine actually runs in the new
      thread.
      
      Instead of waking up the job coroutine in whatever thread it ran before,
      let's always pass the AioContext where it should be running now.
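
      [The crux of the fix in job_enter_cond(), sketched:

          /* Enter the coroutine in the job's current AioContext instead
           * of waking it in whatever thread it last ran (aio_co_wake) */
          aio_co_enter(job->aio_context, job->co);
      ]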
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  7. Sep 25, 2018
    • block: Use a single global AioWait · cfe29d82
      Kevin Wolf authored
      
      When draining a block node, we recurse to its parent and for subtree
      drains also to its children. A single AIO_WAIT_WHILE() is then used to
      wait for bdrv_drain_poll() to become true, which depends on all of the
      nodes we recursed to. However, if the respective child or parent becomes
      quiescent and calls bdrv_wakeup(), only the AioWait of the child/parent
      is checked, while AIO_WAIT_WHILE() depends on the AioWait of the
      original node.
      
      Fix this by using a single AioWait for all callers of AIO_WAIT_WHILE().
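
      [Sketched from the resulting util/aio-wait.c; dummy_bh_cb stands for
      an empty bottom-half callback:

          static AioWait global_aio_wait;

          void aio_wait_kick(void)
          {
              /* If somebody is polling in AIO_WAIT_WHILE(), schedule a
               * dummy BH so the poll re-evaluates its condition */
              if (atomic_read(&global_aio_wait.num_waiters)) {
                  aio_bh_schedule_oneshot(qemu_get_aio_context(),
                                          dummy_bh_cb, NULL);
              }
          }
      ]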
      
      This may mean that the draining thread gets a few more unnecessary
      wakeups because an unrelated operation got completed, but we already
      wake it up when something _could_ have changed rather than only if it
      has certainly changed.
      
      Apart from that, drain is a slow path anyway. In theory it would be
      possible to use wakeups more selectively and still correctly, but the
      gains are likely not worth the additional complexity. In fact, this
      patch is a nice simplification for some places in the code.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • job: Avoid deadlocks in job_completed_txn_abort() · 644f3a29
      Kevin Wolf authored
      
      Amongst others, job_finalize_single() calls the .prepare/.commit/.abort
      callbacks of the individual job driver. Recently, their use was adapted
      for all block jobs so that they involve code calling AIO_WAIT_WHILE()
      now. Such code must be called under the AioContext lock for the
      respective job, but without holding any other AioContext lock.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • blockjob: Lie better in child_job_drained_poll() · b5a7a057
      Kevin Wolf authored
      
      Block jobs claim in .drained_poll() that they are in a quiescent state
      as soon as job->deferred_to_main_loop is true. This is obviously wrong,
      they still have a completion BH to run. We only get away with this
      because commit 91af091f added an unconditional aio_poll(false) to the
      drain functions, but this is bypassing the regular drain mechanisms.
      
      However, just removing this and reporting that the job is still active
      doesn't work either: The completion callbacks themselves call drain
      functions (directly, or indirectly with bdrv_reopen), so they would
      deadlock then.
      
      As a better lie, report the job as active as long as the BH is pending,
      but falsely call it quiescent from the point in the BH when the
      completion callback is called. At this point, nested drain calls won't
      deadlock because they ignore the job, and outer drains will wait for the
      job to really reach a quiescent state because the callback is already
      running.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • job: Use AIO_WAIT_WHILE() in job_finish_sync() · de0fbe64
      Kevin Wolf authored
      
      job_finish_sync() needs to release the AioContext lock of the job before
      calling aio_poll(). Otherwise, callbacks called by aio_poll() would
      possibly take the lock a second time and run into a deadlock with a
      nested AIO_WAIT_WHILE() call.
      
      Also, job_drain() without aio_poll() isn't necessarily enough to make
      progress on a job; it could depend on bottom halves being executed.
      
      Combine both open-coded while loops into a single AIO_WAIT_WHILE() call
      that solves both of these problems.
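
      [The combined loop, sketched; AIO_WAIT_WHILE() itself releases the
      job's context around its internal aio_poll():

          AIO_WAIT_WHILE(job->aio_context,
                         (job_drain(job), !job_is_completed(job)));
      ]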
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • blockjob: Wake up BDS when job becomes idle · 34dc97b9
      Kevin Wolf authored
      
      In the context of draining a BDS, the .drained_poll callback of block
      jobs is called. If this returns true (i.e. there is still some activity
      pending), the drain operation may call aio_poll() with blocking=true to
      wait for completion.
      
      As soon as the pending activity is completed and the job finally arrives
      in a quiescent state (i.e. its coroutine either yields with busy=false
      or terminates), the block job must notify the aio_poll() loop to wake
      up, otherwise we get a deadlock if both are running in different
      threads.
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Fam Zheng <famz@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
    • job: Fix missing locking due to mismerge · d1756c78
      Kevin Wolf authored
      
      job_completed() had a problem with double locking that was recently
      fixed independently by two different commits:
      
      "job: Fix nested aio_poll() hanging in job_txn_apply"
      "jobs: add exit shim"
      
      One fix removed the first aio_context_acquire(), the other fix removed
      the other one. Now we have a bug again and the code is run without any
      locking.
      
      Add it back in one of the places.
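
      [Sketch of the fixed job_exit(); the completion BH re-acquires the
      job's context around job_completed():

          static void job_exit(void *opaque)
          {
              Job *job = (Job *) opaque;
              AioContext *ctx = job->aio_context;

              aio_context_acquire(ctx);
              job_completed(job);
              aio_context_release(ctx);
          }
      ]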
      
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Reviewed-by: John Snow <jsnow@redhat.com>
    • job: Fix nested aio_poll() hanging in job_txn_apply · 49880165
      Fam Zheng authored
      
      All callers have acquired ctx already.  Doing that again results in an
      aio_poll() hang.  This fixes the problem that a BDRV_POLL_WHILE() in the
      callback cannot make progress because ctx is recursively locked, for
      example, when drive-backup finishes.
      
      There are two callers of job_finalize():
      
          fam@lemon:~/work/qemu [master]$ git grep -w -A1 '^\s*job_finalize'
          blockdev.c:    job_finalize(&job->job, errp);
          blockdev.c-    aio_context_release(aio_context);
          --
          job-qmp.c:    job_finalize(job, errp);
          job-qmp.c-    aio_context_release(aio_context);
          --
          tests/test-blockjob.c:    job_finalize(&job->job, &error_abort);
          tests/test-blockjob.c-    assert(job->job.status == JOB_STATUS_CONCLUDED);
      
      Ignoring the test, it is easy to see that both callers of job_finalize
      (and job_do_finalize) have acquired the context.
      
      Cc: qemu-stable@nongnu.org
      Reported-by: Gu Nini <ngu@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Fam Zheng <famz@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • jobs: remove .exit callback · ccbfb331
      John Snow authored
      
      Now that all of the jobs use the component finalization callbacks,
      there's no use for the heavy-hammer .exit callback anymore.
      
      job_exit becomes a glorified type shim so that we can call
      job_completed from aio_bh_schedule_oneshot.
      
      Move these three functions down into job.c to eliminate a
      forward reference.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Max Reitz <mreitz@redhat.com>
      Message-id: 20180906130225.5118-12-jsnow@redhat.com
      Reviewed-by: Jeff Cody <jcody@redhat.com>
      Signed-off-by: Max Reitz <mreitz@redhat.com>