  1. Oct 07, 2021
    • mirror: Use job_is_cancelled() · 20ad4d20
      Hanna Reitz authored
      
      mirror_drained_poll() returns true whenever the job is cancelled,
      because "we [can] be sure that it won't issue more requests".  However,
      this is only true for force-cancelled jobs, so use job_is_cancelled().
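
      As a rough illustration, here is a minimal standalone C model of the
      distinction (not the actual mirror.c code; the Job fields are simplified
      and the paused/in-drain handling is omitted).  Only a force-cancelled job
      is guaranteed to issue no further requests; a soft-cancelled (READY,
      force=false) mirror job must still be treated as potentially active:

          #include <stdbool.h>
          #include <stdio.h>

          /* Simplified stand-in for QEMU's Job state. */
          typedef struct Job {
              bool cancelled;     /* some cancel has been requested             */
              bool force_cancel;  /* the job will terminate as soon as possible */
          } Job;

          /* After this series, job_is_cancelled() means "force-cancelled". */
          static bool job_is_cancelled(const Job *job)
          {
              return job->force_cancel;
          }

          /* Only a force-cancelled job is known to stop issuing new requests;
           * a soft-cancelled mirror keeps copying until it can converge. */
          static bool job_may_issue_more_requests(const Job *job)
          {
              return !job_is_cancelled(job);
          }

          int main(void)
          {
              Job soft = { .cancelled = true, .force_cancel = false };
              Job hard = { .cancelled = true, .force_cancel = true };

              printf("soft cancel, may still write: %d\n",
                     job_may_issue_more_requests(&soft));   /* 1 */
              printf("force cancel, may still write: %d\n",
                     job_may_issue_more_requests(&hard));   /* 0 */
              return 0;
          }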
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-10-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: Add job_cancel_requested() · 08b83bff
      Hanna Reitz authored
      Most callers of job_is_cancelled() actually want to know whether the job
      is on its way to immediate termination.  For example, we refuse to pause
      jobs that are cancelled; but this only makes sense for jobs that are
      really actually cancelled.
      
      A mirror job that is cancelled during READY with force=false should
      absolutely be allowed to pause.  This "cancellation" (which is actually
      a kind of completion) may take an indefinite amount of time, and so
      should behave like any job during normal operation.  For example, with
      on-target-error=stop, the job should stop on write errors.  (In
      contrast, force-cancelled jobs should not get write errors, as they
      should just terminate and not do further I/O.)
      
      Therefore, redefine job_is_cancelled() to only return true for jobs that
      are force-cancelled (which as of HEAD^ means any job that interprets the
      cancellation request as a request for immediate termination), and add
      job_cancel_requested() as the general variant, which returns true for
      any jobs which have been requested to be cancelled, whether it be
      immediately or after an arbitrarily long completion phase.
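
      In simplified form, the resulting pair of predicates looks roughly like
      the sketch below (the field names follow this commit message; the real
      accessors live in job.c, so treat this as an illustration rather than
      the exact upstream code):

          #include <assert.h>
          #include <stdbool.h>

          typedef struct Job {
              bool cancelled;     /* a cancel has been requested at some point */
              bool force_cancel;  /* ... and it means immediate termination    */
          } Job;

          /* True only for jobs on their way to immediate termination. */
          bool job_is_cancelled(Job *job)
          {
              /* force_cancel only ever becomes true together with cancelled */
              assert(job->cancelled || !job->force_cancel);
              return job->force_cancel;
          }

          /* True for any cancel request, including mirror's soft cancel,
           * which is really just a different completion mode. */
          bool job_cancel_requested(Job *job)
          {
              return job->cancelled;
          }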
      
      Finally, here is a justification for how different job_is_cancelled()
      invocations are treated by this patch:
      
      - block/mirror.c (mirror_run()):
        - The first invocation is a while loop that should loop until the job
          has been cancelled or scheduled for completion.  What kind of cancel
          does not matter, only the fact that the job is supposed to end.
      
        - The second invocation wants to know whether the job has been
          soft-cancelled.  Calling job_cancel_requested() is a bit too broad,
          but if the job were force-cancelled, we should leave the main loop
          as soon as possible anyway, so this should not matter here.
      
        - The last two invocations already check force_cancel, so they should
          continue to use job_is_cancelled().
      
      - block/backup.c, block/commit.c, block/stream.c, anything in tests/:
        These jobs know only force-cancel, so there is no difference between
        job_is_cancelled() and job_cancel_requested().  We can continue using
        job_is_cancelled().
      
      - job.c:
        - job_pause_point(), job_yield(), job_sleep_ns(): Only force-cancelled
          jobs should be prevented from being paused.  Continue using job_is_cancelled().
      
        - job_update_rc(), job_finalize_single(), job_finish_sync(): These
          functions are all called after the job has left its main loop.  The
          mirror job (the only job that can be soft-cancelled) will clear
          .cancelled before leaving the main loop if it has been
          soft-cancelled.  Therefore, these functions will observe .cancelled
          to be true only if the job has been force-cancelled.  We can
          continue to use job_is_cancelled().
          (Furthermore, conceptually, a soft-cancelled mirror job should not
          report to have been cancelled.  It should report completion (see
          also the block-job-cancel QAPI documentation).  Therefore, it makes
          sense for these functions not to distinguish between a
          soft-cancelled mirror job and a job that has completed as normal.)
      
        - job_completed_txn_abort(): All jobs other than @job have been
          force-cancelled.  job_is_cancelled() must be true for them.
          Regarding @job itself: job_completed_txn_abort() is mostly called
          when the job's return value is not 0.  A soft-cancelled mirror has a
          return value of 0, and so will not end up here then.
          However, job_cancel() invokes job_completed_txn_abort() if the job
          has been deferred to the main loop, which is mostly the case for
        completed jobs (which skip the assertion), but not necessarily.
          To be safe, use job_cancel_requested() in this assertion.
      
      - job_complete(): This function is eventually invoked by the user
        (through qmp_block_job_complete() or qmp_job_complete(), or
        job_complete_sync(), which comes from qemu-img).  The intention here
        is to prevent a user from invoking job-complete after the job has
        been cancelled.  This should also apply to soft cancelling: After a
        mirror job has been soft-cancelled, the user should not be able to
        decide otherwise and have it complete as normal (i.e. pivoting to
        the target).  (See the sketch after this list.)
      
      - job_cancel(): job_is_cancelled() and job_cancel_requested() are
        equivalent here (see the comment there), but
          we want to use job_is_cancelled(), because this shows that we call
          job_completed_txn_abort() only for force-cancelled jobs.  (As
          explained for job_update_rc(), soft-cancelled jobs should be treated
          as if they have completed as normal.)
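
      The job_complete() policy referenced above, as a rough sketch (reusing
      the simplified Job model from the snippet earlier in this message; the
      real function in job.c additionally checks the job verb and the
      driver's .complete callback, and error_setg_sketch() is a hypothetical
      stand-in for QEMU's error reporting):

          typedef struct Error Error;
          void error_setg_sketch(Error **errp, const char *msg);

          int job_complete(Job *job, Error **errp)
          {
              if (job_cancel_requested(job)) {
                  /* Once cancelled -- even only soft-cancelled -- the user
                   * may not flip the job back to completing normally (e.g.
                   * pivoting the mirror to the target). */
                  error_setg_sketch(errp, "job has been cancelled");
                  return -1;
              }
              /* ... otherwise, invoke the job driver's completion logic ... */
              return 0;
          }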
      
      Buglink: https://gitlab.com/qemu-project/qemu/-/issues/462
      
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-9-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: Do not soft-cancel after a job is done · 401dd096
      Hanna Reitz authored
      
      The only job that supports a soft cancel mode is the mirror job, and in
      such a case it resets its .cancelled field before it leaves its .run()
      function, so it does not really count as cancelled.
      
      However, it is possible to cancel the job after .run() returns and
      before job_exit() (which is run in the main loop) is executed.  Then,
      .cancelled would still be true and the job would count as cancelled.
      This does not seem to be in the interest of the mirror job, so adjust
      job_cancel_async() to not set .cancelled in such a case, and
      job_cancel() to not invoke job_completed_txn_abort().
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-8-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • jobs: Give Job.force_cancel more meaning · 73895f38
      Hanna Reitz authored
      
      We largely have two cancel modes for jobs:
      
      First, there is actual cancelling.  The job is terminated as soon as
      possible, without trying to reach a consistent result.
      
      Second, we have mirror in the READY state.  Technically, the job is not
      really cancelled, but it just is a different completion mode.  The job
      can still run for an indefinite amount of time while it tries to reach a
      consistent result.
      
      We want to be able to clearly distinguish which cancel mode a job is in
      (when it has been cancelled).  We can use Job.force_cancel for this, but
      right now it only reflects cancel requests from the user with
      force=true, but clearly, jobs that do not even distinguish between
      force=false and force=true are effectively always force-cancelled.
      
      So this patch has Job.force_cancel signify whether the job will
      terminate as soon as possible (force_cancel=true) or whether it will
      effectively remain running despite being "cancelled"
      (force_cancel=false).
      
      To this end, we let jobs that provide JobDriver.cancel() tell the
      generic job code whether they will terminate as soon as possible or not,
      and for jobs that do not provide that method we assume they will.
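
      A sketch of that mechanism, under the assumptions stated above
      (simplified types; the real definitions are JobDriver.cancel() in
      include/qemu/job.h and job_cancel_async() in job.c):

          #include <stdbool.h>
          #include <stddef.h>

          typedef struct Job Job;

          typedef struct JobDriver {
              /* Returns whether the job will terminate as soon as possible. */
              bool (*cancel)(Job *job, bool force);
          } JobDriver;

          struct Job {
              const JobDriver *driver;
              bool cancelled;
              bool force_cancel;
          };

          void job_cancel_async(Job *job, bool force)
          {
              if (job->driver->cancel) {
                  /* Let the job report whether this really is a forced cancel. */
                  force = job->driver->cancel(job, force);
              } else {
                  /* Jobs without .cancel() terminate as soon as possible. */
                  force = true;
              }

              job->cancelled = true;
              if (force) {
                  job->force_cancel = true;
              }
          }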
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-Id: <20211006151940.214590-7-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: @force parameter for job_cancel_sync() · 4cfb3f05
      Hanna Reitz authored
      Callers should be able to specify whether they want job_cancel_sync() to
      force-cancel the job or not.
      
      In fact, almost all invocations do not care about consistency of the
      result and just want the job to terminate as soon as possible, so they
      should pass force=true.  The replication block driver is the exception,
      specifically the active commit job it runs.
      
      As for job_cancel_sync_all(), all callers want it to force-cancel all
      jobs, because that is the point of it: To cancel all remaining jobs as
      quickly as possible (generally on process termination).  So make it
      invoke job_cancel_sync() with force=true.
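
      A sketch of the new calling convention (a simplified model, not the
      actual job.c code, which also handles AioContext locking and walks the
      real job list):

          #include <stdbool.h>

          typedef struct Job {
              bool cancelled;
              bool force_cancel;
              struct Job *next;
          } Job;

          static Job *job_list;  /* stand-in for QEMU's global job list */

          /* force=true: terminate as soon as possible, no consistent result
           * needed; force=false: a READY mirror keeps running until it can
           * converge. */
          static int job_cancel_sync(Job *job, bool force)
          {
              job->cancelled = true;
              if (force) {
                  job->force_cancel = true;
              }
              /* ... wait for the job to finish ... */
              return 0;
          }

          /* Used on process termination: nobody cares about consistent
           * results any more, so always force-cancel. */
          static void job_cancel_sync_all(void)
          {
              for (Job *job = job_list; job; job = job->next) {
                  job_cancel_sync(job, true);
              }
          }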
      
      This changes some iotest outputs, because quitting qemu while a mirror
      job is active will now lead to it being cancelled instead of completed,
      which is what we want.  (Cancelling a READY mirror job with force=false
      may take an indefinite amount of time, which we do not want when
      quitting.  If users want consistent results, they must have all jobs be
      done before they quit qemu.)
      
      Buglink: https://gitlab.com/qemu-project/qemu/-/issues/462
      
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-6-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: Force-cancel jobs in a failed transaction · 1d4a43e9
      Hanna Reitz authored
      
      When a transaction is aborted, no result matters, and so all jobs within
      should be force-cancelled.
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-5-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • mirror: Drop s->synced · 44716224
      Hanna Reitz authored
      
      As of HEAD^, there is no meaning to s->synced other than whether the job
      is READY or not.  job_is_ready() gives us that information, too.
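
      In other words, the substitution is roughly the following (simplified
      sketch; the real helper is job_is_ready() in job.c, reached through
      s->common.job in the mirror code):

          #include <stdbool.h>

          typedef enum { JOB_STATUS_RUNNING, JOB_STATUS_READY } JobStatusSketch;
          typedef struct Job { JobStatusSketch status; } Job;

          /* Whether the job has transitioned to READY -- which is exactly
           * what s->synced used to track. */
          bool job_is_ready(Job *job)
          {
              return job->status == JOB_STATUS_READY;
          }

          /* Before: if (s->synced)                     { ... }
           * After:  if (job_is_ready(&s->common.job))  { ... } */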
      
      Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-Id: <20211006151940.214590-4-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • mirror: Keep s->synced on error · a3810da5
      Hanna Reitz authored
      
      An error does not take us out of the READY phase, which is what
      s->synced signifies.  It does of course mean that source and target are
      no longer in sync, but that is what s->actively_synced is for -- s->synced
      never meant that source and target are in sync, only that they were at
      some point (and at that point we transitioned into the READY phase).
      
      The tangible problem is that we transition to READY whenever we are in
      sync while s->synced is still false.  By resetting s->synced on error,
      we would transition from READY to READY once the error is resolved (if
      the job keeps running), and that transition is not allowed.
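
      A simplified model of that guard (the names are illustrative, not the
      exact mirror.c code): the READY transition may only happen once, which
      is why s->synced must not be cleared while the job keeps running.

          #include <stdbool.h>

          typedef struct MirrorState {
              bool synced;  /* we were in sync at some point => job is READY */
              bool ready;   /* stand-in for the job's READY status           */
          } MirrorState;

          static void mirror_check_ready(MirrorState *s, bool in_sync)
          {
              if (in_sync && !s->synced) {
                  /* Must happen at most once: READY -> READY is not an
                   * allowed status transition, so clearing s->synced on a
                   * write error would break this once the error resolves. */
                  s->ready = true;   /* stand-in for job_transition_to_ready() */
                  s->synced = true;
              }
          }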
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Reviewed-by: Kevin Wolf <kwolf@redhat.com>
      Message-Id: <20211006151940.214590-3-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    • job: Context changes in job_completed_txn_abort() · d4311314
      Hanna Reitz authored
      
      Finalizing the job may cause its AioContext to change.  This is noted by
      job_exit(), which points at job_txn_apply() to take this fact into
      account.
      
      However, job_completed() does not necessarily invoke job_txn_apply()
      (through job_completed_txn_success()), but potentially also
      job_completed_txn_abort().  The latter stores the context in a local
      variable, and so always acquires the same context at its end that it has
      released in the beginning -- which may be a different context from the
      one that job_exit() releases at its end.  If it is different, qemu
      aborts ("qemu_mutex_unlock_impl: Operation not permitted").
      
      Drop the local @outer_ctx variable from job_completed_txn_abort(), and
      instead re-acquire the actual job's context at the end of the function,
      so job_exit() will release the same.
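
      A sketch of the resulting locking pattern (heavily simplified; the real
      code uses QEMU's AioContext with aio_context_acquire()/release(), and
      finalize_transaction() below is a hypothetical stand-in for the actual
      transaction machinery):

          typedef struct AioContext AioContext;
          typedef struct Job { AioContext *aio_context; } Job;

          void aio_context_acquire(AioContext *ctx);
          void aio_context_release(AioContext *ctx);
          void finalize_transaction(Job *job);  /* may move the job to another
                                                   AioContext */

          void job_completed_txn_abort(Job *job)
          {
              /* Old code captured "AioContext *outer_ctx = job->aio_context;"
               * here and re-acquired it at the end -- wrong if finalization
               * moved the job to a different context. */
              aio_context_release(job->aio_context);

              finalize_transaction(job);   /* job->aio_context may change */

              /* New code: re-acquire whatever context the job is in *now*,
               * so that the caller (job_exit()) releases the lock it
               * actually holds. */
              aio_context_acquire(job->aio_context);
          }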
      
      Signed-off-by: Hanna Reitz <hreitz@redhat.com>
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20211006151940.214590-2-hreitz@redhat.com>
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  2. Oct 03, 2021
    • Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging · 30bd1db5
      Richard Henderson authored
      
      * -smp cleanups (Yanan)
      * Hyper-V enlightenment functionality (Vitaly)
      * virtio-mem support in dump, tpm and QMP (David)
      * NetBSD GCC 7.4 compiler support (Nia)
      
      # gpg: Signature made Sun 03 Oct 2021 03:41:30 AM EDT
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      
      * remotes/bonzini/tags/for-upstream:
        softmmu/memory_mapping: optimize for RamDiscardManager sections
        softmmu/memory_mapping: factor out adding physical memory ranges
        softmmu/memory_mapping: never merge ranges across memory regions
        tpm: mark correct memory region range dirty when clearing RAM
        monitor: Rate-limit MEMORY_DEVICE_SIZE_CHANGE qapi events per device
        qapi: Include qom-path in MEMORY_DEVICE_SIZE_CHANGE qapi events
        virtio-mem-pci: Fix memory leak when creating MEMORY_DEVICE_SIZE_CHANGE event
        configure: Loosen GCC requirement from 7.5.0 to 7.4.0
      
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
  3. Oct 02, 2021
    • Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2021-10-02' into staging · f50ecf54
      Richard Henderson authored
      
      QAPI patches patches for 2021-10-02
      
      # gpg: Signature made Sat 02 Oct 2021 01:37:11 AM EDT
      # gpg:                using RSA key 354BC8B3D7EB2A6B68674E5F3870B400EB918653
      # gpg:                issuer "armbru@redhat.com"
      # gpg: Good signature from "Markus Armbruster <armbru@redhat.com>" [full]
      # gpg:                 aka "Markus Armbruster <armbru@pond.sub.org>" [full]
      
      * remotes/armbru/tags/pull-qapi-2021-10-02:
        qapi/parser: enable pylint checks
        qapi/parser: Silence too-few-public-methods warning
        qapi/parser: enable mypy checks
        qapi/parser: Add FIXME for consolidating JSON-related types
        qapi/parser: add type hint annotations (QAPIDoc)
        qapi/parser: add import cycle workaround
        qapi/parser: Introduce NullSection
        qapi/parser: clarify _end_section() logic
        qapi/parser: remove FIXME comment from _append_body_line
        qapi: Add spaces after symbol declaration for consistency
        qapi/parser: fix unused check_args_section arguments
        qapi/gen: use dict.items() to iterate over _modules
        qapi/pylintrc: ignore 'consider-using-f-string' warning
      
      Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
    • softmmu/memory_mapping: optimize for RamDiscardManager sections · cb83ba8c
      David Hildenbrand authored
      
      virtio-mem logically plugs/unplugs memory within a sparse memory region
      and notifies via the RamDiscardManager interface when parts become
      plugged (populated) or unplugged (discarded).
      
      Currently, we end up (via the two users)
      1) zeroing all logically unplugged/discarded memory during TPM resets.
      2) reading all logically unplugged/discarded memory when dumping, to
         figure out that the content is zero.
      
      1) is always bad, because we assume unplugged memory stays discarded
         (and is already implicitly zero).
      2) isn't that bad with anonymous memory, we end up reading the zero
         page (slow and unnecessary, though). However, once we use some
         file-backed memory (future use case), even reading will populate memory.
      
      Let's cut out all parts marked as not-populated (discarded) via the
      RamDiscardManager. As virtio-mem is the single user, this now means that
      logically unplugged memory ranges will no longer be included in the
      dump, which results in smaller dump files and faster dumping.
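
      A standalone model of the idea (this is not the actual
      softmmu/memory_mapping.c code, which iterates MemoryRegionSections and
      uses the RamDiscardManager replay interface; the types below are
      illustrative): only ranges reported as populated end up in the list of
      guest-physical ranges to dump.

          #include <inttypes.h>
          #include <stdbool.h>
          #include <stddef.h>
          #include <stdint.h>
          #include <stdio.h>

          typedef struct Part {
              uint64_t start;
              uint64_t size;
              bool populated;   /* plugged by virtio-mem vs. discarded */
          } Part;

          static void add_guest_phys_range(uint64_t start, uint64_t size)
          {
              printf("dump range: 0x%" PRIx64 " + 0x%" PRIx64 "\n", start, size);
          }

          static void add_populated_parts(const Part *parts, size_t n)
          {
              for (size_t i = 0; i < n; i++) {
                  if (parts[i].populated) {
                      add_guest_phys_range(parts[i].start, parts[i].size);
                  }
                  /* Discarded (logically unplugged) parts are skipped, so
                   * they are neither read nor written while dumping. */
              }
          }

          int main(void)
          {
              Part parts[] = {
                  { 0x00000000, 0x200000, true  },  /* plugged 2 MiB block  */
                  { 0x00200000, 0x200000, false },  /* unplugged -> skipped */
                  { 0x00400000, 0x200000, true  },
              };

              add_populated_parts(parts, sizeof(parts) / sizeof(parts[0]));
              return 0;
          }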
      
      virtio-mem has a minimum granularity of 1 MiB (and the default is usually
      2 MiB).  Theoretically, we can see quite some fragmentation; in practice,
      memory won't be completely fragmented into 1 MiB pieces.  Still, we might
      end up with many physical ranges.
      
      Both the ELF format and kdump seem ready to support many individual
      ranges (for ELF, the limit seems to be UINT32_MAX; kdump uses a linear
      bitmap).
      
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Eduardo Habkost <ehabkost@redhat.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Claudio Fontana <cfontana@suse.de>
      Cc: Thomas Huth <thuth@redhat.com>
      Cc: "Alex Bennée" <alex.bennee@linaro.org>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Laurent Vivier <lvivier@redhat.com>
      Cc: Stefan Berger <stefanb@linux.ibm.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210727082545.17934-5-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • softmmu/memory_mapping: factor out adding physical memory ranges · 3513bb1b
      David Hildenbrand authored
      
      Let's factor out adding a MemoryRegionSection to the list, to be reused in
      RamDiscardManager context next.
      
      Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Eduardo Habkost <ehabkost@redhat.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Claudio Fontana <cfontana@suse.de>
      Cc: Thomas Huth <thuth@redhat.com>
      Cc: "Alex Bennée" <alex.bennee@linaro.org>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Laurent Vivier <lvivier@redhat.com>
      Cc: Stefan Berger <stefanb@linux.ibm.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210727082545.17934-4-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • softmmu/memory_mapping: never merge ranges across memory regions · 602f8ea7
      David Hildenbrand authored
      
      Let's make sure to not merge when different memory regions are involved.
      Unlikely, but theoretically possible.
      
      Acked-by: Stefan Berger <stefanb@linux.ibm.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Eduardo Habkost <ehabkost@redhat.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Claudio Fontana <cfontana@suse.de>
      Cc: Thomas Huth <thuth@redhat.com>
      Cc: "Alex Bennée" <alex.bennee@linaro.org>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Laurent Vivier <lvivier@redhat.com>
      Cc: Stefan Berger <stefanb@linux.ibm.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210727082545.17934-3-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • tpm: mark correct memory region range dirty when clearing RAM · 45e576c7
      David Hildenbrand authored
      
      We might not start at the beginning of the memory region. Let's
      calculate the offset into the memory region via the difference in the
      host addresses.
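
      A standalone model of that calculation (simplified; in QEMU the real
      calls are memory_region_get_ram_ptr() and memory_region_set_dirty(),
      and the types below are illustrative):

          #include <stdint.h>
          #include <string.h>

          typedef struct RegionSketch {
              uint8_t *host_base;  /* host address where the region's RAM starts */
              uint64_t size;
          } RegionSketch;

          /* Stand-in for memory_region_set_dirty(mr, offset, len). */
          static void region_set_dirty(RegionSketch *mr, uint64_t offset,
                                       uint64_t len)
          {
              (void)mr; (void)offset; (void)len;
          }

          /* Clear guest RAM (e.g. for the TPM "memory overwrite" request)
           * and mark the correct range dirty: the cleared buffer may start
           * in the middle of the region, so the offset is the difference of
           * host addresses, not simply 0. */
          static void clear_and_mark_dirty(RegionSketch *mr, uint8_t *host,
                                           uint64_t len)
          {
              uint64_t offset = (uint64_t)(host - mr->host_base);

              memset(host, 0, len);
              region_set_dirty(mr, offset, len);
          }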
      
      Acked-by: Stefan Berger <stefanb@linux.ibm.com>
      Fixes: ffab1be7 ("tpm: clear RAM when "memory overwrite" requested")
      Cc: Marc-André Lureau <marcandre.lureau@redhat.com>
      Cc: Paolo Bonzini <pbonzini@redhat.com>
      Cc: "Michael S. Tsirkin" <mst@redhat.com>
      Cc: Eduardo Habkost <ehabkost@redhat.com>
      Cc: Alex Williamson <alex.williamson@redhat.com>
      Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Cc: Igor Mammedov <imammedo@redhat.com>
      Cc: Claudio Fontana <cfontana@suse.de>
      Cc: Thomas Huth <thuth@redhat.com>
      Cc: "Alex Bennée" <alex.bennee@linaro.org>
      Cc: Peter Xu <peterx@redhat.com>
      Cc: Laurent Vivier <lvivier@redhat.com>
      Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Peter Xu <peterx@redhat.com>
      Message-Id: <20210727082545.17934-2-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • monitor: Rate-limit MEMORY_DEVICE_SIZE_CHANGE qapi events per device · 77ae2302
      David Hildenbrand authored
      
      We want to rate-limit MEMORY_DEVICE_SIZE_CHANGE events per device,
      otherwise we can lose some events for devices. We can now use the
      qom-path to reliably map an event to a device and make rate-limiting
      device-aware.
      
      This was noticed by starting a VM with two virtio-mem devices that each
      have a requested size > 0. The Linux guest will initialize both devices
      in parallel, resulting in losing MEMORY_DEVICE_SIZE_CHANGE events for
      one of the devices.
      
      Fixes: 722a3c78 ("virtio-pci: Send qapi events when the virtio-mem size changes")
      Suggested-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210929162445.64060-4-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • qapi: Include qom-path in MEMORY_DEVICE_SIZE_CHANGE qapi events · d89dd28f
      David Hildenbrand authored
      
      As we might not always have a device id, it is impossible to always
      match MEMORY_DEVICE_SIZE_CHANGE events to an actual device. Let's
      include the qom-path in the event, which allows for reliable mapping of
      events to devices.
      
      Fixes: 722a3c78 ("virtio-pci: Send qapi events when the virtio-mem size changes")
      Suggested-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Message-Id: <20210929162445.64060-3-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • virtio-mem-pci: Fix memory leak when creating MEMORY_DEVICE_SIZE_CHANGE event · 75b98cb9
      David Hildenbrand authored
      
      Apparently, we don't have to duplicate the string.
      
      Fixes: 722a3c78 ("virtio-pci: Send qapi events when the virtio-mem size changes")
      Cc: qemu-stable@nongnu.org
      Signed-off-by: David Hildenbrand <david@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Message-Id: <20210929162445.64060-2-david@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • configure: Loosen GCC requirement from 7.5.0 to 7.4.0 · 3830df5f
      nia authored
      
      As discussed in issue 614, we're shipping GCC 7.4.0 as the
      system compiler in NetBSD 9, the most recent stable branch,
      and are still actively interested in QEMU on this platform.
      
      The differences between GCC 7.5.0 and 7.4.0 are trivial.
      
      Signed-off-by: Nia Alarie <nia@NetBSD.org>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Message-Id: <YVcpe79I0rly1HJh@homeworld.netbsd.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • qapi/parser: enable pylint checks · d183e048
      John Snow authored
      
      Signed-off-by: John Snow <jsnow@redhat.com>

      Message-Id: <20210930205716.1148693-14-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/parser: Silence too-few-public-methods warning · 18e3673e
      John Snow authored
      
      Eh. Not worth the fuss today. There are bigger fish to fry.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20210930205716.1148693-13-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/parser: enable mypy checks · 2e28283e
      John Snow authored
      
      Signed-off-by: John Snow <jsnow@redhat.com>

      Message-Id: <20210930205716.1148693-12-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/parser: Add FIXME for consolidating JSON-related types · 15acf48c
      John Snow authored
      
      The fix for this comment is forthcoming in a future commit, but this
      will keep me honest. The linting configuration in ./python/setup.cfg
      prohibits 'FIXME' comments. A goal of this long-running series is to
      move ./scripts/qapi to ./python/qemu/qapi so that the QAPI generator is
      regularly type-checked by GitLab CI.
      
      This comment is a time-bomb to force me to address this issue prior to
      that step.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20210930205716.1148693-11-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/parser: add type hint annotations (QAPIDoc) · 5f0d9f3b
      John Snow authored
      
      Annotations do not change runtime behavior.
      This commit consists of only annotations.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20210930205716.1148693-10-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/parser: add import cycle workaround · e7ac60fc
      John Snow authored
      Adding static types causes a cycle in the QAPI generator:
      [schema -> expr -> parser -> schema]. It exists because the QAPIDoc
      class needs the names of types defined by the schema module, but the
      schema module needs to import both expr.py/parser.py to do its actual
      parsing.
      
      Ultimately, the layering violation is that parser.py should not have any
      knowledge of specifics of the Schema. QAPIDoc performs double-duty here
      both as a parser *and* as a finalized object that is part of the schema.
      
      In this patch, add the offending type hints alongside the workaround to
      avoid the cycle becoming a problem at runtime. See
      https://mypy.readthedocs.io/en/latest/runtime_troubles.html#import-cycles
      for more information on this workaround technique.
      
      I see three ultimate resolutions here:
      
      (1) Just keep this patch and use the TYPE_CHECKING trick to eliminate
          the cycle which is only present during static analysis.
      
      (2) Don't bother to annotate connect_member() et al, give them 'object'
          or 'Any'. I don't particularly like this, because it diminishes the
          usefulness of type hints for documentation purposes. Still, it's an
          extremely quick fix.
      
      (3) Reimplement doc <--> definition correlation directly in schema.py,
          integrating doc fields directly into QAPISchemaMember and relieving
          the QAPIDoc class of the responsibility. Users of the information
          would instead visit the members first and retrieve their
          documentation instead of the inverse operation -- visiting the
          documentation and retrieving their members.
      
      My preference is (3), but in the short-term (1) is the easiest way to
      have my cake (strong type hints) and eat it too (Not have import
      cycles). Do (1) for now, but plan for (3).
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20210930205716.1148693-9-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/parser: Introduce NullSection · f4c05aaf
      John Snow authored
      
      Here's the weird bit. QAPIDoc generally expects -- virtually everywhere
      -- that it will always have a current section. The sole exception to
      this is in the case that end_comment() is called, which leaves us with
      *no* section.  However, in this case, we also do not expect to ever
      mutate the comment contents again.
      
      NullSection is just a Null-object that allows us to maintain the
      invariant that we *always* have a current section, enforced by static
      typing -- allowing us to type that field as QAPIDoc.Section instead of
      the more ambiguous Optional[QAPIDoc.Section].
      
      end_section is renamed to switch_section and now accepts as an argument
      the new section to activate, clarifying that no callers ever just
      unilaterally end a section; they only do so when starting a new section.
      
      Signed-off-by: John Snow <jsnow@redhat.com>

      Message-Id: <20210930205716.1148693-8-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/parser: clarify _end_section() logic · 1e20a775
      John Snow authored
      
      The "if self._section" clause in end_section is mysterious: In which
      circumstances might we end a section when we don't have one?
      
      QAPIDoc always expects there to be a "current section", except after a
      call to end_comment().  This actually *shouldn't* ever be 'None',
      so let's remove that logic so I don't wonder why it's like this again in
      three months.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20210930205716.1148693-7-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/parser: remove FIXME comment from _append_body_line · cd87c14c
      John Snow authored
      
      True, we do not check the validity of this symbol -- but we don't check
      the validity of definition names during parse, either -- that happens
      later, during the expr check. I don't want to introduce a dependency on
      expr.py:check_name_str here and introduce a cycle.
      
      Instead, rest assured that a documentation block is required for each
      definition, and that each block's name is matched against the definition
      it documents -- which is what catches mistakes like the one below.
      
      e.g., let's say that block-core.json has a comment block for
      "Snapshot!Info" by accident. We'll see this error message:
      
      In file included from ../../qapi/block.json:8:
      ../../qapi/block-core.json: In struct 'SnapshotInfo':
      ../../qapi/block-core.json:38: documentation comment is for 'Snapshot!Info'
      
      That's a pretty decent error message.
      
      Now, let's say that we actually mangle it twice, identically:
      
      ../../qapi/block-core.json: In struct 'Snapshot!Info':
      ../../qapi/block-core.json:38: struct has an invalid name
      
      That's also pretty decent. If we forget to fix it in both places, we'll
      just be back to the first error.
      
      Therefore, let's just drop this FIXME and adjust the error message to
      not imply a more thorough check than is actually performed.
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20210930205716.1148693-6-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi: Add spaces after symbol declaration for consistency · a9e2eb06
      John Snow authored
      
      Several QGA definitions omit a blank line after the symbol
      declaration. This works OK currently, but it's the only place where we
      do this. Adjust it for consistency.
      
      Future commits may wind up enforcing this formatting.
      
      Signed-off-by: John Snow <jsnow@redhat.com>

      Message-Id: <20210930205716.1148693-5-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/parser: fix unused check_args_section arguments · 012336a1
      John Snow authored
      
      Pylint informs us we're not using these arguments. Oops, it's
      right. Correct the error message and remove the remaining unused
      parameter.
      
      Fix test output now that the error message is improved.
      
      Fixes: e151941d
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20210930205716.1148693-4-jsnow@redhat.com>
      [Commit message formatting tweaked]
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
    • qapi/gen: use dict.items() to iterate over _modules · 2adb988e
      John Snow authored
      
      New pylint warning. I could silence it, but this is the only occurrence
      in the entire tree, including everything in iotests/ and python/. Easier
      to just change this one instance.
      
      (The warning is emitted in cases where you are fetching the values
      anyway, so you may as well just take advantage of the iterator to avoid
      redundant lookups.)
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Message-Id: <20210930205716.1148693-3-jsnow@redhat.com>
      Reviewed-by: Markus Armbruster <armbru@redhat.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>