  7. Apr 18, 2010
    • kvm: avoid collision with dprintf macro in stdio.h, spotted by clang · 8c0d577e
      Blue Swirl authored
      
      Fixes clang errors:
        CC    i386-softmmu/kvm.o
      /src/qemu/target-i386/kvm.c:40:9: error: 'dprintf' macro redefined
      In file included from /src/qemu/target-i386/kvm.c:21:
      In file included from /src/qemu/qemu-common.h:27:
      In file included from /usr/include/stdio.h:910:
      /usr/include/bits/stdio2.h:189:12: note: previous definition is here
        CC    i386-softmmu/kvm-all.o
      /src/qemu/kvm-all.c:39:9: error: 'dprintf' macro redefined
      In file included from /src/qemu/kvm-all.c:23:
      In file included from /src/qemu/qemu-common.h:27:
      In file included from /usr/include/stdio.h:910:
      /usr/include/bits/stdio2.h:189:12: note: previous definition is here
      
      Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
  9. Mar 17, 2010
    • Large page TLB flush · d4c430a8
      Paul Brook authored
      
      QEMU uses a fixed page size for the CPU TLB.  If the guest uses large
      pages then we effectively split these into multiple smaller pages, and
      populate the corresponding TLB entries on demand.
      
      When the guest invalidates the TLB by virtual address we must invalidate
      all entries covered by the large page.  However the address used to
      invalidate the entry may not be present in the QEMU TLB, so we do not
      know which regions to clear.
      
      Implementing a full variable size TLB is hard and slow, so just keep a
      simple address/mask pair to record which addresses may have been mapped by
      large pages.  If the guest invalidates this region then flush the
      whole TLB.
      
      Signed-off-by: Paul Brook <paul@codesourcery.com>
  12. Mar 10, 2010
    • target-i386: fix SIB decoding with index = 4 · b16f827b
      Aurelien Jarno authored
      
      A SIB byte with an index of 4 means "no scaled index", even if the scale
      value is not 0. In 64-bit mode, if REX.X is used, an index of 4 selects
      %r12. This is correctly handled by the computation of the index variable,
      which includes the index bits, and also the REX.X prefix:
      
          index = ((code >> 3) & 7) | REX_X(s);
      
      Thanks to Avi Kivity, Jamie Lokier and Malc for the analysis of the
      problem and the initial patch.
      
      Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
  14. Mar 04, 2010
    • KVM: x86: Restrict writeback of VCPU state · ea643051
      Jan Kiszka authored
      
      Do not write nmi_pending, sipi_vector, and mpstate unless we at least go
      through a reset. And the TSC as well as KVM wallclocks should only be
      written on a full sync, otherwise we risk dropping some time on state
      read-modify-write.
      
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    • KVM: Rework VCPU state writeback API · ea375f9a
      Jan Kiszka authored
      
      This grand cleanup drops all reset and vmsave/load related
      synchronization points in favor of four(!) generic hooks:
      
      - cpu_synchronize_all_states in qemu_savevm_state_complete
        (initial sync from kernel before vmsave)
      - cpu_synchronize_all_post_init in qemu_loadvm_state
        (writeback after vmload)
      - cpu_synchronize_all_post_init in main after machine init
      - cpu_synchronize_all_post_reset in qemu_system_reset
        (writeback after system reset)
      
      These writeback points, plus the existing one on VCPU exec after
      cpu_synchronize_state, map onto three levels of writeback:
      
      - KVM_PUT_RUNTIME_STATE (during runtime, other VCPUs continue to run)
      - KVM_PUT_RESET_STATE   (on synchronous system reset, all VCPUs stopped)
      - KVM_PUT_FULL_STATE    (on init or vmload, all VCPUs stopped as well)
      
      This level is passed to the arch-specific VCPU state writing function
      that will decide which concrete substates need to be written. That way,
      no writer of load, save or reset functions that interact with in-kernel
      KVM states will ever have to worry about synchronization again. That
      also means that a lot of reasons for races, segfaults and deadlocks are
      eliminated.
      
      cpu_synchronize_state remains untouched, just as Anthony suggested. We
      continue to need it before reading or writing of VCPU states that are
      also tracked by in-kernel KVM subsystems.
      
      Consequently, this patch removes many cpu_synchronize_state calls that
      are now redundant, along with the remaining explicit register syncs.
      
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
    • KVM: Rework of guest debug state writing · b0b1d690
      Jan Kiszka authored
      
      So far we synchronized any dirty VCPU state back into the kernel before
      updating the guest debug state. This was a tribute to a deficit in x86
      kernels before 2.6.33. But as this is an arch-dependent issue, it is
      better handled in the x86 part of KVM, removing the writeback point
      from generic code. This also avoids overwriting the flushed state later
      on if user space decides to change some more registers before resuming
      the guest.
      
      We furthermore need to reinject guest exceptions via the appropriate
      mechanism. That is KVM_SET_GUEST_DEBUG for older kernels and
      KVM_SET_VCPU_EVENTS for recent ones. Using both mechanisms at the same
      time will cause state corruptions.
      
      Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com>
      Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>