- Feb 04, 2016
Peter Maydell authored
Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1454089805-5470-16-git-send-email-peter.maydell@linaro.org
-
- Jan 13, 2016
Markus Armbruster authored
Done with this Coccinelle semantic patch:

    @@
    expression FMT, E, S;
    expression list ARGS;
    @@
    -    error_report(FMT, ARGS, error_get_pretty(E));
    +    error_reportf_err(E, FMT/*@@@*/, ARGS);
    (
    -    error_free(E);
    |
         exit(S);
    |
         abort();
    )

followed by replacing '%s"/*@@@*/' with '"' and some line rewrapping, because I can't figure out how to make Coccinelle transform strings. We now use the error whole instead of just its message obtained with error_get_pretty(). This avoids suppressing its hint (see commit 50b7b000), but I can't see how the errors touched in this commit could come with hints.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1450452927-8346-12-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
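The transformation above folds the caller's context into the error itself rather than flattening it with error_get_pretty(). A minimal sketch of the prepend idea — the Error struct and helper below are simplified stand-ins for illustration, not QEMU's real API:

```c
#include <stdio.h>
#include <string.h>
#include <stdarg.h>

/* Simplified stand-in for QEMU's Error object. */
typedef struct Error {
    char msg[256];
} Error;

/* Prepend printf-style context to the error message, in the spirit of
 * error_reportf_err(): the original message (and any hint) survives. */
static void error_prepend_sketch(Error *err, const char *fmt, ...)
{
    char ctx[128];
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(ctx, sizeof(ctx), fmt, ap);
    va_end(ap);

    char combined[256];
    snprintf(combined, sizeof(combined), "%s%s", ctx, err->msg);
    strncpy(err->msg, combined, sizeof(err->msg) - 1);
    err->msg[sizeof(err->msg) - 1] = '\0';
}
```

The net effect is the same report the old `error_report(FMT ": %s", ..., error_get_pretty(err))` produced, without discarding the rest of the error object.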
-
- Aug 13, 2015
Wei Huang authored
To share SMBIOS among different architectures, this patch moves the SMBIOS code (smbios.c and smbios.h) out of the x86-specific folders into a new hw/smbios directory. As a result, CONFIG_SMBIOS=y is defined in the x86 default config files.
Acked-by: Gabriel Somlo <somlo@cmu.edu>
Tested-by: Gabriel Somlo <somlo@cmu.edu>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Tested-by: Leif Lindholm <leif.lindholm@linaro.org>
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
-
- Jun 12, 2015
Juan Quintela authored
To make changes easier, the copy kept almost all include files; this patch removes the unnecessary ones. It compiles on Linux x64 with all architectures configured, and cross-compiles for Windows 32 and 64 bit.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
Juan Quintela authored
For historical reasons, RAM migration has lived in arch_init.c. Just split it out into migration/ram.c, the same as was done with block.c. This is pure code movement, with no changes at all.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
- Jun 05, 2015
Stefan Hajnoczi authored
The dirty memory bitmap is managed by ram_addr.h and copied to migration_bitmap[] periodically during live migration. Move the code to sync the bitmap to ram_addr.h, where related code lives.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <1417519399-3166-5-git-send-email-stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- Jun 02, 2015
Ikey Doherty authored
The target-x86_64.conf sysconfig file has been empty and essentially ignored for several years. This change removes the unused file to enable moving towards a stateless configuration.
Signed-off-by: Ikey Doherty <michael.i.doherty@intel.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
-
- May 07, 2015
Liang Li authored
If live migration is very fast and completes within 1 second, the dirty_sync_count of MigrationState is not updated. You then see "dirty sync count: 0" in the qemu monitor even though the actual dirty sync count is not 0.
Signed-off-by: Liang Li <liang.z.li@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Michael Chapman authored
This bug manifested itself as a VM that could not be resumed by libvirt following a migration:

    # virsh resume example
    error: Failed to resume domain example
    error: internal error: cannot parse json {"return": {"xbzrle-cache":
    {..., "cache-miss-rate": -nan, ...}, ... } }: lexical error: malformed
    number, a digit is required after the minus sign.

This patch also ensures xbzrle_cache_miss_prev and iterations_prev are cleared at the start of the migration.
Signed-off-by: Michael Chapman <mike@very.puzzling.org>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
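The -nan came from dividing by an iteration delta of zero when a migration completed between two sync points. A hedged sketch of the guard (the parameter names are illustrative, not QEMU's exact fields):

```c
/* Return the xbzrle cache-miss rate for the interval since the last
 * sync, or 0.0 when no pages were processed in the interval --
 * a 0/0 division would otherwise serialize as "-nan" and produce
 * JSON that parsers such as libvirt's reject. */
static double cache_miss_rate(long cache_miss, long cache_miss_prev,
                              long iterations, long iterations_prev)
{
    long misses = cache_miss - cache_miss_prev;
    long total = iterations - iterations_prev;

    if (total <= 0) {
        return 0.0;
    }
    return (double)misses / total;
}
```

Resetting the `_prev` counters at migration start, as the patch does, keeps the deltas meaningful on a second migration attempt.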
-
Liang Li authored
Implement the core logic of multi-thread decompression; decompression now works.
Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Liang Li authored
Now multi-thread compression can co-work with xbzrle. When xbzrle is on, multi-thread compression only works during the first round of RAM data sync.
Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Liang Li authored
Implement the core logic of multi-thread compression. At this point, multi-thread compression can't co-work with xbzrle yet.
Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- May 06, 2015
Liang Li authored
Split the function save_zero_page out of ram_save_page so that we can reuse it later.
Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
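The zero-page test that save_zero_page relies on is conceptually just a scan of the page buffer. QEMU's real buffer_is_zero() does this with vectorized, word-sized compares; the naive form looks like this:

```c
#include <stddef.h>
#include <stdbool.h>

/* Naive zero-page detector: true iff every byte of the page is 0.
 * A zero page can then be sent as a single flag + offset instead of
 * 4 KiB of payload, which is what save_zero_page exploits. */
static bool page_is_zero(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i] != 0) {
            return false;
        }
    }
    return true;
}
```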
-
Liang Li authored
Define the data structures and variables used to do multi-thread decompression, and add the code to initialize and free them.
Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Liang Li authored
Define the data structures and variables used to do multi-thread compression, and add the code to initialize and free them.
Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Liang Li authored
Add the code to create and destroy the multiple threads that will be used to do data decompression. Some functions are left empty for clarity; their code will be added later.
Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Liang Li authored
Add the code to create and destroy the multiple threads that will be used to do data compression. Some functions are left empty for clarity; their code will be added later.
Signed-off-by: Liang Li <liang.z.li@intel.com>
Signed-off-by: Yang Zhang <yang.z.zhang@intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
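The create/destroy pattern these patches stub out is a plain worker-pool setup. A minimal pthreads sketch of the shape (names and the empty worker body are illustrative; QEMU uses its own qemu-thread wrappers):

```c
#include <pthread.h>
#include <stddef.h>

#define NUM_COMP_THREADS 4

static pthread_t comp_threads[NUM_COMP_THREADS];
static int quit_comp_thread;            /* written only under comp_mutex */
static pthread_mutex_t comp_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t comp_cond = PTHREAD_COND_INITIALIZER;

/* Worker body: sleep until work is queued or shutdown is requested.
 * The compression step itself is elided, as in the original patch. */
static void *do_data_compress(void *opaque)
{
    pthread_mutex_lock(&comp_mutex);
    while (!quit_comp_thread) {
        /* ... dequeue a page and compress it here ... */
        pthread_cond_wait(&comp_cond, &comp_mutex);
    }
    pthread_mutex_unlock(&comp_mutex);
    return NULL;
}

static void compress_threads_create(void)
{
    quit_comp_thread = 0;
    for (int i = 0; i < NUM_COMP_THREADS; i++) {
        pthread_create(&comp_threads[i], NULL, do_data_compress, NULL);
    }
}

static void compress_threads_join(void)
{
    pthread_mutex_lock(&comp_mutex);
    quit_comp_thread = 1;               /* set under the mutex ... */
    pthread_cond_broadcast(&comp_cond); /* ... then wake everyone */
    pthread_mutex_unlock(&comp_mutex);
    for (int i = 0; i < NUM_COMP_THREADS; i++) {
        pthread_join(comp_threads[i], NULL);
    }
}
```

Setting the quit flag under the mutex before broadcasting is what makes shutdown race-free: a worker either sees the flag before waiting, or is woken by the broadcast.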
-
- Mar 26, 2015
Juan Quintela authored
The compression code (still not in tree) wants to call this function from outside the migration thread, so we can't write to last_sent_block. Instead of reverting the full patch "[PULL 07/11] save_block_hdr: we can recalculate", just revert the parts that touch last_sent_block.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
-
- Mar 17, 2015
Hailiang Zhang authored
There is already a helper function, ram_bytes_total(); we can use it to help count the total number of pages used by RAM blocks.
Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- Mar 16, 2015
Juan Quintela authored
It has always been a page header, not a block header. While at it: the flags argument was only passed so a bit could be ORed into it, so just do the OR in the caller.
Signed-off-by: Juan Quintela <quintela@redhat.com>
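Moving the OR to the caller means the header writer receives a pre-combined value. A tiny sketch of the encoding (the flag values and mask here are illustrative stand-ins, not QEMU's exact RAM_SAVE_FLAG_* constants):

```c
#include <stdint.h>

/* Illustrative page-header flag bits; QEMU's real RAM_SAVE_FLAG_*
 * values live in the migration code. */
#define RAM_SAVE_FLAG_ZERO   0x02
#define RAM_SAVE_FLAG_PAGE   0x08
#define TARGET_PAGE_MASK     (~(uint64_t)0xfff)   /* 4 KiB pages */

/* The page-header writer no longer takes a separate flags argument:
 * the caller ORs the flag into the page-aligned offset first, and
 * this single word is what goes on the wire. */
static uint64_t page_header_word(uint64_t offset, uint64_t flags)
{
    return (offset & TARGET_PAGE_MASK) | flags;
}
```

Because the offset is page-aligned, its low bits are free to carry the flags, which is why a single OR suffices.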
-
Juan Quintela authored
No need to pass it through all the callers. Once there, update last_sent_block here.
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Juan Quintela authored
Add a parameter to pass the number of bytes written, and make it return the number of pages written instead.
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Juan Quintela authored
Add a parameter to pass the number of bytes written, and make it return the number of pages written instead.
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Juan Quintela authored
Add a parameter to pass the number of bytes written, and make it return the number of pages written instead.
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Juan Quintela authored
It used to be an int, but as an int we can't pass the bytes_transferred parameter directly; that happens later in the series.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
-
- Feb 18, 2015
Markus Armbruster authored
Coccinelle semantic patch:

    @@
    expression E;
    @@
    -    error_report("%s", error_get_pretty(E));
    -    error_free(E);
    +    error_report_err(E);

    @@
    expression E, S;
    @@
    -    error_report("%s", error_get_pretty(E));
    +    error_report_err(E);
    (
         exit(S);
    |
         abort();
    )

Trivial manual touch-ups in block/sheepdog.c.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
-
- Feb 16, 2015
Mike Day authored
Allow "unlocked" reads of the ram_list by using an RCU-enabled QLIST. The ramlist mutex is kept. call_rcu callbacks are run with the iothread lock taken, but that may change in the future. Writers still take the ramlist mutex, but they no longer need to assume that the iothread lock is taken. Readers of the list, instead, no longer require either the iothread or ramlist mutex, but they need to use rcu_read_lock() and rcu_read_unlock().

One place in arch_init.c was downgrading from write side to read side like this:

    qemu_mutex_lock_iothread()
    qemu_mutex_lock_ramlist()
    ...
    qemu_mutex_unlock_iothread()
    ...
    qemu_mutex_unlock_ramlist()

and the equivalent idiom is:

    qemu_mutex_lock_ramlist()
    rcu_read_lock()
    ...
    qemu_mutex_unlock_ramlist()
    ...
    rcu_read_unlock()

Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
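Underneath the idiom change is the RCU publish/read contract: writers fully initialize a node, then publish it with one atomic pointer store; readers load the pointer and dereference it with no lock. A reduced model using C11 atomics (QEMU uses its own rcu_read_lock()/atomic_rcu_read() helpers; this only shows the memory-ordering shape):

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct RAMBlockModel {
    size_t used_length;
} RAMBlockModel;

static _Atomic(RAMBlockModel *) list_head;

/* Writer side: publish only after the node is fully initialized,
 * with release ordering so readers see a consistent node. */
static void publish_block(RAMBlockModel *b)
{
    atomic_store_explicit(&list_head, b, memory_order_release);
}

/* Reader side: no mutex needed, only an acquire load -- this is
 * what lets migration walk the list without the ramlist mutex. */
static size_t read_used_length(void)
{
    RAMBlockModel *b = atomic_load_explicit(&list_head,
                                            memory_order_acquire);
    return b ? b->used_length : 0;
}
```

What this model omits is reclamation: real RCU additionally defers freeing old nodes (call_rcu) until all readers are done, which is why the commit keeps the ramlist mutex on the write side.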
-
Mike Day authored
QLIST has RCU-friendly primitives, so switch to it.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
Mike Day authored
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Mike Day <ncmike@ncultra.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- Jan 15, 2015
ChenLiang authored
Avoid hot pages being replaced by others, which remarkably decreases cache misses. Sample results with the test program quoted from xbzrle.txt, run in a VM (migration bandwidth: 1GE, xbzrle cache size: 8MB).

The test program:

    #include <stdlib.h>
    #include <stdio.h>
    int main()
    {
        char *buf = (char *) calloc(4096, 4096);
        while (1) {
            int i;
            for (i = 0; i < 4096 * 4; i++) {
                buf[i * 4096 / 4]++;
            }
            printf(".");
        }
    }

Before this patch:

    virsh qemu-monitor-command test_vm '{"execute": "query-migrate"}'
    {"return":{"expected-downtime":1020,"xbzrle-cache":{"bytes":1108284,
    "cache-size":8388608,"cache-miss-rate":0.987013,"pages":18297,"overflow":8,
    "cache-miss":1228737},"status":"active","setup-time":10,"total-time":52398,
    "ram":{"total":12466991104,"remaining":1695744,"mbps":935.559472,
    "transferred":5780760580,"dirty-sync-counter":271,"duplicate":2878530,
    "dirty-pages-rate":29130,"skipped":0,"normal-bytes":5748592640,
    "normal":1403465}},"id":"libvirt-706"}

18k pages sent compressed in 52 seconds; the cache-miss-rate is 98.7%, almost a total miss.

After optimizing:

    virsh qemu-monitor-command test_vm '{"execute": "query-migrate"}'
    {"return":{"expected-downtime":2054,"xbzrle-cache":{"bytes":5066763,
    "cache-size":8388608,"cache-miss-rate":0.485924,"pages":194823,"overflow":0,
    "cache-miss":210653},"status":"active","setup-time":11,"total-time":18729,
    "ram":{"total":12466991104,"remaining":3895296,"mbps":937.663549,
    "transferred":1615042219,"dirty-sync-counter":98,"duplicate":2869840,
    "dirty-pages-rate":58781,"skipped":0,"normal-bytes":1588404224,
    "normal":387794}},"id":"libvirt-266"}

194k pages sent compressed in 18 seconds; the cache-miss-rate decreases to 48.59%.
Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
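The replacement policy behind those numbers keeps a cached page resident unless the incoming candidate was dirtied in a newer sync round, so hot pages are not evicted by colder ones. A toy version of the decision (the struct, field names, and exact comparison are illustrative, not QEMU's page_cache internals):

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct CacheItem {
    uint64_t it_addr;
    uint64_t it_age;   /* dirty-sync count when the page was cached */
} CacheItem;

/* Replace the cached page only when the candidate comes from a newer
 * dirty-sync round than the resident entry; otherwise keep the hot
 * resident page in the cache. */
static bool should_replace(const CacheItem *resident, uint64_t cand_age)
{
    return cand_age > resident->it_age;
}
```

With a strict hash-slot overwrite, a hot page and a cold page mapping to the same slot thrash each other every round; gating replacement on age breaks that cycle, which is the drop from 98.7% to 48.6% misses above.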
-
- Jan 08, 2015
Michael S. Tsirkin authored
If a block's used_length does not match, try to resize it.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
-
Michael S. Tsirkin authored
This patch allows us to distinguish between two length values for each block:

    max_length  - length of the memory block that was allocated
    used_length - length of the block used by QEMU/guest

Currently, we set used_length = max_length, unconditionally. Follow-up patches allow used_length <= max_length.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
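With the two lengths split, a resize only has to stay within the existing allocation. A sketch of the invariant (the struct and return convention are simplified stand-ins for QEMU's RAMBlock handling):

```c
#include <stddef.h>

typedef struct RAMBlockSizes {
    size_t max_length;    /* bytes actually allocated */
    size_t used_length;   /* bytes exposed to QEMU/guest */
} RAMBlockSizes;

/* Resize succeeds only while used_length <= max_length holds;
 * growing past the allocation is refused.  Returns 0 on success,
 * -1 on failure. */
static int ram_block_resize(RAMBlockSizes *b, size_t newsize)
{
    if (newsize > b->max_length) {
        return -1;
    }
    b->used_length = newsize;
    return 0;
}
```

This is what lets the migration fix in the sibling commit "try to resize" a block whose used_length from the stream differs from the local one.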
-
- Nov 20, 2014
ChenLiang authored
The static variables in migration_bitmap_sync will not be reset in the case of a second attempted migration.
Signed-off-by: ChenLiang <chenliang88@huawei.com>
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
-
- Nov 18, 2014
Michael S. Tsirkin authored
During migration, the values read from the migration stream during RAM load are not validated -- especially the offset in host_from_stream_offset(), and also the length of the writes in the callers of said function. To fix this, we need to make sure that the [offset, offset + length] range fits into one of the allocated memory regions. Validating addr < len should be sufficient, since data seems to always be managed in TARGET_PAGE_SIZE chunks. Fixes: CVE-2014-7840. Note: follow-up patches add extra checks on each block->host access.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
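The fix boils down to a range check before any block->host access, written so the addition cannot overflow. A sketch of the validation (the block descriptor is simplified; QEMU checks against its real RAMBlock list):

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct BlockRange {
    uint64_t length;   /* size of the allocated region */
} BlockRange;

/* Accept an offset from the migration stream only when the whole
 * [offset, offset + len) write fits inside the block.  Comparing
 * against length - len (after checking len itself) avoids the
 * offset + len overflow a hostile stream could trigger. */
static bool offset_in_block(const BlockRange *b, uint64_t offset,
                            uint64_t len)
{
    if (len > b->length) {
        return false;
    }
    return offset <= b->length - len;
}
```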
-
- Oct 14, 2014
Peter Lieven authored
This patch extends commit db80face by not only checking for unknown flags, but also filtering out unknown flag combinations.
Suggested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
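Checking combinations means comparing against the flag sets the format actually emits, not just masking individual bits. A sketch of the idea (the flag values and the accepted set are illustrative stand-ins, not QEMU's exact RAM_SAVE_FLAG_* encoding):

```c
#include <stdbool.h>

/* Illustrative stand-ins for the RAM_SAVE_FLAG_* bits. */
#define F_ZERO   0x02
#define F_MEM    0x04
#define F_PAGE   0x08
#define F_EOS    0x10
#define F_CONT   0x20

/* Reject not just unknown bits but unknown combinations: only the
 * flag sets the stream format actually produces are accepted.
 * (Here F_CONT is assumed to be the one modifier that may accompany
 * the data flags.) */
static bool flags_valid(unsigned flags)
{
    switch (flags & ~F_CONT) {
    case F_ZERO:
    case F_PAGE:
    case F_MEM:
    case F_EOS:
        return true;
    default:
        return false;
    }
}
```

A plain `flags & ~KNOWN_MASK` test would accept e.g. F_PAGE|F_ZERO, which is well-formed bit-wise but meaningless as a record; the whitelist catches it.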
-
- Oct 04, 2014
Eduardo Habkost authored
As the function always returns 1, it is not needed anymore.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
-
- Sep 01, 2014
Bastian Koppelmann authored
Add TriCore target stubs, a QOM CPU, and a maintainers entry.
Signed-off-by: Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
Message-id: 1409572800-4116-2-git-send-email-kbastian@mail.uni-paderborn.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
- Aug 08, 2014
Alex Bligh authored
When live migration fails due to a section length mismatch we currently see an error message like:

    Length mismatch: 0000:00:03.0/virtio-net-pci.rom: 10000 in != 20000

The section lengths are in fact in hex, so this should read:

    Length mismatch: 0000:00:03.0/virtio-net-pci.rom: 0x10000 in != 0x20000

Correct the error string to reflect this.
Signed-off-by: Alex Bligh <alex@alex.org.uk>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
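The fix is purely in the format string: prefix the hex lengths with 0x so they cannot be misread as decimal. A sketch (the function name and buffer-based shape are illustrative; the real code reports the message directly):

```c
#include <stdio.h>
#include <string.h>

/* Build the corrected mismatch message: the section lengths are hex,
 * so print them with 0x to avoid "10000 in != 20000" being misread
 * as decimal values. */
static int format_length_mismatch(char *buf, size_t bufsz,
                                  const char *name,
                                  unsigned long in_len,
                                  unsigned long expect_len)
{
    return snprintf(buf, bufsz,
                    "Length mismatch: %s: 0x%lx in != 0x%lx",
                    name, in_len, expect_len);
}
```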
-
- Jun 16, 2014
Peter Lieven authored
If a saved VM has unknown flags in the memory data, qemu currently simply ignores the flag and continues, which yields an unpredictable result. This patch catches all unknown flags and aborts loading of the VM. Additionally, error reports are emitted if the migration aborts abnormally.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- Jun 10, 2014
Chen Gang authored
We call g_free() after cache_fini() in migration_end(), but we don't call it after cache_fini() in xbzrle_cache_resize(), leaking the memory. cache_init() and cache_fini() are a pair: since cache_init() allocates the cache, let cache_fini() free it. This plugs the leak.
Signed-off-by: Chen Gang <gang.chen.5i5j@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
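The rule the fix establishes is the usual ownership pairing: the function that allocates is matched by the function that frees, and callers never free piecemeal. A minimal sketch (the struct and function names are illustrative, not QEMU's page_cache API):

```c
#include <stdlib.h>

typedef struct PageCacheSketch {
    unsigned char *pages;
    size_t num_bytes;
} PageCacheSketch;

/* cache_init() owns both allocations... */
static PageCacheSketch *cache_init_sketch(size_t num_bytes)
{
    PageCacheSketch *c = malloc(sizeof(*c));
    if (!c) {
        return NULL;
    }
    c->pages = calloc(1, num_bytes);
    if (!c->pages) {
        free(c);
        return NULL;
    }
    c->num_bytes = num_bytes;
    return c;
}

/* ...so cache_fini() releases both, including the cache struct
 * itself.  Callers must not g_free()/free() the cache afterwards --
 * the asymmetric caller-side free is exactly the leak pattern the
 * patch removes. */
static void cache_fini_sketch(PageCacheSketch *c)
{
    if (c) {
        free(c->pages);
        free(c);
    }
}
```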
-