- Mar 12, 2013
-
Andreas Färber authored
No functional change, just a reduction of CPU loops. The mon_cpu field is left untouched for now since changing that requires a number of larger prerequisites, including cpu_synchronize_state() and mon_get_cpu().
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
-
Igor Mammedov authored
Commit 55e5c285 breaks the CPU-not-found return value and instead returns the CPU corresponding to the last non-NULL env. Fix it by returning the CPU only if env is not NULL; otherwise the CPU was not found and the function should return NULL.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Andreas Färber <afaerber@suse.de>
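For context, a minimal sketch of the lookup rule this fix restores, using a hypothetical simplified CPU list type rather than the real QEMU CPUState/CPUArchState structures:

```c
#include <stddef.h>

/* Hypothetical, simplified CPU list node; not the real QEMU CPUState. */
typedef struct CPUSketch {
    int cpu_index;
    void *env;                        /* NULL env marks "no architecture state" */
    struct CPUSketch *next;
} CPUSketch;

/* Return a CPU only when a matching entry with a non-NULL env exists;
 * otherwise report "not found" by returning NULL rather than whatever
 * candidate was last left in the loop variable. */
static CPUSketch *get_cpu_sketch(CPUSketch *head, int index)
{
    CPUSketch *cpu;

    for (cpu = head; cpu != NULL; cpu = cpu->next) {
        if (cpu->env != NULL && cpu->cpu_index == index) {
            return cpu;
        }
    }
    return NULL;                      /* not found */
}
```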
-
- Mar 11, 2013
-
Anthony Liguori authored
# By Paolo Bonzini (40) and others
# Via Juan Quintela
* quintela/migration.next: (46 commits)
  page_cache: dup memory on insert
  page_cache: fix memory leak
  Fix cache_resize to keep old entry age
  Fix page_cache leak in cache_resize
  migration: inline migrate_fd_close
  migration: eliminate s->migration_file
  migration: move contents of migration_close to migrate_fd_cleanup
  migration: move rate limiting to QEMUFile
  migration: small changes around rate-limiting
  migration: use qemu_ftell to compute bandwidth
  migration: use QEMUFile for writing outgoing migration data
  migration: use QEMUFile for migration channel lifetime
  qemu-file: simplify and export qemu_ftell
  qemu-file: add writable socket QEMUFile
  qemu-file: check exit status when closing a pipe QEMUFile
  qemu-file: fsync a writable stdio QEMUFile
  migration: merge qemu_popen_cmd with qemu_popen
  migration: use qemu_file_rate_limit consistently
  migration: remove useless qemu_file_get_error check
  migration: detect error before sleeping
  ...
-
Paolo Bonzini authored
A conflict was resolved the wrong way when merging commit 320ba5fe (build: always link device_tree.o into emulators if libfdt available, 2013-02-05). This causes a build failure for the arm-softmmu target due to a multiply-defined symbol.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-id: 1362997886-9470-1-git-send-email-pbonzini@redhat.com
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
-
Peter Lieven authored
The page cache frees all data on finish, on resize, and if there is a collision on insert. So it should be the cache's responsibility to dup the data that is stored in the cache.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
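A minimal sketch of the dup-on-insert ownership rule described above, assuming simplified, hypothetical PageCache/CacheItem types rather than the real page_cache.c interfaces:

```c
#include <glib.h>
#include <stddef.h>
#include <stdint.h>

typedef struct CacheItem {            /* hypothetical, simplified entry */
    uint64_t it_addr;
    uint64_t it_age;
    uint8_t *it_data;
} CacheItem;

typedef struct PageCache {            /* hypothetical, simplified cache */
    CacheItem *page_cache;
    size_t     num_items;
    size_t     page_size;
    uint64_t   max_item_age;
} PageCache;

/* The cache owns its copies: it duplicates the page on insert, so it can
 * safely g_free() the old data on a collision (and on resize/finish). */
static void cache_insert_sketch(PageCache *cache, uint64_t addr,
                                const uint8_t *pdata)
{
    CacheItem *it = &cache->page_cache[addr % cache->num_items];

    g_free(it->it_data);                              /* evict any colliding entry */
    it->it_data = g_memdup(pdata, cache->page_size);  /* take a private copy */
    it->it_addr = addr;
    it->it_age  = ++cache->max_item_age;
}
```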
-
Peter Lieven authored
XBZRLE-encoded migration introduced an MRU page cache mechanism. Unfortunately, cached items were never freed in case of a collision in the page cache on cache_insert(). This led to out-of-memory conditions during XBZRLE migration if the page cache was small and there were a lot of collisions in the cache.
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Orit Wasserman authored
Instead of using cache_insert, do the update itself.
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Orit Wasserman authored
Signed-off-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
The indirection is useless now. Backends can open s->file directly.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
With this patch, the migration_file is not needed anymore.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Rate limiting is now simply a byte counter; clients call qemu_file_rate_limit() manually to determine whether they have to exit. So it is possible and simple to move the functionality to QEMUFile. This makes the remaining functionality of s->file redundant; in the next patch we can remove it and write directly to s->migration_file.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
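A rough sketch of the byte-counter idea described here, using a hypothetical simplified file type; the real QEMUFile fields and helpers are not shown in this log:

```c
#include <stddef.h>
#include <stdint.h>

typedef struct QEMUFileSketch {       /* hypothetical, simplified QEMUFile */
    int64_t bytes_xfer;               /* bytes written in the current period */
    int64_t xfer_limit;               /* 0 means "no limit" */
    int     last_error;
} QEMUFileSketch;

/* The writer bumps the counter on every put... */
static void file_account_sketch(QEMUFileSketch *f, size_t len)
{
    f->bytes_xfer += len;
}

/* ...and callers poll this to decide when to stop sending for this period. */
static int file_rate_limit_sketch(QEMUFileSketch *f)
{
    if (f->last_error) {
        return 1;                     /* an error also stops the sender */
    }
    return f->xfer_limit > 0 && f->bytes_xfer > f->xfer_limit;
}
```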
-
Paolo Bonzini authored
This patch extracts a few small changes from the next patch, which are unrelated to adding generic rate-limiting functionality to QEMUFile. Make migration_set_rate_limit a simple accessor, and use qemu_file_set_rate_limit consistently. Also fix a typo where INT_MAX should have been SIZE_MAX.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Prepare for when s->bytes_xfer will be removed.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Second, drop the file descriptor indirection, and write directly to the QEMUFile.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
As a start, use QEMUFile to store the destination and close it. qemu_get_fd gets a file descriptor that will be used by the write callbacks.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Force a flush when qemu_ftell is called. This simplifies the buffer magic (it also breaks qemu_ftell for input QEMUFiles, but we never use it).
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
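As an illustration of the "flush before telling" idea, a minimal sketch with a hypothetical simplified output-file type; the real QEMUFile buffer handling is more involved:

```c
#include <stdint.h>

typedef struct OutFileSketch {        /* hypothetical, simplified output file */
    int64_t pos;                      /* bytes already handed to the backend */
    uint8_t buf[4096];
    int     buf_index;                /* bytes still sitting in the buffer */
} OutFileSketch;

static void out_flush_sketch(OutFileSketch *f)
{
    /* In the real code this would call the backend's put_buffer callback. */
    f->pos += f->buf_index;
    f->buf_index = 0;
}

/* Flushing first means pos alone is the answer: no "pos + buffered bytes"
 * arithmetic, at the cost of ftell no longer working for input files. */
static int64_t out_ftell_sketch(OutFileSketch *f)
{
    out_flush_sketch(f);
    return f->pos;
}
```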
-
Paolo Bonzini authored
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
This is what exec_close does. Move this to the underlying QEMUFile.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
This is what fd_close does. Prepare for switching to a QEMUFile.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
There is no reason for outgoing exec migration to do popen manually anymore (the reason used to be that we needed the FILE* to make it non-blocking). Use qemu_popen_cmd.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
migration_put_buffer is never called if there has been an error.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
We will go around the loop exactly once after setting last_round. Eliminate the variable altogether.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Juan Quintela authored
This is consistent now that we have moved everything to migration.c.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Buffering was needed because blocking writes could take a long time and starve other threads seeking to grab the big QEMU mutex. Now that all writes (except within _complete callbacks) are done outside the big QEMU mutex, we do not need buffering at all.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Only the migration_bitmap_sync() call needs the iothread lock.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
This makes it possible to do blocking writes directly to the socket, with no buffer in the middle. For RAM, only the migration_bitmap_sync() call needs the iothread lock. For block migration, it is needed by the block layer (including bdrv_drain_all and dirty bitmap access), but because some code is shared between iterate and complete, all of mig_save_device_dirty is run with the lock taken. In the savevm case, the iterate callback runs within the big lock. This is annoying because it complicates the rules. Luckily we do not need to do anything about it: the RAM iterate callback does not need the iothread lock, and block migration never runs during savevm.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
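The locking rule for the RAM path can be pictured with a short, self-contained sketch; the helper names and the pthread mutex below are stand-ins for QEMU's big lock and migration helpers, not the actual ram_save_iterate code:

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the big QEMU lock and the migration helpers. */
static pthread_mutex_t iothread_lock = PTHREAD_MUTEX_INITIALIZER;

static void sync_dirty_bitmap_sketch(void) { /* needs the big lock */ }
static bool send_one_dirty_page_sketch(void) { return false; /* may block */ }
static bool rate_limit_exceeded_sketch(void) { return false; }

/* Sketch of the rule described above: hold the big lock only around the
 * dirty-bitmap sync; do the potentially blocking socket writes without it,
 * so vCPUs and the monitor are not starved by slow writes. */
static void ram_iterate_sketch(void)
{
    pthread_mutex_lock(&iothread_lock);
    sync_dirty_bitmap_sketch();
    pthread_mutex_unlock(&iothread_lock);

    while (!rate_limit_exceeded_sketch()) {
        if (!send_one_dirty_page_sketch()) {
            break;                    /* nothing left to send this round */
        }
    }
}
```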
-
Paolo Bonzini authored
This groups together the callbacks that later will have similar locking rules.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Some state is shared between the block migration code and its AIO callbacks. Once block migration runs outside the iothread, the block migration code and the AIO callbacks will be able to run concurrently. Protect the critical sections with a separate lock. Do the same for completed_sectors, which can be used from the monitor.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
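A small, self-contained sketch of the pattern: counters shared between the submitting code and a completion callback are guarded by their own mutex, independent of any global lock. The names and the pthread mutex are hypothetical stand-ins, not the real block-migration state:

```c
#include <pthread.h>

/* Hypothetical shared block-migration counters and their dedicated lock. */
static pthread_mutex_t blk_mig_lock = PTHREAD_MUTEX_INITIALIZER;
static int submitted;
static int read_done;

static void blk_submit_sketch(void)
{
    pthread_mutex_lock(&blk_mig_lock);
    submitted++;                      /* issued one more async read */
    pthread_mutex_unlock(&blk_mig_lock);
}

/* Completion callback, potentially running concurrently with the submitter. */
static void blk_complete_cb_sketch(void)
{
    pthread_mutex_lock(&blk_mig_lock);
    submitted--;
    read_done++;                      /* ready to be sent over the wire */
    pthread_mutex_unlock(&blk_mig_lock);
}
```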
-
Paolo Bonzini authored
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Some small changes that will simplify the positioning of lock/unlock primitives.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Perform final cleanup in a bottom half, and add joining the thread to the series of cleanup actions. migrate_fd_error remains for connection errors, but it doesn't need to clean up anything anymore.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Accessing s->state outside the big QEMU lock will simplify the locking/unlocking of the iothread lock a bit. We need to keep the lock in migrate_fd_error and migrate_fd_completed, however, because they call migrate_fd_cleanup.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Kazuya Saito authored
Signed-off-by: Kazuya Saito <saito.kazuya@jp.fujitsu.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Completion of migration is currently done with a "nested" loop that invokes buffered_flush: migrate_fd_completed is called by buffered_file_thread, which calls migrate_fd_cleanup, which calls buffered_close (via qemu_fclose), which flushes the buffer. Simplify this by reusing the buffered_flush call of buffered_file_thread. Then, if qemu_savevm_state_complete was called and the buffer is empty (including the QEMUFile buffer, for which we need the previous patch), we are done.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Paolo Bonzini authored
Always use qemu_file_get_error to detect errors, since that is how QEMUFile itself drops I/O after an error occurs. There is no need to propagate and check return values all the time. Also remove the "complete" member, since we know that it is set (via migrate_fd_cleanup) only when the state changes.
Reviewed-by: Orit Wasserman <owasserm@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
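To illustrate the "sticky error, check once" style described above, a self-contained sketch with hypothetical names; the real code checks qemu_file_get_error() on the QEMUFile instead of threading return codes through every caller:

```c
#include <stdio.h>

/* Hypothetical file wrapper with a sticky error, in the spirit of QEMUFile. */
typedef struct FileSketch {
    int last_error;                   /* first error seen; 0 means "no error" */
} FileSketch;

static void file_put_sketch(FileSketch *f, const void *buf, int len)
{
    if (f->last_error) {
        return;                       /* once an error is set, drop further I/O */
    }
    /* ... perform the write; on failure, record it in f->last_error ... */
    (void)buf; (void)len;
}

/* Callers don't propagate return values; they poll the sticky error later. */
static void migration_step_sketch(FileSketch *f)
{
    file_put_sketch(f, "page", 4);
    file_put_sketch(f, "page", 4);

    if (f->last_error) {
        fprintf(stderr, "migration failed\n");
    }
}
```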
-