- Oct 18, 2023
-
Juan Quintela authored
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Juan Quintela authored
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
Juan Quintela authored
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
-
- Oct 17, 2023
-
Juan Quintela authored
The new line was only printed when command options were used. When we used migration parameters and capabilities, it wasn't.
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231017172307.22858-2-quintela@redhat.com>
-
Juan Quintela authored
It is used everywhere else in C. Once there, make sure that we don't use the index outside of the for loop by declaring the variable inside the for statement.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Message-ID: <20230613145757.10131-15-quintela@redhat.com>
-
Juan Quintela authored
Doing a break just to do another break is confusing. Just return when we know we want to return.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Message-ID: <20230613145757.10131-14-quintela@redhat.com>
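
A minimal sketch of the pattern being removed (hypothetical code, not the actual QEMU hunk; more_work() and is_dirty() are made-up helpers):

    /* before: break out of the inner loop only to break again */
    static bool scan_v1(int n)
    {
        bool found = false;

        while (more_work()) {
            for (int i = 0; i < n; i++) {
                if (is_dirty(i)) {
                    found = true;
                    break;      /* leave the for ... */
                }
            }
            if (found) {
                break;          /* ... only to leave the while */
            }
        }
        return found;
    }

    /* after: return directly at the point where the result is known */
    static bool scan_v2(int n)
    {
        while (more_work()) {
            for (int i = 0; i < n; i++) {
                if (is_dirty(i)) {
                    return true;
                }
            }
        }
        return false;
    }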
-
Juan Quintela authored
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Message-ID: <20230613145757.10131-9-quintela@redhat.com>
-
Juan Quintela authored
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Message-ID: <20230613145757.10131-8-quintela@redhat.com>
-
Juan Quintela authored
So we don't have to access compression_counters from outside ram-compress.c.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Message-ID: <20230613145757.10131-7-quintela@redhat.com>
-
Juan Quintela authored
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Message-ID: <20230613145757.10131-6-quintela@redhat.com>
-
Juan Quintela authored
So give an error instead of just ignoring the other methods.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Lukas Straub <lukasstraub2@web.de>
Message-ID: <20230613145757.10131-4-quintela@redhat.com>
-
Fabiano Rosas authored
The function is currently called from two sites: one always gives it a NULL Error and the other always gives it a non-NULL Error. In the non-NULL case, all it does is trace the error and return. One of the callers already has tracing; add a tracepoint to the other and stop passing the error into the function.
Cc: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231012134343.23757-4-farosas@suse.de>
-
Fabiano Rosas authored
The preferred usage of the Error type is to always set both the return code and the error when a failure happens. As all code called from the send thread follows this pattern, we'll always have the return code and the error set at the same time. Aside from the convention, in this piece of code this must be the case, otherwise the if (ret != 0) would be exiting the thread without calling multifd_send_terminate_threads(), which is incorrect. Unify both paths to make it clear that both are taken when there's an error.
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231012134343.23757-3-farosas@suse.de>
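
A minimal sketch of the convention the message refers to; do_send() and write_all() are made-up helpers, while error_setg() is the usual QEMU error API. On failure the callee sets *errp and returns a negative value together, so callers can rely on either signal:

    static int do_send(void *buf, size_t len, Error **errp)
    {
        if (write_all(buf, len) < 0) {
            error_setg(errp, "multifd: write failed");
            return -1;  /* error and return code set at the same time */
        }
        return 0;
    }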
-
Fabiano Rosas authored
We're about to enable support for other transports in multifd, so remove direct references to sockets.
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231012134343.23757-2-farosas@suse.de>
-
Fabiano Rosas authored
We don't need to do this in two pieces. One single function makes it easier to grasp, especially since it removes the indirection on the return value handling.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-7-farosas@suse.de>
-
Fabiano Rosas authored
It makes a bit more sense to have the zero-page handling of xbzrle right where we save the zero page. Also invert the exit condition to remove one level of indentation, which makes the next patch easier to grasp.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-6-farosas@suse.de>
-
Fabiano Rosas authored
We don't need the QEMUFile when we're already passing the PageSearchStatus.
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-5-farosas@suse.de>
-
Fabiano Rosas authored
'rs' is not used in that function. It's a leftover from commit 9360447d ("ram: Use MigrationStats for statistics").
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-4-farosas@suse.de>
-
Nikolay Borisov authored
Extract the ramblock parsing code into a routine that operates on the sequence of headers from the stream and another that parses the individual ramblock. This makes ram_load_precopy() easier to comprehend.
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184604.32364-3-farosas@suse.de>
-
Elena Ufimtseva authored
Sometimes multifd sends just a sync packet with no pages (normal_num is 0). In this case the old value is preserved and accounted for, even though only packet_len is transferred. Reset it to 0 after sending and accounting for it.
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184358.97349-5-elena.ufimtseva@oracle.com>
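
A minimal sketch of the fix described above; p as a MultiFDSendParams pointer with the fields named in the message is an assumption about the surrounding send loop:

    /* clear the counter once it has been sent and accounted for, so a
     * later sync-only packet (normal_num == 0) doesn't re-account
     * stale payload bytes */
    p->next_packet_size = 0;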
-
Elena Ufimtseva authored
Previous commit cbec7eb7 "migration/multifd: Compute transferred bytes correctly" removed the accounting for packet_len in the non-rdma case, but next_packet_size only accounts for pages (normal_pages * PAGE_SIZE), not for the header packet that is being sent as iov[0]. The packet_len part should be added to account for the size of MultiFDPacket and the array of the offsets.
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184358.97349-4-elena.ufimtseva@oracle.com>
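
A minimal sketch of the corrected accounting, assuming the MultiFDSendParams fields named in the message above (the exact call site may differ):

    /* count both the payload (next_packet_size: the pages) and the
     * header (packet_len: the MultiFDPacket plus the offsets array) */
    stat64_add(&mig_stats.multifd_bytes,
               p->next_packet_size + p->packet_len);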
-
Elena Ufimtseva authored
In migration rate limiting, atomic operations are used to read the rate-limit variables and the transferred bytes, and they are expensive. Check first if rate_limit_max is equal to RATE_LIMIT_DISABLED and return false immediately if so. Note that with this patch we will also stop flushing, by not calling qemu_fflush() from migration_transferred_bytes(), if the migration rate is not exceeded. This should be fine, since the migration thread calls migration_update_counters() in its loop via migration_rate_limit(), which calls migration_transferred_bytes() and flushes there.
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011184358.97349-2-elena.ufimtseva@oracle.com>
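
A minimal sketch of the early exit, using the names from the message above; the shape of the final comparison is an assumption, not the exact QEMU code:

    bool migration_rate_exceeded(QEMUFile *f)
    {
        uint64_t rate_limit_max = migration_rate_get();

        if (rate_limit_max == RATE_LIMIT_DISABLED) {
            /* skip the expensive atomic reads (and the flush) */
            return false;
        }
        return migration_transferred_bytes(f) > rate_limit_max;
    }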
-
Juan Quintela authored
Change code that is:

    int ret;
    ...
    ret = foo();
    if (ret[ < 0]?) {

to:

    if (foo()[ < 0]) {

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-14-quintela@redhat.com>
-
Juan Quintela authored
Declare all variables that are only used inside a for loop inside the for statement. This makes it clear that they are not used outside of the for loop.
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-13-quintela@redhat.com>
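
A minimal sketch of the transformation (hypothetical code, not an actual hunk from the patch; process_block() is a made-up helper):

    /* before: i leaks into the enclosing scope */
    int i;
    for (i = 0; i < nb_blocks; i++) {
        process_block(i);
    }

    /* after: i is visibly confined to the loop */
    for (int i = 0; i < nb_blocks; i++) {
        process_block(i);
    }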
-
Juan Quintela authored
Once there, all the uses are local to the for loop, so declare the variable inside the for statement.
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-12-quintela@redhat.com>
-
Juan Quintela authored
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-11-quintela@redhat.com>
-
Juan Quintela authored
Functions are long enough even without this.
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-10-quintela@redhat.com>
-
Juan Quintela authored
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-9-quintela@redhat.com>
-
Juan Quintela authored
The only user was rdma, and its use is gone.
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-8-quintela@redhat.com>
-
Juan Quintela authored
The only user of ram_control_save_page() and the save_page() hook was rdma. Just move the function to rdma.c and rename it to rdma_control_save_page().
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-7-quintela@redhat.com>
-
Juan Quintela authored
There is only one flag it is ever called with: RAM_CONTROL_BLOCK_REG.
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-6-quintela@redhat.com>
-
Juan Quintela authored
Instead of going through ram_control_load_hook(), call qemu_rdma_registration_handle() directly.
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-5-quintela@redhat.com>
-
Juan Quintela authored
Once there:
- Remove unused data parameter
- unfold it in its callers
- change all callers to call qemu_rdma_registration_stop()
- We need to call QIO_CHANNEL_RDMA() after we check for migrate_rdma()
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-4-quintela@redhat.com>
-
Juan Quintela authored
Once there:
- Remove unused data parameter
- unfold it in its callers
- change all callers to call qemu_rdma_registration_start()
- We need to call QIO_CHANNEL_RDMA() after we check for migrate_rdma()
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-3-quintela@redhat.com>
-
Juan Quintela authored
Helper to say if we are doing a migration over rdma.
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011203527.9061-2-quintela@redhat.com>
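
A minimal sketch of what such a helper can look like, assuming MigrationState carries a flag set when the rdma transport is started (the field name is an assumption):

    bool migrate_rdma(void)
    {
        MigrationState *s = migrate_get_current();

        return s->rdma_migration;
    }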
-
Juan Quintela authored
RDMA was having trouble because migrate_multifd_flush_after_each_section() can only be true or false, but we don't want to send any flush when we are not in a multifd migration.
CC: Fabiano Rosas <farosas@suse.de>
Fixes: 294e5a40 ("multifd: Only flush once each full round of memory")
Reported-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231011205548.10571-2-quintela@redhat.com>
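
A minimal sketch of the kind of guard the message implies, with migrate_multifd() gating the flush; the exact call sites in ram.c may differ:

    /* only emit a multifd flush when multifd is actually in use;
     * for rdma and plain precopy there is nothing to flush */
    if (migrate_multifd() && !migrate_multifd_flush_after_each_section()) {
        qemu_put_be64(f, RAM_SAVE_FLAG_MULTIFD_FLUSH);
    }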
-
Fiona Ebner authored
This is intended to be a semantic revert of commit 9b095037 ("migration: run setup callbacks out of big lock"). There have been so many changes since that commit (e.g. a new setup callback dirty_bitmap_save_setup() that also needs to be adapted now), it's easier to do the revert manually.

For snapshots, the bdrv_writev_vmstate() function is used during setup (in QIOChannelBlock backing the QEMUFile), but not holding the BQL while calling it could lead to an assertion failure. To understand how, first note the following:

1. Generated coroutine wrappers for block layer functions spawn the coroutine and use AIO_WAIT_WHILE()/aio_poll() to wait for it.
2. If the host OS switches threads at an inconvenient time, it can happen that a bottom half scheduled for the main thread's AioContext is executed as part of a vCPU thread's aio_poll().

An example leading to the assertion failure is as follows:

main thread:
1. A snapshot-save QMP command gets issued.
2. snapshot_save_job_bh() is scheduled.

vCPU thread:
3. aio_poll() for the main thread's AioContext is called (e.g. when the guest writes to a pflash device, as part of blk_pwrite which is a generated coroutine wrapper).
4. snapshot_save_job_bh() is executed as part of aio_poll().
3. qemu_savevm_state() is called.
4. qemu_mutex_unlock_iothread() is called. Now qemu_get_current_aio_context() returns 0x0.
5. bdrv_writev_vmstate() is executed during the usual savevm setup via qemu_fflush(). But this function is a generated coroutine wrapper, so it uses AIO_WAIT_WHILE. There, the assertion assert(qemu_get_current_aio_context() == qemu_get_aio_context()); will fail.

To fix it, ensure that the BQL is held during setup. While it would only be needed for snapshots, adapting migration too avoids additional logic for conditional locking/unlocking in the setup callbacks. Writing the header could (in theory) also trigger qemu_fflush() and thus bdrv_writev_vmstate(), so the locked section also covers the qemu_savevm_state_header() call, even for migration, for consistency.

The section around multifd_send_sync_main() needs to be unlocked to avoid a deadlock. In particular, the multifd_save_setup() function calls socket_send_channel_create() using multifd_new_send_channel_async() as a callback and then waits for the callback to signal via the channels_ready semaphore. The connection happens via qio_task_run_in_thread(), but the callback is only executed via qio_task_thread_result(), which is scheduled for the main event loop. Without unlocking the section, the main thread would never get to process the task result and run the callback, meaning there would be no signal via the channels_ready semaphore.

The comment in ram_init_bitmaps() was introduced by 49877834 ("migration: fix incorrect memory_global_dirty_log_start outside BQL") and is removed, because it referred to the qemu_mutex_lock_iothread() call.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231013105839.415989-1-f.ebner@proxmox.com>
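
A minimal sketch of the unlocked section described above, as it might look in the ram setup path (the exact placement is an assumption):

    /* setup now runs with the BQL held; drop it only around the
     * multifd sync, so the main loop can run the channel callbacks
     * that post the channels_ready semaphore */
    qemu_mutex_unlock_iothread();
    ret = multifd_send_sync_main(f);
    qemu_mutex_lock_iothread();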
-
Nikolay Borisov authored
Make the migration json writer part of the MigrationState struct, allowing the 'configuration' object to be serialized to json. This will facilitate the parsing of the 'configuration' object in the next patch, which fixes analyze-migration.py for arm.
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231009184326.15777-2-farosas@suse.de>
-
Dmitry Frolov authored
qemu_ram_block_from_host() may return NULL, which will be dereferenced without a check. The return value of this function is usually checked. Found by Linux Verification Center (linuxtesting.org) with SVACE.
Signed-off-by: Dmitry Frolov <frolov@swemel.ru>
Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231010104851.802947-1-frolov@swemel.ru>
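
A minimal sketch of the missing guard; the surrounding call site (host_addr and the error handling) is hypothetical:

    ram_addr_t offset;
    RAMBlock *block = qemu_ram_block_from_host(host_addr, false, &offset);

    if (!block) {
        error_report("ram block for host address %p not found", host_addr);
        return -1;
    }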
-
Peter Xu authored
Migration bandwidth is a very important value for live migration, because it is one of the major factors in the decision on when to switch over to the destination in a precopy process.

This value is currently estimated by QEMU during the whole live migration process by monitoring how fast we were sending the data. This would be the most accurate bandwidth in the ideal world, where we're always feeding unlimited data to the migration channel and it is only limited by the bandwidth that is available.

However in reality it may be very different, e.g., over a 10Gbps network we can see query-migrate showing a migration bandwidth of only a few tens of MB/s just because there are plenty of other things the migration thread might be doing. For example, the migration thread can be busy scanning zero pages, or it can be fetching the dirty bitmap from other external dirty sources (like vhost or KVM). It means we may not be pushing data as much as possible to the migration channel, so the bandwidth estimated from "how much data we sent in the channel" can be dramatically inaccurate sometimes.

With that, the decision to switch over will be affected, by assuming that we may not be able to switch over at all with such a low bandwidth, but in reality we can. The migration may not even converge at all with the downtime specified, with that wrong estimation of bandwidth, keeping iterations forever with a low estimation of bandwidth.

The issue is that QEMU itself may not be able to avoid those uncertainties in measuring the real "available migration bandwidth". At least not something I can think of so far.

One way to fix this is when the user is fully aware of the available bandwidth, then we can allow the user to help by providing an accurate value. For example, if the user has a dedicated channel of 10Gbps for migration for this specific VM, the user can specify this bandwidth so QEMU can always do the calculation based on this fact, trusting the user as long as it is specified. It may not be the exact bandwidth when switching over (in which case QEMU will push migration data as fast as possible), but much better than QEMU trying to wildly guess, especially when it is very wrong.

A new parameter "avail-switchover-bandwidth" is introduced just for this. So when the user specifies this parameter, instead of trusting the estimated value from QEMU itself (based on the QEMUFile send speed), it trusts the user more by using this value to decide when to switch over, assuming that we'll have such bandwidth available then.

Note that specifying this value will not throttle the bandwidth for switchover yet, so QEMU will always use the full bandwidth possible for sending switchover data, assuming that should always be the most important way to use the network at that time.

This can resolve issues like an unconverging migration, which is caused by a hilariously low "migration bandwidth" detected for whatever reason.
Reported-by: Zhiyi Guo <zhguo@redhat.com>
Reviewed-by: Joao Martins <joao.m.martins@oracle.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Message-ID: <20231010221922.40638-1-peterx@redhat.com>
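
A minimal sketch of how the switchover decision can consume the parameter, following the description above; the exact expression used when updating the counters is an assumption (estimated_bandwidth stands for QEMU's own estimate):

    uint64_t avail_bw;

    if (s->parameters.avail_switchover_bandwidth) {
        /* trust the user-provided value over the estimate */
        avail_bw = s->parameters.avail_switchover_bandwidth;
    } else {
        avail_bw = estimated_bandwidth;
    }
    /* bytes we can still move within the allowed downtime (ms) */
    s->threshold_size = avail_bw * s->parameters.downtime_limit / 1000;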
-