- Jan 26, 2024
-
-
Ari Sundholm authored
There is a bug in the blklogwrites driver pertaining to logging "write zeroes" operations, causing log corruption. This can be easily observed by setting detect-zeroes to something other than "off" for the driver.

The issue is caused by a concurrency bug pertaining to the fact that "write zeroes" operations have to be logged in two parts: first the log entry metadata, then the zeroed-out region. While the log entry metadata is being written by bdrv_co_pwritev(), another operation may begin in the meanwhile and modify the state of the blklogwrites driver. This is as intended by the coroutine-driven I/O model in QEMU, of course. Unfortunately, this specific scenario is mishandled.

A short example:
1. Initially, in the current operation (#1), the current log sector number in the driver state is only incremented by the number of sectors taken by the log entry metadata, after which the log entry metadata is written. The current operation yields.
2. Another operation (#2) may start while the log entry metadata is being written. It uses the current log position as the start offset for its log entry. This is in the sector right after the operation #1 log entry metadata, which is bad!
3. After bdrv_co_pwritev() returns (#1), the current log sector number is reread from the driver state in order to find out the start offset for bdrv_co_pwrite_zeroes(). This is an obvious blunder, as the offset will be the sector right after the (misplaced) operation #2 log entry, which means that the zeroed-out region begins at the wrong offset.
4. As a result of the above, the log is corrupt.

Fix this by only reading the driver metadata once, computing the offsets and sizes in one go (including the optional zeroed-out region) and setting the log sector number to the appropriate value for the next operation in line.

Signed-off-by: Ari Sundholm <ari@tuxera.com>
Cc: qemu-stable@nongnu.org
Message-ID: <20240109184646.1128475-1-megari@gmx.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a9c8ea95470c27a8a02062b67f9fa6940e828ab6)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
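A minimal standalone sketch of the fixed pattern (names and types are simplified stand-ins, not the actual blklogwrites code): read the shared state once and reserve all log space before any write that can yield the coroutine.

    #include <assert.h>
    #include <stdint.h>

    /* Simplified stand-ins for the driver state shared by all in-flight
     * operations; not QEMU's actual structures. */
    struct LogState {
        uint64_t cur_log_sector;
    };

    struct LogReservation {
        uint64_t entry_offset;   /* where the log entry metadata goes */
        uint64_t zero_offset;    /* where the zeroed-out region goes */
    };

    /* Read the shared state once and reserve everything this operation
     * needs BEFORE the first write that can yield the coroutine, so any
     * operation starting while our I/O is in flight sees a consistent
     * cur_log_sector. */
    static void log_reserve(struct LogState *s, uint64_t entry_sectors,
                            uint64_t zero_sectors, uint64_t sector_size,
                            struct LogReservation *r)
    {
        uint64_t start = s->cur_log_sector;

        r->entry_offset = start * sector_size;
        r->zero_offset = (start + entry_sectors) * sector_size;
        /* Publish the value for the next operation in line. */
        s->cur_log_sector = start + entry_sectors + zero_sectors;
    }

    int main(void)
    {
        struct LogState s = { .cur_log_sector = 100 };
        struct LogReservation op1, op2;

        log_reserve(&s, 1, 8, 512, &op1);   /* "write zeroes" operation */
        log_reserve(&s, 1, 0, 512, &op2);   /* concurrent operation */
        assert(op2.entry_offset == (100 + 1 + 8) * 512);
        return 0;
    }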
-
- Jan 25, 2024
-
-
Fiona Ebner authored
Using fleecing backup like in [0] on a qcow2 image (with metadata preallocation) can lead to the following assertion failure:

> bdrv_co_do_block_status: Assertion `!(ret & BDRV_BLOCK_ZERO)' failed.

In the reproducer [0], it happens because the BDRV_BLOCK_RECURSE flag will be set by the qcow2 driver, so the caller will recursively check the file child. Then the BDRV_BLOCK_ZERO flag will be set too. Later up the call chain, in bdrv_co_do_block_status() for the snapshot-access driver, the assertion failure will happen, because both flags are set.

To fix it, clear the recurse flag after the recursive check was done.

In detail:

> #0 qcow2_co_block_status

Returns 0x45 = BDRV_BLOCK_RECURSE | BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID.

> #1 bdrv_co_do_block_status

Because of the data flag, bdrv_co_do_block_status() will now also set BDRV_BLOCK_ALLOCATED. Because of the recurse flag, bdrv_co_do_block_status() for the bdrv_file child will be called, which returns 0x16 = BDRV_BLOCK_ALLOCATED | BDRV_BLOCK_OFFSET_VALID | BDRV_BLOCK_ZERO. Now the return value inherits the zero flag.

Returns 0x57 = BDRV_BLOCK_RECURSE | BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | BDRV_BLOCK_ALLOCATED | BDRV_BLOCK_ZERO.

> #2 bdrv_co_common_block_status_above
> #3 bdrv_co_block_status_above
> #4 bdrv_co_block_status
> #5 cbw_co_snapshot_block_status
> #6 bdrv_co_snapshot_block_status
> #7 snapshot_access_co_block_status
> #8 bdrv_co_do_block_status

Return value is propagated all the way up to here, where the assertion failure happens, because BDRV_BLOCK_RECURSE and BDRV_BLOCK_ZERO are both set.

> #9 bdrv_co_common_block_status_above
> #10 bdrv_co_block_status_above
> #11 block_copy_block_status
> #12 block_copy_dirty_clusters
> #13 block_copy_common
> #14 block_copy_async_co_entry
> #15 coroutine_trampoline

[0]:

> #!/bin/bash
> rm /tmp/disk.qcow2
> ./qemu-img create /tmp/disk.qcow2 -o preallocation=metadata -f qcow2 1G
> ./qemu-img create /tmp/fleecing.qcow2 -f qcow2 1G
> ./qemu-img create /tmp/backup.qcow2 -f qcow2 1G
> ./qemu-system-x86_64 --qmp stdio \
> --blockdev qcow2,node-name=node0,file.driver=file,file.filename=/tmp/disk.qcow2 \
> --blockdev qcow2,node-name=node1,file.driver=file,file.filename=/tmp/fleecing.qcow2 \
> --blockdev qcow2,node-name=node2,file.driver=file,file.filename=/tmp/backup.qcow2 \
> <<EOF
> {"execute": "qmp_capabilities"}
> {"execute": "blockdev-add", "arguments": { "driver": "copy-before-write", "file": "node0", "target": "node1", "node-name": "node3" } }
> {"execute": "blockdev-add", "arguments": { "driver": "snapshot-access", "file": "node3", "node-name": "snap0" } }
> {"execute": "blockdev-backup", "arguments": { "device": "snap0", "target": "node1", "sync": "full", "job-id": "backup0" } }
> EOF

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-id: 20240116154839.401030-1-f.ebner@proxmox.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 8a9be7992426c8920d4178e7dca59306a18c7a3a)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
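To make the flag arithmetic concrete, a small self-contained sketch; the flag values are copied from the hex numbers quoted above (QEMU defines them in its block headers).

    #include <assert.h>

    /* Values copied to match the commit message's hex arithmetic. */
    #define BDRV_BLOCK_DATA         0x01
    #define BDRV_BLOCK_ZERO         0x02
    #define BDRV_BLOCK_OFFSET_VALID 0x04
    #define BDRV_BLOCK_ALLOCATED    0x10
    #define BDRV_BLOCK_RECURSE      0x40

    /* Sketch of the fix: after the recursive check of the file child
     * has contributed its ZERO information, clear RECURSE so callers
     * further up never see RECURSE and ZERO set at the same time. */
    static int merge_child_status(int ret, int child_ret)
    {
        if (child_ret & BDRV_BLOCK_ZERO) {
            ret |= BDRV_BLOCK_ZERO;
        }
        ret &= ~BDRV_BLOCK_RECURSE;  /* the recursion is done */
        return ret;
    }

    int main(void)
    {
        int ret = 0x45;              /* qcow2_co_block_status() result */
        ret |= BDRV_BLOCK_ALLOCATED; /* set because of BDRV_BLOCK_DATA */
        ret = merge_child_status(ret, 0x16); /* file child's result */
        assert(ret == 0x17);         /* would be 0x57 before the fix */
        return 0;
    }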
-
- Dec 22, 2023
-
-
Kevin Wolf authored
bdrv_is_read_only() only checks if the node is configured to be read-only eventually, but even if it returns false, writing to the node may not be permitted at the moment (because it's inactive). bdrv_is_writable() checks that the node can be written to right now, and this is what the snapshot operations really need.

Change bdrv_can_snapshot() to use bdrv_is_writable() to fix crashes like the following:

$ ./qemu-system-x86_64 -hda /tmp/test.qcow2 -loadvm foo -incoming defer
qemu-system-x86_64: ../block/io.c:1990: int bdrv_co_write_req_prepare(BdrvChild *, int64_t, int64_t, BdrvTrackedRequest *, int): Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.

The resulting error message after this patch isn't perfect yet, but at least it doesn't crash any more:

$ ./qemu-system-x86_64 -hda /tmp/test.qcow2 -loadvm foo -incoming defer
qemu-system-x86_64: Device 'ide0-hd0' is writable but does not support snapshots

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231201142520.32255-2-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit d3007d348adaaf04ee8b099a475282034a662414)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
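A simplified sketch of the distinction, with stand-in types (the real checks live in QEMU's block layer):

    #include <stdbool.h>

    /* Hypothetical, simplified node state. */
    struct BlockNode {
        bool read_only;  /* configured read-only */
        bool inactive;   /* BDRV_O_INACTIVE, e.g. while "-incoming" */
    };

    /* bdrv_is_read_only() analogue: configuration only. */
    static bool node_is_read_only(const struct BlockNode *bs)
    {
        return bs->read_only;
    }

    /* bdrv_is_writable() analogue: can we write right now? */
    static bool node_is_writable(const struct BlockNode *bs)
    {
        return !bs->read_only && !bs->inactive;
    }

    /* The fix: snapshotting writes to the image, so check writability,
     * not just the read-only configuration. */
    static bool node_can_snapshot(const struct BlockNode *bs)
    {
        return node_is_writable(bs); /* plus driver support, elided */
    }

    int main(void)
    {
        struct BlockNode bs = { .read_only = false, .inactive = true };
        /* Not read-only by configuration, yet not snapshottable now. */
        return !node_is_read_only(&bs) && !node_can_snapshot(&bs) ? 0 : 1;
    }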
-
- Nov 28, 2023
-
-
Kevin Wolf authored
The vhost-user-blk export implements AioContext switches in its drain implementation. This means that on drain_begin, it detaches the server from its AioContext and on drain_end, attaches it again and schedules the server->co_trip coroutine in the updated AioContext.

However, nothing guarantees that server->co_trip is even safe to be scheduled. Not only is it unclear that the coroutine is actually in a state where it can be reentered externally without causing problems, but with two consecutive drains, it is possible that the scheduled coroutine didn't have a chance yet to run and trying to schedule an already scheduled coroutine a second time crashes with an assertion failure.

Following the model of NBD, this commit makes the vhost-user-blk export shut down server->co_trip during drain so that resuming the export means creating and scheduling a new coroutine, which is always safe.

There is one exception: if the drain call didn't poll (for example, this happens in the context of bdrv_graph_wrlock()), then the coroutine didn't have a chance to shut down. However, in this case the AioContext can't have changed; changing the AioContext always involves a polling drain. So in this case we can simply assert that the AioContext is unchanged and just leave the coroutine running or wake it up if it has yielded to wait for the AioContext to be attached again.

Fixes: e1054cd4
Fixes: https://issues.redhat.com/browse/RHEL-1708
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231127115755.22846-1-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
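A toy model of the scheduling hazard and the fix; CO_SCHEDULED stands in for a coroutine sitting in aio_co_schedule()'s queue, and the assertion models QEMU's crash on a double schedule.

    #include <assert.h>

    enum CoState { CO_RUNNING, CO_SCHEDULED, CO_DEAD };

    struct Server { enum CoState co_trip; };

    static void schedule_co(enum CoState *co)
    {
        assert(*co != CO_SCHEDULED);  /* double-schedule == crash */
        *co = CO_SCHEDULED;
    }

    /* Buggy shape: drain_end reschedules the existing coroutine. Two
     * back-to-back drains, without the coroutine getting to run in
     * between, schedule it twice. */
    static void drain_end_old(struct Server *s)
    {
        schedule_co(&s->co_trip);
    }

    /* Fixed shape, following the NBD model: drain shuts the coroutine
     * down; resuming creates a fresh one, always safe to schedule. */
    static void drain_begin_fixed(struct Server *s)
    {
        s->co_trip = CO_DEAD;  /* coroutine observes drain and exits */
    }

    static void drain_end_fixed(struct Server *s)
    {
        if (s->co_trip == CO_DEAD) {
            s->co_trip = CO_RUNNING;  /* brand-new coroutine */
        }
    }

    int main(void)
    {
        struct Server s = { CO_RUNNING };
        drain_begin_fixed(&s); drain_end_fixed(&s);
        drain_begin_fixed(&s); drain_end_fixed(&s);  /* no crash */
        (void)drain_end_old;
        return 0;
    }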
-
Fam Zheng authored
If the text description file is larger than DESC_SIZE, we force the last byte in the buffer to be 0 and write it out, truncating the description. This results in corruption. Try to allocate a big enough buffer in this case instead.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1923
Signed-off-by: Fam Zheng <fam@euphon.net>
Message-ID: <20231124115654.3239137-1-fam@euphon.net>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
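A sketch of the buffer handling, with an illustrative DESC_SIZE (the real value and the surrounding I/O live in block/vmdk.c):

    #include <stdlib.h>
    #include <string.h>

    #define DESC_SIZE (20 * 512)  /* illustrative value only */

    /* Instead of forcing buf[DESC_SIZE - 1] = 0 and writing out a
     * truncated description, allocate a buffer big enough for the
     * whole text (rounded up to a 512-byte sector boundary). */
    static char *desc_buffer(const char *desc, size_t *buf_len)
    {
        size_t need = strlen(desc) + 1;  /* keep the trailing NUL */
        size_t len = need <= DESC_SIZE
                   ? DESC_SIZE
                   : (need + 511) & ~(size_t)511;
        char *buf = calloc(1, len);

        if (buf) {
            memcpy(buf, desc, need);
            *buf_len = len;
        }
        return buf;
    }

    int main(void)
    {
        size_t len = 0;
        char *buf = desc_buffer("a short description", &len);
        free(buf);
        return len == DESC_SIZE ? 0 : 1;
    }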
-
- Nov 21, 2023
-
-
Kevin Wolf authored
In stream_prepare(), we need to temporarily drop the AioContext lock that job_prepare_locked() took for us while calling the graph write lock functions which can poll. All block nodes related to this block job are in the same AioContext, so we can pass any of them to bdrv_graph_wrlock()/bdrv_graph_wrunlock(). Unfortunately, the one that we picked is base, which can be NULL - and in this case the AioContext lock is not released and deadlocks can occur.

Fix this by passing s->target_bs, which is never NULL.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-4-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
bdrv_graph_wrunlock() calls aio_poll(), which may run callbacks that have a nested event loop. Nested event loops can depend on other iothreads making progress, so in order to allow them to make progress it must not hold the AioContext lock of another thread while calling aio_poll().

This introduces a @bs parameter to bdrv_graph_wrunlock() whose AioContext is temporarily dropped (which matches bdrv_graph_wrlock()), and a bdrv_graph_wrunlock_ctx() that can be used if the BlockDriverState doesn't necessarily exist any more when unlocking.

This also requires a change to bdrv_schedule_unref(), which was relying on the incorrectly taken lock. It needs to take the lock itself now. While this is a separate bug, it can't be fixed in a separate patch because otherwise the intermediate state would either deadlock or try to release a lock that we don't even hold.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-3-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[kwolf: Fixed up bdrv_schedule_unref()]
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
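The underlying pattern as a generic sketch, with a pthread mutex standing in for the AioContext lock:

    #include <pthread.h>

    /* Drop a foreign lock around a nested poll so other threads can
     * make progress, then re-take it to restore the caller's state. */
    static void wrunlock_sketch(pthread_mutex_t *ctx_lock,
                                void (*poll_fn)(void))
    {
        if (ctx_lock) {
            pthread_mutex_unlock(ctx_lock);
        }
        poll_fn();  /* may run callbacks with nested event loops */
        if (ctx_lock) {
            pthread_mutex_lock(ctx_lock);
        }
    }

    static void poll_once(void) { /* stand-in for aio_poll() */ }

    int main(void)
    {
        static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

        pthread_mutex_lock(&m);
        wrunlock_sketch(&m, poll_once);
        pthread_mutex_unlock(&m);
        return 0;
    }

Per the commit message, the @bs variant derives the AioContext from the node, while bdrv_graph_wrunlock_ctx() takes the context directly.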
-
Kevin Wolf authored
While not all callers of blk_remove_bs() are correct in this respect, the assumption in the function is that callers hold the AioContext lock of the BlockBackend (this is required by the drain calls in it). In order to avoid deadlock in the nested event loop, bdrv_graph_wrlock() has then to be called with the root BlockDriverState as its parameter instead of NULL, so that this AioContext lock is temporarily dropped. Fixes: https://issues.redhat.com/browse/RHEL-1761 Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231115172012.112727-2-kwolf@redhat.com> Reviewed-by:
Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
- Nov 13, 2023
-
-
Thomas Huth authored
No need to declare a new variable in the inner code block here, we can re-use the "ret" variable that has been declared at the beginning of the function. With this change, the code can now be successfully compiled with -Wshadow=local again.

Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20231023175038.111607-1-thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
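The shape of the warning and of the fix, as a small sketch (not the actual hunk):

    static int do_step(void) { return 1; }

    static int frobnicate(void)
    {
        int ret = 0;

        {
            /* Before: "int ret = do_step();" here declared a second
             * "ret", shadowing the outer one and triggering
             * -Wshadow=local. After: re-use the outer variable. */
            ret = do_step();
            if (ret < 0) {
                return ret;
            }
        }

        return ret;
    }

    int main(void) { return frobnicate() == 1 ? 0 : 1; }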
-
- Nov 08, 2023
-
-
Kevin Wolf authored
Almost all functions that access bs->file already take the graph lock now. Add locking to the remaining users and finally annotate the struct field itself as protected by the graph lock. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-25-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
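For readers unfamiliar with the annotation style used throughout this series, a simplified compile-only sketch; QEMU's real macro lives in include/block/graph-lock.h and expands to a clang thread-safety attribute.

    /* Stub so the sketch compiles standalone; the real macro tells
     * clang's -Wthread-safety that the graph reader lock is required. */
    #define GRAPH_RDLOCK

    typedef struct BlockDriverState BlockDriverState;
    struct BlockDriverState {
        BlockDriverState *file;  /* protected by the graph lock */
    };

    /* Every accessor of bs->file carries the annotation, so callers
     * that do not hold the reader lock are flagged at compile time. */
    static BlockDriverState * GRAPH_RDLOCK
    file_child_sketch(BlockDriverState *bs)
    {
        return bs->file;
    }

    int main(void)
    {
        struct BlockDriverState bs = { 0 };
        return file_child_sketch(&bs) == 0 ? 0 : 1;
    }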
-
Kevin Wolf authored
Most implementations of .bdrv_open first open their file child (which is an operation that internally takes the write lock and therefore we shouldn't hold the graph lock while calling it), and afterwards perform many operations that require holding the graph lock, e.g. for accessing bs->file. This changes block drivers that follow this pattern to take the graph lock after opening the child node.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-24-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This updates the vhdx code to add GRAPH_RDLOCK annotations for all places that read bs->file. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-23-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This updates the qcow2 code to add GRAPH_RDLOCK annotations for all places that read bs->file. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-22-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK to some driver callbacks that are already called with the graph lock held, and which will need the annotation because they access bs->file, but don't have it yet. This also covers a few callbacks that were not marked GRAPH_RDLOCK before, but where updating BlockDriver is trivially possible. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-21-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
bdrv_change_backing_file() is called both inside and outside coroutine context. This makes it difficult for it to take the graph lock internally. It also means that driver implementations need to be able to run outside of coroutines, too. Switch it to the usual model with a coroutine based implementation and a co_wrapper instead. The new function is marked GRAPH_RDLOCK. As the co_wrapper now runs the function in the AioContext of the node (as it should always have done), this is not GLOBAL_STATE_CODE() any more. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-20-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This is either bdrv_co_preadv() or bdrv_co_pwritev() which both need to have the graph locked. Annotate the function pointer accordingly and add locking to its callers. This shouldn't actually have resulted in a bug because the graph lock is already held by blkverify_co_prwv(), which waits for the coroutines to terminate. Annotate with GRAPH_RDLOCK as well to make this clearer. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-19-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
Almost all functions that access bs->backing already take the graph lock now. Add locking to the remaining users and finally annotate the struct field itself as protected by the graph lock. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-18-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
- Nov 07, 2023
-
-
Kevin Wolf authored
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_replace_node(). Its callers may already want to hold the graph lock and so wouldn't be able to call functions that take it internally. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-17-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
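The changed locking contract as a generic sketch, with a pthread rwlock standing in for the graph lock:

    #include <pthread.h>

    static pthread_rwlock_t graph_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* After the change, the callee assumes the writer lock is already
     * held by its caller instead of taking it itself... */
    static void replace_node_locked(void)
    {
        /* ...graph mutation happens here, under the caller's lock. */
    }

    /* ...so the caller can group several graph updates into a single
     * write-locked critical section. */
    static void caller(void)
    {
        pthread_rwlock_wrlock(&graph_lock);
        replace_node_locked();
        /* other graph manipulation under the same lock */
        pthread_rwlock_unlock(&graph_lock);
    }

    int main(void)
    {
        caller();
        return 0;
    }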
-
Kevin Wolf authored
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_set_backing_hd_drained(). Basically everything in the function needs the lock and its callers may already want to hold the graph lock and so wouldn't be able to call functions that take it internally.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-14-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_cow_child() need to hold a reader lock for the graph because it accesses bs->backing. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-13-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_chain_contains() need to hold a reader lock for the graph because it calls bdrv_filter_or_cow_bs(), which accesses bs->file/backing. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-11-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_(un)freeze_backing_chain() need to hold a reader lock for the graph because it calls bdrv_filter_or_cow_child(), which accesses bs->file/backing. Use the opportunity to make bdrv_is_backing_chain_frozen() static, it has no external callers. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-10-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_skip_filters() need to hold a reader lock for the graph because it calls bdrv_filter_child(), which accesses bs->file/backing. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-9-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_skip_implicit_filters() need to hold a reader lock for the graph because it calls bdrv_filter_child(), which accesses bs->file/backing. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-8-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_filter_or_cow_bs() need to hold a reader lock for the graph because it calls bdrv_filter_or_cow_child(), which accesses bs->file/backing. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-7-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
Instead of taking the writer lock internally, require callers to already hold it when calling block_job_add_bdrv(). These callers will typically already hold the graph lock once the locking work is completed, which means that they can't call functions that take it internally. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-6-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_root_attach_child(). These callers will typically already hold the graph lock once the locking work is completed, which means that they can't call functions that take it internally. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-5-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_filter_bs() need to hold a reader lock for the graph because it calls bdrv_filter_child(), which accesses bs->file/backing. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-4-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_has_zero_init() need to hold a reader lock for the graph because it calls bdrv_filter_bs(), which accesses bs->file/backing. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-3-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Kevin Wolf authored
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_probe_blocksizes() need to hold a reader lock for the graph because it calls bdrv_filter_bs(), which accesses bs->file/backing. Signed-off-by:
Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-2-kwolf@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
- Nov 06, 2023
-
-
Naohiro Aota authored
raw_co_zone_append() sets "s->offset", where "s" is the "BDRVRawState". This pointer is used later in raw_co_prw() to save the block address where the data is written. When multiple I/Os are in flight at the same time, a later I/O's raw_co_zone_append() call overwrites a former I/O's offset address before raw_co_prw() completes. As a result, the former zone append I/O returns the initial value (= the start address of the zone being written), instead of the proper address.

Fix the issue by passing the offset pointer to raw_co_prw() instead of passing it through s->offset. Also, remove "offset" from BDRVRawState, as it is no longer used.

Fixes: 4751d09a ("block: introduce zone append write for zoned devices")
Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
Message-Id: <20231030073853.2601162-1-naohiro.aota@wdc.com>
Reviewed-by: Sam Li <faithilikerun@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
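A minimal sketch of the per-request pointer fix; the names are stand-ins for the file-posix code.

    #include <assert.h>
    #include <stdint.h>

    /* Each request carries its own destination for the resulting block
     * address instead of sharing one slot in the driver state, so
     * concurrent zone appends cannot clobber each other. */
    struct ZoneAppendReq {
        uint64_t *offset;  /* where this request's address is stored */
    };

    static void complete_zone_append(struct ZoneAppendReq *req,
                                     uint64_t written_at)
    {
        *req->offset = written_at;  /* no shared s->offset to race on */
    }

    int main(void)
    {
        uint64_t off1 = 1, off2 = 1;
        struct ZoneAppendReq r1 = { &off1 }, r2 = { &off2 };

        /* Two in-flight appends completing in any order stay apart. */
        complete_zone_append(&r2, 4096);
        complete_zone_append(&r1, 0);
        assert(off1 == 0 && off2 == 4096);
        return 0;
    }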
-
Sam Li authored
When a zoned request fails, only the write pointer (wp) of the target zones needs to be updated, so that the in-flight writes on the other zones are not disrupted. The wp is updated successfully after the request completes. Fix the callers to pass the right offset and nr_zones.

Signed-off-by: Sam Li <faithilikerun@gmail.com>
Message-Id: <20230825040556.4217-1-faithilikerun@gmail.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[hreitz: Rebased and fixed comment spelling]
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
-
Jean-Louis Dupond authored
When the discard-no-unref flag is enabled, we keep the reference for normal discard requests. But when a discard is executed on a snapshot/qcow2 image with backing, the discards are saved as zero clusters in the snapshot image. When committing the snapshot to the backing file, it is not discard_in_l2_slice that is called but zero_in_l2_slice, which did not have any logic to keep the reference when discard-no-unref is enabled. Therefore, add logic to the zero_in_l2_slice call to keep the reference on commit.

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1621
Signed-off-by: Jean-Louis Dupond <jean-louis@dupond.be>
Message-Id: <20231003125236.216473-2-jean-louis@dupond.be>
[hreitz: Made the documentation change more verbose, as discussed on-list]
Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
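A sketch of the decision, assuming a simplified L2 entry layout; the flag value is copied for the example only and the real logic sits in zero_in_l2_slice() in block/qcow2-cluster.c.

    #include <stdbool.h>
    #include <stdint.h>

    #define QCOW_OFLAG_ZERO (1ULL << 0)  /* value copied for the sketch */

    /* Zeroing an L2 entry: with discard-no-unref, keep the host
     * cluster mapping (the reference) and only set the zero flag;
     * otherwise drop the mapping, releasing the cluster. */
    static uint64_t zero_l2_entry(uint64_t entry, bool discard_no_unref)
    {
        if (discard_no_unref) {
            return entry | QCOW_OFLAG_ZERO;
        }
        return QCOW_OFLAG_ZERO;
    }

    int main(void)
    {
        uint64_t entry = 0x50000;  /* made-up host cluster offset */
        return zero_l2_entry(entry, true) == (entry | QCOW_OFLAG_ZERO)
               ? 0 : 1;
    }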
-
Vladimir Sementsov-Ogievskiy authored
NVMeQueuePair::reqs has length NVME_NUM_REQS, which is one less than NVME_QUEUE_SIZE.

Fixes: 1086e95d ("block/nvme: switch to a NVMeRequest freelist")
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Maksim Davydov <davydov-max@yandex-team.ru>
Message-id: 20231017125941.810461-5-vsementsov@yandex-team.ru
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
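The relationship as a tiny sketch; the queue size value here is arbitrary.

    #include <assert.h>

    #define NVME_QUEUE_SIZE 128                   /* illustrative */
    #define NVME_NUM_REQS   (NVME_QUEUE_SIZE - 1) /* one slot reserved */

    static int reqs[NVME_NUM_REQS];

    int main(void)
    {
        /* Iterating with the queue size as the bound would step one
         * element past the freelist; the bound must be NVME_NUM_REQS. */
        for (int i = 0; i < NVME_NUM_REQS; i++) {
            reqs[i] = 0;
        }
        assert(NVME_NUM_REQS == NVME_QUEUE_SIZE - 1);
        return 0;
    }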
-
- Nov 03, 2023
-
-
Cédric Le Goater authored
qemu_uuid_unparse() includes a trailing NUL when writing the uuid string and the buffer size should be UUID_FMT_LEN + 1 bytes. Add a define for this size and use it where required. Cc: Fam Zheng <fam@euphon.net> Reviewed-by:
Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by:
Juan Quintela <quintela@redhat.com> Reviewed-by:
"Denis V. Lunev" <den@openvz.org> Signed-off-by:
Cédric Le Goater <clg@redhat.com>
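A sketch of the sizing rule; the name UUID_STR_BUF_SIZE is hypothetical (the patch adds its own define) and snprintf stands in for qemu_uuid_unparse().

    #include <stdio.h>

    #define UUID_FMT_LEN 36  /* "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" */
    /* Hypothetical name for the new define: format length plus NUL. */
    #define UUID_STR_BUF_SIZE (UUID_FMT_LEN + 1)

    int main(void)
    {
        char buf[UUID_STR_BUF_SIZE];  /* NOT buf[UUID_FMT_LEN] */

        snprintf(buf, sizeof(buf),
                 "%08x-%04x-%04x-%04x-%012llx",
                 0x12345678u, 0x9abcu, 0x4defu, 0x8123u,
                 0x456789abcdefULL);
        printf("%s\n", buf);
        return 0;
    }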
-
- Nov 01, 2023
-
-
Steve Sistare authored
Some blockdevs block migration because they do not support sharing across hosts and/or do not support dirty bitmaps. These prohibitions do not apply if the old and new qemu processes do not run concurrently, and if new qemu starts on the same host as old, which is the case for cpr. Narrow the scope of these blockers so they only apply to normal mode. They will not block cpr modes when they are added in subsequent patches. No functional change until a new mode is added. Signed-off-by:
Steve Sistare <steven.sistare@oracle.com> Reviewed-by:
Juan Quintela <quintela@redhat.com> Signed-off-by:
Juan Quintela <quintela@redhat.com> Message-ID: <1698263069-406971-4-git-send-email-steven.sistare@oracle.com>
-
- Oct 31, 2023
-
-
Fiona Ebner authored
To start out, only actively-synced is returned. For example, this is useful for jobs that started out in background mode and switched to active mode. Once actively-synced is true, it's clear that the mode switch has been completed. Note that completion of the switch might happen much earlier, e.g. if the switch happens before the job is ready, once all background operations have finished.

It's assumed that whether the disks are actively-synced or not is more interesting than whether the mode switch completed. That information can still be added if required in the future.

In presence of an iothread, the actively_synced member is now shared between the iothread and the main thread, so turn accesses to it atomic. This requires adapting the output of iotest 109.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Message-ID: <20231031135431.393137-10-f.ebner@proxmox.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
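The atomicity change as a self-contained C11 sketch; QEMU uses its own qatomic_* wrappers rather than <stdatomic.h>.

    #include <stdatomic.h>
    #include <stdbool.h>

    struct MirrorState {
        /* Shared between the iothread and the main thread. */
        atomic_bool actively_synced;
    };

    /* iothread side: flip the flag once the disks are in sync. */
    static void mark_actively_synced(struct MirrorState *s)
    {
        atomic_store(&s->actively_synced, true);
    }

    /* main thread side: e.g. while building the query job info. */
    static bool read_actively_synced(struct MirrorState *s)
    {
        return atomic_load(&s->actively_synced);
    }

    int main(void)
    {
        struct MirrorState s = { false };

        mark_actively_synced(&s);
        return read_actively_synced(&s) ? 0 : 1;
    }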
-
Fiona Ebner authored
In preparation to turn BlockJobInfo into a union with @type as the discriminator. That requires it to be an enum. Even without that requirement, it's nicer to have an enum instead of a str here. No functional change is intended. Signed-off-by:
Fiona Ebner <f.ebner@proxmox.com> Reviewed-by:
Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Message-ID: <20231031135431.393137-7-f.ebner@proxmox.com> Reviewed-by:
Kevin Wolf <kwolf@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Fiona Ebner authored
This allows switching the @copy-mode from 'background' to 'write-blocking'. This is useful for management applications, so they can start out in background mode to avoid limiting guest write speed and switch to active mode when certain criteria are fulfilled.

In presence of an iothread, the copy_mode member is now shared between the iothread and the main thread, so turn accesses to it atomic.

Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Message-ID: <20231031135431.393137-6-f.ebner@proxmox.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
-
Fiona Ebner authored
In preparation to allow changing the copy_mode via QMP. When running in an iothread, it could be that copy_mode is changed from the main thread in between reading copy_mode in bdrv_mirror_top_pwritev() and reading copy_mode in bdrv_mirror_top_do_write(), so they might end up disagreeing about whether copy_to_target is true or false. Avoid that scenario by determining copy_to_target only once and passing it to bdrv_mirror_top_do_write() as an argument. Signed-off-by:
Fiona Ebner <f.ebner@proxmox.com> Reviewed-by:
Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Message-ID: <20231031135431.393137-5-f.ebner@proxmox.com> Reviewed-by:
Kevin Wolf <kwolf@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
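A compact sketch of the read-once pattern; function and field names are stand-ins for the mirror code.

    #include <stdatomic.h>
    #include <stdbool.h>

    enum { COPY_MODE_BACKGROUND, COPY_MODE_WRITE_BLOCKING };

    struct Mirror {
        _Atomic int copy_mode;  /* may be changed by the main thread */
    };

    /* The write path receives the decision as an argument and never
     * re-reads copy_mode, so both layers always agree. */
    static void top_do_write(struct Mirror *m, bool copy_to_target)
    {
        (void)m;
        if (copy_to_target) {
            /* ...also write synchronously to the target... */
        }
    }

    static void top_pwritev(struct Mirror *m)
    {
        /* Determine copy_to_target exactly once... */
        bool copy_to_target =
            atomic_load(&m->copy_mode) == COPY_MODE_WRITE_BLOCKING;

        /* ...and hand it down instead of re-reading shared state. */
        top_do_write(m, copy_to_target);
    }

    int main(void)
    {
        struct Mirror m = { COPY_MODE_WRITE_BLOCKING };

        top_pwritev(&m);
        return 0;
    }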
-