- Oct 28, 2019
-
Hanna Reitz authored
There is no reason why the format drivers need to truncate the protocol node when formatting it. When using the old .bdrv_co_create_opts() interface, the file will be created with no size option anyway, which generally gives it a size of 0. (Exceptions are block devices, which cannot be truncated in any case.) When using blockdev-create, the user must have given the file node some size, so there is no reason why we should override that. qed is an exception: it needs the file to start completely empty (as explained by c743849b).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190918095144.955-4-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
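A loose, self-contained POSIX sketch of the idea (an analogy only, not the QEMU driver code; the file name and "magic" header are invented): once the file has been created with a user-chosen size, formatting means writing a header, not truncating back to zero.

```c
/* Hedged analogy in plain POSIX, not QEMU code: format an already-created
 * file by writing a header at offset 0 without truncating it, so the size
 * chosen at creation time survives. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char magic[4] = "IMG";               /* stand-in format header */
    int fd = open("image.raw", O_RDWR | O_CREAT, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (ftruncate(fd, 1 << 20) < 0) {          /* "blockdev-create": 1 MiB */
        perror("ftruncate");
        return 1;
    }
    if (pwrite(fd, magic, sizeof(magic), 0) < 0) { /* "format": no O_TRUNC */
        perror("pwrite");
        return 1;
    }

    struct stat st;
    fstat(fd, &st);
    printf("size after formatting: %lld bytes\n", (long long)st.st_size);
    close(fd);
    return 0;
}
```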
-
Hanna Reitz authored
No other filter driver has a .bdrv_co_truncate() implementation, and there is no need for one, because the general block layer code can handle it just as well.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190918095144.955-3-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Make the filter truncation (passing it through to bs->file) a first-class citizen and handle it exactly as if it were the filter driver's native implementation of .bdrv_co_truncate(). I do not see a reason not to: it makes the code a bit shorter, and it may even be more correct because this gets us to finish the write_req that we prepared before (which may be important, e.g. to bring dirty bitmaps to the correct size).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190918095144.955-2-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Add a test of how our qcow2 driver handles extra data in snapshot table entries, and of how it repairs overly long snapshot tables.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-17-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-16-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
qcow2 v3 images require every snapshot table entry to have at least 16 bytes of extra data. If an entry does not, let "qemu-img check -r all" fix it.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-15-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
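A self-contained sketch of the repair idea under assumed names (SnapshotEntry and fix_min_extra_data() are invented for illustration, not the qcow2 driver's types): entries with less than 16 bytes of extra data are counted as corruptions and padded with zeroes.

```c
/* Illustrative only, with made-up types: pad snapshot table entries whose
 * extra data is shorter than the 16 bytes required by qcow2 v3. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define V3_MIN_EXTRA_DATA 16u

typedef struct SnapshotEntry {
    uint32_t extra_data_size;
    uint8_t extra_data[64];
} SnapshotEntry;

static unsigned fix_min_extra_data(SnapshotEntry *entries, unsigned count)
{
    unsigned corruptions = 0;

    for (unsigned i = 0; i < count; i++) {
        if (entries[i].extra_data_size < V3_MIN_EXTRA_DATA) {
            /* zero-fill the missing bytes and grow the recorded size */
            memset(entries[i].extra_data + entries[i].extra_data_size, 0,
                   V3_MIN_EXTRA_DATA - entries[i].extra_data_size);
            entries[i].extra_data_size = V3_MIN_EXTRA_DATA;
            corruptions++;
        }
    }
    return corruptions; /* the caller would then rewrite the table on disk */
}

int main(void)
{
    SnapshotEntry entries[2] = {
        { .extra_data_size = 8 },   /* too short: gets fixed */
        { .extra_data_size = 16 },  /* already fine */
    };

    printf("fixed %u of 2 entries\n", fix_min_extra_data(entries, 2));
    return 0;
}
```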
-
Hanna Reitz authored
The user cannot choose which snapshots are removed. This is fine because we have chosen the maximum snapshot table size to be so large (65536 entries) that it cannot be reasonably reached. If the snapshot table exceeds this size, the image has probably been corrupted in some way; in this case, it is most important to just make the image usable such that the user can copy off at least the active layer. (Also note that the snapshots will be removed only with "-r all", so a plain "check" or "check -r leaks" will not delete any data.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-14-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
We currently refuse to open qcow2 images with overly long snapshot tables. This patch makes qemu-img check -r all drop all offending entries past what we deem acceptable. The user cannot choose which snapshots are removed. This is fine because we have chosen the maximum snapshot table size to be so large (64 MB) that it cannot be reasonably reached. If the snapshot table exceeds this size, the image has probably been corrupted in some way; in this case, it is most important to just make the image usable such that the user can copy off at least the active layer. (Also note that the snapshots will be removed only with "-r all", so a plain "check" or "check -r leaks" will not delete any data.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-13-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
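A toy sketch of the clamping logic (the limits echo the two messages above; everything else, including clamp_snapshot_table(), is invented): keep entries until either the entry count or the total table size would exceed the limit, and drop the rest.

```c
/* Toy version only, not the driver code: decide how many snapshot table
 * entries fit within the 65536-entry / 64 MiB limits mentioned above. */
#include <stdint.h>
#include <stdio.h>

#define MAX_SNAPSHOTS    65536u
#define MAX_TABLE_BYTES  (64u << 20)

static unsigned clamp_snapshot_table(const uint64_t *entry_sizes, unsigned n)
{
    uint64_t total = 0;
    unsigned keep = 0;

    while (keep < n && keep < MAX_SNAPSHOTS &&
           total + entry_sizes[keep] <= MAX_TABLE_BYTES) {
        total += entry_sizes[keep];
        keep++;
    }
    return keep; /* with "-r all", entries past this index are discarded */
}

int main(void)
{
    uint64_t sizes[3] = { 40u << 20, 30u << 20, 1024 };

    printf("keeping %u of 3 entries\n", clamp_snapshot_table(sizes, 3));
    return 0;
}
```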
-
Hanna Reitz authored
When repairing the snapshot table, we truncate entries that have too much extra data. This frees up space that we do not have to count towards the snapshot table size.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-12-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
The only case where we currently reject snapshot table entries is when they have too much extra data. Fix such entries with qemu-img check -r all by counting each one as a corruption, reducing its extra_data_size, and then letting qcow2_check_fix_snapshot_table() do the rest.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-11-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
qcow2_check_read_snapshot_table() can perform consistency checks, but it cannot fix everything. Specifically, it cannot allocate new clusters, because that should wait until the refcount structures are known to be consistent (i.e., after qcow2_check_refcounts()). Thus, it cannot call qcow2_write_snapshots(). Do that in qcow2_check_fix_snapshot_table(), which is called after qcow2_check_refcounts(). Currently, there is nothing that would set result->corruptions, so this is a no-op. A follow-up patch will change that.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-10-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Reading the snapshot table can fail. That is a problem when we want to repair the image. Therefore, stop reading the snapshot table in qcow2_do_open() in check mode. Instead, add a new function qcow2_check_read_snapshot_table() that reads the snapshot table at a later point. In the future, we want to handle errors here and fix them.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-9-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
qcow2 v3 requires every snapshot table entry to have two extra data fields: the 64-bit VM state size and the virtual disk size. Both are optional for v2 images, so they may not be present. qcow2_upgrade() therefore should update the snapshot table to ensure all entries have these extra data fields.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1727347
Reported-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-8-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
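A minimal sketch of the two mandatory fields (the field names are descriptive, not necessarily those used in the code; the qcow2 specification is the authoritative source for the on-disk layout): together they make up the 16-byte minimum of extra data that the upgrade has to add.

```c
/* Descriptive sketch of the v3 snapshot extra data; names are illustrative,
 * the on-disk layout is defined by the qcow2 spec. */
#include <stdint.h>
#include <stdio.h>

typedef struct SnapshotExtraData {
    uint64_t vm_state_size_large;  /* 64-bit VM state size */
    uint64_t disk_size;            /* virtual disk size at snapshot time */
} SnapshotExtraData;

int main(void)
{
    printf("minimum extra data: %zu bytes\n", sizeof(SnapshotExtraData));
    return 0;
}
```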
-
Hanna Reitz authored
This does not make sense right now, but it will make sense once we need to do more than just update s->qcow_version.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-7-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Updating the snapshot list will be useful when upgrading a v2 image to v3, so we will need to call this function in qcow2.c.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-6-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
The qcow2 specification says to ignore unknown extra data fields in snapshot table entries. Currently, we discard them whenever we update the image, which is a bit different from "ignore". This patch makes the qcow2 driver keep all unknown extra data fields when updating an image's snapshot table.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-5-mreitz@redhat.com
[mreitz: Adjusted comments as proposed by Eric]
Signed-off-by: Max Reitz <mreitz@redhat.com>
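A sketch of the "keep, don't discard" idea with invented helper names (read_extra_data() and the Snapshot struct are illustrative, and endianness handling is omitted): whatever bytes follow the known fields are copied into a side buffer on read and would be written back verbatim on update. The sketch assumes at least 16 bytes of extra data are present.

```c
/* Illustrative only: preserve unknown extra data bytes instead of dropping
 * them when the snapshot table entry is rewritten. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Snapshot {
    uint64_t vm_state_size_large;
    uint64_t disk_size;
    uint32_t unknown_extra_data_size;
    uint8_t *unknown_extra_data;   /* copied on read, emitted again on write */
} Snapshot;

/* Assumes size >= 16 (the two known 64-bit fields). */
static void read_extra_data(Snapshot *sn, const uint8_t *buf, uint32_t size)
{
    const uint32_t known = 16;

    memcpy(&sn->vm_state_size_large, buf, 8);
    memcpy(&sn->disk_size, buf + 8, 8);
    if (size > known) {
        sn->unknown_extra_data_size = size - known;
        sn->unknown_extra_data = malloc(sn->unknown_extra_data_size);
        memcpy(sn->unknown_extra_data, buf + known,
               sn->unknown_extra_data_size);
    }
}

int main(void)
{
    uint8_t on_disk[24] = { 0 };   /* 16 known bytes + 8 unknown bytes */
    Snapshot sn = { 0 };

    read_extra_data(&sn, on_disk, sizeof(on_disk));
    printf("preserved %u unknown bytes\n", sn.unknown_extra_data_size);
    free(sn.unknown_extra_data);
    return 0;
}
```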
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-4-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-3-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
endof() is a useful macro; we can make use of it outside of virtio.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191011152814.14791-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
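For reference, a self-contained sketch of an endof()-style macro (the exact definition in QEMU's headers may differ slightly; the Header struct is invented): the offset of the first byte past a given struct field.

```c
/* Sketch of an endof()-style macro: offset just past a struct field. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define endof(container, field) \
    (offsetof(container, field) + sizeof(((container *)0)->field))

struct Header {
    uint32_t magic;
    uint32_t version;
    uint64_t size;
};

int main(void)
{
    /* Everything up to and including 'version' spans the first 8 bytes. */
    printf("endof(version) = %zu\n", endof(struct Header, version));
    return 0;
}
```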
-
Hanna Reitz authored
mirror_exit_common() may be called twice (if it is called from mirror_prepare() and fails, it will be called from mirror_abort() again). In such a case, many of the pointers in the MirrorBlockJob object will already be freed. This can be seen most reliably for s->target, which is set to NULL (and then dereferenced by blk_bs()).
Cc: qemu-stable@nongnu.org
Fixes: 737efc1e
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20191014153931.20699-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
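A simplified, self-contained illustration of the hazard and the usual fix (the Job struct and job_exit_common() are invented; this is not the mirror code): the cleanup function clears what it frees, so a second invocation finds NULL instead of a dangling pointer.

```c
/* Illustrative only: make a cleanup path safe to run twice by NULLing
 * pointers after freeing them and checking before use. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct Job {
    char *target;
} Job;

static void job_exit_common(Job *job)
{
    if (job->target) {             /* second call sees NULL and skips */
        printf("releasing %s\n", job->target);
        free(job->target);
        job->target = NULL;
    }
}

int main(void)
{
    Job job = { .target = strdup("target-node") };

    job_exit_common(&job);  /* e.g. from the prepare path, which then fails */
    job_exit_common(&job);  /* and again from the abort path: no double free */
    return 0;
}
```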
-
Maxim Levitsky authored
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-id: 20190913133627.28450-3-mlevitsk@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Maxim Levitsky authored
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Message-id: 20190913133627.28450-2-mlevitsk@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Vladimir Sementsov-Ogievskiy authored
There is no reason to limit the buffered copy to one cluster. Let's allow up to 1 MiB.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191022111805.3432-7-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Vladimir Sementsov-Ogievskiy authored
Currently, the total allocation for parallel requests to a block-copy instance is unlimited. Let's limit it to 128 MiB. For now, block-copy is used only in backup, so in practice this limits the total allocation for the backup job.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191022111805.3432-6-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Vladimir Sementsov-Ogievskiy authored
Introduce an API for some shared splittable resource, like memory. It is going to be used by backup. Backup uses both read/write I/O and copy_range. copy_range may consume memory implicitly, so the new API is abstract: it does not allocate any real memory but only hands out tickets. The idea is that we have some total amount of the resource, and callers should wait in a coroutine queue if there is not enough of it available at the moment.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191022111805.3432-5-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
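A self-contained thread-based sketch of the ticketing idea (this is not the actual coroutine-based co-shared-resource API; all names here are invented): a counter of abstract units, and callers that block until enough units are free. The 128 MiB and 16 MiB figures merely echo the limits mentioned in the neighboring commits.

```c
/* Hedged sketch, not QEMU's API: hand out "tickets" for an abstract
 * resource and make callers wait when not enough is available. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

typedef struct SharedResource {
    pthread_mutex_t lock;
    pthread_cond_t freed;
    uint64_t available;
} SharedResource;

static void shres_init(SharedResource *s, uint64_t total)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->freed, NULL);
    s->available = total;
}

static void shres_acquire(SharedResource *s, uint64_t n)
{
    pthread_mutex_lock(&s->lock);
    while (s->available < n) {
        pthread_cond_wait(&s->freed, &s->lock);   /* wait for a release */
    }
    s->available -= n;                            /* no real allocation */
    pthread_mutex_unlock(&s->lock);
}

static void shres_release(SharedResource *s, uint64_t n)
{
    pthread_mutex_lock(&s->lock);
    s->available += n;
    pthread_cond_broadcast(&s->freed);
    pthread_mutex_unlock(&s->lock);
}

int main(void)
{
    SharedResource mem;

    shres_init(&mem, 128ULL << 20);            /* e.g. a 128 MiB budget */
    shres_acquire(&mem, 16ULL << 20);          /* one in-flight request */
    printf("available: %llu MiB\n", (unsigned long long)(mem.available >> 20));
    shres_release(&mem, 16ULL << 20);
    return 0;
}
```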
-
Vladimir Sementsov-Ogievskiy authored
Merge the copying code into one function, block_copy_do_copy(), which only calls bdrv_* I/O functions and does not do any synchronization (like dirty bitmap set/reset). Refactor the block_copy() function so that it makes the full decision about the size of the chunk to be copied and does all the synchronization (checking for intersecting requests, setting/resetting dirty bitmaps). This will help to:
- introduce parallel processing of block_copy iterations: we need to calculate the chunk size, start asynchronous copying of the chunk, and go on to the next iteration
- simplify further synchronization improvements (like the memory limit in a later commit, and reducing the critical section: we currently lock the whole requested range, when we actually only need to lock the dirty region that we handle at the moment)
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191022111805.3432-4-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
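A rough in-memory sketch of the split (entirely illustrative, no QEMU APIs; do_copy() and copy_loop() are made-up names): the low-level function only moves bytes, while the surrounding loop owns the chunking and the dirty-bitmap bookkeeping.

```c
/* Illustrative only: separate the raw copy from the loop that decides chunk
 * sizes and resets the dirty bitmap. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CHUNK 4u   /* tiny chunk size so the example stays readable */

static void do_copy(uint8_t *dst, const uint8_t *src, size_t off, size_t len)
{
    memcpy(dst + off, src + off, len);   /* no synchronization in here */
}

static void copy_loop(uint8_t *dst, const uint8_t *src, bool *dirty,
                      size_t total)
{
    for (size_t off = 0; off < total; off += CHUNK) {
        size_t len = total - off < CHUNK ? total - off : CHUNK;

        if (!dirty[off / CHUNK]) {
            continue;                    /* only copy dirty regions */
        }
        dirty[off / CHUNK] = false;      /* reset the bitmap first */
        do_copy(dst, src, off, len);
    }
}

int main(void)
{
    uint8_t src[8] = "ABCDEFG";
    uint8_t dst[8] = { 0 };
    bool dirty[2] = { true, false };     /* only the first chunk is dirty */

    copy_loop(dst, src, dirty, sizeof(src));
    printf("copied: %.4s\n", (const char *)dst);
    return 0;
}
```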
-
Vladimir Sementsov-Ogievskiy authored
A large copy range may imply memory allocation and a large I/O effort, so using a 2 GiB copy-range request may be a bad idea. Let's limit it to 16 MiB. This also helps the following patch, which refactors the copy-with-offload fallback to copy-with-bounce-buffer. Note that the total memory usage of backup is still not limited; that will be fixed in a further commit.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191022111805.3432-3-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
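The clamp itself is a one-liner; a minimal sketch with an invented helper name and constant:

```c
/* Illustrative helper: cap a copy-offloading request at 16 MiB. */
#include <stdint.h>
#include <stdio.h>

#define COPY_RANGE_MAX_CHUNK (16u << 20)   /* 16 MiB */

static inline uint64_t clamp_copy_range(uint64_t bytes)
{
    return bytes < COPY_RANGE_MAX_CHUNK ? bytes : COPY_RANGE_MAX_CHUNK;
}

int main(void)
{
    printf("2 GiB request clamped to %llu MiB\n",
           (unsigned long long)(clamp_copy_range(2ULL << 30) >> 20));
    return 0;
}
```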
-
Vladimir Sementsov-Ogievskiy authored
Move the bounce_buffer allocation into block_copy_with_bounce_buffer(). This commit simplifies further work on copying in larger chunks (of differing sizes) and on asynchronous handling of block_copy iterations (with the help of the block/aio_task API). Allocation is fast, a lot faster than disk I/O, so it is not a problem that we now allocate/free the bounce_buffer more often. And we will have to allocate several bounce buffers for parallel execution of loop iterations in the future anyway.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20191022111805.3432-2-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
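A small sketch of the move (invented names, in-memory only; the memcpy calls stand in for the read and write): the bounce buffer is allocated and freed inside the per-chunk function rather than once by the caller, so each, potentially parallel, iteration owns its own buffer.

```c
/* Illustrative only: allocate the bounce buffer inside the per-chunk copy
 * path instead of once in the caller. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int copy_with_bounce_buffer(uint8_t *dst, const uint8_t *src,
                                   size_t off, size_t len)
{
    uint8_t *bounce = malloc(len);     /* allocation is cheap next to I/O */

    if (!bounce) {
        return -1;
    }
    memcpy(bounce, src + off, len);    /* stand-in for the read */
    memcpy(dst + off, bounce, len);    /* stand-in for the write */
    free(bounce);
    return 0;
}

int main(void)
{
    uint8_t src[16] = "0123456789abcde";
    uint8_t dst[16] = { 0 };

    for (size_t off = 0; off < sizeof(src); off += 4) {
        copy_with_bounce_buffer(dst, src, off, 4);  /* one buffer per chunk */
    }
    printf("copied %zu bytes\n", sizeof(dst));
    return 0;
}
```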
-
Hanna Reitz authored
Sockets should be placed into $SOCK_DIR instead of $TEST_DIR, so remove the $TEST_DIR filter from _filter_nbd.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-24-mreitz@redhat.com
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-23-mreitz@redhat.com
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-22-mreitz@redhat.com
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-21-mreitz@redhat.com
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-20-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-19-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-18-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-17-mreitz@redhat.com
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-16-mreitz@redhat.com
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-15-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20191017133155.5327-14-mreitz@redhat.com
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
-
Hanna Reitz authored
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20191017133155.5327-13-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
-