- Jul 29, 2022
-
-
Ilya Leoshkevich authored
Currently QEMU exits with code 0 on both panic and shutdown. For tests it is useful to return 1 on panic, so that it counts as a test failure. Introduce a new exit-failure PanicAction that makes main() return EXIT_FAILURE. Tests can use the -action panic=exit-failure option to activate this behavior. Signed-off-by:
Ilya Leoshkevich <iii@linux.ibm.com> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Reviewed-by:
David Hildenbrand <david@redhat.com> Message-Id: <20220725223746.227063-2-iii@linux.ibm.com> Signed-off-by:
Alex Bennée <alex.bennee@linaro.org>
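A minimal invocation exercising the new option might look like this (sketch only; the kernel image is a placeholder, the -action syntax is the one named above):
  $ qemu-system-x86_64 -nographic -kernel test-kernel.bin -action panic=exit-failure
  $ echo $?   # 1 if the guest panicked, 0 on an ordinary shutdown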
-
- Jul 20, 2022
-
-
Leonardo Bras authored
Signed-off-by:
Leonardo Bras <leobras@redhat.com> Acked-by:
Markus Armbruster <armbru@redhat.com> Acked-by:
Peter Xu <peterx@redhat.com> Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Message-Id: <20220711211112.18951-3-leobras@redhat.com> Signed-off-by:
Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Peter Xu authored
Firstly, postcopy already preempts precopy due to the fact that we do unqueue_page() before looking into dirty bits. However that's not enough: e.g., when host huge pages are enabled and a precopy huge page is being sent, a postcopy request needs to wait until the whole huge page has finished sending. That can introduce quite some delay; the bigger the huge page, the larger the delay. This patch adds a new capability that allows postcopy requests to preempt an existing precopy page while a huge page is being sent, so that postcopy requests can be serviced even faster. Meanwhile, to send them even faster, bypass the precopy stream by providing a standalone postcopy socket for sending requested pages. Since the new behavior is not compatible with the old behavior, it is not the default; it is enabled only when the new capability is set on both src/dst QEMUs. This patch only adds the capability itself; the logic will be added in follow-up patches. Reviewed-by:
Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by:
Juan Quintela <quintela@redhat.com> Signed-off-by:
Peter Xu <peterx@redhat.com> Message-Id: <20220707185342.26794-2-peterx@redhat.com> Signed-off-by:
Dr. David Alan Gilbert <dgilbert@redhat.com>
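The capability name is not spelled out in the message above; assuming it is exposed as "postcopy-preempt", enabling it on both source and destination would look roughly like:
  { "execute": "migrate-set-capabilities",
    "arguments": { "capabilities": [
      { "capability": "postcopy-ram", "state": true },
      { "capability": "postcopy-preempt", "state": true } ] } }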
-
Hyman Huang authored
Implement periodic dirty-rate calculation based on the dirty ring and throttle the virtual CPU until it reaches the quota dirty page rate given by the user. Introduce the QMP commands "set-vcpu-dirty-limit", "cancel-vcpu-dirty-limit" and "query-vcpu-dirty-limit" to enable, disable and query the dirty page limit for a virtual CPU. Meanwhile, introduce the corresponding HMP commands "set_vcpu_dirty_limit", "cancel_vcpu_dirty_limit" and "info vcpu_dirty_limit" so the feature is more usable. "query-vcpu-dirty-limit" only succeeds when the dirty page rate limit is enabled, so just add it to the list of skipped commands to ensure qmp-cmd-test runs successfully. Signed-off-by:
Hyman Huang(黄勇) <huangy81@chinatelecom.cn> Acked-by:
Markus Armbruster <armbru@redhat.com> Reviewed-by:
Peter Xu <peterx@redhat.com> Message-Id: <4143f26706d413dd29db0b672fe58b3d3fbe34bc.1656177590.git.huangy81@chinatelecom.cn> Signed-off-by:
Dr. David Alan Gilbert <dgilbert@redhat.com>
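A sketch of the new QMP commands in use (only the command names are given above; the argument names "cpu-index" and "dirty-rate" are assumptions):
  { "execute": "set-vcpu-dirty-limit", "arguments": { "cpu-index": 0, "dirty-rate": 200 } }
  { "execute": "query-vcpu-dirty-limit" }
  { "execute": "cancel-vcpu-dirty-limit", "arguments": { "cpu-index": 0 } }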
-
Eugenio Pérez authored
Finally offering the possibility to enable SVQ from the command line. Signed-off-by:
Eugenio Pérez <eperezma@redhat.com> Acked-by:
Markus Armbruster <armbru@redhat.com> Reviewed-by:
Michael S. Tsirkin <mst@redhat.com> Signed-off-by:
Jason Wang <jasowang@redhat.com>
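A hedged sketch of what enabling SVQ from the command line could look like (the x-svq property name and the vhost-vdpa device path are assumptions, not taken from the message above):
  -netdev vhost-vdpa,id=vdpa0,vhostdev=/dev/vhost-vdpa-0,x-svq=on \
  -device virtio-net-pci,netdev=vdpa0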
-
- Jul 19, 2022
-
-
Felix xq Queißner authored
The patch adds a "show_tabs" command line option for the GTK UI, similar to "grab_on_hover". With this option, the tabbed view mode no longer has to be enabled by hand at each start of the VM. Signed-off-by:
Felix "xq" Queißner <xq@random-projects.net> Reviewed-by:
Thomas Huth <thuth@redhat.com> Reviewed-by:
Hanna Reitz <hreitz@redhat.com> Message-Id: <20220712133753.18937-1-xq@random-projects.net> Signed-off-by:
Gerd Hoffmann <kraxel@redhat.com>
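For illustration, the option would be used roughly as follows (assuming the usual dashed QAPI spelling on the command line, as with grab-on-hover):
  $ qemu-system-x86_64 -display gtk,show-tabs=on ...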
-
- Jul 18, 2022
-
-
Paolo Bonzini authored
The next version of Linux will introduce boolean statistics, which can only have 0 or 1 values. Support them in the schema and in the HMP command. Suggested-by:
Amneesh Singh <natto@weirdnatto.in> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
- Jun 29, 2022
-
-
Vladimir Sementsov-Ogievskiy authored
In some scenarios, when a copy-before-write operation lasts too long, it's better to cancel it. Most useful is to combine the new option with on-cbw-error=break-snapshot: this way, if a cbw operation takes too long, we just cancel the backup process but do not disturb the guest too much. Note the tricky point of the realization: we keep an additional count in bs->in_flight during the block_copy operation even if it has timed out. Background "cancelled" block_copy operations will finish at some point and will want to access the state, so we must take care not to free the state in .bdrv_close() earlier. Signed-off-by:
Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org> Reviewed-by:
Hanna Reitz <hreitz@redhat.com> [vsementsov: use bdrv_inc_in_flight()/bdrv_dec_in_flight() instead of direct manipulation on bs->in_flight] Signed-off-by:
Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
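A sketch of configuring the filter with both options (the "cbw-timeout" property name and the node names are assumptions; only on-cbw-error=break-snapshot is named above):
  { "execute": "blockdev-add",
    "arguments": { "driver": "copy-before-write", "node-name": "cbw0",
                   "file": "disk0", "target": "backup0",
                   "on-cbw-error": "break-snapshot", "cbw-timeout": 60 } }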
-
- Jun 28, 2022
-
-
Dr. David Alan Gilbert authored
Inspired by Julia Lawall's fixing of Linux kernel comments, I looked at qemu, although I did it manually. Signed-off-by:
Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by:
Daniel Henrique Barboza <danielhb413@gmail.com> Reviewed-by:
Klaus Jensen <k.jensen@samsung.com> Message-Id: <20220614104045.85728-2-dgilbert@redhat.com> Signed-off-by:
Laurent Vivier <laurent@vivier.eu>
-
Vladimir Sementsov-Ogievskiy authored
Currently, the behavior on copy-before-write operation failure is simple: report an error to the guest. Let's implement an alternative behavior: break the whole copy-before-write process (and the corresponding backup job or NBD client) but keep the guest working. This is needed if we consider guest stability more important. The realisation is simple: on copy-before-write failure we set s->snapshot_ret and continue guest operations. s->snapshot_ret being set will lead to all further snapshot API requests failing. Note that all in-flight snapshot-API requests may still succeed: we do wait for them on the BREAK_SNAPSHOT-failure path in cbw_do_copy_before_write(). Signed-off-by:
Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org> Reviewed-by:
Hanna Reitz <hreitz@redhat.com> Signed-off-by:
Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
-
- Jun 24, 2022
-
-
Xie Yongji authored
Currently we use the 'id' option as the name of the VDUSE device. That is a bit confusing since we use one value for two different purposes: the ID to identify the export within QEMU (must be distinct from any other exports in the same QEMU process, but can overlap with names used by other processes), and the VDUSE name to uniquely identify it on the host (must be distinct from other VDUSE devices on the same host, but can overlap with other export types like NBD in the same process). To make this clear, this patch adds a separate 'name' option to specify the VDUSE name for the vduse-blk export instead. Signed-off-by:
Xie Yongji <xieyongji@bytedance.com> Message-Id: <20220614051532.92-7-xieyongji@bytedance.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
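With the separate option, an export declaration could then look like this hedged sketch (reusing the qemu-storage-daemon syntax from the Jun 24 vduse-blk entry below; the values are placeholders):
  $ qemu-storage-daemon \
      --blockdev file,node-name=drive0,filename=test.img \
      --export vduse-blk,node-name=drive0,id=export0,name=vduse0,writable=on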
-
Xie Yongji authored
Add a 'serial' option to allow the user to specify this value explicitly. The default value is changed to an empty string, matching what we do in "hw/block/virtio-blk.c". Signed-off-by:
Xie Yongji <xieyongji@bytedance.com> Message-Id: <20220614051532.92-6-xieyongji@bytedance.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
Xie Yongji authored
This implements a VDUSE block backend based on the libvduse library. We can use it to export BDSs for both VM and container (host) usage. The new command-line syntax is: $ qemu-storage-daemon \ --blockdev file,node-name=drive0,filename=test.img \ --export vduse-blk,node-name=drive0,id=vduse-export0,writable=on After the qemu-storage-daemon has started, we need to use the "vdpa" command to attach the device to the vDPA bus: $ vdpa dev add name vduse-export0 mgmtdev vduse Also, the device must be removed via the "vdpa" command before we stop the qemu-storage-daemon. Signed-off-by:
Xie Yongji <xieyongji@bytedance.com> Reviewed-by:
Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20220523084611.91-7-xieyongji@bytedance.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
-
- Jun 22, 2022
-
-
Leonardo Bras authored
When originally implemented, zero_copy_send was designed as a Migration parameter. But taking into account how it is supposed to work, and the difference between a capability and a parameter, zero-copy-send clearly works better as a capability. Given how recently the change was merged, it was decided that there is still time to make it right and convert zero_copy_send into a Migration capability. Signed-off-by:
Leonardo Bras <leobras@redhat.com> Reviewed-by:
Juan Quintela <quintela@redhat.com> Acked-by:
Markus Armbruster <armbru@redhat.com> Acked-by:
Peter Xu <peterx@redhat.com> Signed-off-by:
Juan Quintela <quintela@redhat.com> Signed-off-by:
Dr. David Alan Gilbert <dgilbert@redhat.com> dgilbert: always define the capability, even on non-Linux but error if set; avoids build problems with the capability
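Assuming the capability keeps the zero-copy-send spelling used for the earlier parameter (see the May 16 entry below), turning it on would look something like:
  { "execute": "migrate-set-capabilities",
    "arguments": { "capabilities": [
      { "capability": "zero-copy-send", "state": true } ] } }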
-
- Jun 15, 2022
-
-
Jagannathan Raman authored
Set up a handler to run the vfio-user context. The context is driven by messages to the file descriptor associated with it - get the fd for the context and hook up the handler to it. Signed-off-by:
Elena Ufimtseva <elena.ufimtseva@oracle.com> Signed-off-by:
John G Johnson <john.g.johnson@oracle.com> Signed-off-by:
Jagannathan Raman <jag.raman@oracle.com> Reviewed-by:
Stefan Hajnoczi <stefanha@redhat.com> Message-id: e934b0090529d448b6a7972b21dfc3d7421ce494.1655151679.git.jag.raman@oracle.com Signed-off-by:
Stefan Hajnoczi <stefanha@redhat.com>
-
Jagannathan Raman authored
Define the vfio-user object, which is a remote process server for QEMU. Set up the object initialization functions and properties necessary to instantiate the object. Signed-off-by:
Elena Ufimtseva <elena.ufimtseva@oracle.com> Signed-off-by:
John G Johnson <john.g.johnson@oracle.com> Signed-off-by:
Jagannathan Raman <jag.raman@oracle.com> Reviewed-by:
Stefan Hajnoczi <stefanha@redhat.com> Message-id: e45a17001e9b38f451543a664ababdf860e5f2f2.1655151679.git.jag.raman@oracle.com Signed-off-by:
Stefan Hajnoczi <stefanha@redhat.com>
-
- Jun 14, 2022
-
-
Paolo Bonzini authored
Of the block device commands, those that are available outside system emulators do not require a fully constructed machine by definition. Allow running them before machine initialization has concluded. Of the ones that are available inside system emulation, allow querying the PR managers, and setting up accounting and throttling. Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Allow retrieving only a subset of statistics. This can be useful for example in order to plot a subset of the statistics many times a second: KVM publishes ~40 statistics for each vCPU on x86; retrieving and serializing all of them would be useless. Another use will be in HMP in the following patch; implementing the filter in the backend is easy enough that it was deemed okay to make this a public interface. Example: { "execute": "query-stats", "arguments": { "target": "vcpu", "vcpus": [ "/machine/unattached/device[2]", "/machine/unattached/device[4]" ], "providers": [ { "provider": "kvm", "names": [ "l1d_flush", "exits" ] } ] } } { "return": { "vcpus": [ { "path": "/machine/unattached/device[2]", "providers": [ { "provider": "kvm", "stats": [ { "name": "l1d_flush", "value": 41213 }, { "name": "exits", "value": 74291 } ] } ] }, { "path": "/machine/unattached/device[4]", "providers": [ { "provider": "kvm", "stats": [ { "name": "l1d_flush", "value": 16132 }, { "name": "exits", "value": 57922 } ] } ] } ] } } Extracted from a patch by Mark Kanda. Reviewed-by:
Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Allow retrieving the statistics from a specific provider only. This can be used in the future by HMP commands such as "info sync-profile" or "info profile". The next patch also adds filter-by-provider capabilities to the HMP equivalent of query-stats, "info stats". Example: { "execute": "query-stats", "arguments": { "target": "vm", "providers": [ { "provider": "kvm" } ] } } The QAPI is a bit more verbose than just a list of StatsProvider, so that it can be subsequently extended with filtering of statistics by name. If a provider is specified more than once in the filter, each request will be included separately in the output. Extracted from a patch by Mark Kanda. Reviewed-by:
Markus Armbruster <armbru@redhat.com> Reviewed-by:
Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Paolo Bonzini authored
Introduce simple filtering of statistics that allows retrieving statistics for a subset of the guest vCPUs. This will be used, for example, by the HMP monitor in order to retrieve the statistics for the currently selected CPU. Example: { "execute": "query-stats", "arguments": { "target": "vcpu", "vcpus": [ "/machine/unattached/device[2]", "/machine/unattached/device[4]" ] } } Extracted from a patch by Mark Kanda. Reviewed-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Mark Kanda authored
Add support for querying fd-based KVM stats - as introduced by Linux kernel commit: cb082bfab59a ("KVM: stats: Add fd-based API to read binary stats data") This allows the user to analyze the behavior of the VM without access to debugfs. Signed-off-by:
Mark Kanda <mark.kanda@oracle.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
Mark Kanda authored
Gathering statistics is important for development, for monitoring and for performance measurement. There are tools such as kvm_stat that do this and they rely on the _user_ knowing the interesting data points rather than the tool (which can treat them as opaque). The commands introduced in this commit add QMP support for querying stats; the goal is to take the capabilities of these tools and make them available throughout the whole virtualization stack, so that one can observe, monitor and measure virtual machines without having shell access + root on the host that runs them. query-stats returns a list of all stats per target type (only VM and vCPU to start); future commits add extra options for specifying stat names, vCPU qom paths, and providers. All these are used by the HMP command "info stats". Because of the development use cases around statistics, a good HMP interface is important. query-stats-schemas returns a list of stats included in each target type, with an option for specifying the provider. The concepts in the schema are based on the KVM binary stats' own introspection data, just translated to QAPI. There are two reasons to have a separate schema that is not tied to the QAPI schema. The first is the contents of the schemas: the new introspection data provides different information than the QAPI data, namely unit of measurement, how the numbers are gathered and change (peak/instant/cumulative/histogram), and histogram bucket sizes. There's really no reason to have this kind of metadata in the QAPI introspection schema (except possibly for the unit of measure, but there's a very weak justification). Another reason is the dynamicity of the schema. The QAPI introspection data is very much static; and while QOM is somewhat more dynamic, generally we consider that to be a bug rather than a feature these days. On the other hand, the statistics that are exposed by QEMU might be passed through from another source, such as KVM, and the disadvantages of manually updating the QAPI schema far outweigh the benefits from vetting the statistics and filtering out anything that seems "too unstable". Running an old QEMU with a new kernel is a supported use case; if the old QEMU cannot expose statistics from a new kernel, or if a kernel developer needs to change QEMU before gathering new info from the new kernel, then that is a poor user interface. The framework provides a method to register callbacks for these QMP commands. Most of the work in fact is done by the callbacks, and a large majority of this patch is new QAPI structs and commands. Examples (with KVM stats): - Query all VM stats: { "execute": "query-stats", "arguments" : { "target": "vm" } } { "return": [ { "provider": "kvm", "stats": [ { "name": "max_mmu_page_hash_collisions", "value": 0 }, { "name": "max_mmu_rmap_size", "value": 0 }, { "name": "nx_lpage_splits", "value": 148 }, ... ] }, { "provider": "xyz", "stats": [ ... ] } ] } - Query all vCPU stats: { "execute": "query-stats", "arguments" : { "target": "vcpu" } } { "return": [ { "provider": "kvm", "qom_path": "/machine/unattached/device[0]", "stats": [ { "name": "guest_mode", "value": 0 }, { "name": "directed_yield_successful", "value": 0 }, { "name": "directed_yield_attempted", "value": 106 }, ... ] }, { "provider": "kvm", "qom_path": "/machine/unattached/device[1]", "stats": [ { "name": "guest_mode", "value": 0 }, { "name": "directed_yield_successful", "value": 0 }, { "name": "directed_yield_attempted", "value": 106 }, ...
] }, ] } - Retrieve the schemas: { "execute": "query-stats-schemas" } { "return": [ { "provider": "kvm", "target": "vcpu", "stats": [ { "name": "guest_mode", "unit": "none", "base": 10, "exponent": 0, "type": "instant" }, { "name": "directed_yield_successful", "unit": "none", "base": 10, "exponent": 0, "type": "cumulative" }, ... ] }, { "provider": "kvm", "target": "vm", "stats": [ { "name": "max_mmu_page_hash_collisions", "unit": "none", "base": 10, "exponent": 0, "type": "peak" }, ... ] }, { "provider": "xyz", "target": "vm", "stats": [ ... ] } ] } Signed-off-by:
Mark Kanda <mark.kanda@oracle.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com>
-
- Jun 09, 2022
-
-
Jonathan Cameron authored
Paolo Bonzini requested this change to simplify the ongoing effort to allow machine setup entirely via RPC. Includes shortening the command line form from cxl-fixed-memory-window to cxl-fmw, as the command lines are extremely long even with this change. The JSON change is needed to ensure that there is a CXLFixedMemoryWindowOptionsList even though the actual element in the JSON is never used. Similar to the existing SgxEpcProperties. Update qemu-options.hx to reflect that this is now a -machine parameter. The bulk of -M / -machine parameters are documented under machine, so use that in preference to M. Update cxl-test and bios-tables-test to reflect the new parameters. Signed-off-by:
Jonathan Cameron <Jonathan.Cameron@huawei.com> Reviewed-by:
Ben Widawsky <ben@bwidawsk.net> Reviewed-by:
Davidlohr Bueso <dave@stgolabs.net> Message-Id: <20220608145440.26106-2-Jonathan.Cameron@huawei.com> Reviewed-by:
Michael S. Tsirkin <mst@redhat.com> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com>
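With the option moved under -machine and shortened to cxl-fmw, a window definition might look roughly like this (the dotted property spelling is an assumption based on the description above; compare the longer -cxl-fixed-memory-window form in the May 13 entry below):
  -machine q35,cxl=on,cxl-fmw.0.targets.0=cxl.0,cxl-fmw.0.size=128G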
-
- Jun 06, 2022
-
-
Xiaojuan Yang authored
Emulate a 3A5000 board using the new LoongArch instructions. The 3A5000 belongs to the Loongson3 series of processors. The board consists of a 3A5000 CPU model and the virt bridge. The real 3A5000 board is quite complicated and contains many functions; for now, in TCG softmmu mode, only some of them are emulated. For more detailed info see https://github.com/loongson/LoongArch-Documentation Signed-off-by:
Xiaojuan Yang <yangxiaojuan@loongson.cn> Signed-off-by:
Song Gao <gaosong@loongson.cn> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220606124333.2060567-31-yangxiaojuan@loongson.cn> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
-
Xiaojuan Yang authored
Signed-off-by:
Xiaojuan Yang <yangxiaojuan@loongson.cn> Signed-off-by:
Song Gao <gaosong@loongson.cn> Reviewed-by:
Richard Henderson <richard.henderson@linaro.org> Message-Id: <20220606124333.2060567-22-yangxiaojuan@loongson.cn> Signed-off-by:
Richard Henderson <richard.henderson@linaro.org>
-
- Jun 03, 2022
-
-
Thomas Huth authored
The "-display sdl" option still uses a hand-crafted parser for its parameters since we didn't want to drag an interface we considered somewhat flawed into the QAPI schema. Since the flaws are gone now, it's time to QAPIfy. This introduces the new "DisplaySDL" QAPI struct that is used to hold the parameters that are unique to the SDL display. The only specific parameter is currently "grab-mod" that is used to specify the required modifier keys to escape from the mouse grabbing mode. Message-Id: <20220519155625.1414365-3-thuth@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Thomas Huth <thuth@redhat.com>
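For example, the QAPIfied option could be used as follows (the lshift-lctrl-lalt value is an assumed example of an accepted modifier set):
  $ qemu-system-x86_64 -display sdl,grab-mod=lshift-lctrl-lalt ...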
-
- May 26, 2022
-
-
Lei He authored
Introduce akcipher types, also include RSA related types. Signed-off-by:
Lei He <helei.sig11@bytedance.com> Signed-off-by:
zhenwei pi <pizhenwei@bytedance.com> Signed-off-by:
Daniel P. Berrangé <berrange@redhat.com>
-
- May 17, 2022
-
-
Vladislav Yaroshchuk authored
Create separate netdevs for each vmnet operating mode: - vmnet-host - vmnet-shared - vmnet-bridged Reviewed-by:
Akihiko Odaki <akihiko.odaki@gmail.com> Tested-by:
Akihiko Odaki <akihiko.odaki@gmail.com> Acked-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Vladislav Yaroshchuk <Vladislav.Yaroshchuk@jetbrains.com> Signed-off-by:
Jason Wang <jasowang@redhat.com>
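Each operating mode becomes its own -netdev type; a hedged example of the shared mode (id and guest device are placeholders):
  $ qemu-system-x86_64 ... \
      -netdev vmnet-shared,id=net0 \
      -device virtio-net-pci,netdev=net0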
-
- May 16, 2022
-
-
Leonardo Bras authored
Add a property that allows zero-copy migration of memory pages on the sending side, and also include a helper function migrate_use_zero_copy_send() to check whether it is enabled. No code is introduced to actually do the migration, but this allows future implementations to enable/disable the feature. On non-Linux builds this parameter is compiled out. Signed-off-by:
Leonardo Bras <leobras@redhat.com> Reviewed-by:
Peter Xu <peterx@redhat.com> Reviewed-by:
Daniel P. Berrangé <berrange@redhat.com> Reviewed-by:
Juan Quintela <quintela@redhat.com> Acked-by:
Markus Armbruster <armbru@redhat.com> Message-Id: <20220513062836.965425-5-leobras@redhat.com> Signed-off-by:
Dr. David Alan Gilbert <dgilbert@redhat.com>
-
Markus Armbruster authored
Commit 05ebf841 "qapi: Enforce command naming rules" inserted new code between a comment and the code it applies to. Move the comment back to its code, and add one for the new code. Signed-off-by:
Markus Armbruster <armbru@redhat.com> Message-Id: <20220510081433.3289762-1-armbru@redhat.com>
-
Andrea Bolognani authored
Perfectly aligned things look pretty, but keeping them that way as the schema evolves requires churn, and in some cases newly-added lines are not aligned properly. Overall, trying to align things is just not worth the trouble. Signed-off-by:
Andrea Bolognani <abologna@redhat.com> Message-Id: <20220503073737.84223-8-abologna@redhat.com> Message-Id: <20220503073737.84223-9-abologna@redhat.com> Reviewed-by:
Eric Blake <eblake@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> [Two patches squashed together] Signed-off-by:
Markus Armbruster <armbru@redhat.com>
-
Andrea Bolognani authored
The only instances that get changed are those in which the additional whitespace was not (or couldn't possibly be) used for alignment purposes. Signed-off-by:
Andrea Bolognani <abologna@redhat.com> Message-Id: <20220503073737.84223-7-abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Markus Armbruster <armbru@redhat.com>
-
Andrea Bolognani authored
Signed-off-by:
Andrea Bolognani <abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Message-Id: <20220503073737.84223-6-abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Markus Armbruster <armbru@redhat.com>
-
Andrea Bolognani authored
Signed-off-by:
Andrea Bolognani <abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Message-Id: <20220503073737.84223-5-abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Markus Armbruster <armbru@redhat.com>
-
Andrea Bolognani authored
This only affects readability. The generated documentation doesn't change. Signed-off-by:
Andrea Bolognani <abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Message-Id: <20220503073737.84223-4-abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Markus Armbruster <armbru@redhat.com>
-
Andrea Bolognani authored
It should start on the very first column. Signed-off-by:
Andrea Bolognani <abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Message-Id: <20220503073737.84223-3-abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Markus Armbruster <armbru@redhat.com>
-
Andrea Bolognani authored
Signed-off-by:
Andrea Bolognani <abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Message-Id: <20220503073737.84223-2-abologna@redhat.com> Reviewed-by:
Markus Armbruster <armbru@redhat.com> Signed-off-by:
Markus Armbruster <armbru@redhat.com>
-
Markus Armbruster authored
"Since X.Y" is not recognized as a tagged section, and therefore not formatted as such in generated documentation. Fix by adding the required colon. Signed-off-by:
Markus Armbruster <armbru@redhat.com> Message-Id: <20220422132807.1704411-1-armbru@redhat.com> Reviewed-by:
Andrea Bolognani <abologna@redhat.com>
-
- May 13, 2022
-
-
Jonathan Cameron authored
The concept of these is introduced in [1] in terms of the description of the CEDT ACPI table. The principle is more general. Unlike the routing that happens once traffic hits the CXL root bridges, the host system memory address routing is implementation defined and effectively static once observable by standard / generic system software. Each CXL Fixed Memory Window (CFMW) is a region of PA space which has fixed system dependent routing configured so that accesses can be routed to the CXL devices below a set of target root bridges. The accesses may be interleaved across multiple root bridges. For QEMU we could have fully specified these regions in terms of a base PA + size, but as the absolute address does not matter it is simpler to let individual platforms place the memory regions. Examples: -cxl-fixed-memory-window targets.0=cxl.0,size=128G -cxl-fixed-memory-window targets.0=cxl.1,size=128G -cxl-fixed-memory-window targets.0=cxl.0,targets.1=cxl.1,size=256G,interleave-granularity=2k Specifies * 2x 128G regions not interleaved across root bridges, one for each of the root bridges with ids cxl.0 and cxl.1 * 256G region interleaved across root bridges with ids cxl.0 and cxl.1 with a 2k interleave granularity. When system software enumerates the devices below a given root bridge it can then decide which CFMW to use. If non-interleaved operation is desired (or possible) it can use the appropriate CFMW for the root bridge in question. If there are suitable devices to interleave across the two root bridges then it may use the 3rd CFMW. A number of other designs were considered but the following constraints made it hard to adapt existing QEMU approaches to this particular problem. 1) The size must be known before a specific architecture / board brings up its PA memory map. We need to set up an appropriate region. 2) Using links to the host bridges provides a clean command line interface but these links cannot be established until command line devices have been added. Hence the two step process used here of first establishing the size, interleave-ways and granularity + caching the ids of the host bridges and then, once available, finding the actual host bridges so they can be used later to support interleave decoding. [1] CXL 2.0 ECN: CEDT CFMWS & QTG DSM (computeexpresslink.org / specifications) Signed-off-by:
Jonathan Cameron <jonathan.cameron@huawei.com> Acked-by: Markus Armbruster <armbru@redhat.com> # QAPI Schema Message-Id: <20220429144110.25167-28-Jonathan.Cameron@huawei.com> Reviewed-by:
Michael S. Tsirkin <mst@redhat.com> Signed-off-by:
Michael S. Tsirkin <mst@redhat.com>
-
- May 12, 2022
-
-
Eric Blake authored
According to the NBD spec, a server that advertises NBD_FLAG_CAN_MULTI_CONN promises that multiple client connections will not see any cache inconsistencies: when properly separated by a single flush, actions performed by one client will be visible to another client, regardless of which client did the flush. We always satisfy these conditions in qemu - even when we support multiple clients, ALL clients go through a single point of reference into the block layer, with no local caching. The effect of one client is instantly visible to the next client. Even if our backend were a network device, we argue that any multi-path caching effects that would cause inconsistencies in back-to-back actions not seeing the effect of previous actions would be a bug in that backend, and not the fault of caching in qemu. As such, it is safe to unconditionally advertise CAN_MULTI_CONN for any qemu NBD server situation that supports parallel clients. Note, however, that we don't want to advertise CAN_MULTI_CONN when we know that a second client cannot connect (for historical reasons, qemu-nbd defaults to a single connection while nbd-server-add and QMP commands default to unlimited connections; but we already have existing means to let either style of NBD server creation alter those defaults). This is visible by no longer advertising MULTI_CONN for 'qemu-nbd -r' without -e, as in the iotest nbd-qemu-allocation. The harder part of this patch is setting up an iotest to demonstrate behavior of multiple NBD clients to a single server. It might be possible with parallel qemu-io processes, but I found it easier to do in python with the help of libnbd, and help from Nir and Vladimir in writing the test. Signed-off-by:
Eric Blake <eblake@redhat.com> Suggested-by:
Nir Soffer <nsoffer@redhat.com> Suggested-by:
Vladimir Sementsov-Ogievskiy <v.sementsov-og@mail.ru> Message-Id: <20220512004924.417153-3-eblake@redhat.com> Signed-off-by:
Kevin Wolf <kwolf@redhat.com>
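As described above, qemu-nbd keeps its historical single-connection default, so MULTI_CONN is only advertised once additional connections are allowed, e.g. (a sketch; the image name is a placeholder):
  $ qemu-nbd -r test.img          # single client, MULTI_CONN not advertised
  $ qemu-nbd -r -e 4 test.img     # up to 4 clients, MULTI_CONN advertised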
-