  1. Jan 14, 2022
    • block: drop BLK_PERM_GRAPH_MOD · 64631f36
      Vladimir Sementsov-Ogievskiy authored
      
      First, this permission never protected a node from being changed, as
      generic child-replacing functions don't check it.
      
      Second, it is an odd permission: it represents a parent node's
      permission to change its child. But children are generally replaced
      by other mechanisms, such as jobs or QMP commands, not by nodes.
      
      The graph-mod permission is hard to understand. All other permissions
      describe operations performed by the parent node on its child: read,
      write, resize. Graph modification operations are something completely
      different.
      
      The only place where BLK_PERM_GRAPH_MOD is used as "perm" (not shared
      perm) is mirror_start_job, for s->target. Still, modern code should
      use bdrv_freeze_backing_chain() to protect against graph modification;
      where we don't do that, it may be considered a bug. So dropping
      GRAPH_MOD is somewhat risky, and analyzing the possible loss of
      protection is hard. But one day we should do it; let's do it now.
      
      One more note: locking the corresponding byte in file-posix doesn't
      make sense at all.
      
      Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Message-Id: <20210902093754.2352-1-vsementsov@virtuozzo.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • qapi/block: Restrict vhost-user-blk to CONFIG_VHOST_USER_BLK_SERVER · bb01ea73
      Philippe Mathieu-Daudé authored
      
      When building QEMU with --disable-vhost-user and using introspection,
      query-qmp-schema lists vhost-user-blk even though it's not actually
      available:
      
        { "execute": "query-qmp-schema" }
        {
            "return": [
                ...
                {
                    "name": "312",
                    "members": [
                        {
                            "name": "nbd"
                        },
                        {
                            "name": "vhost-user-blk"
                        }
                    ],
                    "meta-type": "enum",
                    "values": [
                        "nbd",
                        "vhost-user-blk"
                    ]
                },
      
      Restrict vhost-user-blk in BlockExportType when
      CONFIG_VHost_USER_BLK_SERVER is disabled, so it
      doesn't end up listed by query-qmp-schema.
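      The shape of the fix is a QAPI conditional on the enum member. The
      following is an approximate sketch of the schema change, not a
      verbatim quote of the patch:

```
{ 'enum': 'BlockExportType',
  'data': [ 'nbd',
            { 'name': 'vhost-user-blk',
              'if': 'CONFIG_VHOST_USER_BLK_SERVER' } ] }
```

      With the 'if' key, the generator only emits the member when the
      config symbol is set, so introspection output matches the build.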
      
      Fixes: 90fc91d5 ("convert vhost-user-blk server to block export API")
      Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Message-Id: <20220107105420.395011-4-f4bug@amsat.org>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
    • softmmu: fix device deletion events with -device JSON syntax · 64b4529a
      Daniel P. Berrangé authored
      The -device JSON syntax implementation leaks a reference on the
      created DeviceState instance. As a result, when the device is
      hot-unplugged, the device_finalize method is never called, and so
      the required DEVICE_DELETED event is not emitted.
      
      A 'json-cli' feature was previously added to the 'device_add' QMP
      command's QAPI schema to indicate to management applications that
      -device supports JSON syntax. Given the hotplug bug, that feature
      flag is not usable for its purpose, so add a new 'json-cli-hotplug'
      feature to indicate that -device supports JSON syntax without
      breaking hotplug.
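      After this change, management applications probing query-qmp-schema
      should see both flags on the device_add entry; roughly (an
      illustrative excerpt, not verbatim output):

```
{
    "name": "device_add",
    "meta-type": "command",
    "features": [ "json-cli", "json-cli-hotplug" ]
}
```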
      
      Fixes: 5dacda51
      Resolves: https://gitlab.com/qemu-project/qemu/-/issues/802
      
      
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
      Message-Id: <20220105123847.4047954-2-berrange@redhat.com>
      Reviewed-by: Laurent Vivier <lvivier@redhat.com>
      Tested-by: Ján Tomko <jtomko@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Kevin Wolf <kwolf@redhat.com>
  2. Jan 07, 2022
  3. Dec 31, 2021
    • hw/core/machine: Introduce CPU cluster topology support · 864c3b5c
      Yanan Wang authored
      
      The new Cluster-Aware Scheduling support landed in Linux 5.16 and
      has been shown to benefit scheduling performance (e.g. load
      balancing and the wake_affine strategy) on both x86_64 and AArch64.
      
      So as of Linux 5.16 we have a four-level arch-neutral CPU topology
      definition like the one below, and a new scheduler level for clusters:
      struct cpu_topology {
          int thread_id;
          int core_id;
          int cluster_id;
          int package_id;
          int llc_id;
          cpumask_t thread_sibling;
          cpumask_t core_sibling;
          cpumask_t cluster_sibling;
          cpumask_t llc_sibling;
      };
      
      A cluster generally means a group of CPU cores which share L2 cache
      or other mid-level resources, and it is these shared resources that
      the scheduler exploits to improve its behavior. In terms of size, a
      cluster sits between the CPU die and the CPU core. For example, some
      ARM64 Kunpeng servers have 6 clusters in each NUMA node and 4 CPU
      cores in each cluster; the 4 CPU cores share a separate L2 cache and
      an L3 cache tag, which brings a cache-affinity advantage.
      
      In virtualization, on hosts that have physical clusters, if we
      design a vCPU topology with a cluster level for the guest kernel
      and pin vCPUs accordingly, a cluster-aware guest kernel can also
      make use of the cache affinity of CPU clusters to gain similar
      scheduling performance.
      
      This patch adds infrastructure for CPU cluster level topology
      configuration and parsing, so that the user can specify the cluster
      parameter if their machine supports it.
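      The cluster level simply multiplies into the topology hierarchy
      between sockets/dies and cores. A minimal sketch of that arithmetic
      (hypothetical helper, not QEMU code):

```python
def total_cpus(sockets=1, dies=1, clusters=1, cores=1, threads=1):
    """Logical CPU count implied by an -smp topology:
    socket > die > cluster > core > thread."""
    return sockets * dies * clusters * cores * threads

# The Kunpeng layout from the message: 6 clusters per node, 4 cores each
print(total_cpus(sockets=1, clusters=6, cores=4))  # 24
```

      On the command line this would look something like
      -smp 24,sockets=1,clusters=6,cores=4,threads=1 (assuming the
      machine type accepts the new cluster parameter).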
      
      Signed-off-by: Yanan Wang <wangyanan55@huawei.com>
      Message-Id: <20211228092221.21068-3-wangyanan55@huawei.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      [PMD: Added '(since 7.0)' to @clusters in qapi/machine.json]
      Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
  4. Dec 21, 2021
  5. Dec 10, 2021
    • numa: Support SGX numa in the monitor and Libvirt interfaces · 4755927a
      Yang Zhong authored
      
      Add an SGXEPCSection list to SGXInfo to show detailed info for each
      of the SGX EPC sections, instead of only the total size as before.
      This enables NUMA support for the 'info sgx' command and the QMP
      interfaces: the new interfaces show the EPC section info per NUMA
      node. Libvirt can use the QMP interface to get the detailed host
      SGX EPC capabilities and decide how to allocate host EPC sections
      to the guest.
      
      (qemu) info sgx
       SGX support: enabled
       SGX1 support: enabled
       SGX2 support: enabled
       FLC support: enabled
       NUMA node #0: size=67108864
       NUMA node #1: size=29360128
      
      The QMP interfaces show:
      (QEMU) query-sgx
      {"return": {"sgx": true, "sgx2": true, "sgx1": true, "sections": \
      [{"node": 0, "size": 67108864}, {"node": 1, "size": 29360128}], "flc": true}}
      
      (QEMU) query-sgx-capabilities
      {"return": {"sgx": true, "sgx2": true, "sgx1": true, "sections": \
      [{"node": 0, "size": 17070817280}, {"node": 1, "size": 17079205888}], "flc": true}}
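      The section sizes in these replies are plain byte counts. A trivial
      conversion (hypothetical helper, shown only to make the numbers
      readable) maps them back to the MiB values used on the command line:

```python
def to_mib(size_bytes):
    # query-sgx reports EPC section sizes in bytes
    return size_bytes // (1024 * 1024)

print(to_mib(67108864))  # 64  (NUMA node #0, i.e. size=64M)
print(to_mib(29360128))  # 28  (NUMA node #1, i.e. size=28M)
```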
      
      Signed-off-by: Yang Zhong <yang.zhong@intel.com>
      Message-Id: <20211101162009.62161-4-yang.zhong@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    • numa: Enable numa for SGX EPC sections · 11058123
      Yang Zhong authored
      
      Basic SGX support did not enable NUMA for SGX EPC sections, so all
      EPC sections ended up located in NUMA node 0. This patch enables
      the SGX NUMA function in the guest, so that an EPC section can be
      grouped with RAM into one NUMA node.
      
      The related guest kernel log:
      [    0.009981] ACPI: SRAT: Node 0 PXM 0 [mem 0x180000000-0x183ffffff]
      [    0.009982] ACPI: SRAT: Node 1 PXM 1 [mem 0x184000000-0x185bfffff]
      The SRAT table correctly shows the SGX EPC section memory info in
      the different NUMA nodes.
      
      The SGX EPC NUMA-related command line:
       ......
       -m 4G,maxmem=20G \
       -smp sockets=2,cores=2 \
       -cpu host,+sgx-provisionkey \
       -object memory-backend-ram,size=2G,host-nodes=0,policy=bind,id=node0 \
       -object memory-backend-epc,id=mem0,size=64M,prealloc=on,host-nodes=0,policy=bind \
       -numa node,nodeid=0,cpus=0-1,memdev=node0 \
       -object memory-backend-ram,size=2G,host-nodes=1,policy=bind,id=node1 \
       -object memory-backend-epc,id=mem1,size=28M,prealloc=on,host-nodes=1,policy=bind \
       -numa node,nodeid=1,cpus=2-3,memdev=node1 \
       -M sgx-epc.0.memdev=mem0,sgx-epc.0.node=0,sgx-epc.1.memdev=mem1,sgx-epc.1.node=1 \
       ......
      
      Signed-off-by: Yang Zhong <yang.zhong@intel.com>
      Message-Id: <20211101162009.62161-2-yang.zhong@intel.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  6. Nov 30, 2021
  7. Nov 18, 2021
  8. Nov 10, 2021
  9. Nov 09, 2021
  10. Nov 08, 2021
    • docs: remove non-reference uses of single backticks · 450e0f28
      John Snow authored
      
      The single backtick markup in ReST is the "default role". Currently,
      Sphinx's default role is called "content". Sphinx suggests you can use
      the "Any" role instead to turn any single-backtick enclosed item into a
      cross-reference.
      
      This is useful for things like autodoc for Python docstrings, where it's
      often nicer to reference other types with `foo` instead of the more
      laborious :py:meth:`foo`. It's also useful in multi-domain cases to
      easily reference definitions from other Sphinx domains, such as
      referencing C code definitions from outside of kerneldoc comments.
      
      Before we do that, though, we'll need to convert all existing usages
      of the "content" role to inline verbatim markup, using double
      backticks, wherever the text does not correctly resolve into a
      cross-reference.
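      A minimal before/after illustration of the conversion in ReST (the
      identifier is a made-up example):

```
Before: use `qemu_img` to convert the image.
After:  use ``qemu_img`` to convert the image.
```

      Single backticks are then free to become cross-references under the
      "Any" role, while double backticks stay plain inline literals.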
      
      Signed-off-by: John Snow <jsnow@redhat.com>
      Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
      Reviewed-by: Alexander Bulekov <alxndr@bu.edu>
      Message-Id: <20211004215238.1523082-2-jsnow@redhat.com>
  11. Nov 06, 2021
  12. Nov 02, 2021
  13. Nov 01, 2021
  14. Oct 29, 2021