From: Peter Maydell
Subject: [Qemu-commits] [qemu/qemu] 80cc1a: vmbus: Don't make QOM property registration condit...
Date: Fri, 09 Jul 2021 09:59:05 -0700

  Branch: refs/heads/staging
  Home:   https://github.com/qemu/qemu
  Commit: 80cc1a0dd19cc414ddaa3f1b9b6ef91e3ebc12b2
      
https://github.com/qemu/qemu/commit/80cc1a0dd19cc414ddaa3f1b9b6ef91e3ebc12b2
  Author: Eduardo Habkost <ehabkost@redhat.com>
  Date:   2021-07-06 (Tue, 06 Jul 2021)

  Changed paths:
    M hw/hyperv/vmbus.c

  Log Message:
  -----------
  vmbus: Don't make QOM property registration conditional

Having properties registered conditionally makes QOM type
introspection difficult.  Instead of skipping registration of the
"instanceid" property, always register the property but validate
its value against the instance id required by the class.
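
A minimal sketch of the resulting pattern - register unconditionally,
validate at realize time. The field names and the helper shown here are
assumptions for illustration, not the literal hw/hyperv/vmbus.c code:

    /* Sketch (field names assumed): the "instanceid" property always
     * exists; realize validates it against the class requirement. */
    static void vmbus_dev_validate_instanceid(VMBusDevice *vdev,
                                              Error **errp)
    {
        VMBusDeviceClass *vdc = VMBUS_DEVICE_GET_CLASS(vdev);

        /* If the class mandates a fixed instance id, reject any
         * different user-supplied value. */
        if (!qemu_uuid_is_null(&vdc->instanceid) &&
            memcmp(&vdev->instanceid, &vdc->instanceid,
                   sizeof(QemuUUID)) != 0) {
            error_setg(errp, "instanceid is fixed for this device class");
        }
    }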

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Acked-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Message-Id: <20201009200701.1830060-1-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: cdcf766d0b0364165ba9e5ceacfdf37c8b1fe4ae
      
https://github.com/qemu/qemu/commit/cdcf766d0b0364165ba9e5ceacfdf37c8b1fe4ae
  Author: Igor Mammedov <imammedo@redhat.com>
  Date:   2021-07-06 (Tue, 06 Jul 2021)

  Changed paths:
    M docs/system/deprecated.rst
    M util/mmap-alloc.c

  Log Message:
  -----------
  Deprecate pmem=on with non-DAX capable backend file

It is not safe to pretend that an emulated NVDIMM supports
persistence while the backend actually failed to enable it and fell
back to a non-persistent mapping. Instead of falling back, QEMU
should be stricter and error out with a clear message that this is
not supported. So if the user asks for persistence (pmem=on), they
should store the backing file on NVDIMM.
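
As a usage sketch, a setup that keeps pmem=on valid places the backing
file on a DAX-capable mount (paths and sizes here are illustrative):

    # hypothetical example: backing file on a DAX filesystem (/mnt/pmem0)
    qemu-system-x86_64 ... -machine pc,nvdimm=on -m 4G,slots=2,maxmem=8G \
      -object memory-backend-file,id=mem1,share=on,pmem=on,mem-path=/mnt/pmem0/nvdimm.img,size=4G \
      -device nvdimm,id=nvdimm1,memdev=mem1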

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210111203332.740815-1-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 8947d7fc4e77d36fae44411b1b63c513863f89a7
      
https://github.com/qemu/qemu/commit/8947d7fc4e77d36fae44411b1b63c513863f89a7
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M include/exec/memory.h
    M softmmu/memory.c

  Log Message:
  -----------
  memory: Introduce RamDiscardManager for RAM memory regions

We have some special RAM memory regions (managed by virtio-mem), whereby
the guest agreed to only use selected memory ranges. "Unused" parts are
discarded so they won't consume memory - logically unplugging these memory
ranges. Before the VM is allowed to use such logically unplugged memory
again, coordination with the hypervisor is required.

This results in "sparse" mmaps/RAMBlocks/memory regions, whereby only
coordinated parts are valid to be used/accessed by the VM.

In most cases, we don't care about that - e.g., in KVM, we simply have a
single KVM memory slot. However, in the case of vfio, registering the
whole region with the kernel results in all pages getting pinned, and
therefore unexpectedly high memory consumption - discarding of RAM in
that context is broken.

Let's introduce a way to coordinate discarding/populating memory within a
RAM memory region with such special consumers of RAM memory regions: they
can register as listeners and get updates on memory getting discarded and
populated. Using this machinery, vfio will be able to map only the
currently populated parts, resulting in discarded parts not getting pinned
and not consuming memory.

A RamDiscardManager has to be set for a memory region before it is getting
mapped, and cannot change while the memory region is mapped.

Note: At some point, we might want to let RAMBlock users (esp. vfio used
for nvme://) consume this interface as well. We'll need RAMBlock notifier
calls when a RAMBlock is getting mapped/unmapped (via the corresponding
memory region), so we can properly register a listener there as well.
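
A hedged sketch of how such a consumer would hook in, using the listener
API this patch introduces (callback bodies are placeholders):

    /* Sketch of a consumer registering for discard/populate updates. */
    static int my_notify_populate(RamDiscardListener *rdl,
                                  MemoryRegionSection *section)
    {
        /* e.g., map the now-populated range for DMA; returning
         * non-zero fails the population */
        return 0;
    }

    static void my_notify_discard(RamDiscardListener *rdl,
                                  MemoryRegionSection *section)
    {
        /* e.g., unmap the discarded range */
    }

    static RamDiscardListener rdl;

    void my_register(MemoryRegionSection *section)
    {
        RamDiscardManager *rdm =
            memory_region_get_ram_discard_manager(section->mr);

        if (rdm) {
            ram_discard_listener_init(&rdl, my_notify_populate,
                                      my_notify_discard, false);
            ram_discard_manager_register_listener(rdm, &rdl, section);
        }
    }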

Reviewed-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-2-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 228438384e64407949671e0b8b07258afb206ac2
      
https://github.com/qemu/qemu/commit/228438384e64407949671e0b8b07258afb206ac2
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M include/exec/memory.h
    M softmmu/memory.c

  Log Message:
  -----------
  memory: Helpers to copy/free a MemoryRegionSection

In case one wants to create a permanent copy of a MemoryRegionSection,
one needs access to flatview_ref()/flatview_unref(). Instead of exposing
these, let's just add helpers to copy/free a MemoryRegionSection and
properly adjust references.
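
Usage is then a simple pair (helper names as added by this commit; the
surrounding context is illustrative):

    /* Keep a section alive beyond the current flatview without
     * exposing flatview_ref()/flatview_unref(). */
    MemoryRegionSection *copy = memory_region_section_new_copy(section);

    /* ... the copy holds its own references and stays valid ... */

    memory_region_section_free_copy(copy);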

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-3-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 7a9d5d0282c7f64b3e728c99edbacf9806fdba2c
      
https://github.com/qemu/qemu/commit/7a9d5d0282c7f64b3e728c99edbacf9806fdba2c
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M hw/virtio/virtio-mem.c

  Log Message:
  -----------
  virtio-mem: Factor out traversing unplugged ranges

Let's factor out the core logic; no need to replicate it.

Reviewed-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-4-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 3aca6380fdf566e518640de0d90ea6fa74b41825
      
https://github.com/qemu/qemu/commit/3aca6380fdf566e518640de0d90ea6fa74b41825
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M hw/virtio/virtio-mem.c

  Log Message:
  -----------
  virtio-mem: Don't report errors when ram_block_discard_range() fails

Any errors are unexpected and ram_block_discard_range() already properly
prints errors. Let's stop manually reporting errors.

Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-5-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 2044969f0b27fa67f2b69bc710eaef45998cb6fb
      
https://github.com/qemu/qemu/commit/2044969f0b27fa67f2b69bc710eaef45998cb6fb
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M hw/virtio/virtio-mem.c
    M include/hw/virtio/virtio-mem.h

  Log Message:
  -----------
  virtio-mem: Implement RamDiscardManager interface

Let's properly notify when (un)plugging blocks, after discarding memory
and before allowing the guest to consume memory. Handle errors from
notifiers gracefully (e.g., no remaining VFIO mappings) when plugging,
rolling back the change and telling the guest that the VM is busy.

One special case to take care of is replaying all notifications after
restoring the vmstate. The device starts out with all memory discarded,
so after loading the vmstate, we have to notify about all plugged
blocks.
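
A sketch of the plug path with rollback on listener failure (the helper
name and field layout are assumptions, not the exact code):

    /* Hypothetical shape of the plug request handling. */
    if (virtio_mem_notify_plug(vmem, offset, size)) {
        /* A listener (e.g., vfio) failed, say because no DMA mappings
         * remain: discard the range again and tell the guest to retry. */
        ram_block_discard_range(vmem->memdev->mr.ram_block, offset, size);
        return VIRTIO_MEM_RESP_BUSY;
    }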

Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-6-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 5e3b981c330c58c4e97ab85e40c3bd2ee54b2fa7
      
https://github.com/qemu/qemu/commit/5e3b981c330c58c4e97ab85e40c3bd2ee54b2fa7
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M hw/vfio/common.c
    M include/hw/vfio/vfio-common.h

  Log Message:
  -----------
  vfio: Support for RamDiscardManager in the !vIOMMU case

Implement support for RamDiscardManager, to prepare for virtio-mem
support. Instead of mapping the whole memory section, we only map
"populated" parts and update the mapping when notified about
discarding/population of memory via the RamDiscardListener. Similarly, when
syncing the dirty bitmaps, sync only the actually mapped (populated) parts
by replaying via the notifier.

Using virtio-mem with vfio is still blocked via
ram_block_discard_disable()/ram_block_discard_require() after this patch.
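
The populate side might look roughly like this (a sketch with an assumed
wrapper struct; the discard callback would call vfio_dma_unmap()
symmetrically):

    /* Hypothetical wrapper tying a listener to its container; the real
     * struct carries more state. */
    typedef struct VFIORamDiscardListener {
        VFIOContainer *container;
        RamDiscardListener listener;
    } VFIORamDiscardListener;

    static int vfio_ram_discard_notify_populate(RamDiscardListener *rdl,
                                                MemoryRegionSection *section)
    {
        VFIORamDiscardListener *vrdl =
            container_of(rdl, VFIORamDiscardListener, listener);
        hwaddr iova = section->offset_within_address_space;
        uint64_t size = int128_get64(section->size);
        void *vaddr = memory_region_get_ram_ptr(section->mr) +
                      section->offset_within_region;

        /* create the DMA mapping for the newly populated range only */
        return vfio_dma_map(vrdl->container, iova, size, vaddr,
                            section->readonly);
    }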

Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-7-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 3eed155caf0a9a6db1e140c01bd8f0300ac475ce
      
https://github.com/qemu/qemu/commit/3eed155caf0a9a6db1e140c01bd8f0300ac475ce
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M hw/vfio/common.c
    M include/hw/vfio/vfio-common.h

  Log Message:
  -----------
  vfio: Query and store the maximum number of possible DMA mappings

Let's query the maximum number of possible DMA mappings by querying the
available mappings when creating the container (before any mappings are
created). We'll use this information soon to perform some sanity checks
and warn the user.
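
A sketch of the query, built on the dma_avail capability of
VFIO_IOMMU_GET_INFO (the cap-lookup helper is a stand-in; the kernel-side
structures are from linux/vfio.h, kernel >= 5.10):

    /* Sketch: fetch the mapping budget when creating the container. */
    struct vfio_iommu_type1_info_dma_avail *cap;
    struct vfio_info_cap_header *hdr;

    container->dma_max_mappings = 0;    /* 0 == unknown/unlimited */
    hdr = find_info_cap(info, VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL); /* stand-in */
    if (hdr) {
        cap = (struct vfio_iommu_type1_info_dma_avail *)hdr;
        container->dma_max_mappings = cap->avail;
    }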

Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-8-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: a74317f636eb3352210fff5c58896ddc1e5aabdf
      
https://github.com/qemu/qemu/commit/a74317f636eb3352210fff5c58896ddc1e5aabdf
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M hw/vfio/common.c

  Log Message:
  -----------
  vfio: Sanity check maximum number of DMA mappings with RamDiscardManager

Although RamDiscardManager can handle running into the maximum number of
DMA mappings by propagating errors when creating a DMA mapping, we want
to sanity check and warn the user early that there is a theoretical setup
issue and that virtio-mem might not be able to provide as much memory
towards a VM as desired.

As suggested by Alex, let's use the number of KVM memory slots to guess
how many other mappings we might see over time.
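
Roughly the check this describes (a sketch with assumed names; the real
heuristic may differ):

    /* Sketch: warn early if a RamDiscardManager-managed section could
     * exhaust the DMA mapping budget, guessing that each KVM memory
     * slot may also consume a mapping over time. */
    static void vfio_check_dma_mapping_limit(VFIOContainer *container,
                                             MemoryRegionSection *section,
                                             uint64_t granularity)
    {
        uint64_t max_sections = int128_get64(section->size) / granularity;

        if (container->dma_max_mappings &&
            max_sections + kvm_get_max_memslots() >
                container->dma_max_mappings) {
            warn_report("virtio-mem/vfio: up to %" PRIu64 " mappings plus"
                        " %u memslots may exceed the limit of %u",
                        max_sections, kvm_get_max_memslots(),
                        container->dma_max_mappings);
        }
    }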

Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-9-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 0fd7616e0f1171b8149bb71f59e23ab048a8df83
      
https://github.com/qemu/qemu/commit/0fd7616e0f1171b8149bb71f59e23ab048a8df83
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M hw/vfio/common.c
    M hw/virtio/virtio-mem.c
    M include/migration/vmstate.h

  Log Message:
  -----------
  vfio: Support for RamDiscardManager in the vIOMMU case

vIOMMU support already works with RamDiscardManager as long as guests
only map populated memory. Both populated and discarded memory are mapped
into &address_space_memory, where vfio_get_xlat_addr() will find that
memory, to create the vfio mapping.

Sane guests will never map discarded memory (e.g., unplugged memory
blocks in virtio-mem) into an IOMMU - or keep it mapped into an IOMMU while
memory is getting discarded. However, there are two cases where a malicious
guest could trigger pinning of more memory than intended.

One case is easy to handle: the guest trying to map discarded memory
into an IOMMU.

The other case is harder to handle: the guest keeping memory mapped in
the IOMMU while it is getting discarded. We would have to walk over all
mappings when discarding memory and identify if any mapping would be a
violation. Let's keep it simple for now and print a warning, indicating
that setting RLIMIT_MEMLOCK can mitigate such attacks.

We have to take care of incoming migration: at the point the
IOMMUs get restored and start creating mappings in vfio, RamDiscardManager
implementations might not be back up and running yet. Let's add runstate
priorities to enforce the order when restoring.
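
In vmstate terms this might look like the following (the priority enum
value is an assumption layered on the MigrationPriority mechanism in
include/migration/vmstate.h):

    /* Sketch: restore the device (and thus its RamDiscardManager
     * state) with higher priority, before vIOMMUs recreate mappings. */
    static const VMStateDescription vmstate_virtio_mem_device = {
        .name = "virtio-mem-device",
        .priority = MIG_PRI_VIRTIO_MEM,    /* assumed enum value */
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            /* ... device fields ... */
            VMSTATE_END_OF_LIST()
        },
    };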

Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-10-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 98da491dff558df95768c5f81243fc49c6360a91
      
https://github.com/qemu/qemu/commit/98da491dff558df95768c5f81243fc49c6360a91
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M softmmu/physmem.c

  Log Message:
  -----------
  softmmu/physmem: Don't use atomic operations in 
ram_block_discard_(disable|require)

We have users in migration context that don't hold the BQL (when
finishing migration). To prepare for further changes, use a dedicated mutex
instead of atomic operations. Keep using qatomic_read ("READ_ONCE") for the
functions that only extract the current state (e.g., used by
virtio-balloon), where locking isn't necessary.

While at it, split up the counter into two variables to make it easier
to understand.
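
A sketch of the resulting shape (variable names assumed; mutex
initialization elided):

    /* Two unsigned counters guarded by a mutex replace the single
     * atomic counter. */
    static QemuMutex ram_block_discard_disable_mutex;
    static unsigned int ram_block_discard_disabled_cnt;
    static unsigned int ram_block_discard_required_cnt;

    int ram_block_discard_disable(bool state)
    {
        int ret = 0;

        qemu_mutex_lock(&ram_block_discard_disable_mutex);
        if (!state) {
            ram_block_discard_disabled_cnt--;
        } else if (ram_block_discard_required_cnt) {
            ret = -EBUSY;    /* someone relies on discards working */
        } else {
            ram_block_discard_disabled_cnt++;
        }
        qemu_mutex_unlock(&ram_block_discard_disable_mutex);
        return ret;
    }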

Suggested-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-11-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 7e6d32ebf79079a88e24da3359e2427ebed5f1be
      
https://github.com/qemu/qemu/commit/7e6d32ebf79079a88e24da3359e2427ebed5f1be
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M include/exec/memory.h
    M softmmu/physmem.c

  Log Message:
  -----------
  softmmu/physmem: Extend ram_block_discard_(require|disable) by two discard 
types

We want to separate the two cases whereby we discard RAM (see the sketch
below):
- uncoordinated: e.g., virtio-balloon
- coordinated: e.g., virtio-mem, coordinated via the RamDiscardManager
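
The split shows up as type-specific entry points layered on the same
counters (function names as introduced by this patch; call sites are
illustrative):

    int ret;

    /* e.g., vfio blocks only what it cannot handle: */
    ret = ram_block_uncoordinated_discard_disable(true);

    /* e.g., virtio-mem declares that its discards are coordinated: */
    ret = ram_block_coordinated_discard_require(true);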

Reviewed-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-12-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: bc072ed403e6f08c1911db4687511adcb3ecf587
      
https://github.com/qemu/qemu/commit/bc072ed403e6f08c1911db4687511adcb3ecf587
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M hw/virtio/virtio-mem.c

  Log Message:
  -----------
  virtio-mem: Require only coordinated discards

We implement the RamDiscardManager interface and only require coordinated
discarding of RAM to work.

Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-13-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: 53d1b5fcfb40c47da4c060dc913df0e9f62894bd
      
https://github.com/qemu/qemu/commit/53d1b5fcfb40c47da4c060dc913df0e9f62894bd
  Author: David Hildenbrand <david@redhat.com>
  Date:   2021-07-08 (Thu, 08 Jul 2021)

  Changed paths:
    M hw/vfio/common.c

  Log Message:
  -----------
  vfio: Disable only uncoordinated discards for VFIO_TYPE1 iommus

We support coordinated discarding of RAM using the RamDiscardManager for
the VFIO_TYPE1 iommus. Let's unlock support for coordinated discards,
keeping uncoordinated discards (e.g., via virtio-balloon) disabled if
possible.

This unlocks virtio-mem + vfio on x86-64. Note that vfio used via "nvme://"
by the block layer has to be implemented/unlocked separately. For now,
virtio-mem only supports x86-64; we don't restrict RamDiscardManager to
x86-64, though: arm64 and s390x are supposed to work as well, and we'll
test once unlocking virtio-mem support. The spapr IOMMUs will need special
care, to be tackled later, e.g., once they support virtio-mem.

Note: The block size of a virtio-mem device has to be set to a sane size,
depending on the maximum hotplug size - to not run out of vfio mappings.
The default virtio-mem block size is usually in the range of a couple of
MBs. The maximum number of mappings is 64k, shared with other users.
Assume you want to hotplug 256 GiB using virtio-mem - the block size would
have to be set to at least 8 MiB (resulting in 32768 separate mappings).
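
For example, a 256 GiB hotplug range with an explicit block size might be
configured like this (property names per virtio-mem-pci; values
illustrative):

    qemu-system-x86_64 ... -m 4G,maxmem=260G \
      -object memory-backend-ram,id=vmem0,size=256G \
      -device virtio-mem-pci,id=vm0,memdev=vmem0,requested-size=0,block-size=8M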

Acked-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Auger Eric <eric.auger@redhat.com>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: teawater <teawaterz@linux.alibaba.com>
Cc: Marek Kedzierski <mkedzier@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210413095531.25603-14-david@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>


  Commit: ebd1f710029e9a5746541d80508d8ea9956b81fc
      
https://github.com/qemu/qemu/commit/ebd1f710029e9a5746541d80508d8ea9956b81fc
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   2021-07-09 (Fri, 09 Jul 2021)

  Changed paths:
    M docs/system/deprecated.rst
    M hw/hyperv/vmbus.c
    M hw/vfio/common.c
    M hw/virtio/virtio-mem.c
    M include/exec/memory.h
    M include/hw/vfio/vfio-common.h
    M include/hw/virtio/virtio-mem.h
    M include/migration/vmstate.h
    M softmmu/memory.c
    M softmmu/physmem.c
    M util/mmap-alloc.c

  Log Message:
  -----------
  Merge remote-tracking branch 
'remotes/ehabkost-gl/tags/machine-next-pull-request' into staging

Machine queue, 2021-07-07

Deprecation:
* Deprecate pmem=on with non-DAX capable backend file
  (Igor Mammedov)

Feature:
* virtio-mem: vfio support (David Hildenbrand)

Cleanup:
* vmbus: Don't make QOM property registration conditional
  (Eduardo Habkost)

# gpg: Signature made Thu 08 Jul 2021 20:55:04 BST
# gpg:                using RSA key 5A322FD5ABC4D3DBACCFD1AA2807936F984DC5A6
# gpg:                issuer "ehabkost@redhat.com"
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>" [full]
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF  D1AA 2807 936F 984D C5A6

* remotes/ehabkost-gl/tags/machine-next-pull-request:
  vfio: Disable only uncoordinated discards for VFIO_TYPE1 iommus
  virtio-mem: Require only coordinated discards
  softmmu/physmem: Extend ram_block_discard_(require|disable) by two discard 
types
  softmmu/physmem: Don't use atomic operations in 
ram_block_discard_(disable|require)
  vfio: Support for RamDiscardManager in the vIOMMU case
  vfio: Sanity check maximum number of DMA mappings with RamDiscardManager
  vfio: Query and store the maximum number of possible DMA mappings
  vfio: Support for RamDiscardManager in the !vIOMMU case
  virtio-mem: Implement RamDiscardManager interface
  virtio-mem: Don't report errors when ram_block_discard_range() fails
  virtio-mem: Factor out traversing unplugged ranges
  memory: Helpers to copy/free a MemoryRegionSection
  memory: Introduce RamDiscardManager for RAM memory regions
  Deprecate pmem=on with non-DAX capable backend file
  vmbus: Don't make QOM property registration conditional

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>


Compare: https://github.com/qemu/qemu/compare/61b6b4382d56...ebd1f710029e


