Virt Storage/Block meeting minutes - 01-Jul-2020


From: John Ferlan
Date: Wed, 22 Jul 2020 08:49:31 -0400

<forgot to send these after the last meeting>

Meeting date/time: Jul 01, 13:00 UTC

Previous meetings: http://etherpad.corp.redhat.com/PlatformVirtStorageMeetings
Prior meeting: 
http://etherpad.corp.redhat.com/PlatformVirtStorageMeetings-2020-Jun-10

Join the meeting:
Bluejeans: https://redhat.bluejeans.com/1471327918/
(phone numbers are available in the website above)
*4 to mute/unmute
IRC: #block-call

Attendees: jferlan, philmd, pkrempa, mreitz, stefanha, eblake

Updates and announcements:
 * FYI - QEMU 5.1 Upstream
   * soft freeze 2020-07-07
   * hard freeze 2020-07-14
   * release 2020-08-11 or 2020-08-18 
 * FYI - RHEL schedules:
   * 8.x-AV - https://pp.engineering.redhat.com/pp/product/rhel_av/schedule
     * 
http://batcave.lab.eng.brq.redhat.com/reports/full-rhel8/short_report.html
     * 8.2.1 - Changes require exception+/blocker+
       * GA: Tue 2020-07-28
     * 8.3.0 - Will get rebase of qemu-5.1 and libvirt-6.7
       * Feature Freeze: Tue 2020-09-01
       * Bug Freeze: Mon 2020-09-14
       * Final Freeze: Tue 2020-10-06 
   * 8.3.0
     * Backport from RHEL AV 8.2.1:
       * https://bugzilla.redhat.com/show_bug.cgi?id=1844296
     * Unless High/Urgent, avoid cloning from RHEL AV 8.3.0+
   * 7.9 - changes require blocker+
     * GA: Tue 2020-08-11
     * All future changes occur via zstream processing

 * FYI - Layered Products
   * RHV 4.4 will be based on RHEL-AV-8.2.x.
     * GA Planned: 2020-08-04
   * OSP
     * OSP 15 and 16.0 are using RHEL-AV-8.1.1
     * OSP 16.1 compose (as of 11-Mar-2020) is using RHEL-AV-8.2
       * GA Planned: June 2020
   * CNV will use RHEL-AV-8.2.x and stay up to date with RHEL-AV in future 
versions
     * 2.3 GA: 2020-05-04
     * 2.4 GA Planned: 2020-06-24

 * FYI - General QEMU/libvirt Release Rebase schedule: 
https://docs.google.com/spreadsheets/d/1izPBg1YSeEHDcuy8gtfcbVgHdx_xkRYubHnbe6BkV8c

----------------------------------------------------------------------------------------------------------------------------

Projects:

 * Incremental Backup
   * 8.2.1: https://projects.engineering.redhat.com/browse/RHELPLAN-39267
     * Completed:
       * bz1779904 - RFE: ability to estimate bitmap space utilization for qcow2
       * bz1779893 - RFE: Copy bitmaps with qemu-img convert
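          * (a usage sketch for this and bz1779904 follows after this list)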
       * bz1778593 - QEMU coredump when backing up to an existing small image
       * bz1804593 - RFE: Allow incremental backups with externally-created 
snapshots
         * Peter generated kbase/docs on usage
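          * A rough usage sketch for the two qemu-img RFEs above (bz1779904, 
            bz1779893); the --bitmaps flags are an assumption inferred from 
            the RFE titles, so check qemu-img --help on the actual build:

              import subprocess

              # Estimate how much space a qcow2 copy of base.qcow2, including
              # its persistent bitmaps, would need (bz1779904); file names
              # here are hypothetical.
              measure = subprocess.run(
                  ["qemu-img", "measure", "--bitmaps", "-O", "qcow2",
                   "base.qcow2"],
                  capture_output=True, text=True, check=True)
              print(measure.stdout)

              # Copy the image along with its persistent dirty bitmaps
              # (bz1779893).
              subprocess.run(
                  ["qemu-img", "convert", "--bitmaps", "-O", "qcow2",
                   "base.qcow2", "copy.qcow2"],
                  check=True)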

     * Moved to 8.3.0
       * bz1780409 - RFE: Add block-dirty-bitmap-generate block job to 
facilitate bitmap chain recovery
         * Hope to get this completed before qemu-5.1 soft freeze
          * The qemu plan is to get x-block-dirty-bitmap-populate into 5.1 
            with the x- prefix, which lets us change the interface as we 
            experiment with it, since libvirt does not depend on it yet (a 
            QMP sketch follows below)
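          * A minimal QMP sketch for the experimental command, assuming a 
            plain QMP unix socket; the argument names are hypothetical since 
            the interface is still being designed (hence the x- prefix), and 
            asynchronous events are ignored for brevity:

              import json, socket

              sock = socket.socket(socket.AF_UNIX)
              # assumes QEMU started with -qmp unix:/tmp/qmp.sock,server,nowait
              sock.connect("/tmp/qmp.sock")
              chan = sock.makefile("rw")

              def qmp(cmd, **args):
                  chan.write(json.dumps({"execute": cmd, "arguments": args}) + "\n")
                  chan.flush()
                  return json.loads(chan.readline())

              chan.readline()             # consume the QMP greeting
              qmp("qmp_capabilities")     # negotiate capabilities
              # hypothetical arguments for the experimental block job:
              print(qmp("x-block-dirty-bitmap-populate",
                        **{"job-id": "populate0", "node": "drive0",
                           "name": "bitmap0"}))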

     * Questions/Discussion over current implementation

     * RHV progress [nsoffer, eshenitz]
       * not present


   * 8.3.0: https://projects.engineering.redhat.com/browse/RHELPLAN-38611
     * libvirt: [pkrempa]
        * Need a bz for the libvirt work to utilize qemu 
          block-dirty-bitmap-generate (bz1780409)

       * bz1829829 - backups after disk hotplug
         * refactored some internals which will make fixing this simpler
         * Patches are done but need to be tested prior to sending

       * bz1799011 - bitmaps + live migration

       * bz1799010 - bitmaps + virDomainBlockPull
          * [jferlan] Previous meeting minutes indicate this is not needed 
            by RHV; should we close the bz and remove it from the Epic?
          * Patches are done but need to be tested prior to sending

       * bz1799015 - Enable Incremental Backup by default
          * What does this update depend on?
          * All of the above :)
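
        * For context, a rough sketch of the incremental backup API that 
          the bzs above feed into, via the libvirt Python bindings; the 
          domain/checkpoint names and paths are hypothetical and the XML 
          follows the libvirt domainbackup docs as best recalled, so 
          double-check against them:

            import libvirt

            conn = libvirt.open("qemu:///system")
            dom = conn.lookupByName("vm1")   # hypothetical domain

            # Push-mode backup of everything changed since checkpoint
            # "chk1", creating checkpoint "chk2" for the next run.
            backup_xml = """
            <domainbackup mode='push'>
              <incremental>chk1</incremental>
              <disks>
                <disk name='vda' backup='yes' type='file'>
                  <target file='/backup/vda.chk1-chk2.qcow2'/>
                  <driver type='qcow2'/>
                </disk>
              </disks>
            </domainbackup>
            """
            checkpoint_xml = """
            <domaincheckpoint>
              <name>chk2</name>
              <disks><disk name='vda' checkpoint='bitmap'/></disks>
            </domaincheckpoint>
            """
            dom.backupBegin(backup_xml, checkpoint_xml, 0)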

     * qemu [eblake, kwolf, mreitz]
       * bz1780409 - RFE: Add block-dirty-bitmap-generate block job to 
facilitate bitmap chain recovery (eblake)
         * Hope to get this completed before qemu-5.1 soft freeze
          * v3 sent upstream; review raised some suggestions about the 
            command interface that need more investigation
            * Considering using the x- prefix before soft freeze to allow 
              for changes

       * bz1790492 - bitmaps + live migration (mreitz)
         * First non-RFC posted: 
https://lists.nongnu.org/archive/html/qemu-block/2020-06/msg01669.html
          * Goal is to get this in before the 5.1 soft freeze

       * bz1801625 - block-stream job without dirtying bitmaps (kwolf)
          * [jferlan] Previous meeting minutes indicate this may not be 
            necessary; should we close the bz and remove it from the Epic?
         * Will require mreitz series "Deal with filters" to expose the corner 
case

       * bz1780416 - RFE: qemu-img check --repair should optionally remove any 
corrupted bitmaps (virt-maint)
         * [Need owner]

   * QEMU Backlog - Need to determine need, owner, and possible release
     * bz1814664 - backup in push mode will actually write ZEROes to target 
image [eblake]

      * bz1816692 - when the scratch file has no space during a pull-mode 
        backup, the backup job will still run, producing an inconsistent 
        backup file [virt-maint]

      * bz1802401 - The actual size of the backup image is bigger than the 
        base image after dd'ing a data file in the guest [virt-maint]

      * bz1784266 - Error message when doing an incremental backup with an 
        invalid "bitmap-mode" [virt-maint]

----------------------------------------------------------------------------------------------------------------------------

 * Project Cheetah: [stefanha]
   * Cheetah - performance improvements for fast storage in KVM: 
https://docs.engineering.redhat.com/display/RHELPLAN/Cheetah+-+performance+improvements+for+fast+storage+in+KVM

     * bz1827722 - QEMU multi-queue virtio-blk/scsi by default and associated 
scalability optimizations that eliminate interprocessor interrupts.
       * Benchmarked guest driver NUMA-awareness but there was no measurable 
improvement, leaving it for now
       * Found that automatic irq affinity of virtqueue interrupts does not 
honor NUMA, so completion interrupts may be handled on a CPU in another NUMA 
node. This is a problem for devices with few interrupts. Not a problem for true 
multiqueue devices because we want an irq on each CPU anyway.
       * Userspace cannot override irq affinity when the guest driver uses 
automatic irq affinity, e.g. virtio_blk.ko
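        * A small guest-side sketch for checking the above: print each 
          virtio interrupt and the CPUs it may be delivered to, to compare 
          against the device's NUMA node cpulist 
          (/sys/devices/system/node/node<N>/cpulist); Linux-only, and the 
          interrupt naming is illustrative:

            # Print the CPU affinity of every virtio interrupt.
            with open("/proc/interrupts") as irqs:
                for line in irqs:
                    irq = line.split(":", 1)[0].strip()
                    if "virtio" not in line or not irq.isdigit():
                        continue
                    name = line.split()[-1]   # e.g. virtio0-req.0
                    with open(f"/proc/irq/{irq}/smp_affinity_list") as aff:
                        print(f"IRQ {irq} ({name}): CPUs {aff.read().strip()}")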

     * bz1827756 - virtio-blk io_uring passthrough to provide guests direct 
access to host storage APIs [sgarzare]
       * Stefano on PTO

      * QEMU nvme [philmd]
        * series posted: "cleanups required to use multiple queues"
          * https://lists.gnu.org/archive/html/qemu-devel/2020-06/msg10136.html
          * Preliminary step required for BZ#1827750
          * Adds support for sharing PCI devices between nvme:// block 
            driver instances

     * qemu-storage-daemon [kwolf]
       * Updates on Coiby Xu's vhost-user-blk server patches
         * Review is moving along, getting closer to merge

       * Upstream: [PATCH v6 00/12] monitor: Optionally run handlers in 
coroutines
         * 
https://lists.nongnu.org/archive/html/qemu-devel/2020-05/msg08018.html

       * bz1827724 - qemu-storage-daemon SPDK-like polling mode I/O [stefanha]
         * No change/update

     * virtio_blk.ko completion polling [stefanha]
        * Update on extending cpuidle-haltpoll to peek at virtqueues w/ mtosatti
         * No change/update

     * bz1519010 - QEMU multi-queue block layer support [bonzini]
       * Eliminate CPU bottlenecks by spreading I/O across CPUs on the host 
       * No change/update

     * bz1806887 - High IOPS storage performance optimizations [stefanha]
       * Generic BZ tracking upstream changes targeting RHEL-AV-8.3.0 due to a 
rebase
       * No change/update

----------------------------------------------------------------------------------------------------------------------------

 * virtio-fs [mreitz, slp]
   * Rust virtiofsd implementation - packaging progress/concerns
     * Any update on the packaging conundrum?
       * Upstream cloud-hypervisor community is moving virtio-fs daemon to 
separate git repo \o/
       * Contact w/ Don Bayly (Intel Partner Manager)

   * Upstream code to enable on s390x (Marc Hartmayer <mhartmay@linux.ibm.com>)

   * kubevirt/CNV
     * Initial PR from Vladik: https://github.com/kubevirt/kubevirt/pull/3493

   * kvm.ko async pf for virtio-fs DAX [future work]
     * Making progress but Vivek is concerned that there is no mechanism yet 
that is suitable for raising faults to the guest

----------------------------------------------------------------------------------------------------------------------------

 * LUKS encryption slot management [mlevitsk, mreitz]
   * Issues with build/test on the PULL request; Max has reposted upstream 
     with iotest adjustments
   * Going to try to make soft freeze

----------------------------------------------------------------------------------------------------------------------------

 * Project Capybara [areis, jsnow, stefanha]
   * Capybara weekly meeting details: 
https://docs.google.com/document/d/12N_Ml0_ZFEKgS609sDu7EaaU9ePYBM1QXV52sXVQy8o
   * OpenShift Virtualization Storage presentation (pre-meeting):  
https://docs.google.com/presentation/d/1iRBx15NMtWqeLE5QTQAY2W77GDBOnd4jRwDXyzif-hs/edit
   * Various discussion with extended details on capybara-list
     * See Kubernetes CSI (https://github.com/kubernetes-csi) 
qemu-storage-daemon plugin
       * If qemu-storage-daemon is a storage provider, can we provide backups, 
NVMe userspace driver, storage migration, I/O throttling, etc? The answer to 
many of these is "yes" although we need to fit it into the CSI API.

----------------------------------------------------------------------------------------------------------------------------

 * virtio-blk vDPA [stefanha]
   * Mellanox looking at VFIO, might not use vDPA

----------------------------------------------------------------------------------------------------------------------------

 * virtio-blk 2.0 [stefanha] - from <virt-devel>
   * Please see discussion on mailing list if you are interested in details

----------------------------------------------------------------------------------------------------------------------------

 * NVMe emulation device maintenance problem
   * Only "include/block/nvme.h" is important for Red Hat
   * Keith - current maintainer - is not very active
   * Recent high activity from Samsung and WDC
   * Kevin suggests Klaus from Samsung step in to help

-----------------------------------------------------------------------------------------------------------------------

Next meeting will be 22-Jul. The 08-Jul meeting was canceled.




