Re: [PATCH] block/nvme: Fix VFIO_MAP_DMA failed: No space left on device
From: Philippe Mathieu-Daudé
Subject: Re: [PATCH] block/nvme: Fix VFIO_MAP_DMA failed: No space left on device
Date: Mon, 14 Jun 2021 18:03:22 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.10.1
On 6/11/21 1:46 PM, Philippe Mathieu-Daudé wrote:
> When the NVMe block driver was introduced (see commit bdd6a90a9e5,
> January 2018), Linux VFIO_IOMMU_MAP_DMA ioctl was only returning
> -ENOMEM in case of error. The driver was correctly handling the
> error path to recycle its volatile IOVA mappings.
>
> To fix CVE-2019-3882, Linux commit 492855939bdb ("vfio/type1: Limit
> DMA mappings per container", April 2019) added the -ENOSPC error to
> signal the user exhausted the DMA mappings available for a container.
Hmm, this kernel commit landed before v5.1-rc4. So while this patch
fixes the behavior on v5.1-rc4+ kernels, older kernels (which still
return -ENOMEM for this case) used with this fix will keep the same
problem...
Should I check uname(2)'s utsname.release[]? Is it reliable?
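For concreteness, a minimal sketch of what such a check could look like
(the helper name is hypothetical; whether release[] is trustworthy here,
e.g. with distro kernels that backport fixes without bumping the version,
is exactly the open question):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/utsname.h>

/* Hypothetical helper: return true if the running kernel reports a
 * version of at least want_major.want_minor, based on utsname.release[].
 * Caveat: release[] looks like "5.1.0-rc4" or "4.19.0-16-amd64", and a
 * vendor kernel may carry the 492855939bdb backport while reporting an
 * older version, so this is at best a heuristic. */
static bool kernel_at_least(int want_major, int want_minor)
{
    struct utsname u;
    int major, minor;

    if (uname(&u) < 0) {
        return false;
    }
    if (sscanf(u.release, "%d.%d", &major, &minor) != 2) {
        return false;
    }
    return major > want_major ||
           (major == want_major && minor >= want_minor);
}
```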
> The block driver then started to misbehave:
>
> qemu-system-x86_64: VFIO_MAP_DMA failed: No space left on device
> (qemu)
> (qemu) info status
> VM status: paused (io-error)
> (qemu) c
> VFIO_MAP_DMA failed: No space left on device
> qemu-system-x86_64: block/block-backend.c:1968: blk_get_aio_context:
> Assertion `ctx == blk->ctx' failed.
>
> Fix by handling the -ENOSPC error when DMA mappings are exhausted;
> other errors (such as -ENOMEM) are still handled later in the same
> function.
>
> An easy way to reproduce this bug is to restrict the DMA mapping
> limit (65535 by default) when loading the VFIO IOMMU module:
>
> # modprobe vfio_iommu_type1 dma_entry_limit=666
>
> Cc: qemu-stable@nongnu.org
> Reported-by: Michal Prívozník <mprivozn@redhat.com>
> Fixes: bdd6a90a9e5 ("block: Add VFIO based NVMe driver")
> Buglink: https://bugs.launchpad.net/qemu/+bug/1863333
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/65
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> Michal, is it still possible for you to test this (old bug)?
>
> A functional test using viommu & nested VM is planned (suggested by
> Stefan and Maxim).
> ---
> block/nvme.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/block/nvme.c b/block/nvme.c
> index 2b5421e7aa6..12f9dd5cce3 100644
> --- a/block/nvme.c
> +++ b/block/nvme.c
> @@ -1030,7 +1030,7 @@ try_map:
> r = qemu_vfio_dma_map(s->vfio,
> qiov->iov[i].iov_base,
> len, true, &iova);
> - if (r == -ENOMEM && retry) {
> + if (r == -ENOSPC && retry) {
> retry = false;
> trace_nvme_dma_flush_queue_wait(s);
> if (s->dma_map_count) {
>
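For reference, the control flow being changed can be sketched in
isolation as follows (all names are hypothetical stand-ins, not the
actual QEMU code; the fake map function models a container whose
dma_entry_limit is exhausted until the volatile mappings are flushed):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Simplified model: the first map attempt fails with -ENOSPC because
 * the VFIO container's DMA entry limit is reached; after flushing the
 * volatile IOVA mappings, the retry succeeds. */
static int mappings_in_use = 65535;   /* at the default dma_entry_limit */

static int fake_vfio_dma_map(void)
{
    return mappings_in_use >= 65535 ? -ENOSPC : 0;
}

static void flush_volatile_mappings(void)
{
    mappings_in_use = 0;
}

static int map_with_retry(void)
{
    bool retry = true;
    int r;

try_map:
    r = fake_vfio_dma_map();
    if (r == -ENOSPC && retry) {   /* was -ENOMEM before the fix */
        retry = false;
        flush_volatile_mappings();
        goto try_map;
    }
    return r;   /* other errors (e.g. -ENOMEM) left to the caller */
}
```

On pre-5.1-rc4 kernels the map failure comes back as -ENOMEM instead,
so the retry branch above would never trigger there.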