Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
From: Michael S. Tsirkin
Subject: Re: [PATCH v2 for-6.0?] hw/pci-host/gpex: Don't fault for unmapped parts of MMIO and PIO windows
Date: Thu, 25 Mar 2021 13:03:14 -0400
On Thu, Mar 25, 2021 at 04:33:15PM +0000, Peter Maydell wrote:
> Currently the gpex PCI controller implements no special behaviour for
> guest accesses to areas of the PIO and MMIO windows where it has not
> mapped any PCI devices, which means that for Arm you end up with a CPU
> exception due to a data abort.
>
> Most host OSes expect "like an x86 PC" behaviour, where bad accesses
> like this return -1 for reads and ignore writes. In the interests of
> not being surprising, make host CPU accesses to these windows behave
> as -1/discard where there's no mapped PCI device.
>
> The old behaviour generally didn't cause any problems, because
> almost always the guest OS will map the PCI devices and then only
> access where it has mapped them. One corner case where you will see
> this kind of access is if Linux attempts to probe legacy ISA
> devices via a PIO window access. So far the only case where we've
> seen this has been via the syzkaller fuzzer.
>
> Reported-by: Dmitry Vyukov <dvyukov@google.com>
> Fixes: https://bugs.launchpad.net/qemu/+bug/1918917
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
> ---
> v1->v2 changes: put in the hw_compat machinery.
>
> Still not sure if I want to put this in 6.0 or not.
>
> include/hw/pci-host/gpex.h | 4 +++
> hw/core/machine.c | 1 +
> hw/pci-host/gpex.c | 56 ++++++++++++++++++++++++++++++++++++--
> 3 files changed, 58 insertions(+), 3 deletions(-)
>
> diff --git a/include/hw/pci-host/gpex.h b/include/hw/pci-host/gpex.h
> index d48a020a952..fcf8b638200 100644
> --- a/include/hw/pci-host/gpex.h
> +++ b/include/hw/pci-host/gpex.h
> @@ -49,8 +49,12 @@ struct GPEXHost {
>
>      MemoryRegion io_ioport;
>      MemoryRegion io_mmio;
> +    MemoryRegion io_ioport_window;
> +    MemoryRegion io_mmio_window;
>      qemu_irq irq[GPEX_NUM_IRQS];
>      int irq_num[GPEX_NUM_IRQS];
> +
> +    bool allow_unmapped_accesses;
>  };
>
>  struct GPEXConfig {
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 257a664ea2e..9750fad7435 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -41,6 +41,7 @@ GlobalProperty hw_compat_5_2[] = {
>      { "PIIX4_PM", "smm-compat", "on"},
>      { "virtio-blk-device", "report-discard-granularity", "off" },
>      { "virtio-net-pci", "vectors", "3"},
> +    { "gpex-pcihost", "allow-unmapped-accesses", "false" },
>  };
>  const size_t hw_compat_5_2_len = G_N_ELEMENTS(hw_compat_5_2);
>
> diff --git a/hw/pci-host/gpex.c b/hw/pci-host/gpex.c
> index 2bdbe7b4561..a6752fac5e8 100644
> --- a/hw/pci-host/gpex.c
> +++ b/hw/pci-host/gpex.c
> @@ -83,12 +83,51 @@ static void gpex_host_realize(DeviceState *dev, Error **errp)
>      int i;
>
>      pcie_host_mmcfg_init(pex, PCIE_MMCFG_SIZE_MAX);
> +    sysbus_init_mmio(sbd, &pex->mmio);
> +
> +    /*
> +     * Note that the MemoryRegions io_mmio and io_ioport that we pass
> +     * to pci_register_root_bus() are not the same as the
> +     * MemoryRegions io_mmio_window and io_ioport_window that we
> +     * expose as SysBus MRs. The difference is in the behaviour of
> +     * accesses to addresses where no PCI device has been mapped.
> +     *
> +     * io_mmio and io_ioport are the underlying PCI view of the PCI
> +     * address space, and when a PCI device does a bus master access
> +     * to a bad address this is reported back to it as a transaction
> +     * failure.
> +     *
> +     * io_mmio_window and io_ioport_window implement "unmapped
> +     * addresses read as -1 and ignore writes"; this is traditional
> +     * x86 PC behaviour, which is not mandated by the PCI spec proper
> +     * but expected by much PCI-using guest software, including Linux.
> +     *
> +     * In the interests of not being unnecessarily surprising, we
> +     * implement it in the gpex PCI host controller, by providing the
> +     * _window MRs, which are containers with io ops that implement
> +     * the 'background' behaviour and which hold the real PCI MRs as
> +     * subregions.
> +     */
>      memory_region_init(&s->io_mmio, OBJECT(s), "gpex_mmio", UINT64_MAX);
>      memory_region_init(&s->io_ioport, OBJECT(s), "gpex_ioport", 64 * 1024);
>
> -    sysbus_init_mmio(sbd, &pex->mmio);
> -    sysbus_init_mmio(sbd, &s->io_mmio);
> -    sysbus_init_mmio(sbd, &s->io_ioport);
> +    if (s->allow_unmapped_accesses) {
> +        memory_region_init_io(&s->io_mmio_window, OBJECT(s),
> +                              &unassigned_io_ops, OBJECT(s),
> +                              "gpex_mmio_window", UINT64_MAX);
> +        memory_region_init_io(&s->io_ioport_window, OBJECT(s),
> +                              &unassigned_io_ops, OBJECT(s),
> +                              "gpex_ioport_window", 64 * 1024);
> +
> +        memory_region_add_subregion(&s->io_mmio_window, 0, &s->io_mmio);
> +        memory_region_add_subregion(&s->io_ioport_window, 0, &s->io_ioport);
> +        sysbus_init_mmio(sbd, &s->io_mmio_window);
> +        sysbus_init_mmio(sbd, &s->io_ioport_window);
> +    } else {
> +        sysbus_init_mmio(sbd, &s->io_mmio);
> +        sysbus_init_mmio(sbd, &s->io_ioport);
> +    }
> +
>      for (i = 0; i < GPEX_NUM_IRQS; i++) {
>          sysbus_init_irq(sbd, &s->irq[i]);
>          s->irq_num[i] = -1;
> @@ -108,6 +147,16 @@ static const char *gpex_host_root_bus_path(PCIHostState *host_bridge,
>      return "0000:00";
>  }
>
> +static Property gpex_host_properties[] = {
> +    /*
> +     * Permit CPU accesses to unmapped areas of the PIO and MMIO windows
> +     * (discarding writes and returning -1 for reads) rather than aborting.
> +     */
> +    DEFINE_PROP_BOOL("allow-unmapped-accesses", GPEXHost,
> +                     allow_unmapped_accesses, true),
> +    DEFINE_PROP_END_OF_LIST(),
> +};
> +
>  static void gpex_host_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
> @@ -117,6 +166,7 @@ static void gpex_host_class_init(ObjectClass *klass, void *data)
>      dc->realize = gpex_host_realize;
>      set_bit(DEVICE_CATEGORY_BRIDGE, dc->categories);
>      dc->fw_name = "pci";
> +    device_class_set_props(dc, gpex_host_properties);
>  }
>
>  static void gpex_host_initfn(Object *obj)
> --
> 2.20.1