qemu-devel
From: Andrea Bolognani
Subject: Re: [Qemu-devel] [PATCH v5 0/5] Connect a PCIe host and graphics support to RISC-V
Date: Tue, 16 Oct 2018 16:11:48 +0200

On Tue, 2018-10-16 at 09:38 +0200, Andrea Bolognani wrote:
> On Mon, 2018-10-15 at 09:59 -0700, Alistair Francis wrote:
> > On Mon, Oct 15, 2018 at 7:39 AM Andrea Bolognani <address@hidden> wrote:
> > > One more thing that I forgot to bring up earlier: at the same time
> > > as PCIe support is added, we should also make sure that the
> > > pcie-root-port device is built into the qemu-system-riscv* binaries
> > > by default, as that device being missing will cause PCI-enabled
> > > libvirt guests to fail to start.
> > 
> > We are doing that, aren't we?
> 
> Doesn't look that way:
> 
>   $ riscv64-softmmu/qemu-system-riscv64 -device help 2>&1 | head -5
>   Controller/Bridge/Hub devices:
>   name "pci-bridge", bus PCI, desc "Standard PCI Bridge"
>   name "pci-bridge-seat", bus PCI, desc "Standard PCI Bridge (multiseat)"
>   name "vfio-pci-igd-lpc-bridge", bus PCI, desc "VFIO dummy ISA/LPC bridge for IGD assignment"
> 
>   $

Okay, I've (slow) cooked myself a BBL with CONFIG_PCI_HOST_GENERIC=y,
a QEMU with CONFIG_PCIE_PORT=y and a libvirt with RISC-V PCI support.
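For reference, these are roughly the switches involved; exact file names and
locations may differ between kernel and QEMU versions:

```
# Linux kernel config for the BBL payload
CONFIG_PCI=y
CONFIG_PCI_HOST_GENERIC=y

# QEMU build config, e.g. default-configs/riscv64-softmmu.mak
CONFIG_PCIE_PORT=y
```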

With all of the above in place, I could finally define an mmio-less
guest, which... failed to boot pretty much right away:

  error: Failed to start domain riscv
  error: internal error: process exited while connecting to monitor:
  2018-10-16T13:32:20.713064Z qemu-system-riscv64: -device pcie-root-port,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1: MSI-X is not supported by interrupt controller

Well, okay then. As a second attempt, I manually placed all virtio
devices on pcie.0, overriding libvirt's own address assignment
algorithm and getting rid of pcie-root-ports at the same time. Now
the guest will actually start, but soon enough

  OF: PCI: host bridge /address@hidden ranges:
  OF: PCI:   No bus range found for /address@hidden, using [bus 00-ff]
  OF: PCI:   MEM 0x40000000..0x5fffffff -> 0x40000000
  pci-host-generic 2000000000.pci: ECAM area [mem 0x2000000000-0x2003ffffff] can only accommodate [bus 00-3f] (reduced from [bus 00-ff] desired)
  pci-host-generic 2000000000.pci: ECAM at [mem 0x2000000000-0x2003ffffff] for [bus 00-3f]
  pci-host-generic 2000000000.pci: PCI host bridge to bus 0000:00
  pci_bus 0000:00: root bus resource [bus 00-ff]
  pci_bus 0000:00: root bus resource [mem 0x40000000-0x5fffffff]
  pci 0000:00:02.0: BAR 6: assigned [mem 0x40000000-0x4003ffff pref]
  pci 0000:00:01.0: BAR 4: assigned [mem 0x40040000-0x40043fff 64bit pref]
  pci 0000:00:02.0: BAR 4: assigned [mem 0x40044000-0x40047fff 64bit pref]
  pci 0000:00:03.0: BAR 4: assigned [mem 0x40048000-0x4004bfff 64bit pref]
  pci 0000:00:04.0: BAR 4: assigned [mem 0x4004c000-0x4004ffff 64bit pref]
  pci 0000:00:01.0: BAR 0: no space for [io  size 0x0040]
  pci 0000:00:01.0: BAR 0: failed to assign [io  size 0x0040]
  pci 0000:00:02.0: BAR 0: no space for [io  size 0x0020]
  pci 0000:00:02.0: BAR 0: failed to assign [io  size 0x0020]
  pci 0000:00:03.0: BAR 0: no space for [io  size 0x0020]
  pci 0000:00:03.0: BAR 0: failed to assign [io  size 0x0020]
  pci 0000:00:04.0: BAR 0: no space for [io  size 0x0020]
  pci 0000:00:04.0: BAR 0: failed to assign [io  size 0x0020]
  virtio-pci 0000:00:01.0: enabling device (0000 -> 0002)
  virtio-pci 0000:00:02.0: enabling device (0000 -> 0002)
  virtio-pci 0000:00:03.0: enabling device (0000 -> 0002)
  virtio-pci 0000:00:04.0: enabling device (0000 -> 0002)

will show up on the console and boot will not progress any further.
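Incidentally, the "reduced from [bus 00-ff]" message above is expected
given the ECAM window in the device tree: each PCIe bus needs 1 MiB of
config space (32 devices x 8 functions x 4 KiB), and the window is only
64 MiB. A quick sanity check (my own arithmetic, not from the kernel
source):

```shell
# Size of the ECAM window advertised in the log
ecam_size=$(( 0x2003ffffff - 0x2000000000 + 1 ))   # 64 MiB
# Config space needed per bus: 32 devices x 8 functions x 4 KiB
per_bus=$(( 32 * 8 * 4096 ))                        # 1 MiB
buses=$(( ecam_size / per_bus ))
printf 'ECAM window fits %d buses -> [bus 00-%02x]\n' "$buses" $(( buses - 1 ))
```

which matches the "[bus 00-3f]" the kernel settles on.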

I tried making only the disk virtio-pci, leaving all other devices
as virtio-mmio, but that too failed to boot with a similar message
about IO space exhaustion. If the network device is the only one
using virtio-pci, though, despite still getting

  pci 0000:00:01.0: BAR 0: no space for [io  size 0x0020]
  pci 0000:00:01.0: BAR 0: failed to assign [io  size 0x0020]

I can get all the way to a prompt, and the device will show up in
the output of lspci:

  00:00.0 Host bridge: Red Hat, Inc. QEMU PCIe Host bridge
        Subsystem: Red Hat, Inc. Device 1100
        Flags: fast devsel
  lspci: Unable to load libkmod resources: error -12

  00:01.0 Ethernet controller: Red Hat, Inc. Virtio network device
        Subsystem: Red Hat, Inc. Device 0001
        Flags: bus master, fast devsel, latency 0, IRQ 1
        I/O ports at <unassigned> [disabled]
        Memory at 40040000 (64-bit, prefetchable) [size=16K]
        [virtual] Expansion ROM at 40000000 [disabled] [size=256K]
        Capabilities: [84] Vendor Specific Information: VirtIO: <unknown>
        Capabilities: [70] Vendor Specific Information: VirtIO: Notify
        Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg
        Capabilities: [50] Vendor Specific Information: VirtIO: ISR
        Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg
        Kernel driver in use: virtio-pci

So it looks like virtio-pci is not quite usable yet; still, this is
definitely some progress over the status quo! Does anyone have ideas
on how to bridge the gap separating us from a pure virtio-pci RISC-V
guest?
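One thing that might be worth trying, as an untested sketch on my part:
the I/O BAR on virtio-pci devices only exists for legacy support, so
forcing modern-only mode should make the device stop requesting I/O
space altogether, e.g.:

```
# Untested: disable-legacy=on makes the device modern-only, so it no
# longer exposes the legacy I/O BAR that fails to be assigned above.
qemu-system-riscv64 ... \
    -device virtio-net-pci,disable-legacy=on,disable-modern=off,netdev=net0
```

That would not fix the underlying lack of a PIO window on the virt
machine, but it might be enough to get a pure virtio-pci guest booting.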

-- 
Andrea Bolognani / Red Hat / Virtualization



