qemu-discuss
Re: [Qemu-discuss] Best Intel hardware for qemu


From: Friedrich Oslage
Subject: Re: [Qemu-discuss] Best Intel hardware for qemu
Date: Wed, 10 Apr 2019 21:27:50 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.5.3

These are the "compatibility" lists I know of:
- https://passthroughpo.st/vfio-increments/
- https://docs.google.com/spreadsheets/d/1LnGpTrXalwGVNy0PWJDURhyxa3sgqkGXmvNCIvIMenk/view#gid=2

They are focused on desktop hardware and GPU passthrough, because that's what most people use VFIO for: there is simply no other way to get decent 3D performance inside a VM. Maybe virgil3d, 10 years from now.

When configuring your VMs, make sure you use paravirtualized drivers whenever possible: virtio-net for networking (comparable to vmxnet3) and virtio-scsi for storage (comparable to pvscsi). Those will give you decent performance without having to deal with VFIO.
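A minimal invocation might look something like this (untested sketch; the image path, memory size and bridge name "br0" are placeholders, and the bridge helper has to be set up on the host):

  qemu-system-x86_64 -enable-kvm -m 4096 \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0 \
    -drive file=/var/lib/images/vm1.qcow2,if=none,id=hd0,format=qcow2 \
    -device virtio-scsi-pci,id=scsi0 \
    -device scsi-hd,drive=hd0,bus=scsi0.0

Linux guests have the virtio drivers built in; Windows guests need the virtio-win driver package.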

Regards
Friedrich

On 4/10/19 9:04 PM, Lars Bonnesen wrote:
Yes, I get it, thanks. Any way to find out whether a given piece of hardware supports VFIO?

regards, Lars

On Wed, Apr 10, 2019, 20:56 Jakob Bohm <address@hidden> wrote:

As I wrote, qemu can pass disks (and disk partitions) through without
passing the disk controller through.  To qemu, a physical disk is just
another virtual disk storage format.
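For example, handing a whole physical disk to a guest might look like this (untested sketch; /dev/sdb is a placeholder for the disk you want the guest to own):

  qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=/dev/sdb,if=none,id=disk0,format=raw,cache=none \
    -device virtio-blk-pci,drive=disk0

A partition such as /dev/sdb3 can be passed the same way; just make sure the host itself isn't using it.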

But if you want to pass through an entire PCI disk controller (with all
its disks) for faster I/O, then VFIO is needed.
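That would look something like this (sketch; 01:00.0 is a placeholder for the controller's PCI address as shown by lspci):

  qemu-system-x86_64 -enable-kvm -m 2048 \
    -device vfio-pci,host=01:00.0

The host must not use the controller, or any disk behind it, while the VM owns it.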

On 10/04/2019 20:40, Lars Bonnesen wrote:
I want to pass local disks to a VM in order to run freenas or similar.

Regards, Lars.

On Wed, Apr 10, 2019, 20:20 Jakob Bohm <address@hidden> wrote:

     If you pass through the disk access to your SAN partitions as disk
     accesses to block devices (such as SAN client drivers) in the host
     machine, you don't need VFIO for that.  This can handle a nearly
     unlimited number of virtual machines without running out of PCI
     slots in the host machine.  This is the equivalent of "passing
     through a SAN disk" in VMWare, but isn't artificially limited to
     SAN disks (for example, you can layer the Linux multipath drivers
     and/or the Linux disk encryption drivers between the virtual
     machine and the actual SAN).
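     As an illustration of such layering (untested sketch; the
     multipath device name "mpatha" and mapping name "vm1-disk" are
     placeholders, and the LUN is assumed to hold a LUKS volume):

         # host side: dm-crypt layered on top of a multipath SAN LUN
         cryptsetup open /dev/mapper/mpatha vm1-disk
         qemu-system-x86_64 -enable-kvm -m 2048 \
           -drive file=/dev/mapper/vm1-disk,if=none,id=d0,format=raw \
           -device virtio-blk-pci,drive=d0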

     If you pass through the network access to your (iSCSI or NBD) SAN
     as network traffic via the general qemu/kvm network features, you
     don't need VFIO for that.  This can handle a nearly unlimited
     number of virtual machines without running out of PCI slots in the
     host machine.  This is equivalent to using a "VMWare virtual
     switch".
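     A sketch of that setup (assuming an existing host bridge "br0"
     that reaches the SAN network):

         qemu-system-x86_64 -enable-kvm -m 2048 \
           -netdev bridge,id=san0,br=br0 \
           -device virtio-net-pci,netdev=san0

     The guest then runs its own iSCSI initiator (or NBD client) over
     that virtual NIC.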

     If you dedicate a physical SAN adapter (iSCSI, NBD, SAS or fibre
     channel) to each virtual machine and pass it through to that
     virtual machine, you need VFIO for that.  As on VMWare, this will
     limit you to one virtual machine for each PCI slot in the
     motherboard.

     If you dedicate a physical network adapter (NIC) to each individual
     virtual machine and pass it through to PCI drivers in that virtual
     machine, you need VFIO for that.  This too will limit you to one
     virtual machine for each PCI slot in the motherboard.
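     In both cases the host-side preparation is roughly this (sketch;
     the PCI address 0000:02:00.0 and the vendor:device ID 8086:1533
     are placeholders for your adapter):

         # check what else shares the device's iommu group
         ls /sys/bus/pci/devices/0000:02:00.0/iommu_group/devices
         # unbind the device from its host driver first if one has
         # claimed it, then bind it to vfio-pci and start qemu with
         # -device vfio-pci,host=02:00.0
         modprobe vfio-pci
         echo 8086:1533 > /sys/bus/pci/drivers/vfio-pci/new_id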

     As for passing through raw SCSI devices or busses, I don't know
     if the latest qemu versions have the ability to do this at a
     hardware-independent level like VMWare does (VM sends standard
     SCSI requests to qemu virtual SCSI adapter, qemu sends those
     same SCSI requests to real SCSI hardware via something like the
     Linux "SCSI generic" driver, optionally mapping at most the
     SCSI-level bus address).
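     If they do, I would expect the invocation to look something like
     this (untested sketch; /dev/sg1 is a placeholder for the sg node
     of the real device):

         qemu-system-x86_64 -enable-kvm -m 2048 \
           -device virtio-scsi-pci,id=scsi0 \
           -drive file=/dev/sg1,if=none,id=sg1 \
           -device scsi-generic,drive=sg1,bus=scsi0.0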

     On 10/04/2019 19:42, Lars Bonnesen wrote:
      > But for sure I want passthru - for running virtualized SAN and such
      >
      > Any unofficial list?
      >
      > Regards, Lars.
      >
      > On Wed, Apr 10, 2019, 18:58 Friedrich Oslage <address@hidden> wrote:
      >
      >> As long as it's VT-x capable and can run Linux, you're good to go.
      >>
      >> It only gets tricky once you start using VFIO (direct
      >> passthrough of host PCI devices to a VM, such as GPUs, NICs or
      >> NVMes for instance). You need VT-d support for that, and both
      >> the CPU and mainboard have to support it. And not only do they
      >> have to support it, they have to support it in a usable way,
      >> with decent iommu group isolation and without weird bugs. There
      >> are no (official) compatibility lists for this, it's still
      >> mostly trial and error...
      >>
      >> Regards
      >> Friedrich
      >>
      >> On 4/10/19 2:26 PM, Lars Bonnesen wrote:
      >>> So I am coming from the VMware world (with comprehensive
      >>> compatibility lists) but am about to start a project with
      >>> KVM/Qemu and I would like to set up an inexpensive test setup
      >>> for this purpose.
      >>>
      >>> I am thinking of buying one of SuperMicro's IoT servers like
      >>> https://www.supermicro.com/products/system/Mini-ITX/SYS-E300-9D-4CN8TP.cfm
      >>> Will this be a nice pick for Qemu? Or will any VT-supported
      >>> system work fine?
      >>>
      >>> Regards, Lars.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded




