qemu-discuss


Re: Emulating SVE on non-SVE host with qemu-system-aarch64


From: Mitchell Augustin
Subject: Re: Emulating SVE on non-SVE host with qemu-system-aarch64
Date: Wed, 8 Jan 2025 10:48:13 -0600

Hi Peter,

I don't want to ignore it if you think I've found a bug here - but my
only reproducer, unfortunately, is the VM that was launched with
libvirt/virt-install. If you know of a pure QEMU command that would
attempt to use the "neoverse-v1" CPU model with KVM on a host without
SVE, that is what I would use to build a reproducer without libvirt -
otherwise the libvirt-generated command is all I have, since I
typically do everything through libvirt.
I'm not sure how useful this is on its own, since it is filled with a
ton of libvirt "stuff" that may not work standalone, but here is the
full QEMU command that throws that error when I run "virsh start
maugustin":

/usr/bin/qemu-system-aarch64 -name guest=maugustin,debug-threads=on -S
-object 
{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-2-maugustin/master-key.aes"}
-blockdev 
{"driver":"file","filename":"/usr/share/AAVMF/AAVMF_CODE.ms.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}
-blockdev 
{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}
-blockdev 
{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/maugustin_VARS.fd","node-name":"libvirt-pflash1-storage","read-only":false}
-machine 
virt-9.0,usb=off,gic-version=3,dump-guest-core=off,memory-backend=mach-virt.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-storage,acpi=on
-accel kvm -cpu host -m size=18874368k -object
{"qom-type":"memory-backend-ram","id":"mach-virt.ram","size":19327352832}
-overcommit mem-lock=off -smp 4,sockets=4,cores=1,threads=1 -uuid
4af37efa-506c-484b-b4bf-ba9f6d52bdbe -no-user-config -nodefaults
-chardev socket,id=charmonitor,fd=31,server=on,wait=off -mon
chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-boot strict=on -device
{"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x1"}
-device 
{"driver":"pcie-root-port","port":9,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"}
-device 
{"driver":"pcie-root-port","port":10,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"}
-device 
{"driver":"pcie-root-port","port":11,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"}
-device 
{"driver":"pcie-root-port","port":12,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"}
-device 
{"driver":"pcie-root-port","port":13,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"}
-device 
{"driver":"pcie-root-port","port":14,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x1.0x6"}
-device 
{"driver":"pcie-root-port","port":15,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x1.0x7"}
-device 
{"driver":"pcie-root-port","port":16,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x2"}
-device 
{"driver":"pcie-root-port","port":17,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x2.0x1"}
-device 
{"driver":"pcie-root-port","port":18,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x2.0x2"}
-device 
{"driver":"pcie-root-port","port":19,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x2.0x3"}
-device 
{"driver":"pcie-root-port","port":20,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x2.0x4"}
-device 
{"driver":"pcie-root-port","port":21,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x2.0x5"}
-device 
{"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pci.2","addr":"0x0"}
-device {"driver":"virtio-scsi-pci","id":"scsi0","bus":"pci.3","addr":"0x0"}
-device 
{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.4","addr":"0x0"}
-blockdev 
{"driver":"file","filename":"/vms/noble-server-cloudimg-arm64.img","node-name":"libvirt-4-storage","auto-read-only":true,"discard":"unmap"}
-blockdev 
{"node-name":"libvirt-4-format","read-only":true,"driver":"qcow2","file":"libvirt-4-storage","backing":null}
-blockdev 
{"driver":"file","filename":"/vms/maugustin-vda.qcow2","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}
-blockdev 
{"node-name":"libvirt-3-format","read-only":false,"driver":"qcow2","file":"libvirt-3-storage","backing":"libvirt-4-format"}
-device 
{"driver":"virtio-blk-pci","bus":"pci.5","addr":"0x0","drive":"libvirt-3-format","id":"virtio-disk0","bootindex":1}
-blockdev 
{"driver":"file","filename":"/vms/maugustin-seed.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}
-blockdev 
{"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}
-device 
{"driver":"virtio-blk-pci","bus":"pci.6","addr":"0x0","drive":"libvirt-2-format","id":"virtio-disk1"}
-device 
{"driver":"scsi-cd","bus":"scsi0.0","channel":0,"scsi-id":0,"lun":0,"device_id":"drive-scsi0-0-0-0","id":"scsi0-0-0-0"}
-netdev {"type":"tap","fd":"32","vhost":true,"vhostfd":"35","id":"hostnet0"}
-device 
{"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:d2:06:80","bus":"pci.1","addr":"0x0"}
-chardev pty,id=charserial0 -serial chardev:charserial0 -chardev
socket,id=charchannel0,fd=30,server=on,wait=off -device
{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}
-chardev socket,id=chrtpm,path=/run/libvirt/qemu/swtpm/2-maugustin-swtpm.sock
-tpmdev emulator,id=tpm-tpm0,chardev=chrtpm -device
{"driver":"tpm-tis-device","tpmdev":"tpm-tpm0","id":"tpm0"} -device
{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"} -device
{"driver":"usb-kbd","id":"input1","bus":"usb.0","port":"2"} -audiodev
{"id":"audio1","driver":"none"} -vnc 0.0.0.0:0,audiodev=audio1 -device
{"driver":"virtio-gpu-pci","id":"video0","max_outputs":1,"bus":"pci.9","addr":"0x0"}
-device 
{"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.7","addr":"0x0"}
-object {"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}
-device 
{"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.8","addr":"0x0"}
-cpu neoverse-v1 -sandbox
on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
-msg timestamp=on
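For what it's worth, I'd guess a libvirt-free invocation that hits the
same CPU-model check would look something like the sketch below. The
machine options and qemu path are assumptions on my part, and the
script only assembles and prints the command rather than running it:

```shell
#!/bin/sh
# Hypothetical minimal reproducer (sketch, not verified): the
# interesting part is just "-accel kvm" plus a named CPU model on a
# host without SVE. Firmware, disks, etc. are omitted.
CMD="/usr/bin/qemu-system-aarch64 -machine virt,gic-version=3 -accel kvm -cpu neoverse-v1 -m 1024 -nographic"
echo "$CMD"
# To actually try it on an aarch64 KVM host, run:  $CMD
```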

If you want to try and reproduce this the exact same way I did, you
can run the following:
#!/bin/bash
sudo apt-get install -y qemu-kvm libvirt-daemon-system libvirt-clients \
    bridge-utils virt-manager cloud-init cloud-image-utils ovmf python3 \
    expect python3-pip python3-yaml
sudo usermod -aG libvirt ubuntu
echo "Log out and log back in to update kvm permissions"
VM=maugustin

virsh destroy $VM
virsh undefine $VM --nvram
rm -f *.qcow2

cat > user-data <<EOF
#cloud-config
password: passw0rd
chpasswd: { expire: False }
hostname: ${VM}
package_update: true
version: 2
EOF

cloud-localds /vms/${VM}-seed.qcow2 user-data -d qcow2
qemu-img create -b /vms/noble-server-cloudimg-arm64.img -F qcow2 \
    -f qcow2 /vms/${VM}-vda.qcow2 90G


virt-install --name ${VM} --memory $((18*1024)) \
             --graphics vnc,listen=0.0.0.0 --noautoconsole \
             --console pty,target_type=serial --vcpus 4,cpuset=0-3 \
             --machine virt --osinfo name=ubuntujammy \
             --cdrom /vms/ubuntu-24.04-live-server-arm64.iso \
             --disk /vms/${VM}-vda.qcow2 --disk /vms/${VM}-seed.qcow2 \
             --import --qemu-commandline="-cpu max" 2>&1 | tee debug.log

where noble-server-cloudimg-arm64.img is from
https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-arm64.img
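As a sanity check on the sizes involved: virt-install's --memory value
is in MiB, and the same 18 GiB shows up in the generated qemu command
as KiB ("-m size=18874368k") and as bytes (the memory backend's
"size"). The arithmetic:

```shell
# Sanity check: the three memory figures are the same 18 GiB in
# different units.
mem_mib=$((18*1024))          # 18432 MiB, as passed to virt-install
mem_kib=$((mem_mib*1024))     # 18874368 KiB, as in "-m size=18874368k"
mem_bytes=$((mem_kib*1024))   # 19327352832 bytes, the memory backend size
echo "$mem_mib $mem_kib $mem_bytes"
```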

Then run through the visual installation using the "max" CPU model,
then shut down.
Then run "virsh edit maugustin", change <qemu:arg value='max'/> to
<qemu:arg value='neoverse-v1'/>, save and exit, and run "virsh start
maugustin". At that point, you should see the error I saw.

-Mitchell Augustin

On Tue, Jan 7, 2025 at 4:46 PM Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Tue, 7 Jan 2025 at 22:08, Mitchell Augustin
> <mitchell.augustin@canonical.com> wrote:
> >
> > > You don't say what your command line is..
> >
> > Sorry, meant to include that, although I think I may have figured out
> > my issue after looking through the docs more.
> >
> > I am trying to launch a VM with libvirt/virt-install, using the
> > following options (now with sve-default-vector-length removed):
> > virt-install --name ${VM} --memory $((18*1024)) --graphics
> > vnc,listen=0.0.0.0 --noautoconsole \
> >              --console pty,target_type=serial --vcpus 4,cpuset=0-3 \
> >              --machine virt --osinfo name=ubuntunoble \
> >              --cdrom /vms/ubuntu-24.04-live-server-arm64.iso \
> >              --disk /vms/${VM}-vda.qcow2 --disk /vms/${VM}-seed.qcow2
> > --import  --qemu-commandline="-cpu neoverse-v1"
>
> That's not much help to me because I have no idea what stuff
> virt-install is adding to the QEMU command line... Can
> you give a QEMU command line?
>
> > I forgot to mention that I also am using KVM on the host, which it
> > seems is probably my issue:
> >
> > > If KVM is enabled then only vector lengths that the host CPU type support 
> > > may be enabled. If SVE is not supported by the host, then no sve* 
> > > properties may be enabled
>
> Right, if you're using KVM then the guest gets the same CPU
> that the host has, there's no way to magically give it extra
> features. If you want to run an SVE-using guest program on
> a non-SVE host then you must use TCG emulation (either userspace
> or full-system, depending on what you want to do).
>
> In particular, QEMU does *not* support "use the host CPU to
> accelerate running the bits of guest code that don't have
> feature X and then only emulate the instructions that are
> part of feature X". It's either fully KVM using the host CPU,
> or fully emulated.
>
> > With the above command, I see this when trying to launch the VM:
> > qemu-system-aarch64: target/arm/cpu64.c:72: arm_cpu_sve_finalize:
> > Assertion `!cpu_isar_feature(aa64_sve, cpu)' failed.
>
> We shouldn't assert here, even if you accidentally asked QEMU
> to do something it can't support or that doesn't make sense.
> So if you have a repro case for this (especially if it still
> repros with current QEMU head-of-git) I can look at fixing it
> (probably to print an error message rather than asserting).
> But since this isn't going to be on the path to getting you a
> working setup I understand if you'd rather just ignore it and
> move on in the direction that gets you going :-)
>
> thanks
> -- PMM



-- 
Mitchell Augustin
Software Engineer - Ubuntu Partner Engineering


