qemu-block

Re: [PULL 5/5] m68k: add Virtual M68k Machine


From: Max Reitz
Subject: Re: [PULL 5/5] m68k: add Virtual M68k Machine
Date: Thu, 18 Mar 2021 18:28:54 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.0

On 18.03.21 17:25, Philippe Mathieu-Daudé wrote:
On 3/18/21 4:56 PM, Laurent Vivier wrote:
On 18/03/2021 at 16:51, Laurent Vivier wrote:
On 18/03/2021 at 16:36, Philippe Mathieu-Daudé wrote:
On 3/18/21 11:06 AM, Laurent Vivier wrote:
On 18/03/2021 at 11:02, Philippe Mathieu-Daudé wrote:
On 3/18/21 10:52 AM, Laurent Vivier wrote:
On 18/03/2021 at 10:19, Philippe Mathieu-Daudé wrote:
Hi Laurent,

+Paolo / Thomas

On 3/15/21 9:42 PM, Laurent Vivier wrote:
The machine is based on the Goldfish interfaces defined by Google
for the Android emulator. It uses Goldfish-rtc (timer and RTC),
Goldfish-pic (PIC) and Goldfish-tty (for serial port and early tty).

The machine is created with 128 virtio-mmio buses, which can be
used for a serial console, GPU, disk, NIC, HID, ...

Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20210312214145.2936082-6-laurent@vivier.eu>
---
  default-configs/devices/m68k-softmmu.mak      |   1 +
  .../standard-headers/asm-m68k/bootinfo-virt.h |  18 +
  hw/m68k/virt.c                                | 313 ++++++++++++++++++
  MAINTAINERS                                   |  13 +
  hw/m68k/Kconfig                               |   9 +
  hw/m68k/meson.build                           |   1 +
  6 files changed, 355 insertions(+)
  create mode 100644 include/standard-headers/asm-m68k/bootinfo-virt.h
  create mode 100644 hw/m68k/virt.c

diff --git a/hw/m68k/Kconfig b/hw/m68k/Kconfig
index 60d7bcfb8f2b..f839f8a03064 100644
--- a/hw/m68k/Kconfig
+++ b/hw/m68k/Kconfig
@@ -23,3 +23,12 @@ config Q800
      select ESP
      select DP8393X
      select OR_IRQ
+
+config M68K_VIRT
+    bool
+    select M68K_IRQC
+    select VIRT_CTRL
+    select GOLDFISH_PIC
+    select GOLDFISH_TTY
+    select GOLDFISH_RTC
+    select VIRTIO_MMIO

I had this error on gitlab:

(qemu) QEMU_PROG: -drive driver=IMGFMT,file=TEST_DIR/t.IMGFMT,if=virtio:
'virtio-blk-pci' is not a valid device model name
job: check-system-fedora
https://gitlab.com/philmd/qemu/-/jobs/1106469724

I bisected locally to this commit.

check-system-fedora uses build-system-fedora:

build-system-fedora:
     CONFIGURE_ARGS: --disable-gcrypt --enable-nettle --enable-docs
              --enable-fdt=system --enable-slirp=system
              --enable-capstone=system

I'm confused because the machine provides a VIRTIO bus
via MMIO:

config VIRTIO_MMIO
     bool
     select VIRTIO

I remember I tested your machine with virtio-blk-device.

config VIRTIO_BLK
     bool
     default y
     depends on VIRTIO

Ah, this is virtio-blk-pci, which has:

virtio_pci_ss.add(when: 'CONFIG_VIRTIO_BLK', if_true: files('virtio-blk-pci.c'))
virtio_ss.add_all(when: 'CONFIG_VIRTIO_PCI', if_true: virtio_pci_ss)

And VIRTIO_PCI isn't selected...

This machine doesn't have virtio-pci, but only virtio-mmio buses.

Yes. I meant "VIRTIO_PCI isn't selected, which is the correct config
for this machine". So the problem isn't the m68k-virt machine addition,
but it shows another problem elsewhere.

Are the tests incorrect then?

libqos isn't restricted to PCI:

tests/qtest/libqos/virtio-blk.c:24:#include "virtio-blk.h"
tests/qtest/libqos/virtio-blk.c:29:/* virtio-blk-device */
tests/qtest/libqos/virtio-blk.c:33:    if (!g_strcmp0(interface, "virtio-blk")) {
tests/qtest/libqos/virtio-blk.c:40:    fprintf(stderr, "%s not present in virtio-blk-device\n", interface);
tests/qtest/libqos/virtio-blk.c:109:    /* virtio-blk-device */
tests/qtest/libqos/virtio-blk.c:111:    qos_node_create_driver("virtio-blk-device", virtio_blk_device_create);
tests/qtest/libqos/virtio-blk.c:112:    qos_node_consumes("virtio-blk-device", "virtio-bus", &opts);
tests/qtest/libqos/virtio-blk.c:113:    qos_node_produces("virtio-blk-device", "virtio-blk");

But qemu-iotests / qtests do use virtio-blk-pci. Maybe they should
use a generic virtio-blk-device instead, hoping it gets plugged
correctly into the virtio bus...

Yes, that's how the machine works: it has 128 virtio-mmio buses, and
virtio devices are plugged directly into the first free ones.

I think the fix would be to disable the virtio-blk-pci test for
machines without a PCI bus.

Why is it executed at all right now?

This is probably the root cause of the problem.

Possible fix:

-- >8 --
diff --git a/tests/qtest/meson.build b/tests/qtest/meson.build
index 66ee9fbf450..d7f3fad51c1 100644
--- a/tests/qtest/meson.build
+++ b/tests/qtest/meson.build
@@ -217,13 +217,17 @@
    'emc141x-test.c',
    'usb-hcd-ohci-test.c',
    'virtio-test.c',
-  'virtio-blk-test.c',
-  'virtio-net-test.c',
-  'virtio-rng-test.c',
-  'virtio-scsi-test.c',
    'virtio-serial-test.c',
    'vmxnet3-test.c',
  )
+if config_all_devices.has_key('CONFIG_VIRTIO_PCI')
+  qos_test_ss.add(
+    'virtio-blk-test.c',
+    'virtio-net-test.c',
+    'virtio-rng-test.c',
+    'virtio-scsi-test.c',
+  )
+endif
  if have_virtfs
    qos_test_ss.add(files('virtio-9p-test.c'))
  endif
---

I'll test that locally but not on Gitlab.

This approach doesn't work for the iotests.

This also removes the virtio device tests. I think we should keep the
files, but disable the PCI part inside them when it is not available.

I don't understand how the virtio devices are created; it seems there
is an alias from generic virtio hardware that maps to the arch virtio bus.

It was not obvious at first why starting the virt machine with
"-device virtio-blk" returns "'virtio-blk-pci' is not a valid device
model name"; then I figured out the qdev_alias_table array.

Maybe you need to complete it for your arch? I've been using that:

-- >8 --
diff --git a/softmmu/qdev-monitor.c b/softmmu/qdev-monitor.c
index 8dc656becca..b326bd76c2a 100644
--- a/softmmu/qdev-monitor.c
+++ b/softmmu/qdev-monitor.c
@@ -65,8 +65,10 @@ static const QDevAlias qdev_alias_table[] = {
      { "virtio-balloon-ccw", "virtio-balloon", QEMU_ARCH_S390X },
      { "virtio-balloon-pci", "virtio-balloon",
              QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
+    { "virtio-blk-device", "virtio-blk", QEMU_ARCH_M68K },
      { "virtio-blk-ccw", "virtio-blk", QEMU_ARCH_S390X },
-    { "virtio-blk-pci", "virtio-blk", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
+    { "virtio-blk-pci", "virtio-blk", QEMU_ARCH_ALL
+                                      & ~(QEMU_ARCH_S390X | QEMU_ARCH_M68K) },
      { "virtio-gpu-ccw", "virtio-gpu", QEMU_ARCH_S390X },
      { "virtio-gpu-pci", "virtio-gpu", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
      { "virtio-input-host-ccw", "virtio-input-host", QEMU_ARCH_S390X },
@@ -84,8 +86,10 @@ static const QDevAlias qdev_alias_table[] = {
      { "virtio-rng-pci", "virtio-rng", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
      { "virtio-scsi-ccw", "virtio-scsi", QEMU_ARCH_S390X },
      { "virtio-scsi-pci", "virtio-scsi", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
+    { "virtio-serial-device", "virtio-serial", QEMU_ARCH_M68K },
      { "virtio-serial-ccw", "virtio-serial", QEMU_ARCH_S390X },
-    { "virtio-serial-pci", "virtio-serial", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
+    { "virtio-serial-pci", "virtio-serial", QEMU_ARCH_ALL
+                                            & ~(QEMU_ARCH_S390X | QEMU_ARCH_M68K) },
      { "virtio-tablet-ccw", "virtio-tablet", QEMU_ARCH_S390X },
     { "virtio-tablet-pci", "virtio-tablet", QEMU_ARCH_ALL & ~QEMU_ARCH_S390X },
      { }
---

But this looks ugly; I don't think it should work that way (because
a machine could provide virtio buses over multiple transports, MMIO
and PCI...).

IMHO, this looks like the solution.

The alias defines the preferred way: on PCI machines it's the -pci
variant, otherwise it is the -device one.

See:

commit 5f629d943cb0b11c37a891cf4f40a9166aee6f53
Author: Alexander Graf <agraf@csgraf.de>
Date:   Fri May 18 02:36:26 2012 +0200

     s390x: fix s390 virtio aliases

     Some of the virtio devices have the same frontend name, but actually
     implement different devices behind the scenes through aliases.

     The indicator which device type to use is the architecture. On s390, we
     want s390 virtio devices. On everything else, we want PCI devices.

     Reflect this in the alias selection code. This way we fix commands like
     -device virtio-blk on s390x which with this patch applied select the
     correct virtio-blk-s390 device rather than virtio-blk-pci.

     Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
     Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
     Signed-off-by: Alexander Graf <agraf@suse.de>

So now that MMIO is available, we hit the "everything else" limit :)
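The arch-mask selection that the alias table performs can be sketched like this (a simplified illustrative model in Python, not QEMU's actual C code; the arch bit values and lookup function are made up for the example, while the table entries mirror the patch above):

```python
# Hypothetical, simplified model of qdev alias resolution: each entry
# maps a concrete device name to a generic alias, guarded by an
# architecture bitmask; the first entry whose mask matches the current
# target wins. Bit assignments here are arbitrary illustration values.
QEMU_ARCH_S390X = 1 << 0
QEMU_ARCH_M68K = 1 << 1
QEMU_ARCH_I386 = 1 << 2
QEMU_ARCH_ALL = ~0

ALIAS_TABLE = [
    # (concrete device, generic alias, arch mask)
    ("virtio-blk-ccw", "virtio-blk", QEMU_ARCH_S390X),
    ("virtio-blk-device", "virtio-blk", QEMU_ARCH_M68K),
    ("virtio-blk-pci", "virtio-blk",
     QEMU_ARCH_ALL & ~(QEMU_ARCH_S390X | QEMU_ARCH_M68K)),
]

def resolve_alias(name, arch):
    """Return the concrete device model for a generic name on this arch."""
    for concrete, alias, mask in ALIAS_TABLE:
        if alias == name and (mask & arch):
            return concrete
    return name  # not an alias: use the name as-is
```

With the patched table, '-device virtio-blk' resolves to virtio-blk-ccw on s390x, virtio-blk-device on m68k, and virtio-blk-pci everywhere else, which is why an unpatched table sends m68k into the "everything else" PCI branch.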

The other function I had to modify is in tests/qemu-iotests/iotests.py:

   def get_virtio_scsi_device():
       if qemu_default_machine == 's390-ccw-virtio':
           return 'virtio-scsi-ccw'
       return 'virtio-scsi-pci'
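A sketch of how that helper might be extended for MMIO-only machines (this is an assumption, not upstream iotests code: the explicit parameter and the 'virt' machine-name check are illustrative, and upstream reads a qemu_default_machine global instead):

```python
# Hypothetical extension of iotests.py's get_virtio_scsi_device():
# machines that only provide virtio-mmio buses have no -pci variant,
# so fall back to the transport-agnostic -device frontend. The 'virt'
# check and the parameter are illustrative assumptions.
def get_virtio_scsi_device(qemu_default_machine):
    if qemu_default_machine == 's390-ccw-virtio':
        return 'virtio-scsi-ccw'
    if qemu_default_machine == 'virt':
        return 'virtio-scsi-device'
    return 'virtio-scsi-pci'
```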

But Max said there is no interest in testing the block devices here
(here = archs providing virtio via MMIO, such as ARM/m68k).

(To elaborate on my perspective)

I’m not exactly sure what you mean by this, i.e. whether you mean that we don’t want to test at all on that target, or that we don’t want to test specific devices.

The former is not entirely true (but in practice kind of), the latter I think is true.

I think we’d like to be able to provide developers the ability to run the iotests on whatever target they’re working on, except where it’s just too complicated to do and nobody cares. The obvious problem is that often nobody cares, so there’s a low threshold for disabling stuff. (On s390, there was some pain, but people did care, so that’s a counter-story.) The threshold is especially low if it’s just to silence CI, obviously.

The thing to note is that at least most iotests are not there to test the guest device, but rather the block layer; and the block layer is pretty much the same for every target, so there isn’t a big incentive for the block layer developers to run them on different targets. (I can’t rule out the possibility of a target-specific bug in how the block layer is used, which the iotests might reveal, but I don’t remember something like that having happened so far.)

From that it follows that I don’t see much use in testing specific devices either. Say there’s a platform that provides both virtio-pci and virtio-mmio; the default (say virtio-pci) is fine for the iotests. I see little value in testing virtio-mmio as well. (Perhaps I’m short-sighted, though.)

Max



