qemu-discuss


From: David Fernandez
Subject: Re: Guest Ubuntu 18.04 fails to boot with -serial mon:stdio, cannot find ttyS0.
Date: Mon, 8 Nov 2021 20:38:04 +0000

On 08/11/2021 20:21, David Fernandez wrote:
> Hi Peter,
>
> Answers in line.
>
> On 08/11/2021 19:59, Peter Maydell wrote:
>> On Mon, 8 Nov 2021 at 18:05, David Fernandez 
>> <david.fernandez@sen.com> wrote:
>>> I am running qemu-system-x86_64 on an aarch64 host, with Ubuntu 18.04 as
>>> both guest and host OS.
>>>
>>> I couldn't get the stock qemu-system-x86_64 to boot correctly. As it was
>>> an old version (2.11.1), I decided to recompile from sources to see if
>>> that would fix the problem, but the problem still persists with both the
>>> top of master and stable-2.12 (currently on the latter).
>>>
>>> [ TIME ] Timed out waiting for device dev-ttyS0.device.
>> Is there any more error message ? How long does the guest wait on
>> this step before it times out ?
> The guest waits at the end forever... probably because it tries to use the
> normal console instead, and that does not get displayed with my options.
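>
> For reference, the invocation is roughly of this shape (the image name and
> memory/CPU sizes are just placeholders, and I also disable the graphical
> display, which is why the fallback console is not visible):
>
>     qemu-system-x86_64 \
>         -m 4096 -smp 4 \
>         -drive file=ubuntu-18.04.qcow2,if=virtio \
>         -display none \
>         -serial mon:stdio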
>
> These are all the services that fail:
>
> [ TIME ] Timed out waiting for device dev-ttyS0.device.
> [DEPEND] Dependency failed for Serial Getty on ttyS0.
> ...
> [FAILED] Failed to start Dispatcher daemon for systemd-networkd. <== network does start fine though.
> See 'systemctl status networkd-dispatcher.service' for details.
> ...
> [FAILED] Failed to start Wait until snapd is fully seeded. <== snapd runs fine though.
> See 'systemctl status snapd.seeded.service' for details.
> ...
> [FAILED] Failed to start Holds Snappy daemon refresh.
> See 'systemctl status snapd.hold.service' for details.
> [  OK  ] Started Update UTMP about System Runlevel Changes.
> ... waits forever ...
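>
> If I manage to get a shell in the guest some other way, the first things I
> would check are whether the kernel saw a UART at all and which units failed,
> along these lines (the dmesg line is just what I would expect to see when
> the 16550 is detected, not actual output from this guest):
>
>     $ dmesg | grep ttyS
>     00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
>
>     $ systemctl list-units --failed
>     $ journalctl -u serial-getty@ttyS0.service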
>
>
>>> The problem does not happen when using qemu-system-x86_64 on my Fedora
>>> desktop as host, so I wonder if I need something in my build options or
>>> if I need to rebuild my kernel with some added kernel configuration
>>> options...
>> Are you testing with the exact same:
>>   * command line
>>   * files (guest kernel, initrd, iso, etc)
>>   * QEMU version
>> on both the aarch64 and x86-64 host ?
>
> Yes. -- Correction -- The Fedora version is:
> $ qemu-system-x86_64 -version
> QEMU emulator version 5.2.0 (qemu-5.2.0-8.fc34)
> Copyright (c) 2003-2020 Fabrice Bellard and the QEMU Project developers
>
>
>
>> Does the x86-64 host still work OK if you run it with KVM turned off
>> (ie matching the aarch64 host setup) ?
>
> Have not tried that... is there an easy way/option to run that test? Or do
> I need to compile from sources in Fedora?
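>
> (Looking at the man page, I guess forcing plain emulation on the Fedora box
> would just be a matter of dropping -enable-kvm and adding -accel tcg, i.e.
> something like the following, keeping the rest of the options unchanged:
>
>     qemu-system-x86_64 -accel tcg <rest of the options as before>
>
> but please correct me if there is more to it.)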
>
>
>>
>>> Hopefully, some experts around here can help me with that if it is a
>>> known thing (I googled around but, other than mentions that 2.11 is too
>>> old, could not find any clear explanation of this problem).
>> For aarch64 host, I would be a bit dubious about running 2.11 or 2.12 --
>> they are both absolutely ancient in QEMU terms.
> Is there a specific branch I should use? I could not see anything newer
> than stable-2.12 among the stable branches on git.qemu.org, but I'm happy
> to compile and try any other.
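>
> (In case it helps, I assume the way to go is to list the release tags and
> build from the most recent one, roughly like this, assuming v6.1.0 is still
> the newest tag by the time I try:
>
>     git clone https://git.qemu.org/git/qemu.git
>     cd qemu
>     git tag -l 'v6.*'        # list the 6.x release tags
>     git checkout v6.1.0      # or whatever the newest tag turns out to be
>     ./configure --target-list=x86_64-softmmu && make -j$(nproc)
>
> Please correct me if a stable branch is preferable.)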
>
>>
>> What are the specs of the host CPU (in particular, how fast is it)?
>> If it's too underpowered it's possible it just can't run the guest
>> fast enough for it to boot up before the guest's systemd tasks
>> time out (though it would have to be pretty bad for this to be
>> the problem).
> The machine is a Jetson AGX Xavier, which uses an 8-core "Volta" CPU.
> In theory it should be powerful enough, but you tell me; nVidia does not
> offer a lot of information on their systems anyway.
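>
> (If it helps, I can pull the exact core count and clock speeds off the
> board with something like:
>
>     lscpu
>     cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq
>
> and post the output here.)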
>
>>>     --enable-kvm \ <== does not seem to be available as an accelerator
>> That is expected -- KVM can only accelerate guests where the
>> host and guest are the same CPU architecture, so it can do
>> aarch64-on-aarch64 and x86-on-x86, but not x86-on-aarch64.
> Good to learn that... here is the output of virt-host-validate, which I
> happened to find out about:
>
> $ sudo virt-host-validate
>   QEMU: Checking if device /dev/kvm exists                         : FAIL (Check that CPU and firmware supports virtualization and kvm module is loaded)
>   QEMU: Checking if device /dev/vhost-net exists                   : WARN (Load the 'vhost_net' module to improve performance of virtio networking)
>   QEMU: Checking if device /dev/net/tun exists                     : PASS
>   QEMU: Checking for cgroup 'memory' controller support            : PASS
>   QEMU: Checking for cgroup 'memory' controller mount-point        : PASS
>   QEMU: Checking for cgroup 'cpu' controller support               : PASS
>   QEMU: Checking for cgroup 'cpu' controller mount-point           : PASS
>   QEMU: Checking for cgroup 'cpuacct' controller support           : PASS
>   QEMU: Checking for cgroup 'cpuacct' controller mount-point       : PASS
>   QEMU: Checking for cgroup 'cpuset' controller support            : PASS
>   QEMU: Checking for cgroup 'cpuset' controller mount-point        : PASS
>   QEMU: Checking for cgroup 'devices' controller support           : PASS
>   QEMU: Checking for cgroup 'devices' controller mount-point       : PASS
>   QEMU: Checking for cgroup 'blkio' controller support             : PASS
>   QEMU: Checking for cgroup 'blkio' controller mount-point         : PASS
>   QEMU: Checking for device assignment IOMMU support               : WARN (Unknown if this platform has IOMMU support) <= it does have IOMMU, I know for sure from playing with v4l2...
>    LXC: Checking for Linux >= 2.6.26                               : PASS
>    LXC: Checking for namespace ipc                                 : PASS
>    LXC: Checking for namespace mnt                                 : PASS
>    LXC: Checking for namespace pid                                 : PASS
>    LXC: Checking for namespace uts                                 : PASS
>    LXC: Checking for namespace net                                 : PASS
>    LXC: Checking for namespace user                                : PASS
>    LXC: Checking for cgroup 'memory' controller support            : PASS
>    LXC: Checking for cgroup 'memory' controller mount-point        : PASS
>    LXC: Checking for cgroup 'cpu' controller support               : PASS
>    LXC: Checking for cgroup 'cpu' controller mount-point           : PASS
>    LXC: Checking for cgroup 'cpuacct' controller support           : PASS
>    LXC: Checking for cgroup 'cpuacct' controller mount-point       : PASS
>    LXC: Checking for cgroup 'cpuset' controller support            : PASS
>    LXC: Checking for cgroup 'cpuset' controller mount-point        : PASS
>    LXC: Checking for cgroup 'devices' controller support           : PASS
>    LXC: Checking for cgroup 'devices' controller mount-point       : PASS
>    LXC: Checking for cgroup 'blkio' controller support             : PASS
>    LXC: Checking for cgroup 'blkio' controller mount-point         : PASS
>    LXC: Checking if device /sys/fs/fuse/connections exists         : FAIL (Load the 'fuse' module to enable /proc/ overrides)
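>
> (Side note on the WARN/FAIL items above: the vhost_net and fuse ones look
> like they are just missing modules, and I assume I can check whether the
> L4T kernel was built with KVM at all with something along these lines:
>
>     sudo modprobe vhost_net        # should clear the /dev/vhost-net WARN
>     sudo modprobe fuse             # should clear the fuse FAIL for LXC
>     zcat /proc/config.gz | grep -i kvm     # if the kernel exposes its config
>     grep -i kvm /boot/config-$(uname -r)   # or via the config file, if present
>
> although, as you say, KVM would not help the x86 guest on this host anyway.)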
>
>>
>> -- PMM
>
> Thanks for helping.
>
> David
>

