From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH] target-i386: Enhance the stub for kvm_arch_get_supported_cpuid()
Date: Wed, 20 Feb 2019 18:29:27 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.4.0

On 20/02/19 12:59, Kamil Rytarowski wrote:
> Ping, still valid.

Sorry, I missed your email.

> On 15.02.2019 00:38, Kamil Rytarowski wrote:
>> I consider it a fragile hack and certainly not something to depend on.
>> Also, in some circumstances of such code, especially "if (zero0)", we
>> want to be able to enable the disabled code under a debugger.

That's a good objection, but it certainly does not apply to KVM on NetBSD.

>> There were also kernel backdoors due to this optimization.

Citation please?

>> Requested cpu.i (hopefully correctly generated)
>>
>> http://netbsd.org/~kamil/qemu/cpu.i.bz2

So, first things first: I can reproduce clang's behavior with this .i file,
and also with this reduced test case:

    extern void f(void);
    int i, j;
    int main()
    {
        if (0 && i) f();
        if (j && 0) f();
    }
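
To see which of the two calls survives (a sketch; this assumes the snippet
is saved as test.c), disassemble the object file and look for calls to f:

$ clang -O0 -c test.c -o test.o
$ objdump -d test.o | grep call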

The first is eliminated but the second is not, just like in QEMU where
this works:

        if (kvm_enabled() && cpu->enable_pmu) {
            KVMState *s = cs->kvm_state;

            *eax = kvm_arch_get_supported_cpuid(s, 0xA, count, R_EAX);
            *ebx = kvm_arch_get_supported_cpuid(s, 0xA, count, R_EBX);
            *ecx = kvm_arch_get_supported_cpuid(s, 0xA, count, R_ECX);
            *edx = kvm_arch_get_supported_cpuid(s, 0xA, count, R_EDX);
        } else if (hvf_enabled() && cpu->enable_pmu) {
            *eax = hvf_get_supported_cpuid(0xA, count, R_EAX);
            *ebx = hvf_get_supported_cpuid(0xA, count, R_EBX);
            *ecx = hvf_get_supported_cpuid(0xA, count, R_ECX);
            *edx = hvf_get_supported_cpuid(0xA, count, R_EDX);

while this doesn't:

    if ((env->features[FEAT_7_0_EBX] & CPUID_7_0_EBX_INTEL_PT) &&
        kvm_enabled()) {
        KVMState *s = CPU(cpu)->kvm_state;
        uint32_t eax_0 = kvm_arch_get_supported_cpuid(s, 0x14, 0, R_EAX);
        uint32_t ebx_0 = kvm_arch_get_supported_cpuid(s, 0x14, 0, R_EBX);
        uint32_t ecx_0 = kvm_arch_get_supported_cpuid(s, 0x14, 0, R_ECX);
        uint32_t eax_1 = kvm_arch_get_supported_cpuid(s, 0x14, 1, R_EAX);
        uint32_t ebx_1 = kvm_arch_get_supported_cpuid(s, 0x14, 1, R_EBX);

But that's okay; it's -O0, so we give clang a pass for that. Note that
clang does do the optimization even in more complex cases, like:

    extern _Bool f(void);
    int main()
    {
        if (!0) return 0;
        if (!f()) return 0;
    }
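
Indeed, that example links at -O0 even though f is never defined anywhere,
which shows the call really was dropped (assuming the snippet is saved as
dead.c; the file name is just for illustration):

$ clang -O0 dead.c -o dead && echo linked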

The problem is that there is a kvm-stub.c entry for that, and in fact
my compilation passes and the symbol is resolved correctly:

$ nm target/i386/cpu.o |grep kvm_.*get_sup
                 U kvm_arch_get_supported_cpuid
$ nm target/i386/kvm-stub.o|grep kvm_.*get_sup
0000000000000030 T kvm_arch_get_supported_cpuid
$ nm qemu-system-x86_64 |grep kvm_.*get_sup
000000000046eab0 T kvm_arch_get_supported_cpuid
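
For reference, the stub in question looks roughly like this (quoted from
memory and simplified, so take the exact signature with a grain of salt):

    /* target/i386/kvm-stub.c, simplified sketch */
    #ifndef __OPTIMIZE__
    /* Only referenced inside conditionals that the compiler is expected
     * to optimize out when KVM is disabled, so it can never be reached. */
    uint32_t kvm_arch_get_supported_cpuid(KVMState *s, uint32_t function,
                                          uint32_t index, int reg)
    {
        abort();
    }
    #endif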

As expected, something much less obvious is going on for you; in
particular, __OPTIMIZE__ seems not to be working properly. However, that
would also be very surprising.
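
A quick way to check that is to ask the preprocessor directly; both gcc and
clang define __OPTIMIZE__ at -O1 and above, but not at -O0:

$ clang -O2 -dM -E - < /dev/null | grep __OPTIMIZE__
#define __OPTIMIZE__ 1
$ clang -O0 -dM -E - < /dev/null | grep __OPTIMIZE__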

Please:

1) run the last two "nm" commands on your build (without grep).

2) do the same exercise to get a .i for target/i386/kvm-stub.c (one
possible way is sketched after this list).

3) try removing the "#ifndef __OPTIMIZE__" while leaving everything else as
is, and see if it works. There is no need to play with macros, which also
goes to show that you didn't really understand what's going on. That's fine,
but then please refrain from making summary judgments, which only lengthen
the discussion.
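
For step 2, one possible way to get the preprocessed file (a sketch; this
assumes the usual make-based QEMU build, and the exact paths depend on your
tree):

$ touch target/i386/kvm-stub.c
$ make V=1 2>&1 | grep kvm-stub.c

then re-run the printed compile command by hand, replacing -c with -E and
the -o object with -o kvm-stub.i.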

Thanks,

Paolo
