Re: [Qemu-devel] [RFC v1 08/12] cputlb: introduce tlb_flush_* async work


From: Sergey Fedorov
Subject: Re: [Qemu-devel] [RFC v1 08/12] cputlb: introduce tlb_flush_* async work.
Date: Mon, 6 Jun 2016 13:04:50 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0

On 06/06/16 11:54, Alex Bennée wrote:
> Sergey Fedorov <address@hidden> writes:
>
>> On 15/04/16 17:23, Alex Bennée wrote:
>>> diff --git a/cputlb.c b/cputlb.c
>>> index 1412049..42a3b07 100644
>>> --- a/cputlb.c
>>> +++ b/cputlb.c
(snip)
>>> @@ -89,6 +81,34 @@ void tlb_flush(CPUState *cpu, int flush_global)
>>>      env->tlb_flush_addr = -1;
>>>      env->tlb_flush_mask = 0;
>>>      tlb_flush_count++;
>>> +    /* atomic_mb_set(&cpu->pending_tlb_flush, 0); */
>>> +}
>>> +
>>> +static void tlb_flush_global_async_work(CPUState *cpu, void *opaque)
>>> +{
>>> +    tlb_flush_nocheck(cpu, GPOINTER_TO_INT(opaque));
>>> +}
>>> +
>>> +/* NOTE:
>>> + * If flush_global is true (the usual case), flush all tlb entries.
>>> + * If flush_global is false, flush (at least) all tlb entries not
>>> + * marked global.
>>> + *
>>> + * Since QEMU doesn't currently implement a global/not-global flag
>>> + * for tlb entries, at the moment tlb_flush() will also flush all
>>> + * tlb entries in the flush_global == false case. This is OK because
>>> + * CPU architectures generally permit an implementation to drop
>>> + * entries from the TLB at any time, so flushing more entries than
>>> + * required is only an efficiency issue, not a correctness issue.
>>> + */
>>> +void tlb_flush(CPUState *cpu, int flush_global)
>>> +{
>>> +    if (cpu->created) {
>> Why do we check for 'cpu->created' here? Any why don't do that in
>> tlb_flush_page_all()?
> A bunch of random stuff gets kicked off at start-up which was getting in
> the way (cf. arm_cpu_reset and watch/breakpoints). tlb_flush() is
> rather liberally sprinkled around the init code of various CPUs.

Wouldn't we race if tlb_flush() is called with no BQL held? We could just
always queue an async job, since we force each CPU thread to flush its
work queue before starting execution anyway.
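
Roughly something like this (untested sketch, just to illustrate the
idea, reusing tlb_flush_global_async_work() and tlb_flush_nocheck()
from the patch above):

/* Untested sketch: drop the cpu->created check and always defer the
 * flush to the vCPU's work queue.  Even during CPU init this should be
 * safe because each vCPU thread drains its queued work before it
 * starts executing guest code. */
void tlb_flush(CPUState *cpu, int flush_global)
{
    async_run_on_cpu(cpu, tlb_flush_global_async_work,
                     GINT_TO_POINTER(flush_global));
}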

Kind regards,
Sergey

>
>>> +        async_run_on_cpu(cpu, tlb_flush_global_async_work,
>>> +                         GINT_TO_POINTER(flush_global));
>>> +    } else {
>>> +        tlb_flush_nocheck(cpu, flush_global);
>>> +    }
>>>  }
>>>
>>>  static inline void v_tlb_flush_by_mmuidx(CPUState *cpu, va_list argp)
>>>



