qemu-ppc


From: Nikunj A Dadhania
Subject: Re: [Qemu-ppc] [Qemu-devel] [PATCH RFC 1/4] spapr-hcall: take iothread lock during handler call
Date: Sat, 03 Sep 2016 22:01:18 +0530
User-agent: Notmuch/0.21 (https://notmuchmail.org) Emacs/25.0.94.1 (x86_64-redhat-linux-gnu)

Greg Kurz <address@hidden> writes:

> On Fri, 02 Sep 2016 14:58:12 +0530
> Nikunj A Dadhania <address@hidden> wrote:
>
>> Greg Kurz <address@hidden> writes:
>> 
>> > On Fri,  2 Sep 2016 12:02:53 +0530
>> > Nikunj A Dadhania <address@hidden> wrote:
>> >  
>> >> Signed-off-by: Nikunj A Dadhania <address@hidden>
>> >> ---
>> >>  hw/ppc/spapr_hcall.c | 11 +++++++++--
>> >>  1 file changed, 9 insertions(+), 2 deletions(-)
>> >> 
>> >> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> >> index e5eca67..daea7a0 100644
>> >> --- a/hw/ppc/spapr_hcall.c
>> >> +++ b/hw/ppc/spapr_hcall.c
>> >> @@ -1075,20 +1075,27 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
>> >>                               target_ulong *args)
>> >>  {
>> >>      sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
>> >> +    target_ulong ret;
>> >>  
>> >>      if ((opcode <= MAX_HCALL_OPCODE)
>> >>          && ((opcode & 0x3) == 0)) {
>> >>          spapr_hcall_fn fn = papr_hypercall_table[opcode / 4];
>> >>  
>> >>          if (fn) {
>> >> -            return fn(cpu, spapr, opcode, args);
>> >> +            qemu_mutex_lock_iothread();
>> >> +            ret = fn(cpu, spapr, opcode, args);
>> >> +            qemu_mutex_unlock_iothread();
>> >> +            return ret;
>> >>          }
>> >>      } else if ((opcode >= KVMPPC_HCALL_BASE) &&
>> >>                 (opcode <= KVMPPC_HCALL_MAX)) {
>> >>          spapr_hcall_fn fn = kvmppc_hypercall_table[opcode - KVMPPC_HCALL_BASE];
>> >>  
>> >>          if (fn) {
>> >> -            return fn(cpu, spapr, opcode, args);
>> >> +            qemu_mutex_lock_iothread();
>> >> +            ret = fn(cpu, spapr, opcode, args);
>> >> +            qemu_mutex_unlock_iothread();
>> >> +            return ret;
>> >>          }
>> >>      }
>> >>    
>> >
>> > This will serialize all hypercalls, even when it is not needed... Isn't
>> > that too coarse-grained locking?
>> 
>> You are right, I was thinking of doing this only for the emulation case,
>> since it is not needed with hardware acceleration.
>> 
>
> Yes, at the very least. And even in the MTTCG case, shouldn't we serialize
> only when we know I/O will actually happen?

Yes, I haven't yet figured out what else would need protection apart from
I/O. I have started with coarse-grained locking and will fine-tune it once
the other issues are sorted out.

Regards,
Nikunj
