From: Peter Maydell
Subject: Re: [Qemu-devel] [RFC 0/3] target/m68k: convert to transaction_failed hook
Date: Tue, 11 Dec 2018 19:29:13 +0000

On Tue, 11 Dec 2018 at 19:13, Mark Cave-Ayland
<address@hidden> wrote:
> On 10/12/2018 16:56, Peter Maydell wrote:
> > Anyway, I send it out as a skeleton for comments, because
> > it would be nice to get rid of the old unassigned_access
> > hook, which is fundamentally broken (it's still used by m68k,
> > microblaze, mips and sparc).
>
> Laurent is really the expert here (my work on the q800 was purely on
> the device side), however is this also a nudge to see if the
> unassigned_access hook can be eliminated from sparc too? ;)

It would certainly be great to convert sparc too;
it and mips are a little more complicated than these
ones, but the principle is the same:
 * helper functions in target/sparc which call
   cpu_unassigned_access() should be changed to call
   some sparc-internal function to raise the right
   exception
 * callsites in target/sparc which do loads or stores
   by physical address should be checked to ensure they
   do the right thing when a bus error is detected;
   this usually means changing them to use address_space_*
   functions and check they return MEMTX_OK; there's a rough
   sketch of this just below the list. (With the
   old unassigned_access hook these would result in calls
   to the hook, which was often the wrong thing anyway.
   The transaction_failed hook is called only for accesses
   via the TCG MMU.) The docs/devel/loads-stores.rst docs
   have some handy regexes for use with 'git grep'; for sparc
   these catch everything:
     git grep '\<ldu\?[bwlq]\(_[bl]e\)\?_phys\>' target/sparc/
     git grep '\<st[bwlq]\(_[bl]e\)\?_phys\>' target/sparc/
 * convert the hook itself: this requires a little fiddling
   of parameters, and the addition of the cpu_restore_state()
   call; there's a skeleton of this further down
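
To make the loads-and-stores point a bit more concrete, a converted
callsite ends up looking roughly like this. This is only an untested
sketch: the helper name and the choice of TT_DATA_ACCESS are made up,
and the error path is really just whatever sparc-internal
"raise the right exception" function falls out of the first bullet:

  /* Hypothetical sketch, not actual target/sparc code. */
  static uint32_t sparc_ldl_phys_checked(CPUSPARCState *env, hwaddr addr,
                                         uintptr_t retaddr)
  {
      CPUState *cs = CPU(sparc_env_get_cpu(env));
      MemTxResult res;
      uint32_t val;

      /* Do the load via the address_space_* API so we can see whether
       * the bus transaction actually succeeded... */
      val = address_space_ldl(cs->as, addr, MEMTXATTRS_UNSPECIFIED, &res);
      if (res != MEMTX_OK) {
          /* ...and raise the architecturally right exception ourselves
           * instead of relying on the unassigned_access hook. */
          cpu_raise_exception_ra(env, TT_DATA_ACCESS, retaddr);
      }
      return val;
  }

Stores go the same way with address_space_stl() and friends, checking
the MemTxResult they hand back.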

(MIPS has some odd board-specific handling on top of that
which will need to be fixed too.)
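
For the last bullet, the hook itself, the rough shape (going by the
m68k patches; the sparc trap number and fault-register updates below
are just placeholders I haven't thought about) is:

  static void sparc_cpu_do_transaction_failed(CPUState *cs,
                                              hwaddr physaddr, vaddr addr,
                                              unsigned size,
                                              MMUAccessType access_type,
                                              int mmu_idx, MemTxAttrs attrs,
                                              MemTxResult response,
                                              uintptr_t retaddr)
  {
      /* Sync the guest CPU state back from TCG-generated code before
       * we raise the exception. */
      cpu_restore_state(cs, retaddr, true);

      /* TODO: record the fault address/status wherever the architecture
       * wants it and pick the right trap type; TT_DATA_ACCESS is just a
       * stand-in here. */
      cs->exception_index = TT_DATA_ACCESS;
      cpu_loop_exit(cs);
  }

plus pointing cc->do_transaction_failed at it in the class init and
dropping the old cc->do_unassigned_access assignment.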

thanks
-- PMM


