From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] Improve QEMU performance with LLVM codegen and other techniques
Date: Thu, 1 Dec 2011 07:46:57 +0000
User-agent: Mutt/1.5.21 (2010-09-15)
On Thu, Dec 01, 2011 at 11:50:24AM +0800, 陳韋任 wrote:
> > I don't see any better approach to debugging this than the one you're
> > already taking. Try to run as many workloads as you can and see if they
> break :). Oh and always make the optimization optional, so that you can
> narrow a failure down to it and know you didn't hit a generic QEMU bug.
>
> You mean make the trace optimization optional? We have tested our framework
> in LLVM-only mode, which means we replace TCG with LLVM entirely. It's _very_
> slow but works.
It would be interesting to use an optimized interpreter instead of TCG,
then go to LLVM for hot traces. This is more HotSpot-like, the idea being
that the interpreter runs through initialization and rarely executed code
without translation overhead, while LLVM kicks in for the hot paths and
high-quality translated code is executed.
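
As a rough sketch of that tiered idea (illustrative only; HOT_THRESHOLD,
interpret_block, compile_block, and run_block are hypothetical names, not
real QEMU or LLVM APIs), the dispatch loop could count how often the
interpreter runs a block and promote it to compiled code once it gets hot:

#include <stdio.h>

/* Hypothetical tiered-execution sketch: interpret a block until it
 * becomes hot, then switch to a "compiled" version.  A real system
 * would hand the hot block to LLVM and get native code back; here a
 * plain C function stands in for the JIT output. */

#define HOT_THRESHOLD 50          /* executions before a block is hot */

typedef int (*compiled_fn)(int);  /* signature of a compiled block */

struct block {
    int exec_count;               /* how often the interpreter ran it */
    compiled_fn native;           /* non-NULL once "compiled"         */
};

/* Tier 0: cheap interpreter for cold code (here it just squares x). */
static int interpret_block(int x)
{
    return x * x;
}

/* Stand-in for the JIT-produced code. */
static int compiled_square(int x)
{
    return x * x;
}

/* Tier 1 stand-in: where LLVM compilation would happen. */
static compiled_fn compile_block(void)
{
    fprintf(stderr, "block became hot, switching to compiled code\n");
    return compiled_square;
}

/* Dispatch: run the block, count executions, promote it once hot. */
static int run_block(struct block *b, int x)
{
    if (b->native) {
        return b->native(x);      /* fast path: already compiled */
    }
    if (++b->exec_count >= HOT_THRESHOLD) {
        b->native = compile_block();
    }
    return interpret_block(x);    /* slow path: interpret */
}

int main(void)
{
    struct block b = { 0, NULL };
    long sum = 0;
    for (int i = 0; i < 100; i++) {
        sum += run_block(&b, i);
    }
    printf("sum = %ld\n", sum);
    return 0;
}

The counter-based promotion is what keeps initialization and rarely
executed code from ever paying the compilation cost, while hot paths
eventually run the higher-quality translated code.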
Stefan