
From: Igor Mammedov
Subject: Re: [edk2-devel] [Qemu-devel] [PATCH 1/2] q35: implement 128K SMRAM at default SMBASE address
Date: Tue, 24 Sep 2019 13:19:36 +0200

On Mon, 23 Sep 2019 20:35:02 +0200
"Laszlo Ersek" <address@hidden> wrote:

> On 09/20/19 11:28, Laszlo Ersek wrote:
> > On 09/20/19 10:28, Igor Mammedov wrote:  
> >> On Thu, 19 Sep 2019 19:02:07 +0200
> >> "Laszlo Ersek" <address@hidden> wrote:
> >>  
> >>> Hi Igor,
> >>>
> >>> (+Brijesh)
> >>>
> >>> long-ish pondering ahead, with a question at the end.  
> >> [...]
> >>  
> >>> Finally: can you please remind me why we lock down 128KB (32 pages) at
> >>> 0x3_0000, and not just half of that? What do we need the range at
> >>> [0x4_0000..0x4_FFFF] for?  
> >>
> >>
> >> If I recall correctly, the CPU consumes 64K for its save/restore area.
> >> The remaining 64K is temporary RAM for use in the SMI relocation
> >> handler; if it's possible to get away without it, we can drop it and
> >> lock only the 64K required for the CPU state. That won't help with the
> >> SEV conflict though, as that conflict is in the first 64K.  
> > 
> > OK. Let's go with 128KB for now. Shrinking the area is always easier
> > than growing it.
> >   
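
For reference, a rough sketch of the layout under discussion, assuming the
Intel SDM defaults (default SMBASE of 0x30000, handler entry at
SMBASE + 0x8000, save state area at SMBASE + 0xFE00); the constant names
below are illustrative only and are not taken from either patch series:

    /*
     * Illustrative constants only -- names are not from the QEMU or edk2
     * patches; values follow the Intel SDM defaults.
     */
    #define SMBASE_DEFAULT          0x30000   /* reset value of SMBASE */
    #define SMRAM_AT_SMBASE_SIZE    0x20000   /* 128 KiB: 0x30000..0x4FFFF */

    /* First 64 KiB: consumed by the CPU itself on SMI delivery. */
    #define SMI_HANDLER_ENTRY       (SMBASE_DEFAULT + 0x8000)  /* entry point */
    #define SMM_SAVE_STATE_AREA     (SMBASE_DEFAULT + 0xFE00)  /* ..0x3FFFF */

    /* Second 64 KiB: 0x40000..0x4FFFF, scratch RAM for the relocation
     * handler; this is the part that could potentially be dropped. */
    #define SMI_RELOC_SCRATCH_BASE  (SMBASE_DEFAULT + 0x10000)
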
> >> On the QEMU side, we could drop the black-hole approach and allocate a
> >> dedicated SMRAM region, which explicitly gets mapped into the RAM
> >> address space and, after SMI handler initialization, gets unmapped
> >> (locked), so that SMRAM would be accessible only from the SMM context.
> >> That way, RAM at 0x30000 could be used as normal while SMRAM is
> >> unmapped.  
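
A rough sketch of that alternative follows, using QEMU's MemoryRegion API.
This is not the actual patch: the function names, the owner/wiring, and the
SMM-only view (not shown) are assumptions, and only the map/unmap idea from
the paragraph above is illustrated.

    #include "qemu/osdep.h"
    #include "qemu/units.h"
    #include "qapi/error.h"
    #include "exec/memory.h"

    static MemoryRegion smbase_smram;

    /* Map 128 KiB of dedicated SMRAM over normal RAM at the default SMBASE,
     * so firmware can set up the relocation handler there. */
    static void smbase_smram_map(MemoryRegion *system_memory, Object *owner)
    {
        memory_region_init_ram(&smbase_smram, owner, "smbase-smram",
                               128 * KiB, &error_fatal);
        memory_region_add_subregion_overlap(system_memory, 0x30000,
                                            &smbase_smram, 1);
    }

    /* After SMI handler initialization: hide the region from the normal
     * address space; only an SMM-only mapping (not shown here) would keep
     * it reachable, and RAM at 0x30000 becomes usable as usual again. */
    static void smbase_smram_unmap(void)
    {
        memory_region_set_enabled(&smbase_smram, false);
    }
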
> > 
> > I prefer the black-hole approach, introduced in your current patch
> > series, if it can work. Way less opportunity for confusion.
> > 
> > I've started work on the counterpart OVMF patches; I'll report back.  
> 
> I've got good results. For this (1/2) QEMU patch:
> 
> Tested-by: Laszlo Ersek <address@hidden>
> 
> I tested the following scenarios. In every case, I verified the OVMF
> log, and also the "info mtree" monitor command's result (i.e. whether
> "smbase-blackhole" / "smbase-window" were disabled or enabled). Mostly,
> I diffed these text files between the test scenarios (looking for
> desired / undesired differences). In the Linux guests, I checked /
> compared the dmesg too (wrt. the UEFI memmap).
> 
> - unpatched OVMF (regression test), Fedora guest, normal boot and S3
> 
> - patched OVMF, but feature disabled with "-global mch.smbase-smram=off"
> (another regression test), Fedora guest, normal boot and S3
> 
> - patched OVMF, feature enabled, Fedora and various Windows guests
> (win7, win8, win10 families, client/server), normal boot and S3
> 
> - a subset of the above guests, with S3 disabled (-global
>   ICH9-LPC.disable_s3=1), and obviously S3 resume not tested
> 
> SEV: I used a 5.2-ish Linux guest, with S3 disabled (there is no support
> for that under SEV right now):
> 
> - unpatched OVMF (regression test), normal boot
> 
> - patched OVMF but feature disabled on the QEMU cmdline (another
> regression test), normal boot
> 
> - patched OVMF, feature enabled, normal boot.
> 
> I plan to post the OVMF patches tomorrow, for discussion.
> 
> (It's likely too early to push these QEMU / edk2 patches right now -- we
> don't know yet if this path will take us to the destination. For now, it
> certainly looks great.)

Laszlo, thanks for trying it out.
It's nice to hear that the approach is somewhat usable.
Hopefully we won't have to invent a 'paused' CPU mode.

Please CC me on your patches
(not that I qualify to review them,
but maybe I could learn a thing or two from them).

> Thanks
> Laszlo
> 
