From: Auger Eric
Subject: Re: [Qemu-arm] [Qemu-devel] Expand ECAM region in machvirt 2_13?
Date: Wed, 2 May 2018 16:38:54 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.4.0

Hi Laszlo, Ard,

On 05/02/2018 04:23 PM, Ard Biesheuvel wrote:
> On 2 May 2018 at 15:54, Laszlo Ersek <address@hidden> wrote:
>> On 05/02/18 14:34, Ard Biesheuvel wrote:
>>> On 2 May 2018 at 13:31, Laszlo Ersek <address@hidden> wrote:
>>>> On 05/01/18 17:59, Auger Eric wrote:
>>>>> Hi,
>>>>>
>>>>> I would like to resume the discussion on extending the number of PCI
>>>>> buses to 256 (as in Q35) as a follow-up of past discussions:
>>>>> https://lists.gnu.org/archive/html/qemu-devel/2018-01/msg03631.html.
>>>>>
>>>>> With the current 16 MB ECAM region we are limited to 16 PCIe buses.
>>>>>
>>>>> Could we envision having a 256 MB ECAM region, moved to another
>>>>> location beyond 256 GB, in the virt 2.13 machine type?
>>>>>
>>>>> The current ECAM range at [0x3f000000, 0x40000000] would be kept
>>>>> unchanged for legacy machine types and when vms->highmem is set to
>>>>> false. Migration from <2.13 to >=2.13 would be allowed, whereas
>>>>> migration from >=2.13 to <2.13 wouldn't.
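
To make the proposal concrete, here is a rough sketch of the memory-map
shape I have in mind; VIRT_HIGH_PCIE_ECAM and the exact base address are
placeholders for illustration, not actual hw/arm/virt.c code:

#include <stdint.h>

/* Sketch only: a virt memory map with a second, larger ECAM window. */
typedef struct MemMapEntry {
    uint64_t base;
    uint64_t size;
} MemMapEntry;

enum { VIRT_PCIE_ECAM, VIRT_HIGH_PCIE_ECAM };

static const MemMapEntry ecam_map[] = {
    /* legacy window: 16 MB => 16 buses, kept for old machine types
     * and for highmem=off */
    [VIRT_PCIE_ECAM]      = { 0x3f000000,      0x01000000 },
    /* proposed window: 256 MB => 256 buses, somewhere beyond 256 GB */
    [VIRT_HIGH_PCIE_ECAM] = { 0x4000000000ULL, 0x10000000 },
};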
>>>>
>>>> If I understand correctly, the idea is to *move* the current one
>>>> range, if the virt machine type is >= 2.13 and highmem is set to true
>>>> (which is the default IIUC, from 2.12 onward).
>>>>
>>>> For 64-bit (AARCH64) ArmVirtQemu, that should work fine. The firmware
>>>> takes the ECAM base and size from the "pci-host-ecam-generic" DT
>>>> node, property "reg", uint64_t elements #0 and #1. (Sorry if this
>>>> isn't exact DT lingo, I'm paraphrasing the firmware source code.) If
>>>> the QEMU patch just changes the values, that should work
>>>> transparently.
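
For reference, here is a rough libfdt-style illustration of what "taking
the base and size from the first reg tuple" amounts to; this is not the
ArmVirtQemu code itself, just a sketch assuming #address-cells =
#size-cells = 2:

#include <libfdt.h>
#include <stdint.h>

/* Illustration only: read the ECAM base and size from elements #0 and #1
 * of the "reg" property of the pci-host-ecam-generic node. */
static int get_ecam_window(const void *fdt, uint64_t *base, uint64_t *size)
{
    int node, len;
    const fdt64_t *reg;

    node = fdt_node_offset_by_compatible(fdt, -1, "pci-host-ecam-generic");
    if (node < 0) {
        return node;
    }

    reg = fdt_getprop(fdt, node, "reg", &len);
    if (!reg || len < 2 * (int)sizeof(fdt64_t)) {
        return -FDT_ERR_NOTFOUND;
    }

    *base = fdt64_to_cpu(reg[0]);   /* element #0: ECAM base address */
    *size = fdt64_to_cpu(reg[1]);   /* element #1: ECAM window size  */
    return 0;
}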
>>>>
>>>> For 32-bit (ARM) ArmVirtQemu, this change (the new ECAM default)
>>>> could be a problem. PCI stuff in the firmware wouldn't work unless
>>>> people specified highmem=off on the QEMU command line.
>>>>
>>>> Now, I notice highmem defaults to "on" starting with 2.12 even for
>>>> "qemu-system-arm -M virt", not just "qemu-system-aarch64 -M virt", so
>>>> why doesn't that already cause a problem with PCI in the 32-bit guest
>>>> fw?
>>>>
>>>> Because currently "highmem" only controls the presence of the 64-bit
>>>> PCI MMIO aperture for BAR allocation; it has no effect on config
>>>> space. And if the 64-bit PCI MMIO aperture is exposed to the 32-bit
>>>> guest firmware, the latter simply ignores the former, and works with
>>>> the 32-bit aperture solely (which is always there).
>>>>
>>>> So, for "qemu-system-arm -M virt" compatibility, I think we might
>>>> need a separate machine type property, which should default to "on"
>>>> only on qemu-system-aarch64 (if such distinctions are allowed).
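
As a sketch, the defaulting policy could look like the helper below; the
"highmem-ecam" property name and the helper itself are hypothetical, not
existing QEMU code:

#include <stdbool.h>

/* Hypothetical: default a new "highmem-ecam" property to "on" only where
 * the 64-bit firmware runs, i.e. qemu-system-aarch64 with highmem enabled
 * and a 2.13+ machine type. */
static bool highmem_ecam_default(bool aarch64, bool highmem,
                                 bool virt_2_13_or_newer)
{
    return aarch64 && highmem && virt_2_13_or_newer;
}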
>>>>
>>>> Of course, I can't tell whether the 32-bit ArmVirtQemu firmware can
>>>> run on "qemu-system-aarch64 -M virt". (I think it can; I recall
>>>> something about ARMv8 having ARMv7 compat, but I don't remember ever
>>>> trying.) If that's the case, then even the above
>>>> suggestion won't work, because it would break 32-bit guest fw that
>>>> the user has run (for whatever reason) on "qemu-system-aarch64 -M
>>>> virt". In this case, I believe we can't just change the contents of
>>>> the current "pci-host-ecam-generic" node, but we should implement
>>>> some structural DTB addition that old firmware will simply not
>>>> notice, while new (64-bit) firmware will specifically look for (and
>>>> prefer over the old DT stuff).
>>>>
>>>> Ard, what's your take? (Sorry if you've already followed up, my email
>>>> processing lags.)
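
To make the "structural DTB addition" idea concrete: QEMU could expose the
relocated window as a second, separately named ECAM node that existing
firmware has no reason to parse. The node name, the compatible string and
the values below are hypothetical, just a sketch against QEMU's FDT
helpers:

#include "qemu/osdep.h"
#include "sysemu/device_tree.h"

/* Sketch only: advertise the 256 MB ECAM window in a new DT node that old
 * firmware simply ignores; new 64-bit firmware would look for the
 * (hypothetical) compatible string explicitly. */
static void fdt_add_high_ecam_node(void *fdt, uint64_t base, uint64_t size)
{
    char *nodename = g_strdup_printf("/pcie-ecam-high@%" PRIx64, base);

    qemu_fdt_add_subnode(fdt, nodename);
    qemu_fdt_setprop_string(fdt, nodename, "compatible",
                            "qemu,pcie-ecam-high");
    /* two address cells and two size cells, as for the existing node */
    qemu_fdt_setprop_sized_cells(fdt, nodename, "reg", 2, base, 2, size);
    g_free(nodename);
}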
>>>>
>>>
>>> Do we have any examples of ACPI platforms where the config space is
>>> mapped above 4 GB? I'd like to make sure that all existing code copes
>>> with that before even considering it.
>>
>> Well, we could consider this virtual machine feature a way to root out
>> any 64-bit bugs that lurk in code that consumes ECAM :) That would help
>> physical platforms. It means that we shouldn't enable the feature by
>> default, in 2.13 at least.
>>
>> Anyway, I've just checked my oldie A3 Mustang for this (it uses UEFI and
>> ACPI), and surprisingly, it does put the ECAM range above 4GB:
>>
>> [    0.000000] ACPI: MCFG 0x00000043FA690000 00003C (v01 APM    XGENE    00000002 INTL 20140724)
>> [    0.088654] ACPI: MCFG table detected, 1 entries
>> [    0.126613] acpi PNP0A08:00: MCFG quirk: ECAM at [mem 0xe0d0000000-0xe0dfffffff] for [bus 00-ff] with xgene_v1_pcie_ecam_ops
>> [    0.127552] acpi PNP0A08:00: [Firmware Bug]: ECAM area [mem 0xe0d0000000-0xe0dfffffff] not reserved in ACPI namespace
>> [    0.127601] acpi PNP0A08:00: ECAM at [mem 0xe0d0000000-0xe0dfffffff] for [bus 00-ff]
>>
>> The base address is 899 GB + 256 MB.
>>
>> My kernel is 4.11.0-44.6.1.el7a.aarch64.
>>
> 
> Interesting. So Linux deals with that fine. How about the missing
> PNP0C02 device:
> 
> Device (RES0)
> {
>    Name (_CID, "PNP0C02")
>    Name (_CRS, ResourceTemplate () {
>      Memory32Fixed (ReadWrite, 0x... , 0x1000000)
>    })
> }
> 
> Anyone care to venture a guess how one expresses this, given that
> Memory64Fixed does not appear to exist?
> 
> (Perhaps our QEMU code only needs a minor tweak here, but I honestly don't 
> know)

Thank you both for your input.

Maybe we can use aml_qword_memory(), as is done for the highmem MMIO
aperture? I will give this a try.
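
Roughly, something like the following in the DSDT generation; an untested
sketch with placeholder base/size, using the QWord descriptor since the
new ECAM base would sit above 4 GB:

#include "qemu/osdep.h"
#include "hw/acpi/aml-build.h"

/* Sketch only: describe the relocated ECAM window to the OS as a PNP0C02
 * motherboard resource, mirroring what is done for the high MMIO window. */
static void add_ecam_reservation(Aml *scope, uint64_t ecam_base,
                                 uint64_t ecam_size)
{
    Aml *dev = aml_device("RES0");
    Aml *crs = aml_resource_template();

    aml_append(dev, aml_name_decl("_HID", aml_string("PNP0C02")));
    aml_append(crs,
        aml_qword_memory(AML_POS_DECODE, AML_MIN_FIXED, AML_MAX_FIXED,
                         AML_NON_CACHEABLE, AML_READ_WRITE,
                         0x0000,                     /* granularity */
                         ecam_base,                  /* range min   */
                         ecam_base + ecam_size - 1,  /* range max   */
                         0x0000,                     /* translation */
                         ecam_size));                /* length      */
    aml_append(dev, aml_name_decl("_CRS", crs));
    aml_append(scope, dev);
}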

Thanks

Eric