
Re: [RFC PATCH 02/16] accel/tcg: memory access from CPU will pass access_type to IOMMU


From: Jim Shu
Subject: Re: [RFC PATCH 02/16] accel/tcg: memory access from CPU will pass access_type to IOMMU
Date: Thu, 13 Jun 2024 17:52:18 +0800

Hi Ethan,

Yes, this mechanism can handle that situation.

The important part is that the translate() function of the
IOMMUMemoryRegion returns only the section and permission that match
the CPU access type. For example, if wgChecker permits only RO access,
translate() will return "downstream_as" with RO permission for a read,
or "blocked_io_as" with WO permission for a write, depending on the
CPU access type.
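
To make that concrete, here is a minimal sketch (assuming QEMU's
IOMMUMemoryRegionClass::translate() signature; this is not the actual
wgChecker patch) of a translate() hook that picks the target
AddressSpace and permission from the access type. The WgCheckerRegion
struct and its downstream_as/blocked_io_as/allowed fields are
hypothetical names used only for illustration:

/*
 * Hypothetical checker region: "downstream_as" is where permitted
 * accesses go, "blocked_io_as" is where blocked accesses are routed.
 */
#include "qemu/osdep.h"
#include "exec/memory.h"

typedef struct WgCheckerRegion {
    IOMMUMemoryRegion iommu;      /* IOMMU MR seen by the CPU          */
    AddressSpace *downstream_as;  /* real downstream device/memory     */
    AddressSpace *blocked_io_as;  /* error/blocked address space       */
    IOMMUAccessFlags allowed;     /* e.g. IOMMU_RO if only reads pass  */
} WgCheckerRegion;

static IOMMUTLBEntry wgc_translate(IOMMUMemoryRegion *iommu, hwaddr addr,
                                   IOMMUAccessFlags flag, int iommu_idx)
{
    WgCheckerRegion *r = container_of(iommu, WgCheckerRegion, iommu);
    bool permitted = (flag & r->allowed) == flag;

    IOMMUTLBEntry ret = {
        /* Route to the real target only if this access type is allowed. */
        .target_as = permitted ? r->downstream_as : r->blocked_io_as,
        .iova = addr & ~(hwaddr)0xFFF,            /* 4 KiB granule, for  */
        .translated_addr = addr & ~(hwaddr)0xFFF, /* illustration only   */
        .addr_mask = 0xFFF,
        /*
         * Grant only the permission of the requesting access type, so a
         * later access of the other type misses and re-translates.
         */
        .perm = flag,
    };
    return ret;
}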

When the CPU access type differs from the one that filled the entry,
the access takes a TLB miss because the cached permission does not
match. The CPU then translates again, finds the section for the new
access type, and fills it into the IOTLB (evicting the previous IOTLB
entry).

This mechanism performs poorly when the CPU alternates read and write
accesses to an IOMMUMemoryRegion with RO/WO permission, since every
switch takes a TLB miss. I think this is a limitation of CPUTLBEntry,
which cannot hold a different section for each permission, but at
least the behavior is correct.
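
As a purely illustrative toy model (standalone C, nothing to do with
the real cputlb.c data structures; all names are made up), this shows
why caching a single section per TLB entry thrashes when reads and
writes resolve to different sections:

#include <stdbool.h>
#include <stdio.h>

typedef enum { ACC_READ, ACC_WRITE } AccessType;

typedef struct {
    bool       valid;
    AccessType filled_for;   /* access type that filled this entry     */
    int        section_idx;  /* only one section can be cached at once */
} ToyTLBEntry;

/* RO-only region: reads go downstream, writes go to the blocked AS. */
static int translate_section(AccessType type)
{
    return type == ACC_READ ? 0 /* downstream_as */ : 1 /* blocked_io_as */;
}

static int access_once(ToyTLBEntry *e, AccessType type, int *misses)
{
    if (!e->valid || e->filled_for != type) {
        (*misses)++;                  /* permission mismatch -> TLB miss */
        e->valid = true;              /* refill, evicting the old entry  */
        e->filled_for = type;
        e->section_idx = translate_section(type);
    }
    return e->section_idx;
}

int main(void)
{
    ToyTLBEntry e = { 0 };
    int misses = 0;

    /* Alternating read/write to the same page: every access misses. */
    for (int i = 0; i < 8; i++) {
        access_once(&e, (i & 1) ? ACC_WRITE : ACC_READ, &misses);
    }
    printf("8 alternating accesses -> %d TLB misses\n", misses);
    return 0;
}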

Thanks,
Jim



On Thu, Jun 13, 2024 at 1:34 PM Ethan Chen <ethan84@andestech.com> wrote:
>
> On Wed, Jun 12, 2024 at 04:14:02PM +0800, Jim Shu wrote:
> >
> > It is the preparation patch for upcoming RISC-V wgChecker device.
> >
> > Since RISC-V wgChecker could permit access with RO/WO permission, the
> > IOMMUMemoryRegion could return different sections for read and write
> > accesses. Memory accesses from the CPU should also pass the access_type
> > to the IOMMU translate function so that the IOMMU can return the correct
> > section for the specified access_type.
> >
>
> Hi Jim,
>
> Does this method take into account the situation where the CPU access type
> is different from the access type used when the IOTLB entry was created? I
> think the section might be wrong in that situation.
>
> Thanks,
> Ethan


