
From: Peter Maydell
Subject: Re: [PATCH] hvf: arm: Add simple dirty bitmap tracking
Date: Tue, 15 Feb 2022 19:27:44 +0000

On Thu, 3 Feb 2022 at 14:23, Alexander Graf <agraf@csgraf.de> wrote:
>
> The actual tracking of dirty bitmap updates is happening in architecture code.
> So far, the aarch64 hvf code has not updated QEMU's dirty bitmap at all. The
> net result of that is that the VGA device's framebuffer would not update.
>
> This patch adds simplistic dirty bitmap updates. Unfortunately hvf can only set
> permissions per full region, so we have to mark the complete region as dirty
> when only a single byte was modified inside.
>
> We also handle the write protect update logic before we handle any writes.
> This allows us to even handle non-ISV instructions on dirty logging enabled
> memory regions: Once we flip the region to writable again, we just rerun
> the trapping instruction.
>
> Signed-off-by: Alexander Graf <agraf@csgraf.de>
> ---
>  target/arm/hvf/hvf.c | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
>
> diff --git a/target/arm/hvf/hvf.c b/target/arm/hvf/hvf.c
> index 0dc96560d3..92ad0d29c4 100644
> --- a/target/arm/hvf/hvf.c
> +++ b/target/arm/hvf/hvf.c
> @@ -1163,6 +1163,28 @@ int hvf_vcpu_exec(CPUState *cpu)
>              break;
>          }
>
> +        /*
> +         * Dirty log updates work without isv as well. We just run the write
> +         * again with write permissions set. So handle them before the assert.
> +         */

I'm not 100% sure, but I think that this check should come before
the "is this fault caused by a cache maintenance operation, if so
skip it" check, not after. Some cache operations (eg DC IVAC)
require write permission (as opposed to merely announcing themselves
in the syndrome WnR as writes), and we want to really execute them,
not skip them, if we're doing a cache op to RAM rather than to MMIO space.
(My somewhat shaky reading of the KVM kernel sources is that it
effectively does this too.)
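
Concretely, the ordering I have in mind is something like the sketch
below (names assumed: I'm taking it that the data abort handling
already extracts the cache-maintenance bit of the syndrome, ESR bit 8,
into a local 'cm'):

    if (iswrite) {
        /* dirty-log write fault: mark the slot dirty, re-enable
         * writes and break so the faulting insn is re-executed */
    }
    if (cm) {
        /* cache maintenance op to non-RAM: skip the insn */
    }
    assert(isv);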

> +        if (iswrite) {
> +            uint64_t gpa = hvf_exit->exception.physical_address;
> +            hvf_slot *slot = hvf_find_overlap_slot(gpa, 1);
> +
> +            if (slot && slot->flags & HVF_SLOT_LOG) {
> +                /*
> +                 * HVF can only set a full region's permissions, so let's just
> +                 * mark the full region as dirty.
> +                 */
> +                memory_region_set_dirty(slot->region, 0, slot->size);
> +                hv_vm_protect(slot->start, slot->size, HV_MEMORY_READ |
> +                              HV_MEMORY_WRITE | HV_MEMORY_EXEC);

This looks OK I think, but it is also not the same as the x86 hvf
code. There we have:

    if (write && slot) {
        if (slot->flags & HVF_SLOT_LOG) {
            memory_region_set_dirty(slot->region, gpa - slot->start, 1);
            hv_vm_protect((hv_gpaddr_t)slot->start, (size_t)slot->size,
                          HV_MEMORY_READ | HV_MEMORY_WRITE);
        }
    }

which only marks 'dirty' the 1 byte at the address being accessed,
not the entire slot. Is that a bug in the x86 hvf code, or something
we should also be doing here ?
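
(For comparison, the byte-granular equivalent on the arm side would
presumably just be

    memory_region_set_dirty(slot->region, gpa - slot->start, 1);

instead of marking the whole slot.)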

Also, when we initially set up slots, we pick the permissions like this:
    if (area->readonly ||
        (!memory_region_is_ram(area) && memory_region_is_romd(area))) {
        flags = HV_MEMORY_READ | HV_MEMORY_EXEC;
    } else {
        flags = HV_MEMORY_READ | HV_MEMORY_WRITE | HV_MEMORY_EXEC;
    }

Are we guaranteed that we never try to mark the non-writeable
(because readonly or romd) slots for dirty-logging, or do we need
to somehow track whether this is a read-only slot so we don't
incorrectly enable writes for it?

(It does make sense that dirty-logging on a read-only area is a bit
pointless, so maybe we do guarantee never to do it.)
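
If it turns out we do need to track it, one possibility (purely a
sketch, assuming memory_region_is_rom()/memory_region_is_romd() are
enough to identify the slots we mapped without HV_MEMORY_WRITE) would
be to guard the re-protect:

    /* hypothetical: never re-enable writes on regions we originally
     * mapped read-only */
    if (!memory_region_is_rom(slot->region) &&
        !memory_region_is_romd(slot->region)) {
        memory_region_set_dirty(slot->region, 0, slot->size);
        hv_vm_protect(slot->start, slot->size, HV_MEMORY_READ |
                      HV_MEMORY_WRITE | HV_MEMORY_EXEC);
    }

but if dirty logging is never enabled on those slots in the first
place then of course none of that is needed.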

> +
> +                /* Run the same instruction again, without write faulting */
> +                break;
> +            }
> +        }

thanks
-- PMM


