
Re: [PATCH v4 0/3] nvdimm: Enable sync-dax property for nvdimm


From: Aneesh Kumar K.V
Subject: Re: [PATCH v4 0/3] nvdimm: Enable sync-dax property for nvdimm
Date: Tue, 4 May 2021 14:32:50 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.1

On 5/4/21 11:13 AM, Pankaj Gupta wrote:
....


What this patch series did was to express that property via a device-tree node, and the guest driver then enables a hypercall-based flush mechanism to ensure persistence.
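
For illustration, a minimal sketch of what that guest-side hook could look like; the names guest_region_flush() and hcall_scm_flush() are made up for this sketch and are not the actual papr_scm code:

/* Conceptual sketch only: when the device-tree node says the region
 * cannot be made persistent with CPU cache flush instructions alone,
 * the guest driver registers a region flush hook that traps to the
 * hypervisor instead. */
static int guest_region_flush(uint64_t region_token)
{
        /* hcall_scm_flush() stands in for the hypervisor flush call;
         * the host then syncs the backing file to stable storage. */
        return hcall_scm_flush(region_token) ? -EIO : 0;
}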

Would a VIRTIO-based mechanism (entirely asynchronous, no trap on the host side) be better than a hypercall-based one? Registering memory can be done either way. We implemented the virtio-pmem flush mechanism with the following considerations (see the sketch after the list):

- Proper semantics for guest flush requests.
- An efficient mechanism from a performance point of view.
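
To make "asynchronous, no trap" concrete, here is a rough sketch with entirely hypothetical names (vpmem_flush(), queue_request(), wait_for_host_ack()); this is not the Linux virtio_pmem driver code:

struct vpmem_flush_req {
        uint32_t type;   /* request: flush the backing file */
        uint32_t ret;    /* response: 0 on success, filled in by the host */
};

/* The guest queues a flush request on the virtqueue and sleeps until the
 * host acknowledges it; the host completes the request asynchronously
 * (e.g. an fsync of the backing file) without holding a trapped vCPU. */
static int vpmem_flush(struct vpmem_dev *dev)
{
        struct vpmem_flush_req req = { .type = VPMEM_REQ_FLUSH };

        queue_request(dev->vq, &req);   /* notify the host and return */
        wait_for_host_ack(&req);        /* guest thread sleeps until done */
        return req.ret ? -EIO : 0;
}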


Sure, virtio-pmem can be used as an alternative.

I am just asking myself: if we already have a platform-agnostic mechanism, maybe we can extend it to suit our needs? Maybe I am missing some points here.


What is being attempted in this series is to indicate to the guest OS that the backing device/file used for the emulated nvdimm device cannot guarantee persistence via CPU cache flush instructions.


On PPC, the default is "sync-dax=writeback", so ND_REGION_ASYNC is set for the region and the guest makes hcalls to issue fsync on the host.
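
On the host side the flush boils down to an fsync of the backing file. A minimal, self-contained sketch, assuming the backing file descriptor is already at hand (the function name is made up for illustration):

#include <errno.h>
#include <unistd.h>

/* Called when the guest requests a flush (via hcall or a virtqueue):
 * fsync() pushes the host page cache of the mmap'ed backing file to
 * stable storage, which cache flush instructions inside the guest
 * cannot guarantee on their own. */
static int host_flush_backing_file(int backing_fd)
{
        if (fsync(backing_fd) < 0)
                return -errno;
        return 0;
}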


Are you suggesting that I keep "unsafe" as the default for all architectures, including PPC, and let the user set it to "writeback" if desired?

No, I am suggesting that "sync-dax" is insufficient to convey this
property. This behavior warrants its own device type, not an ambiguous
property of the memory-backend-file with implicit architecture
assumptions attached.


Why is it insufficient? Is it because other architectures don't have the ability to express this detail to the guest OS? Isn't that an arch limitation?

-aneesh


