> # qemu-nbd -c /dev/nbd0 file.qcow2
> # mount /dev/nbd0p1 /mnt -o uid=me
> $ # do some changes in /mnt/...
>
Are you sure the changes have made it to the underlying file?
If you do an umount here, a sync() is guaranteed.
If you do not umount, at this point you may have some dirty
writeback.
So the subsequent QEMU instance may find the filesystem in a
"factually out of date" state. I dare not say "inconsistent state",
because this should theoretically be taken care of... maybe...
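For reference, a minimal teardown that is guaranteed to flush everything
back into file.qcow2 (reusing the device and mount point from your
commands above) would be something like:

  # umount /mnt             (forces writeback of the filesystem's dirty pages)
  # qemu-nbd -d /dev/nbd0   (flushes the image and detaches the NBD device)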
I manually flush the system caches to make sure my `file.qcow2` file is up to date.
> $ qemu-system-x86_64 -snapshot -hda file.qcow2 # with locking=off
Read-only access is one thing. Maybe the QEMU instance will cope with
that.
Do you want the QEMU instance to *write* to the filesystem, by any
chance?
No, I don't need it.
`-snapshot` is there to prevent corruption of the setup left in place.
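One way to actually spell that locking=off part, since -hda itself has no
such knob, would be the -drive form; just a sketch, assuming a QEMU new
enough to have image locking at all:

  $ qemu-system-x86_64 -snapshot \
        -drive file=file.qcow2,format=qcow2,file.locking=off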
If you manage to force-mount the filesystem for writing, and you
write some changes to it, how does your "outer host-side instance of
the mounted FS" get to know?
This is a recipe for filesystem breakage :-)
You know, these are exactly the sort of problems that get avoided by
the locking :-)
They also get avoided by "shared access filesystems" such as GFS2 or
OCFS2, or network filesystems such as NFS or CIFS/SMB. Maybe Ceph
is also remotely in this vein, although that is more of a distributed,
clustered FS, and overkill for your scenario.
Frank
In any case, to avoid this kind of problem and make sure I have the right
data, I manually clear the system caches with
`sudo bash -c "sync && sysctl -q vm.drop_caches=3"` before calling QEMU.
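An additional host-side sanity check before starting the guest could be
something like this (just a sketch, assuming a qemu-img recent enough to
have the -U/--force-share flag, which avoids tripping over the image lock
held by the still-connected qemu-nbd; note it checks the qcow2 metadata,
not the guest filesystem):

  $ qemu-img check -U file.qcow2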
regards, lacsaP.