[PATCH 42/60] linux-user: Split out mmap_h_lt_g
From: Richard Henderson
Subject: [PATCH 42/60] linux-user: Split out mmap_h_lt_g
Date: Fri, 1 Mar 2024 13:06:01 -1000
Work much harder to get alignment and mapping beyond the end
of the file correct. Both are exercised by our test-mmap for
alpha (8k pages) on any 4k page host.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Acked-by: Helge Deller <deller@gmx.de>
Message-Id: <20240102015808.132373-23-richard.henderson@linaro.org>
---
linux-user/mmap.c | 184 ++++++++++++++++++++++++++++++++++++++--------
1 file changed, 153 insertions(+), 31 deletions(-)
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index d3556bcc14..ff8f9f7ed0 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -569,6 +569,156 @@ static abi_long mmap_h_eq_g(abi_ulong start, abi_ulong len,
return mmap_end(start, last, start, last, flags, page_flags);
}
+/*
+ * Special case host page size < target page size.
+ *
+ * The two special cases are increased guest alignment, and mapping
+ * past the end of a file.
+ *
+ * When mapping files into a memory area larger than the file,
+ * accesses to pages beyond the file size will cause a SIGBUS.
+ *
+ * For example, if mmapping a file of 100 bytes on a host with 4K
+ * pages emulating a target with 8K pages, the target expects to
+ * be able to access the first 8K. But the host will trap us on
+ * any access beyond 4K.
+ *
+ * When emulating a target with a larger page size than the host's,
+ * we may need to truncate file maps at EOF and add extra anonymous
+ * pages up to the target's page boundary.
+ *
+ * This workaround only works for files that do not change.
+ * If the file is later extended (e.g. ftruncate), the SIGBUS
+ * vanishes and the proper behaviour is that changes within the
+ * anon page should be reflected in the file.
+ *
+ * However, this case is rather common with executable images,
+ * so the workaround is important for even trivial tests, whereas
+ * the mmap of a file being extended is less common.
+ */
+static abi_long mmap_h_lt_g(abi_ulong start, abi_ulong len, int host_prot,
+ int mmap_flags, int page_flags, int fd,
+ off_t offset, int host_page_size)
+{
+ void *p, *want_p = g2h_untagged(start);
+ off_t fileend_adj = 0;
+ int flags = mmap_flags;
+ abi_ulong last, pass_last;
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ struct stat sb;
+
+ if (fstat(fd, &sb) == -1) {
+ return -1;
+ }
+ if (offset >= sb.st_size) {
+ /*
+ * The entire map is beyond the end of the file.
+ * Transform it to an anonymous mapping.
+ */
+ flags |= MAP_ANONYMOUS;
+ fd = -1;
+ offset = 0;
+ } else if (offset + len > sb.st_size) {
+ /*
+ * A portion of the map is beyond the end of the file.
+ * Truncate the file portion of the allocation.
+ */
+ fileend_adj = offset + len - sb.st_size;
+ }
+ }
+
+ if (flags & (MAP_FIXED | MAP_FIXED_NOREPLACE)) {
+ if (fileend_adj) {
+ p = mmap(want_p, len, host_prot, flags | MAP_ANONYMOUS, -1, 0);
+ } else {
+ p = mmap(want_p, len, host_prot, flags, fd, offset);
+ }
+ if (p != want_p) {
+ if (p != MAP_FAILED) {
+ /* Host does not support MAP_FIXED_NOREPLACE: emulate. */
+ do_munmap(p, len);
+ errno = EEXIST;
+ }
+ return -1;
+ }
+
+ if (fileend_adj) {
+ void *t = mmap(p, len - fileend_adj, host_prot,
+ (flags & ~MAP_FIXED_NOREPLACE) | MAP_FIXED,
+ fd, offset);
+
+ if (t == MAP_FAILED) {
+ int save_errno = errno;
+
+ /*
+ * We failed a map over the top of the successful anonymous
+ * mapping above. The only failure mode is running out of VMAs,
+ * and there's nothing that we can do to detect that earlier.
+ * If we have replaced an existing mapping with MAP_FIXED,
+ * then we cannot properly recover. It's a coin toss whether
+ * it would be better to exit or continue here.
+ */
+ if (!(flags & MAP_FIXED_NOREPLACE) &&
+ !page_check_range_empty(start, start + len - 1)) {
+ qemu_log("QEMU target_mmap late failure: %s",
+ strerror(save_errno));
+ }
+
+ do_munmap(want_p, len);
+ errno = save_errno;
+ return -1;
+ }
+ }
+ } else {
+ size_t host_len, part_len;
+
+ /*
+ * Take care to align the host memory. Perform a larger anonymous
+ * allocation and extract the aligned portion. Remap the file on
+ * top of that.
+ */
+ host_len = len + TARGET_PAGE_SIZE - host_page_size;
+ p = mmap(want_p, host_len, host_prot, flags | MAP_ANONYMOUS, -1, 0);
+ if (p == MAP_FAILED) {
+ return -1;
+ }
+
+ part_len = (uintptr_t)p & (TARGET_PAGE_SIZE - 1);
+ if (part_len) {
+ part_len = TARGET_PAGE_SIZE - part_len;
+ do_munmap(p, part_len);
+ p += part_len;
+ host_len -= part_len;
+ }
+ if (len < host_len) {
+ do_munmap(p + len, host_len - len);
+ }
+
+ if (!(flags & MAP_ANONYMOUS)) {
+ void *t = mmap(p, len - fileend_adj, host_prot,
+ flags | MAP_FIXED, fd, offset);
+
+ if (t == MAP_FAILED) {
+ int save_errno = errno;
+ do_munmap(p, len);
+ errno = save_errno;
+ return -1;
+ }
+ }
+
+ start = h2g(p);
+ }
+
+ last = start + len - 1;
+ if (fileend_adj) {
+ pass_last = ROUND_UP(last - fileend_adj, host_page_size) - 1;
+ } else {
+ pass_last = last;
+ }
+ return mmap_end(start, last, start, pass_last, mmap_flags, page_flags);
+}
+
static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
int target_prot, int flags, int page_flags,
int fd, off_t offset)
@@ -613,37 +763,9 @@ static abi_long target_mmap__locked(abi_ulong start, abi_ulong len,
if (host_page_size == TARGET_PAGE_SIZE) {
return mmap_h_eq_g(start, len, host_prot, flags,
page_flags, fd, offset);
- }
-
- /*
- * When mapping files into a memory area larger than the file, accesses
- * to pages beyond the file size will cause a SIGBUS.
- *
- * For example, if mmaping a file of 100 bytes on a host with 4K pages
- * emulating a target with 8K pages, the target expects to be able to
- * access the first 8K. But the host will trap us on any access beyond
- * 4K.
- *
- * When emulating a target with a larger page-size than the hosts, we
- * may need to truncate file maps at EOF and add extra anonymous pages
- * up to the targets page boundary.
- */
- if (host_page_size < TARGET_PAGE_SIZE && !(flags & MAP_ANONYMOUS)) {
- struct stat sb;
-
- if (fstat(fd, &sb) == -1) {
- return -1;
- }
-
- /* Are we trying to create a map beyond EOF?. */
- if (offset + len > sb.st_size) {
- /*
- * If so, truncate the file map at eof aligned with
- * the hosts real pagesize. Additional anonymous maps
- * will be created beyond EOF.
- */
- len = ROUND_UP(sb.st_size - offset, host_page_size);
- }
+ } else if (host_page_size < TARGET_PAGE_SIZE) {
+ return mmap_h_lt_g(start, len, host_prot, flags,
+ page_flags, fd, offset, host_page_size);
}
if (!(flags & (MAP_FIXED | MAP_FIXED_NOREPLACE))) {
--
2.34.1
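
For readers unfamiliar with the SIGBUS behaviour described in the new comment
block in this patch, here is a minimal standalone sketch, not part of the
patch itself: the temporary file path, the 100-byte size, and the "pretend
8K guest page" are illustrative assumptions. It shows why a 4K-page host
faults when a guest expects a full 8K of a short file to be accessible.

/*
 * Standalone demonstration (assumed file name and sizes, not QEMU code):
 * map two host pages over a 100-byte file and observe SIGBUS once an
 * access lands on a page that lies entirely beyond end-of-file.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <setjmp.h>
#include <sys/mman.h>

static sigjmp_buf jb;

static void bus_handler(int sig)
{
    siglongjmp(jb, 1);
}

int main(void)
{
    long pagesz = sysconf(_SC_PAGESIZE);   /* e.g. 4K on most hosts */
    size_t maplen = 2 * pagesz;            /* pretend guest page: 8K */
    char tmpl[] = "/tmp/mmap-eof-XXXXXX";
    int fd = mkstemp(tmpl);

    /* A 100-byte file, as in the example in the patch's comment. */
    if (fd < 0 || ftruncate(fd, 100) < 0) {
        perror("setup");
        return 1;
    }

    char *p = mmap(NULL, maplen, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    signal(SIGBUS, bus_handler);

    /* Beyond EOF but within the first host page: zero-filled, no fault. */
    printf("byte at 100: %d\n", p[100]);

    /* On a host page wholly beyond EOF: the kernel delivers SIGBUS. */
    if (sigsetjmp(jb, 1) == 0) {
        printf("byte at %ld: %d\n", pagesz, p[pagesz]);
    } else {
        printf("SIGBUS at offset %ld, which the guest would not expect\n",
               pagesz);
    }

    munmap(p, maplen);
    close(fd);
    unlink(tmpl);
    return 0;
}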
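
The non-FIXED branch of mmap_h_lt_g aligns host memory by over-allocating an
anonymous region and trimming it down to a guest-page-aligned window before
remapping the file on top. The following is a hedged sketch of that technique
only: TARGET_PAGE_SIZE is hard-coded for illustration and map_aligned_file()
is a hypothetical helper, not a QEMU function.

/*
 * Sketch of the over-allocate-and-trim alignment technique (assumed
 * constant and helper name; simplified error handling).
 */
#include <stdint.h>
#include <sys/types.h>
#include <sys/mman.h>

#define TARGET_PAGE_SIZE 8192   /* assumed guest page size for the sketch */

static void *map_aligned_file(int fd, off_t offset, size_t len,
                              size_t host_page_size)
{
    /* Over-allocate so an aligned window of 'len' bytes must fit inside. */
    size_t host_len = len + TARGET_PAGE_SIZE - host_page_size;
    char *p = mmap(NULL, host_len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        return MAP_FAILED;
    }

    /* Trim the unaligned head so p sits on a guest page boundary. */
    size_t part_len = (uintptr_t)p & (TARGET_PAGE_SIZE - 1);
    if (part_len) {
        part_len = TARGET_PAGE_SIZE - part_len;
        munmap(p, part_len);
        p += part_len;
        host_len -= part_len;
    }
    /* Trim any excess tail beyond the requested length. */
    if (len < host_len) {
        munmap(p + len, host_len - len);
    }

    /* Replace the aligned anonymous window with the file contents. */
    if (fd >= 0 &&
        mmap(p, len, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_FIXED, fd, offset) == MAP_FAILED) {
        munmap(p, len);
        return MAP_FAILED;
    }
    return p;
}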