From: GitHub
Subject: [Qemu-commits] [qemu/qemu] 158c87: ppc: Fix xsrdpi, xvrdpi and xvrspi rounding
Date: Tue, 05 Jul 2016 04:00:04 -0700

  Branch: refs/heads/master
  Home:   https://github.com/qemu/qemu
  Commit: 158c87e5de4f34840bf8115789f09806e7e14b94
      https://github.com/qemu/qemu/commit/158c87e5de4f34840bf8115789f09806e7e14b94
  Author: Anton Blanchard <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M target-ppc/fpu_helper.c

  Log Message:
  -----------
  ppc: Fix xsrdpi, xvrdpi and xvrspi rounding

xsrdpi, xvrdpi and xvrspi use the round-ties-away rounding method, not
round-to-nearest-even.
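
The difference only shows up on exact halfway values. As a quick
standalone illustration (plain C and libm, not QEMU's softfloat, which
has its own rounding-mode constants for the same behaviours): round()
implements ties-away-from-zero, while rint() under the default rounding
mode implements ties-to-even. Compile with -lm.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double ties[] = { 0.5, 1.5, 2.5, -0.5, -2.5 };

        for (int i = 0; i < 5; i++) {
            /* round(): halfway cases go away from zero.
             * rint() in the default mode (FE_TONEAREST): halfway cases go
             * to the nearest even integer. */
            printf("%5.1f -> ties-away %5.1f, nearest-even %5.1f\n",
                   ties[i], round(ties[i]), rint(ties[i]));
        }
        return 0;
    }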

Signed-off-by: Anton Blanchard <address@hidden>
Signed-off-by: David Gibson <address@hidden>


  Commit: 7093645a843e5da1a750bc451dd8c9107d595c61
      https://github.com/qemu/qemu/commit/7093645a843e5da1a750bc451dd8c9107d595c61
  Author: Bharata B Rao <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M hw/ppc/spapr_cpu_core.c

  Log Message:
  -----------
  spapr: Ensure thread0 of CPU core is always realized first

During CPU core realization, we create all the thread objects and parent
them to the core object in a loop. However, the realization of the thread
objects is done separately by walking the threads of a core using
object_child_foreach(). With this, there is no guarantee on the order
in which the child thread objects get realized. Since CPU device tree
properties are currently derived from the CPU thread object, we assume
thread0 to be the representative thread of the core when creating the
core's device tree properties. If thread0 is not the first thread to be
realized, we end up with an incorrect dt_id for the core, which causes
hotplug failures in the guest.

Fix this by realizing each thread object by walking the core's thread
object list, thereby ensuring that thread0 and the other threads are
always realized in the correct order.
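
As a schematic illustration of why the iteration order matters
(hypothetical types, not the QEMU object model): realizing threads by
index over the core's own array guarantees thread0 comes first, whereas
an unordered child walk does not.

    #include <stdio.h>

    /* Hypothetical stand-ins for a CPU core and its threads. */
    typedef struct Thread { int index; int realized; } Thread;
    typedef struct Core { Thread threads[4]; int nr_threads; } Core;

    static void realize_thread(Thread *t)
    {
        t->realized = 1;
        printf("realized thread%d\n", t->index);
    }

    int main(void)
    {
        Core core = { .nr_threads = 4 };

        for (int i = 0; i < core.nr_threads; i++) {
            core.threads[i].index = i;
        }
        /* Walk the core's own thread list in index order: thread0 is always
         * realized first, so per-core properties derived from it are stable. */
        for (int i = 0; i < core.nr_threads; i++) {
            realize_thread(&core.threads[i]);
        }
        return 0;
    }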

Future TODO: CPU DT nodes are per-core properties and we should
ideally base the creation of CPU DT nodes on core objects rather than
the thread objects.

Signed-off-by: Bharata B Rao <address@hidden>
Reviewed-by: Greg Kurz <address@hidden>
Signed-off-by: David Gibson <address@hidden>


  Commit: c4e6c42353fe735add45b790f8d3a323590f7cab
      https://github.com/qemu/qemu/commit/c4e6c42353fe735add45b790f8d3a323590f7cab
  Author: Greg Kurz <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M target-ppc/translate_init.c

  Log Message:
  -----------
  ppc: simplify max_smt initialization in ppc_cpu_realizefn()

kvmppc_smt_threads() returns 1 if KVM is not enabled.
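
The one-line rationale above implies the caller-side kvm_enabled() guard
is redundant. A minimal standalone sketch of the pattern with stand-in
functions (not the actual QEMU code):

    #include <stdbool.h>
    #include <stdio.h>

    static bool kvm_on;                 /* stand-in for kvm_enabled() */

    /* Stand-in for kvmppc_smt_threads(): already returns 1 without KVM. */
    static int smt_threads(void)
    {
        return kvm_on ? 8 : 1;
    }

    int main(void)
    {
        /* Before: int max_smt = kvm_on ? smt_threads() : 1;
         * After:  the guard adds nothing, so drop it. */
        int max_smt = smt_threads();

        printf("max_smt = %d\n", max_smt);
        return 0;
    }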

Signed-off-by: Greg Kurz <address@hidden>
Signed-off-by: David Gibson <address@hidden>


  Commit: 606b54986df4e3964eee2d74460bd06ed2f384e5
      https://github.com/qemu/qemu/commit/606b54986df4e3964eee2d74460bd06ed2f384e5
  Author: Alexey Kardashevskiy <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M hw/ppc/spapr_iommu.c
    M hw/ppc/spapr_pci.c

  Log Message:
  -----------
  spapr_iommu: Realloc guest visible TCE table when starting/stopping listening

The sPAPR TCE tables manage two copies when VFIO is using an IOMMU -
a guest view of the table and a hardware TCE table. If there is no VFIO
presence in the address space, then just the guest view is used and, in
that case, it is allocated in KVM. However, since there is no support
yet for VFIO in the KVM TCE hypercalls, when we start using VFIO we need
to move the guest view from KVM to userspace; and we need to do this
for every IOMMU on a bus with VFIO devices.

This implements the callbacks for the sPAPR IOMMU - notify_started()
reallocates the guest view to userspace, notify_stopped() does
the opposite.

This removes the explicit spapr_tce_set_need_vfio() call from the PCI
hotplug path as the new callbacks do this better - they notify the IOMMU
at the exact moment the configuration changes, and this also covers
the case of PCI hot unplug.

Signed-off-by: Alexey Kardashevskiy <address@hidden>
Reviewed-by: David Gibson <address@hidden>
Acked-by: Alex Williamson <address@hidden>
Signed-off-by: David Gibson <address@hidden>


  Commit: 318f67ce13710a09c6dcf34da7b6b0ebc845c5c9
      https://github.com/qemu/qemu/commit/318f67ce13710a09c6dcf34da7b6b0ebc845c5c9
  Author: Alexey Kardashevskiy <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M hw/vfio/Makefile.objs
    M hw/vfio/common.c
    A hw/vfio/spapr.c
    M hw/vfio/trace-events
    M include/hw/vfio/vfio-common.h

  Log Message:
  -----------
  vfio: spapr: Add DMA memory preregistering (SPAPR IOMMU v2)

This makes use of the new "memory registering" feature. The idea is
to give userspace the ability to notify the host kernel about pages
which are going to be used for DMA. Having this information, the host
kernel can pin them all once per user process, do locked-pages
accounting (once) and not spend time doing that at run time, with
possible failures which cannot be handled nicely in some cases.

This adds a prereg memory listener which listens on address_space_memory
and notifies a VFIO container about memory which needs to be
pinned/unpinned. VFIO MMIO regions (i.e. "skip dump" regions) are skipped.
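
On the kernel interface side, such a preregistration notification
roughly boils down to one ioctl per RAM chunk. A hedged sketch (assumes
a <linux/vfio.h> new enough to provide VFIO_IOMMU_SPAPR_REGISTER_MEMORY;
error handling left to the caller):

    #include <linux/vfio.h>
    #include <sys/ioctl.h>
    #include <stdint.h>

    /* Tell the host kernel that this chunk of guest RAM may be used for DMA,
     * so it can be pinned and accounted once up front. */
    static int prereg_ram_chunk(int container_fd, void *host_addr, uint64_t size)
    {
        struct vfio_iommu_spapr_register_memory reg = {
            .argsz = sizeof(reg),
            .flags = 0,
            .vaddr = (uintptr_t)host_addr,  /* process virtual address */
            .size  = size,                  /* host-page-size aligned */
        };

        return ioctl(container_fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
    }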

The feature is only enabled for SPAPR IOMMU v2; host kernel changes
are required. Since v2 does not need/support VFIO_IOMMU_ENABLE, this does
not call it when v2 is detected and enabled.

This requires guest RAM blocks to be host-page-size aligned; however,
this is not new, as KVM already requires memory slots to be host page
size aligned.

Signed-off-by: Alexey Kardashevskiy <address@hidden>
[dwg: Fix compile error on 32-bit host]
Signed-off-by: David Gibson <address@hidden>


  Commit: f4ec5e26edbd4c7509623ec882c344dc334bc1b2
      https://github.com/qemu/qemu/commit/f4ec5e26edbd4c7509623ec882c344dc334bc1b2
  Author: Alexey Kardashevskiy <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M hw/vfio/common.c
    M include/hw/vfio/vfio-common.h

  Log Message:
  -----------
  vfio: Add host side DMA window capabilities

There are going to be multiple IOMMUs per container. This moves
the single host IOMMU parameter set to a list of VFIOHostDMAWindow.

This should cause no behavioral change and will be used later by
the SPAPR TCE IOMMU v2, which will also add a vfio_host_win_del() helper.
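
Conceptually, each entry records the IOVA range and IOMMU page sizes one
host window can serve. A simplified standalone approximation (field
names and list handling are illustrative; the real structure lives in
include/hw/vfio/vfio-common.h and uses QEMU's list macros):

    #include <stdint.h>
    #include <stdlib.h>

    /* Simplified approximation of a host DMA window descriptor: the IOVA
     * range and the IOMMU page sizes this window supports. */
    typedef struct HostDMAWindow {
        uint64_t min_iova;
        uint64_t max_iova;
        uint64_t iova_pgsizes;          /* bitmap of supported page sizes */
        struct HostDMAWindow *next;     /* a container keeps a list of windows */
    } HostDMAWindow;

    /* Prepend a new window to a container's window list (allocation
     * failure handling omitted for brevity). */
    static HostDMAWindow *host_win_add(HostDMAWindow *head, uint64_t min_iova,
                                       uint64_t max_iova, uint64_t pgsizes)
    {
        HostDMAWindow *win = calloc(1, sizeof(*win));

        win->min_iova = min_iova;
        win->max_iova = max_iova;
        win->iova_pgsizes = pgsizes;
        win->next = head;
        return win;
    }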

Signed-off-by: Alexey Kardashevskiy <address@hidden>
Reviewed-by: David Gibson <address@hidden>
Signed-off-by: David Gibson <address@hidden>


  Commit: 2e4109de8e589beecd69996ee14f24021b991c0d
      https://github.com/qemu/qemu/commit/2e4109de8e589beecd69996ee14f24021b991c0d
  Author: Alexey Kardashevskiy <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M hw/vfio/common.c
    M hw/vfio/spapr.c
    M hw/vfio/trace-events
    M include/hw/vfio/vfio-common.h

  Log Message:
  -----------
  vfio/spapr: Create DMA window dynamically (SPAPR IOMMU v2)

The new VFIO_SPAPR_TCE_v2_IOMMU type supports dynamic DMA window
management. This adds the ability for VFIO common code to dynamically
allocate/remove DMA windows in the host kernel when a VFIO container is
added/removed.

This adds a helper to vfio_listener_region_add which issues the
VFIO_IOMMU_SPAPR_TCE_CREATE ioctl and adds the just-created window to
the host IOMMU window list; the opposite action is taken in
vfio_listener_region_del.

When creating a new window, this uses a heuristic to decide on the
number of TCE table levels.
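
In rough terms, window creation is a single ioctl on the container; a
hedged sketch of the shape of that call (assumes a sufficiently recent
<linux/vfio.h>; error handling reduced to a sentinel value):

    #include <linux/vfio.h>
    #include <sys/ioctl.h>
    #include <stdint.h>

    /* Ask the host kernel to create a new DMA window and return its bus
     * address, or UINT64_MAX on failure. */
    static uint64_t create_dma_window(int container_fd, uint32_t page_shift,
                                      uint64_t window_size, uint32_t levels)
    {
        struct vfio_iommu_spapr_tce_create create = {
            .argsz = sizeof(create),
            .page_shift = page_shift,   /* e.g. 16 for 64K IOMMU pages */
            .window_size = window_size, /* size of the new window in bytes */
            .levels = levels,           /* TCE table indirection levels */
        };

        if (ioctl(container_fd, VFIO_IOMMU_SPAPR_TCE_CREATE, &create) != 0) {
            return UINT64_MAX;
        }
        return create.start_addr;       /* bus address of the new window */
    }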

This should cause no guest visible change in behavior.

Signed-off-by: Alexey Kardashevskiy <address@hidden>
Reviewed-by: David Gibson <address@hidden>
[dwg: Added some casts to prevent printf() warnings on certain targets
 where the kernel headers' __u64 doesn't match uint64_t or PRIx64]
Signed-off-by: David Gibson <address@hidden>


  Commit: ae4de14cd36b6a899d83df9595be3971ac0802d4
      https://github.com/qemu/qemu/commit/ae4de14cd36b6a899d83df9595be3971ac0802d4
  Author: Alexey Kardashevskiy <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M hw/ppc/Makefile.objs
    M hw/ppc/spapr.c
    M hw/ppc/spapr_pci.c
    A hw/ppc/spapr_rtas_ddw.c
    M hw/ppc/trace-events
    M include/hw/pci-host/spapr.h
    M include/hw/ppc/spapr.h

  Log Message:
  -----------
  spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW)

This adds support for the Dynamic DMA Windows (DDW) option defined by
the SPAPR specification, which allows additional DMA window(s) to be
created.

The "ddw" property is enabled by default on a PHB, but for compatibility
the pseries-2.6 machine and older disable it.
This also creates a single DMA window for the older machines to
maintain backward migration compatibility.

This implements DDW for PHBs with emulated and VFIO devices. Host
kernel support is required. The advertised IOMMU page sizes are 4K and
64K; 16M pages are supported but not advertised by default - to enable
them, the user has to specify the "pgsz" property for the PHB and
enable huge pages for RAM.

Existing Linux guests try creating one additional huge DMA window
with 64K or 16MB pages and map the entire guest RAM into it. If this
succeeds, the guest switches to dma_direct_ops and never calls the TCE
hypercalls (H_PUT_TCE, ...) again. This enables VFIO devices to use the
entire RAM and not waste time on map/unmap later. This adds a
"dma64_win_addr" property, the bus address of the 64-bit window, which
defaults to 0x800.0000.0000.0000 as this is what modern POWER8 hardware
uses; this allows having emulated and VFIO devices on the same bus.

This adds 4 RTAS handlers:
* ibm,query-pe-dma-window
* ibm,create-pe-dma-window
* ibm,remove-pe-dma-window
* ibm,reset-pe-dma-window
These are registered from the type_init() callback.

These RTAS handlers are implemented in a separate file to avoid polluting
spapr_iommu.c with PCI.
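
For orientation only, a toy model of the name-based view of these calls
(not QEMU's RTAS registration API; in reality the guest invokes RTAS
calls by token obtained from the device tree, and the handler names
below are placeholders):

    #include <stdio.h>
    #include <string.h>

    /* Toy dispatcher: route a DDW-related RTAS call to a handler by name. */
    typedef void (*rtas_fn)(void);

    static void query_pe_dma_window(void)  { puts("report window capabilities"); }
    static void create_pe_dma_window(void) { puts("create an additional window"); }
    static void remove_pe_dma_window(void) { puts("remove a created window"); }
    static void reset_pe_dma_window(void)  { puts("revert to the default window"); }

    static const struct { const char *name; rtas_fn fn; } ddw_calls[] = {
        { "ibm,query-pe-dma-window",  query_pe_dma_window },
        { "ibm,create-pe-dma-window", create_pe_dma_window },
        { "ibm,remove-pe-dma-window", remove_pe_dma_window },
        { "ibm,reset-pe-dma-window",  reset_pe_dma_window },
    };

    static void ddw_dispatch(const char *name)
    {
        for (size_t i = 0; i < sizeof(ddw_calls) / sizeof(ddw_calls[0]); i++) {
            if (strcmp(ddw_calls[i].name, name) == 0) {
                ddw_calls[i].fn();
                return;
            }
        }
        puts("unknown RTAS call");
    }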

This changes sPAPRPHBState::dma_liobn to an array to allow 2 LIOBNs
and updates all references to dma_liobn. However this does not add
64bit LIOBN to the migration stream as in fact even 32bit LIOBN is
rather pointless there (as it is a PHB property and the management
software can/should pass LIOBNs via CLI) but we keep it for the backward
migration support.

Signed-off-by: Alexey Kardashevskiy <address@hidden>
Signed-off-by: David Gibson <address@hidden>


  Commit: 1f0252e66e76f0b5967419e2a1e53a1f1398bf7b
      https://github.com/qemu/qemu/commit/1f0252e66e76f0b5967419e2a1e53a1f1398bf7b
  Author: Cédric Le Goater <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M hw/ppc/spapr_hcall.c
    M target-ppc/mmu-hash64.c
    M target-ppc/mmu-hash64.h

  Log Message:
  -----------
  ppc: simplify ppc_hash64_hpte_page_shift_noslb()

The segment page shift parameter is never used. Let's remove it.

Signed-off-by: Cédric Le Goater <address@hidden>
Signed-off-by: David Gibson <address@hidden>


  Commit: 651060aba79dc9d0cc77ac3921948ea78dba7409
      https://github.com/qemu/qemu/commit/651060aba79dc9d0cc77ac3921948ea78dba7409
  Author: David Gibson <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M target-ppc/mmu-hash64.c

  Log Message:
  -----------
  target-ppc: Correct page size decoding in ppc_hash64_pteg_search()

The architecture specifies that when searching a PTEG for PTEs, entries
with a page size encoding that's not valid for the current segment should
be ignored, continuing the search.

The current implementation does this with ppc_hash64_pte_size_decode()
which is a very incomplete implementation of this check.  We already have
code to do a full and correct page size decode in hpte_page_shift().

This patch moves hpte_page_shift() so it can be used in
ppc_hash64_pteg_search() and adjusts the latter's parameters to include
a full SLBE instead of just a segment page shift.

Signed-off-by: David Gibson <address@hidden>
Reviewed-by: Benjamin Herrenschmidt <address@hidden>


  Commit: 073de86aa934d46d596a2367e7501da5500e5b86
      https://github.com/qemu/qemu/commit/073de86aa934d46d596a2367e7501da5500e5b86
  Author: David Gibson <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M target-ppc/mmu-hash64.c
    M target-ppc/mmu-hash64.h

  Log Message:
  -----------
  target-ppc: Simplify HPTE matching

ppc_hash64_pteg_search() explicitly checks each HPTE's VALID and
SECONDARY bits, then uses the HPTE64_V_COMPARE() macro to check the B field
and AVPN.  However, a small tweak to HPTE64_V_COMPARE() means we can check
all of these bits at once with a suitable ptem value.  So, consolidate all
the comparisons for simplicity.

Signed-off-by: David Gibson <address@hidden>
Reviewed-by: Benjamin Herrenschmidt <address@hidden>


  Commit: 949868633f0454715af1781c0f377413b6ab000e
      https://github.com/qemu/qemu/commit/949868633f0454715af1781c0f377413b6ab000e
  Author: David Gibson <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M target-ppc/mmu-hash64.c

  Log Message:
  -----------
  target-ppc: Return page shift from PTEG search

ppc_hash64_pteg_search() now decodes a PTE's page size encoding, which it
didn't previously do.  This means we're now double-decoding the page size,
because we check it in the fault path after ppc_hash64_htab_lookup()
returns.

To avoid this duplication have ppc_hash64_pteg_search() and
ppc_hash64_htab_lookup() return the page size from the PTE and use that in
the callers instead of decoding again.

Signed-off-by: David Gibson <address@hidden>
Reviewed-by: Benjamin Herrenschmidt <address@hidden>


  Commit: 912acdf487a3c8c0083b904fdb917fe6d79f87a7
      https://github.com/qemu/qemu/commit/912acdf487a3c8c0083b904fdb917fe6d79f87a7
  Author: Benjamin Herrenschmidt <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M hw/ppc/spapr.c
    M target-ppc/cpu.h
    M target-ppc/mmu-hash64.c
    M target-ppc/mmu-hash64.h
    M target-ppc/translate_init.c

  Log Message:
  -----------
  ppc/hash64: Add proper real mode translation support

This adds proper support for translating real mode addresses based
on the combination of HV and LPCR bits. This handles the HRMOR offset
for hypervisor real mode, and both RMA and VRMA modes for guest
real mode. PAPR mode adjusts the offsets appropriately to match the
RMA used in TCG, but we need to limit it to the maximum supported by the
implementation (16G).

This includes some fixes by Cédric Le Goater <address@hidden>

Signed-off-by: Benjamin Herrenschmidt <address@hidden>
[dwg: Adjusted for differences in my version of the prereq patches]
Signed-off-by: David Gibson <address@hidden>


  Commit: 2c7ad80443e9747eb85b508be01cded958191bad
      https://github.com/qemu/qemu/commit/2c7ad80443e9747eb85b508be01cded958191bad
  Author: Benjamin Herrenschmidt <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M target-ppc/mmu-hash64.c

  Log Message:
  -----------
  ppc/hash64: Fix support for LPCR:ISL

We need to ignore the segment page size and essentially treat
all pages as coming from a 4K segment.

Signed-off-by: Benjamin Herrenschmidt <address@hidden>
[dwg: Adjusted for differences in my version of the prereq patches]
Signed-off-by: David Gibson <address@hidden>


  Commit: 8662d7db392f906c7808014051b278ad1542db93
      https://github.com/qemu/qemu/commit/8662d7db392f906c7808014051b278ad1542db93
  Author: Peter Maydell <address@hidden>
  Date:   2016-07-05 (Tue, 05 Jul 2016)

  Changed paths:
    M hw/ppc/Makefile.objs
    M hw/ppc/spapr.c
    M hw/ppc/spapr_cpu_core.c
    M hw/ppc/spapr_hcall.c
    M hw/ppc/spapr_iommu.c
    M hw/ppc/spapr_pci.c
    A hw/ppc/spapr_rtas_ddw.c
    M hw/ppc/trace-events
    M hw/vfio/Makefile.objs
    M hw/vfio/common.c
    A hw/vfio/spapr.c
    M hw/vfio/trace-events
    M include/hw/pci-host/spapr.h
    M include/hw/ppc/spapr.h
    M include/hw/vfio/vfio-common.h
    M target-ppc/cpu.h
    M target-ppc/fpu_helper.c
    M target-ppc/mmu-hash64.c
    M target-ppc/mmu-hash64.h
    M target-ppc/translate_init.c

  Log Message:
  -----------
  Merge remote-tracking branch 'remotes/dgibson/tags/ppc-for-2.7-20160705' into staging

ppc patch queue for 2016-07-05

Here's the current ppc, sPAPR and related drivers patch queue.

  * The big addition is dynamic DMA window support (this includes some
    core VFIO changes)
  * There are also several fixes to the MMU emulation for bugs
    introduced with the HV mode patches
  * Several other bugfixes and cleanups

Changes in v2:
  I messed up and forgot to make a fix in the last patch which BenH
  pointed out (introduced by my rebasing).  That's fixed in this
  version, and I'm replacing the tag in place with the revised
  version.

# gpg: Signature made Tue 05 Jul 2016 06:28:58 BST
# gpg:                using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <address@hidden>"
# gpg:                 aka "David Gibson (Red Hat) <address@hidden>"
# gpg:                 aka "David Gibson (ozlabs.org) <address@hidden>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg:          It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E  87DC 6C38 CACA 20D9 B392

* remotes/dgibson/tags/ppc-for-2.7-20160705:
  ppc/hash64: Fix support for LPCR:ISL
  ppc/hash64: Add proper real mode translation support
  target-ppc: Return page shift from PTEG search
  target-ppc: Simplify HPTE matching
  target-ppc: Correct page size decoding in ppc_hash64_pteg_search()
  ppc: simplify ppc_hash64_hpte_page_shift_noslb()
  spapr_pci/spapr_pci_vfio: Support Dynamic DMA Windows (DDW)
  vfio/spapr: Create DMA window dynamically (SPAPR IOMMU v2)
  vfio: Add host side DMA window capabilities
  vfio: spapr: Add DMA memory preregistering (SPAPR IOMMU v2)
  spapr_iommu: Realloc guest visible TCE table when starting/stopping listening
  ppc: simplify max_smt initialization in ppc_cpu_realizefn()
  spapr: Ensure thread0 of CPU core is always realized first
  ppc: Fix xsrdpi, xvrdpi and xvrspi rounding

Signed-off-by: Peter Maydell <address@hidden>


Compare: https://github.com/qemu/qemu/compare/11659423113d...8662d7db392f
