From: Peter Maydell
Subject: Re: [PULL v2 38/53] hw/cxl: Support 4 HDM decoders at all levels of topology
Date: Thu, 19 Oct 2023 13:31:05 +0100
On Thu, 5 Oct 2023 at 04:45, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>
> Support these decoders in CXL host bridges (pxb-cxl), CXL Switch USP
> and CXL Type 3 end points.
>
> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>
> -/* TODO: Support multiple HDM decoders and DPA skip */
>  static bool cxl_type3_dpa(CXLType3Dev *ct3d, hwaddr host_addr, uint64_t *dpa)
>  {
>      int hdm_inc = R_CXL_HDM_DECODER1_BASE_LO - R_CXL_HDM_DECODER0_BASE_LO;
>      uint32_t *cache_mem = ct3d->cxl_cstate.crb.cache_mem_registers;
> -    uint64_t decoder_base, decoder_size, hpa_offset;
> -    uint32_t hdm0_ctrl;
> -    int ig, iw;
> -    int i = 0;
> +    unsigned int hdm_count;
> +    uint32_t cap;
> +    uint64_t dpa_base = 0;
> +    int i;
>
> -    decoder_base =
> -        (((uint64_t)cache_mem[R_CXL_HDM_DECODER0_BASE_HI + i * hdm_inc] << 32) |
> -         cache_mem[R_CXL_HDM_DECODER0_BASE_LO + i * hdm_inc]);
> -    if ((uint64_t)host_addr < decoder_base) {
> -        return false;
> +    cap = ldl_le_p(cache_mem + R_CXL_HDM_DECODER_CAPABILITY);
> +    hdm_count = cxl_decoder_count_dec(FIELD_EX32(cap,
> +                                                 CXL_HDM_DECODER_CAPABILITY,
> +                                                 DECODER_COUNT));
> +
> +    for (i = 0; i < hdm_count; i++) {
> +        uint64_t decoder_base, decoder_size, hpa_offset, skip;
> +        uint32_t hdm_ctrl, low, high;
> +        int ig, iw;
> +
> +        low = ldl_le_p(cache_mem + R_CXL_HDM_DECODER0_BASE_LO + i * hdm_inc);
> +        high = ldl_le_p(cache_mem + R_CXL_HDM_DECODER0_BASE_HI + i * hdm_inc);
> +        decoder_base = ((uint64_t)high << 32) | (low & 0xf0000000);
> +
> +        low = ldl_le_p(cache_mem + R_CXL_HDM_DECODER0_SIZE_LO + i * hdm_inc);
> +        high = ldl_le_p(cache_mem + R_CXL_HDM_DECODER0_SIZE_HI + i * hdm_inc);
> +        decoder_size = ((uint64_t)high << 32) | (low & 0xf0000000);
> +
> +        low = ldl_le_p(cache_mem + R_CXL_HDM_DECODER0_DPA_SKIP_LO +
> +                       i * hdm_inc);
> +        high = ldl_le_p(cache_mem + R_CXL_HDM_DECODER0_DPA_SKIP_HI +
> +                       i * hdm_inc);
> +        skip = ((uint64_t)high << 32) | (low & 0xf0000000);
> +        dpa_base += skip;
> +
> +        hpa_offset = (uint64_t)host_addr - decoder_base;
> +
> +        hdm_ctrl = ldl_le_p(cache_mem + R_CXL_HDM_DECODER0_CTRL + i * hdm_inc);
> +        iw = FIELD_EX32(hdm_ctrl, CXL_HDM_DECODER0_CTRL, IW);
> +        ig = FIELD_EX32(hdm_ctrl, CXL_HDM_DECODER0_CTRL, IG);
> +        if (!FIELD_EX32(hdm_ctrl, CXL_HDM_DECODER0_CTRL, COMMITTED)) {
> +            return false;
> +        }
> +        if (((uint64_t)host_addr < decoder_base) ||
> +            (hpa_offset >= decoder_size)) {
> +            dpa_base += decoder_size /
> +                        cxl_interleave_ways_dec(iw, &error_fatal);
I noticed this because of a Coverity false-positive, but should
this really be using error_fatal? It looks like a guest program
writing bogus values to registers could trip this, and generally
we don't like to let the guest make QEMU exit.
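One guest-safe shape would be to have the IW decode return a sentinel for reserved encodings and fail the address decode instead of exiting. A minimal stand-alone sketch, using the IW encodings defined by the CXL spec; the helper name and its return-0 convention are illustrative, not QEMU's actual cxl_interleave_ways_dec() API:

```c
#include <stdint.h>

/* Hypothetical stand-in for cxl_interleave_ways_dec(): decode the HDM
 * decoder IW field to a way count per the CXL spec encodings, returning
 * 0 for reserved values so the caller can fail the decode gracefully
 * (return false from cxl_type3_dpa()) rather than exiting QEMU. */
static unsigned int decode_interleave_ways(uint32_t iw_enc)
{
    switch (iw_enc) {
    case 0:  return 1;   /* 1-way (no interleave) */
    case 1:  return 2;
    case 2:  return 4;
    case 3:  return 8;
    case 4:  return 16;
    case 8:  return 3;   /* 3, 6 and 12-way are the non-power-of-2 modes */
    case 9:  return 6;
    case 10: return 12;
    default: return 0;   /* reserved encoding written by the guest */
    }
}
```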
thanks
-- PMM
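For readers following the decode path: once a committed decoder matches, the interleave-way select bits have to be stripped out of the HPA offset to form the DPA offset. A stand-alone illustration for the power-of-2 case (the helper name is hypothetical and this is a sketch of the mapping described in the CXL spec, not QEMU's exact code); here ig encodes a granule of 2^(ig + 8) bytes and iw encodes 2^iw ways:

```c
#include <stdint.h>

/* Illustrative helper: map an HPA offset within a matching decoder to a
 * DPA offset for power-of-2 interleaving. Bits below the interleave
 * granule pass through unchanged; the iw way-select bits just above the
 * granule are removed by shifting the upper bits down. */
static uint64_t hpa_to_dpa_offset(uint64_t hpa_offset, int ig, int iw)
{
    uint64_t granule_mask = (1ULL << (ig + 8)) - 1;

    return (hpa_offset & granule_mask) |
           ((hpa_offset >> (ig + 8 + iw)) << (ig + 8));
}
```

For example, with ig=0 (256-byte granule) and iw=1 (2 ways), HPA offset 0x200 is this device's second granule, so it maps to DPA offset 0x100.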