From: Peter Maydell
Subject: [PATCH for-8.1] hw/pci-bridge/cxl_upstream.c: Use g_new0() in build_cdat_table()
Date: Tue, 18 Jul 2023 11:13:27 +0100

In build_cdat_table() we do:
 *cdat_table = g_malloc0(sizeof(*cdat_table) * CXL_USP_CDAT_NUM_ENTRIES);
This is wrong because:
 - cdat_table has type CDATSubHeader ***
 - so *cdat_table has type CDATSubHeader **
 - so the array we're allocating here should be items of type CDATSubHeader *
 - but we pass sizeof(*cdat_table), which is sizeof(CDATSubHeader **),
   implying that we're allocating an array of CDATSubHeader **

It happens that sizeof(CDATSubHeader **) == sizeof(CDATSubHeader *)
so nothing blows up, but this should be sizeof(**cdat_table).

Avoid this excessively hard-to-understand code by using
g_new0() instead, which will do the type checking for us.
While we're here, we can drop the useless check against failure,
as g_malloc0() and g_new0() never fail.
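
For illustration only (a standalone sketch with made-up names, not part
of this patch), the difference in what the compiler can see is roughly:

  #include <glib.h>

  typedef struct { int x; } Item;

  int main(void)
  {
      Item **table;

      /* g_new0(T, n) casts its result to (T *), so the element type is
       * part of the expression and a mismatch is diagnosed:
       */
      table = g_new0(Item *, 4);        /* OK: both sides are Item ** */
      /* table = g_new0(Item, 4); */    /* would warn: incompatible
                                           pointer types */

      /* g_malloc0() returns a bare gpointer, so a sizeof() operand at
       * the wrong pointer level (the mistake being fixed here) is
       * accepted silently; it should be sizeof(*table), and only works
       * because the two pointer sizes happen to coincide:
       */
      table = g_malloc0(sizeof(table) * 4);

      g_free(table);
      return 0;
  }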

This fixes Coverity issue CID 1508120.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
---
Disclaimer: I have not tested this beyond any testing you
get from 'make check' and 'make check-avocado'.
---
 hw/pci-bridge/cxl_upstream.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/hw/pci-bridge/cxl_upstream.c b/hw/pci-bridge/cxl_upstream.c
index ef47e5d6253..9159f48a8cb 100644
--- a/hw/pci-bridge/cxl_upstream.c
+++ b/hw/pci-bridge/cxl_upstream.c
@@ -274,10 +274,7 @@ static int build_cdat_table(CDATSubHeader ***cdat_table, void *priv)
         };
     }
 
-    *cdat_table = g_malloc0(sizeof(*cdat_table) * CXL_USP_CDAT_NUM_ENTRIES);
-    if (!*cdat_table) {
-        return -ENOMEM;
-    }
+    *cdat_table = g_new0(CDATSubHeader *, CXL_USP_CDAT_NUM_ENTRIES);
 
     /* Header always at start of structure */
     (*cdat_table)[CXL_USP_CDAT_SSLBIS_LAT] = g_steal_pointer(&sslbis_latency);
-- 
2.34.1
