From: limingwang (A)
Subject: RE: [PATCH] hw/intc: sifive_plic: Avoid overflowing the addr_config buffer
Date: Wed, 1 Jun 2022 03:11:27 +0000

> 
> From: Alistair Francis <alistair.francis@wdc.com>
> 
> Since commit ad40be27 "target/riscv: Support start kernel directly by KVM"
> we have been overflowing the addr_config on "M,MS..." configurations, as
> reported in https://gitlab.com/qemu-project/qemu/-/issues/1050.
> 
> This commit changes the loop in sifive_plic_create() from iterating over the
> number of harts to just iterating over the addr_config. The addr_config is
> based on the hart_config, and will contain interrupt details for all harts.
> This way we can't iterate past the end of addr_config.
> 
> Fixes: ad40be27084536 ("target/riscv: Support start kernel directly by KVM")
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1050
> Signed-off-by: Alistair Francis <alistair.francis@wdc.com>

Reviewed-by: Mingwang Li <limingwang@huawei.com>

Mingwang
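
To make the counting behind the fix concrete, here is a small standalone
sketch (not QEMU code; TargetEntry and build_targets are invented names for
illustration) of how a hart-config string such as "M,MS,MS" expands into one
interrupt-target entry per mode letter. Because an "MS" hart contributes two
entries, the number of entries generally differs from the number of harts,
which is why bounding the loop by plic->num_addrs rather than num_harts keeps
the indexing inside addr_config.

/*
 * Standalone sketch (not QEMU code): model of how a PLIC hart-config
 * string such as "M,MS,MS" expands into one interrupt-target entry per
 * mode letter.  TargetEntry and build_targets are invented names.
 */
#include <assert.h>
#include <stdio.h>

typedef struct {
    int hartid;
    char mode;              /* 'M' or 'S' */
} TargetEntry;

/* One entry per mode letter, in hart order: "M,MS" -> M0, M1, S1. */
static int build_targets(const char *hart_config, TargetEntry *out, int max)
{
    int n = 0, hartid = 0;

    for (const char *p = hart_config; *p; p++) {
        if (*p == ',') {
            hartid++;
            continue;
        }
        assert(n < max);
        out[n].hartid = hartid;
        out[n].mode = *p;
        n++;
    }
    return n;
}

int main(void)
{
    const char *hart_config = "M,MS,MS";
    TargetEntry targets[16];
    int num_harts = 1;
    int num_addrs = build_targets(hart_config, targets, 16);

    for (const char *p = hart_config; *p; p++) {
        if (*p == ',') {
            num_harts++;
        }
    }

    /* 3 harts but 5 entries: bounding the loop by the entry count (the
     * equivalent of plic->num_addrs) cannot overrun the array, whereas
     * walking it with a separate index while looping over harts can. */
    printf("num_harts=%d num_addrs=%d\n", num_harts, num_addrs);

    for (int i = 0; i < num_addrs; i++) {
        printf("entry %d: hart %d, %c-mode external interrupt\n",
               i, targets[i].hartid, targets[i].mode);
    }
    return 0;
}

With that in mind, the new loop below picks both the CPU and the GPIO line
number from addr_config[i].hartid, so the existing numbering is preserved
(S-mode lines at cpu_num, M-mode lines at num_harts + cpu_num) while the loop
never indexes past num_addrs.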
> ---
>  hw/intc/sifive_plic.c | 19 +++++++++----------
>  1 file changed, 9 insertions(+), 10 deletions(-)
> 
> diff --git a/hw/intc/sifive_plic.c b/hw/intc/sifive_plic.c
> index eebbcf33d4..56d60e9ac9 100644
> --- a/hw/intc/sifive_plic.c
> +++ b/hw/intc/sifive_plic.c
> @@ -431,7 +431,7 @@ DeviceState *sifive_plic_create(hwaddr addr, char *hart_config,
>      uint32_t context_stride, uint32_t aperture_size)
>  {
>      DeviceState *dev = qdev_new(TYPE_SIFIVE_PLIC);
> -    int i, j = 0;
> +    int i;
>      SiFivePLICState *plic;
> 
>      assert(enable_stride == (enable_stride & -enable_stride));
> @@ -451,18 +451,17 @@ DeviceState *sifive_plic_create(hwaddr addr, char *hart_config,
>      sysbus_mmio_map(SYS_BUS_DEVICE(dev), 0, addr);
> 
>      plic = SIFIVE_PLIC(dev);
> -    for (i = 0; i < num_harts; i++) {
> -        CPUState *cpu = qemu_get_cpu(hartid_base + i);
> 
> -        if (plic->addr_config[j].mode == PLICMode_M) {
> -            j++;
> -            qdev_connect_gpio_out(dev, num_harts + i,
> +    for (i = 0; i < plic->num_addrs; i++) {
> +        int cpu_num = plic->addr_config[i].hartid;
> +        CPUState *cpu = qemu_get_cpu(hartid_base + cpu_num);
> +
> +        if (plic->addr_config[i].mode == PLICMode_M) {
> +            qdev_connect_gpio_out(dev, num_harts + cpu_num,
>                                    qdev_get_gpio_in(DEVICE(cpu), IRQ_M_EXT));
>          }
> -
> -        if (plic->addr_config[j].mode == PLICMode_S) {
> -            j++;
> -            qdev_connect_gpio_out(dev, i,
> +        if (plic->addr_config[i].mode == PLICMode_S) {
> +            qdev_connect_gpio_out(dev, cpu_num,
>                                    qdev_get_gpio_in(DEVICE(cpu), IRQ_S_EXT));
>          }
>      }
> --
> 2.35.3



