
Re: [Qemu-devel] QEMU xen coverity issues


From: Paul Durrant
Subject: Re: [Qemu-devel] QEMU xen coverity issues
Date: Fri, 15 Feb 2019 15:36:00 +0000

> -----Original Message-----
> From: Peter Maydell [mailto:address@hidden]
> Sent: 14 February 2019 18:29
> To: QEMU Developers <address@hidden>
> Cc: Paul Durrant <address@hidden>; Anthony Perard
> <address@hidden>
> Subject: QEMU xen coverity issues
> 
> Hi; we've just done another Coverity run, and it's pulled up some
> issues in the recently changed Xen code. Rather than track them
> back to exactly which patches in the recent refactorings resulted
> in them, I figured I'd just list them here. Could you take a
> look at them, please?
> 
> (1) CID 1398635: xen_block_complete_aio(): identical code in two branches
> 
> In hw/block/dataplane/xen_block_complete_aio():
> 
> the first switch has this code:
>     case BLKIF_OP_WRITE:
>     case BLKIF_OP_FLUSH_DISKCACHE:
>         if (!request->req.nr_segments) {
>             break;
>         }

I think this situation arose in the old xen_disk.c when grant map/unmap was
removed. There used to be cleanup to do on the write path (i.e. when
nr_segments != 0) but that went away with the move to copy-only. I'll get rid
of the if().
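
Something like this, I expect (a sketch only, not necessarily the final
patch):

    case BLKIF_OP_WRITE:
    case BLKIF_OP_FLUSH_DISKCACHE:
        break; /* nothing to clean up now that we are copy-only */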

>         break;
> 
> so the if() doesn't do anything. What was this supposed to be?
> 
> (2) Not spotted by coverity, but in a later switch in the same function:
> 
>     switch (request->req.operation) {
>     case BLKIF_OP_WRITE:
>     case BLKIF_OP_FLUSH_DISKCACHE:
>         if (!request->req.nr_segments) {
>             break;
>         }
>     case BLKIF_OP_READ:
> 
> If a switch case is supposed to fall through it should be
> explicitly marked with a "/* fall through */" comment.

Yes, this was an oversight copied from the old code too.
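The fix is simply to mark the intentional fall-through:

    switch (request->req.operation) {
    case BLKIF_OP_WRITE:
    case BLKIF_OP_FLUSH_DISKCACHE:
        if (!request->req.nr_segments) {
            break;
        }
        /* fall through */
    case BLKIF_OP_READ: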

> 
> (3) CID 1398638: unused value in xen_block_set_vdev():
> 
> In hw/block/xen-block.c xen_block_set_vdev():
> 
>     if (vdev->type == XEN_BLOCK_VDEV_TYPE_DP) {
>         if (qemu_strtoul(p, &end, 10, &vdev->disk)) {
>             goto invalid;
>         }
> 
>         if (*end == 'p') {
>             p = (char *) ++end;    /* this assignment is unused */
>             if (*end == '\0') {
>                 goto invalid;
>             }
>         }
>     } else {
>         vdev->disk = vbd_name_to_disk(p, &end);
>     }
> 
>     if (*end != '\0') {
>         p = (char *)end;
>         [...]
> 
> The assignment to p which I've marked with a comment above
> is never used, because we will either goto 'invalid' (which never
> uses 'p') or we will take the "if (*end != '\0')" path which
> overwrites 'p'. What is the intention here ?

It's sufficient to simply increment end before testing against '\0', so I'll
fold the pre-increment into the if().
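
That is (sketch):

        if (*end == 'p') {
            if (*(++end) == '\0') {
                goto invalid;
            }
        }

With the dead assignment gone, the later "if (*end != '\0')" block still sets
p when it is actually needed.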

> 
> (4) CID 1398640: vbd_name_to_disk() integer overflow:
> 
> In hw/block/xen-block.c vbd_name_to_disk(), if the name string
> passed in is empty or doesn't start with a lowercase alphabetic
> character, then we end the while loop with disk == 0. Then
> we return "disk - 1" which underflows to UINT_MAX. This isn't
> documented as being an error return for the function and the
> caller doesn't check for it.

Ok, I'll re-work the function to check for such an error condition.
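
Something along these lines, perhaps (a sketch only; the signature and error
reporting here are illustrative, not necessarily what the final patch will
look like):

    /* Return -1 rather than wrapping disk - 1 round to UINT_MAX when the
     * name is empty or doesn't start with a lowercase letter */
    static int vbd_name_to_disk(const char *name, const char **endp,
                                unsigned long *disk)
    {
        unsigned long d = 0;

        if (!g_ascii_islower(*name)) {
            return -1;
        }

        while (g_ascii_islower(*name)) {
            d *= 26;
            d += *name++ - 'a' + 1;
        }

        *endp = name;
        *disk = d - 1;
        return 0;
    }

The caller in xen_block_set_vdev() would then goto invalid on failure.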

> 
> (5) CID 1398649: resource leak in xen_block_drive_create():
> 
> In hw/block/xen-block.c xen_block_drive_create() Coverity
> complains that the call "driver_layer = qdict_new()" allocates
> memory that's leaked because we don't save the pointer anywhere
> but don't deallocate it before the end of the function either.
> Coverity is not great at understanding our refcounting objects,
> but this does look like either we're missing a qobject_unref()
> or something should be keeping hold of the dictionary. Probably
> best to ask a block layer expert.

AFAICT nothing will consume the dictionary, so it does appear that we're
missing an unref here.
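
So the fix is presumably just a matter of dropping our reference once we're
done with the dict, along the lines of (sketch; the exact placement depends
on the surrounding code):

    /* we no longer need our reference to the dict */
    qobject_unref(driver_layer);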

I'll prepare a patch with fixes for all these and send it shortly.

  Paul

> 
> thanks
> -- PMM
