From: Stefano Garzarella
Subject: Re: [PATCH v5 08/15] block/nvme: Use union of NvmeIdCtrl / NvmeIdNs structures
Date: Fri, 21 Aug 2020 15:52:02 +0200

On Fri, Aug 21, 2020 at 03:27:15PM +0200, Philippe Mathieu-Daudé wrote:
> On 8/21/20 12:03 PM, Stefano Garzarella wrote:
> > On Thu, Aug 20, 2020 at 06:58:54PM +0200, Philippe Mathieu-Daudé wrote:
> >> We allocate a single chunk of memory, then use it for two
> >> different structures. By using a union, we make it clear
> >> that the data is overlapping (and we can remove the casts).
> >>
> >> Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
> >> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> >> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> >> ---
> >>  block/nvme.c | 31 +++++++++++++++----------------
> >>  1 file changed, 15 insertions(+), 16 deletions(-)
> >>
> >> diff --git a/block/nvme.c b/block/nvme.c
> >> index 99822d9fd36..2bd1935f951 100644
> >> --- a/block/nvme.c
> >> +++ b/block/nvme.c
> >> @@ -508,9 +508,10 @@ static int nvme_cmd_sync(BlockDriverState *bs, NVMeQueuePair *q,
> >>  static void nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
> >>  {
> >>      BDRVNVMeState *s = bs->opaque;
> >> -    NvmeIdCtrl *idctrl;
> >> -    NvmeIdNs *idns;
> >> -    uint8_t *id;
> >> +    union {
> >> +        NvmeIdCtrl ctrl;
> >> +        NvmeIdNs ns;
> >> +    } *id;
> > 
> > What about defining a new 'NvmeId' type with this union?
> 
> I'd rather not; these are different command responses. It
> just happens to make this code simpler, as the same response
> packet is used for the two requests.
> 
> See previous discussion:
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg716858.html

Yeah, if it is only useful in this part of the code, there's no need to
define a new type.
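
As an aside, the pattern is easy to show in isolation. Below is a
minimal, self-contained sketch of what the patch does (CtrlResp/NsResp
are hypothetical stand-ins for NvmeIdCtrl/NvmeIdNs, and plain calloc
stands in for qemu_try_blockalign0): one allocation, two views of the
same bytes, no casts.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct CtrlResp { uint32_t nn; };     /* stand-in for NvmeIdCtrl */
struct NsResp   { uint64_t nsze; };   /* stand-in for NvmeIdNs   */

int main(void)
{
    /* One allocation, sized for the larger of the two layouts. */
    union {
        struct CtrlResp ctrl;
        struct NsResp   ns;
    } *id = calloc(1, sizeof(*id));

    if (!id) {
        return 1;
    }

    /* The first "identify" fills the controller layout... */
    id->ctrl.nn = 4;
    printf("nn   = %u\n", id->ctrl.nn);

    /* ...then the buffer is zeroed and reused for the namespace
     * layout, with no cast needed for either access. */
    memset(id, 0, sizeof(*id));
    id->ns.nsze = 1024;
    printf("nsze = %llu\n", (unsigned long long)id->ns.nsze);

    free(id);
    return 0;
}

A nice side effect is that sizeof(*id) always tracks the larger member,
so the allocation, the DMA mapping, and the memset can no longer fall
out of sync with a hard-coded size like 4096.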

> 
> > 
> >>      NvmeLBAF *lbaf;
> >>      uint16_t oncs;
> >>      int r;
> >> @@ -520,14 +521,12 @@ static void nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
> >>          .cdw10 = cpu_to_le32(0x1),
> >>      };
> >>  
> >> -    id = qemu_try_blockalign0(bs, sizeof(NvmeIdCtrl));
> >> +    id = qemu_try_blockalign0(bs, sizeof(*id));
> >>      if (!id) {
> >>          error_setg(errp, "Cannot allocate buffer for identify response");
> >>          goto out;
> >>      }
> >> -    idctrl = (NvmeIdCtrl *)id;
> >> -    idns = (NvmeIdNs *)id;
> >> -    r = qemu_vfio_dma_map(s->vfio, id, sizeof(NvmeIdCtrl), true, &iova);
> >> +    r = qemu_vfio_dma_map(s->vfio, id, sizeof(*id), true, &iova);
> >>      if (r) {
> >>          error_setg(errp, "Cannot map buffer for DMA");
> >>          goto out;
> >> @@ -539,22 +538,22 @@ static void nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
> >>          goto out;
> >>      }
> >>  
> >> -    if (le32_to_cpu(idctrl->nn) < namespace) {
> >> +    if (le32_to_cpu(id->ctrl.nn) < namespace) {
> >>          error_setg(errp, "Invalid namespace");
> >>          goto out;
> >>      }
> >> -    s->write_cache_supported = le32_to_cpu(idctrl->vwc) & 0x1;
> >> -    s->max_transfer = (idctrl->mdts ? 1 << idctrl->mdts : 0) * s->page_size;
> >> +    s->write_cache_supported = le32_to_cpu(id->ctrl.vwc) & 0x1;
> >> +    s->max_transfer = (id->ctrl.mdts ? 1 << id->ctrl.mdts : 0) * s->page_size;
> >>      /* For now the page list buffer per command is one page, to hold at most
> >>       * s->page_size / sizeof(uint64_t) entries. */
> >>      s->max_transfer = MIN_NON_ZERO(s->max_transfer,
> >>                            s->page_size / sizeof(uint64_t) * s->page_size);
> >>  
> >> -    oncs = le16_to_cpu(idctrl->oncs);
> >> +    oncs = le16_to_cpu(id->ctrl.oncs);
> >>      s->supports_write_zeroes = !!(oncs & NVME_ONCS_WRITE_ZEROS);
> >>      s->supports_discard = !!(oncs & NVME_ONCS_DSM);
> >>  
> >> -    memset(id, 0, 4096);
> >> +    memset(id, 0, sizeof(*id));
> >>      cmd.cdw10 = 0;
> >>      cmd.nsid = cpu_to_le32(namespace);
> >>      if (nvme_cmd_sync(bs, s->queues[INDEX_ADMIN], &cmd)) {
> >> @@ -562,11 +561,11 @@ static void nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
> >>          goto out;
> >>      }
> >>  
> >> -    s->nsze = le64_to_cpu(idns->nsze);
> >> -    lbaf = &idns->lbaf[NVME_ID_NS_FLBAS_INDEX(idns->flbas)];
> >> +    s->nsze = le64_to_cpu(id->ns.nsze);
> >> +    lbaf = &id->ns.lbaf[NVME_ID_NS_FLBAS_INDEX(id->ns.flbas)];
> >>  
> >> -    if (NVME_ID_NS_DLFEAT_WRITE_ZEROES(idns->dlfeat) &&
> >> -            NVME_ID_NS_DLFEAT_READ_BEHAVIOR(idns->dlfeat) ==
> >> +    if (NVME_ID_NS_DLFEAT_WRITE_ZEROES(id->ns.dlfeat) &&
> >> +            NVME_ID_NS_DLFEAT_READ_BEHAVIOR(id->ns.dlfeat) ==
> >>                      NVME_ID_NS_DLFEAT_READ_BEHAVIOR_ZEROES) {
> >>          bs->supported_write_flags |= BDRV_REQ_MAY_UNMAP;
> >>      }
> >> -- 
> >> 2.26.2
> >>
> >>
> > 
> > With or without the new type, the patch looks good to me:
> > 
> > Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
> 
> Thanks!
> 
> Phil.
> 