Re: [PATCH v4 20/24] nvme: add support for scatter gather lists
From: Klaus Birkelund Jensen
Subject: Re: [PATCH v4 20/24] nvme: add support for scatter gather lists
Date: Mon, 13 Jan 2020 10:28:31 +0100
On Jan 9 11:44, Beata Michalska wrote:
> Hi Klaus,
>
> On Thu, 19 Dec 2019 at 13:09, Klaus Jensen <address@hidden> wrote:
> > @@ -73,7 +73,12 @@ static inline bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
> >
> > static int nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
> > {
> > - if (n->cmbsz && nvme_addr_is_cmb(n, addr)) {
> > + hwaddr hi = addr + size;
> > + if (hi < addr) {
>
> What is the actual use case for that?
This was for detecting wrap around in the unsigned addition. I found
that nvme_map_sgl does not check if addr + size is out of bounds (which
it should). With that in place this check is belt and braces, so I might
remove it.
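To spell out what the check guards against, here is a minimal standalone sketch of the wrap-around detection (not the actual QEMU code; the type alias and helper name are made up for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t hwaddr;

/*
 * Unsigned addition wraps modulo 2^64, so if addr + size overflows,
 * the resulting "hi" end of the range is numerically smaller than
 * addr itself. That is exactly what the `hi < addr` test detects.
 */
static bool addr_wraps(hwaddr addr, int size)
{
    hwaddr hi = addr + size;
    return hi < addr;
}
```

For example, a read of 4 bytes starting at UINT64_MAX - 1 wraps, while an ordinary in-range read does not.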
> > +static uint16_t nvme_map_sgl_data(NvmeCtrl *n, QEMUSGList *qsg,
> > + QEMUIOVector *iov, NvmeSglDescriptor *segment, uint64_t nsgld,
> > + uint32_t *len, bool is_cmb, NvmeRequest *req)
> > +{
> > + dma_addr_t addr, trans_len;
> > + uint16_t status;
> > +
> > + for (int i = 0; i < nsgld; i++) {
> > + if (NVME_SGL_TYPE(segment[i].type) != SGL_DESCR_TYPE_DATA_BLOCK) {
> > + trace_nvme_dev_err_invalid_sgl_descriptor(nvme_cid(req),
> > + NVME_SGL_TYPE(segment[i].type));
> > + return NVME_SGL_DESCRIPTOR_TYPE_INVALID | NVME_DNR;
> > + }
> > +
> > + if (*len == 0) {
> > + if (!NVME_CTRL_SGLS_EXCESS_LENGTH(n->id_ctrl.sgls)) {
> > +                trace_nvme_dev_err_invalid_sgl_excess_length(nvme_cid(req));
> > + return NVME_DATA_SGL_LENGTH_INVALID | NVME_DNR;
> > + }
> > +
> > + break;
> > + }
> > +
> > + addr = le64_to_cpu(segment[i].addr);
> > + trans_len = MIN(*len, le64_to_cpu(segment[i].len));
> > +
> > + if (nvme_addr_is_cmb(n, addr)) {
> > + /*
> > + * All data and metadata, if any, associated with a particular
> > +         * command shall be located in either the CMB or host memory. Thus,
> > + * if an address if found to be in the CMB and we have already
>
> s/address if/address is ?
Fixed, thanks.
> > +static uint16_t nvme_map_sgl(NvmeCtrl *n, QEMUSGList *qsg, QEMUIOVector *iov,
> > + NvmeSglDescriptor sgl, uint32_t len, NvmeRequest *req)
> > +{
> > + const int MAX_NSGLD = 256;
> > +
> > + NvmeSglDescriptor segment[MAX_NSGLD];
> > + uint64_t nsgld;
> > + uint16_t status;
> > + bool is_cmb = false;
> > + bool sgl_in_cmb = false;
> > + hwaddr addr = le64_to_cpu(sgl.addr);
> > +
> > +    trace_nvme_dev_map_sgl(nvme_cid(req), NVME_SGL_TYPE(sgl.type), req->nlb, len);
> > +
> > + if (nvme_addr_is_cmb(n, addr)) {
> > + is_cmb = true;
> > +
> > + qemu_iovec_init(iov, 1);
> > + } else {
> > + pci_dma_sglist_init(qsg, &n->parent_obj, 1);
> > + }
> > +
> > + /*
> > +     * If the entire transfer can be described with a single data block it can
> > +     * be mapped directly.
> > + */
> > + if (NVME_SGL_TYPE(sgl.type) == SGL_DESCR_TYPE_DATA_BLOCK) {
> > +        status = nvme_map_sgl_data(n, qsg, iov, &sgl, 1, &len, is_cmb, req);
> > + if (status) {
> > + goto unmap;
> > + }
> > +
> > + goto out;
> > + }
> > +
> > + /*
> > + * If the segment is located in the CMB, the submission queue of the
> > + * request must also reside there.
> > + */
> > + if (nvme_addr_is_cmb(n, addr)) {
> > + if (!nvme_addr_is_cmb(n, req->sq->dma_addr)) {
> > + return NVME_INVALID_USE_OF_CMB | NVME_DNR;
> > + }
> > +
> > + sgl_in_cmb = true;
>
> Why not combine this with the nvme_addr_is_cmb condition a few lines
> above? Also, is sgl_in_cmb really needed? If the address is in the CMB,
> that implies the queue is also there; otherwise we wouldn't progress
> beyond this point. Isn't is_cmb sufficient?
>
You are right, there is no need for sgl_in_cmb. But checking if the
queue is in the CMB only needs to be done if the descriptor in DPTR is
*not* a "singleton" data block. I think I can refactor it to be slightly
nicer, or at least be more specific in the comments.
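A rough sketch of what the combined check could look like, reduced to a standalone function (the CMB window, helper names, and return codes here are all stand-ins, not the real device code):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t hwaddr;

/* Stub for the device's CMB range check; the window is hypothetical. */
static bool nvme_addr_is_cmb(hwaddr addr)
{
    return addr >= 0x1000 && addr < 0x2000;
}

/*
 * Detect a CMB-resident SGL once, and only in that case require that
 * the submission queue also lives in the CMB. Returns 0 on success,
 * -1 standing in for NVME_INVALID_USE_OF_CMB | NVME_DNR.
 */
static int check_sgl_location(hwaddr sgl_addr, hwaddr sq_addr, bool *is_cmb)
{
    *is_cmb = nvme_addr_is_cmb(sgl_addr);

    if (*is_cmb && !nvme_addr_is_cmb(sq_addr)) {
        return -1;
    }

    return 0;
}
```

This keeps a single is_cmb flag doing double duty: it selects iovec versus sglist mapping and records that the CMB rule has already been enforced.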
> > + }
> > +
> > + while (NVME_SGL_TYPE(sgl.type) == SGL_DESCR_TYPE_SEGMENT) {
> > + bool addr_is_cmb;
> > +
> > + nsgld = le64_to_cpu(sgl.len) / sizeof(NvmeSglDescriptor);
> > +
> > + /* read the segment in chunks of 256 descriptors (4k) */
> > + while (nsgld > MAX_NSGLD) {
> > + if (nvme_addr_read(n, addr, segment, sizeof(segment))) {
> > + trace_nvme_dev_err_addr_read(addr);
> > + status = NVME_DATA_TRANSFER_ERROR;
> > + goto unmap;
> > + }
> > +
> > +            status = nvme_map_sgl_data(n, qsg, iov, segment, MAX_NSGLD, &len,
> > +                is_cmb, req);
>
> This will probably fail if there is a BitBucket Descriptor on the way (?)
>
nvme_map_sgl_data will error out on any descriptors different from
"DATA_BLOCK". So I think we are good.
> > + while (nsgld > MAX_NSGLD) {
> > + if (nvme_addr_read(n, addr, segment, sizeof(segment))) {
> > + trace_nvme_dev_err_addr_read(addr);
> > + status = NVME_DATA_TRANSFER_ERROR;
> > + goto unmap;
> > + }
> > +
> > + status = nvme_map_sgl_data(n, qsg, iov, segment, MAX_NSGLD, &len,
> > + is_cmb, req);
> > + if (status) {
> > + goto unmap;
> > + }
> > +
> > + nsgld -= MAX_NSGLD;
> > + addr += MAX_NSGLD * sizeof(NvmeSglDescriptor);
> > + }
> > +
>
> This seems to be the same as the while loop above. Why not make it common?
>
Yeah, this should probably be refactored.
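One way the refactoring could go: pull the chunked walk into one helper that handles both the full chunks and the tail. The sketch below strips out the QEMU types and replaces nvme_addr_read()/nvme_map_sgl_data() with stubs, so everything except the loop shape is made up:

```c
#include <stdint.h>
#include <string.h>

#define MAX_NSGLD 256

typedef struct { uint64_t addr; uint64_t len; uint8_t type; } SglDesc;

/* Stub standing in for nvme_addr_read(); always succeeds here. */
static int read_descs(uint64_t addr, SglDesc *segment, uint64_t nsgld)
{
    (void)addr;
    memset(segment, 0, nsgld * sizeof(*segment));
    return 0;
}

/*
 * Common helper: walk nsgld descriptors starting at addr in chunks of
 * at most MAX_NSGLD, mirroring the two duplicated while loops in
 * nvme_map_sgl(). chunks counts how many reads were issued.
 */
static uint16_t map_segment_chunked(uint64_t addr, uint64_t nsgld,
                                    uint64_t *chunks)
{
    SglDesc segment[MAX_NSGLD];

    while (nsgld > 0) {
        uint64_t n = nsgld < MAX_NSGLD ? nsgld : MAX_NSGLD;

        if (read_descs(addr, segment, n)) {
            return 1; /* NVME_DATA_TRANSFER_ERROR in the real code */
        }

        /* ... map the n descriptors here (nvme_map_sgl_data) ... */
        (*chunks)++;

        nsgld -= n;
        addr += n * sizeof(SglDesc);
    }

    return 0;
}
```

With this shape, both call sites collapse to a single call, and the short tail segment no longer needs a separate copy of the loop body.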
Thanks again for your reviews!
Klaus