qemu-devel

Re: [PATCH] vhost-user: add separate memslot counter for vhost-user


From: Igor Mammedov
Subject: Re: [PATCH] vhost-user: add separate memslot counter for vhost-user
Date: Wed, 21 Oct 2020 16:34:22 +0200

On Wed, 14 Oct 2020 12:11:34 -0400
Raphael Norwitz <raphael.s.norwitz@gmail.com> wrote:

> On Wed, Oct 14, 2020 at 3:08 AM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Tue, Oct 13, 2020 at 08:58:59PM -0400, Raphael Norwitz wrote:  
> > > On Tue, Oct 6, 2020 at 5:48 AM Igor Mammedov <imammedo@redhat.com> wrote: 
> > >  
> > > >
> > > > On Mon, 28 Sep 2020 21:17:31 +0800
> > > > Jiajun Chen <chenjiajun8@huawei.com> wrote:
> > > >  
> > > > > used_memslots is currently equal to dev->mem->nregions. That is
> > > > > true for vhost kernel, but not for vhost-user, which only uses
> > > > > the memory regions that have a file descriptor. In fact, not all
> > > > > of the memory regions have a file descriptor.
> > > > >
> > > > > This matters in some scenarios, e.g. used_memslots is 8 while
> > > > > only 5 memory slots are actually used by vhost-user: hot plugging
> > > > > new RAM then fails because vhost_has_free_slot() returns false,
> > > > > even though the hot plug would in fact be safe.  
can you find out what these extra 3 memory regions are and why they are
filtered out from the regions that are passed to vhost-user?
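
For reference, the mismatch described in the commit message could be
counted directly with a minimal sketch like the one below (a hypothetical
helper, not part of this patch), which tallies only the fd-backed regions
that vhost-user actually forwards, using the same vhost_user_get_mr_data()
helper as the code quoted later in the thread:

    static unsigned int vhost_user_used_memslots(struct vhost_dev *dev)
    {
        unsigned int slots = 0;
        int i;

        for (i = 0; i < dev->mem->nregions; i++) {
            struct vhost_memory_region *reg = dev->mem->regions + i;
            ram_addr_t offset;
            int fd;

            vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd);
            if (fd > 0) {
                /* Only fd-backed regions occupy a vhost-user memslot;
                 * anonymous RAM is skipped when the mem table is built. */
                slots++;
            }
        }
        return slots;
    }

With a counter like this, the example above would report 5 used memslots
rather than 8, and vhost_has_free_slot() could permit the hot plug.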

> > > >
> > > > I had the impression that all guest RAM has to be shared with vhost,
> > > > so a combination of anon and fd-based RAM couldn't work.
> > > > Am I wrong?  
> > >
> > > I'm not sure about the kernel backend, but I've tested adding anon
> > > memory to a VM with a vhost-user-scsi device and it works (eventually
> > > the VM crashed, but I could see the guest recognized the anon RAM).
> > > The vhost-user code is designed to work with both. I'm not sure I see
> > > a use case, but if there is one, this would be a valid issue. Maybe
> > > Jiajun or Jianjay can elaborate.  
> >
> > Hmm, doesn't vhost-user skip all the regions that do not have an fd?
> >
> >
> >         mr = vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd);
> >         if (fd > 0) {
> >             if (track_ramblocks) {
> >                 assert(*fd_num < VHOST_MEMORY_BASELINE_NREGIONS);
> >                 trace_vhost_user_set_mem_table_withfd(*fd_num, mr->name,
> >                                                       reg->memory_size,
> >                                                       reg->guest_phys_addr,
> >                                                       reg->userspace_addr,
> >                                                       offset);
> >                 u->region_rb_offset[i] = offset;
> >                 u->region_rb[i] = mr->ram_block;
> >             } else if (*fd_num == VHOST_MEMORY_BASELINE_NREGIONS) {
> >                 error_report("Failed preparing vhost-user memory table msg");
> >                 return -1;
> >             }
> >             vhost_user_fill_msg_region(&region_buffer, reg, offset);
> >             msg->payload.memory.regions[*fd_num] = region_buffer;
> >             fds[(*fd_num)++] = fd;
> >         } else if (track_ramblocks) {
> >             u->region_rb_offset[i] = 0;
> >             u->region_rb[i] = NULL;
> >         }
> >
> >
> >
> > In your test, is it possible that you were lucky and the guest did not
> > send any data from anon memory to the device?  
> 
> Yes - vhost-user skips the region and does not send anon memory to the
> device, but it does not fail the hot-add operation.
> 
> In my test the fd > 0 check definitely failed, and QEMU went on to add
> the memory without sending it to the backend. I understand why this can
> be problematic (it did eventually crash the VM), but it seems like we
> allow it as of today. I can't think of a valid reason why you would
> want anon and fd-backed RAM together, but I figured there may be a
> reason, since the vhost-user code allows for it. Should we maybe block
> that path altogether instead of patching it up?

I'm more inclined to disable mixed anon and fd-backed RAM (provided
that's really what is happening) whenever vhost (user) is used, or to
disable hot plugging a vhost-user device when the machine has mixed RAM.
Otherwise it's just a time bomb, waiting until the guest OS tries to
transmit data that it just allocated from anon RAM.
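
Concretely, the stricter of those two options could look roughly like the
sketch below (hypothetical function name and placement; error_setg() and
the region layout are existing QEMU idioms), run at vhost-user device init
and again on memory hot plug:

    static int vhost_user_reject_anon_ram(struct vhost_dev *dev, Error **errp)
    {
        int i;

        for (i = 0; i < dev->mem->nregions; i++) {
            struct vhost_memory_region *reg = dev->mem->regions + i;
            ram_addr_t offset;
            int fd;

            vhost_user_get_mr_data(reg->userspace_addr, &offset, &fd);
            if (fd <= 0) {
                /* Fail loudly instead of silently skipping the region,
                 * which is the "time bomb" described above. */
                error_setg(errp, "vhost-user requires fd-backed (shared) RAM; "
                           "region at GPA 0x%" PRIx64 " is anonymous",
                           reg->guest_phys_addr);
                return -1;
            }
        }
        return 0;
    }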


> > > >  
> > > > >
> > > > > --
> > > > > ChangeList:
> > > > > v3:
> > > > > -make used_memslots a member of struct vhost_dev instead of a global static value  
> > > > it's a global resource, so why?  
> > >
> > > I suggested it because I thought it made the code a little cleaner.
> > > I'm not opposed to changing it back, or having it stored at the
> > > vhost_user level.  