Re: Intention to work on GSoC project
From: Sahil
Subject: Re: Intention to work on GSoC project
Date: Fri, 05 Apr 2024 00:36:02 +0530
Hi,
On Thursday, April 4, 2024 12:07:49 AM IST Eugenio Perez Martin wrote:
> On Wed, Apr 3, 2024 at 4:36 PM Sahil <icegambit91@gmail.com> wrote:
> [...]
> > I would like to clarify one thing in the figure "Full two-entries
> > descriptor table". The driver can only overwrite a used descriptor in the
> > descriptor ring, right?
>
> Except for the first round, the driver can only write to used entries
> in the descriptor table. In other words, their avail and used flags
> must be equal.
>
> > And likewise for the device?
>
> Yes, but with avail descs. I think you got this already, but I want to
> be as complete as possible here.
>
> > So in the figure, the driver will have to wait until descriptor[1] is
> > used before it can overwrite it?
>
> Yes, but I think it is easier to think that both descriptor id 0 and 1
> are available already. The descriptor id must be less than virtqueue
> size.
>
> In that situation, an entry with a valid buffer and length can still be
> detected as invalid by its descriptor id: either the id is not less than
> the vq size, or it refers to a descriptor that is already available.
I didn't think of it in that way. This makes sense now.
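To make sure I have the flag semantics right, here is a minimal sketch of the
checks I have in mind, assuming the virtio 1.1 packed descriptor layout (purely
illustrative, not code taken from QEMU; the flag names are bit indices as in
linux/virtio_ring.h):

#include <stdbool.h>
#include <stdint.h>

#define VRING_PACKED_DESC_F_AVAIL 7
#define VRING_PACKED_DESC_F_USED  15

struct vring_packed_desc {
    uint64_t addr;   /* guest-physical address of the buffer */
    uint32_t len;    /* buffer length */
    uint16_t id;     /* buffer id, must be less than the virtqueue size */
    uint16_t flags;  /* AVAIL/USED bits plus NEXT, WRITE, ... */
};

/* Available: AVAIL matches the wrap counter and USED does not. */
static bool desc_is_avail(const struct vring_packed_desc *d, bool wrap)
{
    bool avail = d->flags & (1 << VRING_PACKED_DESC_F_AVAIL);
    bool used  = d->flags & (1 << VRING_PACKED_DESC_F_USED);

    return (avail != used) && (avail == wrap);
}

/* Used: both flags are equal and match the wrap counter, which is the
 * "avail and used flags must be equal" condition mentioned above. */
static bool desc_is_used(const struct vring_packed_desc *d, bool wrap)
{
    bool avail = d->flags & (1 << VRING_PACKED_DESC_F_AVAIL);
    bool used  = d->flags & (1 << VRING_PACKED_DESC_F_USED);

    return (avail == used) && (used == wrap);
}

Please correct me if this does not match what you described.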
> > Suppose the device marks descriptor[0] as used. I think the driver will
> > not be able to overwrite that descriptor entry because it has to go in
> > order and is at descriptor[1]. Is that correct?
>
> The device must write one descriptor as used, either 0 or 1, at
> descriptors[0] as all the descriptors are available.
>
> Now, it does not matter whether the device marks one or both
> descriptors as used: the driver must write its next available descriptor at
> descriptor[1]. This is not because descriptor[1] contains a special
> field or data, but because the driver must write the avail descriptors
> sequentially, so the device knows the address to poll or check after a
> notification.
>
> In other words, descriptor[1] is just a buffer space from the driver
> to communicate an available descriptor to the device. It does not
> matter what it contained before the writing, as the driver must
> process that information before writing the new available descriptor.
>
> > Is it possible for the driver
> > to go "backwards" in the descriptor ring?
>
> Nope, under any circumstance.
Understood. Thank you for the clarification.
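To check my understanding of the sequential-write rule, this is roughly how I
picture the driver publishing a descriptor and wrapping around. It reuses
struct vring_packed_desc and the flag bits from the sketch above, and
driver_publish() is a hypothetical helper name, not a QEMU function:

#include <assert.h>

static void driver_publish(struct vring_packed_desc *ring, uint16_t vq_size,
                           uint16_t *next_avail, bool *wrap,
                           uint64_t addr, uint32_t len, uint16_t id)
{
    struct vring_packed_desc *d = &ring[*next_avail];

    assert(id < vq_size);   /* ids must be less than the virtqueue size */

    d->addr = addr;
    d->len  = len;
    d->id   = id;

    /* A real driver needs a write barrier here so the buffer fields are
     * visible before the flags flip the descriptor to "available". */
    d->flags = *wrap ? (1 << VRING_PACKED_DESC_F_AVAIL)
                     : (1 << VRING_PACKED_DESC_F_USED);

    /* Always move forward; wrap at the end and flip the wrap counter. */
    if (++(*next_avail) == vq_size) {
        *next_avail = 0;
        *wrap = !*wrap;
    }
}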
> [...]
> > Q1.
> > In the paragraph just above Figure 6, there is the following line:
> > > the vhost kernel thread and QEMU may run in different CPU threads,
> > > so these writes must be synchronized with QEMU cleaning of the dirty
> > > bitmap, and this write must be seen strictly after the modifications of
> > > the guest memory by the QEMU thread.
> >
> > I am not clear on the last part of the statement. The modification of
> > guest memory is being done by the vhost device and not by the QEMU
> > thread, right?
>
> QEMU also writes to the bitmap cleaning it, so it knows the memory
> does not need to be resent.
Oh, I had thought from figure 6 that the bitmap is part of QEMU's memory but
separate from the guest's memory.
> Feel free to ask questions about this, but you don't need to interact
> with the dirty bitmap in the project.
Understood, I won't go off on a tangent in that case.
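Just to make sure I understood the ordering requirement (purely illustrative,
since the project does not touch the dirty bitmap): I picture the bitmap write
as a release-ordered bit set, so it is seen strictly after the guest-memory
writes it reports. mark_page_dirty() below is a hypothetical helper:

#include <stdatomic.h>
#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

static void mark_page_dirty(_Atomic unsigned long *bitmap, size_t page_nr)
{
    size_t word = page_nr / BITS_PER_LONG;
    unsigned long bit = 1UL << (page_nr % BITS_PER_LONG);

    /* Release ordering: the writes to guest memory made before this call
     * must be visible before the bit that tells QEMU to resend the page. */
    atomic_fetch_or_explicit(&bitmap[word], bit, memory_order_release);
}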
> [...]
> > Regarding the implementation of this project, can the project be broken
> > down into two parts:
> > 1. implementing packed virtqueues in QEMU, and
>
> Right, but let me expand on this: QEMU already supports packed
> virtqueue in an emulated device (hw/virtio/virtio.c). The missing part
> is the "driver" one, to be able to communicate with a vDPA device, at
> hw/virtio/vhost-shadow-virtqueue.c.
Got it. I'll take a look at "hw/virtio/virtio.c".
> [...]
> > My plan is to also understand how split virtqueue has been implemented
> > in QEMU. I think that'll be helpful when moving the kernel's implementation
> > to QEMU.
>
> Sure, the split virtqueue is implemented in the same file
> vhost-shadow-virtqueue.c. If you deploy vhost_vdpa + vdpa_sim or
> vp_vdpa [1][2], you can:
> * Run QEMU with -netdev type=vhost-vdpa,x-svq=on
> * Set GDB breakpoint in interesting functions like
> vhost_handle_guest_kick and vhost_svq_flush.
I'll set up this environment as well.
Thanks,
Sahil