Re: GSoC project about virtualization using Hurd mechanisms


From: olafBuddenhagen
Subject: Re: GSoC project about virtualization using Hurd mechanisms
Date: Sun, 13 Apr 2008 10:53:50 +0200
User-agent: Mutt/1.5.17+20080114 (2008-01-14)

Hi,

On Fri, Apr 11, 2008 at 10:50:22AM +0200, zhengda wrote:

> I have a question: why is it not safe for two pfinets to share one
> device? The communication between the device driver and pfinet is IPC.
> As long as the IPC communication is safe, the device driver should be
> able to get all packets from the pfinets, and the pfinets should be
> able to get the packets dispatched by the driver.

Not "safe" in the sense of not secure: Any server that has access to the
device can set arbitrary rules, and thus easily sniff packets not
intended for it, and even actively interfere with others'
communication...
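
(For illustration only, here is a rough sketch -- using the classic BPF
macros from libpcap's <pcap/bpf.h> for readability, though on the Hurd
the filter words would of course be handed to the Mach device interface
rather than to libpcap -- of the kind of filter a rogue client with
direct device access could install. It accepts *all* IPv4 TCP traffic
on the shared device, i.e. other clients' packets too:)

  #include <pcap/bpf.h>

  /* A filter that matches every IPv4 TCP packet on the shared device --
     not just the ones addressed to the client that installs it.  */
  static struct bpf_insn sniff_all_tcp[] =
  {
    BPF_STMT (BPF_LD | BPF_H | BPF_ABS, 12),             /* EtherType          */
    BPF_JUMP (BPF_JMP | BPF_JEQ | BPF_K, 0x0800, 0, 3),  /* IPv4? else drop    */
    BPF_STMT (BPF_LD | BPF_B | BPF_ABS, 23),             /* IP protocol field  */
    BPF_JUMP (BPF_JMP | BPF_JEQ | BPF_K, 6, 0, 1),       /* TCP? else drop     */
    BPF_STMT (BPF_RET | BPF_K, 1500),                    /* accept, 1500 bytes */
    BPF_STMT (BPF_RET | BPF_K, 0),                       /* drop               */
  };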

There are probably other problems with the current approach as well,
such as not being able to prevent one user (pfinet or whatever) from
sucking up all the network bandwidth.

It is good enough for many use cases, but not really suitable for
some...

> Every pfinet or other user talks to the BPF translator first. Does it
> mean only the BPF translator can talk to the network driver directly?

No, not really. Unlike the full network hypervisor, which I mentioned as
an alternative design, the BPF translator is only responsible for
setting the filter rules; the actual packets are still sent/received
through the kernel device directly...
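
(To make that flow concrete, here is a minimal sketch -- not pfinet's
actual code; error handling, the "eth0" name and the exact headers are
my assumptions -- of how a pfinet-like client attaches to the kernel
device and installs its filter. The point is that the data path is the
device port itself; only the filter words would involve the BPF
translator:)

  #include <mach.h>
  #include <device/device.h>      /* device_open, device_set_filter */
  #include <device/net_status.h>  /* filter_t, NETF_* constants */

  kern_return_t
  attach_to_net_device (mach_port_t master_device, mach_port_t receive_port,
                        filter_t *filter, mach_msg_type_number_t filter_len)
  {
    device_t ether_port;
    kern_return_t err;

    /* Open the kernel network device.  All packets are sent and
       received through this port, bypassing the BPF translator.  */
    err = device_open (master_device, D_READ | D_WRITE, "eth0", &ether_port);
    if (err)
      return err;

    /* Install the filter program; matching incoming packets are
       delivered as net messages to RECEIVE_PORT.  Assembling -- and,
       with the policy idea below, authorizing -- FILTER would be the
       BPF translator's job.  */
    return device_set_filter (ether_port, receive_port,
                              MACH_MSG_TYPE_MAKE_SEND, 0,
                              filter, filter_len);
  }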

But being in charge of setting the filters, the BPF translator could
also enact some (limited) policy, by deciding what kinds of filter rules
individual pfinets and other users can set... If BPF is made the only
way to set the filters, the policy could actually be enforced this
way -- though I don't know whether this was originally part of the plan
for the BPF translator.
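
(Purely hypothetical, since as said I don't know the actual plan, but
the kind of check such a policy could boil down to inside the
translator, before it installs or merges a client's rules, might look
like this -- the structure and names are made up for illustration:)

  #include <stdbool.h>
  #include <stdint.h>

  /* Hypothetical per-client policy: a client may only select traffic
     for the local TCP/UDP ports it has been assigned.  */
  struct client_policy
  {
    uint16_t port_min;   /* inclusive range of ports owned by the client */
    uint16_t port_max;
  };

  static bool
  rule_allowed (const struct client_policy *policy, uint16_t local_port)
  {
    return local_port >= policy->port_min && local_port <= policy->port_max;
  }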

> The other problem is who provides the filter rules. If it's pfinet,
> should the filter rules of every pfinet be sent to the translator first?

Yes, that's pretty much the idea. Though I don't know whether the
clients send pre-cooked filter rules to the BPF translator, or whether
it presents some more abstract interface and assembles the actual rules
from that...
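
(If it's the latter, the "abstract interface" could be as simple as
"give me all IPv4 traffic for address X", with the translator
assembling the BPF words itself. Again just a sketch with made-up
names, using the libpcap macros for readability:)

  #include <pcap/bpf.h>
  #include <stdint.h>
  #include <string.h>

  /* Build a program that accepts IPv4 packets whose destination
     address is IP (host byte order); INSNS must hold 6 entries.
     Returns the number of instructions written.  */
  static unsigned int
  assemble_filter_for_ip (struct bpf_insn *insns, uint32_t ip)
  {
    const struct bpf_insn prog[] =
    {
      BPF_STMT (BPF_LD | BPF_H | BPF_ABS, 12),             /* EtherType       */
      BPF_JUMP (BPF_JMP | BPF_JEQ | BPF_K, 0x0800, 0, 3),  /* IPv4? else drop */
      BPF_STMT (BPF_LD | BPF_W | BPF_ABS, 30),             /* IP dst address  */
      BPF_JUMP (BPF_JMP | BPF_JEQ | BPF_K, ip, 0, 1),      /* client's IP?    */
      BPF_STMT (BPF_RET | BPF_K, 1500),                    /* accept          */
      BPF_STMT (BPF_RET | BPF_K, 0),                       /* drop            */
    };
    memcpy (insns, prog, sizeof prog);
    return sizeof prog / sizeof prog[0];
  }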

> But the current filter rules in pfinet are hard-coded, so no matter
> how many filters are in the packet filter in the kernel, the result is
> the same; the packet filter cannot help the BPF translator dispatch
> packets. If the BPF translator decides the filter rules for every
> pfinet, I still haven't found a simple way for the translator to
> dispatch packets. I think I'm quite confused by the BPF translator.

I see, there was some misunderstanding here. Does the above explanation
clear it up?

> The other question is: do we need to consider the performance impact,
> since the packets go between the kernel and user space so many times?

Indeed, IPC can pose a considerable overhead in extreme cases, like
gigabit ethernet on moderate hardware when using lots of small packets
-- especially on Mach where IPC performance is very poor...

IMHO that shouldn't be considered a killer criterion, though: unless
performance is degraded really badly, creating visible problems, I
generally prefer a more powerful and elegant design at the expense of a
bit of efficiency...

Newer microkernels show that much better IPC performance can be
achieved. If IPC really proves a serious overhead in many situations, we
should work on improving it, rather than compromising our design in an
attempt to work around the performance issues by avoiding IPC...

-antrik-



