bug-hurd

Fwd: Re: pfinet


From: Hisham Kotry
Subject: Fwd: Re: pfinet
Date: Sun, 4 Aug 2002 12:37:52 -0700 (PDT)

I've been hacking on hurd-net lately, and I sent this
e-mail to Marcus; he said I should redirect it to
bug-hurd for discussion...

This e-mail may seem a bit offensive to either Jeff or
Roland (I've had private discussions with Jeff about
adding ACLs and NAT to the current pfinet, and they're
going well so far; work should be expected soon). I
hope neither of you feels attacked or insulted. I
wrote this e-mail at a time when getting input from
Jeff seemed impossible and I had the feeling that he
was ignoring me (this has since been resolved).

Roland, I respect your code and your ideas, but Jeff
and I are troubled by the modifications you made to
the 2.2 stack in pfinet. I had hesitated to ask for
your input, but Marcus told me to post. Again, you
have my apologies if this e-mail seems a bit
aggressive toward you...

After such a long and unrelated introduction, here
goes nothing...


--- Hisham Kotry <etsh_cucu@yahoo.com> wrote:
> Date: Mon, 29 Jul 2002 09:44:48 -0700 (PDT)
> From: Hisham Kotry <etsh_cucu@yahoo.com>
> Subject: Re: pfinet
> To: Marcus Brinkmann <Marcus.Brinkmann@ruhr-uni-bochum.de>
> 
> Marcus,
> 
> I've seen your post to hurd-dev about the Hurd's
> current status, so I thought I'd fill you in on the
> current network stack's status.
> 
> Currently, we have no more than a broken
> implementation of an IPv4 stack. Jeff and Roland
> McGrath have both set out to port a monolithic (in
> every sense of the word) network stack to user-land,
> and they've failed.
> 
> Roland hacked Linux's IPv4 stack to get it working
> with Mach (and the Hurd in general) while ignoring
> Linux's design, by incorporating management and
> configuration code into the stack itself. Why would
> this present a problem, you may ask? It actually
> does. As Jeff said, pfinet must have master control
> over the device's port to prevent anyone from pushing
> packets that were supposed to be blocked by the
> firewall directly into the NIC. Such a thing means
> that whoever wants to work on pfinet6 will have to
> arrange for his traffic to pass through pfinet, which
> requires an ugly and nonsensical (yet doable) hack.
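> 
> To make the problem concrete, here is a minimal
> sketch of how a translator reaches the NIC today.
> device_open() and device_set_filter() are the real
> Mach device calls; the accept-everything filter and
> the "eth0" name are just for illustration, and error
> handling is trimmed. Any task holding the device
> master port can do exactly this and write frames
> straight to the wire, no matter what rules pfinet's
> firewall is enforcing above it:
> 
>   #include <mach.h>
>   #include <device/device.h>
>   #include <device/net_status.h>
> 
>   /* Trivial filter program: accept every packet.  */
>   static filter_t accept_all[] = { NETF_PUSHLIT | NETF_NOP, 1 };
> 
>   kern_return_t
>   open_nic (mach_port_t master_device, mach_port_t receive_port,
>             device_t *nic)
>   {
>     kern_return_t err;
> 
>     /* Open the interface for reading and writing.  */
>     err = device_open (master_device, D_READ | D_WRITE, "eth0", nic);
>     if (err)
>       return err;
> 
>     /* Ask Mach to deliver matching packets to RECEIVE_PORT.  */
>     return device_set_filter (*nic, receive_port,
>                               MACH_MSG_TYPE_MAKE_SEND, 0,
>                               accept_all, 2);
>   }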
> 
> Back to Jeff. Jeff thinks that being
> microkernel-dependent and doing our best to achieve
> zero-copy networking will help increase performance,
> but this thought is somewhat limited. Usually, the
> guys on netdev (the Linux network stack development
> list) worry about zero-copy networking because moving
> data between kernel-space and user-space involves
> expensive copying, so they do their best to achieve
> zero-copy while processing the packet inside the
> kernel (which was also the reason for moving all the
> performance-critical socket code into the kernel;
> consider TUX as a good example). We don't suffer from
> this at the moment [1].
> 
> What we suffer from are bad ideas taken from a design
> that doesn't suit the Hurd, that is, from Linux and
> *BSD. For example, Jeff's insistence on depending on
> Mach means that we'll have to use its packet
> classifier, MPF. The problem isn't whether we can
> have MPF run on l4-hurd or not; it's that MPF sucks.
> MPF relies on interpreting its classifiers, turning
> them into instructions for a virtual machine that
> pushes the compared bits through a stack-based
> engine. This was state of the art back in 1993, and
> its ability to classify 12k packets/s amazed
> everyone. Today, that figure is nowhere near half the
> number of packets that enter a fast-ethernet NIC on a
> slightly busy network.
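> 
> The stack-machine style is easy to picture. This is
> not MPF's actual code, just an illustrative
> interpreter in the same spirit: each instruction
> pushes a 16-bit word of the packet or a literal, and
> comparison ops pop and compare, so every packet pays
> for a full interpretation pass:
> 
>   #include <stddef.h>
> 
>   /* Toy MPF-style classifier (illustrative only, no
>      bounds checks).  A program is a list of simple
>      stack-machine instructions run against each packet.  */
>   enum op { PUSH_LIT, PUSH_WORD, OP_EQ, OP_AND };
> 
>   struct insn { enum op op; unsigned short arg; };
> 
>   /* Return nonzero if the packet matches the program.  */
>   static int
>   classify (const unsigned short *pkt, size_t pkt_words,
>             const struct insn *prog, size_t prog_len)
>   {
>     unsigned short stack[32];
>     int sp = 0;
> 
>     for (size_t i = 0; i < prog_len; i++)
>       switch (prog[i].op)
>         {
>         case PUSH_LIT:   /* push an immediate operand */
>           stack[sp++] = prog[i].arg;
>           break;
>         case PUSH_WORD:  /* push the nth 16-bit word of the packet */
>           stack[sp++] = prog[i].arg < pkt_words ? pkt[prog[i].arg] : 0;
>           break;
>         case OP_EQ:      /* pop two words, push their equality */
>           sp--;
>           stack[sp - 1] = (stack[sp - 1] == stack[sp]);
>           break;
>         case OP_AND:     /* pop two results, push their conjunction */
>           sp--;
>           stack[sp - 1] = (stack[sp - 1] && stack[sp]);
>           break;
>         }
>     return sp > 0 && stack[sp - 1];
>   }
> 
> "Ethertype == 0x800" would be PUSH_WORD 6 (the
> ethertype sits in the seventh 16-bit word of an
> ethernet frame), PUSH_LIT 0x800, OP_EQ; and every
> single packet re-runs that loop.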
> 
> So the dream of better performance because we rely on
> Mach and zero-copy isn't true in the least. Of course
> zero-copy matters for performance, but it comes last
> in a long chain of elements that affect performance.
> 
> I proposed a solution to this problem: use an
> abstract/virtual interface, instead of giving
> pfinet(6) direct access to the NIC, that implements
> Linux's virtual-NIC API and does all the packet
> classification itself (instead of leaving it to the
> stacks), and of course this classifier won't use MPF
> :-). Each stack would then only see the traffic it
> had registered its interest in with the
> virtual/abstract interface; for example, pfinet would
> register itself with the expression "Ethertype ==
> 0x800", assuming this VI is for ethernet. Then you'd
> only have to authenticate each stack with the VI to
> rest assured that no unwanted party could send
> traffic it doesn't have the right to send, i.e. the
> firewall problem above, where we needed master
> control over the device's port. Something like the
> sketch below.
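> 
> None of this exists yet; the names here are
> hypothetical, just to show the shape of the interface
> I have in mind. A stack hands the VI an authenticated
> port and a match rule, and afterwards both receives
> and sends only traffic covered by that rule:
> 
>   #include <mach.h>
> 
>   /* Hypothetical VI registration, sketched as C.
>      vi_register() and struct vi_rule are made up.  */
>   struct vi_rule
>   {
>     unsigned short ethertype;  /* 0x0800 = IPv4, 0x86dd = IPv6 */
>   };
> 
>   /* AUTH is the port the stack authenticated itself
>      with; RECV is where matching frames are delivered.
>      The VI refuses sends that fall outside RULE.  */
>   kern_return_t vi_register (mach_port_t vi, mach_port_t auth,
>                              mach_port_t recv, struct vi_rule rule);
> 
> pfinet would register with ethertype 0x0800 and
> pfinet6 with 0x86dd, and neither could inject frames
> the other's rule (or the firewall) should have
> blocked.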
> 
> Simply put, this eases porting of Linux's code,
> allows multiple stack translators to co-exist happily
> (unlike the current situation), and fixes a number of
> the problems we suffer from.
> 
> Anyway, I think that using Linux's code won't be
> useful in the long run. So I guess the next logical
> step is to make sure pfinet's substitute makes it
> into l4-hurd instead of pfinet [2].
> 
> I just wanted to make it clear that pfinet won't
> evolve into anything bigger than it currently is. So
> ACLs and NAT are probably the last things that will
> make it into pfinet, as an attempt to improve its
> performance under OSKit-Mach [3].
> 
> BTW, when will OSKit-Mach be out? I'll try to finish
> the ACLs and NAT stuff soon; Jeff seems busier with
> the auto-builder, glibc, Debian or whatever, so I'll
> do it alone. I've already done some work on the ACLs
> (matching on source, source port, destination,
> destination port and protocol), the NAT code is in my
> head and just needs implementing, and state
> maintenance is somewhat working (I just need to clean
> it up). Tell me, what else should be there besides
> those? I don't think there's a wishlist anywhere :-)
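> 
> For concreteness, the ACL matching is roughly the
> obvious 5-tuple thing; the names in this sketch are
> mine for illustration, not pfinet's:
> 
>   #include <stdint.h>
> 
>   /* One ACL rule; a zero mask or zero field means "any".  */
>   struct acl_rule
>   {
>     uint32_t src, src_mask;   /* source address/mask       */
>     uint32_t dst, dst_mask;   /* destination address/mask  */
>     uint16_t sport, dport;    /* ports, 0 = any            */
>     uint8_t  proto;           /* IPPROTO_*, 0 = any        */
>     int      allow;           /* verdict if the rule hits  */
>   };
> 
>   static int
>   acl_match (const struct acl_rule *r, uint32_t src,
>              uint32_t dst, uint16_t sport, uint16_t dport,
>              uint8_t proto)
>   {
>     return (src & r->src_mask) == (r->src & r->src_mask)
>         && (dst & r->dst_mask) == (r->dst & r->dst_mask)
>         && (r->sport == 0 || r->sport == sport)
>         && (r->dport == 0 || r->dport == dport)
>         && (r->proto == 0 || r->proto == proto);
>   }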
> 
> Cheers,
> kotry
> 
> [1] The Hurd proves that user-level networking stacks
> can be faster than in-kernel stacks (previous ULNs
> also achieved higher speeds than regular in-kernel
> stacks, but they always had trouble with in-kernel
> socket programming and with copying data from
> kernel-space drivers to user-space). In fact, Mattyg
> on IRC believes the Hurd's networking performance is
> faster than that of Linux 2.2 on the same box; 2.4
> makes extensive use of signals, so we can only
> compare the two once we have faster IPC, i.e. when
> the port to l4-hurd is done. I've asked him to run
> some tests and I'm expecting the results soon. Maybe
> then the Hurd will be good at something, unlike what
> Roland said :-)
> 
> [2] Hurd-net, as I named it, is my idea of a
> user-space multi-layered network stack that overcomes
> all the previous design mistakes in ULNs (ULNs on
> micro-kernels have never been studied before) and
> shows off some of the Hurd's advantages. I'll post
> the design to bug-hurd once I feel comfortable with
> it.
> 
> [3] Some other minor changes that relate to code
> logic rather than networking knowledge could be made.
> For example, in tcp.c our listen() is a special case
> of select(), while the changes in 2.4 made it a
> special case of poll(); and as you surely know, a
> network stack handles a great many file descriptors
> because of the number of packets passing through it,
> so the performance impact is seriously
> non-negligible. We also suffer from a number of race
> conditions, namely in the retransmission timer's
> code, so some basic tweaks would yield somewhat
> better performance.
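> 
> To illustrate the select()/poll() point with plain
> POSIX calls (an illustration of the general cost, not
> pfinet code): select() makes the kernel scan every
> descriptor up to the highest one you pass, while
> poll() only touches the entries you actually hand it:
> 
>   #include <poll.h>
>   #include <sys/select.h>
> 
>   /* Wait for one high-numbered fd to become readable.  */
>   int
>   wait_readable_select (int fd)
>   {
>     fd_set set;
>     FD_ZERO (&set);
>     FD_SET (fd, &set);
>     /* Scans fds 0..fd even though only one matters.  */
>     return select (fd + 1, &set, 0, 0, 0);
>   }
> 
>   int
>   wait_readable_poll (int fd)
>   {
>     struct pollfd pfd = { .fd = fd, .events = POLLIN };
>     /* Examines exactly one entry, whatever fd's value.  */
>     return poll (&pfd, 1, -1);
>   }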
> 




