grub-devel

Re: RFC: UEFI/PXE and emulating grub-legacy-uefi-hacked behaviour


From: Seth Goldberg
Subject: Re: RFC: UEFI/PXE and emulating grub-legacy-uefi-hacked behaviour
Date: Wed, 25 Apr 2012 21:10:53 -0700

On Apr 25, 2012, at 7:23 PM, Bean wrote:

> 2012/4/26 Vladimir 'φ-coder/phcoder' Serbinenko <address@hidden>:
>> On 25.04.2012 22:21, Bean wrote:
>>> On Thu, Apr 26, 2012 at 1:57 AM, Seth Goldberg <address@hidden> wrote:
>>>>  Just to chime in here with some data -- I've found numerous UEFI systems' 
>>>> network functionality to be buggy (what a shock, right?).  Specifically, 
>>>> using the firmware's TFTP APIs allows files to be retrieved, but with 
>>>> GRUB 2's TFTP stack those same files fail to download, with the failure 
>>>> lying somewhere within the network driver.  In other words, in some 
>>>> vendors' UEFI implementations there is close coupling between the network 
>>>> driver and the TFTP implementation, such that when you try to use SNP 
>>>> alone you end up with random timeouts that kill performance, or with 
>>>> dropped packets.  So supporting UEFI's TFTP APIs seems like a good way to 
>>>> deal with those types of systems.
>>> Hi,
>>> 
>>> Actually I believe the problem is not in SNP, but in the timeout-handling
>>> mechanism. I once implemented a TFTP service over UDP, and found its
>>> performance very bad compared to the native driver. After some
>>> debugging, I found that the native service sets an event which is
>>> signaled by SNP, while my UDP code set the timeout to 0 so that it
>>> always returned immediately, whether or not a packet was available.
>>> When I used the same event technique, my own TFTP ran as fast as the
>>> native service.
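[The event-based wait Bean describes, as opposed to polling Receive with a zero timeout, might look roughly like the following under the standard UEFI interfaces (gBS boot services, EFI_SIMPLE_NETWORK_PROTOCOL and its WaitForPacket event). The function name and error handling are illustrative, not GRUB's actual code:]

```c
/* Sketch: block until SNP signals a packet or a timer expires,
 * instead of spinning on Receive() with an immediate return.
 * receive_with_timeout is a hypothetical helper; the UEFI calls
 * (CreateEvent, SetTimer, WaitForEvent, Receive) are standard.  */
static EFI_STATUS
receive_with_timeout (EFI_SIMPLE_NETWORK_PROTOCOL *snp,
                      UINT64 timeout_100ns,
                      VOID *buf, UINTN *buf_size)
{
  EFI_EVENT timer;
  EFI_EVENT events[2];
  UINTN index;
  EFI_STATUS status;

  status = gBS->CreateEvent (EVT_TIMER, TPL_CALLBACK, NULL, NULL, &timer);
  if (EFI_ERROR (status))
    return status;
  gBS->SetTimer (timer, TimerRelative, timeout_100ns);

  events[0] = snp->WaitForPacket;   /* signaled by SNP on arrival */
  events[1] = timer;
  do
    {
      gBS->WaitForEvent (2, events, &index);
      if (index == 1)               /* timer fired first: give up */
        {
          status = EFI_TIMEOUT;
          break;
        }
      status = snp->Receive (snp, NULL, buf_size, buf, NULL, NULL, NULL);
    }
  while (status == EFI_NOT_READY);  /* spurious wakeup: wait again */

  gBS->CloseEvent (timer);
  return status;
}
```

[This keeps the CPU out of the upper layers while waiting, which is presumably why the native service avoids the packet loss Bean observed.]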
>> It's very good that you found the real reason for this brain damage. I'm
>> happy that someone did. Do you have this in code / as a patch?
> 
> Hi,
> 
> This requires a significant modification to the driver interface:
> changing the definition of get_card_packet from:
> 
> static struct grub_net_buff *
> get_card_packet (const struct grub_net_card *dev)
> 
> to something like this:
> static struct grub_net_buff *
> get_card_packet (const struct grub_net_card *dev, int timeout)
> 
> The former uses async mode and returns as soon as possible, while the
> latter uses sync mode and waits up to timeout for a packet before
> returning. Perhaps you can contact the network stack author to see
> whether such a transformation is possible.
> 
> PS: the reason async mode doesn't work very well is that it spends too
> much time in the upper layers, which increases the chance of packet
> loss. Each lost packet has to be retransmitted by the server, which is
> a major performance killer.
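[A hedged sketch of what the proposed signature change could mean for callers, assuming GRUB 2's net types (struct grub_net_card, struct grub_net_buff, grub_get_time_ms). The driver->recv polling fallback below is illustrative only; a sync-capable efinet driver would instead sleep in the firmware on snp->WaitForPacket until a packet arrives or the timeout expires:]

```c
/* Illustrative only: bound the old async polling behaviour with a
 * deadline.  timeout is assumed to be in milliseconds; the per-driver
 * recv hook is a placeholder for the real receive path.  */
static struct grub_net_buff *
get_card_packet (const struct grub_net_card *dev, int timeout)
{
  grub_uint64_t deadline = grub_get_time_ms () + timeout;
  struct grub_net_buff *nb;

  do
    {
      nb = dev->driver->recv (dev);   /* assumed driver receive hook */
      if (nb)
        return nb;                    /* got a packet within the window */
    }
  while (grub_get_time_ms () < deadline);

  return NULL;  /* timed out; the TFTP layer can then retransmit */
}
```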

  How does this work around the issue?  I'm not seeing it -- we call SNP 
directly.  We don't go through UDP or any other upper layers in efinet.  When I 
did the investigation, I manually removed ALL other consumers of SNP via the 
EFI shell before loading GRUB 2, and still saw packet loss.

 --S

> 
> -- 
> Best wishes
> Bean
> 
> _______________________________________________
> Grub-devel mailing list
> address@hidden
> https://lists.gnu.org/mailman/listinfo/grub-devel



