

From: Mike Kleshov
Subject: Re: [lwip-users] LWIP configuration to maximize TCP throughput given RAM constraints
Date: Tue, 21 Oct 2008 09:41:00 +0400

> I would like to change PBUF_POOL_BUFSIZE from the default of TCP_MSS + 40 +
> 14, to Piero's value of 128, then increase PBUF_POOL_SIZE as appropriate.

In my application I chose to go with a small PBUF_POOL_BUFSIZE and
increased PBUF_POOL_SIZE accordingly. In theory, this reduces memory
use when most of your incoming packets are small. With large incoming
packets, extra processing power is needed to handle chained pbufs,
and memory use goes up because of the per-pbuf header overhead.
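
For reference, a configuration along these lines could look like this
in lwipopts.h. The numbers are only example values for this kind of
setup, not a recommendation for your particular board:

/* lwipopts.h -- example values only, tune to your own RAM budget */

/* Small pool pbufs: efficient for many small packets, but a full-size
   segment will be spread over a chain of pool pbufs. */
#define PBUF_POOL_BUFSIZE    128

/* More pool pbufs to compensate for the smaller size, so a full-size
   TCP segment can still be stored as a chain. */
#define PBUF_POOL_SIZE       32

/* TCP_MSS is independent of the pool pbuf size; a segment of up to
   TCP_MSS bytes simply occupies several chained pbufs. */
#define TCP_MSS              1460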

> However, when I do this, I find that incoming TCP packets are being
> truncated to 74 bytes of data (128 - (40 + 14)).

They are not truncated. Each packet also carries Ethernet, IP and TCP
headers, so there is less room for payload in the first pbuf of a
packet. With 128-byte pool pbufs, the 14-byte Ethernet header plus 40
bytes of IP and TCP headers leave only 74 bytes for data in the first
pbuf; the rest of the segment should arrive in chained pbufs.
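
A callback along these lines would consume the whole chain rather
than just the first pbuf. Only a sketch; handle_data() is a
placeholder for whatever your application does with the bytes:

#include "lwip/tcp.h"
#include "lwip/pbuf.h"

extern void handle_data(const void *data, u16_t len); /* your application */

static err_t StreamRecvCallback(void *arg, struct tcp_pcb *tpcb,
                                struct pbuf *p, err_t err)
{
  struct pbuf *q;

  (void)arg;
  (void)err;

  if (p == NULL) {
    /* The remote side closed the connection. */
    tcp_close(tpcb);
    return ERR_OK;
  }

  /* p->len is only the data in this pbuf (74 bytes in your case);
     p->tot_len is the payload of the whole chain. */
  for (q = p; q != NULL; q = q->next) {
    handle_data(q->payload, q->len);
  }

  /* Tell TCP how much was consumed and release the chain. */
  tcp_recved(tpcb, p->tot_len);
  pbuf_free(p);
  return ERR_OK;
}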

> In my stream receive callback function:
>
> err_t StreamRecvCallback(void *arg, struct tcp_pcb *tpcb, struct pbuf* p,
> err_t err);
>
> I always receive only a single buffer in variable p. The p->next field is
> always null, although it should point to the next portion of the data. Do I
> have an issue in my configuration, or is it likely in my code? Configuration
> pasted below.

You should look at your Ethernet driver. The pbufs are filled there.
Apparently, your driver expects that pbufs from the pbuf pool are
large enough to hold a complete packet. With smaller pbufs, the driver
should chain them when storing incoming packets.
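
The usual pattern (it is what the skeleton ethernetif.c in contrib
does) is to ask pbuf_alloc() for the whole frame length with
PBUF_POOL and copy into the resulting chain. A rough sketch, where
mac_rx_frame_length() and mac_read_rx_fifo() stand in for whatever
your MAC actually provides:

#include "lwip/pbuf.h"
#include "lwip/netif.h"

/* Placeholders for your MAC driver's receive interface. */
extern u16_t mac_rx_frame_length(void);
extern void  mac_read_rx_fifo(void *dst, u16_t len);

static struct pbuf *low_level_input(struct netif *netif)
{
  struct pbuf *p, *q;
  u16_t len = mac_rx_frame_length();

  (void)netif;

  /* Ask for enough pool pbufs to hold the whole frame; with
     PBUF_POOL_BUFSIZE == 128 this comes back as a chain. */
  p = pbuf_alloc(PBUF_RAW, len, PBUF_POOL);
  if (p == NULL) {
    return NULL;                  /* out of pbufs, drop the frame */
  }

  /* Copy the frame into the chain, q->len bytes per pbuf. */
  for (q = p; q != NULL; q = q->next) {
    mac_read_rx_fifo(q->payload, q->len);
  }
  return p;
}

pbuf_alloc() with PBUF_POOL takes care of linking the 128-byte pbufs
together; all the driver has to do is honour q->len in each step.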



