
RE: [lwip-users] tcp_write() errors on snd_queuelen


From: Kieran Mansley
Subject: RE: [lwip-users] tcp_write() errors on snd_queuelen
Date: Thu, 17 Mar 2011 13:04:02 +0000

On Thu, 2011-03-17 at 12:23 +0000, Tim Lambrix wrote:

> A few more questions: What are the performance tradeoffs of a larger
> TCP_MSS versus a smaller one when calling tcp_write with a typical
> 40-byte write?  Obviously, I want to keep the memory requirements as
> small as possible but be able to handle the load in (practically) any
> network. 

A large MSS allows you to send and receive larger segments.  Typically
there is a per-byte overhead and a per-packet overhead, so
sending/receiving fewer segments will mean less overhead in total.
However if all the packets you send and receive are less than the MSS
anyway then you won't see any difference.  Note that the size you pass
to tcp_write() isn't the size of the packets on the wire - writes may
be batched together by the stack to form larger packets (up to the MSS).
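
For example, with the raw API several small writes can end up in one
segment.  An untested sketch (pcb and the record buffers are
placeholders; error handling omitted for brevity):

  #include "lwip/tcp.h"

  /* Queue three 40-byte records; the stack may coalesce them into a
     single segment of up to TCP_MSS bytes on the wire. */
  static void send_records(struct tcp_pcb *pcb, const char *a,
                           const char *b, const char *c)
  {
    tcp_write(pcb, a, 40, TCP_WRITE_FLAG_COPY | TCP_WRITE_FLAG_MORE);
    tcp_write(pcb, b, 40, TCP_WRITE_FLAG_COPY | TCP_WRITE_FLAG_MORE);
    tcp_write(pcb, c, 40, TCP_WRITE_FLAG_COPY);
    tcp_output(pcb);  /* push whatever has been queued */
  }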

On the other hand, a larger MSS can mean that you need to commit more
memory, and that potentially some of that will be wasted.  E.g. if you
have configured a large MSS but only receive small packets and each
packet goes into an MSS-sized buffer then most of the buffer will be
unused.

To counter that, lwIP allows (if your driver also supports it) packets
to be split across multiple buffers.  This means you can have small
buffers and chain together as many as you need to hold each packet.
That avoids wasting memory on unused buffer space, but in turn has the
downside of extra overhead in dealing with all the chained buffers (it
is more complex than a single buffer per packet).
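
In code a chained packet is just a linked list, so the receiver walks
it something like this (sketch; the pbuf fields are real, consume() is
a made-up name for your own processing routine):

  #include "lwip/pbuf.h"

  /* consume() is hypothetical - whatever your application does with
     a run of payload bytes. */
  extern void consume(const void *data, u16_t len);

  /* p->tot_len is the length of the whole packet; each pbuf in the
     chain holds p->len bytes of it at p->payload. */
  static void walk_packet(struct pbuf *p)
  {
    struct pbuf *q;
    for (q = p; q != NULL; q = q->next) {
      consume(q->payload, q->len);
    }
  }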

In summary, there is no right way; you need to understand the
trade-offs and choose what is best for your application and network.  I
personally would start with a standard-sized MSS and lots of small
pbufs chained together.
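
In lwipopts.h terms that starting point might look something like this
(values purely illustrative; tune them for your target):

  /* lwipopts.h - illustrative starting values only */
  #define TCP_MSS            1460   /* standard Ethernet-sized MSS */
  #define PBUF_POOL_BUFSIZE  256    /* small buffers...            */
  #define PBUF_POOL_SIZE     32     /* ...but plenty to chain      */

Remember the small-buffer approach only works if your driver can
handle chained pbufs.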

> I see the default for MEM_SIZE is only 1600.  I have mine set to 22K
> but is this really necessary?  How should one go about picking
> memory and TCP settings that are reasonable but still sufficient?
> Is there a formula or calculation for any of these based on the
> frequency that tcp_write is called?

The default is very conservative.  Probably too small.  But lwIP is
supposed to work in such a small amount of memory.  It won't get very
good performance though.  With a more sensible amount of memory (such
as you have allocated) it will start to be able to stream data more
efficiently.  The best approach to sizing all the different pools and
structures is unfortunately iterative: choose a starting point, run
with your expected workload, use LWIP_STATS to see what the stack is
running out of (or barely using), adjust appropriately and repeat.

> While running yesterday with TCP_QLEN_DEBUG enabled, I saw two
> times that over a second went by in which no tcp_receive was called
> and therefore the snd_queuelen value reached the TCP_SND_QUEUELEN
> limit I have set in the options file.  Is this typical

It depends on your workload.  If you didn't receive anything from the
network for a second then I wouldn't expect the stack to pass anything
to the application for a second.  But I'm guessing that you do have
packets delivered from the network more often than that.  In that case
there are a few reasons why you might see gaps: (i) loss requires
retransmissions (and the associated round-trip times to detect and fix
it); (ii) a lack of free buffers may mean the stack isn't able to handle
received packets, so it will drop them, and there will be loss - see
(i); (iii) some problem with your port could mean that lwIP just didn't
get called in that time, and so couldn't do any work.  I would guess (i)
or (ii) - a packet capture would show the retransmissions, and
LWIP_STATS will highlight if you're running out of buffers (possible if
they're all on the send queue). 
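
To check for (ii) programmatically you can also read the counters
directly; a rough sketch (field names as in 1.3/1.4-era stats.h -
verify against your version):

  #include "lwip/stats.h"

  /* Nonzero means the stack has had failed allocations, i.e. it is
     running out of buffers or segments. */
  static int out_of_buffers(void)
  {
    return (lwip_stats.tcp.memerr > 0) ||
           (lwip_stats.memp[MEMP_TCP_SEG].err > 0) ||
           (lwip_stats.memp[MEMP_PBUF_POOL].err > 0);
  }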

> and what approach would you recommend such that this doesn't happen -
> keep growing the TCP_SND_QUEUELEN or is there something else not
> configured correctly (perhaps the TCP_WND not being at least 2* or 4*
> the TCP_MSS)?

I think growing the send queue length is unlikely to be helpful - it
will just increase the amount of memory in use.  What you need to do is
work out why the stack is currently unable to send data and solve that.
I.e. something is limiting your outgoing bandwidth (or you have a bug
that results in an ever-growing send queue).  If we can increase your
outgoing bandwidth then the send queue won't grow.  If we can't
increase your outgoing bandwidth then increasing the send queue length
will just delay the overflow.  The send queue is only useful for
smoothing out bursts in application writes and network sends; once it
is big enough to do that, making it larger is harmful.  Also,
and more fundamentally, if you don't want to drop data your application
needs to cope with the send queue becoming full either by blocking or by
having its own buffering scheme.
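
With the raw API the usual pattern is to treat ERR_MEM from
tcp_write() as "queue full, try again later" and resume from the sent
callback.  A rough sketch (send_pending() is a made-up name for your
own buffering routine):

  #include "lwip/tcp.h"

  /* Hypothetical application routine that re-queues whatever data is
     still waiting; it must itself tolerate another ERR_MEM. */
  extern void send_pending(struct tcp_pcb *pcb);

  /* Runs when ACKs have freed space on the send queue. */
  static err_t app_sent(void *arg, struct tcp_pcb *pcb, u16_t len)
  {
    LWIP_UNUSED_ARG(arg);
    LWIP_UNUSED_ARG(len);
    send_pending(pcb);
    return ERR_OK;
  }

  static void app_write(struct tcp_pcb *pcb, const void *data, u16_t len)
  {
    tcp_sent(pcb, app_sent);  /* usually registered once at setup */
    if (tcp_write(pcb, data, len, TCP_WRITE_FLAG_COPY) == ERR_MEM) {
      /* Queue full: hold 'data' in the application until app_sent()
         fires, rather than dropping it. */
    }
  }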

Kieran



