
RE: [lwip-users] enqueing problem


From: Noam weissman
Subject: RE: [lwip-users] enqueing problem
Date: Sun, 27 Mar 2011 15:40:28 +0200

Well,

I am using lwip 1.3.2 

I understand that snd_queuelen is incremented on every call to tcp_write,
but I do not understand why it is not decremented when I call tcp_output.
Also take into consideration that I have disabled Nagle, so it should have
worked, no?
 
Thanks,
Noam.


-----Original Message-----
From: address@hidden [mailto:address@hidden On Behalf Of address@hidden
Sent: Sunday, 27 March 2011 15:22
To: Mailing list for lwIP users
Subject: Re: [lwip-users] enqueing problem

Noam weissman:
> I have a problem that I have seen lots of users struggling with, but 
> without any real solution.
>
> I am trying to send data in a loop. I have tried disabling Nagle as follows:
>
> // this should shut down the NAGLE algorithm
>
> pcb->flags |= TF_NODELAY | TF_ACK_NOW;
>
Please don't use stack-internal variables and defines like this. 
Instead, use the TCP API function tcp_nagle_disable() to disable the Nagle 
algorithm.
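
Something like this, using the public macro from "lwip/tcp.h" (a minimal
sketch; 'pcb' is assumed to be your connected pcb):

  #include "lwip/tcp.h"

  /* disable the Nagle algorithm through the public API instead of
     touching pcb->flags directly */
  tcp_nagle_disable(pcb);
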
>
> I am calling tcp_output() on every tcp_write() but this does not help
> either. I get ERR_MEM
>
> after the 20th call or so to tcp_write().
>
That suggests your memory settings are too low.
>
> I managed to find in one of the answers here that I should use a 
> smaller window, meaning
>
> change the settings in lwipopts.h… So I did, and it was a bit better.
>
> /* TCP Maximum segment size. */
>
> //#define TCP_MSS 1460
>
> #define TCP_MSS 512
>
> /* TCP sender buffer space (bytes). */
>
> //#define TCP_SND_BUF (3*TCP_MSS)
>
> #define TCP_SND_BUF (8*TCP_MSS)
>
Well, these two defines didn't change the window directly... But there's a good 
chance TCP_WND gets changed when changing TCP_MSS if you are using the default 
define from opt.h...
Besides TCP_SND_BUF, you might want to increase TCP_SND_QUEUELEN and maybe also 
MEMP_NUM_TCP_SEG. Also, you might just run out of RAM in the heap (MEM_SIZE).
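
For example, in lwipopts.h (an untested sketch; the exact numbers are only a
starting point and have to fit your RAM budget):

  #define TCP_MSS            512
  #define TCP_SND_BUF        (8 * TCP_MSS)
  /* every tcp_write() call queues at least one pbuf, so size the queue
     generously (the default in opt.h derives it from TCP_SND_BUF) */
  #define TCP_SND_QUEUELEN   (8 * TCP_SND_BUF / TCP_MSS)
  #define MEMP_NUM_TCP_SEG   TCP_SND_QUEUELEN
  /* heap the stack allocates from when copying data */
  #define MEM_SIZE           (16 * 1024)
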
>
> Well, the above helped and I was able to send more small packets, but
> after 32 calls instead of 20 it
>
> stopped sending again and I got ERR_MEM.
>
> The most important thing is that when I check Wireshark I see that the
> stack is sending all my data in one
>
> frame ???
>
That's a little weird. Which version of lwIP are you using, anyway?
>
> Now can someone explain what is going on ?
>
> For every call to tcp_write the enqueue mechanism is advancing
> snd_queuelen by one!
>
Of course it does. tcp_write cannot know in advance that the next thing you do 
is to call tcp_output. snd_queuelen is always incremented in tcp_write and only 
decremented once the enqueued data has been acknowledged by the remote host, 
not in tcp_output itself. The only thing changed by disabling Nagle is that 
tcp_output always sends everything that is enqueued.
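
In code, the usual raw-API pattern looks roughly like this (an untested
sketch; 'send_chunks' is just an illustrative name):

  #include "lwip/tcp.h"

  /* enqueue as much as currently fits and push it out; the caller has to
     resend the remainder later, e.g. from the sent callback */
  static err_t send_chunks(struct tcp_pcb *pcb, const u8_t *data, u16_t len)
  {
    err_t err;
    u16_t chunk;

    while (len > 0) {
      chunk = (len < tcp_sndbuf(pcb)) ? len : tcp_sndbuf(pcb);
      if (chunk == 0) {
        break; /* send buffer full: wait for ACKs, then continue */
      }
      err = tcp_write(pcb, data, chunk, TCP_WRITE_FLAG_COPY);
      if (err == ERR_MEM) {
        break; /* out of queue entries/pbufs: retry later */
      } else if (err != ERR_OK) {
        return err;
      }
      data += chunk;
      len -= chunk;
    }
    return tcp_output(pcb); /* send everything that was enqueued */
  }
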
>
> I see two problems here. If the stack eventually sends all these in
> one TCP packet, why is snd_queuelen incremented at all ??
>
Because every call to tcp_write creates a new pbuf. Although there is only one 
segment enqueued, this segment consists of multiple pbufs (unless you queue 
the data at application layer and pass it to tcp_write with one single call).
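
For example (an untested sketch; 'app_queue'/'app_flush' and the buffer size
are made up):

  #include <string.h>
  #include "lwip/tcp.h"

  static u8_t  txbuf[512];
  static u16_t txlen;

  /* collect the small pieces in one application buffer first
     (caller must keep txlen + len <= sizeof(txbuf))... */
  void app_queue(const u8_t *piece, u16_t len)
  {
    memcpy(&txbuf[txlen], piece, len);
    txlen += len;
  }

  /* ...then enqueue them with one single tcp_write() call */
  err_t app_flush(struct tcp_pcb *pcb)
  {
    err_t err = tcp_write(pcb, txbuf, txlen, TCP_WRITE_FLAG_COPY);
    if (err == ERR_OK) {
      txlen = 0;
      err = tcp_output(pcb);
    }
    return err;
  }
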
>
> Secondly if I use pcb->flags |= TF_NODELAY why is the stack adding all 
> the data into one packet ??
>
Dunno.

Simon

_______________________________________________
lwip-users mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/lwip-users

 
 