From: David Empson
Subject: Re: [lwip-users] Compensating for longer latency connections
Date: Wed, 03 Mar 2010 17:05:39 +1300
It may be necessary to use Wireshark to see what is
actually happening.
There are three likely factors with international traffic:
1. Latency. Mean round trip time may be in the order of 250 ms or
higher. (I see more than this for major sites in some distant countries.)
2. Packet delivery time can fluctuate depending on congestion,
alternate routing, etc., resulting in somewhat variable RTT.
3. Packet loss is more likely due to a higher number of hops and
routers, and greater chance of congestion.
Some things to check:
1. How much audio data are you buffering in the
application?
If a data packet from the server is lost, the
server will retransmit based on its retransmit timer. This is dependent on the
implementation used by the server, but is typically the mean round trip
time plus 4 times the standard deviation of recent round trip time measurements.
This means the retransmission delay is always at least the mean RTT. If the
standard deviation is 25% of the mean RTT (quite possible for
international traffic), the retransmission delay may be in the order of (2
* mean_RTT).
There will also be interaction with the timer
mechanism used by the server, e.g. if it uses fast and slow timers then a
retransmission will only happen on the next tick of its fast timer after the
retransmission timer expires.
This suggests you should be buffering at least (2 *
mean_RTT) plus a safety margin. To allow for 250 ms mean RTT and typical fast
timer implementations, you probably need to be buffering in the order of 1
second of audio data.
2. Do you have TCP_QUEUE_OOSEQ
enabled?
If not, then a single data packet loss will result
in all subsequently received data being discarded until the
server retransmits everything starting at the packet which was lost. This
is wasteful of bandwidth but should otherwise be dealt with by having a big
enough audio buffer.
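For reference, a possible lwipopts.h fragment enabling out-of-order queueing. The specific values are illustrative starting points, not universal recommendations; tune them for your target's available RAM:

```c
/* lwipopts.h fragment -- illustrative values, adjust for your target. */

/* Queue out-of-order segments so one lost packet doesn't force the
 * server to retransmit everything after it. */
#define TCP_QUEUE_OOSEQ   1

/* Receive window: keep it well above 4 * TCP_MSS and above
 * (required data rate * mean RTT). */
#define TCP_MSS           1460
#define TCP_WND           (8 * TCP_MSS)
```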
3. Do you have a particularly small TCP_WND
(receive window)?
If TCP_WND < (4 * actual_MSS) then loss of a
single ack packet will cause a delay in the data stream.
If TCP_WND < (required_data_rate * mean_RTT)
then the server will not be able to send data fast enough. e.g. for 24000
bytes/sec and a round trip time of 250 ms, TCP_WND must be greater than
6000.
Set TCP_WND well above these thresholds to avoid
unnecessary delays.
(Comments from others on my analysis are welcome!)