
From: Pierre Etchemaite
Subject: Re: AW: [Mldonkey-users] 2.02-20 working great!
Date: Thu, 27 Feb 2003 04:52:15 +0100

On Wed, 26 Feb 2003 21:01:34 -0500, Brett Dikeman <address@hidden> wrote:

> Nope- I ran your patches all afternoon and got so-so results and a 
> bunch of problems.  For example, next to no emule clients in my 
> upload list, only 5 uploaders(I have two enormously popular files 
> partially downloaded),

Pending slots benefit clients that stay connected longer. Maybe I made
that even a bit worse, because I increase the socket timeout when a client
enters the pending slots.

The idea was to make the pending slots fairer (a client that is not
connected when it reaches the head of the pending slots gets discarded from
them!), but the result seems to be the opposite :(

Anyway, that could be the reason for the increasing number of used sockets;
I was about to remove that set_rtimeout line from my patch.

> and LOTS of messages about queueing clients 
> twice.

My patch *prevents* that bug of the CVS version from happening. The message
is just there for debugging.

> Could you explain a little better what the various patchfiles do in 
> your readme?   I found some of it very ambiguous.

It's often hard to explain concisely what a patch does, but I'll give it a
try:

* Better default parameters: removes obsolete parameters (initialized,
  strict_bandwidth, retry_delay, good_sources_threshold, reward_power) that
  aren't used in the code. Also changes:
    server_connection_timeout  5 -> 15 (helps connecting to servers)
    max_clients_per_second 30 -> 10 (can eMule *really* test that many
      clients per second? That should create a lot of overhead)
    random_order_download false -> true (better for file propagation)
    ban_period 6 -> 1 (1 hour should be enough)

* new_chunks_scheduling: fixes sources_per_file so that it really works.
  When values above 1 are used, scales the number of sources to the number
  of bytes still missing.
  Checks first for the chunks with the fewest known sources, instead of
  just chunks with one known source.
  Only increases the download priority of the first and last chunks if they
  have at least 10 known sources (to avoid flooding releasers with requests
  for those chunks only).
  Tightens exception handling in the chunk selection algorithm: declares a
  custom exception instead of using the standard Not_found exception, which
  could be triggered inadvertently.
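To illustrate, here is a rough stand-alone sketch of the selection rule in
OCaml. The record fields, names and the custom exception are mine, not the
actual donkeyOneFile code:

```ocaml
type chunk = { index : int; known_sources : int; missing : bool }

(* A custom exception: an unrelated Not_found raised somewhere inside the
   loop can no longer be mistaken for "no missing chunk left". *)
exception No_chunk_missing

(* Pick the missing chunk with the fewest known sources, not merely a
   chunk that happens to have exactly one known source. *)
let pick_chunk chunks =
  let best = ref None in
  List.iter
    (fun c ->
      if c.missing then
        match !best with
        | Some b when b.known_sources <= c.known_sources -> ()
        | _ -> best := Some c)
    chunks;
  match !best with
  | Some c -> c.index
  | None -> raise No_chunk_missing
```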

* connection_state_fix: in commonGlobals.ml, the connection_next_try
  formula is:

let connection_next_try cc =
  cc.control_last_try + mini (!!min_reask_delay * cc.control_state)
  !!max_reask_delay

  It looks like calling this function with control_state = 0 makes no sense
  (connection_next_try = control_last_try? That defeats min_reask_delay,
  for one). So control_state should be reinitialized to *1* instead of 0 in
  several places.

  BTW, since sources/clients are now scheduled differently, this may only
  affect eDonkey server connections, or non-eDonkey network supports.
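A tiny self-contained sketch of the problem (the simplified signature and
the delay values are mine, only mini, min_reask_delay and max_reask_delay
follow the code quoted above):

```ocaml
(* Simplified stand-alone version of connection_next_try; the delay
   constants are illustrative, not MLdonkey's defaults. *)
let mini a b = if a < b then a else b

let min_reask_delay = 600     (* seconds *)
let max_reask_delay = 3600

let connection_next_try last_try control_state =
  last_try + mini (min_reask_delay * control_state) max_reask_delay
```

With control_state = 0 this returns last_try unchanged, i.e. an immediate
retry; reinitializing to 1 restores the min_reask_delay floor.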

* no_emule_quota: the name says it all. Downloads and uploads look more or
  less balanced with the current system, without the help of quotas.

* wait_for_id_before_publishing: an old patch that I retrieved from the
  mldonkeyworld forum. Lugdunum servers ban clients that send their share
  list before they have received their client ID. That should almost never
  happen (sending a client ID is something any server should do quickly),
  but it's a question of code correctness.

* revert_optimize_chunks_display: my JavaScript inlining patch breaks table
  sorting under Mozilla, so I reverted it. Instead, I propose optimized
  chunks HTML code. It doesn't beat Transfer-Encoding support, but it's
  better than nothing ;)

* fix_file_unbound_argument: the URL parsing code is buggy: when no
  "CGI" argument is available, it finds a spurious empty field with an
  empty value (which in turn generates the "FILE: Unbound argument /"
  console message). Fixed.
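The fix can be sketched like this (a hypothetical helper, not the actual
MLdonkey URL decoder): return an empty argument list when there is no query
string, instead of a spurious ("", "") pair:

```ocaml
(* Split "path?k1=v1&k2=v2" into (key, value) pairs; a URL with no query
   string yields [] rather than one empty field with an empty value. *)
let parse_args url =
  match String.index_opt url '?' with
  | None -> []                      (* no CGI arguments at all *)
  | Some i ->
      let q = String.sub url (i + 1) (String.length url - i - 1) in
      if q = "" then []             (* bare "?" with nothing after it *)
      else
        List.map
          (fun kv ->
            match String.index_opt kv '=' with
            | Some j ->
                (String.sub kv 0 j,
                 String.sub kv (j + 1) (String.length kv - j - 1))
            | None -> (kv, ""))
          (String.split_on_char '&' q)
```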

* fix_ocl_parsing: in the examples I've seen, OCL files are some kind of
  CSV format, with all fields between "". The MLdonkey OCL parsing didn't
  handle those double-quotes. (BTW, can OCLs be loaded now? I'll have to
  check again)
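The quote handling amounts to something like this (a hypothetical sketch
under the assumption that fields never contain embedded commas, not the
actual parser):

```ocaml
(* Strip one pair of surrounding double-quotes from a CSV field, if any. *)
let unquote s =
  let n = String.length s in
  if n >= 2 && s.[0] = '"' && s.[n - 1] = '"' then String.sub s 1 (n - 2)
  else s

(* Split an OCL line on commas and unquote every field. *)
let parse_ocl_line line =
  List.map unquote (String.split_on_char ',' line)
```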

* can_download_from_uploaders: when a source needs to be contacted, it is
  converted into a "client", then the connection is established. It looks
  (tell me if I'm wrong) like the case where the client is already
  connected is not handled correctly: the file query is aborted. This patch
  sends the file query over the already-established connection.
  The patch is a bit hackish; I don't know if there's a better way to work
  around the forbidden cross-referencing between donkeyClient and
  donkeySources1 (oops! Did I forget Sources2 and Sources3? :( )

* one_client_per_zone: experimental. I limited the number of sources
  allowed for downloading a zone to one at a time, trying to avoid any
  possible cause of data corruption (even if the code looks fine). Also, I
  don't think using more than one source to download 180kB is very useful;
  the source will certainly be more useful for other chunks.
  Oh, well.

* disconnect_after_a_chunk: only allows a chunk's worth of data (9500kB)
  to be received with one slot. For more, the slot has to be reacquired.
  BTW, I've seen reports on IRC of Exception End_of_file in
  send_small_block that could be the result of this patch. I think the test
  (and the disconnection) should be moved to outer functions.
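The quota logic itself is simple; roughly (names and the record are mine,
not the actual donkeyClient code):

```ocaml
(* One eDonkey chunk, roughly 9500kB. *)
let chunk_size = 9500 * 1024

type slot = { mutable sent : int }   (* bytes served on this slot so far *)

(* Account for one transmitted block; returns true when the slot has
   served a full chunk and should be released, forcing the client to
   reacquire a slot for more data. *)
let record_block slot len =
  slot.sent <- slot.sent + len;
  slot.sent >= chunk_size
```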

* fix_duplicates_in_pending_slots: pending slots are managed with two
  structures, a FIFO (to keep the slots in order) and a map (a hash table,
  to quickly check whether a client is in the pending slots).
  Upon allocation, clients are added to both structures, but upon
  disconnection they are only removed from the map. On the next connection,
  MLdonkey, checking the map only, adds the client to the FIFO again. I've
  also seen (official?) clients being granted a pending slot while they
  already had an upload slot!
  As I already said, my patch is not efficient; it's just a demonstration
  of the bug, with a workaround.
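A minimal demonstration of the two-structure bug and the (inefficient)
workaround, with made-up names; the real code keeps client records, not
bare ints:

```ocaml
(* The two structures: a FIFO for ordering, a hash table for membership. *)
let fifo : int Queue.t = Queue.create ()
let map : (int, unit) Hashtbl.t = Hashtbl.create 16

let add_pending client =
  if not (Hashtbl.mem map client) then begin
    Queue.push client fifo;
    Hashtbl.add map client ()
  end

(* Buggy removal: the FIFO keeps a stale entry, so a reconnecting client
   passes the Hashtbl.mem test above and ends up queued twice. *)
let remove_pending_buggy client = Hashtbl.remove map client

(* Workaround: also rebuild the FIFO without the client. O(n), hence
   "not efficient", but it keeps both structures consistent. *)
let remove_pending client =
  Hashtbl.remove map client;
  let keep = Queue.create () in
  Queue.iter (fun c -> if c <> client then Queue.push c keep) fifo;
  Queue.clear fifo;
  Queue.transfer keep fifo
```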

* upload_slots_dynamic_patch: for file propagation, it's better if chunks
  are sent quickly; that means using as few upload slots at once as
  possible. The patch contains a bandwidth estimator (currently, the
  maximum observed usage in the recent history). Slots are then allocated
  when:
  * fewer than 3 slots are allocated. Keeping several connections at once
    is needed to saturate the uplink nicely.
  * more than 2.5kB/s of bandwidth is unused (think of it this way: it
    tries to allocate 2.5kB/s slots). That must happen twice in a row, to
    prevent allocating a slot for a very temporary glitch.
  To avoid allocation overshoots, slots are not allocated directly;
  instead, all slot requests are queued first.
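The decision rule can be sketched like this (the thresholds follow the text
above; the function shape, names and the per-tick bookkeeping are mine, and
the bandwidth estimator itself is elided):

```ocaml
let min_slots = 3
let slot_bandwidth = 2560   (* ~2.5kB/s, in bytes per second *)

(* Was spare bandwidth observed on the previous tick? The "twice in a
   row" rule filters out momentary glitches. *)
let spare_last_tick = ref false

(* Called once per tick with the current slot count, the uplink limit and
   the estimated usage (bytes/s); true means a queued slot request may be
   granted. *)
let should_allocate ~slots ~uplink ~used =
  if slots < min_slots then true
  else begin
    let spare_now = uplink - used > slot_bandwidth in
    let grant = spare_now && !spare_last_tick in
    spare_last_tick := spare_now;
    grant
  end
```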

* upload_slots_can_decrease: follow-up to the previous patch. When an
  upload slot is freed, the above allocation algorithm is called instead of
  unconditionally allocating a slot. That way, the number of slots can
  decrease if the remaining slots are fast enough to saturate the uplink.

* trickle slot: something I read a while ago in an eMule forum (thanks to
  jkl on #mldonkey for the link). Since initializing an upload takes time,
  we keep an additional "slow" upload slot that is turned into a normal
  "fast" slot to fill the gap.

* emule_protocol_version: devein on #mldonkey noticed a discrepancy in the
  hardcoded eMule protocol version tag in the code (sometimes 0x24,
  sometimes 0x26). I added an emule_protocol_version setting to fix that.



