[Chicken-users] Very slow keep-alive behaviour on Spiffy


From: Graham Fawcett
Subject: [Chicken-users] Very slow keep-alive behaviour on Spiffy
Date: Fri, 19 May 2006 12:23:32 -0400

Hi folks,

I'm seeing strange behaviour in Spiffy regarding Keep-alive
connections. On my Linux server, subsequent requests on Keep-alive
connections are taking much longer to complete than requests on new
connections -- the wall-clock time is more than 10x greater (CPU usage
on client and server is almost identical). Tests and results are below.

I've also seen very similar behaviour in a Web stack I'm building,
which is based on tcp-server but not on http-server. Either we've both
got a similar bug, or perhaps there's a lower-level problem at hand?

Can anyone reproduce this, and does anyone have a clue what the
underlying issue might be? Or, any tips on how to trace the problem?
Since CPU time isn't any different, I'm not sure that chicken-profile
can help. Packet sniffing hasn't turned up any great insights yet.
I'm stumped.
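
If it helps with reproducing or tracing this, the sketch below (Python;
the host and request details are placeholders, not code from this
thread) times each request on a single persistent socket, which should
show whether the extra time per request falls between sending the
request and the first byte of the reply:

# keepalive-trace.py -- rough sketch, not the exact script used here
import socket
import time

HOST, PORT = "myserver", 8081          # substitute the server under test
REQUEST = (b"GET /hello HTTP/1.1\r\n"
           b"Host: myserver\r\n"
           b"Connection: keep-alive\r\n"
           b"\r\n")

sock = socket.create_connection((HOST, PORT))
for i in range(10):
    start = time.time()
    sock.sendall(REQUEST)
    first_byte = None
    data = b""
    # The demo resource returns the 5-byte body "hello"; read until it arrives.
    while b"hello" not in data:
        chunk = sock.recv(4096)
        if not chunk:                  # server closed the connection
            break
        if first_byte is None:
            first_byte = time.time()
        data += chunk
    end = time.time()
    if first_byte is None:
        print("request %2d: connection closed by server" % i)
        break
    print("request %2d: first byte %6.1f ms, complete %6.1f ms"
          % (i, (first_byte - start) * 1000.0, (end - start) * 1000.0))
sock.close()

If the gap consistently sits before the first response byte, that
points at the server (or something below it) sitting on the request,
rather than at a slow read of the response.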

Thanks,

Graham

------------------------

Here's the demo server, running as a csi script (compilation makes no
difference to the behaviour):

;; demo-server.scm
(use spiffy)
(define-http-resource (hello)
  (respond "hello"
           code: 200
           description: "OK"
           type: "text/plain"
           headers: '(("Connection" . "keep-alive")))) ;; needed for ab
(start-server port: 8081 debug: #f root: ".")


I used ApacheBench to test the server (the results are similar whether
ab is run locally or from a different host, and larger responses behave
the same way). I've included only snippets of the output.

Test 1: 200 requests, no keep-alive
-----------------------------------

$ ab -n 200 http://myserver:8081/hello

Concurrency Level:      1
Time taken for tests:   0.396657 seconds
Complete requests:      200
Failed requests:        0
Write errors:           0
Total transferred:      33000 bytes
HTML transferred:       1000 bytes
Requests per second:    504.21 [#/sec] (mean)
Time per request:       1.983 [ms] (mean)
Time per request:       1.983 [ms] (mean, across all concurrent requests)
Transfer rate:          80.67 [Kbytes/sec] received

Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     1    1   4.1      1      39
Waiting:        0    0   4.2      0      39
Total:          1    1   4.1      1      39


Test 2: 200 requests, with keep-alive
-------------------------------------

$ ab -k -H "Connection: Keep-Alive" -n 200 http://myserver:8081/hello

Concurrency Level:      1
Time taken for tests:   8.274352 seconds
Complete requests:      200
Failed requests:        0
Write errors:           0
Keep-Alive requests:    200
Total transferred:      33000 bytes
HTML transferred:       1000 bytes
Requests per second:    24.17 [#/sec] (mean)
Time per request:       41.372 [ms] (mean)
Time per request:       41.372 [ms] (mean, across all concurrent requests)
Transfer rate:          3.87 [Kbytes/sec] received

Connection Times (ms)
             min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    40   40   1.8     40      53
Waiting:        0    0   4.0      0      52
Total:         40   40   1.8     40      53


I thought it might be ApacheBench at fault, so I wrote a quick test
script in Python (available at http://tinyurl.com/pr6ln) that gave
similar results:

Total time for 300 requests (keep-alive=True):
   cpu time:  0.34 seconds.
   real time: 12.66 seconds.

Total time for 300 requests (keep-alive=False):
   cpu time:  0.53 seconds.
   real time: 0.98 seconds.
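
In case that link goes stale: the script does roughly the following,
issuing the same request 300 times over one persistent connection and
again over fresh connections, and printing CPU and wall-clock totals.
This is a standard-library sketch of the approach, not the exact code
behind the link:

# keepalive-compare.py -- sketch only, not the script from the tinyurl link
import http.client
import time

HOST, PORT, PATH, N = "myserver", 8081, "/hello", 300

def run(keep_alive):
    conn = http.client.HTTPConnection(HOST, PORT) if keep_alive else None
    cpu0, real0 = time.process_time(), time.time()
    for _ in range(N):
        if not keep_alive:
            conn = http.client.HTTPConnection(HOST, PORT)
        conn.request("GET", PATH,
                     headers={"Connection":
                              "keep-alive" if keep_alive else "close"})
        conn.getresponse().read()      # drain the body so the connection can be reused
        if not keep_alive:
            conn.close()
    print("Total time for %d requests (keep-alive=%s):" % (N, keep_alive))
    print("   cpu time:  %.2f seconds." % (time.process_time() - cpu0))
    print("   real time: %.2f seconds." % (time.time() - real0))

run(True)
run(False)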

--G



