[Twisted-Python] profiling twisted
Markus Schiltknecht
markus at bluegap.ch
Wed Jun 27 10:32:07 EDT 2007
Hi,
Mike C. Fletcher wrote:
> http://blog.vrplumber.com/1353
Thanks, nice article. I've tried that code snippet, but it didn't help
much in my case, since the response times I'm chasing are below one second.
Please see these measurements, taken with ApacheBench. I'm testing
response times on a simple status resource which doesn't take long to
compute (i.e. it doesn't even return a Deferred, but writes its response
immediately):
Document Path: /status
Document Length: 202 bytes
Concurrency Level: 10
Time taken for tests: 0.202266 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 38400 bytes
HTML transferred: 20200 bytes
Requests per second: 494.40 [#/sec] (mean)
Time per request: 20.227 [ms] (mean)
Time per request: 2.023 [ms] (mean, across all concurrent requests)
Transfer rate: 182.93 [Kbytes/sec] received
Now the same test, measured while the server is under a very light load:
Document Path: /status
Document Length: 202 bytes
Concurrency Level: 10
Time taken for tests: 4.103465 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Total transferred: 38400 bytes
HTML transferred: 20200 bytes
Requests per second: 24.37 [#/sec] (mean)
Time per request: 410.347 [ms] (mean)
Time per request: 41.035 [ms] (mean, across all concurrent requests)
Transfer rate: 9.02 [Kbytes/sec] received
When I put the server under real load, those response times climb up to
two seconds, so something must be wrong.
Can I somehow inspect the reactor's state, i.e. how many Deferreds are
waiting in the queue, how many threads are running concurrently, etc.?
How good an idea is it to defer file I/O to threads, i.e.
threads.deferToThread(self.filehandle.write, data)? Another potentially
blocking module might be the database API, but I'm using Twisted's
enterprise adbapi, which should be async, AFAICT.
Maybe copying data around takes time. I'm sending around chunks of 64k
(streaming from the database to an external program). Reducing the chunk
size to 1k helps somewhat (i.e. the response time is seldom over 150 ms,
but can still climb above 0.5 seconds).
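The chunking itself looks roughly like this (pure illustration; chunkSize
is the knob I've been turning between 64k and 1k):

```python
def iterChunks(data, chunkSize=64 * 1024):
    """Split a byte string into fixed-size chunks for transport.write().

    Smaller chunks mean more write calls, but each individual copy is
    shorter, which may reduce the time spent in any single write.
    """
    for offset in range(0, len(data), chunkSize):
        yield data[offset:offset + chunkSize]
```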
Hmm... external program... maybe it's the self.transport.write() call
which blocks for several hundred milliseconds? Is it safe to write:
d = threads.deferToThread(self.transport.write, dataChunk)
(i.e. to call transport.write from a thread)?
How many resources do these deferToThread() Deferreds consume? AFAICT,
the reactor prepares a thread pool, which leads me to think that it's a
well-optimized implementation...
Regards
Markus