[Twisted-Python] Throttling outgoing client requests
Jp Calderone
exarkun at divmod.com
Mon Sep 12 21:27:42 MDT 2005
On Tue, 13 Sep 2005 02:55:31 +1000, Andrew Bennetts <andrew-twisted at puzzling.org> wrote:
>On Mon, Sep 12, 2005 at 09:30:51AM -0700, David E. Konerding wrote:
>> Hi,
>>
>> I am writing a client using Twisted that makes a lot of XMLRPC requests
>> simultaneously (using twisted.web.xmlrpc.Proxy). There are a bunch of
>> them, all to the same site,
>> and when I run on MacOSX, I start getting bind errors-- I think OS X
>> must have more restrictive limits on outgoing network connections than
>> the Linux box I normally work on.
>>
>> My goal is to throttle the number of outgoing requests. I could do this
>> by submitting just one request at a time, and having a deferred callback
>> registered such that when the request completes (successfully or not)
>> the next request is submitted. But this sort of code flow is pretty
>> contorted. My other thought would be to just schedule all the calls
>> with a callLater and a randomized delay for each call. Again, not so clean.
>>
>> Is there a 'clean' way to throttle back the number of ongoing client
>> requests-- perhaps through the xmlrpc.Proxy itself, or through some pattern?
>
>Use twisted.internet.defer.DeferredSemaphore (added in Twisted 2.0).
>
>Rough sketch of how to use it:
>
>    sem = DeferredSemaphore(10)  # maximum of 10 jobs at once
>    for job in joblist:
>        d = sem.acquire()
>        d.addCallback(job.run)              # do the work
>        d.addErrback(log.err)               # handle any errors by logging them
>        d.addBoth(lambda x: sem.release())  # trigger the next job
ITYM:

    sem = DeferredSemaphore(10)
    for job in joblist:
        sem.run(job.run).addErrback(log.err)
Note that this isn't the ideal way to queue up a large number of jobs. The ideal way avoids constructing many more Deferreds than there are outstanding jobs, as well as avoids adding more than a fixed number of callbacks to each Deferred.
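Jp's closing remark points at a different shape of solution: instead of creating one Deferred per job up front, create only a fixed pool of workers that each pull the next job from a shared queue as they finish. As a minimal sketch of that idea in plain asyncio rather than Twisted (the names `run_jobs`, `worker`, and `fetch` are illustrative, not any library's API):

```python
import asyncio

async def run_jobs(jobs, limit=5):
    # Hypothetical sketch, not Twisted's API: a fixed pool of `limit`
    # worker tasks shares one iterator over the jobs, so only `limit`
    # tasks exist at any time no matter how long the job list is.
    it = iter(jobs)
    results = []

    async def worker():
        for job in it:                 # workers pull from a shared iterator
            try:
                results.append(await job())
            except Exception as exc:   # collect errors rather than logging
                results.append(exc)

    await asyncio.gather(*(worker() for _ in range(limit)))
    return results

async def main():
    async def fetch(i):
        await asyncio.sleep(0)         # stand-in for an XML-RPC call
        return i * 2

    jobs = [lambda i=i: fetch(i) for i in range(25)]
    return await run_jobs(jobs, limit=5)

print(sorted(asyncio.run(main())))
```

The same pattern translates to Twisted by giving each of N "workers" its own Deferred chain that grabs the next job from a shared list when the previous one fires; the per-job Deferreds are then created lazily, one at a time per worker, rather than all at once.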
Jp