[Twisted-web] how to send parallel requests to the dbms while rendering html with Cheetah
Andrea Arcangeli
andrea at cpushare.com
Sun Jan 22 17:03:42 MST 2006
Hello,
With Cheetah I have full control over the serialization or parallelism of
my queries, and thanks to a real dbms I can execute very expensive
queries in parallel on multiple cpus with just a small change:
--- server/web.py	20 Jan 2006 03:16:08 -0000	1.113
+++ server/web.py	22 Jan 2006 23:24:05 -0000
@@ -907,6 +907,17 @@ class root_page_class(cached_basepage_cl
         t.title = 'KLive: Linux Kernel Live Usage Monitor'
         s.write(t.block_title())
+        render_list = [
+            self.render_nr_sessions(req),
+            defer.maybeDeferred(archs_class().render, req),
+            defer.maybeDeferred(kernels_class().render, req),
+            defer.maybeDeferred(fs_class().render, req),
+            defer.maybeDeferred(mod_class().render, req),
+            defer.maybeDeferred(pci_class().render, req),
+            defer.maybeDeferred(branch_class().render, req),
+            defer.maybeDeferred(vendor_class().render, req),
+            ]
+
         def finish(result):
             t.latest_install = 'klive.sh'
             t.latest_tac = 'klive.tac'
@@ -916,35 +927,21 @@ class root_page_class(cached_basepage_cl
             s.write(self.render_copyright(req))
             s.finish()
-        def render_branch(result):
-            d = defer.maybeDeferred(branch_class().render, req)
-            d = d.addCallback(s.write)
-            d = d.addCallback(finish)
-        def render_pci(result):
-            d = defer.maybeDeferred(pci_class().render, req)
-            d = d.addCallback(s.write)
-            d = d.addCallback(render_branch)
-        def render_vendor(result):
-            d = defer.maybeDeferred(vendor_class().render, req)
-            d = d.addCallback(s.write)
-            d = d.addCallback(render_pci)
-        def render_mod(result):
-            d = defer.maybeDeferred(mod_class().render, req)
-            d = d.addCallback(s.write)
-            d = d.addCallback(render_vendor)
-        def render_fs(result):
-            d = defer.maybeDeferred(fs_class().render, req)
-            d = d.addCallback(s.write)
-            d = d.addCallback(render_mod)
-
+        def render_list_callback(result):
+            try:
+                d = render_list.pop(0)
+            except IndexError:
+                finish(result)
+            else:
+                d = d.addCallback(s.write)
+                d = d.addCallback(render_list_callback)
         def back_archs(result):
             t.archs = result
             s.write(t.block_archs())
-            d = defer.maybeDeferred(kernels_class().render, req)
+            d = render_list.pop(0)
             d = d.addCallback(s.write)
-            d = d.addCallback(render_fs)
-
+            d = d.addCallback(render_list_callback)
         def back_nr_sessions(result):
             t.nr_sessions = result
             t.live = live_class().render(req)
@@ -954,9 +951,11 @@ class root_page_class(cached_basepage_cl
             t.ip = ip_class().render(req)
             s.write(t.block_desc1())
-            d = defer.maybeDeferred(archs_class().render, req).addCallback(back_archs)
+            d = render_list.pop(0)
+            d.addCallback(back_archs)
-        self.render_nr_sessions(req).addCallback(back_nr_sessions)
+        d = render_list.pop(0)
+        d.addCallback(back_nr_sessions)
         return http.Response(responsecode.OK,
                              {'content-type': http_headers.MimeType('text', 'html')},
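
In case the diff is hard to follow out of context, here's a minimal
standalone sketch of the same idea (fake_query and render_page are
made-up names for the example, not the klive code): all the deferreds
are created up front, so every query is in flight at once, and then
the results are consumed strictly in page order.

from twisted.internet import defer, reactor

def fake_query(name, seconds):
    # stands in for an expensive readonly sql query (e.g. one run
    # through twisted.enterprise.adbapi); the deferred fires with
    # the rendered html for one block
    d = defer.Deferred()
    reactor.callLater(seconds, d.callback, '<div>%s</div>' % name)
    return d

def render_page(write, finish):
    # creating the deferreds here means all three queries are
    # already running in parallel before the first write happens
    render_list = [
        defer.maybeDeferred(fake_query, 'archs', 1.0),
        defer.maybeDeferred(fake_query, 'kernels', 1.0),
        defer.maybeDeferred(fake_query, 'vendors', 1.0),
        ]

    def render_list_callback(result):
        # pop the next deferred, write its result, recurse; when
        # the list is empty the page is complete
        try:
            d = render_list.pop(0)
        except IndexError:
            finish(result)
        else:
            d.addCallback(write)
            d.addCallback(render_list_callback)

    render_list_callback(None)

def main():
    import sys, time
    start = time.time()
    def done(result):
        # with the three 1sec queries overlapped this prints ~1sec,
        # not ~3sec
        print 'done in %.2fsec' % (time.time() - start)
        reactor.stop()
    render_page(lambda html: sys.stdout.write(html + '\n'), done)
    reactor.run()

if __name__ == '__main__':
    main()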
I've an old (slow) 2-way smp server, and by sending the (readonly)
queries to postgresql in parallel, the time it takes to render the
usual klive pages drops from 2.9sec to 1.49sec (the 2 cpus working at
100% execute the queries in parallel, so the results now arrive in
_exactly_ half the time; postgresql is really scaling perfectly in this
test). This new smp improvement comes after the huge boost I got by
moving from Nevow to Cheetah for the rendering of the html (from >5.9sec
to 2.9sec). On my 4-way dualcore opteron desktop the rendering time
drops from ~1sec to ~370msec.
So overall, since I started migrating to Cheetah, the time it takes
to render the klive homepage went down from 5.9sec (old
nevow+twisted.web code) to 1.5sec (new cheetah+twisted.web2 code). A
~300% performance improvement in rendering pages isn't bad. The code is
almost unchanged; only the html rendering model has changed.
This is just to share my real-life experience, and perhaps it could be
useful to others.
Now the 1.5sec is almost entirely spent waiting for the sql queries,
and in any case both cpus are fully utilized (Cheetah renders the first
#blocks while the other cpu computes the results for the next block in
parallel), so there's probably not much left to optimize (nothing as
easy as the changes I made over the last few days to go from 5.9sec to
1.5sec, at least).
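
For reference, the #blocks above are plain Cheetah #block directives;
each one compiles to a method on the template object, which is why the
code can call them one at a time as each query result arrives
(t.block_title(), t.block_archs(), ...). Something along these lines
(a made-up fragment, not the real klive template):

#block block_title
<h1>$title</h1>
#end block block_title

#block block_archs
<table>
#for $arch, $count in $archs
<tr><td>$arch</td><td>$count</td></tr>
#end for
</table>
#end block block_archs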
Parallel expensive queries to the db are something the axiom model will
never be able to achieve (as long as the api isn't changed to return
deferreds, which would allow a real dbms instead of sqlite, and which
means all code written today for the current axiom API would have to
change significantly). Having to schedule to another context while
waiting for the deferred to fire may open a window for race conditions
too, and have fun writing unit tests that verify race conditions.
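
To make concrete what I mean by an api that returns deferreds: with
twisted.enterprise.adbapi every runQuery returns a deferred
immediately, so two expensive readonly queries issued back to back run
in two postgresql backend processes (and on two cpus) at once. A
sketch (the connection parameters and the sql are made up, not the
klive schema):

from twisted.enterprise import adbapi
from twisted.internet import defer, reactor

# a pool of persistent connections to postgresql; each connection
# runs its queries in its own backend process, i.e. on its own cpu
dbpool = adbapi.ConnectionPool('psycopg2', database='klive',
                               user='klive', cp_min=2, cp_max=4)

def render_stats():
    # both runQuery calls return a deferred immediately, so the two
    # queries are in flight in parallel before either has finished
    d1 = dbpool.runQuery('SELECT arch, count(*) FROM sessions GROUP BY arch')
    d2 = dbpool.runQuery('SELECT vendor, count(*) FROM pci GROUP BY vendor')
    return defer.gatherResults([d1, d2])

def done(results):
    archs, vendors = results
    print archs
    print vendors
    reactor.stop()

render_stats().addCallback(done)
reactor.run()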
While you can scale the network load by running multiple twisted
servers, you can't scale the db queries coming from different twisted
servers in parallel with the axiom API. And this is ignoring the fact
that nevow doesn't contemplate the possibility of sending all the
queries in parallel.