Phillip Pearson - web + electronics notes

tech notes and web hackery from a new zealander who was vaguely useful on the web back in 2002 (see: python community server, the blogging ecosystem, the new zealand coffee review, the internet topic exchange).

2003-10-7

MetaKit crash

Damn ... maybe compacting the database wasn't such a good idea ... ???

-su-2.05b$ gdb -c python2.2.core /pycs/bin/python2.2
GNU gdb 4.18 (FreeBSD)
[...]
This GDB was configured as "i386-unknown-freebsd"...[...]

Core was generated by `python2.2'.
Program terminated with signal 11, Segmentation fault.
Reading symbols from /usr/lib/libc_r.so.4...done.
[...]
#0 0x281889e8 in memcpy () from /usr/lib/libc_r.so.4
(gdb) bt
#0 0x281889e8 in memcpy () from /usr/lib/libc_r.so.4
#1 0xca01259 in ?? ()
#2 0x283d4d0b in c4_FormatB::Define () from /pycs/lib/python2.2/site-packages/Mk4py.so
#3 0x283d9a6c in c4_HandlerSeq::Prepare () from /pycs/lib/python2.2/site-packages/Mk4py.so
#4 0x283d6912 in c4_FormatV::SetupAllSubviews () from /pycs/lib/python2.2/site-packages/Mk4py.so
#5 0x283d7186 in c4_FormatV::HasSubview () from /pycs/lib/python2.2/site-packages/Mk4py.so
#6 0x283d92b7 in c4_HandlerSeq::Restructure () from /pycs/lib/python2.2/site-packages/Mk4py.so
#7 0x283d8ea6 in c4_HandlerSeq::DetachFromParent () from /pycs/lib/python2.2/site-packages/Mk4py.so
#8 0x283d8c09 in c4_HandlerSeq::~c4_HandlerSeq () from /pycs/lib/python2.2/site-packages/Mk4py.so
#9 0x283eaee8 in c4_Sequence::DecRef () from /pycs/lib/python2.2/site-packages/Mk4py.so
#10 0x283e4bc5 in c4_Storage::~c4_Storage () from /pycs/lib/python2.2/site-packages/Mk4py.so
#11 0x283a2e31 in c4_PyStream::Write () from /pycs/lib/python2.2/site-packages/Mk4py.so
#12 0x80c4580 in dict_dealloc (mp=0x86bf30c) at Objects/dictobject.c:703
#13 0x80ad273 in instance_dealloc (inst=0x86c33ec) at Objects/classobject.c:656
#14 0x80c428f in PyDict_SetItem (op=0x86bf10c, key=0x81d6500, value=0x80e085c) at Objects/dictobject.c:373
#15 0x80c7061 in _PyModule_Clear (m=0x86c330c) at Objects/moduleobject.c:136
#16 0x808b227 in PyImport_Cleanup () at Python/import.c:352
#17 0x8094189 in Py_Exit (sts=0) at Python/pythonrun.c:218
#18 0x80931fe in handle_system_exit () at Python/pythonrun.c:838
#19 0x8093257 in PyErr_PrintEx (set_sys_last_vars=1) at Python/pythonrun.c:852
#20 0x8094124 in PyErr_Print () at Python/pythonrun.c:778
#21 0x8092dcd in PyRun_SimpleFileExFlags (fp=0x281ab240, filename=0xbfbffd50 "/home/pycs/usr/lib/pycs/bin/pycs.py",
    closeit=1, flags=0xbfbffc08) at Python/pythonrun.c:677
#22 0x8093c08 in PyRun_AnyFileExFlags (fp=0x281ab240, filename=0xbfbffd50 "/home/pycs/usr/lib/pycs/bin/pycs.py", closeit=1,
    flags=0xbfbffc08) at Python/pythonrun.c:483
#23 0x8052a74 in Py_Main (argc=2, argv=0xbfbffc80) at Modules/main.c:367
#24 0x8052374 in main (argc=2, argv=0xbfbffc80) at Modules/python.c:10
#25 0x80522d1 in _start ()


I've set up a script to supervise the PyCS process and restart it (then e-mail me some log snippets) if it goes down. Fingers crossed - let's see how this goes.
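The post doesn't include the supervisor script itself, so here's a minimal sketch of the restart loop; `supervise()` and `restart_log` are names made up for illustration, and the real thing would also mail out log snippets on each failure:

```python
import os
import time

# Hypothetical sketch of a process supervisor: rerun the command
# whenever it dies with a nonzero status, recording each failure so a
# wrapper could e-mail log snippets, and give up after a few restarts
# rather than flapping forever.
def supervise(cmd, max_restarts, restart_log):
    restarts = 0
    while True:
        status = os.system(cmd)
        if status == 0:
            break  # clean shutdown: stop supervising
        restarts += 1
        restart_log.append(status)
        if restarts >= max_restarts:
            break  # too many crashes in a row; give up
        time.sleep(1)  # brief pause before restarting
    return restarts
```

In the real setup, `cmd` would launch the PyCS process and the failure branch would fire off the e-mail.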

High performance XML-RPC

I was working on a standalone web server for XML-RPC a while back, written in C++ for performance, and guessed that it might be capable of handling close to 1000 hits/second.

Well, I just tried out the (BSD licensed) XMLRPC-C library, and it turns out that it includes a standalone web server that gets very close to that goal on my current Linux box (an Athlon XP 2000+).

Test setup: a single box, running both ApacheBench and the simple server example. I only have the one fast Linux box, so it had to be this way.
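For reference, a post.txt payload for ab can be produced with Python's bundled XML-RPC marshaller. The method name sample.add is an assumption for illustration (the actual payload isn't shown here), and the try/except just lets the snippet run on both old and new Pythons:

```python
# Build an XML-RPC request body to POST with ab.
try:
    import xmlrpclib              # Python 2 name
except ImportError:
    import xmlrpc.client as xmlrpclib

# sample.add with two integer params -- a made-up method for the sketch.
body = xmlrpclib.dumps((5, 7), methodname='sample.add')
open('post.txt', 'w').write(body)
```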

First test: time ab -n 5000 -p post.txt -T text/xml http://localhost:8080/RPC2:

This is a single test process hitting the server as fast as it can. Results: 722 hits/second. The server process was using somewhere in the 5-10% CPU range, the ApacheBench process was using 10-20%, and the rest was system.

[...]
Concurrency Level: 1
Time taken for tests: 6.923 seconds
Complete requests: 5000
Failed requests: 0
Broken pipe errors: 0
Total transferred: 1480000 bytes
Total POSTed: 1635000
HTML transferred: 700000 bytes
Requests per second: 722.23 [#/sec] (mean)
Time per request: 1.38 [ms] (mean)
Time per request: 1.38 [ms] (mean, across all concurrent requests)
Transfer rate: 213.78 [Kbytes/sec] received
                        236.17 kb/s sent
                        449.95 kb/s total
[...]
real 0m6.866s
user 0m0.200s
sys 0m1.440s


Those times suggest ApacheBench was sucking up 24% of the CPU at the time, so we might guess that the performance here could be as high as 722.23 / (1 - 0.24) = 950 hits per second.

Second test: time ab -n 5000 -c 100 -p post.txt -T text/xml http://localhost:8080/RPC2

[...]
Concurrency Level: 100
Time taken for tests: 14.792 seconds
Complete requests: 5000
Failed requests: 0
Broken pipe errors: 0
Total transferred: 1480000 bytes
Total POSTed: 1660506
HTML transferred: 700000 bytes
Requests per second: 338.02 [#/sec] (mean)
Time per request: 295.84 [ms] (mean)
Time per request: 2.96 [ms] (mean, across all concurrent requests)
Transfer rate: 100.05 [Kbytes/sec] received
                        112.26 kb/s sent
                        212.31 kb/s total
[...]
real 0m14.814s
user 0m0.660s
sys 0m7.820s


Here, the hits per second rate is way down, but those eight seconds spent in the system make it look very much like the hundred ApacheBench threads (processes?) were mostly responsible for this. Repeating the calculation from above, we might possibly get 338.02 / (1 - (0.660 + 7.820) / 14.814) = 791 hits/second with 100 concurrent clients.
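Redoing both back-of-envelope corrections with the exact `time` figures (rather than the rounded percentages) lands in roughly the same place:

```python
# Correct the measured rate for the CPU fraction that ApacheBench
# itself consumed: estimate = measured / (1 - (user + sys) / wall).
def adjusted_rps(measured, ab_user, ab_sys, wall):
    return measured / (1 - (ab_user + ab_sys) / wall)

single = adjusted_rps(722.23, 0.200, 1.440, 6.866)       # about 949
concurrent = adjusted_rps(338.02, 0.660, 7.820, 14.814)  # about 791
```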

---

People with underperforming XML-RPC web services might want to seriously consider coding part of their apps in C++ and using XMLRPC-C and the Abyss web server to run it all.

In particular, I remember this post by Dave Winer on xmlrpc.com, talking about their pub-sub hooks for RSS subscription. The SOAP meets RSS story on thetwowayweb.com specifies the actual protocol, which is a little trickier to implement than what Dave describes in the message (as it requires the cloud to do the calls back rather than just manage a few lists).

We were talking about this on the pycs-devel mailing list a while back, I think, wondering whether it would make sense for pub-sub responses to go to your own community server, which has a good 'net connection, and for you to pull them back down. For Radio and PyDS, you could return them as part of the xmlStorageSystem.ping() response. In this case, the community server would really want to be quick, as it could potentially be working for tens of thousands of users.

Invoking qmail-inject from Python on FreeBSD without hanging

I recently ran into some odd behaviour when trying to call qmail's qmail-inject tool from a Python script with os.popen. The script was hanging when I tried to close the file.

It turns out that this is an old bug that is apparently qmail's problem. That doesn't help me, but the workaround in this message does: use popen2.popen2 instead.

Practically, that means that instead of this:

    f = os.popen('/var/qmail/bin/qmail-inject %s' % address, 'w')

You want to say:

    f = popen2.popen2('/var/qmail/bin/qmail-inject %s' % address)[1]

... and your script will then run fine.
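The reason the workaround helps is that popen2.popen2 gives you pipes to both ends of the child. The same both-ends-piped approach, spelled out with the newer subprocess module (inject() is a made-up name, and cat stands in for qmail-inject so the sketch can run anywhere):

```python
import subprocess

# Pipe both stdin and stdout of the child, write the message, then
# close and reap -- the same shape as the popen2.popen2 workaround.
def inject(cmd, message):
    p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    out, _ = p.communicate(message)  # write, close, and wait for exit
    return p.returncode, out

# /bin/cat standing in for ['/var/qmail/bin/qmail-inject', address]:
rc, out = inject(['cat'], b'Subject: test\n\nhello\n')
```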

My application was the e-mail responder script that I use to edit Crash. Previously I was just getting the script to time out like this:

import os
import signal

def timed_out(sig, frame):
    print "timed out!"
    os._exit(99)  # bail out immediately (there's no os.exit)

signal.signal(signal.SIGALRM, timed_out)
signal.alarm(60)


I've left that bit in - it serves to kill the script when weblogs.com stops responding. But now, at least, I don't always get a failure response 60 seconds after the success response.