performs far better on OS X and Solaris-style kernels
Will Bryant (<will.bryant@gmail.com>) writes:
On 21/09/2011, at 20:14, Christian Grothoff wrote:
> On the OpenIndiana performance, do you have something like 'strace' where you
> could monitor what's going on? Most likely the code hangs blocking in some
> syscall for longer than it should...
It has truss, and for the record, it showed no slow calls.
The problem turned out to be the tests using "localhost", which was picking the primary interface for the box, not the loopback interface.
Changing perf_get_concurrent.c to use 127.0.0.1 makes it vastly faster - it went from ~35 requests/sec to 12,000-15,000 requests/sec.
It makes a big difference on OS X too - from 115-140 requests/sec on my old macbook to 600-3600 requests/sec (variation here mainly due to the large amount of other stuff I have running).
Patch attached.
From: Will Bryant <will.bryant@gmail.com>
To: libmicrohttpd development and user mailinglist <libmicrohttpd@gnu.org>
Date: Today 03:01:54 AM
Attachments: 0001-use-separate-ports-for-subsequent-tests-in-the-perf-.patch
So, patch attached to make those two perf test programs use sequential port number assignments - with that and the earlier pipe shutdown patch, OS X now passes all tests.
OpenIndiana also passes all the perf tests, leaving just the SIGPIPE matter. Incidentally, performance is terrible there. I would be interested to know why - my OpenIndiana box has a modern Intel Q9505 and gets in the region of 35 requests/s in each perf test, whereas my aging Intel Core 2 Duo macbook gets 600-900 despite having half the cores. Of course we only expect the non-concurrent test to use about 1 of the cores, but both that and the concurrent test actually use only about 1% of a single CPU. This is puzzling as OpenIndiana has the very performant and scalable sunos/solaris-derived kernel and libc, so something odd is definitely going on.
Regarding the SIGPIPE, do you think a signal handler should be installed in all test programs, to implement the recommended behavior for applications, or only in those that need it?
I am sitting on the fence. The argument for the latter is that one would not normally expect SIGPIPE to occur during tests that don't exercise error/abort behavior - but installing it everywhere wouldn't normally be harmful either.
(I haven't implemented the configure script integration to set the HAVE_LISTEN_SHUTDOWN conditional define for Linux etc. - can you help there? I have, for what it's worth, checked that Linux does also work using the pipe technique instead, so that seems well-portable.)
Cheers,
Will