[Ur] TechEmpower Benchmarks

Adam Chlipala adamc at csail.mit.edu
Tue Dec 17 16:57:20 EST 2013


On 12/17/2013 03:40 PM, escalier at riseup.net wrote:
> The official results are out today. One thing I noticed when looking
> through them is that Ur/Web is generally one of the best in terms of
> latency. In fact, Ur/Web has the lowest average latency in `Data Updates`
> (discounting the frameworks that have millions of errors).
> http://www.techempower.com/benchmarks/#section=data-r8&hw=i7&test=update
>    

That's pretty neat.  Thanks for uncovering something to be proud of, 
amongst these results that generally don't look too good for Ur/Web. ;)  
It makes sense that a test that actually needs the database wouldn't be 
suffering unduly from initiating a database transaction for each 
request!  What I don't know is whether the various known issues with the 
Ur/Web benchmark setup explain why the throughput was so low for this test.  
(Locally, before fixes, I frequently saw segfaults in the server 
process.  I wonder if that is what happened during benchmarking.)

I see that your GitHub benchmarks repo now includes the '-q' flag to 
suppress logging.  Do we know of any other code changes that we want to 
push for inclusion in the next round, either in the benchmark code or 
Ur/Web itself?
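
For anyone following along, the logging change amounts to passing '-q' 
when launching the compiled server binary.  A sketch, with the caveat 
that the binary name and the thread flag are my assumptions about the 
benchmark setup, not taken from the repo:

```shell
# Compile the benchmark application to a standalone HTTP server
# (the project name "bench" here is hypothetical).
urweb bench

# Launch with -q to suppress per-request logging; if the server
# supports a thread-count flag, match it to the machine's cores.
./bench.exe -q -t 8
```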

On the benchmarks mailing list today, I've seen discussion of use of a 
special pipelining version of wrk, which I haven't used myself so far 
for testing.  Has anyone else used it?  This dimension might explain 
the gap I see between local results and reported results for plaintext, 
even though I've been testing on machines with the same number of cores 
as in their environment.
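
If someone wants to try reproducing that load pattern locally, the 
pipelined runs are driven by wrk plus a small Lua script that batches 
several requests per connection.  A sketch of what such an invocation 
looks like; the host, port, connection counts, and pipeline depth of 16 
are my guesses for illustration, not the official parameters:

```shell
# Plain (non-pipelined) run, for a baseline comparison:
wrk -t 8 -c 256 -d 15 http://server:8080/plaintext

# Pipelined run: pipeline.lua issues multiple requests per connection;
# arguments after "--" are passed to the script (here, a depth of 16).
wrk -t 8 -c 256 -d 15 -s pipeline.lua http://server:8080/plaintext -- 16
```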

In general, it would be great to have a testing set-up that is as much 
like the actual benchmark as possible, including separate machines for 
Ur/Web process, database server, and wrk.  Actually, I hope the 
benchmark organizers just switch to more frequent overnight test runs of 
everything!  But, as I've said before, in the meantime, it could be 
very helpful for a brave volunteer to create a similar testing set-up, 
just in case real network connections tweak some performance constants 
in a way that implies changes are called for in the Ur/Web C code.
