[Ur] Moving channel/client ID generation from the runtime into the database and/or compiler?

Adam Chlipala adamc at impredicative.com
Sat Sep 24 10:30:36 EDT 2011


austin seipp wrote:
> The Ur/Web runtime system represents asynchronous channels for message 
> passing between clients with an ID that is generated by the runtime when 
> a new channel is allocated, and uses this ID, for example, when you 
> insert a 'channel' into the database. Would it be possible to move this 
> channel ID generation into the database itself, perhaps with a sequence 
> or something similar?
>    

Moving name generation into the database is easy, but there is other 
important state.  Fundamentally, we must manage the TCP connections held 
by clients, which sounds scary to do with SQL.  Each channel has a buffer, 
and there is locking to ensure proper use of those buffers.  All of this 
is currently done with shared memory on a single host.
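
To make that concrete, here is a rough C sketch (invented names, not the 
actual Ur/Web runtime code) of the kind of per-channel state involved: the 
ID itself could come from a database sequence, say via libpq, but the 
message buffer, its lock, and the link to the client's TCP connection 
would still live in server memory.

    /* Hypothetical sketch only; the real runtime differs.  The channel
     * name is drawn from a Postgres sequence, but the rest of the
     * channel's state stays in local (shared) memory. */
    #include <pthread.h>
    #include <stdlib.h>
    #include <libpq-fe.h>

    #define CHANNEL_BUFFER_SIZE 1024

    typedef struct channel {
      long long id;          /* name; the part a sequence could supply */
      int client_fd;         /* TCP connection of the owning client */
      char buffer[CHANNEL_BUFFER_SIZE];  /* messages awaiting the next
                                            Comet poll */
      size_t used;           /* bytes currently buffered */
      pthread_mutex_t lock;  /* guards buffer and used */
    } channel;

    /* Allocate a channel whose ID comes from a (hypothetical) sequence
     * named channel_ids. */
    channel *channel_new(PGconn *conn, int client_fd) {
      PGresult *res = PQexec(conn, "SELECT nextval('channel_ids')");
      if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        PQclear(res);
        return NULL;
      }

      channel *ch = malloc(sizeof(channel));
      ch->id = atoll(PQgetvalue(res, 0, 0));
      ch->client_fd = client_fd;
      ch->used = 0;
      pthread_mutex_init(&ch->lock, NULL);

      PQclear(res);
      return ch;
    }

The point of the sketch is that only the first field gets any easier to 
manage by moving into the database; the connection, buffer, and lock are 
the genuinely hard part.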

I have contemplated supporting application distribution by changing 
[client] and [channel] representations to store server IDs as well, so 
that each server can easily manage its own name supply.  It would be a 
run-time error to access a channel associated with a different server.  
The Ur/Web runtime could be modified to set a cookie that a load 
balancer or proxy could use to direct requests to the appropriate 
servers, where the needed channels live.  The feasibility of this idea 
is based on the assumption that it's easy to group mutually 
communicating clients into cohorts that fit on single servers.  (Note 
that "client" here refers to a single page view, not a single browsing 
session, making the problem easier.)
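
As a rough illustration of that design (again with invented names, not 
anything in the current runtime), a channel reference would carry a 
server ID alongside the locally generated ID, and resolving a reference 
owned by a different server would be the run-time error mentioned above:

    /* Hypothetical sketch of server-qualified channel references. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
      int server_id;       /* which application server allocated it */
      long long local_id;  /* name drawn from that server's own supply */
    } channel_ref;

    /* Identity of this server; in a real deployment it would come from
     * configuration. */
    static int this_server_id = 0;

    /* Fail loudly when a request reaches the wrong server; a
     * cookie-aware load balancer is supposed to make this case rare. */
    void channel_check_local(channel_ref ref) {
      if (ref.server_id != this_server_id) {
        fprintf(stderr, "channel %d:%lld belongs to another server\n",
                ref.server_id, ref.local_id);
        exit(1);
      }
    }

The proxy could key its routing on a cookie recording the server ID (a 
hypothetical "serverId" cookie, say), so that a client's long-polling 
requests keep landing on the host that owns its channels.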


HOWEVER... I'm wondering whether it really is important to support 
application distribution.  I can understand having back-up servers that 
you switch to if the primary server encounters a problem.  That model is 
already supported very well with Ur/Web.  You can use hot-swap support 
in the DBMS, and the only remaining server-side state is the clients and 
channels, which aren't especially persistent anyway.  If all Comet 
interaction is reset on the occasional server failure, that doesn't seem 
so awful to me.  Again, here we are only talking about failures of a 
single primary server, so we don't have the usual problem of "rare 
failure + many servers = frequent failure."

I know it's in vogue today to care about having many servers in 
production at once, serving the same application.  Still, I'm deeply 
skeptical that this capability matters for more than a handful of web 
applications that have ever existed.  I lean towards using a single very 
powerful server, which offers reasonable shared-memory performance across 
its cores.

I'm very interested in arguments supporting the claim that web 
applications are commonly best deployed with multiple servers running 
application logic simultaneously.


