[Ur] callbacks from C FFI (again)

Sergey Mironov grrwlf at gmail.com
Fri Jan 3 16:39:47 EST 2014

2013/12/31 Adam Chlipala <adamc at csail.mit.edu>:
> On 12/30/2013 04:49 AM, Sergey Mironov wrote:
>> Using tasks, we may extract a
>> callback url from some ffi-specific queue, but currently urweb doesn't
provide a method to 'call' it safely (I assume, [redirect] doesn't
>> work in tasks). That is why I think this way still requires some
>> non-trivial server-side modifications..
> What's wrong with using code like this to call the URL, using the context of
> the FFI function where this action happens?
>     uw_get_app(ctx)->handle(ctx, uri);
> You literally would not need a single line of additional code to make a
> callback.  Are you just suggesting that this operation should be made
> available in the standard library, so that it can be used without writing
> any FFI code?

I assumed that executing `uw_get_app(ctx)->handle(ctx, uri)' was
incorrect because it starts a new transaction for a context ctx which
already has a transaction in flight. But I have re-checked the code,
and now I see I was wrong: in fact, it is uw_begin that calls handle,
not handle that calls uw_begin. That was the source of my
misunderstanding. I agree now that calling URLs taken from a table is
really easy to implement with only one tiny FFI function.
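For the record, that tiny FFI function could look roughly like the
sketch below. Treat it as pseudo-C: the module name `Callback', the
function `uw_Callback_call' and the error handling (none) are made up,
and I am recalling the signature of the handle field from memory.

/* callback.c -- hypothetical FFI module */
#include <urweb.h>

uw_unit uw_Callback_call(uw_context ctx, uw_Basis_string uri) {
  /* re-enter the application's URL dispatcher in the current context */
  uw_get_app(ctx)->handle(ctx, uri);
  return uw_unit_v;
}

On the Ur side it would be declared as something like
`val call : string -> transaction unit'.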

> Actually, now I'm thinking of an angle that I'm surprised I missed before:
> if tasks are OK, why involve the FFI at all in callbacks?  You can
> orchestrate callback plumbing entirely within Ur/Web code, can't you?  For
> instance, a task polls an SQL table and calls a function with each row as an
> argument, deleting all the rows afterward.

The real performance problem is not in accessing a URL from a task but
in obtaining the status of a job. Imagine a job running as a separate
thread inside an FFI library (the downloader app from my example would
start such jobs while handling user requests). Such a job has no
context, since we have no `uw_clone' function to clone one. What
should it do to signal its completion? AFAIK it can't do anything. Let
me illustrate the available workarounds with a small program written
in Ur/Web-like pseudocode:

# Scenario 1 (poll)

type jobid = int

table watchers : {JobId : jobid, Channel : channel xbody}

task periodic 1 = fn () =>
  ws <- sql(SELECT * FROM watchers);
  forM_ ws (fn w =>
    case JobFFI.getStatus w.JobId of
       None => return ()
     | Some exitcode =>
         send w.Channel <xml>ExitCode: {exitcode}</xml>;
         dml (DELETE FROM watchers WHERE JobId = {w.JobId}))

fun monitor (j:jobid) =
  c <- channel;
  dml(INSERT INTO watchers (JobId,Channel) VALUES( {j}, {c} ));
  return <xml>status : <active code={recv c}/> </xml>

fun start (cmd:string) =
  j <- JobFFI.run cmd;
  redirect (url ( monitor j ) )

As you can see, this program doesn't use callbacks at all.
Unfortunately, it queries the whole `watchers' table every second,
which is a potential source of poor performance. To improve matters,
we may want to fire the task only when we know there is at least one
job to report. Here is how I picture the
uw_trigger_task("name_of_task") you are suggesting:

# Scenario 2 (uw_trigger_task)

type jobid = int

table watchers : {JobId : jobid, Channel : channel xbody}

task triggerable tasknotify = fn () => (* implementation is the same *)

fun monitor (j:jobid) = (*... the same *)

fun start (cmd:string) =
  j <- JobFFI.run cmd tasknotify;
  redirect (url ( monitor j ) )

This time JobFFI.run remembers the task name and calls uw_trigger_task
upon completion. Do I understand your idea correctly? I agree that
this scenario could implement the downloader example with little risk
of running into performance trouble. But please read on.
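In pseudo-C, I picture the completion path of JobFFI.run like this
(uw_trigger_task is the hypothetical primitive under discussion;
job_thread, run_command and the job structure are made-up names):

/* worker thread started by JobFFI.run inside the FFI library */
static void *job_thread(void *arg) {
  struct job *job = arg;
  job->exitcode = run_command(job->cmd);  /* do the actual work */
  /* wake the triggerable task instead of waiting for the next poll */
  uw_trigger_task(job->task_name);
  return NULL;
}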

>> And here comes your point about CGI. It really does make the picture
>> simpler. I think the callback handler from my patch may be moved to
>> the FFI code now.   Almost all functions being used  - uw_reset,
>> uw_begin, uw_commit, uw_rollback, uw_set_deadline all live in urweb.h
>> and AFAIK are 'public'. So FFI side would require only one function
>> from request.c:   uw_request_new_context .  Currently, we should
>> provide it with the the id, the uw_application and loggers, that is
>> why it is hardly possible to make it public. So I suggest writing a
>> new method for cloning existing context. It's signature may be
> This all sounds rather baroque.  I would think a preferable mechanism would
> be a new kind of _triggerable_ task, where a thread sits there idle until an
> FFI call like uw_trigger_task("name_of_task") causes the code associated
> with the task to run once (in its own thread).  For most applications of
> delayed callbacks, I bet periodic tasks are sufficient and offer performance
> almost indistinguishable from any alternatives, but triggerable tasks could
> be the simplest way to handle future performance issues.
> What do you think?

I have doubts. In my view, the whole concept of tasks is a kind of
compromise: it covers certain cases of server-side procedures without
introducing low-level bindings to the operating system's API of
threads, processes, signals and so on. It is simply not possible to
cover all cases with tasks. As the author, you have to stop extending
them at some point and say "everything else is beyond the scope of
Ur/Web; use the FFI if tasks are not enough". And, in fact, you did. I
think that triggerable tasks are a step beyond this point. Take it and
someone (probably me) will come tomorrow and ask, for example, whether
it is possible to add an argument that uw_trigger_task could deliver
into a task, or whether it is possible to trigger a task on a specific
machine in a multi-machine application.

I think it is better to direct efforts toward improving the FFI
interface and the Ur/Web API, because we will need them anyway. If I
only had methods for

1) cloning the context so it can be used with a new thread inside the FFI
2) calling the URL handler (I already know that `uw_begin' or
`uw_get_app(ctx)->handle(ctx, uri)' may help me here)

then I would have written my downloader app with optimal performance
in mind and without touching the Ur/Web core. I feel I can write all
the required FFI API methods myself, so I'm asking only for review and
acceptance of my patches. In contrast, the triggerable-task work
requires your time and IMHO would cover only a limited number of use
cases.

By the way, I miss the ability to pass Ur/Web records to/from the FFI.
Is it hard to implement? I imagine they are represented as plain C
structures in the transaction's memory pool, so perhaps it is not that
hard?
