Reading this experience report from the DARPA challenge (via Slashdot), I wondered: if event registration caused object retention there, how would we deal with such memory issues in Flapjax?
Worrying about memory leaks in JavaScript is a losing battle — the browsers have different collectors. But functional reactive programming shows up in other settings too (e.g., Java/C#), so how would we solve this kind of problem in general? We looked at deletion briefly, but never had time to address the issue in detail. The complexity in our case is that the event registration encodes the entire application — how do you decide that a certain code path is dead? It may be helpful to view a functional-reactive program as a super-graph of data-producing, -splitting, -merging, and -consuming nodes; the application state is the subgraph induced by the active nodes reaching the consumers. Then there’s a certain static overhead for the program, plus the dynamic overhead of its active components.
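To make that picture concrete, here is a rough sketch in plain JavaScript of how strong event registration pins the whole graph; the node structure and names are mine, not Flapjax’s actual internals.

function Node(updater) {
  this.updater = updater;
  this.sendsTo = [];                  // registration: strong references downstream
}
Node.prototype.attachListener = function (child) {
  this.sendsTo.push(child);           // child stays reachable as long as I am
};
Node.prototype.propagate = function (value) {
  var v = this.updater(value);
  for (var i = 0; i < this.sendsTo.length; i++) {
    this.sendsTo[i].propagate(v);
  }
};

var clicks = new Node(function (v) { return v; });          // an input: a GC root
var tabA = new Node(function (v) { return 'A: ' + v; });    // consumer behind tab A
var tabB = new Node(function (v) { return 'B: ' + v; });    // consumer behind tab B
clicks.attachListener(tabA);
clicks.attachListener(tabB);
// Even if tab B is torn out of the page and nothing else mentions tabB,
// it is still reachable from clicks, so the collector cannot call that code
// path dead; that decision is really about what the application means.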
Most of the Flapjax code I’ve written has been for the user interface, so binding and unbinding isn’t much of an issue: if the interface changes temporarily (e.g., tabs), the older behavior is usually recoverable and shouldn’t be collected. When working with more complex models with longer-lived state, a deletion policy matters more. Shriram has been working on larger Flapjax applications with application logic as well as presentation — I wonder if he’s run into serious GC issues.
Who is ‘we’? I’m assuming the developers of Flapjax for JS 1.5 :) In PLT Scheme and AS3/Flash there are weak references, so those implementations of FRP keep only what is necessary. I don’t recall any support for them in ES4, but, as I mentioned back in the day, I suspect a program transformation that supports manual tracking is possible. The sluggishness of Links suggests that may not be a good idea, however.
I am curious about two things:
1. Optimizations for object churn and reuse. Heap allocation with GC just doesn’t make sense for most of the internals of call-by-value FRP that use one-time data (I’m looking at you, Pulse); a rough sketch of what I mean follows after this list.
2. Support for parallelism. The latter is a biggie, and I think it’ll be my main project next semester (I suspect UIs, in the reactive model, are embarrassingly parallel, albeit with complications from dynamic dataflow graphs and heap separation).
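For (1), the churn I have in mind looks roughly like this; Pulse and update here are stand-ins, not the actual Flapjax internals.

function Pulse(stamp, value) {      // a short-lived record wrapping one event
  this.stamp = stamp;
  this.value = value;
}

var stamp = 0;
function propagate(node, value) {
  var pulse = new Pulse(++stamp, value);        // one heap allocation per event...
  for (var i = 0; i < node.sendsTo.length; i++) {
    node.sendsTo[i].update(pulse);
  }
  // ...and the pulse is garbage the moment this call returns.
}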
Weak references can’t account for everything. If event registrations use weak references, how do you prevent the entire graph from being collected? You want the active inputs and outputs of your graph to be the GC roots, but the trick is detecting their obsolescence. So the real question I’m asking is, how should resource finalization be modeled in an FR system?
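Flipping the node sketch from the post around, and using a WeakRef wrapper purely as notation for a weak pointer (nothing like it exists in the JavaScript we actually target), here is the failure mode I’m worried about:

function WeakNode(updater) {
  this.updater = updater;
  this.sendsTo = [];                          // registration holds only weak refs
}
WeakNode.prototype.attachListener = function (child) {
  this.sendsTo.push(new WeakRef(child));
};
WeakNode.prototype.propagate = function (value) {
  var v = this.updater(value);
  for (var i = 0; i < this.sendsTo.length; i++) {
    var child = this.sendsTo[i].deref();
    if (child) { child.propagate(v); }        // a collected child just drops out
  }
};

var clicks = new WeakNode(function (v) { return v; });
var display = new WeakNode(function (v) { document.title = 'value: ' + v; });
(function () {
  var doubled = new WeakNode(function (v) { return v * 2; });
  clicks.attachListener(doubled);
  doubled.attachListener(display);
}());
// Only clicks and display are strongly held here. Once the closure returns,
// nothing strong points at the doubling node, so it may be collected and
// display silently stops updating, even though we still want that pipeline.
// That is why the live inputs and outputs should be the roots; the hard part
// is saying when an output stops being live.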
As far as (1) goes, I’m definitely with you in theory, but in practice directly manipulating pulses has been the key to all of our knot-tying: persistent objects, lenses. Do you think there’s a way to reduce object churn when Flapjax is compiled to JavaScript, or should the target be something lower-level?
The UIs I’ve written with lenses are definitely ripe for parallelization: each lens is an Erlang-style process, sending messages back and forth; these messages are then piped through constraints into other lenses. Then again, I think it makes sense to model each Event/Behavior as an Erlang-style process. But… what are you compiling to?
GC: Back references are strong, forward references are weak.
In
var x = a + b;
as soon as x goes out of scope, nothing points to the sum node strongly (a and b hold it only through weak forward references), so it can be collected; at that point, so can a and b, if nothing else uses them.
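Concretely, I picture something like this, again using WeakRef purely as notation for a weak pointer:

function BehaviorNode(parents, combine) {
  this.parents = parents;                     // back references: strong
  this.combine = combine;
  this.sendsTo = [];                          // forward references: weak
  var self = this;
  for (var i = 0; i < parents.length; i++) {
    parents[i].sendsTo.push(new WeakRef(self));
  }
}

var a = new BehaviorNode([], function () { return 1; });
var b = new BehaviorNode([], function () { return 2; });
(function () {
  var x = new BehaviorNode([a, b], function (u, v) { return u + v; });
  // ... x used here ...
}());
// Once x is out of scope, the only pointers to the sum node are the weak refs
// in a.sendsTo and b.sendsTo, so it can go; and if nothing else holds a and b,
// they can follow.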
Mix that with the graph-destruction-and-reconstruction semantics of if/switch, and I’m not sure what you think is being treated too conservatively. Do you have an example of what you’re talking about? I don’t like the reconstruction semantics because of the lost state (which is why we don’t have them in Flapjax), but I suspect GC is why Greg went with them in FrTime, and it would probably do a lot for Flapjax at the expense of slow conditional reevaluation.
OO: Yes, I was alluding to object churn being a target for the JS engine to optimize, not FX. I would love to be able to add collection annotation hints into generated Flapjax bytecode.
Parallelism: My original thought from last year, mostly out of curiosity, was targeting the GPU. I thought about doing Erlang for a server layer, mostly because Erlang seems backwards to me, but I would want to work on the exception-handling problem at that point as well, and I’m not ready for that :) For a purportedly functional language, Erlang takes an awfully impure model, with massive amounts of communication between lightweight processes, and that seems convoluted to me. Anyway, while getting scripting parallelized seems possible in the face of dynamically changing dependency graphs (probably just incurring extra penalties when the graph structure changes), I haven’t thought of good ways to interact safely with external persistent state.
So what are you working on nowadays? I’m bogged down with TA’ing and am thinking about data mining programs.
I think it’s not quite that simple, at least not from a practical engineering perspective. The DOM only back-references readers, not writers. For server-side polling, you sometimes need the references to timers to be weak (for reads) and sometimes strong (for writes). And even once these issues are resolved, I think the interesting issue is indicating obsolescence. How can a programmer say “I’m done writing to the server” in an FRP way? You can model the “writing to the server” state as part of the system, but is there a shortcut?
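The no-shortcut version I have in mind looks something like this; every name below (postToServer, addListener, doneE) is invented for the sketch, none of it is Flapjax API:

var liveWriters = [];                          // explicit roots for write paths

function writeUntil(intervalMs, buildPayload, doneE) {
  var timer = setInterval(function () {
    postToServer('/save', buildPayload());     // hypothetical POST helper
  }, intervalMs);
  var writer = { timer: timer };
  liveWriters.push(writer);                    // strong: keeps the write path alive
  doneE.addListener(function () {              // hypothetical event subscription
    clearInterval(timer);
    liveWriters.splice(liveWriters.indexOf(writer), 1);   // now collectible
  });
}
// The writer's obsolescence is itself an event in the system; once it fires,
// everything upstream of the writer can fall to ordinary collection. The
// question is whether there is a less manual way to say the same thing.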
You’re right, I think object churn is the big target for an event system. Object pooling would do just as well as collection hints, and is probably more realistic in terms of implementation. I think it would be an interesting exercise to try to whittle an FR interpreter into some sort of stack-based core. Have you read any of the literature on this? I’ve been wanting to learn the history of data-flow/FR for a while.
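The pooling I’m picturing is nothing fancy, just a free list of pulse-shaped records reused across propagations (names again illustrative):

function PulsePool() {
  this.free = [];
}
PulsePool.prototype.acquire = function (stamp, value) {
  var p = this.free.pop() || { stamp: 0, value: null };
  p.stamp = stamp;
  p.value = value;
  return p;
};
PulsePool.prototype.release = function (p) {
  p.value = null;                  // drop the payload so the pool does not retain it
  this.free.push(p);
};

var pool = new PulsePool();
function propagate(node, stamp, value) {
  var pulse = pool.acquire(stamp, value);
  // ... push pulse through node's listeners ...
  pool.release(pulse);             // safe only because a pulse never outlives one propagation
}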
These days? String lens stuff. Right now we’re looking at string lenses for document synchronization that are tolerant of reorderings, additions, and deletions. I’m taking a course on ML that’s been very educational, if at times hard to follow. No TAing for my first year, which is quite lucky — there’s hardly time for the interesting stuff as it is.
Oh: and I’ll be in SF over New Year’s for PADL/POPL; we should hang out.
“I think the interesting issue is indicating obsolescence. How can a programmer say “I’m done writing to the server” in an FRP way?”
I got into a conversation with my brother-in-law about this while grappling over turkey legs last week. He is implementing some sort of object-and-relations-to-database system, with the assumption of a client/server split. In the abstract, he was struggling with distributed garbage collection between systems – in his opinion, explicitly freeing server resources using object destructors was the way to go…
“Object pooling would do just as well as collection hints, and is probably more realistic in terms of implementation.”
I had meant to imply that the latter (collection hints) would help the former (pooling), modulo complexities around object initialization and the assumption of static types.
“I think it would be an interesting exercise to try to whittle an FR interpreter into some sort of stack-based core. Have you read any of the literature on this? I’ve been wanting to learn the history of data-flow/FR for a while.”
I think FRP came, in part, as a reaction to Esterel/Lucid (which felt ugly to me, though I recently saw a neat talk by an Esterel guy on compiling it to an FPGA). The dataflow camp is significantly older; Edward Lee has done a great deal of fairly theoretical research in the area. I don’t see the ‘nice’ stack machine for FRP, but that’s probably because I haven’t really thought about it.
Cool about the lenses stuff (I’m hoping for a publication accessible for plebeians like me). Whenever I hear about something like CLINQ, I feel that they’re tackling the smaller problem, but at least that means they’re admitting there is one :)
Just turned in my last ML homework today – I lost my thread of concentration somewhere around Gibbs sampling and Dirichlet processes. It’s a funny topic wrt PL, but it’s coming up more in both analysis and synthesis. I still feel funny about giving up soundness whenever I propose using it, though :) The papers I’ve read using ML-y approaches (Dawson Engler, Alice Zheng, Miryung Kim, some dynamic search stuff) don’t use anything as sophisticated as what I’m learning, which makes the time I put into the course a little frustrating. However, I am seeing real PL/SE usage nowadays (automashups like Rob Ennals’s stuff, tree learning on web pages for Dappit, and even SK is pitching a neat security+NLP project right now). Who knows – maybe all that UW program-learning stuff will really work and we’ll be out of a job soon :)
I’ll hopefully sneak in for the tail end of SIGPLAN (the administrative geniuses decided that week would be a great time to do a programming systems retreat). I’ll use whatever excuse I can get to go to Napa [and I bet Joel would too].