Phillip,

Nice write-up. Random notes:

> What I discovered is quite cool. The Cells system *automatically
> discovers* dynamic dependencies, without having to explicitly specify
> that X depends on Y, as long as X and Y are both implemented using
> cell objects.

<g> And that is part of why Cells is pretty much all-or-nothing for a developer: I have not tried to figure out the threshold, but above what I think is a very low one, all application semantics must be expressed declaratively as Cell rules. Otherwise imperative code gets left out of the action as the automatic dataflow engine does its thing. The corollary being your "as long as" qualifier: my declarative rules are crippled if some important datapoint is not a Cell.
For the first seven years of Cells development, when in doubt I started a new slot/attribute out as a non-Cell, bending over backwards, if you will, not to force the mechanism where it should not go; the default for the :cell meta-attribute was nil. But in each case, soon enough it turned out I needed the slot to be a Cell, so just recently "true" became the default for :cell.
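To make that concrete for the Python folks, here is a toy sketch of the trick (mine, not the actual Cells or PyCells API): reading a cell while some other cell's rule is running is itself what records the dependency.

    # Toy dependency discovery -- a sketch, not the real Cells/PyCells API.
    _running = []                            # cells whose rules are executing

    class Cell:
        def __init__(self, rule=None, value=None):
            self.rule, self.value = rule, value
            self.users = set()               # cells whose rules read this one
            if rule is not None:
                self._run()

        def get(self):
            if _running:                     # a rule is executing right now:
                self.users.add(_running[-1]) # the read IS the dependency
            return self.value

        def set(self, value):                # for plain input cells
            if value != self.value:
                self.value = value
                for user in list(self.users):
                    user._run()              # propagate to whoever read us

        def _run(self):
            _running.append(self)
            try:
                new = self.rule()            # any get() in here links us up
            finally:
                _running.pop()
            if new != self.value:
                self.value = new
                for user in list(self.users):
                    user._run()

    # Note that "c depends on f" is never declared anywhere:
    f = Cell(value=32)
    c = Cell(rule=lambda: (f.get() - 32) * 5 / 9)
    f.set(212)
    print(c.get())                           # 100.0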
> Specifically, the cells system understands how to make event-based
> updates orderly and deterministic, in a way that peak.events cannot.

It may be of interest that this orderliness is relatively new to Cells. For the longest time, and in the most intense applications, I got away with murder. Strangely, it was development of a RoboCup client that forced Cells to "grow up".
> One especially interesting bit is that the Cells system can "optimize
> away" dependencies or subscriptions when their subjects are known to
> be constant values.
I was quite surprised at how much faster this made Cells run.
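Roughly (my toy sketch again, not the real mechanism): a cell known to be constant can skip the subscription bookkeeping entirely, and a rule cell whose inputs all prove constant can itself be demoted to a constant after its first run.

    # Sketch of "optimizing away" constant dependencies -- hypothetical API.
    class Const:
        """A cell-like value that is promised never to change."""
        def __init__(self, value):
            self.value = value
        def get(self):
            return self.value        # no users set, no notifications, ever

    # In a full engine, a rule cell that read ONLY constants on its first
    # run never needs to run again: it can drop its rule, become a Const,
    # and let its own readers unsubscribe from it in turn. All of that
    # skipped bookkeeping is where the speedup comes from.
    pi = Const(3.14159)
    area = Const(pi.get() * 10 ** 2)  # computed once, then frozen
    print(area.get())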
> I'm also wondering if a Cells-like system couldn't also be used to
> implement STM (Software Transactional Memory) to allow for atomic
> operations even in the presence of threads. All reads and writes are
> controlled by the cells system, so it can in principle abort and retry
> a "transaction", by waiting until *something changes* that would
> affect the transaction's ability to succeed.

We have a Google SoC project over on the Lisp side to implement STM, and yes, I am excited about that making Cells viable in a multi-threaded situation. Mind you, I had never heard of STM before this proposal landed on our doorstep, nor do I even have much idea of what is available to applications when it comes to dealing with threads, but looking at how Cells manages data integrity I know it will need help to survive threads. STM looks like a great fix.
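For anyone who, like me, had not met STM before, the abort-and-retry idea looks roughly like this toy (optimistic, versioned variables; a real STM would block until something changes rather than spin, and this is in no way the SoC project's design):

    import threading

    class TVar:
        """A transactional variable: a value plus a version counter."""
        def __init__(self, value):
            self.value = value
            self.version = 0

    _commit_lock = threading.Lock()          # guards commits, not reads

    def atomically(txn):
        """Re-run txn until it commits against an unchanged snapshot."""
        while True:
            reads, writes = {}, {}
            def read(v):
                reads[v] = v.version         # remember the version we saw
                return writes.get(v, v.value)
            def write(v, value):
                writes[v] = value            # tentative, not yet visible
            txn(read, write)
            with _commit_lock:
                if all(v.version == seen for v, seen in reads.items()):
                    for v, value in writes.items():
                        v.value = value
                        v.version += 1       # publish atomically
                    return
            # a TVar we read changed underneath us: abort and retry

    a, b = TVar(100), TVar(0)
    def transfer(read, write):               # atomically move 30 units
        write(a, read(a) - 30)
        write(b, read(b) + 30)
    atomically(transfer)
    print(a.value, b.value)                  # 70 30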
> However, seeing how the Cells paradigm works, it seems to me that it
> should be pretty easy to establish the convention that side-effects
> should be confined to non-rule "observer" code.
Right, it is just a convention, but I think one that gets easier to follow because the engine provides a simple way to say "do this when the time is right".
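In toy form (hypothetical API, much simpler than the real thing), the convention is that the only sanctioned home for a side effect is a callback the engine invokes once the new value has settled:

    # Sketch of the observer convention -- hypothetical, simplified API.
    class Observed:
        def __init__(self, value, observer=None):
            self.value = value
            self.observer = observer     # the ONE place side effects belong
            if observer:
                observer(value, None)    # observers also see initial values

        def set(self, new):
            old, self.value = self.value, new
            if self.observer and new != old:
                self.observer(new, old)  # "do this when the time is right"

    temp = Observed(20, observer=lambda new, old: print(f"display: {new}C"))
    temp.set(25)                         # the print happens here, not in a rule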
> Experience w/ e.g. peak.binding attributes shows that it's rare to
> want to put side-effects into pull-oriented rules.

"We could do it, but it would be wrong."
> Really, the principal downside to Cells is wrapping your head around
> the idea that *everything* should be treated as pull-oriented rules.

Yes, it really is a paradigm shift, one that takes a long time to internalize. What I noticed was that, if I decided to add a significant new mechanism to the system, after about two hours of coding I would be having increasing difficulties and start to get a vague "bad feeling". Then I would realize that I had, from long habit, fallen back into an imperative style. Hence the "bad feeling": because the code was all new, it did not grow naturally from the Cell-based model. If it had, it would of course have been written in the declarative style from the start.
I have encouraged Ryan, the PyCells author, not to allow backdoors to the Cells engine, precisely because of this. The big win comes from the declarative paradigm, and developers will not climb that learning curve if they can avoid it. Simple human nature. Cells makes one think harder up front in return for all sorts of good things later, and that is a tradeoff I have always liked to make as a developer.
> There are some operations (such as receiving a command and responding
> to it) that seem to be more naturally expressed as pushing operations,
> where you make some decisions and then directly update things or send
> other commands out.
Exactly! A spreadsheet is a steady-state thing (here are the values, here is the computed other state), and using Cells to express static reality is a snap. OTOH, imperative code is all about change, so it is great for handling events.
We use ephemeral Cells to model events (they take on a value, propagate, then silently revert to null without propagating the reversion), but one can still end up thinking pretty hard when it comes to events. I think the most frightening "rule" I have written was for a Timer class implemented by the Tcl "after" command.
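In toy form (hypothetical, far simpler than the real thing), an ephemeral cell is an event that exists for exactly one moment:

    # Sketch of an "ephemeral" cell -- hypothetical, simplified API.
    class Ephemeral:
        def __init__(self):
            self.value = None
            self.listeners = []

        def fire(self, value):
            self.value = value
            for listen in self.listeners:  # propagate the event...
                listen(value)
            self.value = None              # ...then revert silently: nobody
                                           # is notified about the reset

    key_pressed = Ephemeral()
    key_pressed.listeners.append(lambda k: print("got key:", k))
    key_pressed.fire("q")
    print(key_pressed.value)               # None -- the event has evaporated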
> Actually, you can still do that, it's just that those updates or
> commands force another "virtual moment" of time into being, where if
> you had made them pull-driven they could've happened in the *same*
> "virtual moment". So, it's more that pull-based rules are slightly
> more efficient than push-based ones, which is nice because that means
> most developers will consider it worth learning how to do it the pull
> way. ;)
That, and the straitjacket I hope PyCells keeps from Cells.
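The two-moments point, rendered as a toy (nothing like the real engine's internals): a pull rule is refreshed inside the very moment that a set() on its input opens, while an observer that pushes a write necessarily opens a second moment.

    # Toy "virtual moments" -- a sketch, not the actual engine.
    class Toy:
        moment = 0
        def __init__(self, value=None, rule=None, observer=None):
            self.value, self.rule, self.observer = value, rule, observer
            self.dependents = []

        def set(self, value):
            Toy.moment += 1
            print(f"-- moment {Toy.moment} begins")
            self.value = value
            for d in self.dependents:    # pull rules run in THIS moment
                d.value = d.rule()
            if self.observer:
                self.observer(value)     # a push here opens moment n+1

    price = Toy(value=10)
    total = Toy(rule=lambda: price.value * 3)
    price.dependents.append(total)
    price.set(20)                        # pull: one moment, total is now 60

    shadow = Toy(value=0)
    pushed = Toy(value=10, observer=lambda v: shadow.set(v * 3))
    pushed.set(20)                       # push: two moments for one change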
> Anyway, there is a *lot* of interesting food for thought, here. For
> example, you could create object validation rules using cells, and the
> results would be automatically recomputed when something they depended
> on changed. Not only that, but it would be possible to do atomic
> updates, such that the validation wouldn't occur until *after* all the
> changes were made -- i.e., no false positives. Of course, you'd get
> the resulting validation errors in the *next* "time quantum", so you'd
> need to make the response to them event-driven as well.

It's definitely a slippery slope. :)
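The no-false-positives point in toy form (not anyone's real API): validation is just derived state, refreshed only after a whole batch of changes has landed.

    # Sketch: validation as derived state, recomputed after atomic updates.
    class Record:
        def __init__(self, low, high):
            self.low, self.high = low, high
            self.errors = self._validate()

        def _validate(self):
            return [] if self.low <= self.high else ["low exceeds high"]

        def update(self, **changes):
            for name, value in changes.items():
                setattr(self, name, value)   # apply the WHOLE batch first
            self.errors = self._validate()   # only then re-validate

    r = Record(low=1, high=10)
    r.update(low=20, high=30)   # low=20 alone would look invalid mid-update
    print(r.errors)             # [] -- checked only after both changes land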
> For example, this deterministic model of computation seems to resemble
> "object prevalence" (e.g. Prevayler) in that everything (even the
> clock) is deterministic, changes are atomic, and I/O occurs between
> logical moments of time. I haven't thought this particular link
> through very much yet, it's just an intriguing similarity.

Nice call. I have heard the Cells data integrity model maps nicely onto the transaction model of AllegroCache, a persistent Lisp object database.
> The head-exploding part is figuring out how to get errors to propagate
> backwards in time, so that validation rules (which run in the "next
> moment") could appear to cause an error at the point where the values
> were set.

Sounds like you want at least one Undo. What about a "fail now or forever hold your peace" policy?

cheers, kenny