From attila.lendvai at gmail.com Mon Aug 18 21:06:39 2008
From: attila.lendvai at gmail.com (Attila Lendvai)
Date: Mon, 18 Aug 2008 23:06:39 +0200
Subject: [closer-devel] slots on layers
Message-ID:

dear list,

from time to time we return to an important use-case that can't (?) be done using contextl currently. what we need is slots on the layer prototype that keep the expected semantics when layers are restored using (current-layer-context) and (funcall-with-layer-context ...).

the problem is that if a non-special slot is defined for a layer, then it's shared between all threads (in the slot of the single cached prototype). but :special t slots won't work as expected either, because they store their values in special bindings which are lost when going through c-l-c/f-w-l-c.

our real-world use-case is this: we have a bunch of factory methods that build up a gui component hierarchy. this algorithm is driven by the metadata of the data model of the application. this metadata has entities, typed properties, associations, etc, so if you define a data model then in return you get a 90% gui that can be used to navigate and edit the data.

to fill in the remaining 10% for each project, we need to customize this algorithm (a bunch of layered methods), which is achieved using layers. it serves well most of the time, but often at the entry points of this algorithm we need to store some lexically available information in the layer instance to make it accessible deep inside the recursive algorithm where the customized methods of the layer get called.

the problem: the gui built by this algorithm is huge/infinite in various directions, so it's built lazily as the data graph is navigated. when the user clicks something in the browser, the server potentially needs to extend the component graph by invoking some closures. we restore the layer context when those closures are called, but the slots of the layers are not restored.
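to make the shape of the problem concrete, here is a minimal sketch of the capture/restore pattern we use around those delayed closures (make-delayed-child is a made-up name for the example, and i'm assuming funcall-with-layer-context takes the captured context plus the function to call in it):

(defun make-delayed-child (thunk)
  (let ((context (current-layer-context)))   ; captured at factory time
    (lambda ()
      ;; forced later, possibly several browser requests later: the layer
      ;; combination is reinstated, but the values that were sitting in the
      ;; layers' slots at capture time are not
      (funcall-with-layer-context context thunk))))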
the current contextl implementation tries hard to speed up layer activation. for us a much simpler implementation would suffice: the layer instance would not only be a prototype but a full instance which is instantiated at each layer activation/deactivation, with the slot values copied over from the parent layer instance. the instance could be directly rebound in the *layer* variable. also note that this alternative implementation could fall back to using prototypes when the layer has no slots (which probably covers most of the deployed usages). users, speak up! :)

we understand that this can be much slower when it comes to changing the current effective layer, but we are not sure if this really counts when it comes to the overall application performance. we don't know how others are using contextl, but in our usages the number of changes to the active layers is negligible compared to the rest of the runtime.

any thoughts?

--
attila

PS: a simple example:

CONTEXTL> (deflayer foo2 ()
            ((slot :initform 42 :accessor slot-of :initarg :slot :special t)))
CONTEXTL> (mapcar #'slot-of
                  (with-active-layers ((foo2 :slot 2))
                    (list (layer-context-prototype (current-layer-context))
                          (with-active-layers ((foo2 :slot 3))
                            (layer-context-prototype (current-layer-context))))))

it returns (42 42) while we expect to get (2 3) here. this effectively means that we can't reinstate layer contexts that have slots.

From pc at p-cos.net Wed Aug 20 10:38:27 2008
From: pc at p-cos.net (Pascal Costanza)
Date: Wed, 20 Aug 2008 12:38:27 +0200
Subject: [closer-devel] slots on layers
In-Reply-To:
References:
Message-ID:

Hi Attila,

Thanks a lot for your question and detailed explanation of your use case. Such feedback is very helpful in understanding the limitations of ContextL.

I have a question, see a problem with your suggested approach, and think there is already a way in ContextL to achieve what you want. However, I am not 100% sure, so please don't consider this the final word in this regard, but rather a starting point for continuing the discussion.

The question: I understand that you want to be able to reuse the state of layer-specific slots from deactivated layers when you activate such layers again. However, I don't quite understand how you expect to activate that specific layer instance. Do you expect to be able to inquire about which layers are currently active, and then pick out the one you're interested in? The problem here is that in the general case, the one layer you're interested in is somewhere in the list of active layers, and not necessarily at the start or the end of the list. So what criteria do you want to use to pick out the one specific layer you're interested in? Or do you just want to refer to a layer name? (This may seem the obvious way, but it actually makes it harder for me to understand what you really want/need, which is why I'm asking the question...)

The problem (probably related to the question): When you pick out the class prototype of the currently active layer combination (!), it is an instance of an automatically generated class that has all the active layers as superclasses. So, for example, if layers l1, l2 and l3 are active in that order, the class prototype is an instance of a class %l123 that has l1, l2 and l3 as superclasses. Now, assume that you are interested in the layer-specific state of l3, so you want to pick out an instance that represents that state. It would have to be an instance of the class %l123 as well, in order to ensure that context-oriented dispatch works correctly. However, it could be that in a new situation where you want to activate l3 again, a different set of other layers is currently active, say l2 and l4. The eventual class prototype that represents the set of active layers after reactivation of l3 should now be an instance of a class %l243, so a class different from %l123. This automatically means that you _cannot_ reuse that old instance of %l123. So in other words, reusing the same class prototype from a previous activation state is _not_ a good idea (unless you perform a change-class on that instance, but I have strong reservations with regard to my willingness to deal with the complexities that would arise from this ;)).
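Just to make that per-combination class visible at the REPL (an illustration only; the classes ContextL actually generates are internal and will not literally be named %L123):

(deflayer l1) (deflayer l2) (deflayer l3)

(with-active-layers (l1 l2 l3)
  (class-of (layer-context-prototype (current-layer-context))))
;; => the automatically generated class for this particular combination,
;;    with (representations of) l1, l2 and l3 among its superclasses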
An alternative solution: Why don't you just use layer inheritance? You can then use layer-specific slots (whether they are :special or not) in sublayers of the respective layers you are interested in. As an example:

(deflayer foo3)

(deflayer sub1foo3 (foo3)
  ((some-slot :initform 2)))

(deflayer sub2foo3 (foo3)
  ((some-slot :initform 3)))

...etc., for as many sublayers as you want. The idea here is that you use layers as prototypical objects (in the sense of languages like Self or JavaScript) which can directly inherit from each other. You then "just" have to ensure that you activate the correct sublayers for the various contexts you're interested in. It should actually also be possible to use anonymous layers which are generated at runtime (although that part of the ContextL API has not seen extensive testing in real-world use so far, to the best of my knowledge).
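A rough, untested sketch of how the sublayers above could then be used; reading the slot via slot-value on the prototype is only for illustration, real code would typically go through accessors or layered functions:

(with-active-layers (sub1foo3)
  (slot-value (layer-context-prototype (current-layer-context))
              'some-slot))
;; presumably => 2, because the automatically generated class of the
;; prototype inherits some-slot (and its initform) from sub1foo3

(with-active-layers (sub2foo3)
  (slot-value (layer-context-prototype (current-layer-context))
              'some-slot))
;; presumably => 3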
Would that get you closer (ha!) to a working solution for your problem?

Best,
Pascal

P.S.: I don't want to avoid changes to ContextL, it's just that I want to make sure that changes to ContextL fit well with the rest of its design, which is why I'm still hesitating to adopt your suggested solution. Again, thanks a lot for your feedback, it is very valuable!

On 18 Aug 2008, at 23:06, Attila Lendvai wrote:

> [...]

--
Pascal Costanza, mailto:pc at p-cos.net, http://p-cos.net
Vrije Universiteit Brussel, Programming Technology Lab
Pleinlaan 2, B-1050 Brussel, Belgium

From attila.lendvai at gmail.com Wed Aug 20 13:09:55 2008
From: attila.lendvai at gmail.com (Attila Lendvai)
Date: Wed, 20 Aug 2008 15:09:55 +0200
Subject: [closer-devel] slots on layers
In-Reply-To:
References:
Message-ID:

> Hi Attila,

hello Pascal,

> Thanks a lot for your question and detailed explanation of your use case.
> Such feedback is very helpful in understanding the limitations of ContextL.

my pleasure!

> The question: I understand that you want to be able to reuse the state of
> layer-specific slots from deactivated layers when you activate such layers
> again. However, I don't quite understand how you expect to activate that
> specific layer instance. Do you expect to be able to inquire about which
> layers are currently active, and then pick out the one you're interested in?

what you wrote in your mail is consistent with my understanding of contextl, so i think we have a misunderstanding here. what i want to reactivate later is the instance made from the class representing the currently active layer combination (and its slots), not just a single layer "extracted" from it. i think dealing with specific layers in layer context restoration would be against the whole idea of contextl.

some more background info: in our setup there are several different layers.
some are simple customizations, like passive-components-layer: it tells the gui algorithm to forget about actions below this component; think of rendering components into transient tooltips - there's no point in rendering actions that the user can't click anyway...

some other layers are very specific customizations, specific to a certain gui form _instance_! e.g. some action instantiates a metadata-driven finder component. but this finder will be used for selecting data instances, so we also want to customize this finder to add a "select this instance" action to the result of the lister component that it will instantiate when showing the result of its search. obviously the added "select this instance" action needs to know where it belongs, but this action won't even be created until some browser requests later: the layered method adding this extra action to the search result list is only called after the user clicks the "search" action on the finder.

this gui algorithm is recursive, so you need to "attach" this customization to the specific finder component instance. if that is not done, or if the layer's slot is shared, and the user somehow navigates to a different instance of the finder instantiated deeper in the component recursion, then this customization will screw up the workings of the two finder instances.

[i'm trying hard to make these examples clear, but unfortunately i'm aware of the deficiencies... keep in mind that this is a recursive meta-gui that can display/edit a random data model in a browser, so there are some inherent complexities...]

> class different from %l123. This automatically means that you _cannot_ reuse
> that old instance of %l123. So in other words, reusing the same class
> prototype from a previous activation state is _not_ a good idea (unless you
> perform a change-class on that instance, but I have strong reservations with
> regard to my willingness to deal with the complexities that would arise from
> this ;)).

it may not be a good idea in certain situations, but in the factory part of our lazy gui building algorithm this is just what we need: you start out with a component representing/displaying a single instance of your data graph. when creating the start component, certain layers are enabled to customize the view that starts out from here. the current layer instance (representing the full combination of the current layer setup) is stored wherever laziness stops the further creation of child components, using a lambda (this is the usual delay/force laziness implemented with lambdas). when the user later invokes an action that needs to extend the component graph (by force-ing some delay-ed lambdas), we restore the factory-time layer setup before calling the delayed lambda. e.g. the user clicks a link in the browser which represents the remote instance of an association; when that happens, the link is replaced with a detailed view of the target of the association (whatever the target is... this is a component recursion point).

as for change-class: in the implementation we proposed in the previous mail, when the layer setup is changed you need to _clone_ and rebind the layer prototype instance, so i'd simply write a special clone-layer-instance instead of using change-class. of course this interferes with certain shared-initialize customizations, but the class defined by deflayer is private anyway.
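just to show the rough shape of what i mean, a quick and completely untested sketch (the function name is made up, and only plain closer-mop is used):

(defun clone-layer-instance (old-instance new-class)
  ;; fresh instance of the class that represents the new layer
  ;; combination, with the values of the shared slots carried over from
  ;; the parent instance
  (let ((new-instance (make-instance new-class)))
    (dolist (slot (closer-mop:class-slots (class-of old-instance)))
      (let ((name (closer-mop:slot-definition-name slot)))
        (when (and (slot-exists-p new-instance name)
                   (slot-boundp old-instance name))
          (setf (slot-value new-instance name)
                (slot-value old-instance name)))))
    new-instance))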
> An alternative solution: Why don't you just use layer inheritance?

in short: because we need to remember, in layered methods, data that was only available in the lexical scope of with-active-layer (several browser requests earlier). this implies that anonymous layers are not good for this, because a new layer would need to be created for each run of the code block.

> P.S.: I don't want to avoid changes to ContextL, it's just that I want to
> make sure that changes to ContextL fit well with the rest of its design,
> which is why I'm still hesitating to adopt your suggested solution.

no worries, you didn't sound like that at all!

> Again, thanks a lot for your feedback, it is very valuable!

a simpler but very similar example: think of a recursive graph traversal algorithm that can be customized using contextl and can stop and resume the traversal at various points. say it needs to traverse potentially infinite graphs in the background, and at various nodes it needs the user's feedback to make decisions that affect the _rest_ of the traversal, but only on the rest of _that_ path.

so it needs to be able to stop and resume the traversal at multiple nodes at the same time. whenever it stops, it needs a way to capture the current dynamic context (including the layer instance and its slots), so that it can be restored when the user comes back and answers the question that made the traversal halt on this path.

if you add one more requirement, that some of the customizations are parametrized, then you get into a situation that is similar to ours.

hth,

--
attila

From pc at p-cos.net Sun Aug 24 11:58:14 2008
From: pc at p-cos.net (Pascal Costanza)
Date: Sun, 24 Aug 2008 13:58:14 +0200
Subject: [closer-devel] slots on layers
In-Reply-To:
References:
Message-ID: <6777FEE5-5846-4B94-9772-9865F03F21BF@p-cos.net>

On 20 Aug 2008, at 15:09, Attila Lendvai wrote:

> a simpler but very similar example: think of a recursive graph
> traversal algorithm that can be customized using contextl and can stop
> and resume the traversal at various points. say it needs to traverse
> potentially infinite graphs in the background, and at various nodes it
> needs the user's feedback to make decisions that affect the _rest_ of
> the traversal, but only on the rest of _that_ path.
>
> so it needs to be able to stop and resume the traversal at multiple
> nodes at the same time. whenever it stops, it needs a way to capture
> the current dynamic context (including the layer instance and its
> slots), so that it can be restored when the user comes back and answers
> the question that made the traversal halt on this path.

OK, let me try to hook into this example, because it seems simpler than the other descriptions. Maybe we can bootstrap a solution from here.

What you describe seems to be a classic scenario for using first-class continuations. What I already have on my todo list for ContextL is support for first-class dynamic closures. The idea would be that you can say something like (capture-dynamic-bindings), which gives you a first-class representation of all special variables, which you can later reinstate by something like (with-reinstated-dynamic-bindings ...). Such a dynamic closure facility would capture the current representation of active layers (because that's in a special variable), as well as, for example, special slots both in the layers and in other objects.

(It won't be as general as that, I will have to require registering the special variables that you want to see captured, but I think I can make this relatively non-intrusive.)
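Roughly along these lines (only a sketch of the intended shape, not the actual design; the registry and the exact interfaces are made up here):

(defvar *registered-dynamic-variables* '())

(defun register-dynamic-variable (name)
  (pushnew name *registered-dynamic-variables*))

(defun capture-dynamic-bindings ()
  ;; snapshot the current values of the registered special variables
  (loop for name in *registered-dynamic-variables*
        when (boundp name)
          collect (cons name (symbol-value name))))

(defmacro with-reinstated-dynamic-bindings (bindings &body body)
  ;; re-establish the captured values as fresh dynamic bindings
  (let ((b (gensym "BINDINGS")))
    `(let ((,b ,bindings))
       (progv (mapcar #'car ,b) (mapcar #'cdr ,b)
         ,@body))))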
Would that solve your problem? I already know how to implement this in a portable way, and it shouldn't affect the essential performance of ContextL. I just need a couple of days of free time to do this...

What's your opinion? Am I still off from what you need?

Pascal

--
Pascal Costanza, mailto:pc at p-cos.net, http://p-cos.net
Vrije Universiteit Brussel, Programming Technology Lab
Pleinlaan 2, B-1050 Brussel, Belgium

From attila.lendvai at gmail.com Sat Aug 30 22:44:52 2008
From: attila.lendvai at gmail.com (Attila Lendvai)
Date: Sun, 31 Aug 2008 00:44:52 +0200
Subject: [closer-devel] slots on layers
In-Reply-To: <6777FEE5-5846-4B94-9772-9865F03F21BF@p-cos.net>
References: <6777FEE5-5846-4B94-9772-9865F03F21BF@p-cos.net>
Message-ID:

> What you describe seems to be a classic scenario for using first-class
> continuations. What I already have on my todo list for ContextL is support

to be more precise, what we need is only one-shot continuations, which can simply be done with a lambda plus some macrology, without much overhead.

> for first-class dynamic closures. The idea would be that you can say
> something like (capture-dynamic-bindings), which gives you a first-class
> representation of all special variables, which you can later reinstate by
> something like (with-reinstated-dynamic-bindings ...). Such a dynamic
> closure facility would capture the current representation of active layers
> (because that's in a special variable), as well as, for example, special
> slots both in the layers and in other objects.
>
> (It won't be as general as that, I will have to require registering the
> special variables that you want to see captured, but I think I can make this
> relatively non-intrusive.)
>
> Would that solve your problem?

i think it would, but i still don't see why it is worth it. what you describe seems to be a lot more complex than if we cloned a new instance at each change to the layer context. AFAICS, from the user's POV the two solutions would be equivalent (apart from the different performance characteristics), but the make-instance/copy-slots implementation seems much simpler, with potentially fewer surprises.

as for performance, these layer prototype instances would have standard slots that are much faster to access, while layer activation would be slower, but that's rare (at least in our use-cases). in fact the entire contextl-related performance impact on our application is probably negligible, and probably most of it comes from the extra dispatch parameter.

i don't want to say that first-class dynamic closures are not useful in general, but in this situation it feels like too much machinery. we already have a more fine-grained protocol to restore the dynamic environment when, for example, a partial ajax rendering happens somewhere inside the component hierarchy (components can specialize the call-in-component-environment method, which is called on the parent path of the component, one by one). most, but not all, of its usage is restoring dynamic variable bindings that could be covered by capture-dynamic-bindings, but the protocol is more flexible than that. it costs us a dozen extra method calls, but our main priority is code maintainability/flexibility/reuse, not speed.

i'm sorry if i sounded negative, but i don't have a fine enough english for this. and i'm just a user anyway, so take it all with a grain of salt... :) i may not even see the whole picture or an important detail.

--
attila