[rucksack-devel] Conflicting Transactions: New overwrite old?

Jochen Schmidt js at crispylogics.com
Wed Jul 15 15:19:02 UTC 2009


Hi Arthur,

On 15.07.2009, at 16:21, Arthur Lemmens wrote:

>
>> The interesting thing is that CACHE-TOUCH-OBJECT also checks if there
>> is a conflict with another active transaction about this object. If
>> there is an older transaction which also modified the object, a
>> TRANSACTION-CONFLICT is signalled. So far so good - so the newer
>> transaction should handle it by retrying later. The problem is that
>> the newer transaction already MODIFIED the in-memory object that is
>> held by the older transaction.
>
> Yes, I think you're right.  And I agree that this is a problem.

I'm using a modified variant of rucksack here in which I added a GF,
CACHE-PREPARE-MODIFICATION.
This GF is called in every place where rucksack tries to modify objects:

1) Setting slots of instances of RS:PERSISTENT-CLASS
2) RS:P-REPLACE
3) RS::PERSISTENT-DATA-WRITE
4) SLOT-MAKUNBOUND on instances of RS:PERSISTENT-CLASS
5) UPDATE-INSTANCE-FOR-REDEFINED-CLASS

CACHE-PREPARE-MODIFICATION currently checks for conflicts and signals  
an error if there is one.
I removed the conflict check from CACHE-TOUCH-OBJECT.
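To make the idea concrete, here is a minimal Python sketch of a
conflict check that runs before any modification happens. All names
(Cache, prepare_modification, touched_by) are hypothetical and not
rucksack's actual API; it only illustrates the shape of the hook:

```python
import threading

class TransactionConflict(Exception):
    """Signalled when another open transaction already modified the object."""

class Cache:
    """Sketch of a cache whose conflict check runs *before* a write,
    analogous to CACHE-PREPARE-MODIFICATION (names are hypothetical)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._touched_by = {}  # object id -> transaction that modified it

    def prepare_modification(self, transaction, obj_id):
        # Called from every code path that is about to modify a
        # persistent object, *before* the in-memory object is changed.
        with self._lock:
            other = self._touched_by.get(obj_id)
            if other is not None and other != transaction:
                raise TransactionConflict(
                    f"object {obj_id} already modified by {other}")
            self._touched_by[obj_id] = transaction

    def commit(self, transaction):
        # Forget this transaction's modifications on commit/rollback.
        with self._lock:
            self._touched_by = {k: v for k, v in self._touched_by.items()
                                if v != transaction}
```

Because the check happens before the write, the conflicting
transaction is stopped while the older transaction's in-memory
object is still unmodified.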

I then implemented a new rucksack subclass which uses a shared-reader/
exclusive-writer transaction scheme.
This is currently done at the transaction level - one decides whether
one wants an "exclusive transaction" or a "shared transaction". I use
my own implementation of shared locks, so that an exclusive transaction
will wait for all running shared transactions to end, while shared
transactions can always run alongside each other. Other exclusive and
shared transactions will of course wait for the one running exclusive
transaction. This actually seems to work quite well in one of my web
application servers, which creates one shared transaction per request
and thereby enables parallel request processing.
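The shared/exclusive scheme above can be sketched as a classic
readers-writer lock. This is a minimal Python illustration of the
locking discipline, not the implementation I actually use:

```python
import threading

class SharedExclusiveLock:
    """Shared transactions run concurrently; an exclusive transaction
    waits until all shared ones have ended and blocks new transactions
    of either kind while it runs."""

    def __init__(self):
        self._cond = threading.Condition()
        self._shared = 0        # number of active shared transactions
        self._exclusive = False  # is an exclusive transaction running?

    def acquire_shared(self):
        with self._cond:
            while self._exclusive:
                self._cond.wait()
            self._shared += 1

    def release_shared(self):
        with self._cond:
            self._shared -= 1
            if self._shared == 0:
                self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            while self._exclusive or self._shared > 0:
                self._cond.wait()
            self._exclusive = True

    def release_exclusive(self):
        with self._cond:
            self._exclusive = False
            self._cond.notify_all()
```

In the web-server case, each request would wrap its work in
acquire_shared/release_shared, and only administrative writes would
take the exclusive side.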

>
>> I see two solutions to this problem:
>>
>> 1) Do not reuse uncommited objects
>> Either by using cloned objects for modification or by always loading
>> fresh objects from disk before modification.
>
> I'm not sure I understand this.  If we take the following conflicting
> transaction scenario (the famous bank account from which two
> transactions are subtracting money at the same time), what exactly
> would you propose to make sure that the final value of A is 0 and not
> 100?
>
> T1:  [1] load A from disk (A1 = 200)
> T2:  [2] load A from disk (A2 = 200)
> T1:  [3] decrement A by 100 (A1 = 100)
> T2:  [4] decrement A by 100 (A2 = 100)
> T1:  [5] save A (= A1 = 100)
> T2:  [6] save A (= A2 = 100)
>
> Would you load a fresh A from disk at step 4, just before modifying
> it?  But that wouldn't solve the problem, would it?  Or am I missing
> something?

Yes, that doesn't work (loading a fresh copy from disk).

The cloned object idea would be more like:

T1: [1] load A from disk (A1 = 200)
T2: [2] load A from cache (A1 = 200)
T1: [3] decrement A by 100 (A1 = 100)
T2: [4] decrement A by 100 -> clone A1 first -> (A2 = 0)
T1: [5] save A (= A1 = 100)
T2: [6] save A (= A2 = 0)
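The clone idea in the scenario above, spelled out as a small Python
sketch (the Account class and the ordering are assumptions for the
illustration): T2 copies T1's in-memory state (A1 = 100) just before
its own modification, so the final value is 0 rather than 100.

```python
import copy

class Account:
    def __init__(self, balance):
        self.balance = balance

# T1: [1] load A from disk
a1 = Account(200)

# T1: [3] decrement A by 100
a1.balance -= 100           # A1 = 100

# T2: [4] clone A1 *before* modifying, then decrement the clone
a2 = copy.copy(a1)          # A2 starts from T1's in-memory state (100)
a2.balance -= 100           # A2 = 0

# T1: [5] saves A1 = 100; T2: [6] saves A2 = 0 -> final value is 0
```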

I haven't looked at this in detail - another critical problem is that
it isn't really possible to clone objects before modification in the
current rucksack design. Think of setting slots of a persistent
object. The modification is recognized when SLOT-VALUE-USING-CLASS is
called, but you cannot change the instance that the
SLOT-VALUE-USING-CLASS call holds. The only thing I could think of
would be to always use wrapper objects for persistent objects,
containing not much more than the id. All slot accesses to the wrapper
object would then get routed to a transaction-local (isolated) wrapped
object. It's actually similar to rucksack's proxies, but one would
always reference the proxies and never dereference them. Dispatch on
persistent objects would work on the wrapper classes.
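A rough Python sketch of that wrapper idea (all class and method names
are hypothetical; rucksack's real proxies work differently): the
wrapper holds little more than the object id, and every slot access is
routed to a transaction-local copy of the wrapped object.

```python
import copy

class Transaction:
    """Keeps isolated, transaction-local copies keyed by object id."""

    def __init__(self, store):
        self._store = store   # obj_id -> committed object
        self._local = {}      # obj_id -> this transaction's copy

    def local_copy(self, obj_id):
        if obj_id not in self._local:
            self._local[obj_id] = copy.copy(self._store[obj_id])
        return self._local[obj_id]

class PersistentWrapper:
    """Holds only the id and the transaction; all slot accesses are
    forwarded to the transaction-local wrapped object."""

    def __init__(self, obj_id, transaction):
        object.__setattr__(self, "_obj_id", obj_id)
        object.__setattr__(self, "_transaction", transaction)

    def _target(self):
        return self._transaction.local_copy(self._obj_id)

    def __getattr__(self, name):
        # Only called for names not found on the wrapper itself.
        return getattr(self._target(), name)

    def __setattr__(self, name, value):
        setattr(self._target(), name, value)
```

Two transactions then see isolated copies of the same persistent
object, while the committed state stays untouched until commit.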

>
>
>> 2) Do the FIND-CONFLICTING-TRANSACTION check before any modification
>> occurs.
>
> Yes, this sounds like the best solution to me.  In terms of the above
> scenario, that means that we signal an error before step 4 actually
> happens.

Or clone a new transaction-local object for the transaction. Of course
this would only work when keeping a dependency journal between
transactions: if T1 in the above example were rolled back, one could
not commit T2, and if T1 is not yet committed, T2 would have to wait
for it. I think this would actually involve all that two-phase-commit
protocol machinery, including the need for a lock manager that
resolves deadlocks.

ciao,
Jochen



-- 
Jochen Schmidt
CRISPYLOGICS
Uhlandstr. 9, 90408 Nuremberg

Fon +49 (0)911 517 999 82
Fax +49 (0)911 517 999 83

mailto:(format nil "~(~36r@~36r.~36r~)" 870180 1680085828711918828  
16438) http://www.crispylogics.com




