[elephant-devel] Optimization
Ian Eslick
eslick at media.mit.edu
Mon Dec 29 22:46:57 UTC 2008
I checked in the cache-checkout model to elephant-unstable, along with some
new tests.
The cache-checkout model is pretty simple, although there are a number of
ways to shoot yourself in the foot if you don't use it wisely. I haven't
given this a serious review for concurrency issues; a second pair of eyes
is always appreciated. (src/elephant/cached-slots.lisp contains the key
changes.) The main points, with a usage sketch after the list:
- On checkout, the cached slots of the checked-out object behave like
  those of a standard built-in object, though with MOP overhead on
  accesses (see below)
- Write operations are also cached
- The persistent-checkin operation writes the current cached slot state
  to the DB
- You can sync a currently checked-out object to make its state
  persistent
- Checkout can be cancelled
- All checkout operations are transactional (i.e., only one can succeed),
  and the persistent-checked-out-p predicate is also transactional, but
  the transaction must terminate prior to any cached ops occurring
- The user is responsible for any multi-process locking to ensure
  isolation of checkout, subsequent cached ops, and checkin or sync
  steps
With some quick tweaks and a slightly different test, we get somewhat
better performance results than reported earlier (similar to the results
I sent in response to Elliott):
Standard: 25M/sec
Cached: 7M/sec
Persistent (within a transaction): 140k/sec **
(** Improvements to persistent read performance are not yet in the
repository.)
Cached slot access is now only about 4x slower than standard slot access
(25M vs. 7M), and persistent slot access is about 50x slower than cached
slot access (7M vs. 140k).
Enjoy!
Ian
On Dec 28, 2008, at 5:09 PM, Ian Eslick wrote:
> I implemented a quick check-in/check-out protocol for persistent
> objects. I have to think a little more about concurrency as I'm sure
> there are some holes. I implemented a simple test loop which
> increments a local variable by the value in an instance slot.
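> A minimal sketch of that test loop (the class, slot, and accessor names
> here are arbitrary, just for illustration):
>
>   ;; A persistent class with a single slot.
>   (defpclass bench ()
>     ((val :accessor bench-val :initarg :val :initform 1)))
>
>   (defun run-bench (obj n)
>     ;; Increment a local variable by the slot value N times.
>     (let ((total 0))
>       (dotimes (i n total)
>         (incf total (bench-val obj)))))
>
>   ;; The same loop is run against a standard-object, a persistent
>   ;; object inside a transaction, and a checked-out (cached) object.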
>
> A standard-object slot allows about 50M updates/sec
> A persistent-object slot (inside a txn) is about 60k updates/sec
> A cached persistent-object slot is about 3M updates/sec
>
> The slot access protocol adds about 10x overhead per slot read but the
> cached access is about 50x faster than the pure persistent query.
> I'll see if there are some easy opportunities to speed things up.
>
> Ian
>
> On Dec 28, 2008, at 2:05 AM, Elliott Slaughter wrote:
>
>> There are several replies to my original query, so I will attempt to
>> address all of them here.
>>
>> On Wed, Dec 24, 2008 at 2:31 PM, Ian Eslick <eslick at media.mit.edu>
>> wrote:
>> A couple of quick thoughts on your problem:
>>
>> 1) Are you wrapping the critical sections of your code in
>> with-transaction? This causes all database pages you touch to be
>> cached within the body of the transaction. This avoids all 'sync'
>> operations and transaction setup/teardown caused by a read/write slot
>> operation that takes place outside a with-transaction body.
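>>
>> A minimal sketch of what I mean (update-object and the object list are
>> placeholders for your frame-update code; this assumes the default
>> *store-controller*):
>>
>>   (defun update-frame (objects)
>>     ;; One transaction per frame: every page touched in the body is
>>     ;; cached for the duration of the transaction, and there is a
>>     ;; single setup/commit instead of one per slot operation.
>>     (with-transaction (:store-controller *store-controller*)
>>       (dolist (obj objects)
>>         (update-object obj))))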
>>
>> I currently wrap the contents of each frame update in
>> with-transaction. Each frame consists of updating 100 objects on the
>> screen (each of which by my estimation is about 20 slot reads and 2
>> writes), so I ought to be doing about 2000 slot reads and 200 writes
>> per frame.
>>
>> Wrapping the entire game loop in with-transaction increases
>> performance by about 10%. I also tried
>>
>> (db-env-set-flags (controller-environment *store-controller*)
>>                   1 :txn-nosync t)
>>
>> as suggested in the manual, with a similar (about 10%) performance
>> increase.
>>
>> 2) If you are doing this, have you increased your BDB cache size to
>> ensure you can cache your key working set?
>>
>> It should already be large enough, but just for kicks I increased it
>> from 2 to 10 MB, resulting in a performance increase of about 3%.
>>
>> 3) If yes, do you have significant contention between reads and
>> writes for certain data structures that is causing transactions to be
>> aborted? Perhaps you can refactor around this.
>>
>> This is a single-threaded (and single-process) app. I don't see how
>> I could possibly have contention for the db.
>>
>> Ian
>>
>> And thanks to both Leslie and Alex for suggesting caching methods. I
>> will probably try to implement something along the lines of Alex's
>> suggestion, as that makes sense and should be simple enough (for the
>> single-threaded model, which is sufficient for me).
>>
>>
>> Thanks again for all the help. I'll report back either when I have
>> something working or when I've run into another wall.
>>
>> --
>> Elliott Slaughter
>>
>> "Any road followed precisely to its end leads precisely nowhere." -
>> Frank Herbert