[elephant-devel] Berkeley DB error: Cannot allocate memory.
Marc
klists at saphor.de
Wed Sep 24 11:27:52 UTC 2008
Ian Eslick wrote:
>> but not for larger logical transactions where they'd
>> really be called for. Raising the limits at best slightly lessens the
>> pain, but is not really a solution.
>
>
> If raising the limits isn't a solution, then your application is
> probably mismatched with Elephant/BDB. There might be some artificial
> reasons you are seeing these problems.
>
> 1) If you have a large DB with deep indices and highly concurrent
> transactions, then you may start hitting lock limits as described
> above. You should be able to set these limits high enough that you'll
> never hit problems. Fortunately, increasing locks is a good solution.
> max-concurrent-sessions * max-objects-per-session * 2 + 20% is a good
> number for max locks. You can use db_stat to see what kind of
> locks-per-session numbers you're seeing and an analysis of your
> application should tell you the max-concurrent-sessions.
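just to make that rule of thumb concrete with made-up numbers: 8
concurrent sessions touching at most 1000 objects each would call for
roughly

  (round (* 8 1000 2 1.2))   ; => 19200 locks

and db_stat -c -h <env dir> reports the lock counts actually in use, so
the guess can be checked against reality.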
the main problem for us, though, is that we automatically import data
files and data fragments, and a single one of them can easily result in
the creation of thousands or even tens of thousands of heavily
cross-referencing objects, each with a number of index entries
associated with it. The complexity of an import is not known in advance
and can vary drastically from import to import.
None of the objects is big in itself, but they easily add up to tens of
thousands of inserts, updates or deletes in one logical transaction
(concurrency is less of an issue for our imports, and we could easily
live with blocking the DB during the imports).
Since transactions of this size don't work with BDB as a backend, we
currently do not use a transaction for the import as a whole. We do use
them for the import of single entries, each consisting of a few dozen or
so objects, to ensure consistency within an entry and to speed up
insertions.
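in code, the import loop currently has roughly this shape
(PARSE-ENTRIES and IMPORT-ENTRY stand in for our own functions; they are
not Elephant API):

  (defun import-file (path)
    ;; no transaction around the import as a whole ...
    (dolist (entry (parse-entries path))
      ;; ... only around each entry (a few dozen objects each)
      (ele:with-transaction ()
        (import-entry entry))))

which keeps each individual transaction small enough for BDB, but also
means that a failure halfway through leaves the earlier entries
committed.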
>
> Max concurrent transactions is upper-bounded by your machine's
> processing capacity, and the max # of locks is proportional to the
> logarithm of the number of objects, which grows very slowly, so both
> can be given practical upper bounds.
naive question, but why is the # of locks proportional to the logarithm
of the number of objects to be inserted? I'll probably have to check how
Elephant uses locks in its BTrees.
>
> (Set lock sizes in config.sexp - :berkeley-db-max-locks &
> :berkeley-db-max-objects)
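for reference, these end up in the config.sexp alist; something along
these lines, with purely illustrative numbers and the library-path
entries of the file omitted:

  ((:berkeley-db-max-locks   . 100000)
   (:berkeley-db-max-objects . 100000))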
>
>
> 2) If you really are blowing out the BDB data cache in normal
> operation (vs. bulk loads) then your application is either not using
> transactions efficiently or is highly mismatched to any transactional
> DB whether that be BDB or some future lisp implementation.
the problem is that for our application the "bulk load" is a perfectly
normal and recurrent transaction.
>
> If a logical transaction results in writing or modifying 10's of
> megabytes of data, then no transactional DB will ever work for you.
> This is just physics. To get ACID properties you have to be able to
> stage all side effects in memory to ensure that they are written as an
> atomic unit. If you don't have enough memory to perform this staging,
> then you can't implement a transaction.
I'm a bit surprised by this. Most relational databases use journalling
and/or rollback files to ensure that they can commit or roll back large
transactions as one atomic unit. The procedure is quite neatly described
in http://www.sqlite.org/atomiccommit.html and
http://www.sqlite.org/tempfiles.html, and AFAIK it works more or less
the same way in other RDBMSs.
I've just tried it out with SQLite, which quite happily handles a
transaction of 10 million inserts as an atomic unit, both for commit and
for rollback; admittedly at the price of an extended read/write lock on
the DB and, of course, not in one physical write operation. Main memory
hardly enters the equation there.
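for anyone who wants to reproduce the test from Lisp, a sketch along
these lines should do (this assumes the cl-sqlite bindings; table and
file names are arbitrary):

  (sqlite:with-open-database (db "/tmp/bulk-test.db")
    (sqlite:execute-non-query
     db "create table t (id integer, payload text)")
    (sqlite:with-transaction db
      (dotimes (i 10000000)
        (sqlite:execute-non-query
         db "insert into t values (?, ?)" i "x"))))
  ;; signalling an error inside WITH-TRANSACTION instead rolls the
  ;; whole batch back just as happily.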
Without journalling, the various read-isolation levels would quite
probably be even more challenging to implement with reasonable
efficiency than they are now.
>
>
> The following may be obvious, but it's easy to get complacent and
> forget that Elephant is backed up by a transactional DB and that you
> can't really ignore those implications for real-world applications.
>
> - The concept behind the transaction is to combine small DB operations
> into a single atomic chunk. Sometimes you can wrap a small set of
> such chunks for performance. You shouldn't be wrapping lots of
> unrelated operations into a single transaction. This is better for
> avoiding concurrency conflicts as well.
>
indeed, it is. However, the very point for us is that the objects are
related and often cross-reference each other quite heavily, and that
what we really should do is either commit or, in the case of errors,
roll back an import in its entirety rather than end up with objects that
point to nowhere.
> For example, all composite operations inside Elephant use transactions
> to ensure consistency of slot and index values. Slot writes to
> indexed slots wrap both the write and the index updates into a single
> transaction. Otherwise every sub-operation ends up in its own little
> transaction and you can get inconsistent indices if you have an error
> in between the slot write and index update.
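a user-level picture of what that means (DEFPCLASS and the :index slot
option are standard Elephant 0.9 syntax, if I remember it right; the
class itself is made up):

  (ele:defpclass article ()
    ((title :accessor article-title :initarg :title :index t)))

  ;; (setf (article-title a) "New title") touches two btrees: the slot
  ;; storage and the title index.  Elephant wraps both writes in one
  ;; transaction, so an error in between can't leave them out of sync.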
>
> - One downside of a transactional store on BDB is that every little
> transaction results in a set of pages being flushed to disk (any Btree
> index pages and the leaf pages with data). Flushing pages to disk
> waits for a write-done from the driver; this can be very expensive
> (10's of ms if I remember correctly). Flushing one page takes about
> the same time as flushing a few dozen pages, so you can save
> significant run time by wrapping your leaf operations in a transaction
> - plus you get all the usual ACID properties.
agreed.
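in code the difference is just a matter of where the WITH-TRANSACTION
sits (INSERT-ONE stands in for any small composite operation):

  ;; one log flush per object:
  (dolist (obj objects)
    (ele:with-transaction ()
      (insert-one obj)))

  ;; one flush for the whole batch, plus atomicity across it:
  (ele:with-transaction ()
    (dolist (obj objects)
      (insert-one obj)))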
> In my web application (using weblocks) I wrap every ajax callback on
> the server side in one big transaction, since I know a priori that
> operations within these callbacks aren't going to blow out 20M of
> cache or 1000's of locks. Typically I update a couple of objects and
> indices. This may be a few dozen 4k or 8k pages.
for this type of interaction, a transaction, let alone a single
callback, will never be large.
Best regards,
Marc
>
> On Sep 23, 2008, at 9:42 AM, Marc wrote:
>
>> Ian Eslick wrote:
>>> You could be running out of cache or locks. I believe there are now
>>> parameters in config.sexp you can set to raise the default limits.
>>> The lack of robustness to allocation failures is a problem with
>>> Berkeley DB.
>>>
>>> Unfortunately, running recovery isn't a trivial process. You have to
>>> guarantee that all other threads have released any Berkeley DB
>>> resources (abort all active transactions) and don't try to request any
>>> more (meaning no persistent slot reads/writes for Elephant) so you
>>> essentially need to get inside the state of each process, abort any
>>> transactions, and then suspend each thread. This isn't something that
>>> you can canonicalize inside the Elephant library.
>>>
>>> Chalk this up as another reason to someday implement a lisp-only
>>> version of the library!
>> Indeed, that'd be a dream. For us, at least, this prevents us from
>> seriously using transactions at all in elephant. We do use them to speed
>> up some bulk inserts when we know that the number of inserts in one go
>> won't be too big, but not for larger logical transactions where they'd
>> really be called for. Raising the limits at best slightly lessens the
>> pain, but is not really a solution.
>>
>> Best regards,
>>
>> Marc
>>>
>>> On Sep 21, 2008, at 10:25 PM, Red Daly wrote:
>>>
>>>> I have recently run into "Cannot allocate memory" problems with
>>>> Elephant on a production server. Unfortunately, when one transaction
>>>> is too large, it seems to break the database until a manual recovery
>>>> is done.
>>>>
>>>> The occasional failure is slightly worrisome, but the whole
>>>> database requiring a manual recovery after one extra-large
>>>> transaction is a scary thought for a live application with
>>>> thousands of users.
>>>>
>>>> Why does the memory allocation failure sour the whole database
>>>> instead of aborting a single transaction? I think that elephant
>>>> should try to encapsulate this failure, recovering the database or
>>>> whatever is necessary to make the store usable for the next
>>>> transaction.
>>>>
>>>> Best,
>>>> Red Daly
>>>>
>>>>
>>>> On Sat, Jan 5, 2008 at 5:02 PM, Victor Kryukov
>>>> <victor.kryukov at gmail.com> wrote:
>>>> On Jan 4, 2008 2:54 AM, Ian Eslick <eslick at csail.mit.edu> wrote:
>>>>> Hi Victor,
>>>>>
>>>>> Sounds like your transaction is blowing out the shared memory
>>>>> allocated by Berkeley DB to store dirty pages. This is caused by
>>>>> transactions that are too large; putting an entire file of data could
>>>>> well accomplish this. (We really should change the error message to
>>>>> be more informative in these cases).
>>>>>
>>>>> Try pushing with-transaction into the loop in import-movie as
>>>>> follows:
>>>>
>>>> Thanks for your suggestion, Ian - the problem was solved once I
>>>> moved with-transaction inside collect-rating-info.