Deadlock while culling cache using locmem backend
|Reported by:||Owned by:||Tomáš Kopeček|
|Component:||Core (Cache system)||Version:||master|
|Cc:||Triage Stage:||Ready for checkin|
|Has patch:||yes||Needs documentation:||no|
|Needs tests:||no||Patch needs improvement:||no|
I'm playing around with the locmem cache backend and set 'max_entries' to a particularly low value, so that the culling algorithm kicks in and prunes my cache:
ORG_CACHE = cache.get_cache("locmem:///?max_entries=20&timeout=3600")
When the cull logic actually fires, I experience what appears to be a deadlock condition, on this line of the locmem backend:
    def _cull(self):
        if self._cull_frequency == 0:
            self._cache.clear()
            self._expire_info.clear()
        else:
            doomed = [k for (i, k) in enumerate(self._cache) if i % self._cull_frequency == 0]
            for k in doomed:
                self.delete(k)

    def delete(self, key):
        self._lock.writer_enters()  # <------ deadlocks here
        try:
            self._delete(key)
        finally:
            self._lock.writer_leaves()
Based on the way the surrounding code is written, I think '_cull' should call the internal '_delete' method instead of 'delete', since the write lock has already been acquired earlier in the call stack and is not reentrant. Indeed, when I change it to call '_delete', the deadlock no longer occurs.
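To illustrate the diagnosis, here is a minimal self-contained sketch (not Django's actual code) of the same locked/unlocked delete split. `TinyCache` and its method names are hypothetical; a plain non-blocking `threading.Lock` acquire stands in for the writer side of Django's RWLock so the re-entry is detected rather than hanging forever:

```python
import threading

class TinyCache:
    def __init__(self):
        self._cache = {"a": 1, "b": 2, "c": 3, "d": 4}
        # Non-reentrant lock, mirroring the writer side of Django's RWLock.
        self._lock = threading.Lock()

    def delete(self, key):
        # Public API: acquires the write lock itself.
        if not self._lock.acquire(blocking=False):
            # A real non-reentrant lock would block here forever.
            raise RuntimeError("deadlock: write lock already held")
        try:
            self._delete(key)
        finally:
            self._lock.release()

    def _delete(self, key):
        # Internal helper: the caller must already hold the lock.
        self._cache.pop(key, None)

    def cull_buggy(self):
        with self._lock:            # lock taken here...
            for k in list(self._cache):
                self.delete(k)      # ...and delete() tries to take it again

    def cull_fixed(self):
        with self._lock:
            for k in list(self._cache):
                self._delete(k)     # already-locked helper: no re-acquire

cache = TinyCache()
try:
    cache.cull_buggy()
except RuntimeError as exc:
    print(exc)                      # deadlock: write lock already held

cache = TinyCache()
cache.cull_fixed()
print(cache._cache)                 # {}
```

The sketch shows why the fix works: `_delete` assumes the lock is held by its caller, so culling inside the locked region must use it, while external callers keep using the locking `delete` wrapper.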
Change History (8)
comment:6 Changed 9 years ago by
|Owner:||changed from nobody to Tomáš Kopeček|
|Status:||new → assigned|
|Triage Stage:||Design decision needed → Ready for checkin|