Opened 17 years ago
Closed 16 years ago
#6413 closed (fixed)
Deadlock while culling the cache using the locmem backend
Reported by: | | Owned by: | Tomáš Kopeček
---|---|---|---|
Component: | Core (Cache system) | Version: | dev |
Severity: | | Keywords: |
Cc: | | Triage Stage: | Ready for checkin
Has patch: | yes | Needs documentation: | no |
Needs tests: | no | Patch needs improvement: | no |
Easy pickings: | no | UI/UX: | no |
Description
I'm playing around with the locmem cache backend and I set my 'max_entries' to a particularly low value such that the culling algorithm will kick in and prune my cache.
    ORG_CACHE = cache.get_cache("locmem:///?max_entries=20&timeout=3600")
When the cull logic actually fires, I experience what appears to be a deadlock on this line of the locmem backend:
    def _cull(self):
        if self._cull_frequency == 0:
            self._cache.clear()
            self._expire_info.clear()
        else:
            doomed = [k for (i, k) in enumerate(self._cache)
                      if i % self._cull_frequency == 0]
            for k in doomed:
                self.delete(k)

    def delete(self, key):
        self._lock.writer_enters()   # <------ deadlocks here
        try:
            self._delete(key)
        finally:
            self._lock.writer_leaves()
Based on the way the surrounding code is written, I think the '_cull' method should actually be calling '_delete' instead of 'delete', since the lock has already been acquired earlier in the call stack. Indeed, when I change it to call '_delete', the deadlock no longer occurs.
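For reference, here is a sketch of the patched method, assuming the attached locmem_6413.diff makes exactly the change described above (only the final call changes; the rest of the snippet is as quoted):

    def _cull(self):
        if self._cull_frequency == 0:
            self._cache.clear()
            self._expire_info.clear()
        else:
            doomed = [k for (i, k) in enumerate(self._cache)
                      if i % self._cull_frequency == 0]
            for k in doomed:
                # The caller (e.g. set()) already holds the write lock, so
                # use the lock-free _delete(); calling delete() here would
                # try to re-acquire the non-reentrant lock and deadlock.
                self._delete(k)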
Attachments (1)
Change History (8)
by , 17 years ago
Attachment: | locmem_6413.diff added |
---|
comment:1 by , 17 years ago
You're probably right... I'll pass this to design decision and let the smarty-heads look at it.
Perhaps bring this up on django-dev to get some confirmation.
comment:2 by , 17 years ago
Triage Stage: | Unreviewed → Design decision needed |
---|
comment:3 by , 17 years ago
Here's a pretty simple test case.
    >>> from django.core.cache import get_cache
    >>> cache = get_cache('locmem:///?max_entries=20')
    >>> for x in range(25):
    ...     print x
    ...     cache.set(x, 'foo')  # loop will hang on iteration x=20
    ...
I think my patch will fix the locmem cache, but this also points to non-intuitive behavior in django.utils.synch.RWLock. I'm no expert on threading libraries, but in my experience, similar locking mechanisms typically don't deadlock a thread that tries to acquire the same lock more than once.
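To illustrate the distinction, here is a minimal standalone sketch (not Django's RWLock; the function names cull and delete_entry are illustrative only) showing that Python's reentrant threading.RLock tolerates re-acquisition by the same thread, which is the behavior the comment above expects:

    import threading

    # With a plain threading.Lock(), the nested acquisition below would
    # hang forever, just as delete() hangs when it tries to re-enter the
    # write lock its caller already holds. RLock permits the pattern.
    lock = threading.RLock()

    def delete_entry():
        with lock:          # re-acquired by the same thread; no deadlock
            print("deleting under the lock")

    def cull():
        with lock:          # outer acquisition
            delete_entry()

    cull()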
comment:4 by , 17 years ago (follow-up: 5)
I don't know why this is in the "design decision needed" stage. This is a no-brainer; it's just a typo.
comment:5 by , 17 years ago (follow-up: 6)
Replying to fredd4@gmail.com:
I don't know why this is in the "design decision needed" stage. This is a no-brainer; it's just a typo.
I agree; this ticket should probably be patched and closed. Beyond that, a design decision discussion about changing the RWLock semantics could be opened. I personally think that Joe Holloway is correct.
comment:6 by , 16 years ago
Owner: | changed from | to
---|---|
Status: | new → assigned |
Triage Stage: | Design decision needed → Ready for checkin |
Replying to permon:
Replying to fredd4@gmail.com:
I don't know why this is in the "design decision needed" stage. This is a no-brainer; it's just a typo.
I agree; this ticket should probably be patched and closed. Beyond that, a design decision discussion about changing the RWLock semantics could be opened. I personally think that Joe Holloway is correct.
The design question was clarified in this django-developers thread: http://groups.google.cz/group/django-developers/browse_thread/thread/a8e8d9b84f525d7/d3603fcc4fd388cf?lnk=gst&q=RWLock#d3603fcc4fd388cf
So I think the patch is sufficient.
comment:7 by , 16 years ago
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
Patch for ticket #6413