Opened 7 years ago

Closed 7 years ago

#18401 closed Cleanup/optimization (wontfix)

Use ORM model in database based cache backend

Reported by: Anssi Kääriäinen Owned by: nobody
Component: Core (Cache system) Version: master
Severity: Normal Keywords:
Cc: Triage Stage: Design decision needed
Has patch: no Needs documentation: no
Needs tests: no Patch needs improvement: no
Easy pickings: no UI/UX: no


The database-based cache backend uses raw SQL for both querying and table creation. This results in a lot of non-DRY code, and SQL that is hard to get correct for third-party backends (#18330).

Using the ORM in the cache backend could cause some slowdown due to queryset cloning, and because the ORM is in general a little slower than raw SQL, but I can't see that as a blocker. Anybody using the db as a cache backend is already using something slower than, for example, memcached. I haven't done any measurements here, but I believe the slowdown is minor.

See pull request for a patch implementing this.
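For illustration, the core set/get/expiry semantics that the backend implements (and that an ORM model would express declaratively) can be sketched with the stdlib sqlite3 module. The table and column names here are assumptions for the sketch, not Django's actual cache table schema:

```python
import sqlite3
import time

# Hypothetical schema mirroring what a db cache table stores:
# a key, an encoded value, and an expiry timestamp.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE cache_entry ("
    "  cache_key TEXT PRIMARY KEY,"
    "  value TEXT NOT NULL,"
    "  expires REAL NOT NULL)"
)

def cache_set(key, value, timeout):
    # "Upsert" semantics: replace any existing row for this key.
    conn.execute(
        "INSERT OR REPLACE INTO cache_entry VALUES (?, ?, ?)",
        (key, value, time.time() + timeout),
    )

def cache_get(key):
    row = conn.execute(
        "SELECT value, expires FROM cache_entry WHERE cache_key = ?",
        (key,),
    ).fetchone()
    if row is None or row[1] < time.time():
        return None  # missing or expired
    return row[0]

def cull():
    # Drop expired rows; a real backend also culls by entry count.
    conn.execute("DELETE FROM cache_entry WHERE expires < ?", (time.time(),))

cache_set("my_key", "hello, world!", 30)
print(cache_get("my_key"))  # hello, world!
```

The point of the ticket is that an ORM model would express this same table and these same queries portably, instead of hand-written SQL per backend.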

Change History (11)

comment:1 Changed 7 years ago by Anssi Kääriäinen

I spotted one possible problem: using delete & save will do commits of their own if not in managed transactions. One solution is to move the force_managed wrapper from models/ into the transaction module and use it for all the methods - although it would need to be rewritten as a context manager, as we don't know which db we will be using at the function level...

comment:2 Changed 7 years ago by Aymeric Augustin

Indeed, the db cache backend uses raw SQL to be independent from transaction management (AFAIK).

comment:3 Changed 7 years ago by Anssi Kääriäinen

Triage Stage: Unreviewed → Accepted

I don't believe you can skip transaction management by using raw SQL. The db cache backend uses the standard connection, the standard connection uses standard transaction management, and whatever transactions are already going on in the connection are used. If you need transaction management outside of the normal transaction management, you need a separate connection.

For a perfect solution to the transaction management here, we would want to wrap the writing operations in @force_managed (so they join an existing transaction if there is one, else create a separate transaction and commit it on success). For the read-only operations one would ideally want a similar @force_managed wrapper, but defaulting to autocommit. In practice just using @force_managed is enough for all cases. Notably, the current solution can leak transactions in the read-only cases, as those do not ensure that transactions opened by the SQL are closed; force_managed would do this.
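The behaviour described above - join an existing transaction if one is open, otherwise open one and commit it on success (rolling back on error) - could be sketched as a context manager roughly like this. The connection object here is a minimal stand-in, not Django's actual connection API:

```python
from contextlib import contextmanager

class FakeConnection:
    """Stand-in for a DB connection with minimal transaction state."""
    def __init__(self):
        self.in_transaction = False
        self.log = []

    def enter_transaction(self):
        self.in_transaction = True
        self.log.append("begin")

    def commit(self):
        self.in_transaction = False
        self.log.append("commit")

    def rollback(self):
        self.in_transaction = False
        self.log.append("rollback")

@contextmanager
def force_managed(conn):
    # If a transaction is already open, just join it and let the
    # outer code decide when to commit.
    if conn.in_transaction:
        yield
        return
    # Otherwise open our own transaction: commit on success,
    # roll back on error.
    conn.enter_transaction()
    try:
        yield
    except Exception:
        conn.rollback()
        raise
    else:
        conn.commit()

conn = FakeConnection()
with force_managed(conn):
    pass  # cache write would happen here
print(conn.log)  # ['begin', 'commit']
```

Because reads are also wrapped, a transaction opened implicitly by a SELECT gets closed on exit, which addresses the leak mentioned above.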

I am marking this accepted just to get it out of the unreviewed state. If it turns out there are blocker issues here, let's revisit the triage stage. IMHO the DRY win from this change is worth some hassle.

comment:4 in reply to:  3 Changed 7 years ago by Aymeric Augustin

Replying to akaariai:

I don't believe you can skip transaction management by using RAW SQL. The cache-db backend uses standard connection

Indeed -- I wrongly believed that a separate connection was used, sorry.

comment:5 in reply to:  2 Changed 7 years ago by Russell Keith-Magee

Replying to aaugustin:

Indeed, the db cache backend uses raw SQL to be independent from transaction management (AFAIK).

Incorrect. The rationale was that a cache backend, by definition, should be fast, and while the overhead of the ORM isn't huge, it's an overhead that matters when you're doing something specifically for performance reasons (i.e., caching). Since the query that the cache requires isn't that complex, there isn't really a significant benefit to using the full weight of the ORM to protect you from SQL... until you hit Oracle, and the query needs to be significantly different.

For my money, this ticket looks like a duplicate of #15580, and the approach suggested by Ian on that ticket is the right approach.

comment:6 Changed 7 years ago by Anssi Kääriäinen

No, this is not a duplicate of #15580. This is about getting rid of all the raw SQL entirely, both in table creation and in querying. Two reasons why: DRY, and third-party backends - how are they supposed to deal with the non-standard LIMIT SQL (or, for that matter, anything where they use something other than the raw SQL currently in place)? There is a ticket about that (#18330). As for DRY, timezone support, for example, needs zero repeated code here when we use the ORM. No need to use connection internals either.

I still have to do some speed testing here. In general, I would say that if you are using a database for caching, then you aren't really that interested in speed - caching in an SQL database is a slow approach to begin with.

comment:7 Changed 7 years ago by Aymeric Augustin

Russell, thanks for the explanation :)

Here are my two cents on the topic.

I'm unconvinced by the concept of the database cache backend (and I've stopped using it):

  • In development, I use the dummy cache, because I don't want caching; I switch to the local memory cache when I specifically want to work on caching;
  • In automated tests, the local memory cache is my friend;
  • In production, memcached is trivial to install.

The database cache backend doesn't take a lot of maintenance, and it's good enough for many practical uses, so I'm not suggesting to deprecate it.

I also don't have strong feelings on rewriting it to use the ORM. I just think that we won't have an efficient, generic, cross-engine implementation of a database cache, so it may not be worth spending a lot of effort on this.

Re. #16481: I committed the fix back when I got all the tests to pass under Oracle. Indeed, the technique in #15580 is much better. The same idea came up in #18330 again, so let's follow up over there.

comment:8 Changed 7 years ago by Anssi Kääriäinen

OK, I see that #15580 is about a generic, third-party-backend-friendly approach, not the approach that was committed. I missed that in my previous post.

Still, I think it is worth using the ORM in the db cache. It will simplify the implementation and make it easier to maintain. Yes, it will be somewhat slower, and without benchmarking it's hard to know how much. To me it seems that anybody who can't take a minor performance hit here is using the wrong cache backend to begin with. If the performance hit is major, then that is indeed a blocking reason.

comment:9 Changed 7 years ago by Anssi Kääriäinen

Triage Stage: Accepted → Design decision needed

Changed this to DDN as there clearly isn't a consensus that this is a good idea.

comment:10 Changed 7 years ago by Anssi Kääriäinen

(Somewhat long comment; jump to the end for conclusions.) I did some benchmarking, and indeed there is a big speed loss if you test the synthetic speed of the SQL. The test case (using on-disk SQLite unless otherwise mentioned):

import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'django_test.settings'
from django.core.cache import cache
from datetime import datetime
from django.db import transaction
# A little prewarming
for i in range(0, 10):
    cache.set('my_key', 'hello, world!', 30)
start = datetime.now()
for i in range(0, 1000):
    cache.set('my_key', 'hello, world!', 30)
print(datetime.now() - start)

Test result is about 2.5s vs 0.5s on my machine. However, add in transactions, which is the realistic use case, as usually you aren't adding 1000 keys in a loop. The test loop is changed to wrap each set in its own transaction:

for i in range(0, 1000):
    with transaction.commit_on_success():
        cache.set('my_key', 'hello, world!', 30)

Test result: 1:14.9 vs 1:13.5.

So, in write situations the DB cache is so slow that the raw SQL vs. ORM speed simply doesn't matter. In addition, usually the thing you are going to cache is slow to generate to begin with.

For the record, most of the overhead of the ORM comes from cloning (in practice: deepcopy). Here is the top of the profile for the single-tx case:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.005    0.005   10.659   10.659  <module>
     1010    0.007    0.000    7.241    0.007
     1010    0.047    0.000    7.175    0.007
    12120    0.352    0.000    6.208    0.001
202991/48480    1.722    0.000    5.319    0.000
    10101    0.083    0.000    5.174    0.001
     2020    0.022    0.000    4.366    0.002
     1010    0.032    0.000    2.850    0.003
30298/15149    0.338    0.000    2.648    0.000
     1010    0.003    0.000    2.552    0.003
     1010    0.033    0.000    2.548    0.003
27269/24240    0.183    0.000    2.538    0.000
     4041    0.027    0.000    1.958    0.000
     3029    0.009    0.000    1.879    0.001
     3029    0.026    0.000    1.870    0.001
     2020    0.015    0.000    1.786    0.001
     4041    0.055    0.000    1.720    0.000

So fixing the clone speed would be good, and it would also be good to introduce an .inplace() manager method, at least for internal use. This would speed up .save() and other internal uses considerably.
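To make the idea concrete: a hypothetical .inplace() would let internal callers skip the defensive clone that every chained queryset method performs today. A toy illustration of the proposed API (this is not Django's QuerySet implementation; the class and method names are stand-ins):

```python
import copy

class ToyQuerySet:
    """Toy stand-in: chained methods clone by default, like Django's
    QuerySet, but can be switched to in-place mutation."""
    def __init__(self):
        self.filters = []
        self._inplace = False

    def _chain(self):
        # The deepcopy here is the cost the profile above attributes
        # to cloning; .inplace() would skip it for internal callers.
        return self if self._inplace else copy.deepcopy(self)

    def inplace(self):
        self._inplace = True
        return self

    def filter(self, **kwargs):
        qs = self._chain()
        qs.filters.append(kwargs)
        return qs

# Default behaviour: chaining never mutates the original queryset.
qs = ToyQuerySet()
qs2 = qs.filter(cache_key="my_key")
print(qs.filters, qs2.filters)  # [] [{'cache_key': 'my_key'}]

# In-place mode: no clone, the same object is mutated and returned.
qs3 = ToyQuerySet().inplace()
qs4 = qs3.filter(cache_key="my_key")
print(qs3 is qs4)  # True
```

For user-facing code the defensive clone is the right default; the win is only for internal call sites that build a query and use it exactly once.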

I created another branch for this test; it contains improved transaction handling. I didn't spend much time hacking on it, so it is not intended for commit.

For the more important read-only case, the speed difference is 0.7 seconds vs 0.1 seconds (test loop: cache.get('my_key')). But even here the test isn't really valid - when serving requests, every request must use a new connection (connection.close() is called at the end of the request). If we add this into the mix, the difference is 0.55s vs 1.1s (adding connection.close() to the test loop). On localhost PostgreSQL, the difference is 4.4s vs 3.8s using connection.close(); with the same connection it is 0.26s vs 0.88s.
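The methodology here - timing the same query loop with a reused connection versus a fresh connection per iteration - can be reproduced with the stdlib sqlite3 module. The absolute numbers will differ from the Django figures above, but the shape of the comparison is the same:

```python
import sqlite3
import time

def bench(iterations, reopen, db_path=":memory:"):
    """Time a trivial SELECT loop; optionally reopen the connection
    each iteration, as a request/response cycle effectively does."""
    conn = sqlite3.connect(db_path)
    start = time.monotonic()
    for _ in range(iterations):
        if reopen:
            conn.close()
            conn = sqlite3.connect(db_path)
        conn.execute("SELECT 1").fetchone()
    elapsed = time.monotonic() - start
    conn.close()
    return elapsed

reused = bench(1000, reopen=False)
reopened = bench(1000, reopen=True)
print(reused, reopened)
```

Once per-iteration connection setup is in the loop, it tends to dominate, which is why the raw-SQL vs ORM gap shrinks in the per-request numbers above.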

When testing against localhost memcached, the results are 0.08s for the read-only case and 0.14s for the read-write case.

So, the conclusion: for the read case, using raw SQL is warranted. Still, in practice the difference is going to be pretty small once you take connection creation into account; if you are after real speed, you should use memcached instead. If query.clone() performance were improved, or there were an .inplace() queryset operator, the speed differences would be even smaller than mentioned above. For writing to the cache I can't see any performance reason to use raw SQL. Using the ORM for the write operations will get rid of most of the raw queries, and especially the problematic cull query.

The real question is why the ORM is so slow in these cases - we spend 4x the time generating the query compared to communicating it to the db, having the db parse, plan, and execute it, and transferring the results back.

The force_managed() transaction manager introduced in the branch needs its own ticket; it would simplify some other places (.save(), management commands, ...).

comment:11 Changed 7 years ago by Anssi Kääriäinen

Resolution: wontfix
Status: new → closed

I am going to close this one. I still think using the ORM for the cache backend is a good idea, but there doesn't seem to be enough support for it.
