Opened 12 years ago

Closed 12 years ago

#18401 closed Cleanup/optimization (wontfix)

Use ORM model in database based cache backend

Reported by: Anssi Kääriäinen
Owned by: nobody
Component: Core (Cache system)
Version: dev
Severity: Normal
Keywords:
Cc:
Triage Stage: Design decision needed
Has patch: no
Needs documentation: no
Needs tests: no
Patch needs improvement: no
Easy pickings: no
UI/UX: no

Description

The database-based cache backend uses raw SQL for both querying and table creation. This results in a lot of repeated, non-DRY code, and in SQL that is hard to get correct for 3rd party backends (#18330).

Using the ORM in the cache backend could result in some slowdown due to queryset cloning, and because the ORM is in general a little slower than raw SQL, but I can't see that as a blocker. Anybody using the database as a cache backend is already using something that isn't as fast as, for example, memcached. I haven't done any measurements here, but I believe the slowdown is minor.

See pull request https://github.com/django/django/pull/97 for a patch implementing this.
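
For reference, here is a minimal sketch (not the actual patch) of the kind of model this would introduce. The field names mirror the columns that manage.py createcachetable creates today; the dynamic table naming is simplified to keep the sketch short.

from django.db import models

class CacheEntry(models.Model):
    # Columns matching the existing cache table layout: key, pickled
    # and base64-encoded payload, and expiry timestamp.
    cache_key = models.CharField(max_length=255, primary_key=True)
    value = models.TextField()
    expires = models.DateTimeField(db_index=True)

    class Meta:
        # In the real backend the table name comes from the cache
        # LOCATION setting, so the model would have to be built
        # dynamically; a fixed name is used here purely for illustration.
        db_table = 'my_cache_table'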

Change History (11)

comment:1 by Anssi Kääriäinen, 12 years ago

I spotted one possible problem. Using delete & save will do commits of their own if we are not in managed transactions. One solution is to move the force_managed wrapper from models/deletion.py into the transaction module and use it for all the methods - although it would need to be rewritten as a context manager, as we don't know which db we will be using at the function level...
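
A minimal sketch, using the transaction functions available at the time of writing, of what such a context manager could look like (the name and the details are illustrative, not part of the patch):

from django.db import transaction

class force_managed(object):
    # Join an existing managed transaction if there is one; otherwise
    # enter managed mode, commit on success and roll back on error.
    # The database alias is passed in at call time, since it is not
    # known at function-definition level.
    def __init__(self, using=None):
        self.using = using

    def __enter__(self):
        self.forced = not transaction.is_managed(using=self.using)
        if self.forced:
            transaction.enter_transaction_management(using=self.using)
            transaction.managed(True, using=self.using)

    def __exit__(self, exc_type, exc_value, tb):
        if not self.forced:
            return
        try:
            if exc_type is None:
                transaction.commit(using=self.using)
            else:
                transaction.rollback(using=self.using)
        finally:
            transaction.leave_transaction_management(using=self.using)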

comment:2 by Aymeric Augustin, 12 years ago

Indeed, the db cache backend uses raw SQL to be independent from transaction management (AFAIK).

comment:3 by Anssi Kääriäinen, 12 years ago

Triage Stage: Unreviewed → Accepted

I don't believe you can skip transaction management by using raw SQL. The db cache backend uses the standard connection, the standard connection uses standard transaction management, and whatever transactions are already in progress on that connection are used. If there is a need for transaction management outside of the normal transaction management, then you need a separate connection.

To get a perfect solution for the transaction management here, we would want to wrap the writing operations in @force_managed (so they join an existing transaction if there is one, else create a separate transaction and commit it on success). For the read-only operations one would ideally want a similar @force_managed wrapper, but using autocommit by default. In practice, just using @force_managed is enough for all cases. It is notable that the current solution can leak transactions from the read-only cases, as those do not ensure that transactions opened by the SQL are closed; force_managed would do this.

I am marking this accepted just to get it out of the unreviewed state. If it turns out there are some blocking issues here, then let's revisit the triage stage. IMHO the DRYness gained from this change is worth some hassle.

in reply to:  3 comment:4 by Aymeric Augustin, 12 years ago

Replying to akaariai:

I don't believe you can skip transaction management by using RAW SQL. The cache-db backend uses standard connection

Indeed -- I wrongly believed that a separate connection was used, sorry.

in reply to:  2 comment:5 by Russell Keith-Magee, 12 years ago

Replying to aaugustin:

Indeed, the db cache backend uses raw SQL to be independent from transaction management (AFAIK).

Incorrect. The rationale was that a cache backend, by definition, should be fast, and while the overhead of the ORM isn't huge, it's an overhead that matters when you're doing something specifically for performance reasons (i.e., caching). Since the query that the cache requires isn't that complex, there isn't really a significant benefit to using the full weight of the ORM to protect you from SQL... until you hit Oracle, and the query needs to be significantly different.

For my money, this ticket looks like a duplicate of #15580, and the approach suggested by Ian on that ticket is the right approach.

comment:6 by Anssi Kääriäinen, 12 years ago

No, this is not a duplicate of #15580. This is about getting totally rid of all the raw SQL, both in table creation and in querying. Two reasons why: DRY, and 3rd party backends - how are they supposed to deal with the non-standard LIMIT SQL (or, for that matter, with anything where they happen to use something other than the raw SQL used currently)? There is a ticket about that (#18330). For the DRY part, timezone support, for example, needs zero repeated code here when we use the ORM. No need to use connection internals either.
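
To make the LIMIT point concrete, here is a hedged sketch of how the cull step, which today needs backend-specific LIMIT/OFFSET SQL obtained through connection.ops, might read with the ORM (CacheEntry is the illustrative model from the description above):

def cull(num_entries, cull_frequency):
    # Pick the key sitting at the cull boundary and delete every row
    # ordered before it; the slice below is what currently requires
    # non-standard LIMIT/OFFSET SQL.
    keys = (CacheEntry.objects.order_by('cache_key')
                              .values_list('cache_key', flat=True))
    try:
        boundary = keys[num_entries // cull_frequency]
    except IndexError:
        return
    CacheEntry.objects.filter(cache_key__lt=boundary).delete()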

I still have to do some speed testing here. In general, I would say that if you are using a database for caching, then you aren't really that interested in speed - caching in an SQL database is a slow approach to begin with.

comment:7 by Aymeric Augustin, 12 years ago

Russell, thanks for the explanation :)


Here are my two cents on the topic.

I'm unconvinced by the concept of the database cache backend (and I've stopped using it):

  • In development, I use the dummy cache, because I don't want caching; I switch to the local memory cache when I specifically want to work on caching;
  • In automated tests, the local memory cache is my friend;
  • In production, memcached is trivial to install.

The database cache backend doesn't take a lot of maintenance, and it's good enough for many practical uses, so I'm not suggesting that we deprecate it.

I also don't have strong feelings on rewriting it to use the ORM. I just think that we won't have an efficient, generic, cross-engine implementation of a database cache, so it may not be worth spending a lot of effort on this.


Re. #16481: I committed the fix back when I got all the tests to pass under Oracle. Indeed, the technique in #15580 is much better. The same idea came up in #18330 again, so let's follow up over there.

comment:8 by Anssi Kääriäinen, 12 years ago

OK, I see that #15580 is about a generic, 3rd-party-backend-friendly approach, not about the approach that was committed. I missed that in my previous post.

Still, I think it is worth using the ORM in the db cache. It will simplify the implementation and make maintenance easier. Yes, it will be somewhat slower, and without benchmarking it's hard to know how much slower. To me it seems that anybody who can't take a minor performance hit here is using the wrong cache backend to begin with. If the performance hit turns out to be major, then that is indeed a blocking reason.

comment:9 by Anssi Kääriäinen, 12 years ago

Triage Stage: Accepted → Design decision needed

Changed this to DDN as clearly there isn't a consensus this is a good idea.

comment:10 by Anssi Kääriäinen, 12 years ago

(Somewhat long comment, jump to the end for conclusions.) I did some benchmarking, and indeed there is a big speed loss if you test the synthetic speed of the SQL. The test case (using on-disk SQLite unless otherwise mentioned):

import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'django_test.settings'
from django.core.cache import cache
from datetime import datetime
from django.db import transaction
# A little prewarming
cache.clear()
for i in range(0, 10):
    cache.set('my_key', 'hello, world!', 30)
    cache.get('my_key')
transaction.enter_transaction_management()
transaction.managed(True)
start = datetime.now()
for i in range(0, 1000):
    cache.set('my_key', 'hello, world!', 30)
    cache.get('my_key')
print datetime.now() - start
transaction.commit()
transaction.leave_transaction_management()

The test result is about 2.5s vs 0.5s on my machine. However, add in a transaction per iteration, which is the realistic use case, as usually you aren't adding 1000 keys inside a single transaction. The body of the test loop is changed to this:

transaction.enter_transaction_management()
transaction.managed(True)
cache.set('my_key', 'hello, world!', 30)
cache.get('my_key')
transaction.commit()
transaction.leave_transaction_management()

Test result: 1:14.9 vs 1:13.5.

So, in write situations the DB cache is so slow that the raw SQL vs. ORM speed simply doesn't matter. In addition, usually the thing you are going to cache is slow to generate to begin with.

For the record, most of the overhead of the ORM comes from cloning (in practice: deepcopy). Here is the top of the profile for the single-tx case:

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.005    0.005   10.659   10.659 test.py:1(<module>)
     1010    0.007    0.000    7.241    0.007 db.py:68(set)
     1010    0.047    0.000    7.175    0.007 db.py:78(_base_set)
    12120    0.352    0.000    6.208    0.001 query.py:237(clone)
202991/48480    1.722    0.000    5.319    0.000 copy.py:145(deepcopy)
    10101    0.083    0.000    5.174    0.001 query.py:864(_clone)
     2020    0.022    0.000    4.366    0.002 query.py:360(get)
     1010    0.032    0.000    2.850    0.003 db.py:50(get)
30298/15149    0.338    0.000    2.648    0.000 copy.py:234(_deepcopy_tuple)
     1010    0.003    0.000    2.552    0.003 base.py:449(save)
     1010    0.033    0.000    2.548    0.003 base.py:483(save_base)
27269/24240    0.183    0.000    2.538    0.000 tree.py:55(__deepcopy__)
     4041    0.027    0.000    1.958    0.000 manager.py:196(using)
     3029    0.009    0.000    1.879    0.001 query.py:627(filter)
     3029    0.026    0.000    1.870    0.001 query.py:641(_filter_or_exclude)
     2020    0.015    0.000    1.786    0.001 query.py:759(order_by)
     4041    0.055    0.000    1.720    0.000 compiler.py:795(execute_sql)

So, fixing the clone speed would be good, and it would also be good to introduce an .inplace() manager method, at least for internal use. This would speed up .save() and other internal uses considerably.
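
Purely as a hypothetical sketch (this is not an existing Django API), such an .inplace() escape hatch could look like the following: once enabled, chained calls mutate the same queryset instead of deep-copying it.

from django.db.models.query import QuerySet

class InplaceQuerySet(QuerySet):
    # Hypothetical flag that makes _clone() return self, skipping the
    # Query.clone()/deepcopy work that dominates the profile above.
    _inplace = False

    def inplace(self):
        self._inplace = True
        return self

    def _clone(self, *args, **kwargs):
        if self._inplace:
            return self
        return super(InplaceQuerySet, self)._clone(*args, **kwargs)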

I created another branch for this test; it contains improved transaction handling. I didn't put too much time into hacking on this, so it is not intended for commit: https://github.com/akaariai/django/tree/cache_orm_tx

For the more important read-only case, the speed difference is 0.7 seconds vs 0.1 seconds (test loop: cache.get('my_key')). But even here the test isn't really valid - when serving requests, every request must use a new connection (connection.close() is called at the end of each request). If we add this into the mix, the difference is 0.55s vs 1.1s (add connection.close() to the test loop). On localhost PostgreSQL the difference is 4.4s vs 3.8s using connection.close(); with the same connection it is 0.26s vs 0.88s.
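
For reference, the read-only variant of the test loop with the per-request connection teardown added looks roughly like this (same setup and imports as the script above):

from django.db import connection

start = datetime.now()
for i in range(0, 1000):
    cache.get('my_key')
    # Simulate end-of-request handling, which closes the connection.
    connection.close()
print datetime.now() - start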

When testing this on localhost memcached, the results are 0.08s for the read-only case and 0.14s for the read-write case.

So, the conclusion is: for the read case, using raw SQL is warranted. Still, in practice the difference here is going to be pretty small once you take connection creation into account. If you are after real speed, you should use memcached instead. If query.clone() performance were improved, or there were an .inplace() queryset operator, the speed differences would be even smaller than mentioned above. When writing to the cache, I can't see any performance reason to use raw SQL. Using the ORM for the write operations will get rid of most of the raw SQL queries, and especially of the problematic cull query.

The real problem is why the ORM is so slow in these cases - we spend 4x as much time generating the query as we spend communicating it to the db, having the db parse, plan and execute it, and transferring the results back.

The force_managed() tx-manager introduced in the branch needs its own ticket, and would simplify some other places (deletion.py, bulk_create.py, .save(), management commands...).

comment:11 by Anssi Kääriäinen, 12 years ago

Resolution: wontfix
Status: new → closed

I am going to close this one. I still think that using the ORM for the cache backend is a good idea, but there doesn't seem to be enough support for it.
