#21351 closed Cleanup/optimization (fixed)
memoize function needs tests and improvements
Reported by: | Daniele Procida | Owned by: | Bouke Haarsma |
---|---|---|---|
Component: | Utilities | Version: | dev |
Severity: | Normal | Keywords: | |
Cc: | bmispelon@…, Bouke Haarsma | Triage Stage: | Accepted |
Has patch: | yes | Needs documentation: | no |
Needs tests: | no | Patch needs improvement: | no |
Easy pickings: | no | UI/UX: | no |
Description
`django.utils.functional.memoize(func, cache, num_args)`:
- lacks tests
- should raise exceptions when provided with illegal arguments (`func` must be a function, `cache` a dict, and `num_args` a positive integer)
- should raise an exception if any of the arguments of `func`, which are used as dictionary keys, is unhashable
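A minimal sketch of what the requested validation could look like (this is a hypothetical variant for illustration, not the actual Django implementation; the name `wrapper` and the exact error messages are assumptions):

```python
def memoize(func, cache, num_args):
    # Hypothetical validating variant of memoize (a sketch, not Django's code).
    if not callable(func):
        raise TypeError("func must be callable")
    if not isinstance(cache, dict):
        raise TypeError("cache must be a dict")
    if not isinstance(num_args, int) or num_args < 1:
        raise ValueError("num_args must be a positive integer")

    def wrapper(*args):
        key = args[:num_args]
        # The dict lookup itself raises TypeError if any argument in the
        # key is unhashable, covering the third point above.
        if key not in cache:
            cache[key] = func(*args)
        return cache[key]
    return wrapper
```

Note that the unhashable-argument case needs no explicit check: using the arguments as a dict key already raises `TypeError` for unhashable values.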
Attachments (2)
Change History (21)
comment:1 by , 11 years ago
comment:2 by , 11 years ago
I think you're right, it looks over-zealous. Do you think the third point's unnecessary as well?
comment:3 by , 11 years ago
There's actually a worse problem with `memoize`, which Shai Berger has pointed out: it chokes on keyword arguments...
His example:

>>> from django.utils import functional as F
>>> def f(a,b): return a+b
...
>>> cache = {}
>>> f = F.memoize(f, cache, 2)
>>> f(3,4)
7
>>> f(3,b=4)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: wrapper() got an unexpected keyword argument 'b'
comment:4 by , 11 years ago
Triage Stage: | Unreviewed → Accepted |
---|
Accepting this on the basis of missing tests alone (I changed `memoize` to be a no-op and the full test suite runs without problems).
As for the other issues described, maybe we could build `memoize` on top of `functools.lru_cache` [1].
It's only available on Python 3, but there is a backport available [2].
[1] http://docs.python.org/3.2/library/functools.html?highlight=lru_cache#functools.lru_cache
[2] http://code.activestate.com/recipes/578078-py26-and-py30-backport-of-python-33s-lru-cache/
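One possible shape of such a shim, sketched here purely for illustration: keep the old `memoize(func, cache, num_args)` signature for backwards compatibility but delegate to `functools.lru_cache`, ignoring the now-redundant `cache` and `num_args` arguments (an assumption about the approach, not the patch that was actually merged):

```python
import functools

def memoize(func, cache, num_args):
    # Hypothetical compatibility shim: cache and num_args are ignored,
    # since lru_cache manages its own storage and keys on all arguments.
    return functools.lru_cache(maxsize=None)(func)
```

A side effect of delegating to `lru_cache` is that the keyword-argument problem from comment:3 disappears, since `lru_cache` keys on both positional and keyword arguments.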
comment:5 by , 11 years ago
Cc: | added |
---|
comment:6 by , 11 years ago
I don't know all the places that memoize is used, but I suspect making it more robust will also make it slower, and for a function whose entire reason for existence is performance, that would be a problem.
Currently, `memoize()` is undocumented, and I think we should consider just making it explicitly private. Users don't really need it in this form (and when they do, they should probably base it on the cache framework).
comment:7 by , 11 years ago
Cc: | added |
---|
Replacing `memoize` with `lru_cache` will probably take some work to get right, as `memoize` allows one to provide a dict for caching, while `lru_cache` hides the cache from the user, only allowing the instrumented `cache_clear()` for clearing the cache altogether.
Update: after my patch I found that although the cache is now hidden, this 'feature' was not used by Django except for clearing the caches.
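The difference in clearing idioms can be shown with a small sketch (`get_setting` is a hypothetical stand-in for an expensive lookup, not a Django function):

```python
import functools

@functools.lru_cache(maxsize=None)
def get_setting(name):
    # Hypothetical stand-in for an expensive computation.
    return name.upper()

get_setting("debug")               # first call: computed and cached
get_setting("debug")               # second call: served from the cache
print(get_setting.cache_info())    # hits/misses/currsize statistics
get_setting.cache_clear()          # replaces the old `cache_dict.clear()` idiom
```

With the dict-based `memoize`, callers cleared the dict they had passed in; with `lru_cache`, they call the instrumented `cache_clear()` instead.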
comment:8 by , 11 years ago
As for the usage of `memoize`:

$ git grep -c "memoize"
django/contrib/staticfiles/finders.py:2
django/core/management/commands/loaddata.py:2
django/core/urlresolvers.py:4
django/utils/functional.py:2
django/utils/translation/trans_real.py:2
tests/decorators/tests.py:2
comment:9 by , 11 years ago
Owner: | changed from | to
---|---|
Status: | new → assigned |
I've submitted a PR: https://github.com/django/django/pull/1842
However, I have some questions about the deprecation and backport:
- Can the backport be included, as it is MIT licensed?
- `memoize` is an internal API; should it follow the deprecation policy?
- Is this the correct way to include a backport?
comment:10 by , 11 years ago
As discussed on IRC, a few benchmarks comparing `memoize` with `lru_cache` and `lru_cache`'s backport. Also, another column with the statistics bookkeeping in the backport disabled for extra speed. These results are achieved with `lru_cache(maxsize=None)`, as that doesn't involve locking and is comparable to `memoize`.
+-------+---------+-----------------+----------+----------+
|       | memoize | lru_cache (py3) | backport | no-stats |
+-------+---------+-----------------+----------+----------+
| write | 0.970   | 2.373           | 2.714    | 2.413    |
| read  | 0.342   | 0.830           | 1.053    | 0.950    |
+-------+---------+-----------------+----------+----------+
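The script behind these numbers is not shown in the ticket; a minimal `timeit`-based sketch of such a comparison (all names here are illustrative, and the `memoize` below is a simplified stand-in for the old implementation) might look like:

```python
import functools
import timeit

def memoize(func, cache, num_args):
    # Minimal sketch of the old memoize: keys on positional args only.
    def wrapper(*args):
        key = args[:num_args]
        if key not in cache:
            cache[key] = func(*args)
        return cache[key]
    return wrapper

def add(a, b):
    return a + b

mem = memoize(add, {}, 2)
lru = functools.lru_cache(maxsize=None)(add)

n = 100_000
for name, fn in (("memoize", mem), ("lru_cache", lru)):
    fn(1, 2)  # warm the cache so the loop measures read performance
    elapsed = timeit.timeit(lambda: fn(1, 2), number=n)
    print(f"{name}: {elapsed:.3f}s for {n} cached reads")
```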
comment:11 by , 11 years ago
The pull request looks good, I left a comment, you can mark the ticket as RFC once you've addressed it.
comment:12 by , 11 years ago
Has patch: | set |
---|---|
Triage Stage: | Accepted → Ready for checkin |
For ticket completeness, bmispelon also left a comment about `memoize(..., num_args=1)` whilst `get_callable` takes 2 arguments:
bmispelon 16 hours ago:
The num_args argument was introduced in b6812f2d9d1bea3bdc9c5f9a747c5c71c764fa0f specifically for get_callable (note how it can take two arguments but the num_args is 1).
I suspect there might be something weird going on but I haven't had time to look into it and unfortunately, we cannot ask the original committer of that change anymore :(
Say a function has two parameters and one result, so A,B -> X and A,C -> Y. However, as the function's cache strategy only accounts for the first argument, it will always return the same result for a given first argument, namely A,? -> X, UNLESS an exception is raised. When the exception happens for A,D -> ?, no cache entry is written, and A,C -> Y still computes correctly. However, if A,C -> Y is executed FIRST and cached, then A,? -> Y and thus A,D -> Y, with no exception being raised.
I have added a test to the PR to demonstrate this problem.
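The order-dependent failure described above can be reproduced with a minimal first-argument-only `memoize` (a sketch mirroring the old behaviour; `divide` and `cached_divide` are hypothetical names, and this is not the test attached to the PR):

```python
def memoize(func, cache, num_args):
    # Sketch of the old implementation: the cache key uses only
    # the first num_args positional arguments.
    def wrapper(*args):
        key = args[:num_args]
        if key not in cache:
            cache[key] = func(*args)
        return cache[key]
    return wrapper

def divide(a, b):
    return a / b  # raises ZeroDivisionError when b == 0

cached_divide = memoize(divide, {}, num_args=1)
cached_divide(8, 2)   # caches 4.0 under the key (8,)
cached_divide(8, 0)   # returns the stale 4.0 -- no ZeroDivisionError!
```

Calling `cached_divide(8, 0)` first would instead raise and leave the cache empty, which is exactly the order-dependence described above.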
comment:13 by , 11 years ago
Triage Stage: | Ready for checkin → Accepted |
---|
There are still some issues with the pull request:
- Shai raised concerns about performance. Your benchmark has no units, but I assume it's a "smaller is better" type, which means that `lru_cache` is less performant than the old `memoize`. Can you address that (and maybe also show the code of your benchmark)?
- The current pull request is not Python 3 compatible (because of the use of `xrange` in one of the tests).
- I don't think the test you added to show the issue of `num_args` belongs in this pull request. While I agree that there's probably a bug there, it should either be opened as a separate ticket or just documented, since we're deprecating `memoize` altogether.
Thanks
by , 11 years ago
Attachment: | memoize_problem_21351.diff added |
---|
memoize num_args implication for get_callable
by , 11 years ago
Attachment: | speedtest.py added |
---|
comment:14 by , 11 years ago
I have attached a rewritten speedtest, which should show the performance impact on 1M reads and writes. `lru_cache` is around 3 times slower on both, which should come as no surprise: `memoize` has a very simplistic cache key generator, as it merely copies positional arguments into a new list, while `lru_cache` creates a hash of both positional and keyword arguments. This overhead reduces performance. Nevertheless, looking at the performance difference of a single read (0.0000006s) and write (0.0000014s), the impact seems rather limited.
Python: 2.7.5 (default, Sep 5 2013, 23:32:22) [GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)]
+-----------+----------+-----------+
|           | memoize  | lru_cache |
+-----------+----------+-----------+
| 1M reads  | 0.34065s | 1.07396s  |
| 1M writes | 1.01192s | 2.71383s  |
+-----------+----------+-----------+

Python: 3.3.2 (default, Nov 6 2013, 08:59:42) [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)]
+-----------+----------+-----------+
|           | memoize  | lru_cache |
+-----------+----------+-----------+
| 1M reads  | 0.30876s | 0.87845s  |
| 1M writes | 0.99767s | 2.39997s  |
+-----------+----------+-----------+
The PR has been updated with the redundant tests removed and PY3 compatibility.
comment:15 by , 11 years ago
This results in around a 6-8% slowdown in the url_reverse djangobench benchmark. While not critical at all, avoiding it would be nice. URL reversing is already a bottleneck when producing a large JSON list where each element contains a reversed URL.
A similar slowdown shows up in the url_resolve benchmark, too.
comment:16 by , 11 years ago
I am OK with making this change; using the standard library is usually a good idea. `lru_cache` definitely handles some cases correctly that Django's `memoize` doesn't. If the slowdown is too big for any specific use case, maybe it would be better to use a custom-tailored cache there instead.
comment:17 by , 11 years ago
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
What's the rationale for raising exceptions with illegal arguments? Maybe it's the right thing to do, but I'm skeptical. Here's my research on the topic.