
Singleton Instances

This page describes the issues and proposals pertaining to the fact that Django's ORM will create as many in-memory instances of the same DB row as there are queries that load it.

Issues

Inconsistency

#5514 describes what I think is the main issue with the current behavior of the ORM: having multiple instances of the same row can result in silent data loss. Even worse, the actual result depends on whether or not one used select_related:

class Operand(models.Model):
    value = models.IntegerField()

class Operation(models.Model):
    arg = models.ForeignKey(Operand)

for operation in Operation.objects.all():
    operation.arg.value += 1
    operation.arg.save()

# So far so good, but let's make it tighter: use select_related() to save a DB query per iteration
for operation in Operation.objects.all().select_related():
    operation.arg.value += 1
    operation.arg.save()

Here we have a problem if two operations point to the same operand. The first version works correctly, but the second preloads all operands from the DB, and the same DB operand ends up as several operand instances in memory. So instead of accumulating the changes we modify and save the original operand every time, and every earlier save is silently overwritten. It's an easy way to shoot yourself in the foot.
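To make the failure concrete, here is a minimal sketch of what the second loop does, assuming two Operation rows whose arg FK points at the same Operand row (the data setup is hypothetical):

# Two operations sharing one operand, both preloaded by select_related().
first, second = list(Operation.objects.all().select_related())[:2]

assert first.arg.id == second.arg.id    # same row in the DB...
assert first.arg is not second.arg      # ...but two distinct Python objects

first.arg.value += 1
first.arg.save()                        # writes value + 1

second.arg.value += 1
second.arg.save()                       # also writes value + 1, silently overwriting the first save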

Models with self-referencing FKs will exhibit exactly the same issue:

class Directory(models.Model):
    name = models.CharField(maxlength=200)
    parent = models.ForeignKey('self', null=True)   # null for the root directory

for dir in Directory.objects.all():
    if condition(dir):
        dir.modify()
        dir.save()
        dir.parent.modify()
        dir.parent.save()

Now, what if condition returns True for both a directory and its parent? If the parent comes first in the main query, it is modified and saved, then reloaded as a fresh instance when its child comes up later, so both changes survive. If the child comes first, its parent is loaded, modified and saved, and later on the stale instance of that same row from the outer query is modified and saved, erasing the first change. FIXME: I'm not sure our proposal would actually fix this case; this needs to be investigated.
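The root cause is the same as above: the parent reached through dir.parent is never the same Python object as the parent's own row from the outer query. A hypothetical illustration, assuming at least one Directory whose parent also appears in the queryset:

dirs = list(Directory.objects.all())
child = [d for d in dirs if d.parent_id is not None][0]
parent_from_queryset = [d for d in dirs if d.id == child.parent_id][0]

# Accessing child.parent issues a fresh query and builds a new instance,
# so changes made to one object are invisible to the other until saved:
assert child.parent.id == parent_from_queryset.id
assert child.parent is not parent_from_queryset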

Performance

The original goal of #17 was to reduce memory use and query count by reusing the same instance instead of creating a new one every time. Here is a typical example:

class ArticleType(models.Model):
    name = models.CharField(maxlength=200)
    categories = models.CharField(maxlength=200)

class Article(models.Model):
    title = models.CharField(maxlength=200)
    type_of_article = models.ForeignKey(ArticleType)

    def __unicode__(self):
        return u"%s (%s)" % (self.title, self.type_of_article.name)

{% for article in Article.objects.all %}
{{article}}
{% endfor %}

If you have a great number of articles and a much smaller number of article types, the performance/memory hit is staggering (a rough measurement is sketched below) because:

  • you generate one DB query per article just to fetch its type
  • you instantiate each type once per article instead of once per type
  • you have as many ArticleType instances in memory as there are articles
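A rough way to observe this, assuming DEBUG is on so that connection.queries records every statement (the numbers are illustrative, not measured):

from django.db import connection, reset_queries

reset_queries()
articles = list(Article.objects.all())
rendered = [unicode(a) for a in articles]   # each __unicode__ touches a.type_of_article

print len(connection.queries)                              # roughly 1 + one query per article
print len(set(id(a.type_of_article) for a in articles))    # == len(articles), not the number of types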

Proposals

Overview

The basic idea is simply to reuse existing instances. This works by keeping track of instantiated objects on a per-model basis: whenever we are about to instantiate a model, we check whether an instance with that primary key is already in memory and reuse it if so. This would solve both issues mentioned above.

Please note that the proposal is absolutely NOT a caching system: whenever an instance goes out of scope it is discarded and will have to be reloaded from the DB next time.
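A conceptual sketch of the mechanism, with hypothetical names (this is not the actual patch from #17):

import weakref

class SingletonRegistry(object):
    # Per-model identity map backed by weak references: an instance is
    # dropped as soon as the last ordinary reference to it goes away,
    # which is why this is not a caching system.

    def __init__(self):
        self._instances = weakref.WeakValueDictionary()   # pk_val -> instance

    def get_or_add(self, pk_val, factory):
        try:
            return self._instances[pk_val]    # reuse the live instance
        except KeyError:
            obj = factory()                   # nothing live: build a new one
            self._instances[pk_val] = obj
            return obj

Each model with the feature enabled would own one such registry, and the ORM would route instance construction through get_or_add.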

This feature would be turned off by default and would only operate for models that have it explicitly enabled. Good candidates are models like the ArticleType above, or models with self references.

Threads

No sharing would occur between threads, as is the case currently. Instances would be unique within each thread.
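For illustration, the per-model registries could simply live in thread-local storage (reusing the hypothetical SingletonRegistry from the sketch above):

import threading

_local = threading.local()

def registry_for(model):
    # One dict of registries per thread, so instances are never shared across threads.
    registries = getattr(_local, 'registries', None)
    if registries is None:
        registries = _local.registries = {}
    return registries.setdefault(model, SingletonRegistry())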

API

The public API for this feature would be very simple: a single optional attribute in the Meta class:

class ArticleType(models.Model):
    name = models.CharField(maxlength=200)
    categories = models.CharField(maxlength=200)

    class Meta:
        singleton_instances = True

The default value for the option is False, which means that the feature has to be explicitly enabled.
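To illustrate the intended semantics (this is what the proposal aims for, not something any current release guarantees):

a = ArticleType.objects.get(id=1)
b = ArticleType.objects.get(id=1)
assert a is b       # the same Python object, not merely two equal rows

del a, b            # once nothing references the instance any more it is
                    # discarded, and the next get() hits the DB again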

Implementation

#17 currently has a working patch with the following:

  • a per-model WeakValueDictionary with dict[pk_val] = instance.
  • a __call__ classmethod which either gets an instance from the dict or returns a newly created one.
  • a simple interface for enabling or disabling the feature for a given model (default is disabled).
  • a slightly modified related manager which first tries to get the instance from the model's dict instead of blindly querying the DB.
  • slightly modified serializers to force instantiation when reading from an outside source.
  • a slightly modified query to force flushing instances when they are deleted from the DB.
  • tests!

but some things are missing:

  • there is no documentation on how to enable/disable this feature and what to expect from it.
  • the API for enabling/disabling the feature is very crude and consists only of two class methods. It should at the very least be possible to set the value as a member of the Meta class.
  • the model dict should be stored thread-locally.
  • it should be possible to force instantiation, because the serializers want that (solved by the next item).
  • the internals of the patch should be changed to be less magic: overriding __call__ is a neat trick, but I'd rather have something like Model.get_singleton(pk, kwargs=None) and use it where needed (the places are few!). If the user writes Model(id=something, **kwargs) he should get a fresh instance; if he wanted the DB one he would have used get() or something like that.
  • for the sake of completeness, it should be possible to call Model.objects.get_fresh_instance(id=something) to bypass the singleton (a rough sketch of these two entry points follows this list).
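A rough sketch of how the get_singleton entry point might look, reusing the hypothetical registry_for helper from the Threads section (the body is illustrative only, not the actual patch):

class SingletonMixin(object):
    # Hypothetical mixin providing the proposed entry point.

    @classmethod
    def get_singleton(cls, pk, kwargs=None):
        # Reuse the live instance for pk if there is one; otherwise build it
        # (from kwargs when given, from the DB otherwise) and register it.
        if kwargs is not None:
            factory = lambda: cls(id=pk, **kwargs)    # assumes the default AutoField pk
        else:
            factory = lambda: cls.objects.get(id=pk)
        return registry_for(cls).get_or_add(pk, factory)

get_fresh_instance would be the manager-level counterpart: it deliberately skips registry_for(cls) and always builds a new instance from the query result.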