
Profiling Django

Profiling Specific Code

Python has several profiling modules, the least worst of which is the apparently unmaintained hotshot. It's better suited to our purposes than the standard profile module because it has simple start and stop methods, as well as a runcall method that takes a callable and its arguments.
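
For reference, the explicit start/stop API looks roughly like this (a quick sketch; do_something_slow is just a placeholder for your own code):

import hotshot

prof = hotshot.Profile("explicit.prof")   # file the profile data is written to
prof.start()
do_something_slow()                       # stand-in for the code you want to measure
prof.stop()
prof.close()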

I wrap views and other callables I want to examine in a decorator like this one:

import hotshot
import os
import time
from django.conf import settings
import tempfile

try:
    PROFILE_LOG_BASE = settings.PROFILE_LOG_BASE
except AttributeError:
    PROFILE_LOG_BASE = tempfile.gettempdir()


def profile(log_file):
    """Profile some callable.

    This decorator uses the hotshot profiler to profile some callable (like
    a view function or method) and dumps the profile data somewhere sensible
    for later processing and examination.

    It takes one argument, the profile log name. If it's a relative path, it
    places it under the PROFILE_LOG_BASE. It also inserts a time stamp into the 
    file name, such that 'my_view.prof' becomes 'my_view-20100211T170321.prof', 
    where the time stamp is in UTC. This makes it easy to run and compare 
    multiple trials.     
    """

    if not os.path.isabs(log_file):
        log_file = os.path.join(PROFILE_LOG_BASE, log_file)

    def _outer(f):
        def _inner(*args, **kwargs):
            # Add a timestamp to the profile output when the callable
            # is actually called.
            (base, ext) = os.path.splitext(log_file)
            base = base + "-" + time.strftime("%Y%m%dT%H%M%S", time.gmtime())
            final_log_file = base + ext

            prof = hotshot.Profile(final_log_file)
            try:
                ret = prof.runcall(f, *args, **kwargs)
            finally:
                prof.close()
            return ret

        return _inner
    return _outer

Set PROFILE_LOG_BASE in your settings.py to a base path for storing the profile logs, then use the decorator like this:

@profile("my_view.prof")
def myview(request):
    ...

This produces files of the form /path/to/logs/my_view-20100211T170321.prof, so you can test your view under various input conditions and compare the results.

To actually examine the results, you'll need the hotshot.stats module. A simple script like this does the job:

#!/usr/bin/python

import hotshot.stats
import sys

# Load the profile log named on the command line and print the twenty
# most expensive entries, sorted by internal time and call count.
stats = hotshot.stats.load(sys.argv[1])
#stats.strip_dirs()
stats.sort_stats('time', 'calls')
stats.print_stats(20)

(stats.strip_dirs() results in a more compact output, but I prefer to know which __init__.py I'm dealing with.)
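
Assuming you save that script as something like profile_stats.py (the name is arbitrary), point it at one of the generated logs:

$ python profile_stats.py /path/to/logs/my_view-20100211T170321.prof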

Here's some output from a profile of an authentication module I was having trouble with:

         7986 function calls (7850 primitive calls) in 1.725 CPU seconds

   Ordered by: internal time, call count
   List reduced from 392 to 20 due to restriction <20>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        2    1.570    0.785    1.570    0.785 /usr/local/lib/python2.5/django/db/backends/__init__.py:36(_commit)
       15    0.043    0.003    0.043    0.003 /usr/local/lib/python2.5/linecache.py:68(updatecache)
        1    0.020    0.020    0.027    0.027 /usr/local/lib/python2.5/django/contrib/auth/models.py:1(<module>)
       12    0.014    0.001    0.030    0.002 /usr/local/lib/python2.5/django/utils/importlib.py:18(import_module)
     1013    0.010    0.000    0.010    0.000 /usr/local/lib/python2.5/posixpath.py:56(join)

This shows that the culprit is the database commit code; while my queries were fast, actually committing the results to a SQLite database was causing me grief.

You can use gather_profile_stats.py in the Django distribution to aggregate the generated profile logs and open them with pstats.Stats.
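
Something along these lines, assuming the script (found under django/bin/profiling/ in older releases) takes the directory of logs as its argument and writes aggregated .agg.prof files next to them; check your copy of Django for the exact interface:

$ python /path/to/django/bin/profiling/gather_profile_stats.py /path/to/logs/
$ python
>>> import pstats
>>> stats = pstats.Stats("/path/to/logs/my_view.agg.prof")   # aggregated file; exact name may differ
>>> stats.sort_stats("time").print_stats(20)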

You can also examine the results with KCacheGrind by using its hotshot2calltree conversion script.
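
For example (hotshot2calltree ships with KCacheGrind, sometimes packaged separately as kcachegrind-converters, so the exact name on your system may differ):

$ hotshot2calltree -o callgrind.out.my_view /path/to/logs/my_view-20100211T170321.prof
$ kcachegrind callgrind.out.my_view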

Profiling Everything

Middleware

Profiling Middleware and Extended Profiling Middleware print profiling results per method; the extended version additionally groups the results by file and by module. Add the prof query parameter (e.g. ?prof, or &prof= on a URL that already has a query string) and you'll see the profiling results in your browser. Unfortunately, both are Django Snippets entries, so you should treat them as untested abandonware or, more charitably, as example code.

django-profiling seems to be another variant of the profiling middleware, which declares itself superseded by django-logging, which declares itself unmaintained.
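
If you'd rather not depend on those snippets, a minimal sketch of the same idea is easy to write yourself. The sketch below uses cProfile rather than hotshot and assumes the old-style (Python 2 era) middleware API; the class name and the 20-line limit are arbitrary choices:

# A minimal sketch, not the Django Snippets code: profile the view with
# cProfile when 'prof' appears in the query string and return the stats
# instead of the normal response.
import cProfile
import pstats
from cStringIO import StringIO

from django.http import HttpResponse


class ProfileMiddleware(object):

    def process_view(self, request, view_func, view_args, view_kwargs):
        if 'prof' not in request.GET:
            return None  # let Django call the view normally
        profiler = cProfile.Profile()
        # Run the view under the profiler; the real response is discarded
        # in favour of the profiling output.
        profiler.runcall(view_func, request, *view_args, **view_kwargs)
        out = StringIO()
        stats = pstats.Stats(profiler, stream=out)
        stats.sort_stats('time', 'calls')
        stats.print_stats(20)
        return HttpResponse('<pre>%s</pre>' % out.getvalue())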

Profiling at the server

mod_python

django.core.handlers.profiler-hotshot provides profiling support for Django when using mod_python. TODO: describe how to use this.

Development Server

django-extensions provides a runprofileserver command that runs the development server under the hotshot or profile/cProfile libraries and can produce output compatible with KCacheGrind.
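
Invocation looks something like this (the option names below are taken from the django-extensions documentation; check the version you have installed):

$ python manage.py runprofileserver --kcachegrind --prof-path=/tmp/profile-data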

A similar solution for the internal development webserver: apply the patch attached to this page to django/core/management.py and start Django with manage.py runserver. Each request will then create a .prof file in your /tmp directory.

WSGI

An alternative to the built-in development server is to run Django from a trivial WSGI server and profile that. For example, using wsgiref:

# myserver.py -- serve the Django WSGI handler with wsgiref so the whole
# process can be profiled like any other Python script.
from wsgiref.simple_server import make_server
from django.core.handlers.wsgi import WSGIHandler

httpd = make_server('', 8000, WSGIHandler())
httpd.serve_forever()

Run this using:

$ DJANGO_SETTINGS_MODULE=mysite.settings python myserver.py 

To profile it, run it under cProfile (or profile):

$ DJANGO_SETTINGS_MODULE=mysite.settings python -m cProfile myserver.py 

The default invocation prints the profile stats when you kill the server; it's usually more useful to write them to a file and read it with pstats:

$ DJANGO_SETTINGS_MODULE=mysite.settings python -m cProfile -o mysite.profile myserver.py 
... lots of requests ...

$ python
>>> import pstats
>>> s = pstats.Stats("mysite.profile")
>>> s.sort_stats("time").print_stats(20)
