|Version 17 (modified by ubernostrum, 8 years ago)|
Because Django uses WSGI, it can run on any WSGI-compatible Web server. Here's how to run Django on various server arrangements.
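The WSGI interface Django speaks is deliberately small: an application is a callable that takes an `environ` dict and a `start_response` callback, and returns an iterable of byte strings. A minimal sketch (the name `application` and the response text are illustrative, not anything Django ships):

```python
def application(environ, start_response):
    # environ carries CGI-style request variables
    # (PATH_INFO, QUERY_STRING, REQUEST_METHOD, ...).
    status = "200 OK"
    headers = [("Content-Type", "text/plain")]
    start_response(status, headers)
    # The body is returned as an iterable of byte strings.
    return [b"Hello from a WSGI app\n"]
```

Any WSGI-compatible server -- Apache via FCGI, lighttpd, SCGI gateways, and so on -- can host a callable like this; Django exposes the whole framework behind one just like it, which is why it runs on all of these arrangements.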
See the official documentation.
Apache and FCGI
Flup (the FCGI wrapper used for this) has a small bug under Python 2.3 that makes larger pages stall just before the end of the response. Make sure you use the newest Subversion checkout of the flup library, as the bug is fixed there. See DjangoUsingFlup for more.
Mac OS X users: see Django with FCGI on OS X -- how to get Django up and running with FCGI and the default Apache 1.3 on Mac OS X.
lighttpd (via FCGI)
See Hugo's excellent tutorials.
Also see ticket #152.
Django/lighttpd/FastCGI instructions for use on TextDrive shared hosting accounts are available here.
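As a rough sketch of what the lighttpd side of such a setup looks like (the URL prefix, socket path, and rewrite rule below are placeholders to adapt for your own site), the fastcgi module is pointed at the running Django FCGI process:

```
server.modules += ( "mod_fastcgi", "mod_rewrite" )

fastcgi.server = (
    "/mysite.fcgi" => (
        "main" => (
            # The Unix socket your Django FCGI process listens on
            "socket" => "/home/user/mysite.sock",
            "check-local" => "disable",
        )
    ),
)

# Route everything through the FCGI dispatcher
url.rewrite-once = (
    "^(/.*)$" => "/mysite.fcgi$1",
)
```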
See ticket #172.
TooFPy is a pure-Python web server with a focus on the creation of web services. See the description in the Trac wiki for how to combine it with Django.
Apache and SCGI
Django behind/inside Zope
This isn't really a server arrangement per se, but I've found a way to query a Django site from Zope or Plone and return the result. This lets you embed a Django site inside a pre-existing Zope/Plone site -- good for custom content that you don't want to develop with existing Zope technologies. For the code, see this partly documented file. It's in a temporary location for the time being; when I get a blog set up, I plan to complete the explanation and post it. For more info, drop an email to jeff (at) bitprophet (dot) org.
Running Django as a traditional CGI is possible and would work the same as running any other sort of Python CGI script, but is generally not recommended.
With traditional CGI, the program which will be run -- in this case, Django plus a Django-powered application -- is loaded from disk into memory each time a request is served, which results in a significant amount of processing overhead and much slower responses. FastCGI and SCGI, in contrast, load the code only once -- when the server starts up -- and keep it in memory as long as the server is running, resulting in much faster responses.
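The difference can be sketched with a toy model (nothing here is real server code; `load_application` stands in for importing Django plus a project's settings, which is the expensive step CGI repeats on every hit):

```python
# Toy model of the two process models -- not real server code.
load_count = {"cgi": 0, "fastcgi": 0}

def load_application(model):
    load_count[model] += 1  # count each "load from disk into memory"
    def app(request):
        return "response to " + request
    return app

def serve_cgi(requests):
    # Traditional CGI: a fresh process loads the code for every request.
    return [load_application("cgi")(r) for r in requests]

def serve_fastcgi(requests):
    # FastCGI/SCGI: the code is loaded once, at server start-up,
    # and the long-lived process reuses it for every request.
    app = load_application("fastcgi")
    return [app(r) for r in requests]

serve_cgi(["a", "b", "c"])
serve_fastcgi(["a", "b", "c"])
print(load_count)  # → {'cgi': 3, 'fastcgi': 1}
```

Three requests cost three application loads under CGI but only one under FastCGI, which is the whole performance argument in miniature.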