Opened 6 years ago

Last modified 10 months ago

#29800 new Bug

Django hangs when Content-Length has incorrect value

Reported by: Alexander Charykov Owned by:
Component: HTTP handling Version: 2.1
Severity: Normal Keywords:
Cc: Herbert Fortes, Sam Kuffer, Natalia Bidart Triage Stage: Accepted
Has patch: no Needs documentation: no
Needs tests: no Patch needs improvement: no
Easy pickings: no UI/UX: no

Description (last modified by Manan)

When an incorrect Content-Length header is sent to the application, Django hangs on reading request.body.

Steps to reproduce:

curl -v -X POST -H "Accept: */*" -H "Content-Length: 22" -d "1" http://localhost:8000/

with the following handler (@csrf_exempt bypasses the default 403 CSRF error page):

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt
def index(request):
    return HttpResponse(content=request.body)

Attachments (1)

doc_update.diff (1.0 KB ) - added by Sam Kuffer 5 years ago.


Change History (16)

comment:1 by Alexander Charykov, 6 years ago

Summary: Django hangs when Content-Length hangs → Django hangs when Content-Length has incorrect value

comment:2 by Tim Graham, 6 years ago

Component: Uncategorized → HTTP handling
Triage Stage: Unreviewed → Accepted

I can reproduce this, although I had to add @csrf_exempt to the view to bypass the CSRF error page.

comment:3 by Herbert Fortes, 6 years ago

Cc: Herbert Fortes added

comment:4 by Patrik Sletmo, 6 years ago

Owner: changed from nobody to Patrik Sletmo
Status: new → assigned

comment:5 by Tim Graham, 6 years ago

The security team discussed this issue to confirm that we shouldn't treat it as a security issue.

From Markus Holtermann:

The issue can also be reproduced with:

$ gunicorn foo.wsgi --access-logfile - --worker-connections 5

From Simon Charette:

I cannot reproduce when Django is behind a reverse proxy such as NGINX with proxy_request_buffering on, which is the default.

Given that the development server should never be used in production, and that gunicorn's documentation strongly recommends running it behind a proxy server, this doesn't sound severe to me. We should add a timeout on body reads to the development server, but it's unlikely to be an attack vector if you already follow best practice.

I mean, both Django's development server and gunicorn have been known to be vulnerable to Slowloris attacks and friends for a while at this point, and it feels to me that serving Django apps behind a reverse proxy is the norm nowadays.
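The mitigation Simon sketches above ("a timeout on body read") could look roughly like the following. This is a hypothetical helper, not Django code: it caps a blocking socket read with a timeout so a client that declares a Content-Length larger than what it sends cannot hang the worker forever.

```python
import socket

# Hypothetical sketch of a guarded body read: stop waiting on a
# stalled or short-sending client instead of blocking indefinitely.
def read_body(sock, length, timeout=5.0):
    sock.settimeout(timeout)  # recv() raises socket.timeout if the peer stalls
    chunks, remaining = [], length
    try:
        while remaining > 0:
            chunk = sock.recv(min(remaining, 65536))
            if not chunk:
                # Peer closed before `length` bytes arrived, i.e. the
                # declared Content-Length was larger than the real body.
                break
            chunks.append(chunk)
            remaining -= len(chunk)
    except socket.timeout:
        pass  # give up on the slow client; return what we have
    return b"".join(chunks)
```

With the ticket's repro (Content-Length: 22, body "1"), this returns the single byte once the client closes, or after the timeout, rather than hanging.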

From Collin Anderson:

I think this has come up before (or maybe it's a related issue):

From Claude Paroz:

Might be worth reaching out to Graham Dumpleton about this issue. The WSGI spec talks about a CONTENT_LENGTH shorter than the payload, but I'm not sure it addresses the reverse issue.

comment:6 by Patrik Sletmo, 6 years ago

I skimmed through the WSGI specification; it describes how to handle a Content-Length header shorter than the payload, but as Claude Paroz guessed, it gives no instructions for handling the opposite case.
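For the case the spec does cover (Content-Length shorter than the payload), the usual server-side handling is a wrapper that never reads past the declared length, similar in spirit to Django's LimitedStream. A minimal illustrative sketch (names hypothetical) of that pattern:

```python
# Hypothetical sketch: cap reads at the declared Content-Length so the
# application can never over-read into the next request on the socket.
# The inflated-Content-Length case (this ticket) is the one the WSGI
# spec leaves open, and is NOT solved by this wrapper.
class LimitedStream:
    def __init__(self, stream, limit):
        self.stream, self.remaining = stream, limit

    def read(self, size=-1):
        if self.remaining <= 0:
            return b""
        if size < 0 or size > self.remaining:
            size = self.remaining
        data = self.stream.read(size)
        self.remaining -= len(data)
        return data
```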

As the recommended way of running Django in production is behind a reverse proxy, I don't think any fix should be implemented that attempts to mitigate the effect of a Slowloris attack. I do, however, think that from a usability standpoint it could make sense not to hang the entire application when a request like the one above is received.

Would there be any practical implications that could weigh against using a timeout period for receiving the data? If nginx is used as a reverse proxy with the default proxy_request_buffering option set, the timeout would only limit the time required for transmitting the request from the reverse proxy to Django, which really shouldn't take any substantial amount of time. I suppose that implementing a timeout option in Django could in some situations create the need to configure a timeout in both the reverse proxy and in Django, but I think these cases are rare.

comment:7 by Patrik Sletmo, 6 years ago

I've been thinking some more on how to handle this issue and right now I'm considering the possibility of not fixing it at all. The two alternatives would in that case be the following:

  1. Mark the issue as "wontfix".

  2. Document the issue so that users are aware of its existence, and refer to the use of a reverse proxy. My suggestion is that it could be added here somehow

The reasoning behind not providing any fix beyond what has been suggested above is simple. The types of HTTP requests that trigger the hang are typically crafted only by a malicious party intending to trigger this sort of bug. I cannot imagine any case in which the behaviour documented in this issue would come as a surprise to anyone performing this request against their development server. As this bug is almost never triggered unintentionally during development, and is also protected against when deployed properly in production, it seems strange to introduce additional functionality that must be maintained. I would say that an introduced timeout comes with more complications than it solves.

Do you have any opinion on the above mentioned proposals, Alexander Charykov and Tim Graham?

comment:8 by Patrik Sletmo, 6 years ago

Owner: Patrik Sletmo removed
Status: assigned → new

Since I lack the required insight into the Django project and thus cannot decide on my own which solution would be best, I've chosen to unassign myself from this issue. I hope that my reasoning above may prove useful in the future.

comment:9 by Jannik Schürg, 6 years ago

I think this is what it boils down to:

from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Does not time out for a malformed 'Content-Length': this read()
    # blocks waiting for bytes the client never sends.
    length = int(environ.get('CONTENT_LENGTH') or 0)
    body = environ['wsgi.input'].read(length)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]

make_server('', 8000, app).serve_forever()

in reply to:  7 comment:10 by Zach Garwood, 6 years ago

Replying to Patrik Sletmo:

  2. Document the issue so that users are aware of its existence, and refer to the use of a reverse proxy. My suggestion is that it could be added here somehow

I think this is the best course of action.

comment:11 by Manan, 6 years ago

Description: modified (diff)

Added a csrf_exempt to bypass the default 403 CSRF error page

by Sam Kuffer, 5 years ago

Attachment: doc_update.diff added

comment:12 by Sam Kuffer, 5 years ago

Cc: Sam Kuffer added

comment:13 by Adam Zapletal, 20 months ago

I confirmed that this is still an issue and would be happy to open a pull request with the proposed docs patch above if there's interest in it.

comment:14 by Mariusz Felisiak, 16 months ago

Cc: Natalia Bidart added

comment:15 by Aditya Dhara, 10 months ago

Hi folks, I encountered a similar bug, and since this thread was the only place I could find information on it, I thought I'd share what I found after working around the issue.

TL;DR: I think this is because of the way gunicorn uses the Content-Length header.

My service is a Flask + gunicorn service in a Kubernetes cluster. We have distributed tracing enabled at the Flask layer, and when I include a Content-Length header, I saw traces start and end at the load-balancer layer with a 400 status code after a 1s timeout, but no traces start for the Flask application. Between the load balancer and my Flask app, I had a few places to look: the Kubernetes networking, the Docker runtime, Python 3.10, and gunicorn. Naturally my biggest suspect was gunicorn.

Gunicorn chooses different "readers" based on whether a Content-Length header is set, the Transfer-Encoding header has the value chunked, or neither:

If Content-Length is set, a LengthReader is used, which loops until the declared length is reached:

This is a feature of gunicorn; I think it relies on the worker timing out when the reader doesn't receive enough bytes.
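The reader behaviour described above can be sketched roughly like this (a hypothetical simplification, not gunicorn's actual code): the loop keeps calling recv() until the declared length has arrived, so an inflated Content-Length leaves it blocked waiting for bytes that never come, until the worker timeout fires.

```python
# Hypothetical sketch of a length-based reader like gunicorn's
# LengthReader: it trusts the declared Content-Length and loops
# until that many bytes arrive (or the peer closes the connection).
class LengthReader:
    def __init__(self, sock, length):
        self.sock, self.remaining = sock, length

    def read(self):
        chunks = []
        while self.remaining > 0:
            # Blocks here when the client sent fewer bytes than declared
            # and keeps the connection open.
            data = self.sock.recv(min(self.remaining, 8192))
            if not data:
                break  # peer closed early
            chunks.append(data)
            self.remaining -= len(data)
        return b"".join(chunks)
```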

I cannot reproduce when Django is behind a reverse proxy such as NGINX with proxy_request_buffering on which is the default.

I believe this might be happening because proxy_request_buffering adds a correct Content-Length header after buffering the request on the nginx side? I'm not sure I have the tools to verify this, so I would appreciate it if someone could confirm.
