#33735 closed New feature (fixed)
Add asynchronous responses for use with an ASGI server
Reported by: | florianvazelle | Owned by: | Carlton Gibson |
---|---|---|---|
Component: | HTTP handling | Version: | 4.0 |
Severity: | Normal | Keywords: | ASGI async |
Cc: | Carlton Gibson, Andrew Godwin, Michael Brown, Jon Janzen, Carles Pina Estany | Triage Stage: | Ready for checkin |
Has patch: | yes | Needs documentation: | no |
Needs tests: | no | Patch needs improvement: | no |
Easy pickings: | no | UI/UX: | no |
Description (last modified by )
This ticket follows the one I opened on django/daphne#413.
Initial Issue
Using Daphne as a server for a Django application seems to cause a malfunction in streaming responses.
When you pass an iterator into a response, the contents of the response are not retrieved chunk by chunk, but in a single block once the iterator has finished iterating.
I have a minimal project that can reproduce the behavior, at florianvazelle/minimal-daphne-stream.
Solution
After some research: in Daphne, Twisted is used with asyncioreactor. I managed to reproduce the bug in a minimal way: Minimal example of an "ASGI-like" streaming server with Twisted.
The streaming issue occurs when we call a blocking method in the iterator (I/O, sleep, a request, ...), which blocks the reactor. That's why the response is not written out progressively.
The reactor does not regain control of execution until the end of the iteration.
To correct this behavior we need to add an asynchronous layer and use async/non-blocking alternatives.
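The blocking behaviour can be demonstrated outside Django/Daphne with plain asyncio. This is a hypothetical stand-alone sketch (not Django or Daphne code): the `ticker` task stands in for the event-loop work the server needs to do to flush each chunk to the client.

```python
import asyncio
import time

async def run(drain):
    # A "ticker" task stands in for the server's event-loop work
    # (e.g. writing each chunk out to the client).
    ticks = []

    async def ticker():
        while True:
            ticks.append(time.monotonic())
            await asyncio.sleep(0.005)

    task = asyncio.ensure_future(ticker())
    await asyncio.sleep(0)          # let the ticker run once
    chunks = await drain()
    task.cancel()
    return len(chunks), len(ticks)

async def drain_blocking():
    # Sync generator with blocking sleeps: the reactor/event loop
    # never regains control until the whole iteration is done.
    def gen():
        for _ in range(3):
            time.sleep(0.02)
            yield b'a' * 10000
    return list(gen())

async def drain_async():
    # Async generator: each await hands control back to the loop,
    # so other tasks can run between chunks.
    async def agen():
        for _ in range(3):
            await asyncio.sleep(0.02)
            yield b'a' * 10000
    return [chunk async for chunk in agen()]

_, blocking_ticks = asyncio.run(run(drain_blocking))
_, async_ticks = asyncio.run(run(drain_async))
# blocking_ticks stays at 1: the ticker never ran while the sync
# generator was being drained; async_ticks keeps growing.
```

With the sync generator the loop is starved for the whole iteration, which is exactly why chunks only arrive at the end under an asyncio-based server.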
Proposition
In my minimal project, the view will become:
```python
import asyncio

from django.http.response import StreamingHttpResponse


async def iterable_content():
    for _ in range(5):
        await asyncio.sleep(1)
        print('Returning chunk')
        yield b'a' * 10000


def test_stream_view(request):
    return StreamingHttpResponse(iterable_content())
```
But Django does not handle asynchronous generators.
Some work was already done in ticket #32798 (StreamingHttpResponse Raises SynchronousOnlyOperation in ASGI Server), but considering performance, we could manage it differently here.
I propose adding asynchronous responses, to be used in an async context, i.e. with an ASGI server.
I made a PoC, in a very basic way: PoC Fixed #33735 -- Handle async streaming response.
Change History (23)
comment:1 by , 3 years ago
Description: | modified (diff) |
---|
comment:2 by , 3 years ago
Cc: | added |
---|
comment:3 by , 3 years ago
Description: | modified (diff) |
---|
comment:4 by , 3 years ago
Triage Stage: | Unreviewed → Accepted |
---|---|
Type: | Uncategorized → New feature |
comment:5 by , 3 years ago
Has patch: | set |
---|
comment:6 by , 3 years ago
Owner: | changed from | to
---|---|
Status: | new → assigned |
comment:7 by , 2 years ago
Patch needs improvement: | set |
---|
follow-up: 14 comment:8 by , 2 years ago
Cc: | added |
---|
Hi Florian — Thanks again for this.
I finally got the time to sit with the PR and that for #32798 to try and think through the right way forward.
I'm commenting here, rather than the PR, because I think we need a slightly different approach.
Also cc-ing Andrew and Michael to ask for their input too.
So what follows is an RFC...
Looking at the PR here, and that for #32798 (either the complex first suggestion, or the simpler but slower alternative), I think we should consciously opt out of trying to map between sync and async generators. Maybe somebody writes a queue implementation that is sync at one end and async at the other, and handles thread-safety, and back-pressure, and who knows what. Maybe we could use that in Django if we felt it trustworthy enough. But I don't think we should try and write one ourselves.
Rather, I think we should say that, in the case of streaming responses, users need to adjust their code for whether they're planning to run under WSGI or ASGI. (We will handle the other case both ways, but at a performance and memory cost.)
To wit:
We add `__aiter__` to `StreamingHttpResponse` and adapt `ASGIHandler` to use `async for` in the `if response.streaming` block. Under ASGI you would provide `StreamingHttpResponse` with an async iterator, still yielding bytestrings as content. This should allow the streaming async responses use-case.
Under WSGI you'd continue to provide a sync iterator and everything would work as it already does.
For each case, `__iter__` and `__aiter__`, we handle the wrong case by consuming the iterator in a single step (using the `asgiref.sync` utilities, with `thread_sensitive` for `sync_to_async`) and then yielding it out in parts as expected. (This is the performance and memory cost.) We log a warning (which may be filtered in the logging config if not desired) saying that folks should use an appropriate generator type in this case.
Note that this is similar to the `__aiter__` implementation on QuerySet, which fetches all the records at once inside a single `sync_to_async()` call to avoid multiple (costly) context switches there:

```python
def __aiter__(self):
    # Remember, __aiter__ itself is synchronous, it's the thing it
    # returns that is async!
    async def generator():
        await sync_to_async(self._fetch_all)()
        for item in self._result_cache:
            yield item

    return generator()
```
I think this should be both good enough and maintainable. In the happy case, where you provide the right kind of iterator for your chosen protocol (ASGI/WSGI), you get the result you want, without us needing anything overly complex. In the deviant case it'll still work (assuming you're not trying to stream Too much™). It means we're not branching in the handlers to handle each of the different possible iterator types. The abstraction leaks a little bit, but I think that's reasonable at this point.
Maybe: If it proves impossible to get the WSGI+async/ASGI+sync fallback to work correctly, just erroring at that point would maybe be acceptable. 🤔 (That you could provide an async iterator to ASGIHandler would allow long-lived responses, that aren't currently feasible.)
Thoughts? Thanks!
Assuming all that makes sense, would you (or Michael maybe) want to take this on?
If not I can pick it up, since it would be good to hit Django 4.2 (feature freeze Jan 23) for this. 🎯
comment:9 by , 2 years ago
Hmm, I think that makes sense, as long as we put appropriately large warnings in the docs around it. It still works, it's simpler, and if someone feels brave enough to try and map between iterator types in an efficient fashion, they can come around and try that work separately later on.
I can probably have a crack at doing it too, if nobody else wants to rush in and pick it up :)
comment:10 by , 2 years ago
Cc: | added |
---|
comment:11 by , 2 years ago
Has patch: | unset |
---|---|
Owner: | removed |
Patch needs improvement: | unset |
Status: | assigned → new |
Hey Florian. Thinking about the 4.2 feature freeze after the new year, I'm going to deassign you so others can pick up if they're interested. (Of course that other may be you, if you still have bandwidth :) Thanks.
comment:13 by , 2 years ago
Cc: | added |
---|
comment:14 by , 2 years ago
Replying to Carlton Gibson:
Assuming all that makes sense, would you (or Michael maybe) want to take this on?
If not I can pick it up, since it would be good to hit Django 4.2 (feature freeze Jan '23) for this. 🎯
We encountered exactly this problem in an application. I don't think that I will have enough bandwidth to prepare a PR (if I do, well, you will see it :-) ) but for sure I will test and comment on any code changes around here (I've spent quite some time looking at this code over the last two days and have an easy-to-reproduce test, a real case, with Daphne and without, etc.).
comment:15 by , 2 years ago
Owner: | set to |
---|---|
Status: | new → assigned |
comment:16 by , 2 years ago
Has patch: | set |
---|
I've opened an initial PR here, outlining the approach from comment:8.
For the basic example, streaming an async generator, it works well, with both Daphne and Uvicorn.
Still need to think about `streaming_content` (for wrapping by middleware) and `aclosing` usage, but I wanted to open sooner to enable feedback.
(`python -X dev ...` FTW :)
All eyes welcome. 🙏
comment:17 by , 2 years ago
Patch needs improvement: | set |
---|
comment:19 by , 2 years ago
Triage Stage: | Accepted → Ready for checkin |
---|
Accepting for review.