Opened 23 months ago

Closed 15 months ago

Last modified 14 months ago

#33735 closed New feature (fixed)

Add asynchronous responses for use with an ASGI server

Reported by: florianvazelle Owned by: Carlton Gibson
Component: HTTP handling Version: 4.0
Severity: Normal Keywords: ASGI async
Cc: Carlton Gibson, Andrew Godwin, Michael Brown, Jon Janzen, Carles Pina Estany Triage Stage: Ready for checkin
Has patch: yes Needs documentation: no
Needs tests: no Patch needs improvement: no
Easy pickings: no UI/UX: no

Description (last modified by florianvazelle)

This ticket follows the one I opened on django/daphne#413.

Initial Issue

Using Daphne as a server for a Django application seems to cause a malfunction in streaming responses.
When you pass an iterator into a response, the contents of the response are not retrieved chunk by chunk, but as a single block once the iterator has finished iterating.
I have a minimal project that can reproduce the behavior, at florianvazelle/minimal-daphne-stream.

Solution

After some research: Daphne uses Twisted with its asyncioreactor. I managed to reproduce the bug in a minimal way: Minimal example of an "ASGI-like" streaming server with Twisted.
The streaming issue occurs when we call a blocking method (I/O, sleep, a request, ...) in an iterator, which blocks the reactor. That's why the result is not delivered progressively:
the reactor does not get control of execution back until the iteration has finished.
To correct this behavior, we need to add an asynchronous layer and use async / non-blocking alternatives.
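To see why a blocking iterator starves the event loop, here is a small illustrative sketch (plain asyncio, not Daphne or Twisted code): the loop can only make progress while the running coroutine is suspended at an `await`, so a producer that calls `time.sleep()` would freeze everything, while one that awaits lets other tasks interleave.

```python
import asyncio

# Illustrative sketch (not Daphne code): the event loop can only make
# progress while the running coroutine is suspended at an `await`. A
# sync generator calling time.sleep() never suspends, so the whole
# loop -- like Twisted's reactor -- stalls until iteration finishes.

async def cooperative_chunks():
    for i in range(3):
        await asyncio.sleep(0.01)  # suspends, letting other tasks run
        yield i

async def main():
    ticks = []

    async def ticker():
        for _ in range(3):
            await asyncio.sleep(0.005)
            ticks.append("tick")

    task = asyncio.create_task(ticker())
    # Because the producer awaits between chunks, the ticker interleaves.
    # Had it called time.sleep() instead, ticks would stay empty until
    # all chunks were produced.
    chunks = [c async for c in cooperative_chunks()]
    await task
    return chunks, ticks

chunks, ticks = asyncio.run(main())
print(chunks, ticks)
```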

Proposition

In my minimal project, the view will become:

import asyncio

from django.http.response import StreamingHttpResponse


async def iterable_content():
    for _ in range(5):
        await asyncio.sleep(1)
        print('Returning chunk')
        yield b'a' * 10000


def test_stream_view(request):
    return StreamingHttpResponse(iterable_content())

But Django does not handle asynchronous generators.
Some work was already done in ticket #32798 (StreamingHttpResponse Raises SynchronousOnlyOperation in ASGI Server), but considering the performance, we could handle it differently here.

I propose to add asynchronous responses, to be used in an async context, so with an ASGI server.
I made a PoC, in a very basic way: PoC Fixed #33735 -- Handle async streaming response.

Change History (22)

comment:1 by florianvazelle, 23 months ago

Description: modified (diff)

comment:2 by Mariusz Felisiak, 23 months ago

Cc: Carlton Gibson added

comment:3 by florianvazelle, 23 months ago

Description: modified (diff)

comment:4 by Carlton Gibson, 23 months ago

Triage Stage: Unreviewed → Accepted
Type: Uncategorized → New feature

Accepting for review.

comment:5 by Carlton Gibson, 22 months ago

Has patch: set

comment:6 by Carlton Gibson, 22 months ago

Owner: changed from nobody to florianvazelle
Status: new → assigned

comment:7 by Carlton Gibson, 22 months ago

Patch needs improvement: set

comment:8 by Carlton Gibson, 18 months ago

Cc: Andrew Godwin Michael Brown added

Hi Florian — Thanks again for this.

I finally got the time to sit with the PR and that for #32798 to try and think through the right way forward.

I'm commenting here, rather than the PR, because I think we need a slightly different approach.
Also cc-ing Andrew and Michael to ask for their input too.

So what follows is an RFC...

Looking at the PR here, and that for #32798 (either the complex first suggestion, or the simpler but slower alternative), I think we should consciously opt out of trying to map between sync and async generators. Maybe somebody writes a queue implementation that has a sync end and an async end, and handles thread-safety, and back-pressure, and who knows what. Maybe we could use that in Django if we felt it trustworthy enough. But I don't think we should try to write one ourselves.

Rather, I think we should say that, in the case of streaming responses, users need to adjust their code for whether they're planning to run under WSGI or ASGI. (We will handle the other case both ways, but at a performance and memory cost.)

To wit:

We add __aiter__ to StreamingHttpResponse and adapt ASGIHandler to use async for in the if response.streaming block. Under ASGI you would provide StreamingHttpResponse with an async iterator, still yielding bytestrings as content. This should enable the streaming async responses use-case.

Under WSGI you'd continue to provide a sync iterator and everything would work as it is already.

For each case, __iter__ and __aiter__, we handle the wrong type of iterator by consuming the iterator in a single step (using the asgiref.sync utilities, with thread_sensitive for sync_to_async) and then yielding it out in parts as expected. (This is the performance and memory cost.) We log a warning (which may be filtered in a logging configuration if not desired) saying that folks should use an appropriate generator type in this case.
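A rough, self-contained sketch of that fallback idea (this is not Django's actual implementation; `StreamingResponseSketch` is hypothetical, `asyncio.to_thread` stands in for asgiref's `sync_to_async`, and the `aiter()` builtin requires Python 3.10+): when the ASGI side receives a sync iterator, consume it fully in one thread hop, warn, and yield the chunks out.

```python
import asyncio
import warnings

# Rough sketch (NOT Django's actual implementation) of the proposed
# fallback: if an ASGI consumer iterates a response built from a *sync*
# iterator, consume it fully in a single thread hop (standing in for
# asgiref's sync_to_async) and then yield the chunks out, warning that
# an async iterator would be the better fit.

class StreamingResponseSketch:
    def __init__(self, streaming_content):
        self.streaming_content = streaming_content

    def __aiter__(self):
        if hasattr(self.streaming_content, "__aiter__"):
            return aiter(self.streaming_content)  # happy ASGI case

        async def fallback():
            warnings.warn(
                "Sync iterator passed under ASGI; consuming it in one step."
            )
            # One context switch to a worker thread, then yield in parts.
            chunks = await asyncio.to_thread(list, self.streaming_content)
            for chunk in chunks:
                yield chunk

        return fallback()

async def consume(response):
    return [chunk async for chunk in response]

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    result = asyncio.run(consume(StreamingResponseSketch(iter([b"a", b"b"]))))
print(result)
```

The happy case never pays the cost: an async iterator is handed straight through, while a sync one takes exactly one thread hop instead of one per chunk.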

Note that this is similar to the __aiter__ implementation on QuerySet, which fetches all the records once inside a single sync_to_async() to avoid multiple (costly) context switches there:

    def __aiter__(self):
        # Remember, __aiter__ itself is synchronous, it's the thing it returns
        # that is async!
        async def generator():
            await sync_to_async(self._fetch_all)()
            for item in self._result_cache:
                yield item


        return generator()

I think this should be both good enough and maintainable. In the happy case, where you provide the right kind of iterator for your chosen protocol (ASGI/WSGI), you get the result you want, without us needing anything overly complex. In the deviant case it'll still work (assuming you're not trying to stream Too much™). It means we're not branching in the handlers to handle each of the different possible iterator types. The abstraction leaks a little bit, but I kind of think that's reasonable at this point.

Maybe: If it proves impossible to get the WSGI+async/ASGI+sync fallback to work correctly, just erroring at that point would maybe be acceptable. 🤔 (That you could provide an async iterator to ASGIHandler would allow long-lived responses, that aren't currently feasible.)

Thoughts? Thanks!

Assuming all that makes sense, would you (or Michael maybe) want to take this on?
If not I can pick it up, since it would be good to hit Django 4.2 (feature freeze Jan '23) for this. 🎯

Last edited 18 months ago by Carlton Gibson (previous) (diff)

comment:9 by Andrew Godwin, 18 months ago

Hmm, I think that makes sense, as long as we put appropriately large warnings in the docs around it. It still works, it's simpler, and if someone feels brave enough to try and map between iterator types in an efficient fashion, they can come around and try that work separately later on.

I can probably have a crack at doing it too, if nobody else wants to rush in and pick it up :)

comment:10 by Jon Janzen, 17 months ago

Cc: Jon Janzen added

comment:11 by Carlton Gibson, 17 months ago

Has patch: unset
Owner: florianvazelle removed
Patch needs improvement: unset
Status: assigned → new

Hey Florian. Thinking about the 4.2 feature freeze after the new year, I'm going to deassign you so others can pick up if they're interested. (Of course that other may be you, if you still have bandwidth :) Thanks.

comment:12 by Carlton Gibson, 16 months ago

#32798 was the related duplicate.

comment:13 by Carles Pina Estany, 16 months ago

Cc: Carles Pina Estany added

in reply to:  8 comment:14 by Carles Pina Estany, 16 months ago

Replying to Carlton Gibson:

Assuming all that makes sense, would you (or Michael maybe) want to take this on?
If not I can pick it up, since it would be good to hit Django 4.2 (feature freeze Jan '23) for this. 🎯

We encountered exactly this problem in an application. I don't think that I will have enough bandwidth to prepare a PR (if I did, well, you would see it :-) ) but for sure I will test and comment on any code changes around here (I've spent quite some time looking at this code over the last two days and have an easy-to-reproduce test, a real case, with Daphne and without, etc.).

comment:15 by Carlton Gibson, 16 months ago

Owner: set to Carlton Gibson
Status: new → assigned

comment:16 by Carlton Gibson, 16 months ago

Has patch: set

I've opened an initial PR here, outlining the approach from comment:8.

For the basic example, streaming an async generator, it works well, with both Daphne and Uvicorn.

Still need to think about streaming_content (for wrapping by middleware) and aclosing usage, but I wanted to open sooner to enable feedback.
(python -X dev ... FTW :)

All eyes welcome. 🙏

comment:17 by Carlton Gibson, 15 months ago

Patch needs improvement: set

comment:18 by Carlton Gibson, 15 months ago

Patch needs improvement: unset

PR should now be ready for review.

comment:19 by Mariusz Felisiak, 15 months ago

Triage Stage: Accepted → Ready for checkin

comment:20 by Carlton Gibson <carlton.gibson@…>, 15 months ago

Resolution: fixed
Status: assigned → closed

In 0bd2c0c:

Fixed #33735 -- Added async support to StreamingHttpResponse.

Thanks to Florian Vazelle for initial exploratory work, and to Nick
Pope and Mariusz Felisiak for review.

comment:21 by Mariusz Felisiak <felisiak.mariusz@…>, 14 months ago

In 52b05482:

Fixed #34342, Refs #33735 -- Fixed test client handling of async streaming responses.

Bug in 0bd2c0c9015b53c41394a1c0989afbfd94dc2830.

Co-authored-by: Carlton Gibson <carlton.gibson@…>

comment:22 by Mariusz Felisiak <felisiak.mariusz@…>, 14 months ago

In 610cd06c:

[4.2.x] Fixed #34342, Refs #33735 -- Fixed test client handling of async streaming responses.

Bug in 0bd2c0c9015b53c41394a1c0989afbfd94dc2830.

Co-authored-by: Carlton Gibson <carlton.gibson@…>

Backport of 52b054824e899db40ba48f908a9a00dadc56cb89 from main
