Opened 6 years ago

Closed 6 years ago

Last modified 6 years ago

#29599 closed Cleanup/optimization (invalid)

chunk_size for InMemoryUploadedFile is not used

Reported by: Ali Aliyev
Owned by: nobody
Component: File uploads/storage
Version: dev
Severity: Normal
Keywords: chunks
Cc:
Triage Stage: Unreviewed
Has patch: yes
Needs documentation: no
Needs tests: no
Patch needs improvement: no
Easy pickings: no
UI/UX: no

Description

Hello,

Is it okay that chunk_size is not used for InMemoryUploadedFile.chunks()?

Examples to compare:

https://github.com/django/django/blob/master/django/core/files/uploadedfile.py#L92
https://github.com/django/django/blob/master/django/core/files/base.py#L48
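
For reference, the two linked implementations differ roughly as follows (paraphrased; the exact code may shift between Django versions):

# django/core/files/base.py -- File.chunks() honours chunk_size:
def chunks(self, chunk_size=None):
    chunk_size = chunk_size or self.DEFAULT_CHUNK_SIZE
    self.seek(0)
    while True:
        data = self.read(chunk_size)
        if not data:
            break
        yield data

# django/core/files/uploadedfile.py -- InMemoryUploadedFile.chunks()
# ignores chunk_size and yields the whole content at once:
def chunks(self, chunk_size=None):
    self.file.seek(0)
    yield self.read()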

Change History (2)

comment:1 by Carlton Gibson, 6 years ago

Resolution: invalid
Status: new → closed

is it okay...

Yes.

In general, why do you handle a file in chunks? So that you can control how much is in memory at one time.

With InMemoryUploadedFile you already decided you'd handle the whole thing in memory at once.
So the idea of handling it in chunks doesn't really make sense.

(See the comment a few lines below.)
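
A minimal sketch of that behaviour (the buffer contents and field values below are made up):

import io

from django.core.files.uploadedfile import InMemoryUploadedFile

buf = io.BytesIO(b"spam" * 1000)  # 4000 bytes, already fully in memory
upload = InMemoryUploadedFile(
    buf, None, "data.bin", "application/octet-stream", buf.getbuffer().nbytes, None
)

print(upload.multiple_chunks())                      # False
print(sum(1 for _ in upload.chunks(chunk_size=16)))  # 1 -- chunk_size is ignored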

in reply to: comment:1; comment:2 by Ali Aliyev, 6 years ago

Thanks for the quick response!

I have my own file storage class whose _save method is called here: https://github.com/django/django/blob/master/django/core/files/storage.py#L49
The problem is that I have to upload in chunks (each chunk must be no larger than 4 MB), but content is an instance of InMemoryUploadedFile, so I had to implement my own chunking method:

import mimetypes
import uuid

from django.conf import settings
from django.core.files.storage import Storage


class AzureStorage(Storage):
    ...
    def _read_in_chunks(self, file_object, chunk_size=1024):
        # Yield fixed-size chunks from any file-like object.
        while True:
            data = file_object.read(chunk_size)
            if not data:
                break
            yield data

    def _save(self, name, content):
        # Prefer the content type reported by the upload, falling back
        # to guessing from the file name.
        if hasattr(content.file, 'content_type'):
            content_type = content.file.content_type
        else:
            content_type = mimetypes.guess_type(name)[0]

        # AZURE_CHUNK_SIZE is given in megabytes.
        chunks = self._read_in_chunks(
            content,
            settings.AZURE_CHUNK_SIZE * 1024 * 1024
        )

        blocks_list = []

        # Upload each chunk as a separate block, using the same id both
        # for the upload and for the final block list.
        for chunk in chunks:
            block_id = str(uuid.uuid4())
            self.connection.put_block(self.azure_container, name, chunk, block_id)
            blocks_list.append(block_id)

        self.connection.put_block_list(
            self.azure_container,
            name,
            blocks_list,
            x_ms_blob_content_type=content_type
        )

        return name

so content.chunks() will not work here.
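
Pulling the generator out as a plain function makes the 4 MB cap easy to sanity-check in isolation (the sizes below are made up):

import io

def read_in_chunks(file_object, chunk_size=1024):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

payload = io.BytesIO(b"\x00" * (9 * 1024 * 1024))  # 9 MB in memory
sizes = [len(c) for c in read_in_chunks(payload, 4 * 1024 * 1024)]
print(sizes)  # [4194304, 4194304, 1048576] -- no block exceeds 4 MB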

