Opened 10 years ago

Closed 10 years ago

#23517 closed Uncategorized (wontfix)

Collect static files in parallel

Reported by: thenewguy
Owned by: nobody
Component: contrib.staticfiles
Version: 1.7
Severity: Normal
Keywords:
Cc: wgordonw1@…
Triage Stage: Unreviewed
Has patch: no
Needs documentation: no
Needs tests: no
Patch needs improvement: no
Easy pickings: no
UI/UX: no

Description

It would really speed up collectstatic on remote storages to copy files in parallel.

It shouldn't be too complicated to refactor the command to work with multiprocessing.

I am submitting the ticket as a reminder to myself when I have a free moment. Would this be accepted into Django?

Change History (5)

comment:1 by Aymeric Augustin, 10 years ago

Resolution: → needsinfo
Status: new → closed

I'm afraid we'll be reluctant to hardcode concurrent behavior in Django if there's another solution.

You should be able to implement parallel uploads in the storage backend with:

  • a save method that enqueues the operation for processing by a thread pool and returns immediately,
  • a post_process method that waits until the thread pool has completed all uploads (see the sketch after this list).
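A minimal sketch of that idea, assuming the parent storage class defines post_process (ManifestFilesMixin does) and using concurrent.futures (stdlib in Python 3, available as the futures backport on Python 2); the mixin name and worker count are placeholders:

from concurrent.futures import ThreadPoolExecutor

class QueuedSaveMixin(object):
    # Hypothetical mixin illustrating the suggestion above.
    def __init__(self, *args, **kwargs):
        super(QueuedSaveMixin, self).__init__(*args, **kwargs)
        self._pool = ThreadPoolExecutor(max_workers=20)
        self._pending = []

    def _save(self, name, content):
        # Enqueue the real upload and return immediately.
        self._pending.append(self._pool.submit(
            super(QueuedSaveMixin, self)._save, name, content))
        # Optimistically report the requested name before the save completes.
        return name

    def post_process(self, *args, **kwargs):
        for result in super(QueuedSaveMixin, self).post_process(*args, **kwargs):
            yield result
        # Block until the pool has completed every queued upload.
        for future in self._pending:
            future.result()  # re-raises any upload error

One caveat with any approach like this: collectstatic may close the source file after _save returns, so the queued upload must not read from it lazily (the proof of concept below works around this by copying the content into memory).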

Can you try that approach, and if it doesn't work, reopen this ticket?

Thanks!

comment:2 by thenewguy, 10 years ago

Just wanted to post back on this. I was able to write a quick 20-line proof of concept using the threading module. The speedup was significant enough that I figured I would reopen this ticket. I could be wrong, but I imagine something like this would benefit the general Django user base. Granted, I don't know if others get as restless as I do while waiting on static files to upload.

I quickly tested collectstatic with 957 static files. All files are post-processed in some fashion (at least being hashed by ManifestFilesMixin), and a gzipped copy is created whenever the saved file benefits from gzip compression. The storage backend stored the files on AWS S3. The AWS S3 console listed 3254 files deleted when I removed the files after each test, so in total 3254 files were created during collectstatic per case.

The following times come from the command line and should not be interpreted as rigorous benchmarks, but they are good enough to show the significance.

set startTime=%time%
python manage.py collectstatic --noinput
echo Start Time:  %startTime%
echo Finish Time: %time%
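(For reference, the rough POSIX-shell equivalent of the Windows timing snippet above would be something like:

time python manage.py collectstatic --noinput

)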

Times (keep in mind staticfiles collectstatic does not output the count for gzipped files, so there are roughly 957*2 more files than it reports)

957 static files copied, 957 post-processed.
	async using 100 threads (ParallelUploadStaticS3Storage)
		Start Time:  16:43:57.01
		Finish Time: 16:49:30.31
		Duration: 5.55500 minutes

	sync using regular s3 storage (StaticS3Storage)
		Start Time:  16:19:24.21
		Finish Time: 16:41:46.78
		Duration: 22.3761667 minutes

This storage derives from ManifestFilesMixin and StaticS3Storage, a subclass of S3BotoStorage (django-storages) that creates gzipped copies and checks for file changes before saving in order to keep reliable modification dates:

import threading

from django.core.files.base import ContentFile


class ParallelUploadStaticS3Storage(StaticS3Storage):
    """
    THIS STORAGE ASSUMES THAT UPLOADS ONLY OCCUR
    FROM CALLS TO THE COLLECTSTATIC MANAGEMENT
    COMMAND. SAVING TO THIS STORAGE DIRECTLY IS
    NOT RECOMMENDED BECAUSE THE UPLOAD THREADS
    ARE NOT JOINED UNTIL POST_PROCESS IS CALLED.
    """

    # Class-level state: shared across instances, which is acceptable for
    # a one-shot collectstatic run but not for general use.
    active_uploads = []
    thread_count = 100

    def remove_completed_uploads(self):
        # Iterate in reverse so deleting by index stays valid.
        for i, thread in reversed(list(enumerate(self.active_uploads))):
            if not thread.is_alive():
                del self.active_uploads[i]

    def _save_content(self, key, content, **kwargs):
        # Busy-wait until an upload slot frees up.
        while self.thread_count < len(self.active_uploads):
            self.remove_completed_uploads()

        # copy the file to memory for the moment to get around file closed errors -- BAD HACK FIXME FIX
        content = ContentFile(content.read(), name=content.name)

        f = super(ParallelUploadStaticS3Storage, self)._save_content
        thread = threading.Thread(target=f, args=(key, content), kwargs=kwargs)

        self.active_uploads.append(thread)
        thread.start()

    def post_process(self, *args, **kwargs):
        # perform post processing first
        for post_processed in super(ParallelUploadStaticS3Storage, self).post_process(*args, **kwargs):
            yield post_processed

        # wait for the remaining uploads to finish
        print("Post processing completed. Now waiting for the remaining uploads to finish.")
        for thread in self.active_uploads:
            thread.join()
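For context, a storage like this would be enabled for collectstatic through the STATICFILES_STORAGE setting (the dotted path here is hypothetical):

STATICFILES_STORAGE = 'myproject.storages.ParallelUploadStaticS3Storage'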

comment:3 by thenewguy, 10 years ago

Resolution: needsinfo deleted
Status: closed → new

comment:4 by thenewguy, 10 years ago

Cc: wgordonw1@… added

comment:5 by Tim Graham, 10 years ago

Resolution: → wontfix
Status: new → closed

I think Aymeric was trying to say that if Django has sufficient hooks for users to implement this on their own, then that's enough. Maybe StaticS3Storage would like to include this in its code, but it's not obvious to me that we should include it in Django itself.
