Opened 7 years ago

Last modified 23 months ago

#28200 new Cleanup/optimization

Do not touch hash-designated files which already exist at the destination storage

Reported by: Michal Krupa Owned by: nobody
Component: contrib.staticfiles Version: 1.11
Severity: Normal Keywords: staticfiles, storage, remote
Cc: Triage Stage: Accepted
Has patch: yes Needs documentation: no
Needs tests: no Patch needs improvement: yes
Easy pickings: no UI/UX: no


It seems a little silly that, even though local file copies are used to avoid fetching files from remote storage, hashed files still get re-uploaded even when they already exist at the destination.

For example, a remote storage implementation like S3 gets queried for the file, the file gets removed, and then replaced. With the boto3 library, this means touching the file initially (HEAD), a second request to DELETE the file, and yet a third to PUT the new file. Since the filenames of hashed files are computed from the contents of the files, this amounts to 3 unnecessary requests per file every time static assets get processed.
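For context, this is a minimal sketch of why the hashed filename uniquely identifies the content: the hash is derived from the file's bytes, similar in spirit to Django's `HashedFilesMixin.hashed_name` (simplified here; the real implementation also handles URL fragments and nested references, so treat this as illustrative, not Django's actual code).

```python
import hashlib
import os


def hashed_name(name: str, content: bytes) -> str:
    # Simplified version of the hashed-filename scheme: the first 12 hex
    # digits of the MD5 of the file content are inserted before the
    # extension, so identical content always yields an identical name.
    file_hash = hashlib.md5(content).hexdigest()[:12]
    root, ext = os.path.splitext(name)
    return f"{root}.{file_hash}{ext}"


# Two files with the same bytes always map to the same hashed name,
# e.g. "css/styles.css" -> "css/styles.<12 hex chars>.css".
print(hashed_name("css/styles.css", b"body { color: red; }"))
```

Because the name encodes the content, finding the hashed name already present at the destination means the stored bytes must match, which is what makes skipping the upload safe.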

Since a matching hash provides file integrity verification, I propose that the logic for copying hash-designated files be skipped altogether when the file already exists at the destination.
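The proposed behaviour can be sketched as follows. This is a hypothetical illustration, not the actual patch: `DictStorage` is an in-memory stand-in for a remote backend (recording simulated HEAD/DELETE/PUT calls), and `save_hashed` is an invented helper showing the existence check replacing the current delete-and-replace cycle.

```python
class DictStorage:
    """Minimal in-memory stand-in for a remote storage backend."""

    def __init__(self):
        self.files = {}
        self.requests = []  # record of simulated network round trips

    def exists(self, name):
        self.requests.append(("HEAD", name))
        return name in self.files

    def delete(self, name):
        self.requests.append(("DELETE", name))
        self.files.pop(name, None)

    def save(self, name, content):
        self.requests.append(("PUT", name))
        self.files[name] = content


def save_hashed(storage, hashed_name, content):
    # Proposed behaviour: because the hashed name is derived from the
    # content, an existing file with that name must hold identical bytes,
    # so the DELETE + PUT round trips can be skipped entirely.
    if storage.exists(hashed_name):
        return False  # already present, nothing to do
    storage.save(hashed_name, content)
    return True
```

On the first run this costs one HEAD and one PUT; on every subsequent run for unchanged files, only the single HEAD remains, instead of the current HEAD + DELETE + PUT.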

PR Available here -

Change History (3)

comment:1 by Tim Graham, 7 years ago

Patch needs improvement: set

Is this a duplicate of or related to #28055? There are failing tests with the current PR.

in reply to: comment:1 · comment:2 by Michal Krupa, 7 years ago

Replying to Tim Graham:

Is this a duplicate of or related to #28055? There are failing tests with the current PR.

Ah yes, I see the failed test results; jobs were still running when this ticket was opened. I will re-work the submission - thanks for the response!

comment:4 by Tim Graham, 7 years ago

Triage Stage: Unreviewed → Accepted

Tentatively accepting, though it's not certain that this can be fixed.
