#32958 closed Cleanup/optimization (needsinfo)
URLValidator tests don't obviously test how long validation takes
| Reported by: | Chris Jerdonek | Owned by: | nobody |
|---|---|---|---|
| Component: | Testing framework | Version: | dev |
| Severity: | Normal | Keywords: | |
| Cc: | | Triage Stage: | Unreviewed |
| Has patch: | no | Needs documentation: | no |
| Needs tests: | no | Patch needs improvement: | no |
| Easy pickings: | no | UI/UX: | no |
Description
I was looking at the validator tests and came across this comment ("Trailing junk does not take forever to reject"):
https://github.com/django/django/blob/012f38f9594b35743e9ab231757b7b62db638323/tests/validators/tests.py#L258-L260
However, it's not obvious whether the test actually checks how long the validation takes, or what "forever" really means (e.g. is 1 second forever?). In particular, I couldn't see any measurement of elapsed time in the tests.
It would be good if the comment elaborated on this and, if necessary, the test could check that validation doesn't take too long.
Change History (3)
comment:1 by , 3 years ago
| Resolution: | → needsinfo |
|---|---|
| Status: | new → closed |
comment:2 by , 3 years ago
| Type: | Uncategorized → Cleanup/optimization |
|---|---|
comment:3 by , 3 years ago
> Do you have a suggestion on how to handle this?
I had two changes in mind. The first was to update the comment with more explanation of what forever means. It can be as simple as adding what you said, e.g. "Trailing junk does not take forever to reject (minutes, maybe hours)."
The second was to add e.g. a context manager around each validation test case to check that the validation doesn't take longer than some amount of time. That would serve to check that it doesn't take forever. If the time is generous enough (e.g. 1 second?), there shouldn't be any risk of flakiness. (Unlike with other flaky tests, there is no I/O happening here.) If forever is truly minutes or hours, then even 60 seconds could work. But I think that might be unnecessarily generous.
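The context-manager idea described above could be sketched roughly as follows. This is a minimal illustration, not Django code; the `assert_max_duration` helper name and the 1-second budget are assumptions for the sake of the example:

```python
import time
from contextlib import contextmanager


@contextmanager
def assert_max_duration(seconds):
    """Fail the test if the wrapped block takes longer than `seconds`.

    Uses time.monotonic() so the measurement is unaffected by
    system clock adjustments.
    """
    start = time.monotonic()
    yield
    elapsed = time.monotonic() - start
    assert elapsed < seconds, (
        f"validation took {elapsed:.3f}s, exceeding the {seconds}s budget"
    )


# Illustrative use in a validator test case (URLValidator shown as in
# the ticket; any callable validator would work the same way):
#
#     with assert_max_duration(1.0):
#         with self.assertRaises(ValidationError):
#             URLValidator()(value_with_trailing_junk)
```

The risk discussed below is exactly the choice of budget: too tight and the test is flaky on slow contributor machines, too generous and it no longer distinguishes "fast" from "forever".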
In this case, "forever" means minutes (maybe hours). It's hard to estimate an upper bound: there are too many variables, and we don't want to add a flaky test that fails on contributors' computers.
> Do you have a suggestion on how to handle this?
We can clarify the comment, but I don't think it's feasible to add a runtime-based test that isn't error-prone.