Opened 7 years ago
Closed 7 years ago
#29101 closed New feature (needsinfo)
Beautify Django Test Output
Reported by: | Rich Jones | Owned by: | nobody |
---|---|---|---|
Component: | Testing framework | Version: | 2.0 |
Severity: | Normal | Keywords: | testing, ux |
Cc: | | Triage Stage: | Unreviewed |
Has patch: | no | Needs documentation: | no |
Needs tests: | no | Patch needs improvement: | no |
Easy pickings: | yes | UI/UX: | yes |
Description
This is a feature request.
There has been a lot of fantastic work in Django 2.0 towards making the default web and command-line interfaces more beautiful. Excellent!
Unfortunately, it seems this work has not been extended to the `manage.py test` command, which still has very confusing and overwhelming output!
I'm sure many of you have, like me, stared at the standard output (err?) of a test command running through a complicated and conditional CI system and wondered: okay, but which test is this currently running? In which module? Is that supposed to be included? Which tests got skipped? Perhaps most annoyingly, especially for new developers, normal print() statements don't appear in test output.
I also now make extensive use of the tag feature introduced in 1.10, but there is no feedback in the CLI about which tags are being actively included or excluded, how many tests were skipped in total, etc.
Finally, there is no color in the output! Color has been added to great effect to express success and error states in other manage.py commands, so why not for tests too? It'd be great to use green to indicate a passing test, red for a failed test, and yellow for a skipped test.
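To make the green/red/yellow idea concrete, here is a minimal, hypothetical sketch using only the stdlib `unittest` API (this is not Django's actual runner; `ColorResult` and `Demo` are invented names, and real code would check whether the stream supports ANSI color):

```python
import io
import unittest

# ANSI escape codes; a real implementation would detect terminal support.
GREEN, RED, YELLOW, RESET = "\033[32m", "\033[31m", "\033[33m", "\033[0m"

class ColorResult(unittest.TextTestResult):
    """Print one colored line per test instead of the default dots."""

    def addSuccess(self, test):
        unittest.TestResult.addSuccess(self, test)  # bypass the parent's "." output
        self.stream.writeln(f"{GREEN}PASS{RESET} {test}")

    def addFailure(self, test, err):
        unittest.TestResult.addFailure(self, test, err)
        self.stream.writeln(f"{RED}FAIL{RESET} {test}")

    def addSkip(self, test, reason):
        unittest.TestResult.addSkip(self, test, reason)
        self.stream.writeln(f"{YELLOW}SKIP{RESET} {test} ({reason})")

class Demo(unittest.TestCase):
    def test_passes(self):
        self.assertTrue(True)

    @unittest.skip("slow")
    def test_skipped(self):
        pass

stream = io.StringIO()
runner = unittest.TextTestRunner(stream=stream, resultclass=ColorResult, verbosity=0)
runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(Demo))
print(stream.getvalue())
```

Because Django's `DiscoverRunner` sits on top of `unittest`, a change along these lines is roughly where such a feature would plug in.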
Currently, the experience is something like:
```
$ docker run \
    --add-host=database:$DB_HOST_IP \
    --add-host=nomad:$NOMAD_HOST_IP \
    --env-file workers/environments/test \
    --volume $volume_directory:/home/user/data_store \
    --link drdb:postgres \
    --link nomad:nomad \
    -i dr_worker_tests python3 manage.py test "$@" --exclude-tag=slow --exclude-tag=star --no-input
Successfully tagged dr_worker_tests:latest
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
2018-02-01 21:15:48,342 local/MainProcess data_refinery_workers.downloaders.array_express ERROR [downloader_job: 1]: A Batch's file doesn't have the same download URL as the other batches' files.
[ .. literally thousands of log statements .. ]
[ .. literally thousands of log statements .. ]
[ .. literally thousands of log statements .. ]
[ .. literally thousands of log statements .. ]
[ .. literally thousands of log statements .. ]
----------------------------------------------------------------------
Ran 39 tests in 474.636s

OK
Destroying test database for alias 'default'...
```
It would be so, so much nicer if the experience was something like:
```
$ docker run \
    --add-host=database:$DB_HOST_IP \
    --add-host=nomad:$NOMAD_HOST_IP \
    --env-file workers/environments/test \
    --volume $volume_directory:/home/user/data_store \
    --link drdb:postgres \
    --link nomad:nomad \
    -i dr_worker_tests python3 manage.py test "$@" --exclude-tag=slow --exclude-tag=slower --no-input
Starting tests, excluding [slow, slower].

Running: test_download_file (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)
----------------------------------------------------------------------
[ .. Related log and print statements .. ]
OK!

Running: test_download_other_file (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)
----------------------------------------------------------------------
[ .. Related log and print statements .. ]
OK!

Skipping: super_long_test (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)
Skipping: super_duper_long_test (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)

Running: final_test (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)
----------------------------------------------------------------------
[ .. Related log and print statements .. ]
OK!

----------------------------------------------------------------------
Ran 39 tests in 7 minutes, 54 seconds. Skipped 2 tests.
100% success rate, great job!
```
Basically, turn what is currently just a massive, unintelligible wall of unformatted text into something that can be easily parsed and understood at a glance by a human.
A bonus would be to suppress unnecessary output (e.g., "Destroying test database for alias 'default'..." when the alias is the default), include more statistical and informative output, and use colorization and general humanization and friendliness to improve the testing experience.
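As a sketch of the kind of statistical summary this asks for, the counts are already available on a stdlib `unittest` result object, so the friendlier closing line could be computed like this (hypothetical formatting, not Django code; `Demo` is an invented test case):

```python
import io
import unittest

class Demo(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

    @unittest.skip("slow")
    def test_slow(self):
        pass

# Run quietly; the standard result object tracks run/skip/failure counts.
result = unittest.TextTestRunner(stream=io.StringIO()).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(Demo))

run = result.testsRun                       # includes skipped tests
skipped = len(result.skipped)
failed = len(result.failures) + len(result.errors)
rate = 100 * (run - skipped - failed) // max(run - skipped, 1)

summary = f"Ran {run} tests. Skipped {skipped}. {rate}% success rate"
if failed == 0:
    summary += ", great job!"
print(summary)
```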
I don't think this would be too difficult to implement at all. In fact, I think it'd be perfect for somebody looking to make their first contributions, or for a team to work on as a sprint at a hackathon.
Thoughts?
Rich Jones,
New DSF Member!
Change History (2)
comment:1 by , 7 years ago
Component: Uncategorized → Testing framework
comment:2 by , 7 years ago
Resolution: → needsinfo
Status: new → closed
Colorized test results are tracked in #22449.
As for the other suggestions, it would be better to have smaller, more focused tickets that are easily actionable. I'm going to close this one but feel free to open some others with that in mind.
It looks like you may not have tried the `--verbosity` option with the `test` command. Also, I know some Django developers use `pytest` rather than the default Django test runner. I haven't used it myself, but it may provide some of what you seek. It would be nice not to duplicate work.
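For a feel of what higher verbosity buys you, Django's `test` command passes its verbosity level down to the underlying unittest runner, so the effect can be illustrated with the stdlib alone (`Demo` is an invented test case; `manage.py test --verbosity 2` behaves roughly the same way for Django tests):

```python
import io
import unittest

class Demo(unittest.TestCase):
    def test_example(self):
        self.assertEqual(1 + 1, 2)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)

# verbosity=1 prints a single dot per test; verbosity=2 prints one
# "test_name ... ok" line per test, naming each test as it runs.
verbose = io.StringIO()
unittest.TextTestRunner(stream=verbose, verbosity=2).run(suite)
print(verbose.getvalue())
```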