﻿id	summary	reporter	owner	description	type	status	component	version	severity	resolution	keywords	cc	stage	has_patch	needs_docs	needs_tests	needs_better_patch	easy	ui_ux
29101	Beautify Django Test Output	Rich Jones	nobody	"''This is a feature request!''

There has been a lot of fantastic work in Django 2.0 towards making the default web and command-line interfaces more beautiful. Excellent!

Unfortunately, it seems like this work has not been extended to the ""manage.py test"" command, which still has very confusing and overwhelming output!

I'm sure many of you have, like me, stared at the standard output (err?) of a `test` command running through a complicated and conditional CI system and wondered: okay, but which test is this currently running? In which module? Is that supposed to be included? Which tests got skipped? Perhaps most annoyingly, especially for new developers, normal print() statements don't appear in test output.

I also now make extensive use of the `tag` feature introduced in 1.10, but there is no feedback in the CLI about which tags are being actively included or excluded, how many tests were skipped in total, etc.

Finally - there is no color in the output! Color has been added to great effect to express success and error states for other manage.py commands - why not for tests too? It'd be great to use green to indicate a passing test, red to indicate a failed test, and yellow to indicate a skipped test.

Currently, the experience is something like:

{{{
$ docker run \
       --add-host=database:$DB_HOST_IP \
       --add-host=nomad:$NOMAD_HOST_IP \
       --env-file workers/environments/test \
       --volume $volume_directory:/home/user/data_store \
       --link drdb:postgres \
       --link nomad:nomad \
       -i dr_worker_tests python3 manage.py test ""$@"" --exclude-tag=slow --exclude-tag=slower --no-input

Successfully tagged dr_worker_tests:latest
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
2018-02-01 21:15:48,342 local/MainProcess data_refinery_workers.downloaders.array_express ERROR [downloader_job: 1]: A Batch's file doesn't have the same download URL as the other batches' files.
[ .. literally thousands of log statements .. ]
[ .. literally thousands of log statements .. ]
[ .. literally thousands of log statements .. ]
[ .. literally thousands of log statements .. ]
[ .. literally thousands of log statements .. ]
----------------------------------------------------------------------
Ran 39 tests in 474.636s

OK
Destroying test database for alias 'default'...
}}}

It would be so, so much nicer if the experience were something like:
{{{
$ docker run \
       --add-host=database:$DB_HOST_IP \
       --add-host=nomad:$NOMAD_HOST_IP \
       --env-file workers/environments/test \
       --volume $volume_directory:/home/user/data_store \
       --link drdb:postgres \
       --link nomad:nomad \
       -i dr_worker_tests python3 manage.py test ""$@"" --exclude-tag=slow --exclude-tag=slower --no-input

Starting tests, excluding [slow, slower].
Running: test_download_file (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)
----------------------------------------------------------------------
[ .. Related log and print statements .. ]
**OK!**

Running: test_download_other_file (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)
----------------------------------------------------------------------
[ .. Related log and print statements .. ]
**OK!**

Skipping: super_long_test (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)
Skipping: super_duper_long_test (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)
Running: final_test (data_refinery_workers.downloaders.test_sra.DownloadSraTestCase)
----------------------------------------------------------------------
[ .. Related log and print statements .. ]
**OK!**

----------------------------------------------------------------------
Ran **39** tests in **7 minutes, 54 seconds**. Skipped **2** tests. 
**100%** success rate, great job!
}}}
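For what it's worth, the green/red/yellow treatment described above doesn't even need Django-specific hooks to prototype — here is a minimal sketch using only the standard library's `unittest` extension points (`ColorTextTestResult` and `Demo` are made-up names for illustration, not existing Django or unittest APIs):

{{{#!python
import io
import unittest

# ANSI escapes: green = pass, red = fail, yellow = skip
GREEN, RED, YELLOW, RESET = '\033[32m', '\033[31m', '\033[33m', '\033[0m'

class ColorTextTestResult(unittest.TextTestResult):
    # Illustrative subclass: replace the plain per-test status words
    # with colorized ones. We call the grandparent TestResult methods
    # directly to do the bookkeeping while bypassing TextTestResult's
    # uncolored 'ok'/'FAIL'/'skipped' writers.
    def addSuccess(self, test):
        unittest.TestResult.addSuccess(self, test)
        if self.showAll:
            self.stream.writeln(GREEN + 'ok' + RESET)

    def addFailure(self, test, err):
        unittest.TestResult.addFailure(self, test, err)
        if self.showAll:
            self.stream.writeln(RED + 'FAIL' + RESET)

    def addSkip(self, test, reason):
        unittest.TestResult.addSkip(self, test, reason)
        if self.showAll:
            self.stream.writeln(YELLOW + 'skipped: ' + reason + RESET)

class Demo(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

    @unittest.skip('slow')
    def test_skipped(self):
        pass

stream = io.StringIO()
runner = unittest.TextTestRunner(stream=stream, verbosity=2,
                                 resultclass=ColorTextTestResult)
runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(Demo))
print(stream.getvalue())
}}}

Wiring something like this into `manage.py test` would presumably mean a custom test runner named in the `TEST_RUNNER` setting, and it should ideally reuse `django.utils.termcolors` so the existing color-support detection and `--no-color` behavior still apply.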

Basically, turn what is currently just a massive, unintelligible wall of unformatted text into something a human can easily parse and understand at a glance.

A bonus would be to suppress unnecessary output (e.g., ""Destroying test database for alias 'default'..."" when it's the default alias), include more statistical and informative output, and use colorization and general humanization and friendliness to improve the testing experience.

I don't think this would be too difficult to implement at all. In fact, I think it'd be perfect for somebody looking to make their first contributions, or for a team to work on as a sprint at a hackathon.

Thoughts?

Rich Jones,
New DSF Member!"	New feature	closed	Testing framework	2.0	Normal	needsinfo	testing, ux		Unreviewed	0	0	0	0	1	1
