Make DebugLexer.tokenize() more similar to Lexer.tokenize()
Currently, DebugLexer.tokenize() looks somewhat dissimilar to Lexer.tokenize(), even though they do more or less the same thing. For example, the Lexer class's implementation contains just one call to self.create_token(), whereas DebugLexer contains three calls to it.
This ticket is to make DebugLexer.tokenize() look much more similar to Lexer.tokenize(), and in particular simpler. One advantage is that it will be more obvious on inspection that DebugLexer behaves the same as Lexer. Another is that it will be easier to keep the two implementations in step when applying optimizations to Lexer. The code will be more maintainable, and less likely to accumulate bugs, if the implementations of the two lexers don't diverge too much.
The idea will become clearer once the PR is posted.
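To illustrate the kind of restructuring proposed, here is a simplified, self-contained sketch (not Django's actual code: the regex, the tuple tokens, and the split_with_positions helper are all stand-ins for illustration). The idea is that if the position-tracking logic is pulled out into a split helper that yields the same alternating pieces as tag_re.split(), the debug tokenizer can reuse the exact loop shape of the plain one, with a single create_token-style call:

```python
import re

# Illustrative stand-in for Django's template tag regex.
tag_re = re.compile(r"({%.*?%}|{{.*?}}|{#.*?#})")


def tokenize_plain(template_string):
    """Lexer-style: a single token-creating call inside one split loop."""
    in_tag = False
    result = []
    for piece in tag_re.split(template_string):
        if piece:
            result.append((piece, in_tag))  # stand-in for self.create_token(...)
        in_tag = not in_tag
    return result


def split_with_positions(template_string):
    """Yield the same pieces as tag_re.split(), plus their (start, end) spans.

    Because the capturing group makes split() alternate non-tag/tag pieces
    (with empty strings where pieces are adjacent), this generator mirrors
    that alternation exactly, so the in_tag toggle below stays aligned.
    """
    upto = 0
    for match in tag_re.finditer(template_string):
        start, end = match.span()
        yield template_string[upto:start], (upto, start)   # text before the tag
        yield template_string[start:end], (start, end)     # the tag itself
        upto = end
    yield template_string[upto:], (upto, len(template_string))  # trailing text


def tokenize_debug(template_string):
    """DebugLexer-style, reshaped to mirror tokenize_plain() line for line."""
    in_tag = False
    result = []
    for piece, position in split_with_positions(template_string):
        if piece:
            result.append((piece, in_tag, position))
        in_tag = not in_tag
    return result
```

With this shape, the only difference between the two tokenize() bodies is which split function they iterate over and whether a position is attached, so optimizations applied to one loop translate directly to the other.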
Change History (10)

Patch needs improvement: set
Triage Stage: Unreviewed → Accepted
Patch needs improvement: unset
Triage Stage: Accepted → Ready for checkin
Resolution: → fixed
Status: assigned → closed
PR: https://github.com/django/django/pull/14753