Changes between Version 1 and Version 2 of Ticket #23424, comment 15


Timestamp: Aug 25, 2021, 1:17:02 PM (3 years ago)
Author: Chris Jerdonek

v1: The way I would try solving this today is, after [https://github.com/django/django/pull/14739 PR #14739] is merged, make [https://github.com/django/django/blob/196a99da5d9c4c33a78259a58d38fb114a4d2ee8/django/template/base.py#L364 Lexer.create_token()] raise a custom exception called something like `VerbatimTagStarting`, with the end string as the argument, if a verbatim tag is encountered. (The line where this happens will be clear after [https://github.com/django/django/pull/14739 PR #14739].) Then, in [https://github.com/django/django/blob/196a99da5d9c4c33a78259a58d38fb114a4d2ee8/django/template/base.py#L359 Lexer.tokenize()], handle `VerbatimTagStarting` by searching for the given end string and then resuming the `tag_re.split()` iteration from the new location in the template string. This will be much easier to implement now that ticket #33002 is resolved. One side benefit is that `Lexer.create_token()` should become simpler (though the complexity will move to `tokenize()`). However, `Lexer.create_token()` probably shouldn't have much logic anyway (nor state, which the changes I'm suggesting would also remove).
v2: The way I would try solving this today is, ~~after [https://github.com/django/django/pull/14739 PR #14739] is merged,~~ make [https://github.com/django/django/blob/196a99da5d9c4c33a78259a58d38fb114a4d2ee8/django/template/base.py#L364 Lexer.create_token()] raise a custom exception called something like `VerbatimTagStarting`, with the end string as the argument, if a verbatim tag is encountered. (The line where this happens is [https://github.com/django/django/blob/55cf9e93b5e5bdf19bedeb1d900ee8a83f8489fb/django/template/base.py#L386-L387 here].) Then, in [https://github.com/django/django/blob/196a99da5d9c4c33a78259a58d38fb114a4d2ee8/django/template/base.py#L359 Lexer.tokenize()], handle `VerbatimTagStarting` by searching for the given end string and then resuming the `tag_re.split()` iteration from the new location in the template string. This will be much easier to implement now that ticket #33002 is resolved. One side benefit is that `Lexer.create_token()` should become simpler (though the complexity will move to `tokenize()`). However, `Lexer.create_token()` probably shouldn't have much logic anyway (nor state, which the changes I'm suggesting would also remove).
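
To make the proposed control flow concrete, here is a minimal, self-contained sketch of the idea, not Django's actual `Lexer`: the `VerbatimTagStarting` name comes from the comment above, while the simplified `tag_re`, the tuple-shaped tokens, and the end-tag search are assumptions made for illustration only.

```python
import re

# Simplified stand-in for the tag regex in django/template/base.py;
# the real one is more involved. Illustrative only.
tag_re = re.compile(r"({%.*?%}|{{.*?}}|{#.*?#})")


class VerbatimTagStarting(Exception):
    """Hypothetical exception carrying the end string of a verbatim block."""

    def __init__(self, end_string):
        super().__init__(end_string)
        self.end_string = end_string  # e.g. "endverbatim" or "endverbatim myblock"


class Lexer:
    def __init__(self, template_string):
        self.template_string = template_string

    def create_token(self, token_string, in_tag):
        """Build a (kind, contents) pair; no verbatim state is kept here."""
        if in_tag and token_string.startswith("{%"):
            content = token_string[2:-2].strip()
            if content == "verbatim" or content.startswith("verbatim "):
                # Signal the caller instead of remembering "verbatim" state.
                rest = content[len("verbatim"):].strip()
                raise VerbatimTagStarting("endverbatim" + (" " + rest if rest else ""))
            return ("BLOCK", content)
        if in_tag and token_string.startswith("{{"):
            return ("VAR", token_string[2:-2].strip())
        if in_tag and token_string.startswith("{#"):
            return ("COMMENT", token_string[2:-2].strip())
        return ("TEXT", token_string)

    def tokenize(self):
        result = []
        pos = 0  # absolute position in self.template_string
        while True:
            restart = False
            in_tag = False
            offset = pos
            for piece in tag_re.split(self.template_string[pos:]):
                if piece:
                    try:
                        result.append(self.create_token(piece, in_tag))
                    except VerbatimTagStarting as exc:
                        # Everything up to the matching end tag is literal text.
                        content_start = offset + len(piece)
                        end_re = re.compile(
                            r"\{%\s*" + re.escape(exc.end_string) + r"\s*%\}"
                        )
                        match = end_re.search(self.template_string, content_start)
                        if match is None:
                            raise ValueError("unclosed verbatim tag") from exc
                        result.append(
                            ("TEXT", self.template_string[content_start:match.start()])
                        )
                        # Resume lexing after the end tag.
                        pos = match.end()
                        restart = True
                        break
                offset += len(piece)
                in_tag = not in_tag
            if not restart:
                break
        return result


if __name__ == "__main__":
    tmpl = "Hello {% verbatim %}{{ raw }}{% endverbatim %} {{ name }}"
    print(Lexer(tmpl).tokenize())
    # [('TEXT', 'Hello '), ('TEXT', '{{ raw }}'), ('TEXT', ' '), ('VAR', 'name')]
```

Restarting `tag_re.split()` on the remainder of the string is just one way to express "resume from the new location"; an actual patch might instead advance a scan position with `tag_re.finditer()`. Either way, the verbatim bookkeeping moves out of `create_token()` and into `tokenize()`, which is the point of the proposal.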