gh-63161: Fix PEP 263 support (#139481)
Conversation
* Support non-UTF-8 shebang and comments if a non-UTF-8 encoding is specified.
* Detect decoding error in comments for UTF-8 encoding.
```c
const char *line = tok->lineno <= 2 ? tok->buf : tok->cur;
int lineno = tok->lineno <= 2 ? 1 : tok->lineno;
if (!tok->encoding) {
    /* The default encoding is UTF-8, so make sure we don't have any
       non-UTF-8 sequences in it. */
    if (!_PyTokenizer_ensure_utf8(line, tok, lineno)) {
        _PyTokenizer_error_ret(tok);
        return 0;
    }
}
else {
    PyObject *tmp = PyUnicode_Decode(line, strlen(line),
```
Suggested change:

```c
const int is_pseudo_line = (tok->lineno <= 2);
const char *line = is_pseudo_line ? tok->buf : tok->cur;
int lineno = is_pseudo_line ? 1 : tok->lineno;
size_t slen = strlen(line);
if (slen > (size_t)PY_SSIZE_T_MAX) {
    _PyTokenizer_error_ret(tok);
    return 0;
}
Py_ssize_t linelen = (Py_ssize_t)slen;
if (!tok->encoding) {
    /* The default encoding is UTF-8, so make sure we don't have any
       non-UTF-8 sequences in it. */
    if (!_PyTokenizer_ensure_utf8(line, tok, lineno)) {
        _PyTokenizer_error_ret(tok);
        return 0;
    }
}
else {
    PyObject *tmp = PyUnicode_Decode(line, linelen,
```
vstinner left a comment:
LGTM. I am not sure about the tokenizer changes, but I trust unit tests :-)
Unfortunately there was a regression which caused one of the existing tests to fail. Previously, a decoding error for the default (UTF-8) encoding was raised only when the tokenizer tried to decode an identifier or string literal, so the traceback underlined the affected identifier or string literal containing the undecodable bytes. Now the error is raised at the beginning of parsing a string, or after reading a line from the file (only for the first few lines).

Fixing this regression was not easy. But now the traceback shows the offending line with the cursor pointing exactly at the undecodable byte, and this works in more cases than before. However, it did not work and still does not work when the encoding is explicitly specified: in that case you get a SyntaxError without a correct reference to the position of the decoding error. That is a different, complex issue.
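The behavior being discussed can be observed from Python with a small illustration (not taken from this PR's test suite): compiling source bytes that are not valid UTF-8, with no encoding declared, raises a SyntaxError instead of decoding silently.

```python
# Small illustration (not from this PR's test suite): 0xFF can never
# appear in valid UTF-8, and no coding cookie is declared, so the
# tokenizer must reject this source with a SyntaxError.
bad_source = b"name = '\xff'\n"
try:
    compile(bad_source, "<demo>", "exec")
    raised = False
except SyntaxError as exc:
    raised = True
    print("SyntaxError raised on line:", exc.lineno)
print("rejected:", raised)
```

The improvement described above concerns where this error is raised and how precisely the traceback points at the undecodable byte, not whether it is raised at all.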
Thanks @serhiy-storchaka for the PR 🌮🎉. I'm now working to backport this PR to: 3.13, 3.14.
* Support non-UTF-8 shebang and comments if a non-UTF-8 encoding is specified.
* Detect decoding error in comments for UTF-8 encoding.
* Include the decoding error position for default encoding in SyntaxError.

(cherry picked from commit 5c942f1)
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
Sorry, @serhiy-storchaka, I could not cleanly backport this to
GH-139898 is a backport of this pull request to the 3.14 branch.