Commit message

module (GH-113709) (#113733)
(cherry picked from commit 3003fbbf00422bce6e327646063e97470afa9091)

(GH-112284) (#112285)
(cherry picked from commit d59feb5dbe5395615d06c30a95e6a6a9b7681d4d)

(GH-110832) (#110931)
(cherry picked from commit a1ac5590e0f8fe008e5562d22edab65d0c1c5507)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>
Co-authored-by: Filipe Laíns <lains@riseup.net>
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>

gh-110259: Fix f-strings with multiline expressions and format specs (GH-110271) (#110396)
(cherry picked from commit cc389ef627b2a486ab89d9a11245bef48224efb1)
Signed-off-by: Pablo Galindo <pablogsal@gmail.com>
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>
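
For context (not part of the commit message), a minimal sketch of the kind of construct this fix concerns: an f-string whose replacement field spans multiple lines and also carries a format spec. The variable names are made up and the snippet assumes Python 3.12 or newer:

    # Illustrative only: a replacement field spanning two lines, combined with
    # the ".3f" format spec. Requires Python 3.12+ (PEP 701 f-strings).
    value = 3.14159
    text = f"{(value
               * 2):.3f}"
    print(text)  # expected output: 6.283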

gh-88943: Improve syntax error for non-ASCII character that follows a numerical literal (GH-109081) (#109090)
The error now points at the invalid non-ASCII character, not at the valid numerical literal.
(cherry picked from commit b2729e93e9d73503b1fda4ea4fecd77c58909091)
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
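
As an illustration (not part of the commit), the improved location can be inspected by compiling such a snippet and looking at the SyntaxError attributes; the sample code string and file name below are made up:

    # "¬" is a non-ASCII character placed directly after the literal 123.
    # With the fix, the reported error location should point at "¬" rather
    # than at the valid literal; the exact message text is not asserted here.
    try:
        compile("x = 123¬", "<example>", "exec")
    except SyntaxError as exc:
        print(exc.msg, "at line", exc.lineno, "column", exc.offset)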

gh-107967: Fix infinite recursion on invalid escape sequence warning (GH-107968) (#107970)
(cherry picked from commit d66bc9e8a7a8d6774d912a4b9d151885c4d8de1d)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

Co-authored-by: Menelaos Kotoglou <contact@menelaoskotoglou.com>

(GH-105939) (#105941)
(cherry picked from commit 6586cee27f32f0354fe4e77c7b8c6e399329b5e2)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

tokenizer (GH-105828) (#105832)
(cherry picked from commit d382ad49157b3802fc5619f68d96810def517869)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

(GH-105728) (#105729)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

0-prefixed literals (GH-105555) (#105602)
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>

NEWLINE tokens (GH-105364) (#105367)

gh-105069: Add a readline-like callable to the tokenizer to consume input iteratively (GH-105070) (#105119)
(cherry picked from commit 9216e69a87d16d871625721ed5a8aa302511f367)
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>
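
On the Python side this corresponds to the readline-style interface that the tokenize module already exposes; a minimal sketch (the sample source string is made up):

    import io
    import tokenize

    source = "x = 1\nprint(x)\n"
    # generate_tokens() pulls the source line by line through a readline-style
    # callable, so the tokenizer consumes its input iteratively.
    readline = io.StringIO(source).readline
    for tok in tokenize.generate_tokens(readline):
        print(tokenize.tok_name[tok.type], repr(tok.string))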

gh-105042: Disable unmatched parens syntax error in python tokenize (GH-105061) (#105120)
(cherry picked from commit 70f315c2d6de87b0514ce16cc00a91a5b60a6098)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

gh-105017: Include CRLF lines in strings and column numbers (GH-105030) (#105041)
(cherry picked from commit 96fff35325e519cc76ffacf22e57e4c393d4446f)
Co-authored-by: Marta Gómez Macías <mgmacias@google.com>
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
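
A minimal sketch of how this can be observed from the tokenize module (the sample source string is made up); per the commit title, the CRLF endings are kept in the token strings and line text instead of being normalized away:

    import io
    import tokenize

    source = "x = 1\r\nprint(x)\r\n"
    # newline="" keeps the \r\n endings untranslated when reading lines back.
    readline = io.StringIO(source, newline="").readline
    for tok in tokenize.generate_tokens(readline):
        print(tokenize.tok_name[tok.type], repr(tok.string), repr(tok.line))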

(GH-105022) (#105023)
Co-authored-by: Marta Gómez Macías <mgmacias@google.com>
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>

gh-104866: Tokenize should emit NEWLINE after exiting block with comment (GH-104870) (#104872)
(cherry picked from commit c90a862cdcf55dc1753c6466e5fa4a467a13ae24)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

* Support for conversion specifiers o (octal) and X (uppercase hexadecimal).
* Support for length modifiers j (intmax_t) and t (ptrdiff_t).
* Length modifiers are now applied to all integer conversions.
* Support for wchar_t C strings (%ls and %lV).
* Support for variable width and precision (*).
* Support for flag - (left alignment).

This commit replaces the Python implementation of the tokenize module with an implementation that reuses the real C tokenizer via a private extension module. The tokenize module now implements a compatibility layer that transforms tokens from the C tokenizer into Python tokenize tokens for backward compatibility.
As the C tokenizer does not emit some tokens that the Python tokenizer provides (such as comments and non-semantic newlines), a new special mode has been added to the C tokenizer; it is currently only used via the extension module that exposes it to the Python layer. This new mode forces the C tokenizer to emit these extra tokens and to add the metadata needed to match the old Python implementation.
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>
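
A minimal sketch of the backward-compatible behaviour described above (the sample input is made up): the tokenize module still yields COMMENT and NL tokens even though the C tokenizer does not emit them in its normal mode.

    import io
    import tokenize

    source = "# a comment\nx = 1  # trailing comment\n"
    # The compatibility layer maps C-tokenizer output back onto the classic
    # tokenize token stream, including COMMENT and the non-semantic NL tokens.
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        print(tokenize.tok_name[tok.type], repr(tok.string))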

(#104660)

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>

(GH-104136)

Co-authored-by: sunmy2019 <59365878+sunmy2019@users.noreply.github.com>
Co-authored-by: Ken Jin <kenjin@python.org>
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>

(GH-103896)
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>

It turns out we always need to remember/restore the fstring buffers in all of the stack of tokenizer modes, because they might change to `TOK_REGULAR_MODE` and have newlines inside the braces (which is when we need to reallocate the buffer and restore the fstring ones).

Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
Co-authored-by: Marta Gómez Macías <mgmacias@google.com>
Co-authored-by: sunmy2019 <59365878+sunmy2019@users.noreply.github.com>

(GH-99893)
Automerge-Triggered-By: GH:pablogsal

fill the available buffer (#99605)

Replace Py_INCREF() with Py_NewRef() in C files of the Parser/ directory and in the PEG generator.

Right now, the tokenizer only returns the type and two pointers to the start and end of the token. This PR modifies the tokenizer to return the type and set all of the necessary information, so that the parser does not have to do this.

Automerge-Triggered-By: GH:pablogsal

This makes tokenizer.c:valid_utf8 match stringlib/codecs.h:decode_utf8.
It also fixes an off-by-one error introduced in 3.10 for the line number when the tokenizer reports bad UTF-8.

gh-94360: Fix a tokenizer crash when reading encoded files with syntax errors from stdin (#94386)
Signed-off-by: Pablo Galindo <pablogsal@gmail.com>
Co-authored-by: Łukasz Langa <lukasz@langa.pl>

It combines PyImport_ImportModule() and PyObject_GetAttrString() and saves 4-6 lines of code on every use. Also add _PyImport_GetModuleAttr(), which takes Python strings as arguments.

* Replace deprecated Py_DebugFlag with PyConfig.parser_debug in the parser.
* Add Parser.debug member.
* Add tok_state.debug member.
* Py_FrozenMain(): Replace Py_VerboseFlag with PyConfig.verbose.

Remove the token.h header file. There was never any public tokenizer C API. The token.h header file was only designed to be used by Python internals.
Move Include/token.h to Include/internal/pycore_token.h. Including this header file now requires that the Py_BUILD_CORE macro is defined. It no longer checks for the Py_LIMITED_API macro.
Rename functions:
* PyToken_OneChar() => _PyToken_OneChar()
* PyToken_TwoChars() => _PyToken_TwoChars()
* PyToken_ThreeChars() => _PyToken_ThreeChars()

The warning emitted by the Python parser for a numeric literal immediately followed by a keyword has been changed from a deprecation warning to a syntax warning.
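
A minimal sketch of the observable change (the snippet "0in []", where the keyword in directly follows the literal 0, is an illustrative example; the warning category is what the commit changes):

    import warnings

    # Compiling a numeric literal immediately followed by a keyword should now
    # record a SyntaxWarning instead of a DeprecationWarning.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        compile("print(0in [])", "<example>", "exec")
    print([type(w.message).__name__ for w in caught])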

_PyTokenizer_FindEncodingFilename (GH-32033)
WASI does not have dup() and Emscripten's emulation is slow.