gh-107967: Fix infinite recursion on invalid escape sequence warning (GH-107968) (#107970)
(cherry picked from commit d66bc9e8a7a8d6774d912a4b9d151885c4d8de1d)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

Co-authored-by: Menelaos Kotoglou <contact@menelaoskotoglou.com>

(GH-105939) (#105941)
(cherry picked from commit 6586cee27f32f0354fe4e77c7b8c6e399329b5e2)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

tokenizer (GH-105828) (#105832)
(cherry picked from commit d382ad49157b3802fc5619f68d96810def517869)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

(GH-105728) (#105729)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

0-prefixed literals (GH-105555) (#105602)
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>

NEWLINE tokens (GH-105364) (#105367)

gh-105069: Add a readline-like callable to the tokenizer to consume input iteratively (GH-105070) (#105119)
(cherry picked from commit 9216e69a87d16d871625721ed5a8aa302511f367)
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>

gh-105042: Disable unmatched parens syntax error in python tokenize (GH-105061) (#105120)
(cherry picked from commit 70f315c2d6de87b0514ce16cc00a91a5b60a6098)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

gh-105017: Include CRLF lines in strings and column numbers (GH-105030) (#105041)
(cherry picked from commit 96fff35325e519cc76ffacf22e57e4c393d4446f)
Co-authored-by: Marta Gómez Macías <mgmacias@google.com>
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>

(GH-105022) (#105023)
Co-authored-by: Marta Gómez Macías <mgmacias@google.com>
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>

gh-104866: Tokenize should emit NEWLINE after exiting block with comment (GH-104870) (#104872)
(cherry picked from commit c90a862cdcf55dc1753c6466e5fa4a467a13ae24)
Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>

* Support for conversion specifiers o (octal) and X (uppercase hexadecimal).
* Support for length modifiers j (intmax_t) and t (ptrdiff_t).
* Length modifiers are now applied to all integer conversions.
* Support for wchar_t C strings (%ls and %lV).
* Support for variable width and precision (*).
* Support for flag - (left alignment).
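
The entry above does not name the API it extends, but the %lV specifier is particular to PyUnicode_FromFormat(), so the following sketch assumes these additions refer to that function (CPython 3.12+):

```c
#include <Python.h>

/* Sketch: exercising the conversions listed above, assuming they belong to
 * PyUnicode_FromFormat(). */
static PyObject *
format_demo(void)
{
    /* o (octal) and X (uppercase hexadecimal) conversions */
    PyObject *a = PyUnicode_FromFormat("%o %X", 0644, 0xBEEF);
    /* variable width via '*' combined with the '-' (left alignment) flag */
    PyObject *b = PyUnicode_FromFormat("[%-*d]", 8, 42);
    /* wchar_t C string */
    PyObject *c = PyUnicode_FromFormat("%ls", L"wide chars");
    if (a == NULL || b == NULL || c == NULL) {
        Py_XDECREF(a); Py_XDECREF(b); Py_XDECREF(c);
        return NULL;
    }
    PyObject *res = PyUnicode_FromFormat("%U | %U | %U", a, b, c);
    Py_DECREF(a); Py_DECREF(b); Py_DECREF(c);
    return res;
}
```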

This commit replaces the Python implementation of the tokenize module with an
implementation that reuses the real C tokenizer via a private extension module.
The tokenize module now implements a compatibility layer that transforms tokens
from the C tokenizer into Python tokenize tokens for backward compatibility.
As the C tokenizer does not emit some tokens that the Python tokenizer provides
(such as comments and non-semantic newlines), a new special mode has been added
to the C tokenizer; it is currently used only via the extension module that
exposes it to the Python layer. This mode forces the C tokenizer to emit the
extra tokens and attach the metadata needed to match the old Python
implementation.
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>

(#104660)

Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>

(GH-104136)

Co-authored-by: sunmy2019 <59365878+sunmy2019@users.noreply.github.com>
Co-authored-by: Ken Jin <kenjin@python.org>
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>

(GH-103896)
Co-authored-by: Pablo Galindo <pablogsal@gmail.com>

It turns out we always need to remember/restore the f-string buffers for every
mode on the tokenizer's mode stack, because any of them might change to
`TOK_REGULAR_MODE` and contain newlines inside the braces, which is when we
need to reallocate the buffer and restore the f-string ones.

Co-authored-by: Lysandros Nikolaou <lisandrosnik@gmail.com>
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
Co-authored-by: Marta Gómez Macías <mgmacias@google.com>
Co-authored-by: sunmy2019 <59365878+sunmy2019@users.noreply.github.com>

(GH-99893)
Automerge-Triggered-By: GH:pablogsal

fill the available buffer (#99605)

Replace Py_INCREF() with Py_NewRef() in C files of the Parser/ directory and
in the PEG generator.
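
The rewrite is mechanical; a minimal sketch of the before/after pattern:

```c
#include <Python.h>

/* Before: incrementing the reference count and handing the object back
 * took two statements. */
static PyObject *
identity_before(PyObject *obj)
{
    Py_INCREF(obj);
    return obj;
}

/* After: Py_NewRef() (added in Python 3.10) increments and returns the
 * same pointer in a single expression. */
static PyObject *
identity_after(PyObject *obj)
{
    return Py_NewRef(obj);
}
```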

Right now, the tokenizer only returns the token type and two pointers to the
start and end of the token. This PR modifies the tokenizer to return the type
and set all of the necessary information, so that the parser does not have to
do this.
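
A hypothetical sketch of the idea (the field names are illustrative, not the
actual private CPython definitions): rather than the parser re-deriving
positions and metadata from two raw pointers, the tokenizer hands back a
filled-in record.

```c
/* Hypothetical token record; the real struct lives in the private
 * tokenizer headers and may differ. */
struct token_info {
    int type;                        /* token kind (NAME, NUMBER, ...) */
    const char *start;               /* first byte of the token */
    const char *end;                 /* one past the last byte */
    int lineno, col_offset;          /* start position */
    int end_lineno, end_col_offset;  /* end position */
};
```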

Automerge-Triggered-By: GH:pablogsal

This makes tokenizer.c:valid_utf8 match stringlib/codecs.h:decode_utf8.
It also fixes an off-by-one error, introduced in 3.10, in the line number
reported when the tokenizer encounters bad UTF-8.
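
As a rough illustration of the kind of validation involved, here is a generic
UTF-8 sequence check (not CPython's actual code; it does not reject overlong
encodings or surrogates):

```c
#include <stddef.h>

/* Expected length of a UTF-8 sequence, from its lead byte; 0 = invalid. */
static int
utf8_len(unsigned char lead)
{
    if (lead < 0x80) return 1;            /* 0xxxxxxx: ASCII */
    if ((lead & 0xE0) == 0xC0) return 2;  /* 110xxxxx */
    if ((lead & 0xF0) == 0xE0) return 3;  /* 1110xxxx */
    if ((lead & 0xF8) == 0xF0) return 4;  /* 11110xxx */
    return 0;                             /* continuation or invalid byte */
}

/* Check one sequence; returns its length, or 0 if malformed or truncated. */
static int
check_utf8_sequence(const unsigned char *s, size_t avail)
{
    int n = utf8_len(s[0]);
    if (n == 0 || (size_t)n > avail) {
        return 0;
    }
    for (int i = 1; i < n; i++) {
        if ((s[i] & 0xC0) != 0x80) {      /* continuations are 10xxxxxx */
            return 0;
        }
    }
    return n;
}
```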

gh-94360: Fix a tokenizer crash when reading encoded files with syntax errors from stdin (#94386)
Signed-off-by: Pablo Galindo <pablogsal@gmail.com>
Co-authored-by: Łukasz Langa <lukasz@langa.pl>

It combines PyImport_ImportModule() and PyObject_GetAttrString() and saves
4-6 lines of code on every use.
Also add _PyImport_GetModuleAttr(), which takes Python strings as arguments.
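
The subject line of this entry is lost above, but the helper it describes
folds a very common pattern into one call. A sketch of the pattern being
replaced, using only public C-API calls:

```c
#include <Python.h>

/* Import a module, fetch one attribute, drop the module reference --
 * the 4-6 lines the new helper saves at every call site. */
static PyObject *
get_module_attr(const char *module, const char *attr)
{
    PyObject *mod = PyImport_ImportModule(module);
    if (mod == NULL) {
        return NULL;
    }
    PyObject *value = PyObject_GetAttrString(mod, attr);
    Py_DECREF(mod);
    return value;
}
```

For example, get_module_attr("warnings", "warn") stands in for the usual
import-then-getattr dance.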

* Replace deprecated Py_DebugFlag with PyConfig.parser_debug in the parser.
* Add Parser.debug member.
* Add tok_state.debug member.
* Py_FrozenMain(): Replace Py_VerboseFlag with PyConfig.verbose.
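
A minimal embedding sketch of the PyConfig route (note that parser debug
output also depends on compilation options, as with the old flag):

```c
#include <Python.h>

int
main(void)
{
    PyConfig config;
    PyConfig_InitPythonConfig(&config);
    config.parser_debug = 1;   /* replaces setting the deprecated Py_DebugFlag */

    PyStatus status = Py_InitializeFromConfig(&config);
    PyConfig_Clear(&config);
    if (PyStatus_Exception(status)) {
        Py_ExitStatusException(status);
    }
    PyRun_SimpleString("pass");   /* the parser now runs with debugging on */
    Py_Finalize();
    return 0;
}
```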

Remove the token.h header file. There was never any public tokenizer
C API. The token.h header file was only designed to be used by Python
internals.
Move Include/token.h to Include/internal/pycore_token.h. Including
this header file now requires that the Py_BUILD_CORE macro is
defined. It no longer checks for the Py_LIMITED_API macro.
Rename functions:
* PyToken_OneChar() => _PyToken_OneChar()
* PyToken_TwoChars() => _PyToken_TwoChars()
* PyToken_ThreeChars() => _PyToken_ThreeChars()
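
Concretely, a consumer of the header now has to look roughly like this
(sketch; the exact include path depends on the build setup):

```c
/* Only code built as part of CPython core may use the header now. */
#define Py_BUILD_CORE
#include "pycore_token.h"   /* formerly: #include <token.h> */
```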

The warning emitted by the Python parser for a numeric literal immediately
followed by a keyword (for example, "0in x") has been changed from a
DeprecationWarning to a SyntaxWarning.

_PyTokenizer_FindEncodingFilename (GH-32033)
WASI does not have dup() and Emscripten's emulation is slow.

(GH-31479)
Fix parsing of a numeric literal immediately (without spaces) followed by the
"not in" keywords, as in "1not in x". Now the parser only emits a warning,
not a syntax error.

bpo-46541: Replace core use of _Py_IDENTIFIER() with statically initialized global objects. (gh-30928)
We're no longer using _Py_IDENTIFIER() (or _Py_static_string()) in any core CPython code. It is still used in a number of non-builtin stdlib modules.
The replacement is: PyUnicodeObject (not pointer) fields under _PyRuntimeState, statically initialized as part of _PyRuntime. A new _Py_GET_GLOBAL_IDENTIFIER() macro facilitates lookup of the fields (along with _Py_GET_GLOBAL_STRING() for non-identifier strings).
https://bugs.python.org/issue46541#msg411799 explains the rationale for this change.
The core of the change is in:
* (new) Include/internal/pycore_global_strings.h - the declarations for the global strings, along with the macros
* Include/internal/pycore_runtime_init.h - added the static initializers for the global strings
* Include/internal/pycore_global_objects.h - where the struct in pycore_global_strings.h is hooked into _PyRuntimeState
* Tools/scripts/generate_global_objects.py - added generation of the global string declarations and static initializers
I've also added a --check flag to generate_global_objects.py (along with make check-global-objects) to check for unused global strings. That check is added to the PR CI config.
The remainder of this change updates the core code to use _Py_GET_GLOBAL_IDENTIFIER() instead of _Py_IDENTIFIER() and the related _Py*Id functions (likewise for _Py_GET_GLOBAL_STRING() instead of _Py_static_string()). This includes adding a few functions where there wasn't already an alternative to _Py*Id(), replacing the _Py_Identifier * parameter with PyObject *.
The following are not changed (yet):
* stop using _Py_IDENTIFIER() in the stdlib modules
* (maybe) get rid of _Py_IDENTIFIER(), etc. entirely -- this may not be doable, as at least one package on PyPI is using this (private) API
* (maybe) intern the strings during runtime init
https://bugs.python.org/issue46541
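
For reference, the retired use-site pattern looked like the sketch below
(pre-change private API; the exact expansion of the replacement macro is
internal, so it is only referenced in a comment):

```c
#include <Python.h>

/* Old pattern: a lazily interned string declared at each use site. */
static PyObject *
call_upper(PyObject *obj)
{
    _Py_IDENTIFIER(upper);   /* declares a static _Py_Identifier PyId_upper */
    return _PyObject_CallMethodId(obj, &PyId_upper, NULL);
    /* After this change, core code instead reaches a statically
     * initialized, interned string via _Py_GET_GLOBAL_IDENTIFIER(upper). */
}
```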

@pablogsal, sorry I failed to rebase to main, so I recreated https://github.com/python/cpython/pull/22190#issuecomment-1024633392
> The PyRun_InteractiveOne*() functions allow explicitly setting an fd instead
of stdin, but stdin was hardcoded in the readline call.
> This patch does not fix the target file for the prompt, unlike the original
bpo patch: the prompt fd is unrelated to the tokenizer source, which could be
read-only. It is more of a docs-driven bugfix: the documentation says "prompt
the user", so one would expect the prompt to go to stdout, not to a file, for
both PyRun_InteractiveOne*() and PyRun_InteractiveLoop*().
Automerge-Triggered-By: GH:pablogsal