| Commit message | Author | Age | Files | Lines |
(GH-129135)
(#129700)
(gh-129563)
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
(gh-129738)
The read of `shared->array` should happen under the lock to avoid a race.
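The fix itself is in C, but the locking discipline generalizes: a reader must take the same lock that writers hold when swapping the shared structure, or it can observe a stale or inconsistent view. A minimal Python sketch of that discipline (the names here are illustrative, not CPython's):

```python
import threading

class Shared:
    """Illustrative stand-in for a C struct whose `array` field is
    swapped out by writers while holding `lock`."""

    def __init__(self):
        self.lock = threading.Lock()
        self.array = [0, 0, 0]

    def replace(self, new_array):
        with self.lock:          # writers mutate under the lock
            self.array = new_array

    def snapshot(self):
        with self.lock:          # readers take the same lock
            return list(self.array)

shared = Shared()
shared.replace([1, 2, 3])
print(shared.snapshot())         # [1, 2, 3]
```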
Co-authored-by: Garrett Gu <garrettgu777@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
For the free-threaded version of the cyclic GC, restructure the "mark alive" phase to use software prefetch instructions. This gives a speedup in most cases when the number of objects is large enough. The prefetching is enabled conditionally based on the number of long-lived objects the GC finds.
Replace PyErr_WriteUnraisable() with PyErr_FormatUnraisable().
Update tests:
* test_coroutines
* test_exceptions
* test_generators
* test_struct
* Replace PyImport_ImportModule() + PyObject_GetAttr() with
PyImport_ImportModuleAttr().
* Replace PyImport_ImportModule() + PyObject_GetAttrString() with
PyImport_ImportModuleAttrString().
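In Python terms, the combined call imports a module and fetches one attribute from it in a single step. The two-call pattern being replaced looks roughly like this at the Python level (`math.sqrt` is used purely as an example):

```python
import importlib

# Old two-step pattern: import the module, then fetch the attribute.
# PyImport_ImportModuleAttrString("math", "sqrt") collapses this into one call.
mod = importlib.import_module("math")
sqrt = getattr(mod, "sqrt")

print(sqrt(9.0))  # 3.0
```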
Expand out SETLOCAL so that the code generator can see the decref. Mark Py_CLEAR as escaping.
(GH-129618)
`Python/flowgraph.c::optimize_if_const_subscr` (#129634)
Move folding of constant subscription from AST optimizer to CFG.
Co-authored-by: Irit Katriel <1055913+iritkatriel@users.noreply.github.com>
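The effect of this folding is observable from Python: subscripting a constant tuple with a constant index compiles down to loading the result directly. A quick check (the exact bytecode varies by version, but the folded value shows up in `co_consts`):

```python
import dis

code = compile("(1, 2, 3)[1]", "<example>", "eval")
dis.dis(code)                # typically just LOAD_CONST 2 / RETURN_VALUE
print(2 in code.co_consts)   # True: the folded result is stored as a constant
```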
(GH-129608)
* Remove support for GO_TO_INSTRUCTION
Co-authored-by: Kirill Podoprigora <kirill.bast9@mail.ru>
CFG (#129426)
The codegen phase has an optimization that transforms
```
LOAD_CONST x
LOAD_CONST y
LOAD_CONST z
BUILD_LIST/BUILD_SET (3)
```
->
```
BUILD_LIST/BUILD_SET (0)
LOAD_CONST (x, y, z)
LIST_EXTEND/SET_UPDATE 1
```
This optimization has now been moved to the CFG phase to make #128802 work.
Co-authored-by: Irit Katriel <1055913+iritkatriel@users.noreply.github.com>
Co-authored-by: Yan Yanchii <yyanchiy@gmail.com>
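The transformation is observable with `dis` on any recent CPython (3.9 or later), regardless of which compiler phase performs it:

```python
import dis

# On CPython 3.9+, a list display of constants compiles to an empty
# BUILD_LIST, one LOAD_CONST of the whole tuple, and a LIST_EXTEND.
code = compile("[1, 2, 3]", "<example>", "eval")
ops = [ins.opname for ins in dis.get_instructions(code)]
print("LIST_EXTEND" in ops)          # True on CPython 3.9+
print((1, 2, 3) in code.co_consts)   # the constants were collapsed into one tuple
```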
interpreter. (GH-129525)
Co-authored-by: Kumar Aditya <kumaraditya@python.org>
`PySys_AddWarnOptionUnicode` (#126118)
Replace PyErr_WriteUnraisable() with PyErr_FormatUnraisable().
Simplify recursion check in _PyEval_EvalFrameDefault
Replace "on verb+ing" with "while verb+ing".
Replace PyErr_WriteUnraisable() with PyErr_FormatUnraisable().
* Add PyImport_ImportModuleAttr() and
PyImport_ImportModuleAttrString() functions.
* Add unit tests.
* Replace _PyImport_GetModuleAttr()
with PyImport_ImportModuleAttr().
* Replace _PyImport_GetModuleAttrString()
with PyImport_ImportModuleAttrString().
* Remove "pycore_import.h" includes, no longer needed.
The stack pointers in interpreter frames are nearly always valid now, so
use them when visiting each thread's frame. For now, don't collect
objects with deferred references in the rare case that we see a frame
with a NULL stack pointer.
Enable free-threaded specialization of LOAD_CONST.
* Remove all 'if (0)' and 'if (1)' conditional stack effects
* Use array instead of conditional for BUILD_SLICE args
* Refactor LOAD_GLOBAL to use a common conditional uop
* Remove conditional stack effects from LOAD_ATTR specializations
* Replace conditional stack effects in LOAD_ATTR with a 0- or 1-sized array
* Remove conditional stack effects from CALL_FUNCTION_EX
(#128971)
* Remove compiler workaround
* Remove _Py_USING_PGO
Remove unused DPRINTF in ceval.c
Since tracemalloc uses PyMutex, it becomes safe to use TABLES_LOCK()
even after _PyTraceMalloc_Fini(): remove the "pre-check" in
PyTraceMalloc_Track() and PyTraceMalloc_Untrack().
PyTraceMalloc_Untrack() no longer needs to acquire the GIL.
_PyTraceMalloc_Fini() can be called earlier during Python
finalization.
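These are C-API internals, but the machinery they back is the standard `tracemalloc` module; at the Python level, the tracking that the hooks feed looks like this:

```python
import tracemalloc

tracemalloc.start()
blocks = [bytes(1000) for _ in range(100)]     # ~100 KB of tracked allocations
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(current > 0 and peak >= current)         # True
```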
We also clean up `PyCodec_StrictErrors` and the error message rendered
when an object of incorrect type is passed to codec error handlers.
This fixes how `PyCodec_BackslashReplaceErrors` handles the `start` and `end`
attributes of `UnicodeError` objects via the `_PyUnicodeError_GetParams` helper.
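At the Python level, the `start` and `end` attributes that the fix reads delimit the span of offending data an error handler must replace; for example:

```python
# A UnicodeDecodeError carries the codec name, the data, and the
# start/end offsets of the bad span that a handler must deal with.
err = UnicodeDecodeError("ascii", b"ab\xffcd", 2, 3, "invalid byte")
print(err.start, err.end)  # 2 3

# backslashreplace is one of the handlers that consumes those offsets:
print("ab\xff".encode("ascii", errors="backslashreplace"))
```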
Always build tracemalloc with PyMem_RawMalloc() hooks.
Support calling PyTraceMalloc_Track() and PyTraceMalloc_Untrack()
during late Python finalization.
* Call _PyTraceMalloc_Fini() later in Python finalization.
* Also test PyTraceMalloc_Untrack() without the GIL.
* PyTraceMalloc_Untrack() now acquires the GIL.
* Also test PyTraceMalloc_Untrack() in test_tracemalloc_track_race().
This fixes how `PyCodec_ReplaceErrors` handles the `start` and `end` attributes
of `UnicodeError` objects via the `_PyUnicodeError_GetParams` helper.
This fixes how `PyCodec_XMLCharRefReplaceErrors` handles the `start` and `end`
attributes of `UnicodeError` objects via the `_PyUnicodeError_GetParams` helper.