| Commit message | Author | Age | Files | Lines |

`_PyEval_EvalFrameDefault()` (GH-107535)
* Set the C recursion limit to 1500, set the cost of the eval loop to 2 frames, and the compiler multiplier to 2.
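For orientation, here is a minimal sketch (not code from this commit) of how C code in and around CPython guards recursive C calls; the limit being set to 1500 is the budget these calls draw on. The helper `nesting_depth()` is hypothetical.

```c
#include <Python.h>

/* Hypothetical recursive helper: each level charges the thread's C recursion
   budget (the limit set to 1500 by this change), so deeply nested input
   raises RecursionError instead of overflowing the C stack. */
static Py_ssize_t
nesting_depth(PyObject *obj)
{
    if (Py_EnterRecursiveCall(" while measuring nesting depth")) {
        return -1;  /* RecursionError is already set */
    }
    Py_ssize_t depth = 0;
    if (PyList_Check(obj) && PyList_GET_SIZE(obj) > 0) {
        Py_ssize_t inner = nesting_depth(PyList_GET_ITEM(obj, 0));
        if (inner < 0) {
            Py_LeaveRecursiveCall();
            return -1;
        }
        depth = inner + 1;
    }
    Py_LeaveRecursiveCall();
    return depth;
}
```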

There's no need to use a dummy uop to skip unused cache entries. The macro syntax lets you write `unused/1` instead.
Similarly, move `unused/5` from op `_LOAD_ATTR_INSTANCE_VALUE` to macro `LOAD_ATTR_INSTANCE_VALUE`.

cannot be disabled. (GH-107337)

* Ensures that exception handling events are balanced. Each [re]raise event has a matching unwind/handled event.

* Remove '#include "structmember.h"'.
* Remove '#include "structmember.h"'.
* If needed, add <stddef.h> to get the offsetof() macro.
* Update Parser/asdl_c.py to regenerate Python/Python-ast.c.
* Replace:
* T_SHORT => Py_T_SHORT
* T_INT => Py_T_INT
* T_LONG => Py_T_LONG
* T_FLOAT => Py_T_FLOAT
* T_DOUBLE => Py_T_DOUBLE
* T_STRING => Py_T_STRING
* T_OBJECT => _Py_T_OBJECT
* T_CHAR => Py_T_CHAR
* T_BYTE => Py_T_BYTE
* T_UBYTE => Py_T_UBYTE
* T_USHORT => Py_T_USHORT
* T_UINT => Py_T_UINT
* T_ULONG => Py_T_ULONG
* T_STRING_INPLACE => Py_T_STRING_INPLACE
* T_BOOL => Py_T_BOOL
* T_OBJECT_EX => Py_T_OBJECT_EX
* T_LONGLONG => Py_T_LONGLONG
* T_ULONGLONG => Py_T_ULONGLONG
* T_PYSSIZET => Py_T_PYSSIZET
* T_NONE => _Py_T_NONE
* READONLY => Py_READONLY
* PY_AUDIT_READ => Py_AUDIT_READ
* READ_RESTRICTED => Py_AUDIT_READ
* PY_WRITE_RESTRICTED => _Py_WRITE_RESTRICTED
* RESTRICTED => (READ_RESTRICTED | _Py_WRITE_RESTRICTED)
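For illustration, here is a PyMemberDef table spelled with the new names; the extension type and its fields are hypothetical, and <Python.h> now suffices since structmember.h is no longer needed.

```c
#include <Python.h>
#include <stddef.h>              // offsetof()

// Hypothetical extension object, used only to show the renamed constants.
typedef struct {
    PyObject_HEAD
    int interval;                // previously exposed with T_INT
    PyObject *callback;          // previously exposed with T_OBJECT_EX
} TimerObject;

// Old: {"interval", T_INT, offsetof(TimerObject, interval), READONLY}
static PyMemberDef Timer_members[] = {
    {"interval", Py_T_INT, offsetof(TimerObject, interval), Py_READONLY},
    {"callback", Py_T_OBJECT_EX, offsetof(TimerObject, callback), 0},
    {NULL}
};
```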

* Add pycore_setobject.h header file.
* Add pycore_setobject.h header file.
* Move the following API to the internal C API:
* _PySet_Dummy
* _PySet_NextEntry()
* _PySet_Update()
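A core-only sketch of what a caller inside CPython looks like after the move; the _PySet_NextEntry() signature is quoted from memory, so treat it as approximate.

```c
// Internal header: only available to code built as part of CPython
// (Py_BUILD_CORE), which is the point of the move.
#include "Python.h"
#include "pycore_setobject.h"    // _PySet_NextEntry(), _PySet_Update(), _PySet_Dummy

// Walk a set's entries without allocating an iterator object.
static Py_hash_t
sum_hashes(PyObject *set)
{
    Py_ssize_t pos = 0;
    PyObject *key;
    Py_hash_t hash, total = 0;
    while (_PySet_NextEntry(set, &pos, &key, &hash)) {
        total += hash;           // hash of each element; `key` is borrowed
    }
    return total;
}
```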

bytecodes.c (#106758)
bytecodes.c (#106758)

By turning `assert(kwnames == NULL)` into a macro that is not in the "forbidden" list, many instructions that formerly were skipped because they contained such an assert (but no other mention of `kwnames`) are now supported in Tier 2. This covers 10 instructions in total (all specializations of `CALL` that invoke some C code):
By turning `assert(kwnames == NULL)` into a macro that is not in the "forbidden" list, many instructions that formerly were skipped because they contained such an assert (but no other mention of `kwnames`) are now supported in Tier 2. This covers 10 instructions in total (all specializations of `CALL` that invoke some C code):
- `CALL_NO_KW_TYPE_1`
- `CALL_NO_KW_STR_1`
- `CALL_NO_KW_TUPLE_1`
- `CALL_NO_KW_BUILTIN_O`
- `CALL_NO_KW_BUILTIN_FAST`
- `CALL_NO_KW_LEN`
- `CALL_NO_KW_ISINSTANCE`
- `CALL_NO_KW_METHOD_DESCRIPTOR_O`
- `CALL_NO_KW_METHOD_DESCRIPTOR_NOARGS`
- `CALL_NO_KW_METHOD_DESCRIPTOR_FAST`
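Schematically, the change boils down to wrapping the assert in a named macro so the generator's scan for forbidden identifiers no longer sees `kwnames` in the instruction body; the macro name below is illustrative.

```c
// The case generator skips instructions whose source text mentions forbidden
// names such as kwnames. A macro hides the mention while keeping the check.
#define ASSERT_KWNAMES_IS_NULL() assert(kwnames == NULL)

// Inside a CALL specialization in bytecodes.c (schematic):
//     ASSERT_KWNAMES_IS_NULL();      // was: assert(kwnames == NULL);
//     ... rest of the instruction body unchanged ...
```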

From `family(opname, STRUCTSIZE) = OPNAME + SPEC1 + ... + SPECn;`
From `family(opname, STRUCTSIZE) = OPNAME + SPEC1 + ... + SPECn;`
to `family(OPNAME, STRUCTSIZE) = SPEC1 + ... + SPECn;`
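Concretely, with illustrative member names (the specialization list and cache-size constant are not meant to be exact or exhaustive):

```c
// Before: the generic opcode also appeared on the right-hand side.
//   family(load_attr, INLINE_CACHE_ENTRIES_LOAD_ATTR) =
//       LOAD_ATTR + LOAD_ATTR_INSTANCE_VALUE + LOAD_ATTR_MODULE;
// After: the generic opcode is named by the family itself.
family(LOAD_ATTR, INLINE_CACHE_ENTRIES_LOAD_ATTR) =
    LOAD_ATTR_INSTANCE_VALUE +
    LOAD_ATTR_MODULE;
```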
The Tier 2 opcodes _IS_ITER_EXHAUSTED_LIST (and _TUPLE)
didn't set it->it_seq to NULL, a subtle bug that caused
test_exhausted_iterator in list_tests.py to fail when
running all tests with -Xuops.
The bug was introduced in gh-106696.
Added this as an explicit test.
Also fixed the dependencies for ceval.o -- it depends on executor_cases.c.h.
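A schematic of the invariant the fix restores, using CPython's list-iterator fields (it_index, it_seq); the exact uop body differs, and the internal header name is from memory.

```c
#include "Python.h"
#include "pycore_list.h"         // _PyListIterObject (internal; name from memory)

/* Other code treats "it_seq == NULL" as the exhausted state, so the
   exhaustion check must clear it_seq the first time it fires. */
static int
list_iter_is_exhausted(_PyListIterObject *it)
{
    if (it->it_seq == NULL) {
        return 1;                          /* already marked exhausted */
    }
    if (it->it_index >= PyList_GET_SIZE(it->it_seq)) {
        Py_CLEAR(it->it_seq);              /* the step the buggy uop omitted */
        return 1;
    }
    return 0;
}
```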

Also rename `_ITER_EXHAUSTED_XXX` to `_IS_ITER_EXHAUSTED_XXX` to make it clear this is a test.
Also rename `_ITER_EXHAUSTED_XXX` to `_IS_ITER_EXHAUSTED_XXX` to make it clear this is a test.

This moves EXIT_TRACE, SAVE_IP, JUMP_TO_TOP, and
This moves EXIT_TRACE, SAVE_IP, JUMP_TO_TOP, and
_POP_JUMP_IF_{FALSE,TRUE} from ceval.c to bytecodes.c.
They are no less special than before, but this way
they are discoverable to the copy-and-patch tooling.

For an example of what this does for Tier 1 and Tier 2, see
For an example of what this does for Tier 1 and Tier 2, see
https://github.com/python/cpython/issues/106529#issuecomment-1631649920
Also add PyMapping_GetOptionalItemString() function.
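A small usage sketch of the new call; the surrounding helper and the "timeout" key are hypothetical, but the three-way return convention (1 found, 0 missing, -1 error) follows the documented PyMapping_GetOptionalItem() family.

```c
#include <Python.h>

/* Fetch mapping["timeout"], falling back to a default when the key is
   absent, without raising and catching KeyError. */
static PyObject *
timeout_or_default(PyObject *mapping, PyObject *default_value)
{
    PyObject *value;
    int rc = PyMapping_GetOptionalItemString(mapping, "timeout", &value);
    if (rc < 0) {
        return NULL;                      /* real error: exception already set */
    }
    if (rc == 0) {
        return Py_NewRef(default_value);  /* key missing: no exception was set */
    }
    return value;                         /* new reference */
}
```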

* Convert PyObject_DelAttr() and PyObject_DelAttrString() macros to
* Convert PyObject_DelAttr() and PyObject_DelAttrString() macros to
functions.
* Add PyObject_DelAttr() and PyObject_DelAttrString() functions to
the stable ABI.
* Replace PyObject_SetAttr(obj, name, NULL) with
PyObject_DelAttr(obj, name).
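A before/after sketch of the replacement described in the last bullet; the attribute name is made up.

```c
#include <Python.h>

/* Old spelling:  PyObject_SetAttrString(obj, "_cache", NULL)
   New spelling:  use the deletion function directly. */
static int
drop_cache(PyObject *obj)
{
    if (PyObject_DelAttrString(obj, "_cache") < 0) {
        return -1;    /* AttributeError (or another error) is set */
    }
    return 0;
}
```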

* Add two more specializations of LOAD_ATTR.
* Add two more specializations of LOAD_ATTR.

When `_PyOptimizer_BackEdge` returns `NULL`, we should restore `next_instr` (and `stack_pointer`). To accomplish this we should jump to `resume_with_error` instead of just `error`.
When `_PyOptimizer_BackEdge` returns `NULL`, we should restore `next_instr` (and `stack_pointer`). To accomplish this we should jump to `resume_with_error` instead of just `error`.
The problem this causes is subtle -- the only repro I have is in PR gh-106393, at commit d7df54b139bcc47f5ea094bfaa9824f79bc45adc. But the fix is real (as shown later in that PR).
While we're at it, also improve the debug output: the offsets at which traces are identified are now measured in bytes, and always show the start offset. This makes it easier to correlate executor calls with optimizer calls, and either with `dis` output.
* Issue: gh-104584

* Check eval-breaker in ENTER_EXECUTOR.
* Check eval-breaker in ENTER_EXECUTOR.
* Make sure that frame->prev_instr is set before entering executor.
Replace _PyObject_FastCall() calls with PyObject_Vectorcall().
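For reference, a minimal sketch of the public replacement; `call_two()` and its arguments are hypothetical.

```c
#include <Python.h>

/* Call func(arg0, arg1) using the vectorcall convention, the public
   replacement for the private _PyObject_FastCall(). */
static PyObject *
call_two(PyObject *func, PyObject *arg0, PyObject *arg1)
{
    PyObject *args[] = {arg0, arg1};
    return PyObject_Vectorcall(func, args, 2, NULL);   /* NULL => no kwnames */
}
```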

This enables super-instruction formation,
This enables super-instruction formation,
removal of checks for uninitialized variables,
and frees up an instruction.
Added a new, experimental, tracing optimizer and interpreter (a.k.a. "tier 2"). This currently pessimizes performance, so don't use it yet -- this is infrastructure so we can experiment with optimization passes. To enable it, pass ``-Xuops`` or set ``PYTHONUOPS=1``. To get debug output, set ``PYTHONUOPSDEBUG=N`` where ``N`` is a debug level (0-4, where 0 is no debug output and 4 is excessively verbose).
All of this code is likely to change dramatically before the 3.13 feature freeze. But this is a first step.

* Add test for long loops
* Add test for long loops
* Clear ENTER_EXECUTOR when deopting code objects.

internal frame. (GH-105727)
internal frame. (GH-105727)
* Add table describing possible executable classes for out-of-process debuggers.
* Remove shim code object creation code as it is no longer needed.
* Make lltrace a bit more robust w.r.t. non-standard frames.

opcode.hasarg/hasname/hasconst (#105482)
opcode.hasarg/hasname/hasconst (#105482)

(GH-105680)
(GH-105680)

* Remove LOAD_CONST__LOAD_FAST and LOAD_FAST__LOAD_CONST superinstructions.
* Remove LOAD_CONST__LOAD_FAST and LOAD_FAST__LOAD_CONST superinstructions.

equivalent. (GH-105230)
equivalent. (GH-105230)

Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>