Commit message
| |
`unused`, but is used and thus cached in a prior uop. (#121788)
* Reject uop definitions that declare as 'unused' values that are already cached by prior uops
* Track which variables are defined and only load from memory when needed
* Support explicit `flush` in macro definitions.
* Make sure the stack is flushed where needed.
| |
(GH-121483)
| |
Avoids the extra conversion from stack refs to PyObjects.
| |
Fix a few wrong steals in bytecodes.c
| |
Fix warnings when using the -Wimplicit-fallthrough compiler flag.
Explicitly annotate "fall through" switch cases with a new
_Py_FALLTHROUGH macro, which uses __attribute__((fallthrough)) if
available, and replace "fall through" comments with _Py_FALLTHROUGH.
Add a _Py__has_attribute() macro; no longer define the __has_attribute()
macro if it's not already defined. Also move _Py__has_builtin() to the
top of pyport.h.
Co-Authored-By: Nikita Sobolev <mail@sobolevn.me>
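A minimal sketch of how such a fallthrough macro can be defined and used (the exact guards in pyport.h may differ; this is illustrative, not the actual definition):

```c
/* Sketch only: detect the attribute, then expand to it or to nothing. */
#ifdef __has_attribute
#  define _Py__has_attribute(x) __has_attribute(x)
#else
#  define _Py__has_attribute(x) 0
#endif

#if _Py__has_attribute(fallthrough)
#  define _Py_FALLTHROUGH __attribute__((fallthrough))
#else
#  define _Py_FALLTHROUGH do { } while (0)
#endif

/* Usage: replaces a bare "fall through" comment inside a switch. */
static int
classify(int kind)
{
    int weight = 0;
    switch (kind) {
    case 2:
        weight += 10;
        _Py_FALLTHROUGH;
    case 1:
        weight += 1;
        break;
    default:
        weight = -1;
    }
    return weight;
}
```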
| |
This PR sets up tagged pointers for CPython.
The general idea is to create a separate struct _PyStackRef for everything on the evaluation stack to store the tag bits. This forces the C compiler to warn us if we try to cast things or pull things out of the struct directly.
Only for free threading: we tag the low bit if something is deferred - that means we skip incref and decref operations on it. This behavior may change in the future if Mark's plan to defer all objects in the interpreter loop pans out.
This implies that a strict stack reference discipline is required: ALL incref and decref operations on stackrefs must use the stackref variants. It is unsafe to untag something and then do normal incref/decref ops on it.
The new incref and decref variants are called dup and close. They mimic a "handle" API operating on these stackrefs.
Please read Include/internal/pycore_stackref.h for more information!
---------
Co-authored-by: Mark Shannon <9448417+markshannon@users.noreply.github.com>
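An illustrative sketch of the scheme (the real definitions are in Include/internal/pycore_stackref.h and differ in detail, e.g. in how immortal objects are handled):

```c
#include "Python.h"
#include <stdint.h>

/* Sketch only: a stack ref is an object pointer plus a tag in the low bit.
   Wrapping it in a struct makes the compiler reject accidental casts
   between PyObject * and _PyStackRef. */
typedef struct _PyStackRef {
    uintptr_t bits;
} _PyStackRef;

#define Py_TAG_DEFERRED 1  /* low bit set: skip incref/decref (free threading) */

static inline PyObject *
PyStackRef_AsPyObjectBorrow(_PyStackRef ref)
{
    return (PyObject *)(ref.bits & ~(uintptr_t)Py_TAG_DEFERRED);
}

/* "dup" and "close" play the roles of incref and decref for stack refs;
   they are no-ops for deferred (tagged) references. */
static inline _PyStackRef
PyStackRef_DUP(_PyStackRef ref)
{
    if (!(ref.bits & Py_TAG_DEFERRED)) {
        Py_INCREF(PyStackRef_AsPyObjectBorrow(ref));
    }
    return ref;
}

static inline void
PyStackRef_CLOSE(_PyStackRef ref)
{
    if (!(ref.bits & Py_TAG_DEFERRED)) {
        Py_DECREF(PyStackRef_AsPyObjectBorrow(ref));
    }
}
```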
| |
(#120640)
* Remove BEFORE_WITH and BEFORE_ASYNC_WITH instructions.
* Add LOAD_SPECIAL instruction
* Reimplement `with` and `async with` statements using LOAD_SPECIAL
| |
* Rename _POP_FRAME to _RETURN_VALUE as it returns a value as well as popping a frame.
* Remove remaining _POP_FRAMEs
| |
* Add docs for new APIs
* Add soft-deprecation notices
* Add What's New porting entries
* Update comments referencing `PyFrame_LocalsToFast()` to mention the proxy instead
* Other related cleanups found when looking for refs to the deprecated APIs
| |
Support non-dict globals in LOAD_FROM_DICT_OR_GLOBALS
The implementation basically copies LOAD_GLOBAL. Possibly it could be deduplicated,
but that seems like it may get hairy since the two operations have different operands.
This is important to fix in 3.14 for PEP 649, but it's a bug in earlier versions too,
and we should backport to 3.13 and 3.12 if possible.
| |
(GH-119510)
| |
(#119677)
| |
The PEP 649 implementation will require a way to load NotImplementedError
from the bytecode. @markshannon suggested implementing this by converting
LOAD_ASSERTION_ERROR into a more general mechanism for loading constants.
This PR adds this new opcode. I will work on the rest of the implementation
of the PEP separately.
Co-authored-by: Irit Katriel <1055913+iritkatriel@users.noreply.github.com>
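A rough sketch of the idea: the oparg selects one of a small set of well-known constants to push (the helper name and dispatch style here are made up for illustration, not the actual bytecodes.c definition):

```c
#include "Python.h"

/* Illustrative only: map an oparg to a shared, well-known constant. */
static PyObject *
load_common_constant(int oparg)
{
    PyObject *value;
    switch (oparg) {
    case 0:
        value = PyExc_AssertionError;
        break;
    case 1:
        value = PyExc_NotImplementedError;
        break;
    default:
        PyErr_SetString(PyExc_SystemError, "bad LOAD_COMMON_CONSTANT oparg");
        return NULL;
    }
    return Py_NewRef(value);  /* caller pushes the new reference onto the stack */
}
```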
| |
(#119216)
| |
frame.f_locals (#118639)
Also fix an unrelated assert in debug Tier 2/JIT builds.
| |
support of calls. (GH-118322)
* Add CALL_PY_GENERAL, CALL_BOUND_METHOD_GENERAL and CALL_NON_PY_GENERAL specializations.
* Remove CALL_PY_WITH_DEFAULTS specialization
* Use CALL_NON_PY_GENERAL in more cases when otherwise failing to specialize
| |
* Check tracing in RESUME_CHECK
* Only change to RESUME_CHECK if not tracing
| |
* Target _FOR_ITER_TIER_TWO at POP_TOP following the matching END_FOR
* Modify _GUARD_NOT_EXHAUSTED_RANGE, _GUARD_NOT_EXHAUSTED_LIST and _GUARD_NOT_EXHAUSTED_TUPLE so that they also target the POP_TOP following the matching END_FOR
| |
(GH-118482)
| |
The code for Tier 2 is now only compiled when configured
with `--enable-experimental-jit[=yes|interpreter]`.
We drop support for `PYTHON_UOPS` and `-X uops`,
but you can disable the interpreter or JIT
at runtime by setting `PYTHON_JIT=0`.
You can also build it without enabling it by default
using `--enable-experimental-jit=yes-off`;
enable with `PYTHON_JIT=1`.
On Windows, the `build.bat` script supports
`--experimental-jit`, `--experimental-jit-off`,
`--experimental-interpreter`.
In the C code, `_Py_JIT` is defined as before
when the JIT is enabled; the new variable
`_Py_TIER2` is defined when the JIT *or* the
interpreter is enabled. It is actually a bitmask:
1: JIT; 2: default-off; 4: interpreter.
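A sketch of how the build-time bitmask and the runtime `PYTHON_JIT` switch could interact, based only on the description above (the real configuration logic differs in detail):

```c
#include <stdlib.h>
#include <string.h>

/* Sketch only: _Py_TIER2 & 1 -> JIT, & 2 -> built but off by default,
   & 4 -> Tier 2 interpreter; PYTHON_JIT=0/1 overrides at runtime. */
static int
tier2_enabled_at_runtime(void)
{
#ifndef _Py_TIER2
    return 0;                          /* Tier 2 not compiled in at all */
#else
    const char *env = getenv("PYTHON_JIT");
    if (env != NULL && *env != '\0') {
        return strcmp(env, "0") != 0;  /* PYTHON_JIT=0 disables, =1 enables */
    }
    return (_Py_TIER2 & 2) == 0;       /* default-off builds need PYTHON_JIT=1 */
#endif
}
```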
| |
(GH-118349)
| |
Small TSAN fixups for instrumentation
| |
(GH-118279)
| |
Convert DEOPT_IFs on likely side exits to EXIT_IFs
| |
known already (GH-118050)
| |
(#114742)
Make instance attributes stored in inline "dict" thread safe on free-threaded builds
| |
(#116775)
Makes sys.settrace, sys.setprofile, and monitoring generally thread-safe.
Mostly uses a stop-the-world approach and synchronization around the code object's _co_instrumentation_version. There may be a little bit of extra synchronization around the monitoring data that's required to be TSAN clean.
| |
Make _PyDict_LoadGlobal threadsafe
| |
Introduce a unified 16-bit backoff counter type (``_Py_BackoffCounter``),
shared between the Tier 1 adaptive specializer and the Tier 2 optimizer. The
API used for adaptive specialization counters is changed but the behavior is
(supposed to be) identical.
The behavior of the Tier 2 counters is changed:
- There are no longer dynamic thresholds (we never varied these).
- All counters now use the same exponential backoff.
- The counter for ``JUMP_BACKWARD`` starts counting down from 16.
- The ``temperature`` in side exits starts counting down from 64.
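A simplified sketch of such a 16-bit counter with exponential backoff; the real ``_Py_BackoffCounter`` in pycore_backoff.h packs and clamps its fields differently, so the widths and names here are illustrative:

```c
#include <stdbool.h>

/* Sketch only: a countdown value plus a small backoff exponent,
   together fitting in 16 bits. */
typedef struct {
    unsigned int value : 12;
    unsigned int backoff : 4;
} backoff_counter_t;

/* Count down once; report whether the counter has hit zero ("hot"). */
static inline bool
counter_triggers(backoff_counter_t *c)
{
    if (c->value == 0) {
        return true;
    }
    c->value--;
    return false;
}

/* After an unsuccessful attempt, wait roughly twice as long next time:
   value = 2**backoff - 1, with the exponent capped at the field width. */
static inline void
counter_backoff(backoff_counter_t *c)
{
    if (c->backoff < 12) {
        c->backoff++;
    }
    c->value = (1u << c->backoff) - 1;
}
```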
| |
This merges all `_CHECK_STACK_SPACE` uops in a trace into a single `_CHECK_STACK_SPACE_OPERAND` uop that checks whether there is enough stack space for all calls included in the entire trace.
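A toy sketch of the merge, assuming each `_CHECK_STACK_SPACE` uop carries the stack size its call needs (the types and field names are invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch only: fold every per-call stack check in a trace into one
   check whose operand is the total space needed by the whole trace. */
typedef struct {
    int is_stack_space_check;  /* stands in for "uop == _CHECK_STACK_SPACE" */
    uint64_t operand;          /* stack space needed by this call's frame */
} toy_uop;

static void
merge_stack_space_checks(toy_uop *trace, size_t len)
{
    size_t first = len;
    uint64_t total = 0;
    for (size_t i = 0; i < len; i++) {
        if (trace[i].is_stack_space_check) {
            if (first == len) {
                first = i;
            }
            total += trace[i].operand;
            trace[i].is_stack_space_check = 0;  /* drop the individual check */
        }
    }
    if (first != len) {
        /* keep a single combined check (_CHECK_STACK_SPACE_OPERAND) */
        trace[first].is_stack_space_check = 1;
        trace[first].operand = total;
    }
}
```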
| |
objects. (GH-116115)
| |
Use critical sections to lock around accesses to cell contents. The critical sections are no-ops in the default (with GIL) build.
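Roughly how this looks at a use site; `Py_BEGIN_CRITICAL_SECTION` is the internal per-object locking macro that expands to nothing in GIL builds, and the helper shown here is illustrative rather than the actual cell code:

```c
#include "Python.h"
#include "pycore_critical_section.h"  /* internal-only header */

/* Sketch only: guard reads of a cell's contents with the cell's
   per-object lock on free-threaded builds; a no-op with the GIL. */
static PyObject *
cell_get_locked(PyCellObject *cell)
{
    PyObject *value;
    Py_BEGIN_CRITICAL_SECTION(cell);
    value = Py_XNewRef(cell->ob_ref);
    Py_END_CRITICAL_SECTION();
    return value;
}
```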
| |
Splits the "cold" path (deopts and exits) from the "hot" path, reducing the size of most jitted instructions at the cost of slower exits.