builds (#127711)
We use the same approach that was used for specialization of LOAD_GLOBAL in free-threaded builds:
_CHECK_ATTR_MODULE is renamed to _CHECK_ATTR_MODULE_PUSH_KEYS; it pushes the keys object for the following _LOAD_ATTR_MODULE_FROM_KEYS (née _LOAD_ATTR_MODULE). This arrangement avoids having to recheck the keys version.
_LOAD_ATTR_MODULE is renamed to _LOAD_ATTR_MODULE_FROM_KEYS; it loads the value at the cached index from the keys object pushed by the preceding _CHECK_ATTR_MODULE_PUSH_KEYS.
* Enable specialization of CALL_KW
* Fix bug pushing frame in _PY_FRAME_KW
`_PY_FRAME_KW` pushes a pointer to the new frame onto the stack for
consumption by the next uop. When pushing the frame fails, we do not
want to push the result, `NULL`, to the stack because it is not
a valid stackref. This works in the default build because `PyStackRef_NULL`
and `NULL` are the same value, so the `PyStackRef_XCLOSE()` in the error
handler ignores it. In the free-threaded build the values are not the same;
`PyStackRef_XCLOSE()` will attempt to decref a null pointer.
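
To make the failure mode concrete, here is a self-contained toy model of tagged stack references; the struct layout, the tag value, and all names are illustrative assumptions, not CPython's actual definitions.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy tagged stack reference: bit 0 is a tag, as in free-threaded builds. */
typedef struct { uintptr_t bits; } stackref;
#define TAG_DEFERRED ((uintptr_t)1)

/* In this model the "null" stackref is a tagged sentinel, not all-zero bits. */
static const stackref STACKREF_NULL = { TAG_DEFERRED };

static void
stackref_xclose(stackref ref)
{
    if (ref.bits == STACKREF_NULL.bits) {
        return;  /* the sentinel is recognized and skipped */
    }
    /* Anything else is treated as an object pointer to decref. A raw 0
       pushed by mistake is NOT the sentinel here, so it falls through
       and the "decref" dereferences a null pointer. */
    void *obj = (void *)(ref.bits & ~TAG_DEFERRED);
    printf("decref %p\n", obj);
}

int main(void)
{
    stackref_xclose(STACKREF_NULL);      /* safely ignored */
    stackref_xclose((stackref){ 0 });    /* the free-threaded crash mode */
    return 0;
}
```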
builds (#127123)
The CALL family of instructions was mostly thread-safe already and only required a small number of changes, which are documented below.
A few changes were needed to make CALL_ALLOC_AND_ENTER_INIT thread-safe:
* Added _PyType_LookupRefAndVersion, which returns the type version corresponding to the returned ref.
* Added _PyType_CacheInitForSpecialization, which takes an init method and the corresponding type version and only populates the specialization cache if the current type version matches the supplied version. This prevents potentially caching a stale value in free-threaded builds if we race with an update to __init__ (see the sketch below).
* Only cache __init__ functions that are deferred in free-threaded builds. This ensures that the reference to __init__ stored in the specialization cache is valid if the type version guard in _CHECK_AND_ALLOCATE_OBJECT passes.
* Fix a bug in _CREATE_INIT_FRAME where the frame was pushed to the stack on failure.
A few other miscellaneous changes were also needed:
* Use {LOCK,UNLOCK}_OBJECT in LIST_APPEND. This ensures that the list's per-object lock is held while we are appending to it.
* Add the missing co_tlbc entry for _Py_InitCleanup.
* Stop/start the world around setting the eval frame hook. This allows us to read interp->eval_frame non-atomically and preserves the behavior of _CHECK_PEP_523 documented below.
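
A minimal sketch of the version-guarded caching idea behind _PyType_CacheInitForSpecialization, under simplified assumptions (toy types, no locking of the cache itself):

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdbool.h>

/* Simplified stand-ins for a type's version counter and the inline cache. */
typedef struct {
    _Atomic uint32_t version;   /* bumped on any mutation, e.g. to __init__ */
} type_like;

typedef struct {
    void *cached_init;          /* only valid while the version still matches */
    uint32_t cached_version;
} init_cache;

/* Populate the cache only if the type's version is still the version that
   was current when `init` was looked up. A concurrent update to __init__
   bumps the version and makes this a no-op, so no stale value is cached. */
static bool
cache_init_for_specialization(init_cache *cache, type_like *type,
                              void *init, uint32_t seen_version)
{
    if (atomic_load_explicit(&type->version, memory_order_acquire)
            != seen_version) {
        return false;   /* lost the race; leave the cache untouched */
    }
    cache->cached_init = init;
    cache->cached_version = seen_version;
    return true;
}
```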
No additional thread-safety changes are required. Note that sending to a generator that is shared between threads is currently not safe in the free-threaded build.
Use existing helpers to atomically modify the bytecode. Add unit tests to ensure that specialization is happening as expected. Add test_specialize.py, which can be used with ThreadSanitizer to detect data races.
Fix a thread-safety issue with cell_set_contents().
The specialization depends only on the type, so there are no special thread-safety considerations there.
STORE_SUBSCR_LIST_INT needs to lock the list before modifying it.
`_PyDict_SetItem_Take2` already internally locks the dictionary using a
critical section.
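
As an illustration of the list-locking pattern, here is roughly what a locked store looks like using the public critical-section API added in CPython 3.13; this is a sketch, not the interpreter's actual STORE_SUBSCR_LIST_INT implementation:

```c
#include <Python.h>

/* Store list[index] = value while holding the list's per-object lock,
   so the bounds check and the write are atomic with respect to resizes.
   Returns -1 where the interpreter would instead deopt. */
static int
store_subscr_list_int(PyObject *list, Py_ssize_t index, PyObject *value)
{
    int ok = -1;
    PyCriticalSection cs;
    PyCriticalSection_Begin(&cs, list);
    if (0 <= index && index < PyList_GET_SIZE(list)) {
        PyObject *old = PyList_GET_ITEM(list, index);
        PyList_SET_ITEM(list, index, Py_NewRef(value));
        Py_DECREF(old);
        ok = 0;
    }
    PyCriticalSection_End(&cs);
    return ok;
}
```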
This gets rid of the immortal check in `PyStackRef_FromPyObjectSteal()`. Overall, this improves performance by about 2% in the free-threaded build.
This also renames `PyStackRef_Is()` to `PyStackRef_IsExactly()` because the macro requires that the tag bits of the arguments match, which is only true in certain special cases.
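
The distinction behind the rename, in a toy tagged-pointer model (the layout and tag are assumptions for illustration, not CPython's real representation):

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy tagged stack reference; bit 0 is the tag in this model. */
typedef struct { uintptr_t bits; } stackref;
#define TAG_MASK ((uintptr_t)1)

/* Exact comparison: the object pointer AND the tag bits must match.
   Two refs to the same object compare unequal if one is tagged
   (deferred) and the other holds a plain strong reference. */
static bool
stackref_is_exactly(stackref a, stackref b)
{
    return a.bits == b.bits;
}

/* A same-object check would have to mask the tag bits off first. */
static bool
stackref_same_object(stackref a, stackref b)
{
    return (a.bits & ~TAG_MASK) == (b.bits & ~TAG_MASK);
}
```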
Add free-threaded specialization for `UNPACK_SEQUENCE` opcode.
`UNPACK_SEQUENCE_TUPLE/UNPACK_SEQUENCE_TWO_TUPLE` are already thread safe since tuples are immutable.
`UNPACK_SEQUENCE_LIST` is not thread safe because of the nature of lists (there is nothing preventing another thread from adding items to, or removing them from, the list while the instruction is executing). To achieve thread safety we add a critical section to the implementation of `UNPACK_SEQUENCE_LIST`, specifically around the parts where we check the size of the list and push items onto the stack.
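
A sketch of that locking, with the public critical-section API from CPython 3.13 standing in for the interpreter's internal macros (illustrative, not the actual bytecodes.c code):

```c
#include <Python.h>

/* Unpack a list of exactly `n` items onto `stack`, holding the list's
   per-object lock across both the length check and the item copies so a
   concurrent append/remove cannot invalidate the check. */
static int
unpack_sequence_list(PyObject *list, PyObject **stack, Py_ssize_t n)
{
    int ok = -1;
    PyCriticalSection cs;
    PyCriticalSection_Begin(&cs, list);
    if (PyList_GET_SIZE(list) == n) {
        for (Py_ssize_t i = 0; i < n; i++) {
            /* copy in reverse so stack[n-1] holds the first item,
               matching UNPACK_SEQUENCE's stack order */
            stack[i] = Py_NewRef(PyList_GET_ITEM(list, n - 1 - i));
        }
        ok = 0;
    }
    PyCriticalSection_End(&cs);
    return ok;   /* -1 means deopt to the generic, error-raising path */
}
```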
---------
Co-authored-by: Matt Page <mpage@meta.com>
Co-authored-by: mpage <mpage@cs.stanford.edu>
Enable specialization of LOAD_GLOBAL in free-threaded builds.
Thread-safety of specialization in free-threaded builds is provided by the following:
* A critical section is held on both the globals and builtins objects during specialization. This ensures we get an atomic view of both builtins and globals during specialization (see the sketch after this list).
* Generation of new keys versions is made atomic in free-threaded builds.
* Existing helpers are used to atomically modify the opcode.
Thread-safety of the specialized instructions in free-threaded builds is provided by the following:
* Relaxed atomics are used when loading and storing dict keys versions. This avoids potential data races, as the dict keys versions are read without holding the dictionary's per-object lock in version guards.
* Dict keys objects are passed from keys version guards to the downstream uops. This ensures that we are loading from the correct offset in the keys object. Once a unicode key has been stored in a keys object for a combined dictionary in free-threaded builds, the offset at which it is stored will never be reused for a different key; once the version guard passes, we know that we are reading from the correct offset.
* The dictionary read fast-path is used to read values from the dictionary once we know the correct offset.
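
For the first point above, a sketch of taking an atomic view of two objects at once via the two-object critical-section API from CPython 3.13; the function body is an illustrative assumption:

```c
#include <Python.h>

/* Specialization needs a consistent snapshot of both dicts at once:
   lock globals and builtins together, then read versions and indices
   and write the specialized opcode before unlocking. */
static void
specialize_under_both_locks(PyObject *globals, PyObject *builtins)
{
    PyCriticalSection2 cs;
    /* Locks both objects (handling ordering internally, so two threads
       locking the same pair cannot deadlock). */
    PyCriticalSection2_Begin(&cs, globals, builtins);
    /* ... read keys versions, find the cached index, atomically write
       the specialized opcode and its cache entries ... */
    PyCriticalSection2_End(&cs);
}
```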
collection (GH-126502)" (#126983)
collection (GH-126502)
* Mark almost all reachable objects before doing collection phase
* Add stats for objects marked
* Visit new frames before each increment
* Remove lazy dict tracking
* Update docs
* Clearer calculation of work to do.
subclasses (GH-126264)
Co-authored-by: Nicolas Tessore <n.tessore@ucl.ac.uk>
(GH-124846)
Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
- The specialization logic determines the appropriate specialization using only the operand's type, which is safe to read non-atomically (changing it requires stopping the world). We are guaranteed that the type will not change in between when it is checked and when we specialize the bytecode because the types involved are immutable (you cannot assign to `__class__` for exact instances of `dict`, `set`, or `frozenset`). The bytecode is mutated atomically using helpers.
- The specialized instructions rely on the operand type not changing in between the `DEOPT_IF` checks and the calls to the appropriate type-specific helpers (e.g. `_PySet_Contains`). This is a correctness requirement in the default builds and there are no changes to the opcodes in the free-threaded builds that would invalidate this.
(#126369)
Fix conversion warning in bytecodes
Co-authored-by: mpage <mpage@cs.stanford.edu>
`BINARY_OP` (#123926)
Each thread specializes a thread-local copy of the bytecode, created on the first RESUME, in free-threaded builds. All copies of the bytecode for a code object are stored in the co_tlbc array on the code object. Each thread reserves a globally unique index identifying its copy of the bytecode in all co_tlbc arrays at thread creation and releases the index at thread destruction. The first entry in every co_tlbc array always points to the "main" copy of the bytecode that is stored at the end of the code object. This ensures that no bytecode is copied for programs that do not use threads.
Thread-local bytecode can be disabled at runtime by providing either -X tlbc=0 or PYTHON_TLBC=0. Disabling thread-local bytecode also disables specialization.
Concurrent modifications to the bytecode made by the specializing interpreter and instrumentation use atomics, with specialization taking care not to overwrite an instruction that was instrumented concurrently.
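
A toy model of the per-thread bytecode lookup; the struct fields and function are illustrative assumptions, not CPython's actual data structures:

```c
#include <stddef.h>

typedef unsigned char bytecode;

/* Per-code-object array of bytecode copies; entry 0 is the "main" copy
   stored at the end of the code object. */
typedef struct {
    size_t tlbc_size;
    bytecode **tlbc_entries;
} tlbc_array;

/* Each thread reserves a globally unique index at creation and releases
   it at destruction; the same index is valid in every code object's
   array. NULL means this thread has not yet copied this bytecode, which
   happens lazily on the first RESUME. */
static bytecode *
get_tlbc(tlbc_array *co_tlbc, size_t thread_index)
{
    if (thread_index < co_tlbc->tlbc_size) {
        return co_tlbc->tlbc_entries[thread_index];
    }
    return NULL;
}
```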
* Add LOAD_CONST_IMMORTAL opcode
* Add LOAD_SMALL_INT opcode
* Remove RETURN_CONST opcode
* Fix usage of PyStackRef_FromPyObjectSteal in CALL_TUPLE_1
This was missed in gh-124894
* Fix usage of PyStackRef_FromPyObjectSteal in _CALL_STR_1
This was missed in gh-124894
* Regenerate code
This is essentially a cleanup, moving a handful of API declarations to the header files where they fit best, creating new ones when needed.
We do the following:
* add pycore_debug_offsets.h and move _Py_DebugOffsets, etc. there
* inline struct _getargs_runtime_state and struct _gilstate_runtime_state in _PyRuntimeState
* move struct _reftracer_runtime_state to the existing pycore_object_state.h
* add pycore_audit.h and move to it _Py_AuditHookEntry, _PySys_Audit(), and _PySys_ClearAuditHooks
* add audit.h and cpython/audit.h and move the existing audit-related API there
* move the perfmap/trampoline API from cpython/sysmodule.h to cpython/ceval.h, and remove the now-empty cpython/sysmodule.h
Co-authored-by: Kirill Podoprigora <kirill.bast9@mail.ru>
PyStackRefs. (GH-125439)
PyStackRef_CLOSEs (GH-125324)
(GH-125251)
{globals, builtins} keys (gh-124953)
Each of the `LOAD_GLOBAL` specializations is implemented roughly as:
1. Load keys version.
2. Load cached keys version.
3. Deopt if (1) and (2) don't match.
4. Load keys.
5. Load cached index into keys.
6. Load object from (4) at offset from (5).
This is not thread-safe in free-threaded builds; the keys object may be replaced
in between steps (3) and (4).
This change refactors the specializations to avoid reloading the keys object and
instead pass the keys object from guards to be consumed by downstream uops.
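
The race in the numbered steps above, and the refactored shape, in a self-contained model (all types and names are illustrative assumptions):

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

typedef struct {
    _Atomic uint32_t version;
    void *entries[8];        /* a key's slot is never reused once assigned */
} keys_object;

typedef struct {
    _Atomic(keys_object *) keys;   /* may be swapped out by a dict resize */
} dict_like;

/* Racy shape (steps 1-6 above): the keys are reloaded after the version
   check, so the checked object and the read object can differ. */
static void *
load_global_racy(dict_like *d, uint32_t cached_version, size_t cached_index)
{
    keys_object *checked = atomic_load(&d->keys);
    if (atomic_load_explicit(&checked->version, memory_order_relaxed)
            != cached_version) {
        return NULL;                                /* deopt */
    }
    keys_object *reloaded = atomic_load(&d->keys);  /* step 4: may differ! */
    return reloaded->entries[cached_index];
}

/* Refactored shape: the guard loads the keys once and passes that same
   pointer to the downstream uop, so the check and the read always agree. */
static void *
load_global_fixed(dict_like *d, uint32_t cached_version, size_t cached_index)
{
    keys_object *keys = atomic_load(&d->keys);
    if (atomic_load_explicit(&keys->version, memory_order_relaxed)
            != cached_version) {
        return NULL;
    }
    return keys->entries[cached_index];
}
```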
NULL pointers. (GH-124894)
* Spill the evaluation stack around escaping calls in the generated interpreter and JIT.
* The code generator tracks live, cached values so that they can be saved to memory when needed.
* Spill the stack pointer around escaping calls, so that the exact stack is visible to the cycle GC (see the sketch below).
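
A toy sketch of the spilling idea; the frame structure and names are illustrative assumptions, not the generated interpreter's actual code:

```c
/* Toy frame: the GC scans the stack up to visible_top. */
typedef struct {
    void **visible_top;
} frame_like;

/* The code generator keeps hot stack values in C locals. Before a call
   that can run arbitrary code (and therefore the cycle GC), those values
   are spilled back to the in-memory stack and the stack pointer is
   published, so the collector sees the exact stack. */
static void *
call_escaping(frame_like *f, void **sp, void *cached, void *(*fn)(void))
{
    sp[0] = cached;           /* spill the cached value to memory */
    f->visible_top = sp + 1;  /* publish the exact stack depth */
    return fn();              /* safe: the GC can now see `cached` */
}
```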
(GH-124443)
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Fix off-by-ones in conversion function
of a boolean expression (#124394)
Use a `_PyStackRef` and defer the reference to `f_funcobj` when
possible. This avoids some reference count contention in the common case
of executing the same code object from multiple threads concurrently in
the free-threaded build.
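
The deferred-reference idea in a toy tagged-pointer sketch; the tag value and helpers are illustrative assumptions, not the actual _PyStackRef API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy tagged stack reference; bit 0 marks a deferred (borrowed) ref. */
typedef struct { uintptr_t bits; } stackref;
#define TAG_DEFERRED ((uintptr_t)1)

/* Borrow without touching the refcount: an object guaranteed to stay
   alive for the duration (here, the frame's function) is pushed with
   the deferred tag instead of an incref, so threads running the same
   code object never contend on the function's reference count. */
static stackref
stackref_defer(void *obj)
{
    return (stackref){ (uintptr_t)obj | TAG_DEFERRED };
}

/* Closing a deferred ref skips the decref that a strong ref requires. */
static bool
stackref_close_needs_decref(stackref r)
{
    return (r.bits & TAG_DEFERRED) == 0;
}
```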
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>
Co-authored-by: Sam Gross <655866+colesbury@users.noreply.github.com>
(#123924)
Use a `_PyStackRef` and defer the reference to `f_executable` when
possible. This avoids some reference count contention in the common case
of executing the same code object from multiple threads concurrently in
the free-threaded build.
that it is kept in memory for calls (GH-124003)
errors (GH-123546)
Use _Py_IsImmortalLoose() in bytesobject.c, typeobject.c, and ceval.c.
tier 2. (GH-123381)
Fix MSVC warning "conversion from '__int64' to 'int'"