path: root/Python/ceval_macros.h
Commit history (commit message, author, date, files changed, lines removed/added):
* GH-139922: Tail calling for MSVC (VS 2026) (GH-143068) (Chris Eibl, 2025-12-22, 1 file, -6/+9)
  Co-authored-by: Ken Jin <28750310+Fidget-Spinner@users.noreply.github.com>
  Co-authored-by: Brandt Bucher <brandt@python.org>
  Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
* gh-120321: Make gi_frame_state transitions atomic in FT build (gh-142599) (Sam Gross, 2025-12-19, 1 file, -0/+25)
  This makes generator frame state transitions atomic in the free threading build, which avoids segfaults when trying to execute a generator from multiple threads concurrently. There are still a few operations that aren't thread-safe and may crash if performed concurrently on the same generator/coroutine:
  * Accessing gi_yieldfrom/cr_await/ag_await
  * Accessing gi_frame/cr_frame/ag_frame
  * Async generator operations
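  The core of an atomic state transition is a compare-and-swap that only one thread can win. A minimal C11 sketch of the idea (the state values and function name here are illustrative, not CPython's actual internals):

      #include <stdatomic.h>
      #include <stdbool.h>
      #include <stdint.h>

      enum { FRAME_CREATED = 0, FRAME_EXECUTING = 1 };  /* illustrative states */

      /* Attempt to move a generator from CREATED to EXECUTING. Exactly one
       * thread can win the compare-exchange; a losing thread sees the
       * generator as already running and can raise instead of crashing. */
      static bool
      try_start_generator(_Atomic int8_t *frame_state)
      {
          int8_t expected = FRAME_CREATED;
          return atomic_compare_exchange_strong(frame_state, &expected,
                                                FRAME_EXECUTING);
      }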
* gh-134584: Remove custom float decref ops (GH-142576) (Ken Jin, 2025-12-15, 1 file, -3/+4)
* GH-140683: JIT: Improve machine code for loading smaller constants on AArch64. (GH-142511) (Mark Shannon, 2025-12-11, 1 file, -2/+6)
  * Use movz and movk instructions for loading 16 and 32 bit operands and oparg.
  * Loading of 64 bit operands is unchanged.
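  As a rough illustration of the general movz/movk technique (a sketch, not the actual CPython JIT emitter):

      #include <stdint.h>

      /* Illustrative emitter: load a 32-bit constant into AArch64 register
       * w<rd> with MOVZ for the low half-word and MOVK for the high one,
       * instead of loading from a constant pool. out[] receives instruction
       * words; returns the number of instructions emitted. */
      static int
      emit_load_const32(uint32_t *out, unsigned rd, uint32_t value)
      {
          uint32_t lo = value & 0xFFFF;
          uint32_t hi = (value >> 16) & 0xFFFF;
          int n = 0;
          /* MOVZ w<rd>, #lo            -> 0x52800000 | imm16 << 5 | rd */
          out[n++] = 0x52800000u | (lo << 5) | rd;
          if (hi != 0) {
              /* MOVK w<rd>, #hi, LSL #16 -> 0x72A00000 | imm16 << 5 | rd */
              out[n++] = 0x72A00000u | (hi << 5) | rd;
          }
          return n;
      }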
* GH-135379: Top of stack caching for the JIT. (GH-135465) (Mark Shannon, 2025-12-11, 1 file, -0/+15)
  Uses three registers to cache values at the top of the evaluation stack. This significantly reduces memory traffic for smaller, more common uops.
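  Conceptually, top-of-stack caching keeps the hottest stack slots in locals that the compiler can assign to registers, spilling them to the in-memory stack only when necessary (for example before an escaping call). A hypothetical sketch, with all names invented for illustration:

      typedef long StackVal;          /* stand-in for CPython's _PyStackRef */

      /* Hypothetical state: up to three top-of-stack values live in locals
       * (registers); everything deeper stays on the in-memory stack. */
      typedef struct {
          StackVal *memory_sp;        /* in-memory evaluation stack pointer */
          StackVal tos[3];            /* cached top-of-stack values */
          int cached;                 /* how many of tos[] are live (0..3) */
      } CachedStack;

      /* Before an escaping call, the cache is spilled so the exact stack
       * contents are visible in memory. */
      static void
      spill(CachedStack *s)
      {
          for (int i = 0; i < s->cached; i++) {
              *s->memory_sp++ = s->tos[i];
          }
          s->cached = 0;
      }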
* GH-141794: Limit size of generated machine code. (GH-142228) (Mark Shannon, 2025-12-03, 1 file, -1/+1)
  * Factor out bodies of the largest uops, to reduce jit code size.
  * Factor out common assert, also reducing jit code size.
  * Limit size of jitted code for a single executor to 1MB.
* GH-139109: Support switch/case dispatch with the tracing interpreter. (GH-141703) (Mark Shannon, 2025-11-18, 1 file, -5/+5)
* gh-139109: A new tracing JIT compiler frontend for CPython (GH-140310) (Ken Jin, 2025-11-13, 1 file, -8/+59)
  This PR changes the current JIT model from trace projection to trace recording.
  Benchmarking: better pyperformance geomean (about 1.7% overall) versus the current JIT (https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251108-3.15.0a1%2B-7e2bc1d-JIT/bm-20251108-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-7e2bc1d-vs-base.svg): 100% faster Richards on the most improved benchmark versus the current JIT, and a slowdown of about 10-15% on the worst benchmark versus the current JIT. Note: the fastest version isn't the one merged, as it relies on fixing bugs in the specializing interpreter, which is left to another PR. The speedup in the merged version is about 1.1% (https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251112-3.15.0a1%2B-f8a764a-JIT/bm-20251112-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-f8a764a-vs-base.svg).
  Stats: 50% more uops executed, and 30% more traces entered, the last time we ran them. The stats also suggest our trace lengths for a real trace-recording JIT are too short, as there are a lot of "trace too long" aborts (https://github.com/facebookexperimental/free-threading-benchmarking/blob/main/results/bm-20251023-3.15.0a1%2B-eb73378-CLANG%2CJIT/bm-20251023-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-eb73378-pystats-vs-base.md).
  This new JIT frontend is already able to record/execute significantly more instructions than the previous JIT frontend. In this PR, we are now able to record through custom dunders, simple object creation, generators, etc. None of these were handled by the old JIT frontend. Some custom dunder uops were discovered to be broken as part of this work (gh-140277). The optimizer stack space check is disabled, as it's no longer valid to deal with underflow.
  Pros:
  * Ignoring the generated tracer code, as it's automatically created, this is only about 1k additional lines of code. The maintenance burden is handled by the DSL and code generator.
  * `optimizer.c` is now significantly simpler, as we don't have to do strange things to recover the bytecode from a trace.
  * The new JIT frontend is able to handle a lot more control flow than the old one.
  * Tracing is very low overhead. We use the tail-calling/computed-goto interpreter to switch between tracing mode and non-tracing mode. I call this mechanism dual dispatch, as we have two dispatch tables dispatching to each other. Specialization is still enabled while tracing.
  * Better handling of polymorphism. We leverage the specializing interpreter for this.
  Cons:
  * (For now) requires the tail-calling interpreter or computed gotos. This means no Windows JIT for now :(. Not to fret, tail calling is coming soon to Windows though: https://github.com/python/cpython/pull/139962
  Design:
  * After each instruction, the `record_previous_inst` function/label is executed. This does as the name suggests.
  * The tracing interpreter lowers bytecode to uops directly so that it can obtain "fresh" values at the point of lowering.
  * The tracing interpreter behaves nearly identically to the normal interpreter; in fact, it even has specialization! This allows it to run without much of a slowdown when tracing. The actual cost of tracing is only a function call and writes to memory.
  * The tracing interpreter uses the specializing interpreter's deopt to naturally form the side-exit chains. This allows it to side-exit chain effectively, without repeating much code. We force re-specialization when tracing a deopt.
  * The tracing interpreter can even handle goto errors/exceptions, but I chose to disable them for now as it's not tested.
  * Because we do not share interpreter dispatch, there should be no significant slowdown to the original specializing interpreter on tail-call and computed-goto builds with the JIT disabled. With the JIT enabled, there might be a slowdown in the form of the JIT trying to trace.
  * Things that could have dynamic instruction pointer effects are guarded on. The guard deopts to a new instruction: _DYNAMIC_EXIT.
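  A minimal runnable sketch of the dual-dispatch idea using two function-pointer tables (CPython's real mechanism uses tail calls or computed gotos; every name below is illustrative):

      #include <stdio.h>

      /* Illustrative dual dispatch: two dispatch tables with identical
       * layout. The tracing table's handlers record each instruction before
       * executing it, so flipping the single active_table pointer switches
       * the whole interpreter between tracing and normal mode. */
      typedef void (*Handler)(int opcode);

      static void run_op(int opcode) { printf("run %d\n", opcode); }

      static void trace_op(int opcode)
      {
          printf("record %d\n", opcode);  /* record_previous_inst analogue */
          run_op(opcode);
      }

      static Handler normal_table[256];
      static Handler tracing_table[256];
      static Handler *active_table = normal_table;

      static void start_tracing(void) { active_table = tracing_table; }
      static void stop_tracing(void)  { active_table = normal_table; }

      int main(void)
      {
          for (int i = 0; i < 256; i++) {
              normal_table[i] = run_op;
              tracing_table[i] = trace_op;
          }
          active_table[0](0);   /* normal dispatch */
          start_tracing();
          active_table[0](0);   /* traced dispatch */
          stop_tracing();
          return 0;
      }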
* gh-131253: free-threaded build support for pystats (gh-137189) (Neil Schemenauer, 2025-11-03, 1 file, -1/+2)
  Allow the --enable-pystats build option to be used with free-threading. The stats are now stored on a per-interpreter basis, rather than process global. For free-threaded builds, the stats structure is allocated per-thread and then periodically merged into the per-interpreter stats structure (on thread exit or when the reporting function is called). Most of the pystats-related code has been moved into the file Python/pystats.c.
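  The merge pattern this describes can be sketched in a few lines of C11 (the struct layouts and field names here are invented for illustration):

      #include <stdatomic.h>
      #include <stdint.h>

      /* Illustrative per-thread stats merged into per-interpreter totals.
       * Each thread counts without synchronization; only the merge (on
       * thread exit or when reporting) touches shared state, via atomics. */
      typedef struct { uint64_t calls; uint64_t misses; } ThreadStats;
      typedef struct { _Atomic uint64_t calls; _Atomic uint64_t misses; } InterpStats;

      static void
      merge_stats(InterpStats *interp, ThreadStats *thread)
      {
          atomic_fetch_add(&interp->calls, thread->calls);
          atomic_fetch_add(&interp->misses, thread->misses);
          thread->calls = 0;
          thread->misses = 0;
      }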
* gh-140513: Fail to compile if `_Py_TAIL_CALL_INTERP` is set but `preserve_none` and `musttail` do not exist. (GH-140548) (Krishna Chaitanya, 2025-11-01, 1 file, -0/+8)
  Co-authored-by: Chris Eibl <138194463+chris-eibl@users.noreply.github.com>
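  A minimal sketch of such a guard, assuming Clang-style __has_attribute probes (not the exact CPython check):

      #ifdef _Py_TAIL_CALL_INTERP
      #  ifndef __has_attribute
      #    define __has_attribute(x) 0
      #  endif
      #  if !(__has_attribute(preserve_none) && __has_attribute(musttail))
      #    error "_Py_TAIL_CALL_INTERP requires preserve_none and musttail support"
      #  endif
      #endif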
* gh-140104: Set next_instr properly in the JIT during exceptions (GH-140233) (Ken Jin, 2025-10-27, 1 file, -1/+3)
  Co-authored-by: devdanzin <74280297+devdanzin@users.noreply.github.com>
  Co-authored-by: Chris Eibl <138194463+chris-eibl@users.noreply.github.com>
* gh-139109: Dynamic opcode targets (GH-139111) (Ken Jin, 2025-09-18, 1 file, -7/+7)
  Make the opcode targets table dynamic.
* gh-135755: Make Py_TAIL_CALL_INTERP macro private (#138981) (Victor Stinner, 2025-09-18, 1 file, -1/+1)
  Rename Py_TAIL_CALL_INTERP to _Py_TAIL_CALL_INTERP.
* GH-137959: Replace shim code in jitted code with a single trampoline function. (GH-137961) (Mark Shannon, 2025-08-21, 1 file, -28/+9)
* GH-132532: Add new DSL macros to better declare semantics of exits at ends of instructions/uops. (GH-137098) (Mark Shannon, 2025-08-09, 1 file, -0/+11)
* GH-136410: Faster side exits by using a cold exit stub (GH-136411) (Mark Shannon, 2025-08-01, 1 file, -4/+1)
* GH-133231: Changes to executor management to support proposed `sys._jit` module (GH-133287) (Mark Shannon, 2025-05-04, 1 file, -3/+6)
  * Track the current executor, not the previous one, on the thread-state.
  * Batch executors for deallocation to avoid having to constantly incref executors; this is an ad-hoc form of deferred reference counting.
* gh-132758: Fix tail call and pystats builds (GH-132759) (Ken Jin, 2025-04-23, 1 file, -6/+18)
* GH-131498: Cases generator: manage stacks automatically (GH-132074) (Mark Shannon, 2025-04-04, 1 file, -40/+0)
* GH-127705: Use `_PyStackRef`s in the default build. (GH-127875) (Mark Shannon, 2025-03-10, 1 file, -15/+0)
* gh-129989: Properly disable tailcall interp in configure (GH-129991) (Ken Jin, 2025-02-15, 1 file, -1/+1)
  Co-authored-by: Zanie Blue <contact@zanie.dev>
* gh-129819: Allow tier2/JIT and tailcall (GH-129820) (Ken Jin, 2025-02-12, 1 file, -1/+1)
* GH-128682: Account for escapes in `DECREF_INPUTS` (GH-129953) (Mark Shannon, 2025-02-12, 1 file, -11/+17)
  * Handle escapes in DECREF_INPUTS
  * Mark a few more functions as escaping
  * Replace DECREF_INPUTS with PyStackRef_CLOSE where possible
* GH-129709: Clean up tier two (GH-129710) (Brandt Bucher, 2025-02-07, 1 file, -19/+19)
* GH-129763: Remove the LLTRACE macro (GH-129764) (Brandt Bucher, 2025-02-07, 1 file, -3/+3)
* gh-128563: A new tail-calling interpreter (GH-128718) (Ken Jin, 2025-02-06, 1 file, -13/+34)
  Co-authored-by: Garrett Gu <garrettgu777@gmail.com>
  Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
  Co-authored-by: Hugo van Kemenade <1324225+hugovk@users.noreply.github.com>
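  The shape of a tail-calling interpreter can be sketched with Clang's musttail attribute (all names below are illustrative, not CPython's): each opcode handler ends with a guaranteed tail call to the next handler, so dispatch never grows the C stack and each handler compiles to a small, independently scheduled function.

      #include <stdio.h>

      typedef int (*handler_t)(const unsigned char *ip, long *stack, int sp);

      static int op_push1(const unsigned char *ip, long *stack, int sp);
      static int op_halt(const unsigned char *ip, long *stack, int sp);

      static const handler_t table[256] = { [0] = op_push1, [1] = op_halt };

      /* Guaranteed tail call to the next opcode's handler (Clang). */
      #define DISPATCH() \
          __attribute__((musttail)) return table[*ip](ip, stack, sp)

      static int op_push1(const unsigned char *ip, long *stack, int sp)
      {
          stack[sp++] = 1;
          ip++;
          DISPATCH();
      }

      static int op_halt(const unsigned char *ip, long *stack, int sp)
      {
          (void)ip;
          return sp ? (int)stack[sp - 1] : 0;
      }

      int main(void)
      {
          const unsigned char code[] = {0, 0, 1};  /* push1, push1, halt */
          long stack[16];
          printf("%d\n", table[code[0]](code, stack, 0));
          return 0;
      }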
* GH-128682: Mark two more macros as escaping. (GH-129645) (Mark Shannon, 2025-02-04, 1 file, -9/+0)
  Expand out SETLOCAL so that the code generator can see the decref. Mark Py_CLEAR as escaping.
* GH-128563: Move some labels, to simplify implementing tailcalling interpreter. (GH-129525) (Mark Shannon, 2025-01-31, 1 file, -1/+3)
* gh-128563: Move GO_TO_INSTRUCTION and PREDICT to cases generator (GH-129115) (Ken Jin, 2025-01-22, 1 file, -32/+1)
* gh-128563: Move lltrace into the frame struct (GH-129113) (Ken Jin, 2025-01-21, 1 file, -2/+3)
* GH-128563: Add new frame owner type for interpreter entry frames (GH-129078) (Mark Shannon, 2025-01-21, 1 file, -2/+2)
* GH-128375: Better instrument for `FOR_ITER` (GH-128445) (Mark Shannon, 2025-01-06, 1 file, -1/+1)
* GH-127705: Add debug mode for `_PyStackRef`s inspired by HPy debug mode (GH-128121) (Mark Shannon, 2024-12-20, 1 file, -3/+3)
* gh-128033: change `PyMutex_LockFast` to take `PyMutex` as argument (#128054) (Kumar Aditya, 2024-12-18, 1 file, -1/+1)
* gh-115999: Add free-threaded specialization for `STORE_SUBSCR` (#127169) (Sam Gross, 2024-11-26, 1 file, -0/+23)
  The specialization only depends on the type, so no special thread-safety considerations there. STORE_SUBSCR_LIST_INT needs to lock the list before modifying it. `_PyDict_SetItem_Take2` already internally locks the dictionary using a critical section.
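  The list case follows the usual free-threading pattern of holding the list's per-object lock across the mutation. A rough sketch using the critical-section API (illustrative, not the actual STORE_SUBSCR_LIST_INT body):

      #include "Python.h"

      /* Illustrative: store v at list[i] under the list's per-object lock,
       * as a free-threaded build must do before mutating a shared list.
       * Returns 0 on success, -1 if the index is out of range. */
      static int
      store_list_item_locked(PyObject *list, Py_ssize_t i, PyObject *v)
      {
          int err = -1;
          Py_BEGIN_CRITICAL_SECTION(list);
          if (0 <= i && i < PyList_GET_SIZE(list)) {
              PyObject *old = PyList_GET_ITEM(list, i);
              PyList_SET_ITEM(list, i, Py_NewRef(v));
              Py_DECREF(old);
              err = 0;
          }
          Py_END_CRITICAL_SECTION();
          return err;
      }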
* gh-120619: Strength reduce function guards, support 2-operand uop forms (GH-124846) (Ken Jin, 2024-11-09, 1 file, -1/+2)
  Co-authored-by: Brandt Bucher <brandtbucher@gmail.com>
* gh-115999: Implement thread-local bytecode and enable specialization for `BINARY_OP` (#123926) (mpage, 2024-11-04, 1 file, -9/+13)
  Each thread specializes a thread-local copy of the bytecode, created on the first RESUME, in free-threaded builds. All copies of the bytecode for a code object are stored in the co_tlbc array on the code object. Threads reserve a globally unique index identifying their copy of the bytecode in all co_tlbc arrays at thread creation, and release the index at thread destruction. The first entry in every co_tlbc array always points to the "main" copy of the bytecode that is stored at the end of the code object. This ensures that no bytecode is copied for programs that do not use threads.
  Thread-local bytecode can be disabled at runtime by providing either -X tlbc=0 or PYTHON_TLBC=0. Disabling thread-local bytecode also disables specialization.
  Concurrent modifications to the bytecode made by the specializing interpreter and instrumentation use atomics, with specialization taking care not to overwrite an instruction that was instrumented concurrently.
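  A rough sketch of the lookup this implies on frame entry (field and function names are illustrative, not CPython's exact internals):

      #include <stddef.h>
      #include <stdint.h>

      /* Illustrative thread-local bytecode lookup: each code object keeps an
       * array of bytecode copies; each thread owns one global index into
       * every such array. Index 0 is the shared "main" copy, so
       * single-threaded programs never copy bytecode. */
      typedef struct {
          uint8_t **tlbc;      /* per-thread bytecode copies; [0] is the main copy */
          size_t tlbc_size;
      } CodeObj;

      static uint8_t *
      get_thread_bytecode(CodeObj *co, size_t thread_idx)
      {
          if (thread_idx < co->tlbc_size && co->tlbc[thread_idx] != NULL) {
              return co->tlbc[thread_idx];   /* this thread's specialized copy */
          }
          return co->tlbc[0];                /* fall back to the main copy */
      }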
* GH-125323: Convert DECREF_INPUTS_AND_REUSE_FLOAT into a function that takes PyStackRefs. (GH-125439) (Mark Shannon, 2024-10-14, 1 file, -20/+0)
* GH-119866: Spill the stack around escaping calls. (GH-124392) (Mark Shannon, 2024-10-07, 1 file, -0/+1)
  * Spill the evaluation stack around escaping calls in the generated interpreter and JIT.
  * The code generator tracks live, cached values so they can be saved to memory when needed.
  * Spill the stack pointer around escaping calls, so that the exact stack is visible to the cycle GC.
* GH-118093: Make `CALL_ALLOC_AND_ENTER_INIT` suitable for tier 2. (GH-123140) (Mark Shannon, 2024-08-20, 1 file, -1/+1)
  * Convert CALL_ALLOC_AND_ENTER_INIT to micro-ops such that tier 2 supports it.
  * Allow inexact arguments for CALL_ALLOC_AND_ENTER_INIT.
* GH-120024: Remove `CHECK_EVAL_BREAKER` macro. (GH-122968) (Mark Shannon, 2024-08-14, 1 file, -10/+0)
  * Factor some instructions into micro-ops to isolate CHECK_EVAL_BREAKER for escape analysis.
  * Eliminate the CHECK_EVAL_BREAKER macro.
* gh-122860: Remove unused macro `_Py_atomic_load_relaxed_int32` (#122861) (Sam Gross, 2024-08-11, 1 file, -7/+0)
* gh-117657: Avoid race in `PAUSE_ADAPTIVE_COUNTER` in free-threaded build (#122190) (Sam Gross, 2024-07-30, 1 file, -1/+2)
  The adaptive counter doesn't do anything currently in the free-threaded build, and TSan reports a data race due to concurrent modifications to the counter.
* GH-121131: Clean up and fix some instrumented instructions. (GH-121132) (Mark Shannon, 2024-07-26, 1 file, -1/+4)
  * Add support for 'prev_instr' to the code generator and refactor some INSTRUMENTED instructions.
* GH-116017: Get rid of _COLD_EXITs (GH-120960) (Brandt Bucher, 2024-07-01, 1 file, -2/+1)
* gh-117139: Convert the evaluation stack to stack refs (#118450) (Ken Jin, 2024-06-26, 1 file, -2/+33)
  This PR sets up tagged pointers for CPython. The general idea is to create a separate struct _PyStackRef for everything on the evaluation stack to store the bits. This forces the C compiler to warn us if we try to cast things or pull things out of the struct directly.
  Only for free threading: we tag the low bit if something is deferred; that means we skip incref and decref operations on it. This behavior may change in the future if Mark's plans to defer all objects in the interpreter loop pan out.
  This implies that a strict stack reference discipline is required: ALL incref and decref operations on stackrefs must use the stackref variants. It is unsafe to untag something and then do normal incref/decref ops on it.
  The new incref and decref variants are called dup and close. They mimic a "handle" API operating on these stackrefs. Please read Include/internal/pycore_stackref.h for more information!
  Co-authored-by: Mark Shannon <9448417+markshannon@users.noreply.github.com>
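  A minimal sketch of the tagging scheme (mirroring the shape of the idea with illustrative names; see Include/internal/pycore_stackref.h for the real API):

      #include <stdbool.h>
      #include <stdint.h>

      /* Illustrative tagged stack reference: a struct wrapper stops
       * accidental casts to PyObject *, and the low bit marks a deferred
       * reference that must skip incref/decref. */
      typedef struct { uintptr_t bits; } StackRef;

      #define TAG_DEFERRED ((uintptr_t)1)

      static inline StackRef ref_from_obj(void *obj, bool deferred) {
          return (StackRef){ (uintptr_t)obj | (deferred ? TAG_DEFERRED : 0) };
      }
      static inline void *ref_obj(StackRef r) {
          return (void *)(r.bits & ~TAG_DEFERRED);   /* strip the tag bit */
      }
      static inline bool ref_is_deferred(StackRef r) {
          return (r.bits & TAG_DEFERRED) != 0;
      }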
* GH-120982: Add stack check assertions to generated interpreter code (GH-120992) (Mark Shannon, 2024-06-25, 1 file, -0/+2)
* gh-107674: Improve performance of `sys.settrace` (GH-117133) (Tian Gao, 2024-05-03, 1 file, -6/+10)
  * Check tracing in RESUME_CHECK.
  * Only change to RESUME_CHECK if not tracing.
* gh-117657: Fix small issues with instrumentation and TSAN (#118064) (Dino Viehland, 2024-04-30, 1 file, -1/+1)
  Small TSan fixups for instrumentation.
* GH-118095: Add dynamic exit support and FOR_ITER_GEN support to tier 2 (GH-118279) (Mark Shannon, 2024-04-26, 1 file, -0/+1)