path: root/Python/instrumentation.c
* gh-120158: Fix inconsistent monitoring state when setting events too frequently (gh-141845) [Sam Gross, 2025-11-23, 1 file, -1/+1]
  If we overflowed the global version counter (i.e., after 2**24 calls to `_PyMonitoring_SetEvents`), we bailed out after setting the global monitoring events but before instrumenting code objects, which led to assertion errors later on. Also add a `time.sleep()` to `test_free_threading.test_monitoring` to avoid overflowing the global version counter.
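  As a rough illustration of the pattern that stresses this path, the sketch below toggles global monitoring events in a tight loop from Python. The tool id, callback, and loop count are illustrative only; the real overflow needs on the order of 2**24 `set_events` calls.

  ```python
  import sys

  mon = sys.monitoring
  TOOL = mon.PROFILER_ID
  mon.use_tool_id(TOOL, "set-events-stress")
  mon.register_callback(TOOL, mon.events.PY_START, lambda code, offset: None)

  # Every set_events() call goes through _PyMonitoring_SetEvents and bumps the
  # global monitoring version; toggling it in a tight loop is the "too
  # frequently" pattern the commit above is about.
  for _ in range(10_000):
      mon.set_events(TOOL, mon.events.PY_START)
      mon.set_events(TOOL, mon.events.NO_EVENTS)

  mon.free_tool_id(TOOL)
  ```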
* GH-139109: Support switch/case dispatch with the tracing interpreter. (GH-141703) [Mark Shannon, 2025-11-18, 1 file, -2/+2]
* gh-139109: A new tracing JIT compiler frontend for CPython (GH-140310) [Ken Jin, 2025-11-13, 1 file, -0/+2]
  This PR changes the current JIT model from trace projection to trace recording.
  Benchmarking: about 1.7% better pyperformance geomean versus the current JIT (https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251108-3.15.0a1%2B-7e2bc1d-JIT/bm-20251108-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-7e2bc1d-vs-base.svg), and 100% faster Richards on the most improved benchmark versus the current JIT. Slowdown of about 10-15% on the worst benchmark versus the current JIT. Note: the fastest version isn't the one merged, as it relies on fixing bugs in the specializing interpreter, which is left to another PR. The speedup in the merged version is about 1.1% (https://raw.githubusercontent.com/facebookexperimental/free-threading-benchmarking/refs/heads/main/results/bm-20251112-3.15.0a1%2B-f8a764a-JIT/bm-20251112-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-f8a764a-vs-base.svg).
  Stats (from the last time we ran them): 50% more uops executed and 30% more traces entered. The stats also suggest our trace lengths for a real trace-recording JIT are too short, as there are a lot of trace-too-long aborts (https://github.com/facebookexperimental/free-threading-benchmarking/blob/main/results/bm-20251023-3.15.0a1%2B-eb73378-CLANG%2CJIT/bm-20251023-vultr-x86_64-Fidget%252dSpinner-tracing_jit-3.15.0a1%2B-eb73378-pystats-vs-base.md).
  This new JIT frontend is already able to record/execute significantly more instructions than the previous JIT frontend. In this PR, we are now able to record through custom dunders, simple object creation, generators, etc. None of these were handled by the old JIT frontend. Some custom-dunder uops were discovered to be broken as part of this work (gh-140277). The optimizer stack space check is disabled, as it's no longer valid to deal with underflow.
  Pros:
  * Ignoring the generated tracer code, as it's automatically created, this is only about 1k additional lines of code. The maintenance burden is handled by the DSL and code generator.
  * `optimizer.c` is now significantly simpler, as we don't have to do strange things to recover the bytecode from a trace.
  * The new JIT frontend is able to handle a lot more control flow than the old one.
  * Tracing is very low overhead. We use the tail-calling interpreter/computed-goto interpreter to switch between tracing mode and non-tracing mode. I call this mechanism dual dispatch, as we have two dispatch tables dispatching to each other. Specialization is still enabled while tracing.
  * Better handling of polymorphism. We leverage the specializing interpreter for this.
  Cons:
  * (For now) requires the tail-calling interpreter or computed gotos. This means no Windows JIT for now :(. Not to fret, tail calling is coming soon to Windows though: https://github.com/python/cpython/pull/139962
  Design:
  * After each instruction, the `record_previous_inst` function/label is executed. This does as the name suggests.
  * The tracing interpreter lowers bytecode to uops directly so that it can obtain "fresh" values at the point of lowering.
  * The tracing version behaves nearly identically to the normal interpreter; in fact it even has specialization! This allows it to run without much of a slowdown when tracing. The actual cost of tracing is only a function call and writes to memory.
  * The tracing interpreter uses the specializing interpreter's deopt to naturally form the side-exit chains. This allows it to side-exit chain effectively, without repeating much code. We force re-specialization when tracing a deopt.
  * The tracing interpreter can even handle goto errors/exceptions, but I chose to disable them for now as they are not tested.
  * Because we do not share interpreter dispatch, there should be no significant slowdown of the original specializing interpreter on the tail-calling and computed-goto builds with the JIT disabled. With the JIT enabled, there might be a slowdown in the form of the JIT trying to trace.
  * Things that could have dynamic instruction-pointer effects are guarded on. The guard deopts to a new instruction: `_DYNAMIC_EXIT`.
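  The "dual dispatch" mechanism mentioned above (two dispatch tables handing control to each other, with the tracing one recording each instruction first) can be sketched as a toy in Python. Everything below is hypothetical and for illustration only; it is not CPython's implementation.

  ```python
  # Conceptual sketch of "dual dispatch": two dispatch functions that hand
  # control to each other, one of which records each instruction before
  # executing it. All names here are hypothetical, not CPython internals.
  MAX_TRACE_LEN = 3

  def run(program):
      stack, trace = [], []

      def execute(op, arg):
          if op == "PUSH":
              stack.append(arg)
          elif op == "ADD":
              stack.append(stack.pop() + stack.pop())

      def dispatch_plain(op, arg):
          execute(op, arg)
          # Switch to the tracing table at a (toy) hot spot.
          return dispatch_tracing if op == "HOT" else dispatch_plain

      def dispatch_tracing(op, arg):
          trace.append((op, arg))  # analogue of record_previous_inst
          execute(op, arg)
          # Hand control back to the plain table when the trace buffer is full.
          return dispatch_plain if len(trace) >= MAX_TRACE_LEN else dispatch_tracing

      dispatch = dispatch_plain
      for op, arg in program:
          dispatch = dispatch(op, arg)
      return stack, trace

  # The trace captures only the instructions executed while in tracing mode.
  print(run([("HOT", None), ("PUSH", 1), ("PUSH", 2), ("ADD", None), ("PUSH", 9)]))
  ```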
* gh-137400: Fix thread-safety issues when profiling all threads (gh-137518) [Sam Gross, 2025-08-13, 1 file, -36/+33]
  There were a few thread-safety issues when profiling or tracing all threads via PyEval_SetProfileAllThreads or PyEval_SetTraceAllThreads:
  * The loop over thread states could crash if a thread exits concurrently (in both the free threading and default build).
  * The modification of `c_profilefunc` and `c_tracefunc` wasn't thread-safe on the free threading build.
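  For reference, the Python-level counterparts of these C APIs are `threading.settrace_all_threads()` and `threading.setprofile_all_threads()`. A minimal usage sketch (the worker function and thread count are arbitrary):

  ```python
  import threading
  import time

  def prof(frame, event, arg):
      # A trivial profile hook; a real profiler would record timings here.
      pass

  def worker():
      for _ in range(3):
          time.sleep(0.01)

  threads = [threading.Thread(target=worker) for _ in range(4)]
  for t in threads:
      t.start()

  # Install the profile function on all running threads as well as the current
  # one (the Python-level counterpart of PyEval_SetProfileAllThreads).
  threading.setprofile_all_threads(prof)

  for t in threads:
      t.join()
  threading.setprofile_all_threads(None)
  ```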
* gh-136870: fix data races in instrumentation of bytecode (#136994) [Kumar Aditya, 2025-07-24, 1 file, -8/+12]
  De-instrumenting code objects modifies the thread-local bytecode of all threads; as such, holding the critical section on the code object is not sufficient and leads to data races. De-instrumentation is now performed under a stop-the-world pause, so no thread races with executing the thread-local bytecode while it is being de-instrumented.
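  A hedged sketch of the kind of workload this change protects: worker threads keep executing Python code while another thread repeatedly instruments and de-instruments it. The tool id, callback, and iteration counts are arbitrary.

  ```python
  import sys
  import threading

  mon = sys.monitoring
  TOOL = mon.COVERAGE_ID
  mon.use_tool_id(TOOL, "toggle-demo")
  mon.register_callback(TOOL, mon.events.LINE, lambda code, line: None)

  stop = threading.Event()

  def busy():
      while not stop.is_set():
          sum(range(100))

  workers = [threading.Thread(target=busy) for _ in range(4)]
  for t in workers:
      t.start()

  # Repeatedly instrument and de-instrument while the workers keep running; the
  # de-instrumentation half is what now happens under a stop-the-world pause on
  # free-threaded builds.
  for _ in range(50):
      mon.set_events(TOOL, mon.events.LINE)
      mon.set_events(TOOL, mon.events.NO_EVENTS)

  stop.set()
  for t in workers:
      t.join()
  mon.free_tool_id(TOOL)
  ```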
* gh-134411: assert `PyLong_FromLong(x) != NULL` when `x` is known to be small (#134415) [Sergey Muraviov, 2025-07-21, 1 file, -0/+4]
  `PyLong_FromLong(PY_MONITORING_DEBUGGER_ID)` falls into the small-int case and cannot return `NULL`, so `assert`s were added for extra confidence. See https://github.com/python/cpython/issues/134411#issuecomment-2897653868
* Fix a minor indentation error (#136661) [Tian Gao, 2025-07-15, 1 file, -1/+1]
* gh-132336: Mark a few "slow path" functions used by the interpreter loop as noinline (#132337) [mpage, 2025-04-10, 1 file, -7/+7]
  These are all slow paths and should not be inlined into the interpreter loop. Unfortunately, they end up being inlined with LTO and the current PGO task.
* gh-131763: Replace the redundant check with assert in remove_tools (#131765) [Sergey Muraviov, 2025-03-26, 1 file, -2/+3]
* gh-111178: fix UBSan failures for `Python/instrumentation.c` (#131608) [Bénédikt Tran, 2025-03-24, 1 file, -10/+18]
* gh-131238: Remove includes from pycore_interp.h (#131495) [Victor Stinner, 2025-03-20, 1 file, -13/+11]
  Also remove now-unused includes in C files.
* gh-131238: Remove more includes from pycore_interp.h (#131480) [Victor Stinner, 2025-03-19, 1 file, -0/+1]
* gh-131238: Remove many includes from pycore_interp.h (#131472) [Victor Stinner, 2025-03-19, 1 file, -4/+5]
* GH-131238: Core header refactor (GH-131250) [Mark Shannon, 2025-03-17, 1 file, -0/+1]
  * Moves most structs in pycore_ header files into pycore_structs.h and pycore_runtime_structs.h
  * Removes many cross-header dependencies
* gh-131141: fix data race in instrumentation while registering callback (#131142) [Kumar Aditya, 2025-03-12, 1 file, -10/+13]
* GH-128534: Fix behavior of branch monitoring for `async for` (GH-130847) [Mark Shannon, 2025-03-07, 1 file, -0/+8]
  Both branches in a pair now have a common source and are included in co_branches.
* GH-128534: Instrument branches for `async for` loops. (GH-130569) [Mark Shannon, 2025-02-27, 1 file, -0/+4]
* GH-129715: Don't project traces that return to an unknown caller (GH-130024) [Brandt Bucher, 2025-02-12, 1 file, -2/+1]
* Revert "GH-128914: Remove conditional stack effects from `bytecodes.c` and ↵Sam Gross2025-01-231-5/+0
| | | | | | | the code generators (GH-128918)" (GH-129202) The commit introduced a ~2.5-3% regression in the free threading build. This reverts commit ab61d3f4303d14a413bc9ae6557c730ffdf7579e.
* GH-128563: Add new frame owner type for interpreter entry frames (GH-129078)Mark Shannon2025-01-211-1/+1
| | | Add new frame owner type for interpreter entry frames
* GH-127953: Make line number lookup O(1) regardless of the size of the code object (GH-128350) [Mark Shannon, 2025-01-21, 1 file, -157/+200]
* GH-128914: Remove conditional stack effects from `bytecodes.c` and the code generators (GH-128918) [Mark Shannon, 2025-01-20, 1 file, -0/+5]
* GH-128375: Better instrument for `FOR_ITER` (GH-128445) [Mark Shannon, 2025-01-06, 1 file, -54/+97]
* GH-122548: Implement branch taken and not taken events for sys.monitoring (GH-122564) [Mark Shannon, 2024-12-19, 1 file, -55/+266]
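  For context, branch monitoring is driven through `sys.monitoring`. The sketch below uses the pre-existing aggregate `BRANCH` event, which the commit above refines into taken/not-taken variants (their exact names are not given in this log); the tool id and callback are illustrative.

  ```python
  import sys

  mon = sys.monitoring
  TOOL = mon.COVERAGE_ID
  mon.use_tool_id(TOOL, "branch-demo")

  def on_branch(code, src_offset, dst_offset):
      # src_offset is the offset of the branch instruction, dst_offset is
      # where execution continued.
      print(f"{code.co_name}: {src_offset} -> {dst_offset}")

  mon.register_callback(TOOL, mon.events.BRANCH, on_branch)
  mon.set_events(TOOL, mon.events.BRANCH)

  def parity(x):
      return "even" if x % 2 == 0 else "odd"

  parity(2)
  parity(3)

  mon.set_events(TOOL, mon.events.NO_EVENTS)
  mon.free_tool_id(TOOL)
  ```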
* gh-114940: Add _Py_FOR_EACH_TSTATE_UNLOCKED(), and Friends (gh-127077) [Eric Snow, 2024-11-21, 1 file, -5/+2]
  This is a precursor to the actual fix for gh-114940, where we will change these macros to use the new lock. This change is almost entirely mechanical; the exceptions are the loops in codeobject.c and ceval.c, which now hold the "head" lock. Note that almost all of the uses of _Py_FOR_EACH_TSTATE_UNLOCKED() here will change to _Py_FOR_EACH_TSTATE_BEGIN() once we add the new per-interpreter lock.
* gh-115999: Implement thread-local bytecode and enable specialization for `BINARY_OP` (#123926) [mpage, 2024-11-04, 1 file, -68/+91]
  In free-threaded builds, each thread specializes a thread-local copy of the bytecode, created on the first RESUME. All copies of the bytecode for a code object are stored in the co_tlbc array on the code object. Threads reserve a globally unique index identifying their copy of the bytecode in all co_tlbc arrays at thread creation and release the index at thread destruction. The first entry in every co_tlbc array always points to the "main" copy of the bytecode that is stored at the end of the code object; this ensures that no bytecode is copied for programs that do not use threads.
  Thread-local bytecode can be disabled at runtime by providing either -X tlbc=0 or PYTHON_TLBC=0. Disabling thread-local bytecode also disables specialization.
  Concurrent modifications to the bytecode made by the specializing interpreter and instrumentation use atomics, with specialization taking care not to overwrite an instruction that was instrumented concurrently.
* GH-125837: Split `LOAD_CONST` into three. (GH-125972) [Mark Shannon, 2024-10-29, 1 file, -5/+0]
  * Add LOAD_CONST_IMMORTAL opcode
  * Add LOAD_SMALL_INT opcode
  * Remove RETURN_CONST opcode
* GH-116968: Remove branch from advance_backoff_counter (GH-124469) [Mark Shannon, 2024-10-07, 1 file, -4/+4]
* gh-116750: Add clear_tool_id function to unregister events and callbacks (#124568) [Tian Gao, 2024-10-01, 1 file, -0/+84]
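  A minimal usage sketch of the new `sys.monitoring.clear_tool_id()` entry point; the tool id, tool name, and event choice are illustrative.

  ```python
  import sys

  mon = sys.monitoring
  TOOL = mon.PROFILER_ID
  mon.use_tool_id(TOOL, "clear-demo")
  mon.register_callback(TOOL, mon.events.PY_START, lambda code, offset: None)
  mon.set_events(TOOL, mon.events.PY_START)

  # clear_tool_id() unregisters the callbacks and disables the events that were
  # set for this tool id in a single call, before the id is released.
  mon.clear_tool_id(TOOL)
  mon.free_tool_id(TOOL)
  ```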
* GH-122390: Replace `_Py_GetbaseOpcode` with `_Py_GetBaseCodeUnit` (GH-122942) [Mark Shannon, 2024-08-13, 1 file, -24/+45]
* gh-122247: Move instruction instrumentation sanity check after tracing check (#122251) [Tian Gao, 2024-08-08, 1 file, -1/+1]
* gh-117657: Fix some simple races in instrumentation.c (GH-120118) [Ken Jin, 2024-06-13, 1 file, -2/+2]
  * Stop the world when setting local events
* gh-111997: Fix argument count for LINE event and clarify type of argument counts. (#119179) [scoder, 2024-05-26, 1 file, -10/+12]
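  From the Python side, a LINE callback takes exactly two arguments, the code object and the line number. A minimal sketch (tool id and traced function are illustrative):

  ```python
  import sys

  mon = sys.monitoring
  TOOL = mon.COVERAGE_ID
  mon.use_tool_id(TOOL, "line-demo")

  def on_line(code, line_number):
      # LINE callbacks receive the code object and the line about to execute.
      print(f"{code.co_filename}:{line_number}")

  mon.register_callback(TOOL, mon.events.LINE, on_line)
  mon.set_events(TOOL, mon.events.LINE)

  def f():
      x = 1
      return x + 1

  f()

  mon.set_events(TOOL, mon.events.NO_EVENTS)
  mon.free_tool_id(TOOL)
  ```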
* gh-119431: fix refleak in test_monitoring (#119444) [Irit Katriel, 2024-05-23, 1 file, -0/+1]
* gh-118692: Avoid creating unnecessary StopIteration instances for monitoring (#119216) [Irit Katriel, 2024-05-21, 1 file, -3/+5]
* gh-118415: Fix issues with local tracing being enabled/disabled on a function (#118496) [Dino Viehland, 2024-05-06, 1 file, -46/+45]
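  Per-function ("local") monitoring goes through `sys.monitoring.set_local_events()`. A minimal sketch of enabling and then disabling events on a single code object, the kind of toggle this fix concerns (tool id and functions are illustrative):

  ```python
  import sys

  mon = sys.monitoring
  TOOL = mon.DEBUGGER_ID
  mon.use_tool_id(TOOL, "local-demo")
  mon.register_callback(TOOL, mon.events.PY_START,
                        lambda code, offset: print("start", code.co_name))

  def traced():
      return 42

  def untraced():
      return 0

  # Enable PY_START only for traced()'s code object; untraced() stays
  # uninstrumented.
  mon.set_local_events(TOOL, traced.__code__, mon.events.PY_START)

  traced()    # fires the callback
  untraced()  # does not

  # Turn local events back off for the function.
  mon.set_local_events(TOOL, traced.__code__, mon.events.NO_EVENTS)
  mon.free_tool_id(TOOL)
  ```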
* gh-111997: C-API for signalling monitoring events (#116413) [Irit Katriel, 2024-05-04, 1 file, -0/+301]
* gh-107674: Improve performance of `sys.settrace` (GH-117133) [Tian Gao, 2024-05-03, 1 file, -3/+1]
  * Check tracing in RESUME_CHECK
  * Only change to RESUME_CHECK if not tracing
* gh-118335: Configure Tier 2 interpreter at build time (#118339) [Guido van Rossum, 2024-05-01, 1 file, -0/+6]
  The code for Tier 2 is now only compiled when configured with `--enable-experimental-jit[=yes|interpreter]`. We drop support for `PYTHON_UOPS` and `-X uops`, but you can disable the interpreter or JIT at runtime by setting `PYTHON_JIT=0`. You can also build it without enabling it by default using `--enable-experimental-jit=yes-off`; enable with `PYTHON_JIT=1`.
  On Windows, the `build.bat` script supports `--experimental-jit`, `--experimental-jit-off`, `--experimental-interpreter`.
  In the C code, `_Py_JIT` is defined as before when the JIT is enabled; the new variable `_Py_TIER2` is defined when the JIT *or* the interpreter is enabled. It is actually a bitmask: 1: JIT; 2: default-off; 4: interpreter.
* gh-117657: Fix small issues with instrumentation and TSAN (#118064) [Dino Viehland, 2024-04-30, 1 file, -3/+6]
  Small TSAN fixups for instrumentation.
* gh-107674: Lazy load line number to improve performance of tracing (GH-118127) [Tian Gao, 2024-04-29, 1 file, -15/+47]
* gh-116818: Make `sys.settrace`, `sys.setprofile`, and monitoring thread-safe (#116775) [Dino Viehland, 2024-04-19, 1 file, -20/+121]
  Makes sys.settrace, sys.setprofile, and monitoring generally thread-safe. Mostly uses a stop-the-world approach and synchronization around the code object's _co_instrumentation_version. There may be a little bit of extra synchronization around the monitoring data that's required to be TSAN clean.
* gh-107674: Remove some unnecessary code in instrumentation code (GH-117393) [Tian Gao, 2024-04-09, 1 file, -1/+1]
* gh-116968: Reimplement Tier 2 counters (#117144) [Guido van Rossum, 2024-04-04, 1 file, -4/+4]
  Introduce a unified 16-bit backoff counter type (`_Py_BackoffCounter`), shared between the Tier 1 adaptive specializer and the Tier 2 optimizer. The API used for adaptive specialization counters is changed but the behavior is (supposed to be) identical. The behavior of the Tier 2 counters is changed:
  - There are no longer dynamic thresholds (we never varied these).
  - All counters now use the same exponential backoff.
  - The counter for `JUMP_BACKWARD` starts counting down from 16.
  - The `temperature` in side exits starts counting down from 64.
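  A conceptual sketch of a counter with the behavior described above (count down, then back off exponentially after firing). The field names, backoff cap, and lack of 16-bit packing are illustrative; this is not the actual `_Py_BackoffCounter` layout.

  ```python
  class BackoffCounter:
      """Toy model: counts down to zero, then backs off exponentially."""

      def __init__(self, initial_value, initial_backoff=4, max_backoff=12):
          self.value = initial_value
          self.backoff = initial_backoff
          self.max_backoff = max_backoff

      def tick(self):
          """Return True when the counter fires (reaches zero)."""
          if self.value > 0:
              self.value -= 1
              return False
          # Fired: double the wait (up to a cap) before firing again.
          self.backoff = min(self.backoff + 1, self.max_backoff)
          self.value = (1 << self.backoff) - 1
          return True

  # E.g. a JUMP_BACKWARD-style counter that starts counting down from 16:
  counter = BackoffCounter(initial_value=16)
  first_fire = next(i for i in range(1, 1000) if counter.tick())
  print(first_fire)  # 17: fires after the initial countdown, then waits longer each time
  ```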
* gh-115832: Fix instrumentation version mismatch during interpreter shutdown (#115856) [Brett Simmers, 2024-03-04, 1 file, -1/+9]
  A previous commit introduced a bug in `interpreter_clear()`: it set `interp->ceval.instrumentation_version` to 0 without making the corresponding change to `tstate->eval_breaker` (which holds a thread-local copy of the version). After this happens, Python code can still run due to object finalizers during a GC, and the version check in bytecodes.c will see a different result than the one in instrumentation.c, causing an infinite loop. The fix itself is straightforward: clear `tstate->eval_breaker` when clearing `interp->ceval.instrumentation_version`.
* gh-116098: Revert "gh-107674: Improve performance of `sys.settrace` (GH-114986)" (GH-116178) [Tian Gao, 2024-03-01, 1 file, -1/+3]
  This reverts commit 0a61e237009bf6b833e13ac635299ee063377699.
* gh-107674: Improve performance of `sys.settrace` (GH-114986) [Tian Gao, 2024-02-28, 1 file, -3/+1]
* gh-115168: Add pystats counter for invalidated executors (GH-115169) [Michael Droettboom, 2024-02-26, 1 file, -3/+3]
* gh-112175: Add `eval_breaker` to `PyThreadState` (#115194) [Brett Simmers, 2024-02-20, 1 file, -11/+38]
  This change adds an `eval_breaker` field to `PyThreadState`. The primary motivation is performance in free-threaded builds: with thread-local eval breakers, we can stop a specific thread (e.g., for an async exception) without interrupting other threads. The source of truth for the global instrumentation version is stored in the `instrumentation_version` field in PyInterpreterState. Threads usually read the version from their local `eval_breaker`, where it continues to be colocated with the eval breaker bits.
* GH-113486: Do not emit spurious PY_UNWIND events for optimized calls to classes. (GH-113680) [Mark Shannon, 2024-01-05, 1 file, -4/+2]