author     Michael Droettboom <mdboom@gmail.com>   2024-03-13 22:13:33 (GMT)
committer  GitHub <noreply@github.com>             2024-03-13 22:13:33 (GMT)
commit     cef0ec1a3ca40db69b56bcd736c1b3bb05a1cf48 (patch)
tree       6025d8fccc74cc13e7300870db12607e6c2099c2 /Python/bytecodes.c
parent     8c6db45ce34df7081d7497e638daf3e130303295 (diff)
gh-116760: Fix pystats for trace attempts (GH-116761)
There are now at least two bytecodes that may attempt to optimize:
JUMP_BACK and, more recently, COLD_EXIT.
Only JUMP_BACK was counting the attempt in the stats.
This moves the counter into uop_optimize itself, so it is incremented
no matter where the optimizer is called from.
Diffstat (limited to 'Python/bytecodes.c')
-rw-r--r--  Python/bytecodes.c  1
1 file changed, 0 insertions(+), 1 deletion(-)
diff --git a/Python/bytecodes.c b/Python/bytecodes.c
index ec05e40..af2e2c8 100644
--- a/Python/bytecodes.c
+++ b/Python/bytecodes.c
@@ -2349,7 +2349,6 @@ dummy_func(
         // Use '>=' not '>' so that the optimizer/backoff bits do not effect the result.
         // Double-check that the opcode isn't instrumented or something:
         if (offset_counter >= threshold && this_instr->op.code == JUMP_BACKWARD) {
-            OPT_STAT_INC(attempts);
             _Py_CODEUNIT *start = this_instr;
             /* Back up over EXTENDED_ARGs so optimizer sees the whole instruction */
             while (oparg > 255) {