(GH-142759)
Signed-off-by: Manjusaka <me@manjusaka.me>
JIT: Fix crash due to incorrect caching on side exits when exiting jitted code. (GH-142762)
* Make sure that the stack is in the correct state at side exits with TOS
  cached values (see the sketch below).
* Simplify the choice of cached items for side exits.
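Conceptually, keeping the stack in the correct state means spilling any
register-cached top-of-stack values back to the in-memory evaluation stack
before control leaves the trace. A hedged sketch, with invented names and a
two-slot cache (not the actual JIT template code):

```c
#include <stdint.h>

/* Illustrative stand-in for CPython's _PyStackRef. */
typedef struct { uintptr_t bits; } stackref_t;

/* Before jumping to a side exit, values cached in registers must be
   written back so the interpreter (or the next executor) sees a
   consistent evaluation stack. */
static stackref_t *
spill_before_side_exit(stackref_t *stack_pointer,
                       stackref_t tos0, stackref_t tos1, int ncached)
{
    if (ncached >= 2) {
        *stack_pointer++ = tos1;   /* second-from-top first */
    }
    if (ncached >= 1) {
        *stack_pointer++ = tos0;   /* then the top of stack */
    }
    return stack_pointer;          /* now the true stack top */
}
```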
This combines most _PyStackRef functions and macros between the
free-threaded and default builds (a sketch of the unified tag test follows
the list below).
- Remove Py_TAG_DEFERRED (same as Py_TAG_REFCNT)
- Remove PyStackRef_IsDeferred (same as !PyStackRef_RefcountOnObject)
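For context, a minimal sketch of the tagged-reference idea, with
illustrative names and bit values (the actual tag polarity and constants in
CPython may differ):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: a stack reference is a pointer with a low tag bit.
   After this change, one bit answers both questions that Py_TAG_DEFERRED
   and Py_TAG_REFCNT used to answer separately, and PyStackRef_IsDeferred
   is simply the negation of PyStackRef_RefcountOnObject. */
typedef struct { uintptr_t bits; } sketch_stackref_t;

#define SKETCH_TAG_REFCNT ((uintptr_t)1)

static inline bool
sketch_refcount_on_object(sketch_stackref_t ref)
{
    /* Bit set: a deferred reference that owns no refcount on the object.
       Bit clear: a normal refcounted reference. */
    return (ref.bits & SKETCH_TAG_REFCNT) == 0;
}
```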
Co-authored-by: Ken Jin <kenjin4096@gmail.com>
Signed-off-by: Manjusaka <me@manjusaka.me>
Co-authored-by: Ken Jin <kenjin4096@gmail.com>
(#142636)
(gh-142703)
Co-authored-by: Ken Jin <kenjin4096@gmail.com>
Signed-off-by: Manjusaka <me@manjusaka.me>
This roughly follows what was done for dictobject to make a lock-free
lookup operation. With this change, the set contains operation scales much
better when used from multiple threads. The frozenset contains performance
appears unchanged (it was already lock-free).
Summary of changes (a sketch of the lookup pattern follows the list):
* refactor set_lookkey() into set_do_lookup(), which now takes a function
  pointer that does the entry comparison. This is similar to dictobject's
  do_lookup(). In an optimized build, the comparison function is inlined
  and there should be no performance cost to this.
* change set_do_lookup() to return a status separately from the entry value
* add set_compare_frozenset() and use it if the object is a frozenset. For
  the free-threaded build, this avoids some overhead (locking, atomic
  operations, incref/decref on the key)
* use the FT_ATOMIC_* macros as needed for atomic loads and stores
* use a deferred free on the set table array if it is shared (only on the
  free-threaded build; the normal build always does an immediate free)
* for the free-threaded build, use an explicit for loop to zero the table,
  rather than memcpy()
* when mutating the set, assign so->table to NULL while the change is
  happening; assign the real table array after the change is done
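A hedged, self-contained illustration of the resulting lookup shape (not
the real setobject.c, which also handles hashing details, collisions and
retries):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative entry and comparison callback; set_compare_frozenset()
   would be a cheaper variant of `cmp` in the real code. */
typedef struct { long key; int used; } entry_t;
typedef int (*cmp_func)(const entry_t *entry, long key);

typedef struct {
    _Atomic(entry_t *) table;
    size_t mask;               /* table size - 1 (power of two) */
} set_t;

/* Returns 1 if found, 0 if absent, -1 if a writer is active (the caller
   retries under a lock). In an optimized build, each call site's `cmp`
   is inlined. Assumes the table is never completely full. */
static int
set_do_lookup(set_t *so, long key, size_t hash, cmp_func cmp)
{
    /* A mutating thread parks so->table at NULL for the duration of the
       change, so readers never probe a half-updated table. */
    entry_t *table = atomic_load_explicit(&so->table, memory_order_acquire);
    if (table == NULL) {
        return -1;
    }
    for (size_t i = hash & so->mask; table[i].used; i = (i + 1) & so->mask) {
        if (cmp(&table[i], key)) {
            return 1;
        }
    }
    return 0;
}
```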
Deprecate functions (see the migration sketch after the list):
* _PyObject_CallMethodId()
* _PyObject_GetAttrId()
* _PyUnicode_FromId()
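A hedged migration sketch (not from the commit itself): code outside
CPython should use the public string-based APIs instead of the deprecated
_Py*Id helpers.

```c
#include <Python.h>

static PyObject *
get_name(PyObject *obj)
{
    /* Before (deprecated):
     *     _Py_IDENTIFIER(name);
     *     return _PyObject_GetAttrId(obj, &PyId_name);
     */
    return PyObject_GetAttrString(obj, "name");
}
```

The _Py*Id family cached an interned string per identifier; the
string-based APIs trade that small lookup saving for a stable, public
interface.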
There are places where we use "relaxed" loads even though C11 requires
"consume" or stronger. Unfortunately, compilers don't really implement
"consume", so we fake it for our use in a way that avoids upsetting TSan.
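A hedged sketch of the general technique (not necessarily the exact macro
CPython uses):

```c
#include <stdatomic.h>

/* memory_order_consume is effectively unimplemented: compilers silently
   promote it to the more expensive acquire. Where the code relies only
   on address dependencies, a relaxed load is sufficient on the targets
   we care about, but TSan (correctly, by the letter of C11) flags it,
   so the load is strengthened to acquire under TSan. */
#if defined(__SANITIZE_THREAD__)   /* GCC's TSan macro; clang differs */
#  define LOAD_PTR_CONSUME(p) atomic_load_explicit((p), memory_order_acquire)
#else
#  define LOAD_PTR_CONSUME(p) atomic_load_explicit((p), memory_order_relaxed)
#endif
```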
Use three registers to cache values at the top of the evaluation stack.
This significantly reduces memory traffic for the smaller, more common uops.
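A hedged, self-contained illustration of the effect (invented uop body, not
the real interpreter):

```c
#include <stdint.h>

/* With the top of the evaluation stack cached in a local (i.e. in a
   register), a chain of small uops avoids a store/load round trip
   through the stack array between every pair of instructions. */
static int64_t
run_add_chain(int64_t *stack, int sp, int n)
{
    int64_t tos = stack[--sp];      /* load the top of stack once */
    for (int i = 0; i < n; i++) {
        tos += stack[--sp];         /* result stays in the register */
    }
    stack[sp++] = tos;              /* spill once at the end */
    return stack[sp - 1];
}
```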
(GH-141989)
The MAKE_VALUE_AND_BACKOFF() macro now casts its result to uint16_t.
Add the pycore_backoff.h header to the test_cppext tests.
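A hedged sketch of why the cast matters; the macro body and bit layout here
are illustrative, not CPython's actual pycore_backoff.h:

```c
#include <stdint.h>

#define BACKOFF_BITS 4

/* The shift promotes its operands to int, so without the uint16_t cast
   the macro's result is an int; storing that into a 16-bit counter
   field produces narrowing warnings when the header is compiled as C++
   (which is exactly what test_cppext exercises). */
#define MAKE_VALUE_AND_BACKOFF(value, backoff) \
    ((uint16_t)(((value) << BACKOFF_BITS) | (backoff)))

static uint16_t example_counter = MAKE_VALUE_AND_BACKOFF(100, 3);
```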
Increase _PyOS_MIN_STACK_SIZE if Python is built in debug mode.
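A hedged sketch of the shape of the change; the values are illustrative,
not the real constants:

```c
/* Debug builds disable inlining and add extra checks, so each C frame
   is larger and more stack headroom is needed before Python reports a
   recursion error. */
#ifdef Py_DEBUG
#  define _PyOS_MIN_STACK_SIZE (128 * 1024)
#else
#  define _PyOS_MIN_STACK_SIZE (64 * 1024)
#endif
```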
On m68k, an fmove instruction accessing %fpcr may only move from
or to a data register or a memory operand. The constraint "g" also
permits the use of address registers, which is invalid. The correct
constraint is "dm". Beginning with GCC 15, the register allocator
picks an address register for this code, which causes a SIGILL at runtime.
Co-authored-by: Michael Karcher <github@mkarcher.dialup.fu-berlin.de>
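A hedged sketch of the constraint change (m68k-only code; the surrounding
context in CPython's FPU-control macros differs):

```c
static unsigned int
read_fpcr(void)
{
    unsigned int fpcr;
    /* Broken: "g" allows data registers, address registers and memory,
       so GCC 15's register allocator may pick an address register,
       which the fmove %fpcr form cannot encode -> SIGILL at run time:
           __asm__ ("fmove.l %%fpcr, %0" : "=g" (fpcr));
       Fixed: "dm" restricts the operand to a data register or memory. */
    __asm__ ("fmove.l %%fpcr, %0" : "=dm" (fpcr));
    return fpcr;
}
```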
sequence is mutated (#141736)
(#142137)
This PR implements frame caching in the RemoteUnwinder class to significantly reduce memory reads when profiling remote processes with deep call stacks.
When cache_frames=True, the unwinder stores the frame chain from each sample and reuses unchanged portions in subsequent samples. Since most profiling samples capture similar call stacks (especially the parent frames), this optimization avoids repeatedly reading the same frame data from the target process.
The implementation adds a last_profiled_frame field to the thread state that tracks where the previous sample stopped. On the next sample, if the current frame chain reaches this marker, the cached frames from that point onward are reused instead of being re-read from remote memory.
The sampling profiler now enables frame caching by default.
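A hedged sketch of the reuse test, with invented names (the real
RemoteUnwinder is considerably more involved):

```c
#include <stddef.h>
#include <stdint.h>

typedef struct { uintptr_t addr; } frame_t;

typedef struct {
    uintptr_t last_profiled_frame;  /* where the previous sample stopped */
    frame_t  *cached;               /* parent frames decoded last time */
    size_t    ncached;
} frame_cache_t;

/* Walk the remote frame chain into `out`; `read_next` performs one read
   from the target process. When the walk reaches the frame the previous
   sample already decoded, the cached parents are copied in instead of
   being re-read from remote memory. */
static size_t
collect_frames(frame_cache_t *c, uintptr_t top, frame_t *out, size_t cap,
               uintptr_t (*read_next)(uintptr_t))
{
    size_t n = 0;
    for (uintptr_t f = top; f != 0 && n < cap; f = read_next(f)) {
        if (f == c->last_profiled_frame) {
            for (size_t i = 0; i < c->ncached && n < cap; i++) {
                out[n++] = c->cached[i];   /* unchanged suffix, no reads */
            }
            break;
        }
        out[n++].addr = f;
    }
    return n;
}
```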
Co-authored-by: Sergey B Kirpichev <skirpichev@gmail.com>
* Factor out the bodies of the largest uops, to reduce JIT code size.
* Factor out a common assert, also reducing JIT code size.
* Limit the size of jitted code for a single executor to 1 MB (see the
  sketch below).
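A hedged sketch of the new size guard; names and the fallback behaviour are
illustrative:

```c
#include <stddef.h>

#define MAX_EXECUTOR_CODE_SIZE ((size_t)1 << 20)   /* 1 MB */

/* Executors whose jitted code would exceed the cap are not compiled;
   they keep running in the uop interpreter instead of emitting
   oversized machine code. */
static int
executor_fits_in_jit(size_t estimated_code_size)
{
    return estimated_code_size <= MAX_EXECUTOR_CODE_SIZE;
}
```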
Remove internal functions:
* _PyDict_ContainsId()
* _PyDict_DelItemId()
* _PyDict_GetItemIdWithError()
* _PyDict_SetItemId()
* _PyEval_GetBuiltinId()
* _PyObject_CallMethodIdNoArgs()
* _PyObject_CallMethodIdObjArgs()
* _PyObject_CallMethodIdOneArg()
* _PyObject_VectorcallMethodId()
* _PyUnicode_EqualToASCIIId()
These functions were not exported and so were not usable outside CPython
(see the migration sketch below).
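A hedged migration sketch (not from the commit itself): inside CPython, the
replacements are the statically allocated _Py_ID() identifiers; third-party
code uses the public string-based calls.

```c
#include <Python.h>

static PyObject *
call_close(PyObject *obj)
{
    /* Before (removed):
     *     _Py_IDENTIFIER(close);
     *     return _PyObject_CallMethodIdNoArgs(obj, &PyId_close);
     *
     * Inside CPython (internal API):
     *     return PyObject_CallMethodNoArgs(obj, &_Py_ID(close));
     */
    return PyObject_CallMethod(obj, "close", NULL);
}
```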
test_peg_generator needs the function.
Add _PyMem_IsULongFreed() function.
Replace frames[1] with a flexible array member, frames[], in the
tracemalloc_traceback structure.
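A hedged sketch of the idiom (struct contents abbreviated, names
illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct { uintptr_t ptr; int lineno; } frame_sketch_t;

typedef struct {
    int            nframe;
    frame_sketch_t frames[];   /* C99 flexible array member; was frames[1] */
} traceback_sketch_t;

static traceback_sketch_t *
traceback_alloc(int nframe)
{
    /* The size computation is now simply header + n trailing frames,
       with no correction for the old dummy first element. */
    return malloc(sizeof(traceback_sketch_t)
                  + (size_t)nframe * sizeof(frame_sketch_t));
}
```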
Move the private function to the internal C API (pycore_ceval.h).
Added atomic operations to `scanner_begin()` and `scanner_end()` to prevent
race conditions on the `executing` flag in free-threaded builds. Also added
tests for concurrent usage of the `re` module.
Without the atomic operations, `test_scanner_concurrent_access()` triggers
`assert(self->executing)` failures, or a thread sanitizer run emits errors.
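A hedged sketch of the begin/end pattern, using plain C11 atomics rather
than CPython's atomic wrappers (names simplified):

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_bool executing;
} scanner_sketch_t;

/* Atomically claim the scanner; returns false if another thread is
   already iterating, instead of racing on a plain flag. */
static bool
scanner_begin(scanner_sketch_t *self)
{
    bool expected = false;
    return atomic_compare_exchange_strong(&self->executing,
                                          &expected, true);
}

static void
scanner_end(scanner_sketch_t *self)
{
    atomic_store(&self->executing, false);
}
```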
Make Py_{SIZE,IS_TYPE,SET_SIZE} regular functions in the stable ABI (GH-139166)
Group them together with Py_TYPE & Py_SET_TYPE to cut down
on repetitive preprocessor macros.
Format repetitive definitions in object.c more concisely.
Py_SET_TYPE is still left out of the Limited API.
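A hedged sketch of the header-side split (not the actual object.h):

```c
/* Stable-ABI extensions bind to an exported function whose body can
   change between CPython versions, while regular builds keep the
   zero-cost inline accessor. */
#if defined(Py_LIMITED_API)
PyAPI_FUNC(Py_ssize_t) Py_SIZE(PyObject *ob);
#else
static inline Py_ssize_t
Py_SIZE(PyObject *ob)
{
    return ((PyVarObject *)ob)->ob_size;
}
#endif
```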
2 (GH-141591)