fix UBSan failures for `propertyobject`

operation (gh-128196)

`Objects/unicodeobject::_copy_characters` (#127876)

fix UBSan failures for `_PyTupleIterObject`

* fix UBSan failures for `enumobject`
* fix UBSan failures for `reversedobject`
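These UBSan failures are typically "call through a pointer with an incompatible function type": a slot function declared with the concrete struct as its first parameter. The usual fix is to declare the slot with PyObject * and cast inside; a minimal sketch with illustrative names, not the actual diff:

#include "Python.h"

/* Illustrative layout; the real propertyobject lives in Objects/descrobject.c. */
typedef struct {
    PyObject_HEAD
    PyObject *prop_get;
} demo_propertyobject;

/* The slot function now takes PyObject * (matching the slot's declared type)
 * and the cast to the concrete struct happens inside, in one place. */
#define demo_property_CAST(op) ((demo_propertyobject *)(op))

static PyObject *
demo_property_fget(PyObject *op, void *Py_UNUSED(closure))
{
    demo_propertyobject *self = demo_property_CAST(op);
    if (self->prop_get == NULL) {
        Py_RETURN_NONE;
    }
    return Py_NewRef(self->prop_get);
}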
(GH-127789)
- Unify `get_unicode` and `get_string` into a single function.
- Allow retrieving the underlying `object` attribute, its
  size, and the adjusted `start` and `end`, all at once.
  Add a new `_PyUnicodeError_GetParams` internal function for this
  (see the sketch after this list).
  (In `exceptions.c`, it is fairly common to not need all of the attributes,
  but the compiler has the opportunity to inline the function and optimize
  the unneeded work away. Outside that file, we usually need all or
  most of them at once.)
- Use a common implementation for the following functions:
- `PyUnicode{Decode,Encode}Error_GetEncoding`
- `PyUnicode{Decode,Encode,Translate}Error_GetObject`
- `PyUnicode{Decode,Encode,Translate}Error_{Get,Set}Reason`
- `PyUnicode{Decode,Encode,Translate}Error_{Get,Set}{Start,End}`
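The entry does not show the signature of `_PyUnicodeError_GetParams`; the sketch below assumes one plausible shape (and the availability of the internal headers) purely to illustrate how each public getter can become a thin wrapper around the single helper:

/* Assumed signature, for illustration only: fetch the object, its length, and
 * the clipped start/end in one call; 0 on success, -1 with an exception set. */
static PyObject *
demo_unicode_error_get_start(PyObject *self, int as_bytes)
{
    PyObject *obj;
    Py_ssize_t size, start, end;
    if (_PyUnicodeError_GetParams(self, &obj, &size, &start, &end, as_bytes) < 0) {
        return NULL;
    }
    Py_DECREF(obj);   /* assumed: the object comes back as a strong reference */
    return PyLong_FromSsize_t(start);
}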
There was a data race on the utf8 field between `PyUnicode_SET_UTF8` and
`_PyUnicode_CheckConsistency`. Use the `_PyUnicode_UTF8()` accessor,
which uses an atomic load internally, to avoid the data race.
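A sketch of the accessor pattern (the struct below is illustrative; `FT_ATOMIC_LOAD_PTR_ACQUIRE` is one of the internal free-threading wrappers from `pycore_pyatomic_ft_wrappers.h` and degrades to a plain read in default builds):

#include "Python.h"
#include "pycore_pyatomic_ft_wrappers.h"   /* internal header */

typedef struct {
    PyObject_HEAD
    char *utf8;   /* cached UTF-8 buffer, published lazily by a writer thread */
} demo_unicode;

/* All readers go through the accessor; under Py_GIL_DISABLED the load is
 * atomic, so it cannot race with the thread that publishes the buffer. */
static inline char *
demo_get_utf8(demo_unicode *op)
{
    return FT_ATOMIC_LOAD_PTR_ACQUIRE(op->utf8);
}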
(GH-128297)

Co-authored-by: Kumar Aditya <kumaraditya@python.org>

It's already inside a `Py_GIL_DISABLED` block, so the `#else` clause is always unused.

(GH-128121)

Methods (functions defined in class scope) are likely to be cleaned
up by the GC anyway.
Add a new code flag, `CO_METHOD`, that is set for functions defined
in a class scope. Use it when deciding whether to defer functions.
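A small sketch of where the new flag would be consulted (assuming `CO_METHOD` is visible to that code; the direction of the decision is as described above):

/* The flag travels on the code object, so the deferral decision can tell
 * module-level functions apart from ones compiled in a class body. */
static int
demo_defined_in_class_scope(PyCodeObject *co)
{
    return (co->co_flags & CO_METHOD) != 0;
}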
* Add `_PyDictKeys_StringLookupSplit`, which locks the dict keys, and use it
  in place of `_PyDictKeys_StringLookup`.
* Change `_PyObject_TryGetInstanceAttribute` to use that function
  in the case of split keys.
* Add a `unicodekeys_lookup_split` helper which allows code sharing
  between `_Py_dict_lookup` and `_PyDictKeys_StringLookupSplit`.
* Fix locking for `STORE_ATTR_INSTANCE_VALUE`. Create a
  `_GUARD_TYPE_VERSION_AND_LOCK` uop so that the object stays locked and
  `tp_version_tag` cannot change (see the sketch after this list).
* Pass `tp_version_tag` to `specialize_dict_access()`, ensuring
  the version we store in the cache is the correct one (in case it
  changes during the specialization analysis).
* Split `analyze_descriptor` into `analyze_descriptor_load` and
  `analyze_descriptor_store`, since those don't share much logic.
  Add a `descriptor_is_class` helper function.
* In `specialize_dict_access`, double-check `_PyObject_GetManagedDict()`
  in case we race and the dict was materialized before the lock.
* Avoid borrowed references in `_Py_Specialize_StoreAttr()`.
* Use the `specialize()` and `unspecialize()` helpers.
* Add unit tests to ensure specializing happens as expected in FT builds.
* Add unit tests that attempt to trigger data races (useful for running under TSAN).
* Add a `has_split_table` function to `_testinternalcapi`.
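A rough sketch of the guard-and-lock idea (simplified; not the generated uop code): take the per-object lock first, then verify the type version, so the version cannot change while the specialized store runs.

#include "Python.h"

static int
demo_store_attr_guarded(PyObject *owner, unsigned int expected_version)
{
    int ok = 0;
    Py_BEGIN_CRITICAL_SECTION(owner);          /* per-object lock */
    if (Py_TYPE(owner)->tp_version_tag == expected_version) {
        /* ... perform the specialized attribute store while still locked ... */
        ok = 1;
    }
    Py_END_CRITICAL_SECTION();
    return ok;   /* 0 means deoptimize and take the generic path */
}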
(GH-122564)

The `PyWeakref_IsDead()` function tests if a weak reference is dead
without any side effects. Although you can also detect if a weak
reference is dead using `PyWeakref_GetRef()`, that function returns a
strong reference that must be `Py_DECREF()`'d, which can introduce side
effects if the last reference is concurrently dropped (at least in the
free threading build).
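A usage sketch (`ref` is assumed to be a weak reference object, and the return convention assumed here is 1 for dead, 0 for alive, -1 on error; error handling condensed):

#include "Python.h"

static int
demo_report_liveness(PyObject *ref)
{
    int dead = PyWeakref_IsDead(ref);
    if (dead < 0) {
        return -1;               /* not a weakref object: exception set */
    }
    if (dead) {
        return 0;                /* referent is gone; no side effects incurred */
    }

    /* The pre-existing alternative: PyWeakref_GetRef() hands back a strong
     * reference that must be released, and that release can run arbitrary
     * code if it happens to drop the last reference. */
    PyObject *obj;
    if (PyWeakref_GetRef(ref, &obj) < 0) {
        return -1;
    }
    Py_XDECREF(obj);
    return 1;
}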
(#128021)

Convert unicodeobject.c macros to static inline functions.
* Add _PyUnicode_SET_UTF8() and _PyUnicode_SET_UTF8_LENGTH() macros.
* Add PyUnicode_HASH() and PyUnicode_SET_HASH() macros.
* Remove unused _PyUnicode_KIND() and _PyUnicode_GET_LENGTH() macros.

Remove 1 branch.

Co-authored-by: Sergey B Kirpichev <skirpichev@gmail.com>
Co-authored-by: Steve Dower <steve.dower@microsoft.com>
Co-authored-by: Bénédikt Tran <10796600+picnixz@users.noreply.github.com>

* Set a bit in the unused part of the refcount on 64-bit machines and in the free-threaded build.
* Use the top of the refcount range on 32-bit machines.
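A purely hypothetical sketch of the two checks; the actual bit position and threshold are not given in this entry:

#include "Python.h"

/* Hypothetical constants, for illustration only. */
#if SIZEOF_VOID_P > 4
/* 64-bit (and free-threaded) builds: immortality is a dedicated bit above
 * anything a real refcount reaches, so the test is a single AND. */
#  define DEMO_IMMORTAL_BIT     ((Py_ssize_t)1 << 40)
#  define DEMO_IS_IMMORTAL(op)  ((Py_REFCNT(op) & DEMO_IMMORTAL_BIT) != 0)
#else
/* 32-bit builds: immortal objects sit at the top of the refcount range, so
 * the test is a comparison against a threshold. */
#  define DEMO_IMMORTAL_MINIMUM ((Py_ssize_t)0x70000000)
#  define DEMO_IS_IMMORTAL(op)  (Py_REFCNT(op) >= DEMO_IMMORTAL_MINIMUM)
#endif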
* Use a small buffer, then a list, when constructing a tuple from an arbitrary sequence.
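A sketch of the idea (not the actual tupleobject.c code): buffer the first few items on the C stack and only fall back to a temporary list when the input turns out to be longer.

#include "Python.h"

#define DEMO_ONSTACK 16

static PyObject *
demo_tuple_from_iterable(PyObject *iterable)
{
    PyObject *stack[DEMO_ONSTACK];
    Py_ssize_t n = 0;
    PyObject *spill = NULL, *result = NULL, *item = NULL;

    PyObject *it = PyObject_GetIter(iterable);
    if (it == NULL) {
        return NULL;
    }
    while ((item = PyIter_Next(it)) != NULL) {
        if (spill == NULL && n < DEMO_ONSTACK) {
            stack[n++] = item;                  /* common case: small input */
            continue;
        }
        if (spill == NULL) {
            /* Overflow: switch to a list and move the buffered items over. */
            spill = PyList_New(0);
            if (spill == NULL) {
                goto done;
            }
            for (Py_ssize_t i = 0; i < n; i++) {
                if (PyList_Append(spill, stack[i]) < 0) {
                    goto done;
                }
                Py_CLEAR(stack[i]);
            }
            n = 0;
        }
        if (PyList_Append(spill, item) < 0) {
            goto done;
        }
        Py_CLEAR(item);
    }
    if (PyErr_Occurred()) {
        goto done;                              /* iteration failed */
    }
    if (spill != NULL) {
        result = PyList_AsTuple(spill);
    }
    else {
        result = PyTuple_New(n);
        for (Py_ssize_t i = 0; result != NULL && i < n; i++) {
            PyTuple_SET_ITEM(result, i, stack[i]);   /* steals the reference */
            stack[i] = NULL;
        }
    }
done:
    Py_XDECREF(item);
    for (Py_ssize_t i = 0; i < n; i++) {
        Py_XDECREF(stack[i]);
    }
    Py_XDECREF(spill);
    Py_DECREF(it);
    return result;
}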
message to ValueError: fromhex() arg must be of even length (#127756)

This fixes a UBSan failure (unaligned zero-size memcpy) in `dictobject.c`.
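The usual shape of this kind of fix (a sketch, not the actual dictobject.c change): skip the call entirely when there is nothing to copy, so a NULL or unaligned source pointer is never handed to memcpy with size 0.

#include <string.h>

static void
demo_copy_entries(void *dst, const void *src, size_t nbytes)
{
    if (nbytes > 0) {           /* UBSan flags memcpy with a bad pointer even for size 0 */
        memcpy(dst, src, nbytes);
    }
}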
(GH-127519)" (GH-127770)
Revert "GH-126491: Lower heap size limit with faster marking (GH-127519)"
This reverts commit 023b7d2141467017abc27de864f3f44677768cb3, which introduced
a refleak.

free-threaded build (#127315)
Co-authored-by: Victor Stinner <vstinner@python.org>

(GH-127566)

* Faster marking of reachable objects
* Changes calculation of work to do and work done.
* Merges transitive closure calculations

In some cases that were previously computed as (nan+nanj), we can now recover
meaningful component values in the result; see e.g. the C11 Annex G.5.1
routine _Cmultd():
>>> z = 1e300+1j
>>> z*(nan+infj) # was (nan+nanj)
(-inf+infj)
That also fixes some complex powers for small integer exponents, computed
with the optimized algorithm (by squaring):
>>> z**5 # was (nan+nanj)
Traceback (most recent call last):
File "<python-input-1>", line 1, in <module>
z**5
~^^~
OverflowError: complex exponentiation

Objects may be temporarily "resurrected" in destructors when calling
finalizers or watcher callbacks. We previously undid the resurrection
by decrementing the reference count using `Py_SET_REFCNT`. This was not
thread-safe because other threads might be accessing the object
(modifying its reference count) if it was exposed by the finalizer,
watcher callback, or temporarily accessed by a racy dictionary or list
access.
This adds internal-only thread-safe functions for temporary object
resurrection during destructors.
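A toy model of the idea in portable C11 atomics (not CPython code; the real helpers are internal): resurrect with an atomic increment before running finalizers, drop that reference atomically afterwards, and free only if nobody else took a reference in between.

#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_long refcnt;
} demo_obj;

/* Returns true if the object may be freed after its finalizers ran. */
static bool
demo_destruct(demo_obj *op)
{
    atomic_fetch_add_explicit(&op->refcnt, 1, memory_order_relaxed);
    /* ... run finalizers / watcher callbacks; they may take new references,
       and other threads may INCREF/DECREF concurrently ... */
    long prev = atomic_fetch_sub_explicit(&op->refcnt, 1, memory_order_release);
    return prev == 1;   /* count returned to zero: nobody resurrected it */
}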
We were missing locks around some list operations in the free threading
build.
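The typical shape of such a fix is to put the unlocked fast path inside the object's critical section (sketch; the exact call sites are in the change itself):

#include "Python.h"

/* Read an item while holding the list's per-object lock, so a concurrent
 * resize or delete cannot free the item array out from under us. */
static PyObject *
demo_list_get(PyObject *list, Py_ssize_t i)
{
    PyObject *item = NULL;
    Py_BEGIN_CRITICAL_SECTION(list);
    if (0 <= i && i < PyList_GET_SIZE(list)) {
        item = Py_NewRef(PyList_GET_ITEM(list, i));
    }
    Py_END_CRITICAL_SECTION();
    return item;   /* NULL here just means "index out of range" */
}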
(GH-123380)
Co-authored-by: Sergey B Kirpichev <skirpichev@gmail.com>

builds (#127123)
The CALL family of instructions was mostly thread-safe already and only
required a small number of changes, which are documented below.

A few changes were needed to make CALL_ALLOC_AND_ENTER_INIT thread-safe:
* Added _PyType_LookupRefAndVersion, which returns the type version
  corresponding to the returned ref.
* Added _PyType_CacheInitForSpecialization, which takes an init method and the
  corresponding type version and only populates the specialization cache if
  the current type version matches the supplied version. This prevents
  potentially caching a stale value in free-threaded builds if we race with an
  update to __init__ (sketch after this list).
* Only cache __init__ functions that are deferred in free-threaded builds.
  This ensures that the reference to __init__ that is stored in the
  specialization cache is valid if the type version guard in
  _CHECK_AND_ALLOCATE_OBJECT passes.
* Fix a bug in _CREATE_INIT_FRAME where the frame was pushed to the stack on
  failure.

A few other miscellaneous changes were also needed:
* Use {LOCK,UNLOCK}_OBJECT in LIST_APPEND. This ensures that the list's
  per-object lock is held while we are appending to it.
* Add the missing co_tlbc for _Py_InitCleanup.
* Stop/start the world around setting the eval frame hook. This allows us to
  read interp->eval_frame non-atomically and preserves the behavior of
  _CHECK_PEP_523 documented below.
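A sketch of the version-checked caching idea. The two helpers are named in the entry above, but their exact signatures are assumed here, and the internal headers (for `_Py_ID` and the heap type cast) are taken for granted:

/* Assumed signatures, for illustration only. */
static int
demo_try_cache_init(PyTypeObject *type)
{
    unsigned int version = 0;
    PyObject *init = _PyType_LookupRefAndVersion(type, &_Py_ID(__init__),
                                                 &version);
    int cached = (init != NULL
                  && _PyType_CacheInitForSpecialization(
                         (PyHeapTypeObject *)type, init, version));
    /* A nonzero result means the cache was written against a still-current
     * type version, so a racing update to __init__ cannot have left a stale
     * pointer behind; zero means fall back to the unspecialized path. */
    Py_XDECREF(init);
    return cached;
}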
* Replace uses of `PyCell_GET` and `PyCell_SET`. These macros are not
  safe to use in the free-threaded build. Use `PyCell_GetRef()` and
  `PyCell_SetTakeRef()` instead (usage sketch after this list).
* Since `PyCell_GetRef()` returns a strong rather than a borrowed ref, some
  code restructuring was required, e.g. `frame_get_var()` now returns a
  strong ref.
* Add critical sections to `PyCell_GET` and `PyCell_SET`.
* Move critical_section.h earlier in the Python.h file.
* Add `PyCell_GET` to the free-threading howto table of APIs that return
  borrowed refs.
* Add additional unit tests for free-threading.
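A usage sketch of the replacement APIs (error handling condensed): `PyCell_GetRef()` hands back a strong reference, so the value read stays valid even if another thread clears the cell immediately afterwards, and `PyCell_SetTakeRef()` steals the reference to the new value.

#include "Python.h"

static int
demo_swap_cell(PyObject *cell, PyObject *replacement)
{
    PyObject *old = PyCell_GetRef(cell);   /* strong ref; NULL if empty or on error */
    if (PyCell_SetTakeRef(cell, Py_NewRef(replacement)) < 0) {
        Py_XDECREF(old);
        return -1;
    }
    Py_XDECREF(old);
    return 0;
}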
Use existing helpers to atomically modify the bytecode. Add unit tests
to ensure specializing is happening as expected. Add test_specialize.py
that can be used with ThreadSanitizer to detect data races.
Fix thread safety issue with cell_set_contents().

In the free threading build, if a non-owning thread resizes a list,
it must use QSBR to free the old list array because there may be a
concurrent access (without a lock) from the owning thread.
To match the pattern in dictobject.c, we just mark the list as "shared"
before resizing if it's from a non-owning thread and not already marked
as shared.
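A sketch of that pattern (the ownership and `_PyObject_GC_*_SHARED` helpers are internal pycore APIs; the surrounding logic is illustrative):

#include "Python.h"
/* internal pycore headers assumed for the helpers below */

static void
demo_mark_shared_before_resize(PyListObject *self)
{
#ifdef Py_GIL_DISABLED
    if (!_Py_IsOwnedByCurrentThread((PyObject *)self)
        && !_PyObject_GC_IS_SHARED(self))
    {
        _PyObject_GC_SET_SHARED(self);
    }
#endif
    /* ... resize as usual; once marked shared, the old item array is
       reclaimed through QSBR instead of being freed immediately ... */
}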
Raise RuntimeError instead of RuntimeWarning.

(#127399)

"Generally, mixed-mode arithmetic combining real and complex variables should
be performed directly, not by first coercing the real to complex, lest the sign
of zero be rendered uninformative; the same goes for combinations of pure
imaginary quantities with complex variables." (Kahan, W., "Branch cuts for
complex elementary functions".)
This patch implements mixed-mode arithmetic rules, combining real and
complex variables as specified by C standards since C99 (in particular,
there is no special version for the true division with real lhs
operand). Most C compilers implementing C99+ Annex G have only these
special rules (without support for imaginary type, which is going to be
deprecated in C2y).
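A small C illustration of the signed-zero point from the quote (assuming a C11 toolchain with `CMPLX()` and Annex G semantics; this is not CPython code):

#include <complex.h>
#include <stdio.h>

/* Mixed-mode: the real operand scales each component directly, so the sign of
 * the zero imaginary part survives. Coercing the real operand to complex
 * first loses it, because -0.0 + 0.0 gives +0.0. */
int main(void)
{
    double x = 2.0;
    double complex z = CMPLX(3.0, -0.0);

    double complex direct  = x * z;                  /* mixed-mode multiply      */
    double complex coerced = CMPLX(x, 0.0) * z;      /* promote x, then multiply */

    printf("direct : %g%+gi\n", creal(direct), cimag(direct));    /* 6-0i */
    printf("coerced: %g%+gi\n", creal(coerced), cimag(coerced));  /* 6+0i */
    return 0;
}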