code unit (GH-28711)
|
|
|
|
|
I'm just removing an erroneous NEWS entry I previously merged.
Automerge-Triggered-By: GH:JulienPalard
|
pypi.org "The Python Package Index (PyPI) ...
|
when the underlying file is closed (GH-28457)
|
Add a private C API for deadlines: add _PyDeadline_Init() and
_PyDeadline_Get() functions.

* Add _PyTime_Add() and _PyTime_Mul() functions which compute t1+t2
and t1*t2 and clamp the result on overflow.
* _PyTime_MulDiv() now uses _PyTime_Add() and _PyTime_Mul().
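
For illustration, here is a minimal Python sketch of the deadline pattern the private C API implements (the helper names below are hypothetical; _PyDeadline_Init() and _PyDeadline_Get() exist only in C, and the clamping step is omitted because Python integers do not overflow):

    import time

    def deadline_init(timeout_ns):
        # Deadline = "now" on the monotonic clock plus the timeout.
        return time.monotonic_ns() + timeout_ns

    def deadline_get(deadline):
        # Remaining time in nanoseconds; <= 0 means the deadline has passed.
        return deadline - time.monotonic_ns()

    deadline = deadline_init(250_000_000)  # 250 ms
    while (remaining := deadline_get(deadline)) > 0:
        # Wait in small slices, but never sleep past the deadline.
        time.sleep(min(remaining, 50_000_000) / 1e9)
    print("deadline reached")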
|
If the DEBUG_STATS debug flag is set, gc_collect_main() now uses
_PyTime_GetPerfCounter() instead of _PyTime_GetMonotonicClock() to
measure the elapsed time.
On Windows, _PyTime_GetMonotonicClock() only has a resolution of 15.6
ms, whereas _PyTime_GetPerfCounter() is closer to a resolution of 100
ns.
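
The difference is visible from Python: time.get_clock_info() reports the resolution of the clocks backing these C helpers, and gc.DEBUG_STATS prints per-collection timings (a small demonstration, not the code touched by this change):

    import gc
    import time

    # Compare the advertised resolution of the monotonic clock and the
    # performance counter on this platform.
    for name in ("monotonic", "perf_counter"):
        info = time.get_clock_info(name)
        print(name, info.implementation, info.resolution)

    # With DEBUG_STATS set, each collection prints elapsed-time statistics.
    gc.set_debug(gc.DEBUG_STATS)
    gc.collect()
    gc.set_debug(0)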
|
WaitForSingleObject() accepts a timeout in milliseconds in the range
[0; 0xFFFFFFFE] (DWORD type). The INFINITE value (0xFFFFFFFF) means no
timeout. 0xFFFFFFFE milliseconds is around 49.7 days.

PY_TIMEOUT_MAX is (0xFFFFFFFE * 1000) microseconds on Windows, around
49.7 days.

Partially revert commit 37b8294d6295ca12553fd7c98778be71d24f4b24.
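
As a quick sanity check of those limits (plain arithmetic, not CPython code):

    INFINITE = 0xFFFFFFFF              # special "no timeout" value
    MAX_TIMEOUT_MS = 0xFFFFFFFE        # largest usable DWORD timeout, in milliseconds

    print(MAX_TIMEOUT_MS / (1000 * 60 * 60 * 24))   # ~49.71 days
    print(MAX_TIMEOUT_MS * 1000)                     # the same limit in microseconds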
|
(GH-28650)" (GH-28667)
This reverts commit b07fddd527efe67174ce6b0fdbe8dac390b16e4e.
|
Add a PID to names of POSIX shared memory objects to allow
running multiprocessing tests (test_multiprocessing_fork,
test_multiprocessing_spawn, etc) in parallel.
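
The naming idea looks roughly like the sketch below (a hypothetical helper, not the test-suite code): embedding os.getpid() in the object name keeps two concurrently running test processes from creating the same POSIX shared memory object.

    import os
    from multiprocessing import shared_memory

    def create_test_shm(tag, size):
        # Include the PID so parallel test runs never collide on the name.
        name = "test_shm_{}_{}".format(tag, os.getpid())
        return shared_memory.SharedMemory(name=name, create=True, size=size)

    shm = create_test_shm("demo", 64)
    try:
        shm.buf[:5] = b"hello"
        print(shm.name, bytes(shm.buf[:5]))
    finally:
        shm.close()
        shm.unlink()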
|
On Unix, if the sem_clockwait() function is available in the C
library (glibc 2.30 and newer), the threading.Lock.acquire() method
now uses the monotonic clock (time.CLOCK_MONOTONIC) for the timeout,
rather than the system clock (time.CLOCK_REALTIME), so that it is not
affected by system clock changes.

configure now checks whether the sem_clockwait() function is available.
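
The effect can be observed from Python: a timed acquire on an already-held lock takes roughly the requested timeout as measured on the monotonic clock, regardless of any system clock adjustment in between (small demonstration; which C-level wait function is used underneath depends on the platform and glibc version):

    import threading
    import time

    lock = threading.Lock()
    lock.acquire()                       # hold the lock so the next acquire must time out

    start = time.monotonic()
    acquired = lock.acquire(timeout=0.5)
    elapsed = time.monotonic() - start

    print(acquired, round(elapsed, 3))   # False, ~0.5
    lock.release()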
|
I broke some buildbots by not adding __phello__ to the list of installed packages.
https://bugs.python.org/issue45020
|
I've added a number of test-only modules. Some of those cases are covered by the recently frozen stdlib modules (and some will be once we add encodings back in). However, I figured we'd play it safe by having a set of modules guaranteed to be there during tests.
https://bugs.python.org/issue45020
|
* Work correctly if an additional fresh module imports another
additional fresh module which imports a blocked module.
* Raise ImportError if the specified module cannot be imported
while all additional fresh modules are successfully imported.
* Support blocking packages.
* Always restore the import state of fresh and blocked modules
and their submodules.
* Fix test_decimal and test_xml_etree, which depended on an undesired
side effect of import_fresh_module().
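
Typical usage, following the pattern used by tests such as test_decimal (import_fresh_module lives in test.support.import_helper on recent versions; older releases expose it from test.support directly):

    from test.support.import_helper import import_fresh_module

    # Pure-Python copy: block the C accelerator and re-import "decimal" fresh.
    py_decimal = import_fresh_module("decimal", blocked=["_decimal"])

    # C-accelerated copy: re-import both modules fresh instead of reusing sys.modules.
    c_decimal = import_fresh_module("decimal", fresh=["_decimal"])

    print(py_decimal is not c_decimal)   # True: two independent module objects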
|
This reverts commit d441437ee71ae174c008c23308b749b91020ba77.
|
PyThread_acquire_lock_timed() now clamps the timeout into the
[_PyTime_MIN; _PyTime_MAX] range (_PyTime_t type) if it is too large,
rather than calling Py_FatalError(), which aborts the process.

PyThread_acquire_lock_timed() no longer uses
MICROSECONDS_TO_TIMESPEC() to compute the sem_timedwait() argument, but
_PyTime_GetSystemClock() and _PyTime_AsTimespec_truncate().

Fix the _thread.TIMEOUT_MAX value on Windows: the maximum timeout is
0x7FFFFFFF milliseconds (around 24.9 days), not 0xFFFFFFFF
milliseconds (around 49.7 days).

Set PY_TIMEOUT_MAX to 0x7FFFFFFF milliseconds, rather than 0xFFFFFFFF
milliseconds.

Fix the PY_TIMEOUT_MAX overflow test: replace (us >= PY_TIMEOUT_MAX) with
(us > PY_TIMEOUT_MAX).
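
At the Python level the cap is exposed in seconds as _thread.TIMEOUT_MAX (also re-exported by threading); the two Windows limits mentioned above convert to days as follows (plain arithmetic, not CPython code):

    import _thread

    print(_thread.TIMEOUT_MAX)           # maximum lock timeout, in seconds

    for ms in (0x7FFFFFFF, 0xFFFFFFFF):
        print(hex(ms), round(ms / (1000 * 60 * 60 * 24), 1), "days")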
|
Add pytime_add() and pytime_mul() functions to pytime.c to compute
t+t2 and t*k with clamping to [_PyTime_MIN; _PyTime_MAX].
Fix pytime.h: _PyTime_FromTimeval() is not implemented on Windows.
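
Since Python integers never overflow, a sketch of the saturating behaviour needs explicit 64-bit bounds; the function names below mirror the C helpers but are purely illustrative:

    PYTIME_MIN = -(2**63)        # assumed bounds of the 64-bit _PyTime_t type
    PYTIME_MAX = 2**63 - 1

    def pytime_add(t1, t2):
        # Clamp instead of overflowing.
        return min(max(t1 + t2, PYTIME_MIN), PYTIME_MAX)

    def pytime_mul(t, k):
        return min(max(t * k, PYTIME_MIN), PYTIME_MAX)

    print(pytime_add(PYTIME_MAX, 1) == PYTIME_MAX)   # True: clamped, no overflow
    print(pytime_mul(PYTIME_MIN, 2) == PYTIME_MIN)   # True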
|
Add the _PyTime_AsTimespec_clamp() function: similar to
_PyTime_AsTimespec(), but clamp to _PyTime_t min/max and don't raise
an exception.

PyThread_acquire_lock_timed() now uses _PyTime_AsTimespec_clamp() to
remove the Py_UNREACHABLE() code path.

* Add _PyTime_AsTime_t() function.
* Add PY_TIME_T_MIN and PY_TIME_T_MAX constants.
* Replace _PyTime_AsTimeval_noraise() with _PyTime_AsTimeval_clamp().
* Add pytime_divide_round_up() function.
* Fix integer overflow in pytime_divide().
* Add pytime_divmod() function.
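
The conversion those timespec helpers perform is essentially a floored divmod of a nanosecond count into whole seconds plus the remaining nanoseconds; a minimal sketch (not the C implementation, which additionally clamps to the time_t range):

    SEC_TO_NS = 10**9

    def as_timespec(ns):
        # A timespec is (tv_sec, tv_nsec) with 0 <= tv_nsec < 10**9.
        tv_sec, tv_nsec = divmod(ns, SEC_TO_NS)
        return tv_sec, tv_nsec

    print(as_timespec(1_500_000_000))   # (1, 500000000)
    print(as_timespec(-1))              # (-1, 999999999): floored division keeps tv_nsec >= 0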
|
Currently we're freezing the __init__.py twice, duplicating the built data unnecessarily. With this change we do it once. There is no change in runtime behavior.
https://bugs.python.org/issue45020
|
Removed an extra comma in the comment that describes the state of a `Barrier`, as it was confusing and broke the flow while reading.
Co-authored-by: Priyank <5903604+cpriyank@users.noreply.github.com>
|
* Fix doctest doc examples for syntax errors
* Update examples to use TypeError
* Fix the first sentence
* Remove an unneeded comma
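
A minimal example of the kind of doctest the documentation now favours, built around a TypeError rather than a syntax error (an illustrative snippet, not the actual docs text):

    def halve(x):
        """Return x divided by 2.

        >>> halve(6)
        3.0
        >>> halve(None)
        Traceback (most recent call last):
            ...
        TypeError: unsupported operand type(s) for /: 'NoneType' and 'int'
        """
        return x / 2

    if __name__ == "__main__":
        import doctest
        doctest.testmod(verbose=True)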
|
Add reprs for Semaphore, BoundedSemaphore, Event, and Barrier.
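
The exact repr format is an implementation detail, but the objects now show useful state instead of the bare default repr; for example:

    import threading

    sem = threading.Semaphore(3)
    event = threading.Event()
    event.set()
    barrier = threading.Barrier(2)

    for obj in (sem, threading.BoundedSemaphore(2), event, barrier):
        print(repr(obj))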
|
* during tarfile parsing, a zlib error indicates invalid data
* tarfile.open now raises a descriptive exception from the zlib error
* this makes it clear to the user that they may be trying to open a
corrupted tar file
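
An illustrative reproduction, assuming a gzip-compressed archive whose compressed stream has been corrupted (the byte offset chosen for the corruption is arbitrary):

    import io
    import tarfile
    import zlib

    # Build a small .tar.gz in memory, then corrupt the compressed stream.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        payload = b"hello world\n" * 4096
        info = tarfile.TarInfo(name="hello.txt")
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

    blob = bytearray(buf.getvalue())
    blob[12] ^= 0xFF    # flip a byte just past the 10-byte gzip header

    try:
        with tarfile.open(fileobj=io.BytesIO(bytes(blob)), mode="r:gz") as tar:
            tar.getmembers()
    except tarfile.ReadError as exc:
        # With this change the low-level error is wrapped in a descriptive ReadError.
        print("corrupted archive:", exc, "| cause:", repr(exc.__cause__))
    except (OSError, zlib.error) as exc:
        # On versions without the change the raw gzip/zlib error may escape instead.
        print("low-level error:", type(exc).__name__, exc)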
|
Use "second", "millisecond", "microsecond", "nanosecond" instead of
"sec", "ms", "msec", "us", "ns", etc.
|
During runtime startup we figure out the stdlib dir but currently throw that information away. This change preserves it and exposes it via PyConfig.stdlib_dir, _Py_GetStdlibDir(), and sys._stdlib_dir.
https://bugs.python.org/issue45211
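
From Python code the new attribute is private, so a defensive lookup with a sysconfig fallback is the safer pattern (sketch; sys._stdlib_dir only exists on interpreters that include this change):

    import sys
    import sysconfig

    stdlib_dir = getattr(sys, "_stdlib_dir", None) or sysconfig.get_path("stdlib")
    print(stdlib_dir)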
|
IDLE recognizes Ctrl-D, as on other systems, instead of Ctrl-Z.
|
Automerge-Triggered-By: GH:pablogsal
|
|
|
"A JSONDecodeError" instead of "An JSONDecodeError".