Commit messages
(GH-109780) (GH-111934)
Follow-up of gh-107219.
* Only close the connection writer on Windows.
* Also use the existing constant _winapi.ERROR_OPERATION_ABORTED instead of
  WSA_OPERATION_ABORTED.
(cherry picked from commit 0b4e090422db5f959184353d53552d1675f74212)
gh-109047: concurrent.futures catches PythonFinalizationError (#109810)
concurrent.futures: The *executor manager thread* now catches
exceptions when adding an item to the *call queue*. During Python
finalization, creating a new thread can now raise RuntimeError. Catch
the exception and call terminate_broken() in this case.
Add test_python_finalization_error() to test_concurrent_futures.
concurrent.futures._ExecutorManagerThread changes:
* terminate_broken() no longer calls shutdown_workers() since the
  call queue is no longer working (the read and write ends of
  the queue pipe are closed).
* terminate_broken() now terminates child processes instead of
  only waiting for them to complete.
* _ExecutorManagerThread.terminate_broken() now holds shutdown_lock
  to prevent race conditions with ProcessPoolExecutor.submit().
multiprocessing.Queue changes:
* Add _terminate_broken() method.
* _start_thread() sets _thread to None on exception to prevent
leaking "dangling threads" even if the thread was not started
yet.
(cherry picked from commit 635184212179b0511768ea1cd57256e134ba2d75)
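A minimal sketch (an illustration, not CPython's actual code) of the failure mode this change handles: starting a thread while the interpreter is finalizing raises RuntimeError, and the executor must then be treated as broken.

```python
# Illustrative only: what the executor manager thread now guards against.
import threading


def try_start_worker(thread: threading.Thread) -> bool:
    """Start a worker thread, reporting failure instead of propagating it."""
    try:
        thread.start()
    except RuntimeError:
        # The interpreter is finalizing (or thread creation failed);
        # the caller should tear the executor down, as terminate_broken() does.
        return False
    return True
```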
(GH-110037) (#110064)
gh-110036: multiprocessing Popen.terminate() catches PermissionError (GH-110037)
On Windows, multiprocessing Popen.terminate() now catches
PermissionError and gets the process exit code. If the process is
still running, the PermissionError is re-raised. Otherwise, the
process terminated as expected and its exit code is stored.
(cherry picked from commit bd4518c60c9df356cf5e05b81305e3644ebb5e70)
Co-authored-by: Victor Stinner <vstinner@python.org>
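An illustrative sketch of the described logic, using subprocess.Popen as a stand-in for multiprocessing's Windows Popen wrapper (the helper name is an assumption, not the patched code):

```python
# Sketch: terminate a process, tolerating the Windows case where it has
# already exited and TerminateProcess() is denied.
import subprocess


def terminate_quietly(proc: subprocess.Popen) -> None:
    try:
        proc.terminate()
    except PermissionError:
        # On Windows this can mean the process already exited; poll()
        # stores the exit code in that case.
        if proc.poll() is None:
            raise  # genuinely not permitted: the process is still running
```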
(GH-109629) (#109898)
gh-109593: Fix reentrancy issue in multiprocessing resource_tracker (GH-109629)
---------
(cherry picked from commit 0eb98837b60bc58e57ad3e2b35c6b0e9ab634678)
Co-authored-by: Antoine Pitrou <antoine@python.org>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
(#109254)
gh-107219: Fix concurrent.futures terminate_broken() (GH-109244)
Fix a race condition in concurrent.futures. When a process in the
process pool was terminated abruptly (while the future was running or
pending), close the connection write end. If the call queue is
blocked on sending bytes to a worker process, closing the connection
write end interrupts the send, so the queue can be closed.
Changes:
* _ExecutorManagerThread.terminate_broken() now closes
call_queue._writer.
* multiprocessing PipeConnection.close() now interrupts
WaitForMultipleObjects() in _send_bytes() by cancelling the
overlapped operation.
(cherry picked from commit a9b1f84790e977fb09f75b148c4c4f5924a6ef99)
Co-authored-by: Victor Stinner <vstinner@python.org>
(GH-108568) (#108691)
gh-108520: Fix bad fork detection in nested multiprocessing use case (GH-108568)
gh-107275 introduced a regression where a SemLock could not be passed through nested child processes, as the `is_fork_ctx` attribute would be missing after the first deserialization.
---------
(cherry picked from commit add8d45cbe46581b9748909fbbf60fdc8ee8f71e)
Co-authored-by: albanD <desmaison.alban@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <pitrou@free.fr>
Process before serializing it (GH-107275) (#108377)
gh-77377: Ensure multiprocessing SemLock is valid for spawn-based Process before serializing it (GH-107275)
Ensure multiprocessing SemLock is valid for spawn Process before serializing it.
Creating a multiprocessing SemLock with a fork context, and then trying to pass it to a spawn-created Process, would segfault if not detected early.
---------
(cherry picked from commit 1700d34d314f5304a7a75363bda295a8c15c371f)
Co-authored-by: albanD <desmaison.alban@gmail.com>
Co-authored-by: blurb-it[bot] <43283697+blurb-it[bot]@users.noreply.github.com>
Co-authored-by: Antoine Pitrou <pitrou@free.fr>
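A reproducer sketch of the misuse that is now detected early; it assumes a POSIX platform where both the "fork" and "spawn" contexts are available, and the exact exception type is not specified here:

```python
# Mixing start methods: a SemLock created under "fork" passed to a
# "spawn" process, which is now rejected up front instead of segfaulting.
import multiprocessing as mp


def worker(lock):
    with lock:
        pass


if __name__ == "__main__":
    fork_ctx = mp.get_context("fork")
    spawn_ctx = mp.get_context("spawn")
    lock = fork_ctx.Lock()                           # SemLock bound to the fork context
    p = spawn_ctx.Process(target=worker, args=(lock,))
    p.start()                                        # now raises early instead of crashing later
    p.join()
```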
(GH-102203) (#107990)
More actionable error message when spawn is incorrectly used. (GH-102203)
(cherry picked from commit a794ebeb028f7ef287c780d3890f816db9c21c51)
Co-authored-by: Yuxin Wu <ppwwyyxxc@gmail.com>
Co-authored-by: Yuxin Wu <ppwwyyxx@users.noreply.github.com>
Co-authored-by: Oleg Iarygin <oleg@arhadthedev.net>
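A sketch of the common mistake the clearer error message targets: with the "spawn" start method, process creation must sit behind the `__main__` guard (illustrative only):

```python
# Correct pattern under "spawn"; omitting the __main__ guard re-imports the
# module in each child and triggers the improved error message.
import multiprocessing as mp


def work(x):
    return x * x


if __name__ == "__main__":                 # required with the "spawn" start method
    mp.set_start_method("spawn", force=True)
    with mp.Pool(2) as pool:
        print(pool.map(work, range(4)))
```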
(GH-107965) (#107975)
gh-107963: Fix set_forkserver_preload to check the type of given list (GH-107965)
(cherry picked from commit 6515ec3d3d5acd3d0b99c88794bdec09f0831e5b)
gh-107963: Fix set_forkserver_preload to check the type of given list
Co-authored-by: Dong-hee Na <donghee.na@python.org>
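A usage sketch, assuming a POSIX platform where the forkserver start method is available; the preload argument must be a list of module-name strings:

```python
# set_forkserver_preload() expects a list of module names to import in the
# forkserver process; other argument types are now rejected.
import multiprocessing as mp

if __name__ == "__main__":
    mp.set_start_method("forkserver")
    mp.set_forkserver_preload(["json", "decimal"])   # OK: list of str
    # mp.set_forkserver_preload("json")              # rejected after this fix
```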
`sys.executable` is `None` (GH-106464) (#106494)
gh-90876: Restore the ability to import multiprocessing when `sys.executable` is `None` (GH-106464)
Prevent `multiprocessing.spawn` from failing to *import* in environments
where `sys.executable` is `None`. This regressed in 3.11 with the addition
of support for path-like objects in multiprocessing.
Adds a test decorator to `_test_multiprocessing.py` that makes tests run only as part of test_multiprocessing_spawn, so we can start to avoid re-running tests that do not depend on start-method-specific global state in all three modes when there is no need.
(cherry picked from commit c60df361ce2d734148d503f4a711e67c110fe223)
Co-authored-by: Gregory P. Smith <greg@krypto.org>
Fix a race condition in the internal `multiprocessing.process` cleanup
logic that could manifest as an unintended `AttributeError` when calling
`BaseProcess.close()`.
---------
Co-authored-by: Oleg Iarygin <oleg@arhadthedev.net>
Co-authored-by: Gregory P. Smith <greg@krypto.org>
bpo-17258: `multiprocessing` now supports stronger HMAC algorithms for inter-process connection authentication rather than only HMAC-MD5.
Signed-off-by: Christian Heimes <christian@python.org>
gpshead: I reworked this to be more robust while keeping the idea.
The protocol modification idea remains, but we now take advantage of the
message length as an indicator of legacy vs modern protocol version. No
more regular expression usage. We now default to HMAC-SHA256, but do so
in a way that will be compatible when communicating with older clients
or older servers. No protocol transition period is needed.
More integration tests are required to verify that these claims remain true. I'm
unaware of anyone depending on multiprocessing connections between
different Python versions.
---------
Signed-off-by: Christian Heimes <christian@python.org>
Co-authored-by: Gregory P. Smith [Google] <greg@krypto.org>
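A conceptual sketch of challenge/response authentication with a stronger digest, in the spirit of this change; it is not the actual multiprocessing wire protocol:

```python
# Challenge/response with HMAC-SHA256 instead of HMAC-MD5 (conceptual only).
import hmac
import os

secret = b"shared authkey"
challenge = os.urandom(32)

# The answering side computes the digest over the server's challenge.
answer = hmac.new(secret, challenge, digestmod="sha256").digest()

# The verifying side recomputes it and compares in constant time.
expected = hmac.new(secret, challenge, digestmod="sha256").digest()
assert hmac.compare_digest(answer, expected)
```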
This reverts the core of #100618 while leaving relevant documentation
improvements and minor refactorings in place.
This starts the process. Users who don't specify their own start method
and use the default on platforms where it is 'fork' will see a
DeprecationWarning upon multiprocessing.Pool() construction or upon
multiprocessing.Process.start() or concurrent.futures.ProcessPoolExecutor use.
See the related issue and documentation within this change for details.
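How user code sidesteps the warning: choose a start method explicitly rather than relying on the platform default (illustrative sketch):

```python
# Select a start method explicitly instead of using the implicit default.
import multiprocessing as mp

if __name__ == "__main__":
    ctx = mp.get_context("spawn")      # or "forkserver"/"fork", stated explicitly
    with ctx.Pool(2) as pool:
        print(pool.map(abs, [-1, -2, -3]))
```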
In multiprocessing.shared_memory.SharedMemory(), the temporary view
returned by MapViewOfFile() should be unmapped when it is no longer
needed.
(gh-99623)
Describe the multiprocessing connection protocol.
It isn't a good protocol, but it is what it is. This way we can more
easily reason about making changes to it in a backwards compatible way.
Linux abstract sockets are insecure as they lack any form of filesystem
permissions, so their use allows anyone on the system to inject code into
the process.
This removes the default preference for abstract sockets in
multiprocessing introduced in Python 3.9+ via
https://github.com/python/cpython/pull/18866 while fixing
https://github.com/python/cpython/issues/84031.
Explicit use of an abstract socket by a user now generates a
RuntimeWarning. If we choose to keep this warning, it should be
backported to the 3.7 and 3.8 branches.
argv[0] in virtual environments (GH-98462)
Remove unused local variables.
512 (#96890)
Co-authored-by: Kumar Aditya <59607654+kumaraditya303@users.noreply.github.com>
is not None or a positive int (GH-93364)
Closes #83658.
SharedMemory.unlink() uses the unregister() function from resource_tracker. Previously it was imported in the method, but this can fail if the method is called during interpreter shutdown, for example when unlink is part of a __del__() method.
Moving the import to the top of the file means that the unregister() function is available during interpreter shutdown.
The register call in SharedMemory.__init__() can also use this imported resource_tracker.
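A typical lifecycle that exercises unlink(); the fix only moves the resource_tracker import to module level so the same call also works from a __del__() during interpreter shutdown (sketch):

```python
# Create, use, and unlink a shared memory segment.
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=64)
try:
    shm.buf[:5] = b"hello"
finally:
    shm.close()
    shm.unlink()        # unregisters the segment with the resource tracker
```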
(#30617)
Co-authored-by: XD Trol <milestonejxd@gmail.com>
Co-authored-by: Antoine Pitrou <pitrou@free.fr>
One more thing that can help prevent people from using `preexec_fn`.
Also adds conditional skips to two tests exposing ASAN flakiness on the Ubuntu 20.04 Address Sanitizer GitHub CI system. When that build is run on more modern systems the "problem" does not show up. It seems to be ASAN implementation related.
Co-authored-by: Zackery Spytz <zspytz@gmail.com>
Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
collection and explicit close (#31913)
Just in case there is ever an issue with _posixsubprocess's use of
vfork(), given the complexity of using it properly and the potential
directions that Linux platforms defaulting to it could take, this
adds a failsafe so that users can disable its use entirely by setting
a global flag.
No known reason to disable it exists. But it'd be a shame to encounter
one and not be able to use CPython without patching and rebuilding it.
See the linked issue for some discussion on reasoning.
Also documents the existing way to disable posix_spawn.
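A usage sketch of the failsafe; the private flag name subprocess._USE_VFORK is an assumption based on this description, not something confirmed by the message above:

```python
# Disable the vfork() fast path before launching any children
# (flag name assumed; it is a private, undocumented knob).
import subprocess
import sys

subprocess._USE_VFORK = False                     # fall back to fork()+exec()
subprocess.run([sys.executable, "-c", "pass"], check=True)
```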
multiprocessing.set_executable for Windows (GH-31279)
This brings the API on par with Unix-like systems.
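A usage sketch of multiprocessing.set_executable(), the API this change brings to parity on Windows; the interpreter path shown is illustrative:

```python
# Point worker processes at an explicit interpreter before spawning.
import multiprocessing as mp
import sys

if __name__ == "__main__":
    mp.set_executable(sys.executable)
    mp.set_start_method("spawn")
```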
Add an optional keyword 'shutdown_timeout' parameter to the
multiprocessing.BaseManager constructor. Kill the process if
terminate() takes longer than the timeout.
Multiprocessing tests pass test.support.SHORT_TIMEOUT
to BaseManager.shutdown_timeout.
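A usage sketch of the new keyword parameter; the timeout value is illustrative:

```python
# Pass shutdown_timeout so a stuck manager process is killed on shutdown.
from multiprocessing.managers import BaseManager


class MyManager(BaseManager):
    pass


if __name__ == "__main__":
    mgr = MyManager(shutdown_timeout=5.0)   # kill the process if terminate()
    mgr.start()                             # takes longer than 5 seconds
    try:
        pass                                # ... use the manager ...
    finally:
        mgr.shutdown()
```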
Co-authored-by: Pablo Galindo Salgado <Pablogsal@gmail.com>
This was causing test___all__ to fail on platforms lacking a shared
memory implementation.
Co-Authored-By: Xavier de Gaye <xdegaye@gmail.com>
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
Co-authored-by: Jordan Speicher <jordan@jspeicher.com>
Co-authored-by: Terry Jan Reedy <tjreedy@udel.edu>
The multiprocessing Server class now explicitly catches SystemExit and
closes the client connection in this case. This happens when the
Server.serve_client() method reaches the end of file (EOF).
In the case of multiprocessing.synchronize being missing, the
test_concurrent_futures test suite now skips only the tests that
require multiprocessing.synchronize.
Validate that multiprocessing.synchronize exists as part of
_check_system_limits(), allowing ProcessPoolExecutor to raise
NotImplementedError during __init__, rather than crashing with
ImportError during __init__ when creating a lock imported from
multiprocessing.synchronize.
Use _check_system_limits() to disable tests of
ProcessPoolExecutor on systems without multiprocessing.synchronize.
Running the test suite without multiprocessing.synchronize reveals
that Lib/compileall.py crashes when it uses a ProcessPoolExecutor.
Therefore, change Lib/compileall.py to call _check_system_limits()
before creating the ProcessPoolExecutor.
Note that both Lib/compileall.py and Lib/test/test_compileall.py
were attempting to sanity-check ProcessPoolExecutor by expecting
ImportError. In multiprocessing.resource_tracker, sem_unlink() is also absent
on platforms where POSIX semaphores aren't available. Avoid using
sem_unlink() if it, too, does not exist.
Co-authored-by: Pablo Galindo <Pablogsal@gmail.com>
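A pattern sketch of the resulting behaviour: ProcessPoolExecutor now raises NotImplementedError from __init__ on unsupported platforms, so callers such as compileall can probe for support instead of catching ImportError; the helper name below is hypothetical:

```python
# Probe whether this platform can actually run a ProcessPoolExecutor.
from concurrent.futures import ProcessPoolExecutor


def make_executor(max_workers):
    try:
        return ProcessPoolExecutor(max_workers)
    except NotImplementedError:
        # Missing multiprocessing.synchronize / POSIX semaphores.
        return None
```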
The error message was missing a space between the action "acquire" and "_wait_semaphore", which is an attribute of Condition instances.
Add a new close() method to multiprocessing.SimpleQueue to explicitly
close the queue.
Automerge-Triggered-By: @pitrou
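A usage sketch of the new close() method:

```python
# Explicitly close a SimpleQueue once it is no longer needed.
import multiprocessing as mp

if __name__ == "__main__":
    q = mp.SimpleQueue()
    q.put("item")
    print(q.get())
    q.close()        # releases the queue's underlying pipe ends
```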
The item size must be checked after encoding to bytes, not before.
Automerge-Triggered-By: @pitrou
Avoid linear runtime of ShareableList.__getitem__ and
ShareableList.__setitem__ by storing the running total of allocated bytes in
ShareableList._allocated_bytes instead of the number of bytes allocated for
a particular stored item.
Co-authored-by: Antoine Pitrou <antoine@python.org>
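A usage sketch of the ShareableList indexing that this change makes constant-time:

```python
# Indexing a ShareableList no longer rescans the preceding items.
from multiprocessing import shared_memory

sl = shared_memory.ShareableList(["a", 1, 2.5])
sl[1] = 42                  # __setitem__ is now O(1)
print(sl[2])                # __getitem__ is now O(1)
sl.shm.close()
sl.shm.unlink()
```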