Commit log (commit messages only; the author, age, and file-change columns are not shown).
(GH-11801)
Keeping references to processes and managers between tests makes them count as dangling processes.
multiprocessing: provide unittests for manager classes and shareable types
multiprocessing.Pool destructor now emits ResourceWarning
if the pool is still running.
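A minimal sketch of the behaviour this change targets (the exact warning text may differ; the warning is easiest to see with "python -X dev" or -W error):

    import multiprocessing

    def leaky():
        pool = multiprocessing.Pool(2)
        pool.map(abs, range(4))
        # Dropping the last reference while the pool is still running now
        # triggers a ResourceWarning from the destructor.
        del pool

    def clean():
        # Terminating (via the context manager) and joining avoids the warning.
        with multiprocessing.Pool(2) as pool:
            pool.map(abs, range(4))
        pool.join()

    if __name__ == "__main__":
        clean()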
Replace time.time() with time.monotonic() in tests to measure time
delta.
test_zipfile64: display progress every minute (60 secs) rather than
every 5 minutes (5*60 seconds).
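The pattern being switched to, as a small illustration: time.monotonic() is not affected by system clock adjustments, so measured deltas cannot go backwards.

    import time

    start = time.monotonic()     # immune to system clock updates
    time.sleep(0.1)              # stand-in for the work being measured
    delta = time.monotonic() - start
    print(f"took {delta:.3f} seconds")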
multiprocessing.Pool.__enter__() now fails if the pool is not
running: "with pool:" fails if used more than once.
Join 3 pools in these tests:
* test.test_multiprocessing_spawn.WithProcessesTestPool.test_context
* test.test_multiprocessing_spawn.WithProcessesTestPool.test_traceback
(GH-8450)" (GH-10971)
This reverts commit 97bfe8d3ebb0a54c8798f57555cb4152f9b2e1d0.
Fix WithThreadsTestPool.test_wrapped_exception()
of test_multiprocessing_fork: join the pool.
WithThreadsTestPool.test_del_pool() is now also decorated
with @support.reap_threads.
the queue is closed. (GH-9010)
Previously, put() and get() would raise AssertionError and OSError,
respectively.
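As an illustration of the fixed behaviour (current multiprocessing.Queue raises ValueError for both operations once the queue is closed):

    import multiprocessing

    if __name__ == "__main__":
        q = multiprocessing.Queue()
        q.close()
        try:
            q.put(1)          # used to raise AssertionError
        except ValueError as exc:
            print(exc)
        try:
            q.get()           # used to raise OSError
        except ValueError as exc:
            print(exc)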
Tests that send signals to the semaphore_tracker no longer fail when the
test suite is run with -Werror, which turns warnings into errors.
Fix a missing assertion when the semaphore_tracker is expected to die.
Fix a reference issue inside multiprocessing.Pool that caused the pool to remain alive if it was deleted without being closed or terminated explicitly.
Support for threadless builds was removed in a6a4dc81.
Fail `test_semaphore_tracker_sigint` if no warnings are expected and one is received.
Fix race condition when the child receives SIGINT before it can register signal handlers for it.
The race condition occurs when the parent calls
`_semaphore_tracker.ensure_running()` (which in turn spawns the
semaphore_tracker using `_posixsubprocess.fork_exec`), the child
registers the signal handlers and the parent tries to kill the child.
What seems to happen is that on some slow systems, the parent sends the
signal to kill the child before the child has protected itself against it.
Multiprocessing test_timeout() now accepts a delta of 100 ms instead
of just 50 ms, since the test failed with 135.8 ms instead of the
expected 200 ms.
Fix test_forkserver_sigkill() of test_multiprocessing_forkserver:
give the first child process more time to complete by doubling the
sleep in the parent process.
Also reduce the child process sleep from 1000 ms to 500 ms, so that the
total duration of the test does not change.
When hunting memory leaks using -R 3:3, test_imap_unordered() of
test_multiprocessing randomly leaks a few memory blocks. It is a
false alarm: when testing with -R 3:20, for example, no leak is
detected.
Modify test_imap_unordered() to be closer to test_imap():
* Only test 10 numbers instead of 1000: the pool has 4 processes, so
10 is enough to test at least one number per process
* Use chunksize=100 instead of chunksize=53 to mimic test_imap()
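For context, a minimal version of what the test exercises; with a chunksize larger than the input, all 10 items go to a single worker, and imap_unordered() may yield results in any order:

    from multiprocessing import Pool

    def sqr(x):
        return x * x

    if __name__ == "__main__":
        with Pool(4) as pool:
            # chunksize controls how many items are handed to a worker at once.
            for value in pool.imap_unordered(sqr, range(10), chunksize=100):
                print(value)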
Increase timeouts from 10 seconds to 1 minute.
test_mymanager_context() now also accepts -SIGTERM as an expected
exitcode for the manager process. The process is killed with SIGTERM
if it takes longer than 1 second to stop.
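The exitcode convention the test relies on: a negative exitcode means the process was killed by that signal number. A small POSIX-only sketch:

    import time
    from multiprocessing import Process

    if __name__ == "__main__":
        p = Process(target=time.sleep, args=(60,))
        p.start()
        p.terminate()        # sends SIGTERM on POSIX
        p.join()
        # A negative exitcode means "killed by that signal number",
        # so this usually prints -15 (SIGTERM) on POSIX systems.
        print(p.exitcode)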
Tolerate a difference of 50 ms, instead of just 30 ms, in
test_timeout() of the multiprocessing tests. This change should fix the
following test failure on Windows:
FAIL: test_timeout (test.test_multiprocessing_spawn.WithProcessesTestQueue)
Traceback (most recent call last):
File "lib\test\_test_multiprocessing.py", line 753, in test_timeout
self.assertGreaterEqual(delta, 0.170)
AssertionError: 0.16138982772827148 not greater than or equal to 0.17
This reverts commit 8fbbdf0c3107c3052659e166f73990b466eacbb0.
* Add support.MS_WINDOWS: True if Python is running on Microsoft Windows.
* Add support.MACOS: True if Python is running on Apple macOS.
* Replace support.is_android with support.ANDROID
* Replace support.is_jython with support.JYTHON
* Cleanup code to initialize unix_shell
Use also support.SOCK_MAX_SIZE, not only support.PIPE_MAX_SIZE, to
get the size for a blocking send into a multiprocessing pipe.
Fix test_ignore() of multiprocessing tests like
test_multiprocessing_forkserver: use support.PIPE_MAX_SIZE to make
sure that send_bytes() blocks.
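Roughly what the test relies on: support.PIPE_MAX_SIZE is chosen to be larger than any OS pipe buffer, so a send_bytes() of that size cannot complete until the other end reads. A hedged sketch:

    from multiprocessing import Pipe, Process
    from test import support

    def drain(conn):
        conn.recv_bytes()    # consume the payload so the sender can finish
        conn.close()

    if __name__ == "__main__":
        sender, receiver = Pipe()
        reader = Process(target=drain, args=(receiver,))
        reader.start()
        # Larger than the OS pipe buffer, so this blocks until drain() reads.
        sender.send_bytes(b"x" * support.PIPE_MAX_SIZE)
        reader.join()
        sender.close()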
Large shared arrays allocated using multiprocessing would remain allocated
until the process ends.
Under some conditions the standard streams will be None or closed in the child process (for example when using "pythonw" instead of "python" on Windows). Avoid failing with a non-zero exit code in those conditions.
Report and initial patch by poxthegreat.
Fix socket(fileno=fd) by auto-detecting the socket's family, type,
and proto from the file descriptor. The auto-detection can be overruled
by passing in family, type, and proto explicitly.
Without the fix, all sockets except TCP/IP over IPv4 were basically broken:
>>> s = socket.create_connection(('www.python.org', 443))
>>> s
<socket.socket fd=3, family=AddressFamily.AF_INET6, type=SocketKind.SOCK_STREAM, proto=6, laddr=('2003:58:bc4a:3b00:56ee:75ff:fe47:ca7b', 59730, 0, 0), raddr=('2a04:4e42:1b::223', 443, 0, 0)>
>>> socket.socket(fileno=s.fileno())
<socket.socket fd=3, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0, laddr=('2003:58:bc4a:3b00::%2550471192', 59730, 0, 2550471192), raddr=('2a04:4e42:1b:0:700c:e70b:ff7f:0%2550471192', 443, 0, 2550471192)>
Signed-off-by: Christian Heimes <christian@python.org>
pickling error (#3895)
Fix deadlocks in :class:`concurrent.futures.ProcessPoolExecutor` when task arguments or results cause pickling or unpickling errors.
This should make sure that calls to the :class:`ProcessPoolExecutor` API always eventually return.
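A hedged sketch of the failure mode being fixed: an argument that cannot be pickled used to wedge the executor, whereas now the error propagates back to the caller (the exact exception type may vary):

    import concurrent.futures

    class Unpicklable:
        def __reduce__(self):
            raise RuntimeError("refusing to be pickled")

    def echo(value):
        return value

    if __name__ == "__main__":
        with concurrent.futures.ProcessPoolExecutor(max_workers=1) as executor:
            future = executor.submit(echo, Unpicklable())
            try:
                future.result(timeout=30)   # returns or raises instead of hanging
            except Exception as exc:
                print(type(exc).__name__, exc)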
Run the child process with -E option to ignore the PYTHONWARNINGS
environment variable.
* Fix multiple typos in code comments
* Add spacing in comments (test_logging.py, test_math.py)
* Fix spaces at the beginning of comments in test_logging.py
The kB (kilobyte) unit means 1000 bytes, whereas KiB ("kibibyte")
means 1024 bytes. KB was misused: replace kB or KB with KiB where
appropriate.
Same change for MB and GB, which become MiB and GiB.
Change the output of Tools/iobench/iobench.py.
Also round the size of the documentation from 5.5 MB to 5 MiB.
bpo-31310: multiprocessing's semaphore tracker should be launched again if crashed (#3247)
* Avoid mucking with process state in the test.
Add a warning if the semaphore process died, as semaphores may then be leaked.
* Add NEWS entry
bpo-31308: If multiprocessing's forkserver dies, launch it again when necessary (#3246)
* Fix test on Windows
* Add NEWS entry
* Adopt a different approach: ignore SIGINT and SIGTERM, as in semaphore tracker.
* Fix comment
* Make sure the test doesn't muck with process state
* Also test previously-started processes
* Update 2017-08-30-17-59-36.bpo-31308.KbexyC.rst
* Avoid masking SIGTERM in forkserver. It's not necessary and causes a race condition in test_many_processes.
or None. (#4073)
On macOS, a process can exit with -SIGKILL if it is killed "early"
with SIGTERM.
Give 30 seconds to join_process(), instead of 5 or 10 seconds, to
wait until the process completes.
join_thread() joins a thread but raises an AssertionError if the
thread is still alive after timeout seconds.
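A rough sketch of what such a helper looks like (the real implementation lives in the test support module; the names here are illustrative):

    def join_thread(thread, timeout=30.0):
        """Join *thread* within *timeout* seconds or fail the test."""
        thread.join(timeout)
        if thread.is_alive():
            raise AssertionError(f"thread {thread!r} is still alive "
                                 f"after {timeout} seconds")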
bpo-18966: non-daemonic threads created by a multiprocessing.Process should be joined on exit (#3111)
* Add NEWS blurb
Fix a warning about dangling processes in test_rapid_restart() of
_test_multiprocessing: join the process.
_test_multiprocessing now marks the test as ENV_CHANGED on dangling
process or thread.
* Explicitly close queues to make sure that we don't leave dangling
threads
* test_queue_in_process(): remove an unused queue
* test_access() also joins the process to fix a random warning
bpo-26762: test_multiprocessing now detects dangling processes and
threads per test case class:
* setUpClass()/tearDownClass() of the mixin classes now check whether
multiprocessing.process._dangling or threading._dangling was
modified, to detect "dangling" processes and threads.
* ManagerMixin.tearDownClass() now also emits a warning if it still
has more than one active child process after 5 seconds.
* tearDownModule() now checks for dangling processes and threads
before sleeping for 500 ms, and it now only sleeps if there is at
least one dangling process or thread.
bpo-26762: Fix more dangling processes and threads in
test_multiprocessing:
* Queue: call close() followed by join_thread()
* Process: call join() or self.addCleanup(p.join)
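The cleanup pattern being applied throughout, sketched outside of the test harness:

    from multiprocessing import Process, Queue

    def producer(queue):
        queue.put("done")

    if __name__ == "__main__":
        q = Queue()
        p = Process(target=producer, args=(q,))
        p.start()
        print(q.get())
        p.join()            # avoid a dangling child process
        q.close()           # no more data will be put into the queue
        q.join_thread()     # wait for the queue's feeder thread to exit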
test_level() of _test_multiprocessing._TestLogging now uses regular
processes rather than daemon processes to prevent zombie processes
(to not "leak" processes).