author	Filip Ɓajszczak <filip@lajszczak.dev>	2024-05-29 16:39:34 (GMT)
committer	GitHub <noreply@github.com>	2024-05-29 16:39:34 (GMT)
commit	659cb7e6b8e83e1541fc27fd29d4846e940b600e (patch)
tree	b4ee7f0ecb1f2c04053f1bfedbd5750fc058eb3e
parent	78d697b7d5ec2a6fa046b0e1c34e804f49e750b4 (diff)
gh-119721: Integrate documentation fixes into heapq module docstring. (gh-119722)
-rw-r--r--	Lib/heapq.py	12
1 file changed, 6 insertions, 6 deletions
diff --git a/Lib/heapq.py b/Lib/heapq.py
index c53cb55..9649da2 100644
--- a/Lib/heapq.py
+++ b/Lib/heapq.py
@@ -78,7 +78,7 @@ items while the sort is going on, provided that the inserted items are
not "better" than the last 0'th element you extracted. This is
especially useful in simulation contexts, where the tree holds all
incoming events, and the "win" condition means the smallest scheduled
-time. When an event schedule other events for execution, they are
+time. When an event schedules other events for execution, they are
scheduled into the future, so they can easily go into the heap. So, a
heap is a good structure for implementing schedulers (this is what I
used for my MIDI sequencer :-).
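
The scheduler pattern described in the hunk above maps directly onto heapq's API; a minimal sketch (illustrative only, not part of this patch, with made-up event names) might look like:

    import heapq

    schedule = []                                   # heap of (time, event) pairs
    heapq.heappush(schedule, (5, "note_off"))
    heapq.heappush(schedule, (1, "note_on"))
    heapq.heappush(schedule, (3, "tempo_change"))

    while schedule:
        time, event = heapq.heappop(schedule)       # smallest scheduled time "wins"
        print(time, event)
        # handling an event may schedule further events into the future, e.g.
        # heapq.heappush(schedule, (time + 2, "follow_up"))
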
@@ -91,14 +91,14 @@ are more efficient overall, yet the worst cases might be terrible.
Heaps are also very useful in big disk sorts. You most probably all
know that a big sort implies producing "runs" (which are pre-sorted
-sequences, which size is usually related to the amount of CPU memory),
+sequences, whose size is usually related to the amount of CPU memory),
followed by a merging passes for these runs, which merging is often
very cleverly organised[1]. It is very important that the initial
sort produces the longest runs possible. Tournaments are a good way
-to that. If, using all the memory available to hold a tournament, you
-replace and percolate items that happen to fit the current run, you'll
-produce runs which are twice the size of the memory for random input,
-and much better for input fuzzily ordered.
+to achieve that. If, using all the memory available to hold a
+tournament, you replace and percolate items that happen to fit the
+current run, you'll produce runs which are twice the size of the
+memory for random input, and much better for input fuzzily ordered.
Moreover, if you output the 0'th item on disk and get an input which
may not fit in the current tournament (because the value "wins" over