.. currentmodule:: asyncio

.. _asyncio-queues:

======
Queues
======

**Source code:** :source:`Lib/asyncio/queues.py`

------------------------------------------------

asyncio queues are designed to be similar to the classes of the
:mod:`queue` module.  Although asyncio queues are not thread-safe,
they are designed to be used specifically in async/await code.

Note that methods of asyncio queues don't have a *timeout* parameter;
use the :func:`asyncio.wait_for` function to do queue operations with a
timeout.
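
For example, an item retrieval with a timeout can be sketched with
:func:`asyncio.wait_for` (``get_with_timeout`` below is a hypothetical
helper, not part of this module)::

   import asyncio

   async def get_with_timeout(queue, timeout=1.0):
       try:
           # Give up if no item arrives within *timeout* seconds.
           return await asyncio.wait_for(queue.get(), timeout)
       except asyncio.TimeoutError:
           return None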

See also the `Examples`_ section below.

Queue
=====

.. class:: Queue(maxsize=0)

   A first in, first out (FIFO) queue.

   If *maxsize* is less than or equal to zero, the queue size is
   infinite.  If it is an integer greater than ``0``, then
   ``await put()`` blocks when the queue reaches *maxsize*
   until an item is removed by :meth:`get`.
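
   For example, with ``maxsize=1`` the second ``put()`` below cannot
   complete until the only slot is freed again (a minimal sketch for
   illustration)::

      import asyncio

      async def main():
          queue = asyncio.Queue(maxsize=1)
          queue.put_nowait('first')   # fills the only slot
          try:
              # This put() has to wait; give it 0.1 seconds to show the effect.
              await asyncio.wait_for(queue.put('second'), 0.1)
          except asyncio.TimeoutError:
              print('queue is full, put() is blocked')
          print(queue.get_nowait())   # frees the slot, prints 'first'

      asyncio.run(main())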

   Unlike the standard library's threading-based :mod:`queue`, the size
   of the queue is always known and can be returned by calling the
   :meth:`qsize` method.


   This class is :ref:`not thread safe <asyncio-multithreading>`.

   .. attribute:: maxsize

      Number of items allowed in the queue.

   .. method:: empty()

      Return ``True`` if the queue is empty, ``False`` otherwise.

   .. method:: full()

      Return ``True`` if there are :attr:`maxsize` items in the queue.

      If the queue was initialized with ``maxsize=0`` (the default),
      then :meth:`full()` never returns ``True``.

   .. coroutinemethod:: get()

      Remove and return an item from the queue. If the queue is empty,
      wait until an item is available.

   .. method:: get_nowait()

      Return an item if one is immediately available, else raise
      :exc:`QueueEmpty`.

   .. coroutinemethod:: join()

      Block until all items in the queue have been received and processed.

      The count of unfinished tasks goes up whenever an item is added
      to the queue. The count goes down whenever a consumer coroutine calls
      :meth:`task_done` to indicate that the item was retrieved and all
      work on it is complete.  When the count of unfinished tasks drops
      to zero, :meth:`join` unblocks.

   .. coroutinemethod:: put(item)

      Put an item into the queue. If the queue is full, wait until a
      free slot is available before adding the item.

   .. method:: put_nowait(item)

      Put an item into the queue without blocking.

      If no free slot is immediately available, raise :exc:`QueueFull`.

   .. method:: qsize()

      Return the number of items in the queue.

   .. method:: task_done()

      Indicate that a formerly enqueued task is complete.

      Used by queue consumers. For each :meth:`~Queue.get` used to
      fetch a task, a subsequent call to :meth:`task_done` tells the
      queue that the processing on the task is complete.

      If a :meth:`join` is currently blocking, it will resume when all
      items have been processed (meaning that a :meth:`task_done`
      call was received for every item that had been :meth:`~Queue.put`
      into the queue).

      Raises :exc:`ValueError` if called more times than there were
      items placed in the queue.


Priority Queue
==============

.. class:: PriorityQueue

   A variant of :class:`Queue`; retrieves entries in priority order
   (lowest first).

   Entries are typically tuples of the form
   ``(priority_number, data)``.
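
   For example, entries put in an arbitrary order come back sorted by
   their priority number (a minimal sketch for illustration)::

      import asyncio

      async def main():
          queue = asyncio.PriorityQueue()
          queue.put_nowait((2, 'medium'))
          queue.put_nowait((1, 'high'))
          queue.put_nowait((3, 'low'))
          while not queue.empty():
              priority, data = await queue.get()
              print(priority, data)   # 1 high, then 2 medium, then 3 low

      asyncio.run(main())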


LIFO Queue
==========

.. class:: LifoQueue

   A variant of :class:`Queue` that retrieves most recently added
   entries first (last in, first out).
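
   For example, the most recently added entry is returned first (a
   minimal sketch for illustration)::

      import asyncio

      async def main():
          stack = asyncio.LifoQueue()
          for item in ('first', 'second', 'third'):
              stack.put_nowait(item)
          while not stack.empty():
              print(await stack.get())   # third, then second, then first

      asyncio.run(main())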


Exceptions
==========

.. exception:: QueueEmpty

   This exception is raised when the :meth:`~Queue.get_nowait` method
   is called on an empty queue.


.. exception:: QueueFull

   Exception raised when the :meth:`~Queue.put_nowait` method is called
   on a queue that has reached its *maxsize*.
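
Both exceptions are raised only by the non-blocking methods; the
:meth:`~Queue.get` and :meth:`~Queue.put` coroutines wait instead.  A
minimal sketch of handling them (``drain`` and ``offer`` are
hypothetical helpers, not part of this module)::

   import asyncio

   def drain(queue):
       """Return every item currently in *queue* without waiting."""
       items = []
       while True:
           try:
               items.append(queue.get_nowait())
           except asyncio.QueueEmpty:
               return items

   def offer(queue, item):
       """Try to enqueue *item*; return False if the queue is full."""
       try:
           queue.put_nowait(item)
           return True
       except asyncio.QueueFull:
           return False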


Examples
========

.. _asyncio_example_queue_dist:

Queues can be used to distribute workload between several
concurrent tasks::

   import asyncio
   import random
   import time


   async def worker(name, queue):
       while True:
           # Get a "work item" out of the queue.
           sleep_for = await queue.get()

           # Sleep for "sleep_for" seconds.
           await asyncio.sleep(sleep_for)

           # Notify the queue that the "work item" has been processed.
           queue.task_done()

           print(f'{name} has slept for {sleep_for:.2f} seconds')


   async def main():
       # Create a queue that we will use to store our "workload".
       queue = asyncio.Queue()

       # Generate random timings and put them into the queue.
       total_sleep_time = 0
       for _ in range(20):
           sleep_for = random.uniform(0.05, 1.0)
           total_sleep_time += sleep_for
           queue.put_nowait(sleep_for)

       # Create three worker tasks to process the queue concurrently.
       tasks = []
       for i in range(3):
           task = asyncio.create_task(worker(f'worker-{i}', queue))
           tasks.append(task)

       # Wait until the queue is fully processed.
       started_at = time.monotonic()
       await queue.join()
       total_slept_for = time.monotonic() - started_at

       # Cancel our worker tasks.
       for task in tasks:
           task.cancel()
       # Wait until all worker tasks are cancelled.
       await asyncio.gather(*tasks, return_exceptions=True)

       print('====')
       print(f'3 workers slept in parallel for {total_slept_for:.2f} seconds')
       print(f'total expected sleep time: {total_sleep_time:.2f} seconds')


   asyncio.run(main())