\section{\module{asyncore} ---
         Asynchronous socket handler}
\declaremodule{builtin}{asyncore}
\modulesynopsis{A base class for developing asynchronous socket
handling services.}
\moduleauthor{Sam Rushing}{rushing@nightmare.com}
\sectionauthor{Christopher Petrilli}{petrilli@amber.org}
% Heavily adapted from original documentation by Sam Rushing.
This module provides the basic infrastructure for writing asynchronous
socket service clients and servers.

%\subsection{Why Asynchronous?}

There are only two ways to have a program on a single processor do
``more than one thing at a time.'' Multi-threaded programming is the
simplest and most popular way to do it, but there is another very
different technique that lets you have nearly all the advantages of
multi-threading, without actually using multiple threads. It's really
only practical if your program is largely I/O bound. If your program
is CPU bound, then pre-emptive scheduled threads are probably what
you really need. Network servers are rarely CPU-bound, however.

If your operating system supports the \cfunction{select()} system call
in its I/O library (and nearly all do), then you can use it to juggle
multiple communication channels at once, doing other work while your
I/O is taking place in the ``background.'' Although this strategy can
seem strange and complex, especially at first, it is in many ways
easier to understand and control than multi-threaded programming.

The module documented here solves many of the difficult problems for
you, making the task of building sophisticated high-performance
network servers and clients a snap.
\begin{classdesc}{dispatcher}{}
The first class we will introduce is the \class{dispatcher} class.
This is a thin wrapper around a low-level socket object. To make
it more useful, it has a few methods for event-handling on it.
Otherwise, it can be treated as a normal non-blocking socket object.

The direct interface between the select loop and the socket object
are the \method{handle_read_event()} and
\method{handle_write_event()} methods. These are called whenever an
object `fires' that event.

The firing of these low-level events can tell us whether certain
higher-level events have taken place, depending on the timing and
the state of the connection. For example, if we have asked for a
socket to connect to another host, we know that the connection has
been made when the socket fires a write event (at this point you
know that you may write to it with the expectation of success).
The implied higher-level events are:
\begin{tableii}{l|l}{code}{Event}{Description}
\lineii{handle_connect()}{Implied by a write event}
\lineii{handle_close()}{Implied by a read event with no data available}
\lineii{handle_accept()}{Implied by a read event on a listening socket}
\end{tableii}
\end{classdesc}
This set of user-level events is larger than the basics. The
full set of methods that can be overridden in your subclass is:
\begin{methoddesc}{handle_read}{}
Called when there is new data to be read from a socket.
\end{methoddesc}
\begin{methoddesc}{handle_write}{}
Called when the asynchronous loop detects that the socket can be
written to without blocking. Often this method will implement the
necessary buffering for performance. For example:
\begin{verbatim}
def handle_write(self):
    sent = self.send(self.buffer)
    self.buffer = self.buffer[sent:]
\end{verbatim}
\end{methoddesc}
\begin{methoddesc}{handle_expt}{}
Called when there is out of band (OOB) data for a socket
connection. This will almost never happen, as OOB is
tenuously supported and rarely used.
\end{methoddesc}
\begin{methoddesc}{handle_connect}{}
Called when the socket actually makes a connection. This
might be used to send a ``welcome'' banner, or something
similar.
\end{methoddesc}
\begin{methoddesc}{handle_close}{}
Called when the socket is closed.
\end{methoddesc}
\begin{methoddesc}{handle_accept}{}
Called on listening sockets when they actually accept a new
connection.
\end{methoddesc}
\begin{methoddesc}{readable}{}
Each time through the \method{select()} loop, the set of sockets
is scanned, and this method is called to see if there is any
interest in reading. The default method simply returns \code{1},
indicating that by default, all channels will be interested.
\end{methoddesc}
\begin{methoddesc}{writable}{}
Each time through the \method{select()} loop, the set of sockets
is scanned, and this method is called to see if there is any
interest in writing. The default method simply returns \code{1},
indicating that by default, all channels will be interested.
\end{methoddesc}
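For instance, a write-only channel might express its interest like
this; the class name and the outgoing \code{buffer} attribute below
are purely illustrative and not part of the module:

\begin{verbatim}
class push_only(asyncore.dispatcher):
    # Illustrative sketch: never interested in reading, and only
    # interested in writing while buffered data remains.
    def readable(self):
        return 0

    def writable(self):
        return len(self.buffer) > 0
\end{verbatim}
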
In addition, there are the basic methods needed to construct and
manipulate ``channels,'' which are what we will call the socket
connections in this context. Note that most of these are nearly
identical to their socket partners.
\begin{methoddesc}{create_socket}{family, type}
This is identical to the creation of a normal socket, and
will use the same options for creation. Refer to the
\refmodule{socket} documentation for information on creating
sockets.
\end{methoddesc}
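For example, a TCP/IP stream channel would typically be created
with:

\begin{verbatim}
self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
\end{verbatim}
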
\begin{methoddesc}{connect}{address}
As with the normal socket object, \var{address} is a
tuple with the first element the host to connect to, and the
second the port.
\end{methoddesc}
\begin{methoddesc}{send}{data}
Send \var{data} out the socket.
\end{methoddesc}
\begin{methoddesc}{recv}{buffer_size}
Read at most \var{buffer_size} bytes from the socket.
\end{methoddesc}
\begin{methoddesc}{listen}{\optional{backlog}}
Listen for connections made to the socket. The \var{backlog}
argument specifies the maximum number of queued connections
and should be at least 1; the maximum value is
system-dependent (usually 5).
\end{methoddesc}
\begin{methoddesc}{bind}{address}
Bind the socket to \var{address}. The socket must not already
be bound. (The format of \var{address} depends on the address
family --- see above.)
\end{methoddesc}
\begin{methoddesc}{accept}{}
Accept a connection. The socket must be bound to an address
and listening for connections. The return value is a pair
\code{(\var{conn}, \var{address})} where \var{conn} is a
\emph{new} socket object usable to send and receive data on
the connection, and \var{address} is the address bound to the
socket on the other end of the connection.
\end{methoddesc}
\begin{methoddesc}{close}{}
Close the socket. All future operations on the socket object
will fail. The remote end will receive no more data (after
queued data is flushed). Sockets are automatically closed
when they are garbage-collected.
\end{methoddesc}
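Taken together, these channel methods are enough to sketch a small
server. The following is only an illustration (the class names and
port number are arbitrary, and error handling is omitted): a
listening channel hands each accepted connection to a new dispatcher
that echoes whatever it reads.

\begin{verbatim}
import asyncore, socket

class echo_handler(asyncore.dispatcher):
    # One channel per accepted connection: echo back whatever
    # arrives.  Data is sent directly from handle_read(), so the
    # channel never asks to be polled for writability.
    def writable(self):
        return 0

    def handle_read(self):
        data = self.recv(1024)
        if data:
            self.send(data)

class echo_server(asyncore.dispatcher):
    def __init__(self, port):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.bind(('', port))
        self.listen(5)

    def handle_accept(self):
        # A read event on the listening socket means a peer is
        # connecting.
        conn, addr = self.accept()
        echo_handler(conn)

echo_server(8007)
asyncore.loop()
\end{verbatim}
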
\subsection{Example basic HTTP client \label{asyncore-example}}
As a basic example, below is a simple HTTP client that uses the
\class{dispatcher} class to implement its socket handling:
\begin{verbatim}
import asyncore, socket

class http_client(asyncore.dispatcher):

    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.path = path
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect( (host, 80) )
        # Buffer the request; it is drained by handle_write().
        self.buffer = 'GET %s HTTP/1.0\r\n\r\n' % self.path

    def handle_connect(self):
        pass

    def handle_read(self):
        data = self.recv(8192)
        print data

    def writable(self):
        return (len(self.buffer) > 0)

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]
\end{verbatim}
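
To actually run the client, create an instance and enter the polling
loop; the host and path here are only placeholders:

\begin{verbatim}
c = http_client('www.python.org', '/')
asyncore.loop()
\end{verbatim}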