author     Christian Heimes <christian@cheimes.de>  2008-03-23 21:54:12 (GMT)
committer  Christian Heimes <christian@cheimes.de>  2008-03-23 21:54:12 (GMT)
commit     fe337bfd0d89c62917e3625111c65f4aa187c6b4
tree       5b2b30e195ce4e9b43fc6defe9482fb9f6eabd21
parent     fae759fb276b9e17fe09ecf37ecce618bc9bbb58
Merged revisions 61724-61725,61731-61735,61737,61739,61741,61743-61744,61753,61761,61765-61767,61769,61773,61776-61778,61780-61783,61788,61793,61796,61807,61813 via svnmerge from
svn+ssh://pythondev@svn.python.org/python/trunk

................
r61724 | martin.v.loewis | 2008-03-22 01:01:12 +0100 (Sat, 22 Mar 2008) | 49 lines
  Merged revisions 61602-61723 via svnmerge from
  svn+ssh://pythondev@svn.python.org/sandbox/trunk/2to3/lib2to3
........
r61626 | david.wolever | 2008-03-19 17:19:16 +0100 (Mi, 19 Mär 2008) | 1 line
  Added fixer for implicit local imports. See #2414.
........
r61628 | david.wolever | 2008-03-19 17:57:43 +0100 (Mi, 19 Mär 2008) | 1 line
  Added a class for tests which should not run if a particular import is found.
........
r61629 | collin.winter | 2008-03-19 17:58:19 +0100 (Mi, 19 Mär 2008) | 1 line
  Two more relative import fixes in pgen2.
........
r61635 | david.wolever | 2008-03-19 20:16:03 +0100 (Mi, 19 Mär 2008) | 1 line
  Fixed print fixer so it will do the Right Thing when it encounters
  __future__.print_function. 2to3 gets upset, though, so the tests have been
  commented out.
........
r61637 | david.wolever | 2008-03-19 21:37:17 +0100 (Mi, 19 Mär 2008) | 3 lines
  Added a fixer for itertools imports (from itertools import imap, ifilterfalse
  --> from itertools import filterfalse)
........
r61645 | david.wolever | 2008-03-19 23:22:35 +0100 (Mi, 19 Mär 2008) | 1 line
  SVN is happier when you add the files you create... -_-'
........
r61654 | david.wolever | 2008-03-20 01:09:56 +0100 (Do, 20 Mär 2008) | 1 line
  Added an explicit sort order to fixers -- fixes problems like #2427
........
r61664 | david.wolever | 2008-03-20 04:32:40 +0100 (Do, 20 Mär 2008) | 3 lines
  Fixes #2428 -- comments are no longer eatten by __future__ fixer.
........
r61673 | david.wolever | 2008-03-20 17:22:40 +0100 (Do, 20 Mär 2008) | 1 line
  Added 2to3 node pretty-printer
........
r61679 | david.wolever | 2008-03-20 20:50:42 +0100 (Do, 20 Mär 2008) | 1 line
  Made node printing a little bit prettier
........
r61723 | martin.v.loewis | 2008-03-22 00:59:27 +0100 (Sa, 22 Mär 2008) | 2 lines
  Fix whitespace.
........
................
r61725 | martin.v.loewis | 2008-03-22 01:02:41 +0100 (Sat, 22 Mar 2008) | 2 lines
  Install lib2to3.
................
r61731 | facundo.batista | 2008-03-22 03:45:37 +0100 (Sat, 22 Mar 2008) | 4 lines
  Small fix that complicated the test actually when that test failed.
................
r61732 | alexandre.vassalotti | 2008-03-22 05:08:44 +0100 (Sat, 22 Mar 2008) | 2 lines
  Added warning for the removal of 'hotshot' in Py3k.
................
r61733 | georg.brandl | 2008-03-22 11:07:29 +0100 (Sat, 22 Mar 2008) | 4 lines
  #1918: document that weak references *to* an object are cleared before the
  object's __del__ is called, to ensure that the weak reference callback (if
  any) finds the object healthy.
................
r61734 | georg.brandl | 2008-03-22 11:56:23 +0100 (Sat, 22 Mar 2008) | 2 lines
  Activate the Sphinx doctest extension and convert howto/functional to use it.
................
r61735 | georg.brandl | 2008-03-22 11:58:38 +0100 (Sat, 22 Mar 2008) | 2 lines
  Allow giving source names on the cmdline.
................
r61737 | georg.brandl | 2008-03-22 12:00:48 +0100 (Sat, 22 Mar 2008) | 2 lines
  Fixup this HOWTO's doctest blocks so that they can be run with sphinx'
  doctest builder.
................
r61739 | georg.brandl | 2008-03-22 12:47:10 +0100 (Sat, 22 Mar 2008) | 2 lines
  Test decimal.rst doctests as far as possible with sphinx doctest.
................
r61741 | georg.brandl | 2008-03-22 13:04:26 +0100 (Sat, 22 Mar 2008) | 2 lines
  Make doctests in re docs usable with sphinx' doctest.
................
r61743 | georg.brandl | 2008-03-22 13:59:37 +0100 (Sat, 22 Mar 2008) | 2 lines
  Make more doctests in pprint docs testable.
................
r61744 | georg.brandl | 2008-03-22 14:07:06 +0100 (Sat, 22 Mar 2008) | 2 lines
  No need to specify explicit "doctest_block" anymore.
................
r61753 | georg.brandl | 2008-03-22 21:08:43 +0100 (Sat, 22 Mar 2008) | 2 lines
  Fix-up syntax problems.
................
r61761 | georg.brandl | 2008-03-22 22:06:20 +0100 (Sat, 22 Mar 2008) | 4 lines
  Make collections' doctests executable. (The <BLANKLINE>s will be stripped
  from presentation output.)
................
r61765 | georg.brandl | 2008-03-22 22:21:57 +0100 (Sat, 22 Mar 2008) | 2 lines
  Test doctests in datetime docs.
................
r61766 | georg.brandl | 2008-03-22 22:26:44 +0100 (Sat, 22 Mar 2008) | 2 lines
  Test doctests in operator docs.
................
r61767 | georg.brandl | 2008-03-22 22:38:33 +0100 (Sat, 22 Mar 2008) | 2 lines
  Enable doctests in functions.rst. Already found two errors :)
................
r61769 | georg.brandl | 2008-03-22 23:04:10 +0100 (Sat, 22 Mar 2008) | 3 lines
  Enable doctest running for several other documents. We have now over 640
  doctests that are run with "make doctest".
................
r61773 | raymond.hettinger | 2008-03-23 01:55:46 +0100 (Sun, 23 Mar 2008) | 1 line
  Simplify demo code.
................
r61776 | neal.norwitz | 2008-03-23 04:43:33 +0100 (Sun, 23 Mar 2008) | 7 lines
  Try to make this test a little more robust and not fail with:
    timeout (10.0025) is more than 2 seconds more than expected (0.001)
  I'm assuming this problem is caused by DNS lookup. This change does a DNS
  lookup of the hostname before trying to connect, so the time is not included.
................
r61777 | neal.norwitz | 2008-03-23 05:08:30 +0100 (Sun, 23 Mar 2008) | 1 line
  Speed up the test by avoiding socket timeouts.
................
r61778 | neal.norwitz | 2008-03-23 05:43:09 +0100 (Sun, 23 Mar 2008) | 1 line
  Skip the epoll test if epoll() does not work
................
r61780 | neal.norwitz | 2008-03-23 06:47:20 +0100 (Sun, 23 Mar 2008) | 1 line
  Suppress failure (to avoid a flaky test) if we cannot connect to svn.python.org
................
r61781 | neal.norwitz | 2008-03-23 07:13:25 +0100 (Sun, 23 Mar 2008) | 4 lines
  Move itertools before future_builtins since the latter depends on the former.
  From a clean build importing future_builtins would fail since itertools
  wasn't built yet.
................
r61782 | neal.norwitz | 2008-03-23 07:16:04 +0100 (Sun, 23 Mar 2008) | 1 line
  Try to prevent the alarm going off early in tearDown
................
r61783 | neal.norwitz | 2008-03-23 07:19:57 +0100 (Sun, 23 Mar 2008) | 4 lines
  Remove compiler warnings (on Alpha at least) about using chars as array
  subscripts. Using chars are dangerous b/c they are signed on some platforms
  and unsigned on others.
................
r61788 | georg.brandl | 2008-03-23 09:05:30 +0100 (Sun, 23 Mar 2008) | 2 lines
  Make the doctests presentation-friendlier.
................
r61793 | amaury.forgeotdarc | 2008-03-23 10:55:29 +0100 (Sun, 23 Mar 2008) | 4 lines
  #1477: ur'\U0010FFFF' raised in narrow unicode builds.
  Corrected the raw-unicode-escape codec to use UTF-16 surrogates in this
  case, just like the unicode-escape codec.
................
r61796 | raymond.hettinger | 2008-03-23 14:32:32 +0100 (Sun, 23 Mar 2008) | 1 line
  Issue 1681432: Add triangular distribution the random module.
................
r61807 | raymond.hettinger | 2008-03-23 20:37:53 +0100 (Sun, 23 Mar 2008) | 4 lines
  Adopt Nick's suggestion for useful default arguments. Clean-up floating
  point issues by adding true division and float constants.
................
r61813 | gregory.p.smith | 2008-03-23 22:04:43 +0100 (Sun, 23 Mar 2008) | 6 lines
  Fix gzip to deal with CRC's being signed values in Python 2.x properly and
  to read 32bit values as unsigned to start with rather than applying
  signedness fixups allover the place afterwards. This hopefully fixes the
  test_tarfile failure on the alpha/tru64 buildbot.
................
Diffstat (limited to 'Doc/howto/functional.rst')
-rw-r--r--  Doc/howto/functional.rst  380
1 file changed, 190 insertions, 190 deletions
diff --git a/Doc/howto/functional.rst b/Doc/howto/functional.rst
index e7b23b7..a81e5eb 100644
--- a/Doc/howto/functional.rst
+++ b/Doc/howto/functional.rst
@@ -2,7 +2,7 @@
Functional Programming HOWTO
********************************
-:Author: \A. M. Kuchling
+:Author: A. M. Kuchling
:Release: 0.31
(This is a first draft. Please send comments/error reports/suggestions to
@@ -98,6 +98,7 @@ to the functional style:
* Composability.
* Ease of debugging and testing.
+
Formal provability
------------------
@@ -133,6 +134,7 @@ down or generated a proof, there would then be the question of verifying the
proof; maybe there's an error in it, and you wrongly believe you've proved the
program correct.
+
Modularity
----------
@@ -159,7 +161,6 @@ running a test; instead you only have to synthesize the right input and then
check that the output matches expectations.
-
Composability
-------------
@@ -175,7 +176,6 @@ new programs by arranging existing functions in a new configuration and writing
a few functions specialized for the current task.
-
Iterators
=========
@@ -197,12 +197,12 @@ built-in data types support iteration, the most common being lists and
dictionaries. An object is called an **iterable** object if you can get an
iterator for it.
-You can experiment with the iteration interface manually::
+You can experiment with the iteration interface manually:
>>> L = [1,2,3]
>>> it = iter(L)
>>> it
- <iterator object at 0x8116870>
+ <...iterator object at ...>
>>> it.next()
1
>>> it.next()
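An illustrative sketch (not part of this patch): the same protocol can be driven by hand until the iterator raises ``StopIteration``, using the Python 2.x ``next()`` method shown above:

   >>> L = [1, 2, 3]
   >>> it = iter(L)
   >>> while True:
   ...     try:
   ...         print it.next()   # raises StopIteration once L is exhausted
   ...     except StopIteration:
   ...         break
   1
   2
   3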
@@ -220,14 +220,15 @@ important being the ``for`` statement. In the statement ``for X in Y``, Y must
be an iterator or some object for which ``iter()`` can create an iterator.
These two statements are equivalent::
- for i in iter(obj):
- print(i)
- for i in obj:
- print(i)
+ for i in iter(obj):
+ print i
+
+ for i in obj:
+ print i
Iterators can be materialized as lists or tuples by using the :func:`list` or
-:func:`tuple` constructor functions::
+:func:`tuple` constructor functions:
>>> L = [1,2,3]
>>> iterator = iter(L)
@@ -236,7 +237,7 @@ Iterators can be materialized as lists or tuples by using the :func:`list` or
(1, 2, 3)
Sequence unpacking also supports iterators: if you know an iterator will return
-N elements, you can unpack them into an N-tuple::
+N elements, you can unpack them into an N-tuple:
>>> L = [1,2,3]
>>> iterator = iter(L)
@@ -269,7 +270,11 @@ sequence type, such as strings, will automatically support creation of an
iterator.
Calling :func:`iter` on a dictionary returns an iterator that will loop over the
-dictionary's keys::
+dictionary's keys:
+
+.. not a doctest since dict ordering varies across Pythons
+
+::
>>> m = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,
... 'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}
@@ -279,11 +284,11 @@ dictionary's keys::
Feb 2
Aug 8
Sep 9
- May 5
+ Apr 4
Jun 6
Jul 7
Jan 1
- Apr 4
+ May 5
Nov 11
Dec 12
Oct 10
@@ -297,7 +302,7 @@ over values or key/value pairs, you can explicitly call the
:meth:`values` or :meth:`items` methods to get an appropriate iterator.
The :func:`dict` constructor can accept an iterator that returns a finite stream
-of ``(key, value)`` tuples::
+of ``(key, value)`` tuples:
>>> L = [('Italy', 'Rome'), ('France', 'Paris'), ('US', 'Washington DC')]
>>> dict(iter(L))
@@ -334,18 +339,18 @@ List comprehensions and generator expressions (short form: "listcomps" and
functional programming language Haskell (http://www.haskell.org). You can strip
all the whitespace from a stream of strings with the following code::
- line_list = [' line 1\n', 'line 2 \n', ...]
+ line_list = [' line 1\n', 'line 2 \n', ...]
- # Generator expression -- returns iterator
- stripped_iter = (line.strip() for line in line_list)
+ # Generator expression -- returns iterator
+ stripped_iter = (line.strip() for line in line_list)
- # List comprehension -- returns list
- stripped_list = [line.strip() for line in line_list]
+ # List comprehension -- returns list
+ stripped_list = [line.strip() for line in line_list]
You can select only certain elements by adding an ``"if"`` condition::
- stripped_list = [line.strip() for line in line_list
- if line != ""]
+ stripped_list = [line.strip() for line in line_list
+ if line != ""]
With a list comprehension, you get back a Python list; ``stripped_list`` is a
list containing the resulting lines, not an iterator. Generator expressions
@@ -378,7 +383,7 @@ Generator expressions always have to be written inside parentheses, but the
parentheses signalling a function call also count. If you want to create an
iterator that will be immediately passed to a function you can write::
- obj_total = sum(obj.count for obj in list_all_objects())
+ obj_total = sum(obj.count for obj in list_all_objects())
The ``for...in`` clauses contain the sequences to be iterated over. The
sequences do not have to be the same length, because they are iterated over from
@@ -406,11 +411,14 @@ equivalent to the following Python code::
This means that when there are multiple ``for...in`` clauses but no ``if``
clauses, the length of the resulting output will be equal to the product of the
lengths of all the sequences. If you have two lists of length 3, the output
-list is 9 elements long::
+list is 9 elements long:
- seq1 = 'abc'
- seq2 = (1,2,3)
- >>> [ (x,y) for x in seq1 for y in seq2]
+.. doctest::
+ :options: +NORMALIZE_WHITESPACE
+
+ >>> seq1 = 'abc'
+ >>> seq2 = (1,2,3)
+ >>> [(x,y) for x in seq1 for y in seq2]
[('a', 1), ('a', 2), ('a', 3),
('b', 1), ('b', 2), ('b', 3),
('c', 1), ('c', 2), ('c', 3)]
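An illustrative sketch (not part of this patch): the cross-product behaviour of multiple ``for...in`` clauses corresponds to nested loops, which is why the output length is the product of the sequence lengths:

   >>> seq1 = 'abc'
   >>> seq2 = (1, 2, 3)
   >>> result = []
   >>> for x in seq1:
   ...     for y in seq2:
   ...         result.append((x, y))
   >>> len(result)          # len(seq1) * len(seq2)
   9
   >>> result[:3]
   [('a', 1), ('a', 2), ('a', 3)]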
@@ -441,7 +449,9 @@ variables. But, what if the local variables weren't thrown away on exiting a
function? What if you could later resume the function where it left off? This
is what generators provide; they can be thought of as resumable functions.
-Here's the simplest example of a generator function::
+Here's the simplest example of a generator function:
+
+.. testcode::
def generate_ints(N):
for i in range(N):
@@ -459,11 +469,11 @@ statement is that on reaching a ``yield`` the generator's state of execution is
suspended and local variables are preserved. On the next call to the
generator's ``.next()`` method, the function will resume executing.
-Here's a sample usage of the ``generate_ints()`` generator::
+Here's a sample usage of the ``generate_ints()`` generator:
>>> gen = generate_ints(3)
>>> gen
- <generator object at 0x8117f90>
+ <generator object at ...>
>>> gen.next()
0
>>> gen.next()
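An illustrative sketch (not part of this patch): a fuller ``generate_ints()`` session, including the ``StopIteration`` that ends it, using the Python 2.x ``.next()`` method as in the text:

   >>> def generate_ints(N):
   ...     for i in range(N):
   ...         yield i
   >>> gen = generate_ints(3)
   >>> gen.next(), gen.next(), gen.next()
   (0, 1, 2)
   >>> gen.next()            # the generator is now exhausted
   Traceback (most recent call last):
     ...
   StopIteration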
@@ -496,9 +506,7 @@ can be much messier.
The test suite included with Python's library, ``test_generators.py``, contains
a number of more interesting examples. Here's one generator that implements an
-in-order traversal of a tree using generators recursively.
-
-::
+in-order traversal of a tree using generators recursively. ::
# A recursive generator that generates Tree leaves in in-order.
def inorder(t):
@@ -553,7 +561,7 @@ returns ``None``.
Here's a simple counter that increments by 1 and allows changing the value of
the internal counter.
-::
+.. testcode::
def counter (maximum):
i = 0
@@ -622,15 +630,14 @@ features of generator expressions:
``map(f, iterA, iterB, ...)`` returns an iterator over the sequence
``f(iterA[0], iterB[0]), f(iterA[1], iterB[1]), f(iterA[2], iterB[2]), ...``.
-::
+ >>> def upper(s):
+ ... return s.upper()
- def upper(s):
- return s.upper()
- list(map(upper, ['sentence', 'fragment'])) =>
- ['SENTENCE', 'FRAGMENT']
- list(upper(s) for s in ['sentence', 'fragment']) =>
- ['SENTENCE', 'FRAGMENT']
+ >>> map(upper, ['sentence', 'fragment'])
+ ['SENTENCE', 'FRAGMENT']
+ >>> [upper(s) for s in ['sentence', 'fragment']]
+ ['SENTENCE', 'FRAGMENT']
You can of course achieve the same effect with a list comprehension.
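An illustrative sketch (not part of this patch): the multi-iterable form of ``map()`` mentioned above applies ``f`` to parallel elements; in Python 2.x ``map()`` returns a list:

   >>> import operator
   >>> map(operator.add, [1, 2, 3], [10, 20, 30])   # f(iterA[0], iterB[0]), ...
   [11, 22, 33]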
@@ -640,15 +647,14 @@ comprehensions. A **predicate** is a function that returns the truth value of
some condition; for use with :func:`filter`, the predicate must take a single
value.
-::
+ >>> def is_even(x):
+ ... return (x % 2) == 0
- def is_even(x):
- return (x % 2) == 0
+ >>> filter(is_even, range(10))
+ [0, 2, 4, 6, 8]
- list(filter(is_even, range(10))) =>
- [0, 2, 4, 6, 8]
-This can also be written as a generator expression::
+This can also be written as a list comprehension:
>>> list(x for x in range(10) if is_even(x))
[0, 2, 4, 6, 8]
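An illustrative sketch (not part of this patch): the predicate can also be an inline ``lambda``; in Python 2.x ``filter()`` returns a list here:

   >>> filter(lambda x: (x % 2) == 0, range(10))
   [0, 2, 4, 6, 8]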
@@ -664,27 +670,41 @@ If the iterable returns no values at all, a :exc:`TypeError` exception is
raised. If the initial value is supplied, it's used as a starting point and
``func(initial_value, A)`` is the first calculation. ::
- import operator
- import functools
- functools.reduce(operator.concat, ['A', 'BB', 'C']) =>
- 'ABBC'
- functools.reduce(operator.concat, []) =>
- TypeError: reduce() of empty sequence with no initial value
- functools.reduce(operator.mul, [1,2,3], 1) =>
- 6
- functools.reduce(operator.mul, [], 1) =>
- 1
-
-If you use :func:`operator.add` with :func:`functools.reduce`, you'll add up all
-the elements of the iterable. This case is so common that there's a special
-built-in called :func:`sum` to compute it::
-
- functools.reduce(operator.add, [1,2,3,4], 0) =>
- 10
- sum([1,2,3,4]) =>
- 10
- sum([]) =>
- 0
+
+``reduce(func, iter, [initial_value])`` doesn't have a counterpart in the
+:mod:`itertools` module because it cumulatively performs an operation on all the
+iterable's elements and therefore can't be applied to infinite iterables.
+``func`` must be a function that takes two elements and returns a single value.
+:func:`reduce` takes the first two elements A and B returned by the iterator and
+calculates ``func(A, B)``. It then requests the third element, C, calculates
+``func(func(A, B), C)``, combines this result with the fourth element returned,
+and continues until the iterable is exhausted. If the iterable returns no
+values at all, a :exc:`TypeError` exception is raised. If the initial value is
+supplied, it's used as a starting point and ``func(initial_value, A)`` is the
+first calculation.
+
+ >>> import operator
+ >>> reduce(operator.concat, ['A', 'BB', 'C'])
+ 'ABBC'
+ >>> reduce(operator.concat, [])
+ Traceback (most recent call last):
+ ...
+ TypeError: reduce() of empty sequence with no initial value
+ >>> reduce(operator.mul, [1,2,3], 1)
+ 6
+ >>> reduce(operator.mul, [], 1)
+ 1
+
+If you use :func:`operator.add` with :func:`reduce`, you'll add up all the
+elements of the iterable. This case is so common that there's a special
+built-in called :func:`sum` to compute it:
+
+ >>> reduce(operator.add, [1,2,3,4], 0)
+ 10
+ >>> sum([1,2,3,4])
+ 10
+ >>> sum([])
+ 0
For many uses of :func:`reduce`, though, it can be clearer to just write the
obvious :keyword:`for` loop::
@@ -701,8 +721,11 @@ obvious :keyword:`for` loop::
``enumerate(iter)`` counts off the elements in the iterable, returning 2-tuples
containing the count and each element. ::
- enumerate(['subject', 'verb', 'object']) =>
- (0, 'subject'), (1, 'verb'), (2, 'object')
+ >>> for item in enumerate(['subject', 'verb', 'object']):
+ ... print item
+ (0, 'subject')
+ (1, 'verb')
+ (2, 'object')
:func:`enumerate` is often used when looping through a list and recording the
indexes at which certain conditions are met::
@@ -712,20 +735,21 @@ indexes at which certain conditions are met::
if line.strip() == '':
print('Blank line at line #%i' % i)
-``sorted(iterable, [key=None], [reverse=False)`` collects all the elements of
-the iterable into a list, sorts the list, and returns the sorted result. The
-``key``, and ``reverse`` arguments are passed through to the constructed list's
-``sort()`` method. ::
-
- import random
- # Generate 8 random numbers between [0, 10000)
- rand_list = random.sample(range(10000), 8)
- rand_list =>
- [769, 7953, 9828, 6431, 8442, 9878, 6213, 2207]
- sorted(rand_list) =>
- [769, 2207, 6213, 6431, 7953, 8442, 9828, 9878]
- sorted(rand_list, reverse=True) =>
- [9878, 9828, 8442, 7953, 6431, 6213, 2207, 769]
+
+``sorted(iterable, [cmp=None], [key=None], [reverse=False])`` collects all the
+elements of the iterable into a list, sorts the list, and returns the sorted
+result. The ``cmp``, ``key``, and ``reverse`` arguments are passed through to
+the constructed list's ``.sort()`` method. ::
+
+ >>> import random
+ >>> # Generate 8 random numbers between [0, 10000)
+ >>> rand_list = random.sample(range(10000), 8)
+ >>> rand_list
+ [769, 7953, 9828, 6431, 8442, 9878, 6213, 2207]
+ >>> sorted(rand_list)
+ [769, 2207, 6213, 6431, 7953, 8442, 9828, 9878]
+ >>> sorted(rand_list, reverse=True)
+ [9878, 9828, 8442, 7953, 6431, 6213, 2207, 769]
(For a more detailed discussion of sorting, see the Sorting mini-HOWTO in the
Python wiki at http://wiki.python.org/moin/HowTo/Sorting.)
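An illustrative sketch (not part of this patch): the ``key`` argument mentioned above, shown with a fixed list so the output is reproducible:

   >>> words = ['banana', 'pie', 'Washington', 'book']
   >>> sorted(words, key=len)
   ['pie', 'book', 'banana', 'Washington']
   >>> sorted(words, key=str.lower)      # case-insensitive ordering
   ['banana', 'book', 'pie', 'Washington']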
@@ -733,20 +757,20 @@ Python wiki at http://wiki.python.org/moin/HowTo/Sorting.)
The ``any(iter)`` and ``all(iter)`` built-ins look at the truth values of an
iterable's contents. :func:`any` returns True if any element in the iterable is
a true value, and :func:`all` returns True if all of the elements are true
-values::
-
- any([0,1,0]) =>
- True
- any([0,0,0]) =>
- False
- any([1,1,1]) =>
- True
- all([0,1,0]) =>
- False
- all([0,0,0]) =>
- False
- all([1,1,1]) =>
- True
+values:
+
+ >>> any([0,1,0])
+ True
+ >>> any([0,0,0])
+ False
+ >>> any([1,1,1])
+ True
+ >>> all([0,1,0])
+ False
+ >>> all([0,0,0])
+ False
+ >>> all([1,1,1])
+ True
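An illustrative sketch (not part of this patch): both built-ins also work with generator expressions, and ``all()`` is vacuously true for an empty iterable:

   >>> any(x > 10 for x in [1, 5, 20, 3])
   True
   >>> all(x > 0 for x in [1, 5, 20, 3])
   True
   >>> all(x > 0 for x in [])    # no elements to falsify the condition
   True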
Small functions and the lambda expression
@@ -758,31 +782,31 @@ act as predicates or that combine elements in some way.
If there's a Python built-in or a module function that's suitable, you don't
need to define a new function at all::
- stripped_lines = [line.strip() for line in lines]
- existing_files = filter(os.path.exists, file_list)
+ stripped_lines = [line.strip() for line in lines]
+ existing_files = filter(os.path.exists, file_list)
If the function you need doesn't exist, you need to write it. One way to write
small functions is to use the ``lambda`` statement. ``lambda`` takes a number
of parameters and an expression combining these parameters, and creates a small
function that returns the value of the expression::
- lowercase = lambda x: x.lower()
+ lowercase = lambda x: x.lower()
- print_assign = lambda name, value: name + '=' + str(value)
+ print_assign = lambda name, value: name + '=' + str(value)
- adder = lambda x, y: x+y
+ adder = lambda x, y: x+y
An alternative is to just use the ``def`` statement and define a function in the
usual way::
- def lowercase(x):
- return x.lower()
+ def lowercase(x):
+ return x.lower()
- def print_assign(name, value):
- return name + '=' + str(value)
+ def print_assign(name, value):
+ return name + '=' + str(value)
- def adder(x,y):
- return x + y
+ def adder(x,y):
+ return x + y
Which alternative is preferable? That's a style question; my usual course is to
avoid using ``lambda``.
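An illustrative sketch (not part of this patch): a third option is to reach for the ``operator`` module instead of writing a ``lambda`` at all:

   >>> import operator
   >>> adder = operator.add                         # instead of lambda x, y: x + y
   >>> adder(3, 4)
   7
   >>> pairs = [('a', 3), ('b', 1), ('c', 2)]
   >>> sorted(pairs, key=operator.itemgetter(1))    # instead of lambda p: p[1]
   [('b', 1), ('c', 2), ('a', 3)]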
@@ -853,24 +877,20 @@ Creating new iterators
``itertools.count(n)`` returns an infinite stream of integers, increasing by 1
each time. You can optionally supply the starting number, which defaults to 0::
- itertools.count() =>
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...
- itertools.count(10) =>
- 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...
+ itertools.count() =>
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...
+ itertools.count(10) =>
+ 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...
``itertools.cycle(iter)`` saves a copy of the contents of a provided iterable
and returns a new iterator that returns its elements from first to last. The
-new iterator will repeat these elements infinitely.
-
-::
+new iterator will repeat these elements infinitely. ::
- itertools.cycle([1,2,3,4,5]) =>
- 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, ...
+ itertools.cycle([1,2,3,4,5]) =>
+ 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, ...
``itertools.repeat(elem, [n])`` returns the provided element ``n`` times, or
-returns the element endlessly if ``n`` is not provided.
-
-::
+returns the element endlessly if ``n`` is not provided. ::
itertools.repeat('abc') =>
abc, abc, abc, abc, abc, abc, abc, abc, abc, abc, ...
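An illustrative sketch (not part of this patch): because these iterators are infinite, or potentially so, ``itertools.islice()`` is used below to take a finite prefix:

   >>> import itertools
   >>> list(itertools.islice(itertools.count(10), 5))
   [10, 11, 12, 13, 14]
   >>> list(itertools.islice(itertools.cycle([1, 2, 3]), 7))
   [1, 2, 3, 1, 2, 3, 1]
   >>> list(itertools.repeat('abc', 3))       # finite because n is given
   ['abc', 'abc', 'abc']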
@@ -879,9 +899,7 @@ returns the element endlessly if ``n`` is not provided.
``itertools.chain(iterA, iterB, ...)`` takes an arbitrary number of iterables as
input, and returns all the elements of the first iterator, then all the elements
-of the second, and so on, until all of the iterables have been exhausted.
-
-::
+of the second, and so on, until all of the iterables have been exhausted. ::
itertools.chain(['a', 'b', 'c'], (1, 2, 3)) =>
a, b, c, 1, 2, 3
@@ -900,9 +918,7 @@ term for this behaviour is `lazy evaluation
This iterator is intended to be used with iterables that are all of the same
length. If the iterables are of different lengths, the resulting stream will be
-the same length as the shortest iterable.
-
-::
+the same length as the shortest iterable. ::
itertools.izip(['a', 'b'], (1, 2, 3)) =>
('a', 1), ('b', 2)
@@ -916,9 +932,7 @@ slice of the iterator. With a single ``stop`` argument, it will return the
first ``stop`` elements. If you supply a starting index, you'll get
``stop-start`` elements, and if you supply a value for ``step``, elements will
be skipped accordingly. Unlike Python's string and list slicing, you can't use
-negative values for ``start``, ``stop``, or ``step``.
-
-::
+negative values for ``start``, ``stop``, or ``step``. ::
itertools.islice(range(10), 8) =>
0, 1, 2, 3, 4, 5, 6, 7
@@ -932,9 +946,7 @@ independent iterators that will all return the contents of the source iterator.
If you don't supply a value for ``n``, the default is 2. Replicating iterators
requires saving some of the contents of the source iterator, so this can consume
significant memory if the iterator is large and one of the new iterators is
-consumed more than the others.
-
-::
+consumed more than the others. ::
itertools.tee( itertools.count() ) =>
iterA, iterB
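An illustrative sketch (not part of this patch): both iterators returned by ``tee()`` yield the full contents independently:

   >>> import itertools
   >>> iterA, iterB = itertools.tee([1, 2, 3])
   >>> list(iterA)
   [1, 2, 3]
   >>> list(iterB)        # unaffected by exhausting iterA
   [1, 2, 3]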
@@ -1116,87 +1128,76 @@ This section contains an introduction to some of the most important functions in
The ``compose()`` function implements function composition. In other words, it
returns a wrapper around the ``outer`` and ``inner`` callables, such that the
-return value from ``inner`` is fed directly to ``outer``. That is,
-
-::
-
- >>> def add(a, b):
- ... return a + b
- ...
- >>> def double(a):
- ... return 2 * a
- ...
- >>> compose(double, add)(5, 6)
- 22
+return value from ``inner`` is fed directly to ``outer``. That is, ::
-is equivalent to
+ >>> def add(a, b):
+ ... return a + b
+ ...
+ >>> def double(a):
+ ... return 2 * a
+ ...
+ >>> compose(double, add)(5, 6)
+ 22
-::
+is equivalent to ::
- >>> double(add(5, 6))
- 22
+ >>> double(add(5, 6))
+ 22
The ``unpack`` keyword is provided to work around the fact that Python functions
are not always `fully curried <http://en.wikipedia.org/wiki/Currying>`__. By
default, it is expected that the ``inner`` function will return a single object
and that the ``outer`` function will take a single argument. Setting the
``unpack`` argument causes ``compose`` to expect a tuple from ``inner`` which
-will be expanded before being passed to ``outer``. Put simply,
-
-::
+will be expanded before being passed to ``outer``. Put simply, ::
- compose(f, g)(5, 6)
+ compose(f, g)(5, 6)
is equivalent to::
- f(g(5, 6))
+ f(g(5, 6))
-while
+while ::
-::
-
- compose(f, g, unpack=True)(5, 6)
+ compose(f, g, unpack=True)(5, 6)
is equivalent to::
- f(*g(5, 6))
+ f(*g(5, 6))
Even though ``compose()`` only accepts two functions, it's trivial to build up a
version that will compose any number of functions. We'll use ``functools.reduce()``,
``compose()`` and ``partial()`` (the last of which is provided by both
-``functional`` and ``functools``).
-
-::
+``functional`` and ``functools``). ::
- from functional import compose, partial
+ from functional import compose, partial
- multi_compose = partial(functools.reduce, compose)
+
+ multi_compose = partial(reduce, compose)
We can also use ``map()``, ``compose()`` and ``partial()`` to craft a version of
``"".join(...)`` that converts its arguments to string::
- from functional import compose, partial
+ from functional import compose, partial
- join = compose("".join, partial(map, str))
+ join = compose("".join, partial(map, str))
``flip(func)``
``flip()`` wraps the callable in ``func`` and causes it to receive its
-non-keyword arguments in reverse order.
-
-::
-
- >>> def triple(a, b, c):
- ... return (a, b, c)
- ...
- >>> triple(5, 6, 7)
- (5, 6, 7)
- >>>
- >>> flipped_triple = flip(triple)
- >>> flipped_triple(5, 6, 7)
- (7, 6, 5)
+non-keyword arguments in reverse order. ::
+
+ >>> def triple(a, b, c):
+ ... return (a, b, c)
+ ...
+ >>> triple(5, 6, 7)
+ (5, 6, 7)
+ >>>
+ >>> flipped_triple = flip(triple)
+ >>> flipped_triple(5, 6, 7)
+ (7, 6, 5)
``foldl(func, start, iterable)``
@@ -1207,35 +1208,34 @@ list, then the result of that and the third element of the list, and so on.
This means that a call such as::
- foldl(f, 0, [1, 2, 3])
+ foldl(f, 0, [1, 2, 3])
is equivalent to::
- f(f(f(0, 1), 2), 3)
+ f(f(f(0, 1), 2), 3)
``foldl()`` is roughly equivalent to the following recursive function::
- def foldl(func, start, seq):
- if len(seq) == 0:
- return start
+ def foldl(func, start, seq):
+ if len(seq) == 0:
+ return start
- return foldl(func, func(start, seq[0]), seq[1:])
+ return foldl(func, func(start, seq[0]), seq[1:])
Speaking of equivalence, the above ``foldl`` call can be expressed in terms of
the built-in ``reduce`` like so::
- reduce(f, [1, 2, 3], 0)
+ reduce(f, [1, 2, 3], 0)
We can use ``foldl()``, ``operator.concat()`` and ``partial()`` to write a
cleaner, more aesthetically-pleasing version of Python's ``"".join(...)``
idiom::
- from functional import foldl, partial
- from operator import concat
-
- join = partial(foldl, concat, "")
+ from functional import foldl, partial
+ from operator import concat
+
+ join = partial(foldl, concat, "")
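An illustrative sketch (not part of this patch): ``functional`` is a third-party package, but the ``foldl``/``reduce`` equivalence above can be checked with the built-ins alone:

   >>> import operator
   >>> def f(a, b):
   ...     return a + b
   >>> reduce(f, [1, 2, 3], 0)          # f(f(f(0, 1), 2), 3)
   6
   >>> f(f(f(0, 1), 2), 3)
   6
   >>> reduce(operator.concat, map(str, [1, 2, 3]), '')   # the "".join(...) idiom spelled out
   '123'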
Revision History and Acknowledgements