author    Georg Brandl <georg@python.org>    2007-08-15 14:28:22 (GMT)
committer Georg Brandl <georg@python.org>    2007-08-15 14:28:22 (GMT)
commit    116aa62bf54a39697e25f21d6cf6799f7faa1349 (patch)
tree      8db5729518ed4ca88e26f1e26cc8695151ca3eb3 /Doc/library/tokenize.rst
parent    739c01d47b9118d04e5722333f0e6b4d0c8bdd9e (diff)
Move the 3k reST doc tree in place.
Diffstat (limited to 'Doc/library/tokenize.rst')
-rw-r--r--    Doc/library/tokenize.rst    122
1 files changed, 122 insertions, 0 deletions
diff --git a/Doc/library/tokenize.rst b/Doc/library/tokenize.rst
new file mode 100644
index 0000000..61f2c4d
--- /dev/null
+++ b/Doc/library/tokenize.rst
@@ -0,0 +1,122 @@
+
+:mod:`tokenize` --- Tokenizer for Python source
+===============================================
+
+.. module:: tokenize
+ :synopsis: Lexical scanner for Python source code.
+.. moduleauthor:: Ka Ping Yee
+.. sectionauthor:: Fred L. Drake, Jr. <fdrake@acm.org>
+
+
+The :mod:`tokenize` module provides a lexical scanner for Python source code,
+implemented in Python. The scanner in this module returns comments as tokens as
+well, making it useful for implementing "pretty-printers," including colorizers
+for on-screen displays.
+
+The primary entry point is a generator:
+
+
+.. function:: generate_tokens(readline)
+
+ The :func:`generate_tokens` generator requires one argument, *readline*, which
+ must be a callable object that provides the same interface as the
+ :meth:`readline` method of built-in file objects (see section
+ :ref:`bltin-file-objects`). Each call to the function should return one line of
+ input as a string.
+
+ The generator produces 5-tuples with these members: the token type; the token
+ string; a 2-tuple ``(srow, scol)`` of ints specifying the row and column where
+ the token begins in the source; a 2-tuple ``(erow, ecol)`` of ints specifying
+ the row and column where the token ends in the source; and the line on which the
+ token was found. The line passed is the *logical* line; continuation lines are
+ included.
+
+ .. versionadded:: 2.2
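+
+ For example, a minimal sketch that prints each token of a small code string
+ (the variable names are illustrative, and Python 2's :mod:`StringIO` module is
+ assumed to supply a file-like *readline*)::
+
+   from StringIO import StringIO
+   import tokenize
+
+   source = "total = price * 1.05\n"
+   for toknum, tokval, start, end, line in tokenize.generate_tokens(
+           StringIO(source).readline):
+       print tokenize.tok_name[toknum], repr(tokval), start, end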
+
+An older entry point is retained for backward compatibility:
+
+
+.. function:: tokenize(readline[, tokeneater])
+
+ The :func:`tokenize` function accepts two parameters: one representing the input
+ stream, and one providing an output mechanism for :func:`tokenize`.
+
+ The first parameter, *readline*, must be a callable object that provides the
+ same interface as the :meth:`readline` method of built-in file objects (see
+ section :ref:`bltin-file-objects`). Each call to the function should return one
+ line of input as a string. Alternatively, *readline* may be a callable object
+ that signals completion by raising :exc:`StopIteration`.
+
+ .. versionchanged:: 2.5
+ Added :exc:`StopIteration` support.
+
+ The second parameter, *tokeneater*, must also be a callable object. It is
+ called once for each token, with five arguments, corresponding to the tuples
+ generated by :func:`generate_tokens`.
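+
+ As a sketch, a *tokeneater* callback that simply collects each token into a
+ list (the callback and file names below are illustrative) could look like::
+
+   import tokenize
+
+   tokens = []
+
+   def collect_token(toknum, tokval, start, end, line):
+       tokens.append((toknum, tokval))
+
+   f = open('example.py')              # any Python source file
+   tokenize.tokenize(f.readline, collect_token)
+   f.close()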
+
+All constants from the :mod:`token` module are also exported from
+:mod:`tokenize`, as are two additional token type values that might be passed to
+the *tokeneater* function by :func:`tokenize`:
+
+
+.. data:: COMMENT
+
+ Token value used to indicate a comment.
+
+
+.. data:: NL
+
+ Token value used to indicate a non-terminating newline. The NEWLINE token
+ indicates the end of a logical line of Python code; NL tokens are generated when
+ a logical line of code is continued over multiple physical lines.
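+
+ For instance, in the snippet below the newline inside the parentheses produces
+ an NL token, while the newline ending the logical line produces a NEWLINE token
+ (a minimal sketch; the variable names are illustrative)::
+
+   from StringIO import StringIO
+   from tokenize import generate_tokens, NEWLINE, NL
+
+   source = "x = (1 +\n     2)\n"      # one logical line, two physical lines
+   for toknum, tokval, start, end, line in generate_tokens(
+           StringIO(source).readline):
+       if toknum in (NEWLINE, NL):
+           print {NL: 'NL', NEWLINE: 'NEWLINE'}[toknum], start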
+
+Another function is provided to reverse the tokenization process. This is useful
+for creating tools that tokenize a script, modify the token stream, and write
+back the modified script.
+
+
+.. function:: untokenize(iterable)
+
+ Converts tokens back into Python source code. The *iterable* must return
+ sequences with at least two elements, the token type and the token string. Any
+ additional sequence elements are ignored.
+
+ The reconstructed script is returned as a single string. The result is
+ guaranteed to tokenize back to match the input, so the conversion is lossless
+ and round-trips are assured. The guarantee applies only to the token type and
+ token string, as the spacing between tokens (column positions) may change.
+
+ .. versionadded:: 2.5
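+
+ For instance, a minimal round-trip sketch (assuming Python 2's :mod:`StringIO`
+ module; the whitespace of the rebuilt source may differ, but its tokens do
+ not)::
+
+   from StringIO import StringIO
+   from tokenize import generate_tokens, untokenize
+
+   source = "if x > 1.5: y = x ** 2\n"
+   tokens = [(toknum, tokval) for toknum, tokval, _, _, _
+             in generate_tokens(StringIO(source).readline)]
+   rebuilt = untokenize(tokens)
+   # Re-tokenizing the rebuilt source yields the same (type, string) pairs.
+   assert [tok[:2] for tok in generate_tokens(StringIO(rebuilt).readline)] == tokens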
+
+Example of a script re-writer that transforms float literals into Decimal
+objects::
+
+   from StringIO import StringIO
+   from tokenize import generate_tokens, untokenize, NUMBER, STRING, NAME, OP
+
+   def decistmt(s):
+       """Substitute Decimals for floats in a string of statements.
+
+       >>> from decimal import Decimal
+       >>> s = 'print +21.3e-5*-.1234/81.7'
+       >>> decistmt(s)
+       "print +Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7')"
+
+       >>> exec(s)
+       -3.21716034272e-007
+       >>> exec(decistmt(s))
+       -3.217160342717258261933904529E-7
+
+       """
+       result = []
+       g = generate_tokens(StringIO(s).readline)   # tokenize the string
+       for toknum, tokval, _, _, _ in g:
+           if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
+               result.extend([
+                   (NAME, 'Decimal'),
+                   (OP, '('),
+                   (STRING, repr(tokval)),
+                   (OP, ')')
+               ])
+           else:
+               result.append((toknum, tokval))
+       return untokenize(result)
+