path: root/Doc/library/tokenize.rst
author    Martin Panter <vadmium+py@gmail.com>  2016-01-16 04:32:52 (GMT)
committer Martin Panter <vadmium+py@gmail.com>  2016-01-16 04:32:52 (GMT)
commit    20b1bfa6fb609e91343dfdd454841f50d6d5e140 (patch)
tree      1daba32afe1fae01ce2e724ee1838522d52c3059 /Doc/library/tokenize.rst
parent    a3a58331a5e523e17eef964be3ee95ba1c45d977 (diff)
Issue #26127: Fix links in tokenize documentation; patch by Silent Ghost
Diffstat (limited to 'Doc/library/tokenize.rst')
-rw-r--r--  Doc/library/tokenize.rst | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/Doc/library/tokenize.rst b/Doc/library/tokenize.rst
index c9cb518..a5f3be3 100644
--- a/Doc/library/tokenize.rst
+++ b/Doc/library/tokenize.rst
@@ -27,7 +27,7 @@ The primary entry point is a :term:`generator`:
.. function:: tokenize(readline)
- The :func:`tokenize` generator requires one argument, *readline*, which
+ The :func:`.tokenize` generator requires one argument, *readline*, which
must be a callable object which provides the same interface as the
:meth:`io.IOBase.readline` method of file objects. Each call to the
function should return one line of input as bytes.
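As context for the interface this hunk documents, a minimal sketch of driving the ``tokenize()`` generator with a bytes-producing *readline* callable (the ``io.BytesIO`` source here is illustrative):

```python
import io
import tokenize

# readline must be a callable that returns one line of input as bytes,
# matching the io.IOBase.readline interface described in the patched text.
source = io.BytesIO(b"x = 1\n")
tokens = list(tokenize.tokenize(source.readline))

# The first token yielded is always the ENCODING token.
print(tokens[0].type == tokenize.ENCODING, tokens[0].string)  # True utf-8
```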
@@ -52,7 +52,7 @@ The primary entry point is a :term:`generator`:
.. versionchanged:: 3.3
Added support for ``exact_type``.
- :func:`tokenize` determines the source encoding of the file by looking for a
+ :func:`.tokenize` determines the source encoding of the file by looking for a
UTF-8 BOM or encoding cookie, according to :pep:`263`.
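The ``exact_type`` support mentioned in this hunk can be sketched as follows: every operator shares the generic token type ``OP``, and ``exact_type`` recovers the specific operator.

```python
import io
import tokenize

# "+" is reported with the generic type OP; exact_type narrows it to PLUS.
toks = list(tokenize.tokenize(io.BytesIO(b"x = 1 + 2\n").readline))
plus = next(t for t in toks if t.string == "+")
print(tokenize.tok_name[plus.type], tokenize.tok_name[plus.exact_type])  # OP PLUS
```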
@@ -74,7 +74,7 @@ All constants from the :mod:`token` module are also exported from
.. data:: ENCODING
Token value that indicates the encoding used to decode the source bytes
- into text. The first token returned by :func:`tokenize` will always be an
+ into text. The first token returned by :func:`.tokenize` will always be an
ENCODING token.
@@ -96,17 +96,17 @@ write back the modified script.
positions) may change.
It returns bytes, encoded using the ENCODING token, which is the first
- token sequence output by :func:`tokenize`.
+ token sequence output by :func:`.tokenize`.
-:func:`tokenize` needs to detect the encoding of source files it tokenizes. The
+:func:`.tokenize` needs to detect the encoding of source files it tokenizes. The
function it uses to do this is available:
.. function:: detect_encoding(readline)
The :func:`detect_encoding` function is used to detect the encoding that
should be used to decode a Python source file. It requires one argument,
- readline, in the same way as the :func:`tokenize` generator.
+ readline, in the same way as the :func:`.tokenize` generator.
It will call readline a maximum of twice, and return the encoding used
(as a string) and a list of any lines (not decoded from bytes) it has read
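A short sketch of the two behaviours this hunk touches: ``detect_encoding()`` reading an encoding cookie per :pep:`263`, and ``untokenize()`` returning bytes encoded with the detected encoding (the source string here is illustrative):

```python
import io
import tokenize

src = b"# -*- coding: latin-1 -*-\ny = 2\n"

# detect_encoding calls readline at most twice and returns the encoding
# (normalized, so "latin-1" comes back as "iso-8859-1") plus the lines read.
encoding, lines = tokenize.detect_encoding(io.BytesIO(src).readline)
print(encoding)  # iso-8859-1

# Round-trip: untokenize full 5-tuples back into bytes, encoded using the
# ENCODING token that tokenize emitted first.
result = tokenize.untokenize(tokenize.tokenize(io.BytesIO(src).readline))
print(result == src)  # True
```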
@@ -120,7 +120,7 @@ function it uses to do this is available:
If no encoding is specified, then the default of ``'utf-8'`` will be
returned.
- Use :func:`open` to open Python source files: it uses
+ Use :func:`.open` to open Python source files: it uses
:func:`detect_encoding` to detect the file encoding.
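The ``tokenize.open()`` behaviour described in this final hunk can be sketched as below; the temporary file is scaffolding for the example, not part of the documented API.

```python
import os
import tempfile
import tokenize

# Write a source file with an encoding cookie, then reopen it with
# tokenize.open(), which applies detect_encoding() automatically and
# returns a text-mode file decoded with the detected encoding.
with tempfile.NamedTemporaryFile("wb", suffix=".py", delete=False) as f:
    f.write(b"# -*- coding: latin-1 -*-\nname = '\xe9'\n")
    path = f.name

with tokenize.open(path) as src_file:
    enc = src_file.encoding
    text = src_file.read()
os.unlink(path)

print(enc)  # iso-8859-1
```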