| author | Windson yang <wiwindson@outlook.com> | 2020-01-25 19:23:00 (GMT) |
|---|---|---|
| committer | Berker Peksag <berker.peksag@gmail.com> | 2020-01-25 19:23:00 (GMT) |
| commit | 4b09dc79f4d08d85f2cc945563e9c8ef1e531d7b (patch) | |
| tree | a480998f95d0bb8e46d64642bbddd6071d996821 /Doc | |
| parent | 7de617455ed788e6730c40cf854c4b72b0432194 (diff) | |
bpo-36654: Add examples for using the tokenize module programmatically (#12947)
Diffstat (limited to 'Doc')
| -rw-r--r-- | Doc/library/tokenize.rst | 19 |
1 file changed, 19 insertions, 0 deletions
```diff
diff --git a/Doc/library/tokenize.rst b/Doc/library/tokenize.rst
index b208ba4..96778f2 100644
--- a/Doc/library/tokenize.rst
+++ b/Doc/library/tokenize.rst
@@ -278,3 +278,22 @@ The exact token type names can be displayed using the :option:`-e` option:
     4,10-4,11:          RPAR           ')'
     4,11-4,12:          NEWLINE        '\n'
     5,0-5,0:            ENDMARKER      ''
+
+Example of tokenizing a file programmatically, reading unicode
+strings instead of bytes with :func:`generate_tokens`::
+
+    import tokenize
+
+    with tokenize.open('hello.py') as f:
+        tokens = tokenize.generate_tokens(f.readline)
+        for token in tokens:
+            print(token)
+
+Or reading bytes directly with :func:`.tokenize`::
+
+    import tokenize
+
+    with open('hello.py', 'rb') as f:
+        tokens = tokenize.tokenize(f.readline)
+        for token in tokens:
+            print(token)
```
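As a brief aside on the difference the two new examples illustrate: `generate_tokens()` consumes a readline callable that yields `str` lines, while `tokenize.tokenize()` consumes one that yields `bytes` lines, and only the latter emits an initial `ENCODING` token. A minimal in-memory sketch of that distinction (the `source` string here is illustrative and not part of the patch, which reads from a `hello.py` file):

```python
import io
import tokenize

# Illustrative source; the patched docs read lines from 'hello.py' instead.
source = "print('hello')\n"

# generate_tokens() takes a readline callable yielding str lines and
# produces TokenInfo named tuples: (type, string, start, end, line).
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))

# tokenize() takes a readline callable yielding bytes lines; its first
# token is ENCODING, which generate_tokens() never emits.
for tok in tokenize.tokenize(io.BytesIO(source.encode("utf-8")).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
```

Apart from that leading `ENCODING` entry, the two loops print the same token stream. The `tokenize.open()` call in the first documented example is the encoding-aware counterpart of the built-in `open()`: it detects the source file's encoding (via `detect_encoding()`) and returns a text-mode file object, which is why `generate_tokens()` is the right pairing there.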