From 972cfb9169bdf7fa6c9758a5c49659429c84b5a8 Mon Sep 17 00:00:00 2001
From: Meador Inge
Date: Thu, 19 Jan 2012 00:22:22 -0600
Subject: Issue #2134: Clarify token.OP handling rationale in tokenize
 documentation.

---
 Doc/library/tokenize.rst | 6 ++++++
 Misc/NEWS                | 3 +++
 2 files changed, 9 insertions(+)

diff --git a/Doc/library/tokenize.rst b/Doc/library/tokenize.rst
index 577d7cc..70919ca 100644
--- a/Doc/library/tokenize.rst
+++ b/Doc/library/tokenize.rst
@@ -15,6 +15,12 @@ implemented in Python.  The scanner in this module returns comments as tokens
 as well, making it useful for implementing "pretty-printers," including
 colorizers for on-screen displays.
 
+To simplify token stream handling, all :ref:`operators` and :ref:`delimiters`
+tokens are returned using the generic :data:`token.OP` token type.  The exact
+type can be determined by checking the token ``string`` field on the
+:term:`named tuple` returned from :func:`tokenize.tokenize` for the character
+sequence that identifies a specific operator token.
+
 The primary entry point is a :term:`generator`:
 
 .. function:: tokenize(readline)
diff --git a/Misc/NEWS b/Misc/NEWS
index 7d8cb7b..ed6c1e0 100644
--- a/Misc/NEWS
+++ b/Misc/NEWS
@@ -418,6 +418,9 @@ Extension Modules
 Documentation
 -------------
 
+- Issue #2134: The tokenize documentation has been clarified to explain why
+  all operator and delimiter tokens are treated as token.OP tokens.
+
 - Issue #13513: Fix io.IOBase documentation to correctly link to the
   io.IOBase.readline method instead of the readline module.
 
-- 
cgit v0.12
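
The behavior described in the documentation hunk above can be sketched as
follows (an illustrative example, not part of the patch itself): every
operator and delimiter token carries the generic :data:`token.OP` type, and
the token's ``string`` field is what distinguishes ``+`` from ``(`` from
``=``.

```python
import io
import token
import tokenize

# Tokenize a small expression.  tokenize.tokenize() wants a readline
# callable that yields bytes, so wrap the source in a BytesIO.
source = b"x = (1 + 2) * 3"
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# All operators ("=", "+", "*") and delimiters ("(", ")") share the
# generic token.OP type; only the ``string`` field tells them apart.
ops = [tok.string for tok in tokens if tok.type == token.OP]
print(ops)  # ['=', '(', '+', ')', '*']
```

Distinguishing a specific operator therefore means comparing ``tok.string``
against the character sequence of interest (e.g. ``tok.string == "("``)
rather than testing ``tok.type``.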