path: root/Lib/tokenize.py
Commit message | Author | Date | Files | Lines
* Updated tokenize to support the inverse byte literals new in 3.3 | Armin Ronacher | 2012-03-04 | 1 | -6/+16
* Basic support for PEP 414 without docs or tests. | Armin Ronacher | 2012-03-04 | 1 | -8/+22
* Issue #2134: Add support for tokenize.TokenInfo.exact_type. | Meador Inge | 2012-01-19 | 1 | -1/+58
* Issue #13150: The tokenize module doesn't compile large regular expressions a... | Antoine Pitrou | 2011-10-11 | 1 | -19/+16
* Issue #12943: python -m tokenize support has been added to tokenize. | Meador Inge | 2011-10-07 | 1 | -23/+56
* Issue #11074: Make 'tokenize' so it can be reloaded. | Brett Cannon | 2011-02-22 | 1 | -3/+2
* Issue #10386: Added __all__ to token module; this simplifies importing | Alexander Belopolsky | 2010-11-11 | 1 | -3/+2
* Issue #10335: Add tokenize.open(), detect the file encoding using | Victor Stinner | 2010-11-09 | 1 | -0/+15
* A little bit more readable repr method. | Raymond Hettinger | 2010-09-09 | 1 | -3/+3
* Experiment: Let collections.namedtuple() do the work. This should work now t... | Raymond Hettinger | 2010-09-09 | 1 | -39/+3
* Improve the repr for the TokenInfo named tuple. | Raymond Hettinger | 2010-09-09 | 1 | -1/+28
* Remove unused import, fix typo and rewrap docstrings. | Florent Xicluna | 2010-09-03 | 1 | -17/+18
* handle names starting with non-ascii characters correctly #9712 | Benjamin Peterson | 2010-08-30 | 1 | -5/+10
* fix for files with coding cookies and BOMs | Benjamin Peterson | 2010-03-18 | 1 | -3/+5
* in tokenize.detect_encoding(), return utf-8-sig when a BOM is found | Benjamin Peterson | 2010-03-18 | 1 | -6/+12
* use some more itertools magic to make '' be yielded after readline is done | Benjamin Peterson | 2009-11-14 | 1 | -3/+4
* simply by using itertools.chain() | Benjamin Peterson | 2009-11-14 | 1 | -10/+5
* Merged revisions 75149,75260-75263,75265-75267,75292,75300,75376,75405,75429-... | Benjamin Peterson | 2009-11-13 | 1 | -13/+20
* normalize latin-1 and utf-8 variant encodings like the builtin tokenizer does | Benjamin Peterson | 2009-10-09 | 1 | -1/+12
* Remove dependency on the collections module. | Raymond Hettinger | 2009-04-29 | 1 | -3/+41
* Issue #5857: tokenize.tokenize() now returns named tuples. | Raymond Hettinger | 2009-04-29 | 1 | -19/+22
* reuse tokenize.detect_encoding in linecache instead of a custom solution | Benjamin Peterson | 2009-03-24 | 1 | -3/+4
* raise a SyntaxError in detect_encoding() when a codec lookup fails like the b... | Benjamin Peterson | 2008-12-12 | 1 | -13/+20
* #2834: Change re module semantics, so that str and bytes mixing is forbidden, | Antoine Pitrou | 2008-08-19 | 1 | -6/+8
* use the more idomatic (and Py3k faster) while True | Benjamin Peterson | 2008-06-05 | 1 | -1/+1
* Merged revisions 61964-61979 via svnmerge from | Christian Heimes | 2008-03-28 | 1 | -0/+9
* - Issue #719888: Updated tokenize to use a bytes API. generate_tokens has been | Trent Nelson | 2008-03-18 | 1 | -57/+130
* Merged revisions 60080-60089,60091-60093 via svnmerge from | Georg Brandl | 2008-01-19 | 1 | -3/+3
* Patch 1420 by Ron Adam. | Guido van Rossum | 2007-11-12 | 1 | -18/+18
* Raise statement normalization in Lib/. | Collin Winter | 2007-08-30 | 1 | -2/+2
* Merged revisions 55817-55961 via svnmerge from | Guido van Rossum | 2007-06-13 | 1 | -4/+5
* Merged revisions 55328-55341 via svnmerge from | Guido van Rossum | 2007-05-15 | 1 | -1/+2
* PEP 3114: rename .next() to .__next__() and add next() builtin. | Georg Brandl | 2007-04-21 | 1 | -2/+2
* Hide list comp variables and support set comprehensions | Nick Coghlan | 2007-04-15 | 1 | -1/+0
* Make ELLIPSIS a separate token. This makes it a syntax error to write ". . ... | Georg Brandl | 2007-03-18 | 1 | -3/+3
* Fix most trivially-findable print statements. | Guido van Rossum | 2007-02-09 | 1 | -2/+2
* PEP 3107 - Function Annotations thanks to Tony Lownds | Neal Norwitz | 2006-12-28 | 1 | -1/+1
* Four months of trunk changes (including a few releases...) | Thomas Wouters | 2006-12-13 | 1 | -32/+77
* Remove support for backticks from the grammar and compiler. | Brett Cannon | 2006-08-25 | 1 | -1/+1
* Killed the <> operator. You must now use !=. | Guido van Rossum | 2006-08-24 | 1 | -1/+1
* Merge current trunk into p3yk. This includes the PyNumber_Index API change, | Thomas Wouters | 2006-08-21 | 1 | -1/+2
* Merge p3yk branch with the trunk up to revision 45595. This breaks a fair | Thomas Wouters | 2006-04-21 | 1 | -1/+1
* SF bug #1224621: tokenize module does not detect inconsistent dedents | Raymond Hettinger | 2005-06-21 | 1 | -0/+3
* Add untokenize() function to allow full round-trip tokenization. | Raymond Hettinger | 2005-06-10 | 1 | -3/+49
* PEP-0318, @decorator-style. In Guido's words: | Anthony Baxter | 2004-08-02 | 1 | -1/+1
* Get rid of many apply() calls. | Guido van Rossum | 2003-02-27 | 1 | -3/+3
* SF 633560: tokenize.__all__ needs "generate_tokens" | Raymond Hettinger | 2002-11-05 | 1 | -1/+2
* Speed up the most egregious "if token in (long tuple)" cases by using | Guido van Rossum | 2002-08-24 | 1 | -10/+19
* Whitespace normalization. | Tim Peters | 2002-05-23 | 1 | -5/+5
* Added docstrings excerpted from Python Library Reference. | Raymond Hettinger | 2002-05-15 | 1 | -0/+25
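
Several entries above reference public tokenize APIs: tokenize.tokenize() yielding TokenInfo named tuples (Issue #5857), TokenInfo.exact_type (Issue #2134), and untokenize() round-tripping (added 2005-06-10). The following is a minimal illustrative sketch of those APIs, not part of the log itself, assuming a modern Python 3 interpreter:

    import io
    import tokenize

    source = b"x = (1 + 2) ** 3\n"

    # tokenize() consumes a readline callable that returns bytes.
    tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

    for tok in tokens:
        # exact_type distinguishes operators such as LPAR and DOUBLESTAR,
        # which tok.type reports generically as OP.
        print(tokenize.tok_name[tok.exact_type], repr(tok.string))

    # untokenize() converts the token stream back into source text
    # (bytes here, because the stream begins with an ENCODING token).
    print(tokenize.untokenize(tokens))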