path: root/Lib/tokenize.py
Commit message (Author, Date, Files changed, Lines -/+)
* Issue #20387: Merge test and patch from 3.4.4 (Jason R. Coombs, 2015-06-28, 1 file, -0/+17)
  * Issue #20387: Restore retention of indentation during untokenize. (Dingyuan Wang, 2015-06-22, 1 file, -0/+17)
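The fix above concerns the tokenize/untokenize round trip. A minimal sketch of that round trip (the sample source string is an illustration, not from the patch):

```python
import io
import tokenize

# Round-trip indented source through tokenize/untokenize.
# After the #20387 fix, the body's indentation is retained.
source = "def f():\n    return 1\n"

tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
rebuilt = tokenize.untokenize(tokens)

print(rebuilt == source)
```

With full 5-tuple tokens (including positions), untokenize reproduces the original layout; the compatibility path for 2-tuples is what the related #8478/#12691 fixes below address.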
* (Merge 3.5) Issue #23840: tokenize.open() now closes the temporary binary file on error to fix a resource warning. (Victor Stinner, 2015-05-25, 1 file, -5/+9)
  * Issue #23840: tokenize.open() now closes the temporary binary file on error to fix a resource warning. (Victor Stinner, 2015-05-25, 1 file, -5/+9)
* PEP 0492 -- Coroutines with async and await syntax. Issue #24017. (Yury Selivanov, 2015-05-12, 1 file, -2/+54)
* Issue #23615: Modules bz2, tarfile and tokenize now can be reloaded with imp.reload(). Patch by Thomas Kluyver. (Serhiy Storchaka, 2015-03-11, 1 file, -2/+1)
  * Issue #23615: Modules bz2, tarfile and tokenize now can be reloaded with imp.reload(). Patch by Thomas Kluyver. (Serhiy Storchaka, 2015-03-11, 1 file, -2/+1)
* Removed duplicated dict entries. (Serhiy Storchaka, 2015-01-11, 1 file, -1/+0)
* (Merge 3.4) Issue #22599: Enhance tokenize.open() to be able to call it during Python finalization. (Victor Stinner, 2014-12-05, 1 file, -3/+4)

    Before, the module kept a reference to the builtins module, but module
    attributes are cleared during Python finalization. Instead, keep a
    reference directly to the open() function.

    This enhancement is not perfect: calling tokenize.open() can still fail
    if called very late during Python finalization. Usually, the function is
    called by the linecache module, which is called to display a traceback
    or emit a warning.

  * Issue #22599: Enhance tokenize.open() to be able to call it during Python finalization. (Victor Stinner, 2014-12-05, 1 file, -3/+4)
* PEP 465: a dedicated infix operator for matrix multiplication (closes #21176) (Benjamin Peterson, 2014-04-10, 1 file, -2/+3)
* Merge with 3.3 (Terry Jan Reedy, 2014-02-24, 1 file, -1/+1)
  * whitespace (Terry Jan Reedy, 2014-02-24, 1 file, -1/+1)
* Merge with 3.3 (Terry Jan Reedy, 2014-02-24, 1 file, -0/+6)
  * Issue #9974: When untokenizing, use row info to insert backslash+newline. (Terry Jan Reedy, 2014-02-24, 1 file, -0/+6)
      Original patches by A. Kuchling and G. Rees (#12691).
* Merge with 3.3 (Terry Jan Reedy, 2014-02-18, 1 file, -13/+11)
  * Issue #8478: Untokenizer.compat now processes first token from iterator input. (Terry Jan Reedy, 2014-02-18, 1 file, -13/+11)
      Patch based on lines from Georg Brandl, Eric Snow, and Gareth Rees.
* Untokenize, bad assert: Merge with 3.3 (Terry Jan Reedy, 2014-02-17, 1 file, -1/+3)
  * Untokenize: A logically incorrect assert tested user input validity. (Terry Jan Reedy, 2014-02-17, 1 file, -1/+3)
      Replace it with correct logic that raises ValueError for bad input.
      Issues #8478 and #12691 reported the incorrect logic. Add an Untokenize
      test case and an initial test method.
* Issue #18960: Fix bugs with Python source code encoding in the second line. (Serhiy Storchaka, 2014-01-09, 1 file, -0/+3)

    * The first line of a Python script could be executed twice when the
      source encoding (not equal to 'utf-8') was specified on the second line.
    * Now the source encoding declaration on the second line isn't effective
      if the first line contains anything except a comment.
    * As a consequence, 'python -x' now works again with files with the
      source encoding declaration specified on the second line, and can be
      used again to make Python batch files on Windows.
    * The tokenize module now ignores the source encoding declaration on the
      second line if the first line contains anything except a comment.
    * IDLE now ignores the source encoding declaration on the second line if
      the first line contains anything except a comment.
    * 2to3 and the findnocoding.py script now ignore the source encoding
      declaration on the second line if the first line contains anything
      except a comment.

  * Issue #18960: Fix bugs with Python source code encoding in the second line. (Serhiy Storchaka, 2014-01-09, 1 file, -0/+3)
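The second-line rule above can be observed directly with tokenize.detect_encoding() (the sample byte strings are illustrations):

```python
import io
import tokenize

# A coding cookie on the second line counts only if the first line is
# blank or a comment (the behavior after the #18960 fix).
cookie_first = b"# -*- coding: latin-1 -*-\nx = 1\n"
cookie_second = b"x = 1\n# -*- coding: latin-1 -*-\n"

enc1, _ = tokenize.detect_encoding(io.BytesIO(cookie_first).readline)
enc2, _ = tokenize.detect_encoding(io.BytesIO(cookie_second).readline)

print(enc1)  # the declared encoding, normalized to iso-8859-1
print(enc2)  # utf-8: second-line cookie ignored, first line is code
```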
* #19620: merge with 3.3. (Ezio Melotti, 2013-11-25, 1 file, -1/+1)
  * #19620: Fix typo in docstring (noticed by Christopher Welborn). (Ezio Melotti, 2013-11-25, 1 file, -1/+1)
* Issue #18873: The tokenize module, IDLE, 2to3, and the findnocoding.py script now detect Python source code encoding only in comment lines. (Serhiy Storchaka, 2013-09-16, 1 file, -4/+4)
  * Issue #18873: The tokenize module, IDLE, 2to3, and the findnocoding.py script now detect Python source code encoding only in comment lines. (Serhiy Storchaka, 2013-09-16, 1 file, -4/+4)
* Replace IOError with OSError (#16715) (Andrew Svetlov, 2012-12-25, 1 file, -1/+1)
* #16152: merge with 3.2. (Ezio Melotti, 2012-11-03, 1 file, -1/+3)
  * #16152: fix tokenize to ignore whitespace at the end of the code when no newline is found. Patch by Ned Batchelder. (Ezio Melotti, 2012-11-03, 1 file, -1/+3)
* Merge branch (Florent Xicluna, 2012-07-07, 1 file, -1/+1)
  * Issue #14990: tokenize: correctly fail with SyntaxError on invalid encoding declaration. (Florent Xicluna, 2012-07-07, 1 file, -1/+1)
* Issue #15096: Drop support for the ur string prefix (Christian Heimes, 2012-06-20, 1 file, -9/+3)
* Issue #15054: Fix incorrect tokenization of 'b' string literals. (Meador Inge, 2012-06-17, 1 file, -1/+1)
    Patch by Serhiy Storchaka.
* Issue #14629: Mention the filename in SyntaxError exceptions from tokenizer.detect_encoding() (when available). (Brett Cannon, 2012-04-20, 1 file, -3/+19)
* merge 3.2: issue 14629 (Martin v. Löwis, 2012-04-20, 1 file, -2/+5)
  * Issue #14629: Raise SyntaxError in tokenizer.detect_encoding if the first two lines have non-UTF-8 characters without an encoding declaration. (Martin v. Löwis, 2012-04-20, 1 file, -2/+5)
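The #14629 behavior can be sketched as follows (the sample bytes are an illustration, not from the patch):

```python
import io
import tokenize

# Bytes that are not valid UTF-8 and carry no coding cookie:
# detect_encoding() raises SyntaxError instead of failing obscurely.
bad = b'print("caf\xe9")\n'  # \xe9 is latin-1, invalid as UTF-8 here

try:
    tokenize.detect_encoding(io.BytesIO(bad).readline)
except SyntaxError as exc:
    print("SyntaxError:", exc)
```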
  * Merged revisions 88498 via svnmerge from svn+ssh://pythondev@svn.python.org/python/branches/py3k (Brett Cannon, 2011-02-22, 1 file, -3/+2)
      r88498 | brett.cannon | 2011-02-21 19:25:12 -0800 (Mon, 21 Feb 2011) | 8 lines
      Issue #11074: Make 'tokenize' so it can be reloaded.
      The module stored away the 'open' object as found in the global
      namespace (which fell through to the built-in namespace) since it
      defined its own 'open'. Problem is that if you reloaded the module it
      then grabbed the 'open' defined in the previous load, leading to code
      that infinitely recursed. Switched to simply call builtins.open
      directly.
* Updated tokenize to support the inverse byte literals new in 3.3 (Armin Ronacher, 2012-03-04, 1 file, -6/+16)
* Basic support for PEP 414 without docs or tests. (Armin Ronacher, 2012-03-04, 1 file, -8/+22)
* Issue #2134: Add support for tokenize.TokenInfo.exact_type. (Meador Inge, 2012-01-19, 1 file, -1/+58)
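A short sketch of what exact_type adds: operator tokens all share the generic OP type, while exact_type distinguishes the individual operator.

```python
import io
import token
import tokenize

# Tokenize a tiny expression and inspect the '+' operator token.
toks = list(tokenize.generate_tokens(io.StringIO("1 + 2").readline))
plus = [t for t in toks if t.string == "+"][0]

print(token.tok_name[plus.type])        # OP (the generic type)
print(token.tok_name[plus.exact_type])  # PLUS (the exact type)
```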
* Issue #13150: The tokenize module doesn't compile large regular expressions at startup anymore. (Antoine Pitrou, 2011-10-11, 1 file, -19/+16)
    Instead, the re module's standard caching does its work.
* Issue #12943: python -m tokenize support has been added to tokenize. (Meador Inge, 2011-10-07, 1 file, -23/+56)
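The command-line interface added here can be exercised like this (the file path is just an example):

```shell
# Create a tiny script and dump its token stream from the command line.
printf 'x = 1\n' > /tmp/demo.py
python3 -m tokenize /tmp/demo.py
```

Each output line shows the token's start-end position, its type name (NAME, OP, NUMBER, ...), and its string.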
* Issue #11074: Make 'tokenize' so it can be reloaded. (Brett Cannon, 2011-02-22, 1 file, -3/+2)
    The module stored away the 'open' object as found in the global
    namespace (which fell through to the built-in namespace) since it
    defined its own 'open'. Problem is that if you reloaded the module it
    then grabbed the 'open' defined in the previous load, leading to code
    that infinitely recursed. Switched to simply call builtins.open directly.
* Issue #10386: Added __all__ to token module; this simplifies importing in tokenize module and prevents leaking of private names through import *. (Alexander Belopolsky, 2010-11-11, 1 file, -3/+2)
* Issue #10335: Add tokenize.open(), detect the file encoding using tokenize.detect_encoding() and open it in read-only mode. (Victor Stinner, 2010-11-09, 1 file, -0/+15)
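A minimal sketch of the API added here; tokenize's own source file is used only as a file guaranteed to exist:

```python
import tokenize

# tokenize.open() detects the encoding from the BOM or coding cookie,
# then returns a text-mode file object opened read-only with it.
with tokenize.open(tokenize.__file__) as f:
    first = f.readline()

print(type(first).__name__)  # str: the file is decoded, not binary
```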
* A little bit more readable repr method. (Raymond Hettinger, 2010-09-09, 1 file, -3/+3)
* Experiment: Let collections.namedtuple() do the work. This should work now that _collections is pre-built. The buildbots will tell us shortly. (Raymond Hettinger, 2010-09-09, 1 file, -39/+3)
* Improve the repr for the TokenInfo named tuple. (Raymond Hettinger, 2010-09-09, 1 file, -1/+28)
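The improved repr spells out the token type's name next to its numeric code, which the plain namedtuple repr did not. A quick look (the sample source is an illustration):

```python
import io
import tokenize

# The first token of "x = 1" under generate_tokens() is the NAME 'x';
# its repr names the token type rather than showing only the number.
tok = next(tokenize.generate_tokens(io.StringIO("x = 1").readline))
print(repr(tok))  # the type field reads like "type=1 (NAME)"
```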
* Remove unused import, fix typo and rewrap docstrings. (Florent Xicluna, 2010-09-03, 1 file, -17/+18)
* handle names starting with non-ascii characters correctly #9712 (Benjamin Peterson, 2010-08-30, 1 file, -5/+10)
* fix for files with coding cookies and BOMs (Benjamin Peterson, 2010-03-18, 1 file, -3/+5)