Commit message | Author | Age | Files | Lines |
---|---|---|---|---|---|
* | bpo-31029: test_tokenize Add missing import unittest (#2998) | Rajath Agasthya | 2017-08-05 | 1 | -0/+1 |
| | |||||
* | Issue #26331: Implement the parsing part of PEP 515. | Brett Cannon | 2016-09-09 | 1 | -7/+23 |
| | | | | Thanks to Georg Brandl for the patch. | ||||
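The parsing part of PEP 515 means underscores in numeric literals come through as part of a single NUMBER token. A minimal sketch, assuming Python 3.6 or later:

```python
import io
import tokenize

# "1_000_000" tokenizes as one NUMBER token once PEP 515 is supported.
toks = list(tokenize.tokenize(io.BytesIO(b"x = 1_000_000\n").readline))
numbers = [t.string for t in toks if t.type == tokenize.NUMBER]
assert numbers == ["1_000_000"]
```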
* | Rename test_pep####.py files | Zachary Ware | 2016-09-09 | 1 | -5/+6 |
| | |||||
* | Fix running test_tokenize directly | Zachary Ware | 2016-09-09 | 1 | -2/+2 |
| | |||||
* | Issue 25311: Add support for f-strings to tokenize.py. Also added some comments to explain what's happening, since it's not so obvious. | Eric V. Smith | 2015-10-26 | 1 | -0/+17 |
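A sketch of what this enables. Note the token shape changed later: on 3.6–3.11 a whole f-string is one STRING token, while 3.12+ (PEP 701) splits it into FSTRING_START/FSTRING_MIDDLE/FSTRING_END tokens, so this only asserts that tokenization succeeds cleanly:

```python
import io
import tokenize

# Tokenize a small f-string assignment; with f-string support in place
# this produces no ERRORTOKENs on any supported Python version.
source = b"x = f'{a}'\n"
toks = list(tokenize.tokenize(io.BytesIO(source).readline))
assert not any(t.type == tokenize.ERRORTOKEN for t in toks)
```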
* | Issue 25422: Add tests for multi-line string tokenization. Also remove truncated tokens. | Eric V. Smith | 2015-10-17 | 1 | -6/+32 |
* | Issue #25317: Converted doctests in test_tokenize to unittests. | Serhiy Storchaka | 2015-10-06 | 1 | -419/+398 |
|\ | | | | | | | Made test_tokenize discoverable. | ||||
| * | Issue #25317: Converted doctests in test_tokenize to unittests. | Serhiy Storchaka | 2015-10-06 | 1 | -357/+332 |
| | | | | | | | | Made test_tokenize discoverable. | ||||
* | | Issue #24619: Simplify async/await tokenization. | Yury Selivanov | 2015-07-23 | 1 | -0/+73 |
| | | This commit simplifies async/await tokenization in tokenizer.c, tokenize.py & lib2to3/tokenize.py. The previous solution was to keep a stack of async-def & def blocks, whereas the new approach is just to remember the position of the outermost async-def block. This change won't bring any parsing performance improvements, but it makes the code much easier to read and validate. | ||||
* | | Issue #24619: New approach for tokenizing async/await. | Yury Selivanov | 2015-07-22 | 1 | -2/+13 |
| | | This commit fixes how one-line async-defs and defs are tracked by the tokenizer. It makes it possible to correctly parse invalid code such as:
| | |     >>> async def f():
| | |     ...     def g(): pass
| | |     ...     async = 10
| | | and valid code such as:
| | |     >>> async def f():
| | |     ...     async def g(): pass
| | |     ...     await z
| | | As a consequence, it is now possible to have one-line 'async def foo(): await ..' functions:
| | |     >>> async def foo(): return await bar() | ||||
* | | Issue #20387: Merge test and patch from 3.4.4 | Jason R. Coombs | 2015-06-28 | 1 | -1/+20 |
|\ \ | |/ | |||||
| * | Issue #20387: Correct test to properly capture expectation. | Jason R. Coombs | 2015-06-26 | 1 | -2/+2 |
| | | |||||
| * | Issue #20387: Add test capturing failure to roundtrip indented code in tokenize module. | Jason R. Coombs | 2015-06-20 | 1 | -0/+17 |
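The roundtrip property this test captures: tokenizing and then untokenizing indented source should reproduce it exactly. A sketch using the public API:

```python
import io
import tokenize

source = "if True:\n    x = 1\n"
tokens = list(tokenize.generate_tokens(io.StringIO(source).readline))
# Full 5-tuples carry positions, so untokenize() can restore the
# original indentation and spacing exactly.
assert tokenize.untokenize(tokens) == source
```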
| * | Remove unused import and remove doctest-only import into doctests. | Jason R. Coombs | 2015-06-20 | 1 | -1/+3 |
| | | |||||
* | | (Merge 3.5) Issue #23840: tokenize.open() now closes the temporary binary file on error to fix a resource warning. | Victor Stinner | 2015-05-25 | 1 | -1/+9 |
|\ \ | |/
| * | Issue #23840: tokenize.open() now closes the temporary binary file on error to fix a resource warning. | Victor Stinner | 2015-05-25 | 1 | -1/+9 |
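A rough sketch of the close-on-error pattern this fix introduces; `open_source_file` is a hypothetical stand-in for what `tokenize.open()` does internally, not the actual implementation:

```python
import io
import tokenize

def open_source_file(filename):
    # Hypothetical re-creation of the tokenize.open() pattern:
    # close the binary buffer if encoding detection fails, instead
    # of leaking it and triggering a ResourceWarning.
    buffer = open(filename, "rb")
    try:
        encoding, _ = tokenize.detect_encoding(buffer.readline)
        buffer.seek(0)
        return io.TextIOWrapper(buffer, encoding, line_buffering=True)
    except Exception:
        buffer.close()
        raise
```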
* | | Issue 24226: Fix parsing of many sequential one-line 'def' statements. | Yury Selivanov | 2015-05-18 | 1 | -0/+11 |
| | | |||||
* | | PEP 0492 -- Coroutines with async and await syntax. Issue #24017. | Yury Selivanov | 2015-05-12 | 1 | -0/+186 |
| | | |||||
* | | Issue #23681: Fixed Python 2 to 3 porting bugs. | Serhiy Storchaka | 2015-03-20 | 1 | -3/+4 |
|\ \ | |/ | | | | Indexing bytes returns an integer, not bytes. | ||||
| * | Issue #23681: Fixed Python 2 to 3 porting bugs. | Serhiy Storchaka | 2015-03-20 | 1 | -3/+4 |
| | | | | | | | Indexing bytes returns an integer, not bytes. | ||||
* | | PEP 465: a dedicated infix operator for matrix multiplication (closes #21176) | Benjamin Peterson | 2014-04-10 | 1 | -1/+4 |
|/ | |||||
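PEP 465's matrix-multiplication operator reaches the token stream as an OP token whose `exact_type` is `AT`. A sketch, assuming Python 3.5 or later:

```python
import io
import tokenize

toks = list(tokenize.tokenize(io.BytesIO(b"a @ b\n").readline))
at = next(t for t in toks if t.type == tokenize.OP)
assert at.string == "@"
assert at.exact_type == tokenize.AT
```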
* | Issue #9974: When untokenizing, use row info to insert backslash+newline. | Terry Jan Reedy | 2014-02-24 | 1 | -1/+16 |
| | | | | Original patches by A. Kuchling and G. Rees (#12691). | ||||
* | Issue #20750: Enable roundtrip tests for new 5-tuple untokenize. The constructed examples and all but 7 of the test/test_*.py files (run with -ucpu) pass. Remove those that fail the new test from the selection list. Patch partly based on patches by G. Brandl (#8478) and G. Rees (#12691). | Terry Jan Reedy | 2014-02-23 | 1 | -14/+38 |
* | Issue #8478: Untokenizer.compat now processes first token from iterator input. | Terry Jan Reedy | 2014-02-18 | 1 | -0/+13 |
| | | | | Patch based on lines from Georg Brandl, Eric Snow, and Gareth Rees. | ||||
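`untokenize()`'s compat path accepts bare `(type, string)` 2-tuples, and this fix made it consume the first token correctly from an iterator input. A sketch (spacing in compat mode is approximate, so only compilability is checked):

```python
import tokenize
from token import NAME, OP, NUMBER, NEWLINE, ENDMARKER

# Feed 2-tuples from an iterator, not a list: the compat code path.
tokens = iter([(NAME, "x"), (OP, "="), (NUMBER, "1"),
               (NEWLINE, "\n"), (ENDMARKER, "")])
source = tokenize.untokenize(tokens)
# Whitespace is approximate in compat mode, but the result compiles.
compile(source, "<untokenize>", "exec")
```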
* | whitespace | Terry Jan Reedy | 2014-02-17 | 1 | -2/+2 |
| | |||||
* | Untokenize: A logically incorrect assert tested user input validity. | Terry Jan Reedy | 2014-02-17 | 1 | -1/+15 |
| | | | | | | Replace it with correct logic that raises ValueError for bad input. Issues #8478 and #12691 reported the incorrect logic. Add an Untokenize test case and an initial test method. | ||||
* | Issue #18960: Fix bugs with Python source code encoding in the second line. | Serhiy Storchaka | 2014-01-09 | 1 | -0/+33 |
| | * The first line of a Python script could be executed twice when the source encoding (not equal to 'utf-8') was specified on the second line.
| | * Now the source encoding declaration on the second line isn't effective if the first line contains anything except a comment.
| | * As a consequence, 'python -x' now works again with files that have the source encoding declaration on the second line, and can again be used to make Python batch files on Windows.
| | * The tokenize module now ignores the source encoding declaration on the second line if the first line contains anything except a comment.
| | * IDLE now ignores the source encoding declaration on the second line if the first line contains anything except a comment.
| | * 2to3 and the findnocoding.py script now ignore the source encoding declaration on the second line if the first line contains anything except a comment. | ||||
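The rule above is observable through `tokenize.detect_encoding()`; a sketch:

```python
import io
import tokenize

# Cookie on line 2 counts: line 1 is only a comment.
src1 = b"#!/usr/bin/env python\n# -*- coding: latin-1 -*-\n"
enc1, _ = tokenize.detect_encoding(io.BytesIO(src1).readline)

# Cookie on line 2 is ignored: line 1 contains code, so utf-8 is assumed.
src2 = b"x = 1\n# -*- coding: latin-1 -*-\n"
enc2, _ = tokenize.detect_encoding(io.BytesIO(src2).readline)

assert enc1 == "iso-8859-1"  # "latin-1" is normalized
assert enc2 == "utf-8"
```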
* | Issue #18873: The tokenize module, IDLE, 2to3, and the findnocoding.py script now detect Python source code encoding only in comment lines. | Serhiy Storchaka | 2013-09-16 | 1 | -0/+7 |
* | #16152: merge with 3.2. | Ezio Melotti | 2012-11-03 | 1 | -0/+4 |
|\ | |||||
| * | #16152: fix tokenize to ignore whitespace at the end of the code when no newline is found. Patch by Ned Batchelder. | Ezio Melotti | 2012-11-03 | 1 | -0/+5 |
* | | Merge branch | Florent Xicluna | 2012-07-07 | 1 | -0/+4 |
|\ \ | |/ | |||||
| * | Issue #14990: tokenize: correctly fail with SyntaxError on invalid encoding declaration. | Florent Xicluna | 2012-07-07 | 1 | -0/+4 |
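What the fix guarantees, sketched with `detect_encoding()` (the bogus encoding name is made up for the example):

```python
import io
import tokenize

# An unknown encoding name in the coding cookie must raise SyntaxError,
# not leak a LookupError from the codec machinery.
src = b"# -*- coding: no-such-encoding -*-\npass\n"
try:
    tokenize.detect_encoding(io.BytesIO(src).readline)
    raised = False
except SyntaxError as exc:
    raised = True
    message = str(exc)
assert raised and "no-such-encoding" in message
```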
* | | Issue #15096: Drop support for the ur string prefix | Christian Heimes | 2012-06-20 | 1 | -20/+2 |
| | | |||||
* | | Issue #15054: Fix incorrect tokenization of 'b' string literals. | Meador Inge | 2012-06-17 | 1 | -0/+76 |
| | | | | | | | | Patch by Serhiy Storchaka. | ||||
* | | Issue #14629: Mention the filename in SyntaxError exceptions from tokenizer.detect_encoding() (when available). | Brett Cannon | 2012-04-20 | 1 | -0/+29 |
* | | merge 3.2: issue 14629 | Martin v. Löwis | 2012-04-20 | 1 | -0/+10 |
|\ \ | |/ | |||||
| * | Issue #14629: Raise SyntaxError in tokenizer.detect_encoding if the first two lines have non-UTF-8 characters without an encoding declaration. | Martin v. Löwis | 2012-04-20 | 1 | -0/+10 |
* | | Updated tokenize to support the inverse byte literals new in 3.3 | Armin Ronacher | 2012-03-04 | 1 | -0/+12 |
| | | |||||
* | | Issue #2134: Add support for tokenize.TokenInfo.exact_type. | Meador Inge | 2012-01-19 | 1 | -1/+74 |
| | | |||||
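A sketch of the `exact_type` API: all operators share `type == OP`, while `exact_type` tells them apart:

```python
import io
import tokenize

toks = list(tokenize.tokenize(io.BytesIO(b"f(x)\n").readline))
ops = [t.exact_type for t in toks if t.type == tokenize.OP]
# The two OP tokens are distinguished as LPAR and RPAR.
assert ops == [tokenize.LPAR, tokenize.RPAR]
assert tokenize.tok_name[ops[0]] == "LPAR"
```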
* | | #13012: use splitlines(keepends=True/False) instead of splitlines(0/1). | Ezio Melotti | 2011-09-28 | 1 | -1/+1 |
|/ | |||||
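For reference, the keyword form it switched to means the same as the old numeric flag, just legibly:

```python
s = "a\nb\n"
assert s.splitlines() == ["a", "b"]
assert s.splitlines(keepends=True) == ["a\n", "b\n"]
# The old spelling s.splitlines(1) still works but is opaque.
assert s.splitlines(1) == s.splitlines(keepends=True)
```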
* | tokenize is just broken on test_pep3131.py | Benjamin Peterson | 2011-08-13 | 1 | -0/+3 |
| | |||||
* | Issue #12587: Correct faulty test file and reference in test_tokenize. | Ned Deily | 2011-07-19 | 1 | -1/+1 |
| | | | | (Patch by Robert Xiao) | ||||
* | #9424: Replace deprecated assert* methods in the Python test suite. | Ezio Melotti | 2010-11-20 | 1 | -29/+29 |
| | |||||
* | test_tokenize: use self.assertEqual() instead of plain assert | Victor Stinner | 2010-11-09 | 1 | -4/+4 |
| | |||||
* | Issue #10335: Add tokenize.open(), detect the file encoding using tokenize.detect_encoding() and open it in read only mode. | Victor Stinner | 2010-11-09 | 1 | -1/+22 |
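A usage sketch of `tokenize.open()`; the temp-file path and contents are illustrative only:

```python
import os
import tempfile
import tokenize

# Write a Latin-1 encoded source file with an explicit coding cookie.
fd, path = tempfile.mkstemp(suffix=".py")
os.write(fd, b"# -*- coding: latin-1 -*-\ns = '\xe9'\n")
os.close(fd)

# tokenize.open() reads the cookie and returns a read-only text-mode
# file opened with the detected encoding.
with tokenize.open(path) as f:
    enc = f.encoding
    text = f.read()
os.remove(path)

assert enc == "iso-8859-1"   # "latin-1" is normalized
assert "\xe9" in text        # b"\xe9" decoded as latin-1
```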
* | Fix #10258 - clean up resource warning | Brian Curtin | 2010-10-30 | 1 | -2/+4 |
| | |||||
* | Replace the "compiler" resource with the more generic "cpu", so | Antoine Pitrou | 2010-10-14 | 1 | -2/+2 |
| | | | | as to mark CPU-heavy tests. | ||||
* | handle names starting with non-ascii characters correctly #9712 | Benjamin Peterson | 2010-08-30 | 1 | -0/+13 |
| | |||||
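Python 3 allows identifiers to start with non-ASCII letters (PEP 3131), and tokenize must return such a name as a single NAME token. A sketch:

```python
import io
import tokenize

src = "\u00e4 = 1\n".encode("utf-8")  # identifier starts with U+00E4
toks = list(tokenize.tokenize(io.BytesIO(src).readline))
names = [t.string for t in toks if t.type == tokenize.NAME]
assert names == ["\u00e4"]
```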
* | remove pointless coding cookie | Benjamin Peterson | 2010-08-30 | 1 | -2/+0 |
| | |||||
* | Issue #9337: Make float.__str__ identical to float.__repr__. | Mark Dickinson | 2010-08-04 | 1 | -2/+2 |
| | | | | (And similarly for complex numbers.) |
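The behavior the tests were adapted to, in brief (Python 3.2 and later):

```python
# str() and repr() of floats and complex numbers are identical:
# both use the shortest repr that round-trips.
x = 1.0 / 3.0
assert str(x) == repr(x)
z = complex(1.5, -2.0)
assert str(z) == repr(z)
```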