path: root/Lib/test/test_tokenize.py
Commit history (each entry shows the commit message, author, date, files changed, and lines -removed/+added):
* bpo-31029: test_tokenize Add missing import unittest (#2998) (Rajath Agasthya, 2017-08-05, 1 file, -0/+1)
* Issue #26331: Implement the parsing part of PEP 515. (Brett Cannon, 2016-09-09, 1 file, -7/+23)
  Thanks to Georg Brandl for the patch.
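As an illustration of what the PEP 515 change above exercises (a made-up snippet, not code from the test file, and assuming Python 3.6+): an underscored numeric literal should come back from tokenize as a single NUMBER token, underscores included.

    import io
    import tokenize

    # PEP 515: the whole underscored literal is one NUMBER token (Python 3.6+).
    src = "budget = 1_000_000\n"
    tokens = list(tokenize.generate_tokens(io.StringIO(src).readline))
    assert [t.string for t in tokens if t.type == tokenize.NUMBER] == ["1_000_000"]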
* Rename test_pep####.py files (Zachary Ware, 2016-09-09, 1 file, -5/+6)
* Fix running test_tokenize directly (Zachary Ware, 2016-09-09, 1 file, -2/+2)
* Issue 25311: Add support for f-strings to tokenize.py. Also added some comments to explain what's happening, since it's not so obvious. (Eric V. Smith, 2015-10-26, 1 file, -0/+17)
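For the f-string change above, a rough sketch of the tokenizer-level behaviour (illustrative only): on the 3.6-3.11 tokenizer an f-string comes back as one STRING token; only 3.12+ (PEP 701) splits it into FSTRING_START/FSTRING_MIDDLE/FSTRING_END parts.

    import io
    import tokenize

    # On 3.6 through 3.11 the whole f-string is a single STRING token;
    # on 3.12+ it would instead arrive as FSTRING_* tokens, so nothing prints here.
    src = "x = f'{a}{b!r}'\n"
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if tok.type == tokenize.STRING:
            print(tok.string)   # f'{a}{b!r}'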
* Issue 25422: Add tests for multi-line string tokenization. Also remove truncated tokens. (Eric V. Smith, 2015-10-17, 1 file, -6/+32)
* Issue #25317: Converted doctests in test_tokenize to unittests. (Serhiy Storchaka, 2015-10-06, 1 file, -419/+398)
  Made test_tokenize discoverable.
* Issue #25317: Converted doctests in test_tokenize to unittests. (Serhiy Storchaka, 2015-10-06, 1 file, -357/+332)
  Made test_tokenize discoverable.
* Issue #24619: Simplify async/await tokenization. (Yury Selivanov, 2015-07-23, 1 file, -0/+73)
  This commit simplifies async/await tokenization in tokenizer.c, tokenize.py & lib2to3/tokenize.py. The previous solution was to keep a stack of async-def & def blocks, whereas the new approach is just to remember the position of the outermost async-def block. This change won't bring any parsing performance improvements, but it makes the code much easier to read and validate.
* Issue #24619: New approach for tokenizing async/await. (Yury Selivanov, 2015-07-22, 1 file, -2/+13)
  This commit fixes how one-line async-defs and defs are tracked by the tokenizer. It allows the tokenizer to correctly parse invalid code such as:

      >>> async def f():
      ...     def g(): pass
      ...     async = 10

  and valid code such as:

      >>> async def f():
      ...     async def g(): pass
      ...     await z

  As a consequence, it is now possible to have one-line 'async def foo(): await ..' functions:

      >>> async def foo(): return await bar()
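A rough way to observe the behaviour described above (an illustrative snippet, not part of the commit): dump the token names for a one-line async def. On 3.5/3.6, where these changes landed, 'async' and 'await' get the dedicated ASYNC/AWAIT token types; on 3.7+ they tokenize as ordinary NAME tokens because they became keywords.

    import io
    import tokenize

    # Print (token name, text) pairs for a one-line async function.
    src = "async def foo(): return await bar()\n"
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        print(tokenize.tok_name[tok.type], repr(tok.string))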
* Issue #20387: Merge test and patch from 3.4.4 (Jason R. Coombs, 2015-06-28, 1 file, -1/+20)
* Issue #20387: Correct test to properly capture expectation. (Jason R. Coombs, 2015-06-26, 1 file, -2/+2)
* Issue #20387: Add test capturing failure to roundtrip indented code in tokenize module. (Jason R. Coombs, 2015-06-20, 1 file, -0/+17)
* Remove unused import and move doctest-only import into doctests. (Jason R. Coombs, 2015-06-20, 1 file, -1/+3)
* (Merge 3.5) Issue #23840: tokenize.open() now closes the temporary binary file on error to fix a resource warning. (Victor Stinner, 2015-05-25, 1 file, -1/+9)
* Issue #23840: tokenize.open() now closes the temporary binary file on error to fix a resource warning. (Victor Stinner, 2015-05-25, 1 file, -1/+9)
* Issue 24226: Fix parsing of many sequential one-line 'def' statements. (Yury Selivanov, 2015-05-18, 1 file, -0/+11)
* PEP 0492 -- Coroutines with async and await syntax. Issue #24017. (Yury Selivanov, 2015-05-12, 1 file, -0/+186)
* Issue #23681: Fixed Python 2 to 3 porting bugs. (Serhiy Storchaka, 2015-03-20, 1 file, -3/+4)
  Indexing bytes returns an integer, not bytes.
* Issue #23681: Fixed Python 2 to 3 porting bugs. (Serhiy Storchaka, 2015-03-20, 1 file, -3/+4)
  Indexing bytes returns an integer, not bytes.
* PEP 465: a dedicated infix operator for matrix multiplication (closes #21176) (Benjamin Peterson, 2014-04-10, 1 file, -1/+4)
* Issue #9974: When untokenizing, use row info to insert backslash+newline. (Terry Jan Reedy, 2014-02-24, 1 file, -1/+16)
  Original patches by A. Kuchling and G. Rees (#12691).
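The effect of the fix above can be sketched with a tiny made-up example: when the input joined lines implicitly inside parentheses, full-mode untokenize uses the row information to reconstruct the break as an explicit backslash continuation rather than collapsing everything onto one line.

    import io
    import tokenize

    # Implicit line joining in the input comes back as a backslash continuation.
    src = "x = (1 +\n     2)\n"
    toks = tokenize.generate_tokens(io.StringIO(src).readline)
    print(tokenize.untokenize(toks))
    # x = (1 +\
    #      2)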
* Issue #20750: Enable roundtrip tests for new 5-tuple untokenize. (Terry Jan Reedy, 2014-02-23, 1 file, -14/+38)
  The constructed examples and all but 7 of the test/test_*.py files (run with -ucpu) pass. Remove those that fail the new test from the selection list. Patch partly based on patches by G. Brandl (#8478) and G. Rees (#12691).
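The roundtrip property those tests enforce can be sketched roughly as follows (illustrative, not the actual check_roundtrip helper): untokenized output must tokenize back to the same (type, string) pairs as the original source.

    import io
    import tokenize

    src = "def f(a, b=1):\n    return a + b\n"
    toks = list(tokenize.generate_tokens(io.StringIO(src).readline))

    # Full 5-tuple mode: the regenerated source tokenizes to the same pairs.
    new_src = tokenize.untokenize(toks)
    new_toks = list(tokenize.generate_tokens(io.StringIO(new_src).readline))
    assert [t[:2] for t in toks] == [t[:2] for t in new_toks]

    # The 2-tuple "compat" mode keeps the token text but regenerates spacing.
    compat_src = tokenize.untokenize(t[:2] for t in toks)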
* Issue #8478: Untokenizer.compat now processes first token from iterator input. (Terry Jan Reedy, 2014-02-18, 1 file, -0/+13)
  Patch based on lines from Georg Brandl, Eric Snow, and Gareth Rees.
* whitespace (Terry Jan Reedy, 2014-02-17, 1 file, -2/+2)
* Untokenize: A logically incorrect assert tested user input validity. (Terry Jan Reedy, 2014-02-17, 1 file, -1/+15)
  Replace it with correct logic that raises ValueError for bad input. Issues #8478 and #12691 reported the incorrect logic. Add an Untokenize test case and an initial test method.
* Issue #18960: Fix bugs with Python source code encoding in the second line. (Serhiy Storchaka, 2014-01-09, 1 file, -0/+33)
  * The first line of a Python script could be executed twice when the source encoding (not equal to 'utf-8') was specified on the second line.
  * Now the source encoding declaration on the second line isn't effective if the first line contains anything except a comment.
  * As a consequence, 'python -x' now works again with files that have the source encoding declaration on the second line, and can be used again to make Python batch files on Windows.
  * The tokenize module now ignores the source encoding declaration on the second line if the first line contains anything except a comment.
  * IDLE now ignores the source encoding declaration on the second line if the first line contains anything except a comment.
  * 2to3 and the findnocoding.py script now ignore the source encoding declaration on the second line if the first line contains anything except a comment.
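The tokenize-level part of that rule can be demonstrated with made-up sources (illustrative only): tokenize.detect_encoding() honours a coding cookie on the second line only when the first line is blank or a comment.

    import io
    import tokenize

    # Cookie on line 2 is honoured because line 1 is only a comment...
    honoured = b"# helper script\n# -*- coding: latin-1 -*-\n"
    # ...but ignored when line 1 is real code, so the default applies.
    ignored = b"x = 1\n# -*- coding: latin-1 -*-\n"

    print(tokenize.detect_encoding(io.BytesIO(honoured).readline)[0])  # iso-8859-1
    print(tokenize.detect_encoding(io.BytesIO(ignored).readline)[0])   # utf-8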
* Issue #18873: The tokenize module, IDLE, 2to3, and the findnocoding.py script now detect Python source code encoding only in comment lines. (Serhiy Storchaka, 2013-09-16, 1 file, -0/+7)
* #16152: merge with 3.2. (Ezio Melotti, 2012-11-03, 1 file, -0/+4)
* #16152: fix tokenize to ignore whitespace at the end of the code when no newline is found. Patch by Ned Batchelder. (Ezio Melotti, 2012-11-03, 1 file, -0/+5)
* Merge branch (Florent Xicluna, 2012-07-07, 1 file, -0/+4)
* Issue #14990: tokenize: correctly fail with SyntaxError on invalid encoding declaration. (Florent Xicluna, 2012-07-07, 1 file, -0/+4)
* Issue #15096: Drop support for the ur string prefix (Christian Heimes, 2012-06-20, 1 file, -20/+2)
* Issue #15054: Fix incorrect tokenization of 'b' string literals. (Meador Inge, 2012-06-17, 1 file, -0/+76)
  Patch by Serhiy Storchaka.
* Issue #14629: Mention the filename in SyntaxError exceptions from tokenizer.detect_encoding() (when available). (Brett Cannon, 2012-04-20, 1 file, -0/+29)
* merge 3.2: issue 14629 (Martin v. Löwis, 2012-04-20, 1 file, -0/+10)
* Issue #14629: Raise SyntaxError in tokenizer.detect_encoding if the first two lines have non-UTF-8 characters without an encoding declaration. (Martin v. Löwis, 2012-04-20, 1 file, -0/+10)
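Both failure modes described in the Issue #14990 and Issue #14629 entries above can be shown with made-up inputs (illustrative only): detect_encoding() raises SyntaxError for an unknown encoding name and for non-UTF-8 bytes that carry no declaration at all.

    import io
    import tokenize

    bad_cookie = b"# -*- coding: no-such-codec -*-\nx = 1\n"   # unknown encoding name
    bad_bytes = b"caf\xe9 = 1\n"                               # latin-1 bytes, no declaration

    for source in (bad_cookie, bad_bytes):
        try:
            tokenize.detect_encoding(io.BytesIO(source).readline)
        except SyntaxError as exc:
            print(exc)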
* Updated tokenize to support the inverse byte literals new in 3.3 (Armin Ronacher, 2012-03-04, 1 file, -0/+12)
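Illustrative only (assumes Python 3.3+): both prefix spellings of a raw bytes literal should arrive as a single STRING token.

    import io
    import tokenize

    # br'...' had long been accepted; 3.3 added the reversed rb'...' spelling.
    src = "a = br'\\d'\nb = rb'\\d'\n"
    strings = [t.string for t in tokenize.generate_tokens(io.StringIO(src).readline)
               if t.type == tokenize.STRING]
    assert strings == ["br'\\d'", "rb'\\d'"]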
* Issue #2134: Add support for tokenize.TokenInfo.exact_type. (Meador Inge, 2012-01-19, 1 file, -1/+74)
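A small usage sketch for Issue #2134 (not taken from the test file): every operator token carries the generic OP type, and exact_type narrows it down to the specific operator.

    import io
    import tokenize

    src = "result = value @ matrix\n"
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if tok.type == tokenize.OP:
            print(tok.string, '->', tokenize.tok_name[tok.exact_type])
    # = -> EQUAL
    # @ -> AT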
* #13012: use splitlines(keepends=True/False) instead of splitlines(0/1). (Ezio Melotti, 2011-09-28, 1 file, -1/+1)
* tokenize is just broken on test_pep3131.py (Benjamin Peterson, 2011-08-13, 1 file, -0/+3)
* Issue #12587: Correct faulty test file and reference in test_tokenize. (Ned Deily, 2011-07-19, 1 file, -1/+1)
  (Patch by Robert Xiao)
* #9424: Replace deprecated assert* methods in the Python test suite. (Ezio Melotti, 2010-11-20, 1 file, -29/+29)
* test_tokenize: use self.assertEqual() instead of plain assert (Victor Stinner, 2010-11-09, 1 file, -4/+4)
* Issue #10335: Add tokenize.open(), detect the file encoding using tokenize.detect_encoding() and open it in read only mode. (Victor Stinner, 2010-11-09, 1 file, -1/+22)
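A brief usage sketch for Issue #10335 (the file name is hypothetical): tokenize.open() detects the encoding via coding cookie or BOM and returns a read-only text stream already decoded with it.

    import tokenize

    # 'some_script.py' is a made-up path; the returned file object is text-mode,
    # read-only, and decoded with whatever encoding detect_encoding() found.
    with tokenize.open('some_script.py') as f:
        print(f.encoding)
        source = f.read()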
* Fix #10258 - clean up resource warning (Brian Curtin, 2010-10-30, 1 file, -2/+4)
* Replace the "compiler" resource with the more generic "cpu", so as to mark CPU-heavy tests. (Antoine Pitrou, 2010-10-14, 1 file, -2/+2)
* handle names starting with non-ascii characters correctly #9712 (Benjamin Peterson, 2010-08-30, 1 file, -0/+13)
* remove pointless coding cookie (Benjamin Peterson, 2010-08-30, 1 file, -2/+0)
* Issue #9337: Make float.__str__ identical to float.__repr__. (Mark Dickinson, 2010-08-04, 1 file, -2/+2)
  (And similarly for complex numbers.)