path: root/Lib/test/test_tokenize.py

Commit log (each entry: commit message, author, date; files changed, -deleted/+added lines)
* Remove unused imports in test modules. (Georg Brandl, 2010-02-07; 1 file, -1/+1)
* Change test to what I intended. (Benjamin Peterson, 2009-10-15; 1 file, -2/+2)
* Use floor division and add a test that exercises the tabsize codepath. (Benjamin Peterson, 2009-10-15; 1 file, -0/+17)
* Issue2495: tokenize.untokenize did not insert space between two consecutive string literals: "" "" => """", which is invalid code. Will backport. (Amaury Forgeot d'Arc, 2008-03-27; 1 file, -3/+8)
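The issue is easy to demonstrate with the stdlib tokenize module's round-trip API; a minimal sketch (modern Python 3 names, where generate_tokens yields TokenInfo tuples):

```python
import io
import tokenize

src = '"" ""\n'
toks = list(tokenize.generate_tokens(io.StringIO(src).readline))

# With the full token sequence (positions included), untokenize()
# reproduces the source exactly, space and all.
assert tokenize.untokenize(toks) == src

# In compatibility mode (2-tuples, no positions), the fix inserts a
# space between consecutive STRING tokens, so the result still
# compiles instead of collapsing into the invalid """".
compat = tokenize.untokenize((t.type, t.string) for t in toks)
assert '""""' not in compat
compile(compat, "<untokenize>", "exec")  # no SyntaxError
```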
* Fixed tokenize tests: the tokenize module doesn't understand __future__.unicode_literals yet. (Christian Heimes, 2008-03-27; 1 file, -1/+7)
* Added PEP 3127 support to tokenize (with tests); added PEP 3127 to NEWS. (Eric Smith, 2008-03-17; 1 file, -2/+10)
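PEP 3127's new literal spellings (0o for octal, 0b for binary) are recognized by the tokenizer as single NUMBER tokens, just like the older hex spelling; a quick check using only the stdlib:

```python
import io
import tokenize

# Each PEP 3127 literal arrives as one NUMBER token.
src = "0o777 + 0b101 - 0xFF\n"
nums = [t.string
        for t in tokenize.generate_tokens(io.StringIO(src).readline)
        if t.type == tokenize.NUMBER]
print(nums)  # prints ['0o777', '0b101', '0xFF']
```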
* Move test_tokenize to doctest. Done as GHOP 238 by Josip Dzolonga. (Brett Cannon, 2008-03-13; 1 file, -161/+498)
* Hmm, this test has failed at least twice recently on the OpenBSD and Debian sparc buildbots. Since this goes through a lot of tests and hits the disk a lot, it could be slow (especially if NFS is involved). I'm not sure if that's the problem, but printing periodic messages shouldn't hurt. The code was stolen from test_compiler. (Neal Norwitz, 2006-09-02; 1 file, -1/+12)
* A new test here relied on preserving invisible trailing whitespace in expected output. Stop that. (Tim Peters, 2006-08-25; 1 file, -2/+3)
* Whitespace normalization. (Tim Peters, 2006-08-25; 1 file, -3/+3)
* Bug fixes large and small for tokenize. (Jeremy Hylton, 2006-08-23; 1 file, -10/+64)
  Small: always generate a NL or NEWLINE token following a COMMENT token. The old code did not generate an NL token if the comment was on a line by itself.
  Large: the output of untokenize() will now match the input exactly if it is passed the full token sequence. The old, crufty output is still generated if a limited input sequence is provided, where limited means that it does not include position information for tokens.
  Remaining bug: there is no CONTINUATION token (\) so there is no way for untokenize() to handle such code.
  Also, expanded the number of doctests in hopes of eventually removing the old-style tests that compare against a golden file. Bug fix candidate for Python 2.5.1. (Sigh.)
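Both fixes are observable with the stdlib tokenize module; a small sketch under the modern Python 3 API:

```python
import io
import tokenize

# Small fix: a comment on a line by itself is now followed by NL.
src = "# a comment on its own line\n"
names = [tokenize.tok_name[t.type]
         for t in tokenize.generate_tokens(io.StringIO(src).readline)]
assert names[:2] == ["COMMENT", "NL"]

# Large fix: given the full token sequence (with positions),
# untokenize() reproduces the input exactly.
src2 = "x = 1  # trailing comment\n"
toks = list(tokenize.generate_tokens(io.StringIO(src2).readline))
assert tokenize.untokenize(toks) == src2
```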
* Baby steps towards better tests for tokenize. (Jeremy Hylton, 2006-08-23; 1 file, -3/+46)
* Repaired a number of errors in this test: (Tim Peters, 2006-03-31; 1 file, -60/+74)
  - The doctests in decistmt() weren't run at all when test_tokenize was run via regrtest.py.
  - Some expected output in decistmt() was Windows-specific (but nobody noticed because the doctests weren't getting run).
  - test_roundtrip() didn't actually test anything when running the tests with -O. Now it does.
  - Changed test_roundtrip() to show the name of the input file when it fails. That would have saved a lot of time earlier today.
  - Added a bunch of comments.
* SF bug #1224621: tokenize module does not detect inconsistent dedents. (Raymond Hettinger, 2005-06-21; 1 file, -1/+19)
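After the fix, a dedent that does not return to any enclosing indentation level raises IndentationError during tokenization; a minimal sketch:

```python
import io
import tokenize

bad = (
    "if flag:\n"
    "        a = 1\n"
    "    b = 2\n"  # dedents to column 4, which was never an indent level
)
try:
    list(tokenize.generate_tokens(io.StringIO(bad).readline))
except IndentationError as exc:
    print("rejected:", exc)
```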
* Add untokenize() function to allow full round-trip tokenization. Should significantly enhance the utility of the module by supporting the creation of tools that modify the token stream and write back the modified result. (Raymond Hettinger, 2005-06-10; 1 file, -3/+73)
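The intended tool-writing pattern can be sketched with a hypothetical rename helper (rename_identifier is not part of the module; a same-length rename is used so the recorded token positions stay consistent with the rewritten text):

```python
import io
import tokenize

def rename_identifier(source, old, new):
    # Hypothetical helper: swap every NAME token equal to `old`,
    # then write the modified stream back with untokenize().
    toks = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string == old:
            tok = tok._replace(string=new)
        toks.append(tok)
    return tokenize.untokenize(toks)

print(rename_identifier("total = total + 1\n", "total", "count"))
# prints: count = count + 1
```

For replacements that change token lengths, passing 2-tuples of (type, string) instead of full TokenInfo values avoids stale position data, at the cost of untokenize() falling back to its cruder compatibility-mode spacing.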
* Effectively renamed tokenize_tests.py to have a .txt extension instead. This file isn't meant to be executed; it's data input for test_tokenize.py. The problem with the .py extension is that it uses "non-standard" indentation, and it's good to test that, but reindent.py keeps wanting to fix it. But fixing the indentation causes the expected-output file to change, since exact line and column numbers are part of the tokenize.tokenize() output getting tested. (Tim Peters, 2003-05-12; 1 file, -1/+1)
* Close the file after tokenizing it. Because the open file object was bound to a module global, the file object remained open throughout the test suite run. (Tim Peters, 2003-05-12; 1 file, -2/+5)
* Get rid of relative imports in all unittests. Now anything that imports e.g. test_support must do so using an absolute package name such as "import test.test_support" or "from test import test_support". This also updates the README in Lib/test, and gets rid of the duplicate data directory in Lib/test/data (replaced by Lib/email/test/data). Now Tim and Jack can have at it. :) (Barry Warsaw, 2002-07-23; 1 file, -1/+1)
* SF patch #474590 -- RISC OS support. (Guido van Rossum, 2001-10-24; 1 file, -1/+1)
* A bold attempt to fix things broken by MAL's verify patch: import 'verify' iff it's used by a test module... (Fredrik Lundh, 2001-01-17; 1 file, -1/+1)
* This patch removes all uses of "assert" in the regression test suite and replaces them with a new API verify(). As a result the regression suite will also perform its tests in optimization mode. Written by Marc-Andre Lemburg. Copyright assigned to Guido van Rossum. (Marc-André Lemburg, 2001-01-17; 1 file, -1/+1)
* Make reindent.py happy (convert everything to 4-space indents!). (Fred Drake, 2000-10-23; 1 file, -1/+0)
* Move unified findfile() into test_support.py. (Guido van Rossum, 1998-04-23; 1 file, -13/+1)
* Tests for tokenize.py (Ka-Ping Yee). (Guido van Rossum, 1997-10-27; 1 file, -0/+22)