| Commit message | Author | Age | Files | Lines |
|
svn+ssh://pythondev@svn.python.org/python/branches/py3k
........
r85482 | antoine.pitrou | 2010-10-14 17:34:31 +0200 (Thu, 14 Oct 2010) | 4 lines
Replace the "compiler" resource with the more generic "cpu", so
as to mark CPU-heavy tests.
........
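
The "cpu" resource is what regrtest checks when deciding whether to run such tests. A minimal sketch (hypothetical test, using the current `test.support` API) of how a CPU-heavy test opts in:

```python
# Sketch only: the test name and body are made up; support.requires() is the
# real helper that raises ResourceDenied (a SkipTest subclass) when the "cpu"
# resource was not enabled via `regrtest -ucpu` or `-uall`.
import unittest
from test import support

class HeavyRoundtripTest(unittest.TestCase):
    def test_tokenize_whole_stdlib(self):
        support.requires('cpu')   # skipped under regrtest unless "cpu" is enabled
        ...                       # the expensive work would go here
```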
|
string literals:
"" "" => """", which is invalid code.
Will backport
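
The problem being fixed: when untokenize() glues token strings back together, two adjacent string literals must not be joined directly, because `"" ""` becomes `""""`, which starts an unterminated triple-quoted string. A minimal sketch (current tokenize API, not the original patch) of the failure mode and the fixed behaviour:

```python
# Naively concatenating the token strings of two adjacent string literals
# produces invalid code; untokenize() now inserts a space between them.
import io
import tokenize

source = '"" ""\n'   # two implicitly concatenated empty string literals
tokens = [(tok.type, tok.string)
          for tok in tokenize.generate_tokens(io.StringIO(source).readline)]

naive = "".join(text for _, text in tokens)
print(repr(naive))                       # '""""\n' -- the quotes have merged
try:
    compile(naive, "<naive>", "exec")
except SyntaxError as exc:
    print("naive join does not compile:", exc.msg)

fixed = tokenize.untokenize(tokens)      # keeps the literals separated
compile(fixed, "<fixed>", "exec")        # round-trips to valid code
```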
|
The tokenize module doesn't understand __future__.unicode_literals yet
|
Done as GHOP 238 by Josip Dzolonga.
|
Debian sparc buildbots. Since this goes through a lot of tests
and hits the disk a lot it could be slow (especially if NFS is involved).
I'm not sure if that's the problem, but printing periodic msgs shouldn't hurt.
The code was stolen from test_compiler.
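
A minimal sketch (assumed shape, modelled on the pattern test_compiler used) of the periodic progress message, so a long quiet stretch is not mistaken for a hang:

```python
# Print a "still working" note every few minutes while grinding through a
# long list of files; the interval value is an assumption, not the exact one.
import time

_PRINT_WORKING_MSG_INTERVAL = 5 * 60

def process_all(paths, process):
    next_time = time.time() + _PRINT_WORKING_MSG_INTERVAL
    for path in paths:
        if time.time() > next_time:
            next_time = time.time() + _PRINT_WORKING_MSG_INTERVAL
            print('  testing still working, be patient...')
        process(path)
```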
|
whitespace in expected output. Stop that.
|
Small: Always generate a NL or NEWLINE token following
a COMMENT token. The old code did not generate an NL token if
the comment was on a line by itself.
Large: The output of untokenize() will now match the
input exactly if it is passed the full token sequence. The
old, crufty output is still generated if a limited input
sequence is provided, where limited means that it does not
include position information for tokens.
Remaining bug: There is no CONTINUATION token (\) so there is no way
for untokenize() to handle such code.
Also, expanded the number of doctests in hopes of eventually removing
the old-style tests that compare against a golden file.
Bug fix candidate for Python 2.5.1. (Sigh.)
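
A minimal sketch (using the current tokenize API) of the two behaviours described above: with full position-bearing tuples the round trip is exact, while with (type, string) pairs only the token sequence, not the spacing, is preserved:

```python
import io
import tokenize

source = "x = 1  # comment\nif x:\n    y = 2\n"
full = list(tokenize.generate_tokens(io.StringIO(source).readline))

# Full 5-tuples carry positions, so untokenize() reproduces the input exactly.
assert tokenize.untokenize(full) == source

# Limited 2-tuples drop positions; the output is regenerated with default
# spacing, but it still tokenizes back to the same (type, string) sequence.
limited = [(tok.type, tok.string) for tok in full]
compat = tokenize.untokenize(limited)
assert compat != source
retok = [tok[:2] for tok in tokenize.generate_tokens(io.StringIO(compat).readline)]
assert retok == limited
```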
|
- The doctests in decistmt() weren't run at all when
test_tokenize was run via regrtest.py.
- Some expected output in decistmt() was Windows-specific
(but nobody noticed because the doctests weren't getting
run).
- test_roundtrip() didn't actually test anything when
running the tests with -O. Now it does.
- Changed test_roundtrip() to show the name of the input
file when it fails. That would have saved a lot of
time earlier today.
- Added a bunch of comments.
|
Should significantly enhance the utility of the module by supporting
the creation of tools that modify the token stream and write back the
modified result.
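
A minimal sketch (hypothetical helper, current tokenize API) of the kind of tool this enables: tokenize the source, rewrite some tokens, and write the result back with untokenize():

```python
# Rename every occurrence of one identifier; reusing the original positions
# keeps the layout intact as long as the replacement has the same length.
import io
import tokenize

def rename_identifier(source, old, new):
    result = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and tok.string == old:
            tok = tok._replace(string=new)
        result.append(tok)
    return tokenize.untokenize(result)

print(rename_identifier("spam = spam + 1\n", "spam", "eggs"))
# eggs = eggs + 1
```

For replacements of a different length, passing (type, string) pairs instead of full tuples sidesteps the stale position information, at the cost of regenerated spacing.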
|
This file isn't meant to be executed; it's data input for test_tokenize.py.
The problem with the .py extension is that it uses "non-standard"
indentation, and it's good to test that, but reindent.py keeps wanting
to fix it. But fixing the indentation causes the expected-output file to
change, since exact line and column numbers are part of the
tokenize.tokenize() output getting tested.
|
bound to a module global, the file object remained open throughout
the test suite run.
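
A minimal sketch (hypothetical names) of the cleanup: read the data on demand and let a context manager close the handle, instead of binding an open file object to a module-level global for the whole run:

```python
import os

# Assumed layout: the tokenize test data lives next to the test module.
TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), "data")

def read_test_source(name):
    # The file is closed as soon as the contents have been read.
    with open(os.path.join(TEST_DATA_DIR, name)) as fp:
        return fp.read()
```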
|
imports e.g. test_support must do so using an absolute package name
such as "import test.test_support" or "from test import test_support".
This also updates the README in Lib/test, and gets rid of the
duplicate data directory in Lib/test/data (replaced by
Lib/email/test/data).
Now Tim and Jack can have at it. :)
|
'verify' iff it's used by a test module...
|
and replaces them with a new API verify(). As a result the regression
suite will also perform its tests in optimization mode.
Written by Marc-Andre Lemburg. Copyright assigned to Guido van Rossum.
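
The point of the change: plain `assert` statements are stripped when Python runs with `-O`, so tests written with them silently stop testing anything. A minimal sketch (assumed shape, not the exact historical code) of the verify() helper:

```python
class TestFailed(Exception):
    """Raised by verify() when a check fails."""

def verify(condition, reason="test failed"):
    # Unlike `assert condition, reason`, this check is not removed under -O.
    if not condition:
        raise TestFailed(reason)

verify(1 + 1 == 2)            # passes under both normal and -O runs
```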