Commit message | Author | Date | Files | Lines | |
---|---|---|---|---|---|
* | some cleanups | Benjamin Peterson | 2009-10-15 | 1 | -10/+10 |
* | use floor division and add a test that exercises the tabsize codepath | Benjamin Peterson | 2009-10-15 | 1 | -1/+1 |
* | pep8ify if blocks | Benjamin Peterson | 2009-10-15 | 1 | -9/+18 |
* | Remove a tuple unpacking in a parameter list to remove a SyntaxWarning raised while running under -3. | Brett Cannon | 2008-08-02 | 1 | -1/+3 |
* | revert 63965 for performance reasons | Benjamin Peterson | 2008-06-05 | 1 | -1/+1 |
* | use the more idiomatic while True | Benjamin Peterson | 2008-06-05 | 1 | -1/+1 |
* | Issue2495: tokenize.untokenize did not insert space between two consecutive string literals: "" "" => """", which is invalid code. Will backport. (See the first sketch below this log.) | Amaury Forgeot d'Arc | 2008-03-27 | 1 | -1/+10 |
* | Added PEP 3127 support to tokenize (with tests); added PEP 3127 to NEWS. | Eric Smith | 2008-03-17 | 1 | -2/+3 |
* | Fix #1679: "0x" was taken as a valid integer literal. | Georg Brandl | 2008-01-19 | 1 | -1/+1 |
| | Fixes the tokenizer, tokenize.py and int() to reject this. Patches by Malte Helmert. (A minimal check appears in the sketches below the log.) | | | | |
* | Added bytes and b'' as aliases for str and '' | Christian Heimes | 2008-01-18 | 1 | -3/+16 |
* | Add name to credits (for untokenize). | Raymond Hettinger | 2006-12-02 | 1 | -1/+1 |
* | Replace dead code with an assert. | Jeremy Hylton | 2006-08-23 | 1 | -4/+1 |
| | Now that COMMENT tokens are reliably followed by NL or NEWLINE, there is never a need to add extra newlines in untokenize. | | | | |
* | Bug fixes large and small for tokenize. | Jeremy Hylton | 2006-08-23 | 1 | -31/+79 |
| | Small: Always generate a NL or NEWLINE token following a COMMENT token. The old code did not generate an NL token if the comment was on a line by itself. | | | | |
| | Large: The output of untokenize() will now match the input exactly if it is passed the full token sequence. The old, crufty output is still generated if a limited input sequence is provided, where limited means that it does not include position information for tokens. (Both behaviors are shown in a sketch below the log.) | | | | |
| | Remaining bug: There is no CONTINUATION token (\) so there is no way for untokenize() to handle such code. Also, expanded the number of doctests in hopes of eventually removing the old-style tests that compare against a golden file. | | | | |
| | Bug fix candidate for Python 2.5.1. (Sigh.) | | | | |
* | Make tabnanny recognize IndentationErrors raised by tokenize. | Georg Brandl | 2006-08-14 | 1 | -1/+2 |
| | Add a test to test_inspect to make sure indented source is recognized correctly. (fixes #1224621) | | | | |
* | Insert a safety space after numbers as well as names in untokenize(). | Guido van Rossum | 2006-03-30 | 1 | -1/+1 |
* | SF bug #1224621: tokenize module does not detect inconsistent dedents | Raymond Hettinger | 2005-06-21 | 1 | -0/+3 |
* | Add untokenize() function to allow full round-trip tokenization. | Raymond Hettinger | 2005-06-10 | 1 | -3/+49 |
| | Should significantly enhance the utility of the module by supporting the creation of tools that modify the token stream and write back the modified result. (A sketch of this pattern appears below the log.) | | | | |
* | PEP-0318, @decorator-style. In Guido's words: | Anthony Baxter | 2004-08-02 | 1 | -1/+1 |
| | "@ seems the syntax that everybody can hate equally." Implementation by Mark Russell, from SF #979728. | | | | |
* | Get rid of many apply() calls. | Guido van Rossum | 2003-02-27 | 1 | -3/+3 |
* | SF 633560: tokenize.__all__ needs "generate_tokens" | Raymond Hettinger | 2002-11-05 | 1 | -1/+2 |
* | Speed up the most egregious "if token in (long tuple)" cases by using a dict instead. | Guido van Rossum | 2002-08-24 | 1 | -10/+19 |
| | (Alas, using a Set would be slower instead of faster.) The idiom is illustrated in a sketch below the log. | | | | |
* | Whitespace normalization. | Tim Peters | 2002-05-23 | 1 | -5/+5 |
* | Added docstrings excerpted from Python Library Reference. | Raymond Hettinger | 2002-05-15 | 1 | -0/+25 |
| | Closes patch 556161. | | | | |
* | Remove some now-obsolete generator future statements. | Tim Peters | 2002-04-01 | 1 | -2/+0 |
| | I left the email pkg alone; I'm not sure how Barry would like to handle that. | | | | |
* | Clean up x so it is not left in the module | Neal Norwitz | 2002-03-26 | 1 | -0/+1 |
* | SF patch #455966: Allow leading 0 in float/imag literals. | Tim Peters | 2001-08-30 | 1 | -2/+2 |
| | Consequences for Jython still unknown (but raised on Jython-Dev). | | | | |
* | Add new tokens // and //=, in support of PEP 238. | Guido van Rossum | 2001-08-08 | 1 | -0/+1 |
* | Use string.ascii_letters instead of string.letters (SF bug #226706). | Fred Drake | 2001-07-20 | 1 | -1/+1 |
* | Preliminary support for "from __future__ import generators" to enable the yield statement. | Guido van Rossum | 2001-07-15 | 1 | -0/+2 |
| | I figure we have to have this in before I can release 2.2a1 on Wednesday. Note: test_generators is currently broken, I'm counting on Tim to fix this. | | | | |
* | Turns out Neil didn't intend for *all* of his gen-branch work to get committed. | Tim Peters | 2001-06-29 | 1 | -8/+21 |
| | tokenize.py: I like these changes, and have tested them extensively without even realizing it, so I just updated the docstring and the docs. | | | | |
| | tabnanny.py: Also liked this, but did a little code fiddling. I should really rewrite this to *exploit* generators, but that's near the bottom of my effort/benefit scale so doubt I'll get to it anytime soon (it would be most useful as a non-trivial example of ideal use of generators; but test_generators.py has already grown plenty of food-for-thought examples). | | | | |
| | inspect.py: I'm sure Ping intended for this to continue running even under 1.5.2, so I reverted this to the last pre-gen-branch version. The "bugfix" I checked in in-between was actually repairing a bug *introduced* by the conversion to generators, so it's OK that the reverted version doesn't reflect that checkin. | | | | |
* | Merging the gen-branch into the main line, at Guido's direction. Yay! | Tim Peters | 2001-06-18 | 1 | -15/+20 |
| | Bugfix candidate in inspect.py: it was referencing "self" outside of a method. | | | | |
* | Provide a StopTokenizing exception for conveniently exiting the loop. | Ka-Ping Yee | 2001-03-23 | 1 | -10/+11 |
* | Better __credits__. | Ka-Ping Yee | 2001-03-01 | 1 | -1/+2 |
* | Add __author__ and __credits__ variables. | Ka-Ping Yee | 2001-03-01 | 1 | -1/+2 |
* | final round of __all__ lists (I hope) - skipped urllib2 because Moshe may be giving it a slight facelift | Skip Montanaro | 2001-03-01 | 1 | -1/+5 |
* | String method conversion. | Eric S. Raymond | 2001-02-09 | 1 | -1/+1 |
* | Add tokenizer support and tests for u'', U"", uR'', Ur"", etc. | Ka-Ping Yee | 2001-01-15 | 1 | -9/+25 |
* | Whitespace normalization. | Tim Peters | 2001-01-15 | 1 | -1/+0 |
* | Possible fix for Skip's bug 116136 (sre recursion limit hit in tokenize.py). | Tim Peters | 2000-10-07 | 1 | -12/+20 |
| | tokenize.py has always used naive regexps for matching string literals, and that appears to trigger the sre recursion limit on Skip's platform (he has very long single-line string literals). | | | | |
| | Replaced all of tokenize.py's string regexps with the "unrolled" forms used in IDLE, where they're known to handle even absurd (multi-megabyte!) string literals without trouble. | | | | |
| | See Friedl's book for explanation (at heart, the naive regexps create a backtracking choice point for each character in the literal, while the unrolled forms create none). The naive-versus-unrolled shape is sketched below the log. | | | | |
* | Update for augmented assignment, tested & approved by Guido. | Thomas Wouters | 2000-08-24 | 1 | -2/+5 |
* | Convert some old-style string exceptions to class exceptions. | Fred Drake | 2000-08-17 | 1 | -1/+4 |
* | Differentiate between NEWLINE token (an official newline) and NL token (a newline that the grammar ignores). | Guido van Rossum | 1998-04-03 | 1 | -5/+15 |
| | The distinction is illustrated in a sketch below the log. | | | | |
* | New, fixed version with proper r"..." and R"..." support from Ka-Ping. | Guido van Rossum | 1997-10-27 | 1 | -7/+10 |
* | Redone (by Ka-Ping) using the new re module, and adding recognition for r"..." raw strings. (And R"..." string support added by Guido.) | Guido van Rossum | 1997-10-27 | 1 | -57/+55 |
* | Correct typo in last line (test program invocation). | Guido van Rossum | 1997-06-03 | 1 | -1/+1 |
* | Ping's latest. Fixes triple quoted strings ending in odd #backslashes, and other stuff I don't know. | Guido van Rossum | 1997-04-09 | 1 | -20/+31 |
* | Ka-Ping's much improved version of March 26, 1997: | Guido van Rossum | 1997-04-08 | 1 | -74/+98 |
| | Ignore now accepts \f as whitespace. Operator now includes '**'. Ignore and Special now accept \n or \r\n at the end of a line. Imagnumber is new. Expfloat is corrected to reject '0e4'. | | | | |
* | Added support for imaginary constants (e.g. 0j, 1j, 1.0j). | Guido van Rossum | 1997-03-10 | 1 | -4/+5 |
* | Fixed doc string, added __version__, fixed 1 bug. | Guido van Rossum | 1997-03-07 | 1 | -11/+18 |
* | Ka-Ping's version. | Guido van Rossum | 1997-03-07 | 1 | -45/+132 |
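
The sketches below illustrate a few of the entries in the log above. They use the modern Python 3 `tokenize` module for brevity, so they approximate the behavior each commit describes rather than reproducing the original patches.

First, the Issue2495 fix (2008-03-27): when `untokenize()` is handed bare `(type, string)` pairs it has to guess spacing, and before the fix two adjacent string literals could be glued into the invalid `""""`.

```python
import io
import tokenize

source = '"" ""\n'   # two adjacent empty string literals
pairs = [(tok.type, tok.string)
         for tok in tokenize.generate_tokens(io.StringIO(source).readline)]

# With only (type, string) pairs, untokenize() falls back to its heuristic
# mode; the fix makes it keep a space between consecutive string tokens
# instead of fusing them into one invalid literal.
print(repr(tokenize.untokenize(pairs)))   # '"" ""\n'
```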
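The "0x" fix (2008-01-19) made a bare hex prefix an error both in the tokenizer and in int(). A minimal check of that behavior on a current Python 3:

```python
# Both the compiler and int() reject a bare hex prefix.
try:
    int("0x", 16)
except ValueError as exc:
    print("int():", exc)        # invalid literal for int() with base 16: '0x'

try:
    compile("0x", "<example>", "eval")
except SyntaxError as exc:
    print("compile():", exc)
```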
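The 2006-08-23 "bug fixes large and small" entry describes two guarantees that still hold and are easy to check: a comment on a line by itself is followed by an NL token, and untokenize() reproduces the source exactly when given full position-bearing tokens.

```python
import io
import tokenize

source = "# a comment on its own line\nx = 1\n"
toks = list(tokenize.generate_tokens(io.StringIO(source).readline))

# "Small" fix: the comment-only line yields COMMENT followed by NL.
print([tokenize.tok_name[t.type] for t in toks[:2]])   # ['COMMENT', 'NL']

# "Large" fix: with full 5-tuples (positions included), the round trip is exact.
assert tokenize.untokenize(toks) == source
```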
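The original untokenize() entry (2005-06-10) motivates the function with tools that edit a token stream and write the result back. A sketch of that pattern, renaming one identifier (the variable names are invented for the example):

```python
import io
import tokenize

source = "total = total + 1\n"
edited = []
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    if tok.type == tokenize.NAME and tok.string == "total":
        edited.append((tokenize.NAME, "subtotal"))
    else:
        edited.append((tok.type, tok.string))

# Only (type, string) pairs are passed back, so spacing is reconstructed
# heuristically; the result is equivalent code, not a byte-for-byte copy.
print(tokenize.untokenize(edited))
```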
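The 2002-08-24 speedup replaces "if token in (long tuple)" tests with a dictionary lookup (the commit notes that the sets module of the day would have been slower). An illustration of the idiom with made-up names; on a modern Python a frozenset would be the natural choice:

```python
# Linear scan: each membership test walks the whole tuple.
AUGMENTED_OPS = ('+=', '-=', '*=', '/=', '//=', '%=', '**=',
                 '>>=', '<<=', '&=', '^=', '|=')

def is_augmented_tuple(op):
    return op in AUGMENTED_OPS

# Hash lookup: constant-time membership; only the keys matter.
AUGMENTED_OPS_MAP = dict.fromkeys(AUGMENTED_OPS, 1)

def is_augmented_dict(op):
    return op in AUGMENTED_OPS_MAP

assert is_augmented_tuple('//=') and is_augmented_dict('//=')
```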
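The 2000-10-07 entry refers to Friedl's "unrolled loop" technique for string-literal regexps. The patterns below are simplified stand-ins for tokenize.py's real ones, shown only to illustrate the shape of the rewrite: the naive form makes an alternation choice for every character of the literal, while the unrolled form consumes runs of ordinary characters in one step.

```python
import re

# Naive: one alternation decision per character of the literal body.
naive = re.compile(r"'(?:\\.|[^'\\])*'")

# "Unrolled" (Friedl): consume runs of ordinary characters, handling escapes
# only at run boundaries, so there is no per-character choice point.
unrolled = re.compile(r"'[^'\\]*(?:\\.[^'\\]*)*'")

literal = "'" + "x" * 200_000 + r"\' still going" + "'"
assert naive.match(literal).group() == unrolled.match(literal).group()
```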
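Finally, the 1998-04-03 NEWLINE/NL split: NEWLINE ends a logical line of code, while NL marks newlines the grammar ignores (blank lines, and newlines inside brackets or after comments).

```python
import io
import tokenize

source = "x = (1 +\n     2)\n\n"
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
# The newline inside the parentheses and the blank line come out as NL;
# only the newline that ends the statement is a NEWLINE token.
```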