path: root/Lib/test/tokenize_tests-utf8-coding-cookie-and-no-utf8-bom-sig.txt
Commit history (most recent first):

* Issue #12587: Correct faulty test file and reference in test_tokenize.
  (Patch by Robert Xiao)
  [Ned Deily, 2011-07-19; 1 file changed, -1/+1]

* Ran svneol.py
  [Martin v. Löwis, 2008-06-13; 1 file changed, -13/+13]

* Issue #719888: Updated tokenize to use a bytes API. generate_tokens has been renamed tokenize and now works with bytes rather than strings. A new detect_encoding function has been added for determining source file encoding according to PEP 263. Token sequences returned by tokenize always start with an ENCODING token, which specifies the encoding used to decode the file. This token is used to encode the output of untokenize back to bytes. Credit goes to Michael "I'm-going-to-name-my-first-child-unittest" Foord from Resolver Systems for this work.
  [Trent Nelson, 2008-03-18; 1 file changed, +13]
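The bytes API described in that commit can be sketched with the Python 3 standard library, which is where this work landed. The example source below (with a PEP 263 coding cookie and no BOM, matching this test file's name) is an assumption for illustration:

```python
import io
import tokenize

# Illustrative source: a PEP 263 coding cookie, no UTF-8 BOM.
source = b"# -*- coding: utf-8 -*-\nx = 1\n"

# detect_encoding inspects the cookie (or a BOM) to determine the encoding.
encoding, first_lines = tokenize.detect_encoding(io.BytesIO(source).readline)

# tokenize.tokenize consumes a bytes readline; the token stream always
# starts with an ENCODING token carrying the detected encoding.
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
assert tokens[0].type == tokenize.ENCODING
assert tokens[0].string == encoding

# untokenize uses that ENCODING token to encode its output back to bytes.
round_tripped = tokenize.untokenize(tokens)
assert isinstance(round_tripped, bytes)
```

Note the asymmetry with Python 2's generate_tokens, which took a str readline and had no encoding step; the rename to tokenize marks the switch to bytes in, bytes out.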