path: root/Parser/tokenizer.c
Commit message (Author, Date, Files, Lines changed)
* use our own locale independent ctype macros (Benjamin Peterson, 2010-04-03, 1 file, -19/+3)
  requires building pyctype.o into pgen
* ensure that the locale does not affect the tokenization of identifiers (Benjamin Peterson, 2010-04-03, 1 file, -4/+18)
* Issue #3137: Don't ignore errors at startup, especially a keyboard interrupt (SIGINT). (Victor Stinner, 2010-03-10, 1 file, -1/+5)
  If an error occurs while importing the site module, the error is printed and Python exits. Initialize the GIL before importing the site module.
* Issue #7820: The parser tokenizer restores all bytes in the right if the BOM check fails. (Victor Stinner, 2010-03-02, 1 file, -22/+32)
  Fix an assertion in pydebug mode.
* rewrite translate_newlines for clarity (Benjamin Peterson, 2009-12-06, 1 file, -12/+11)
* fix several compile() issues by translating newlines in the tokenizer (Benjamin Peterson, 2009-11-12, 1 file, -16/+66)
* spelling (Benjamin Peterson, 2009-11-07, 1 file, -1/+1)
* fix some coding style (Benjamin Peterson, 2009-10-09, 1 file, -13/+30)
* don't mask encoding errors when decoding a string #6289 (Benjamin Peterson, 2009-06-16, 1 file, -4/+1)
* #3367: revert rev. 65539: this change causes test_parser to fail (Andrew M. Kuchling, 2008-08-05, 1 file, -1/+1)
* #3367 from Kristjan Valur Jonsson: (Andrew M. Kuchling, 2008-08-05, 1 file, -1/+1)
  If a PyTokenizer_FromString() is called with an empty string, the tokenizer's line_start member never gets initialized. Later, it is compared with the token pointer 'a' in parsetok.c:193 and that behavior can result in undefined behavior.
* This reverts r63675 based on the discussion in this thread: (Gregory P. Smith, 2008-06-09, 1 file, -16/+16)
  http://mail.python.org/pipermail/python-dev/2008-June/079988.html
  Python 2.6 should stick with PyString_* in its codebase. The PyBytes_* names in the spirit of 3.0 are available via a #define only. See the email thread.
* Renamed PyString to PyBytes (Christian Heimes, 2008-05-26, 1 file, -16/+16)
* Issue2681: the literal 0o8 was wrongly accepted, and evaluated as float(0.0). (Amaury Forgeot d'Arc, 2008-04-24, 1 file, -1/+1)
  This happened only when 8 is the first digit. Credits go to Lukas Meuser.
* Revert r61969 which added casts to Py_CHARMASK to avoid compiler warnings. (Neal Norwitz, 2008-03-28, 1 file, -8/+0)
  Rather than sprinkle casts throughout the code, change Py_CHARMASK to always cast its result to an unsigned char. This should ensure we do the right thing when accessing an array with the result.
* Make Py3k warnings consistent w.r.t. punctuation; also respect the EOL 80 limit and supply more alternatives in warning messages. (Georg Brandl, 2008-03-25, 1 file, -1/+1)
* Finished backporting PEP 3127, Integer Literal Support and Syntax. (Eric Smith, 2008-03-17, 1 file, -1/+25)
  Added 0b and 0o literals to tokenizer.
  Modified PyOS_strtoul to support 0b and 0o inputs.
  Modified PyLong_FromString to support guessing 0b and 0o inputs.
  Renamed test_hexoct.py to test_int_literal.py and added binary tests.
  Added upper and lower case 0b, 0O, and 0X tests to test_int_literal.py
* Add assertion that we do not blow out newl (Neal Norwitz, 2008-01-27, 1 file, -0/+1)
* Fixed bug #1915: Python compiles with --enable-unicode=no again. However several extension methods and modules do not work without unicode support. (Christian Heimes, 2008-01-23, 1 file, -2/+1)
* Add a "const" to make gcc happy. (Georg Brandl, 2008-01-21, 1 file, -1/+1)
* Issue #1882: when compiling code from a string, encoding cookies in the second line of code were not always recognized correctly. (Georg Brandl, 2008-01-21, 1 file, -2/+13)
* Fix #1679: "0x" was taken as a valid integer literal. (Georg Brandl, 2008-01-19, 1 file, -0/+7)
  Fixes the tokenizer, tokenize.py and int() to reject this. Patches by Malte Helmert.
* Added bytes and b'' as aliases for str and '' (Christian Heimes, 2008-01-18, 1 file, -0/+8)
* Fix #define ordering. (Georg Brandl, 2008-01-07, 1 file, -3/+2)
* Make Python compile with --disable-unicode. (Georg Brandl, 2008-01-07, 1 file, -0/+2)
* Warning "<> not supported in 3.x" should be enabled only when the -3 option is set. (Amaury Forgeot d'Arc, 2007-11-24, 1 file, -1/+1)
* Fixed problems in the last commit. Filenames and line numbers weren't reported correctly. (Christian Heimes, 2007-11-23, 1 file, -9/+11)
  Backquotes still don't report the correct file. The AST nodes only contain the line number but not the file name.
* Applied patch #1754273 and #1754271 from Thomas Glee (Christian Heimes, 2007-11-23, 1 file, -1/+10)
  The patches are adding deprecation warnings for back ticks and <>
* Change a PyErr_Print() into a PyErr_Clear(), per discussion in issue 1031213. (Guido van Rossum, 2007-10-15, 1 file, -1/+1)
* Patch #1031213: Decode source line in SyntaxErrors back to its original source encoding. Will backport to 2.5. (Martin v. Löwis, 2007-09-04, 1 file, -0/+62)
* Comment grammar (Andrew M. Kuchling, 2006-10-06, 1 file, -1/+1)
* Don't truncate if size_t is bigger than uint (Neal Norwitz, 2006-06-12, 1 file, -1/+1)
* Patch #1357836: (Neal Norwitz, 2006-06-02, 1 file, -9/+11)
  Prevent an invalid memory read from test_coding in case the done flag is set. In that case, the loop isn't entered. I wonder if, rather than setting the done flag in the cases before the loop, they should just exit early. This code looks like it should be refactored.
  Backport candidate (also the early break above if decoding_fgets fails)
* C++ compiler cleanup: cast signed to unsigned (Skip Montanaro, 2006-04-18, 1 file, -1/+1)
* As discussed on python-dev, really fix the PyMem_*/PyObject_* memory API mismatches. At least I hope this fixes them all. (Neal Norwitz, 2006-04-11, 1 file, -22/+22)
  This reverts part of my change from yesterday that converted everything in Parser/*.c to use the PyObject_* API. The encoding doesn't really need to use PyMem_*; however, it uses new_string() which must return PyMem_* for handling the result of PyOS_Readline(), which returns PyMem_* memory. If there were 2 versions of new_string(), one that returned PyMem_* for tokens and one that returned PyObject_* for encodings, that could also fix this problem. I'm not sure which version would be clearer.
  This seems to fix both Guido's and Phillip's problems, so it's good enough for now. After this change, it would be good to review Parser/*.c for consistent use of the 2 memory APIs.
* Fix the code in Parser/ to also compile with C++. (Anthony Baxter, 2006-04-11, 1 file, -12/+13)
  This was mostly casts for malloc/realloc type functions, as well as renaming one variable called 'new' in tokenizer.c. Still lots more to be done, going to be checking in one chunk at a time or the patch will be massively huge. Still compiles ok with gcc.
* SF patch #1467512, fix double free with triple quoted string in standard build. (Neal Norwitz, 2006-04-10, 1 file, -6/+6)
  This was the result of inconsistent use of PyMem_* and PyObject_* allocators. By changing to use PyObject_* allocator almost everywhere, this removes the inconsistency.
* Years in the making. (Tim Peters, 2006-03-26, 1 file, -39/+44)
  objimpl.h, pymem.h: Stop mapping PyMem_{Del, DEL} and PyMem_{Free, FREE} to PyObject_{Free, FREE} in a release build. They're aliases for the system free() now.
  _subprocess.c/sp_handle_dealloc(): Since the memory was originally obtained via PyObject_NEW, it must be released via PyObject_FREE (or _DEL).
  pythonrun.c, tokenizer.c, parsermodule.c: I lost count of the number of PyObject vs PyMem mismatches in these -- it's like the specific function called at each site was picked at random, sometimes even with memory obtained via PyMem getting released via PyObject. Changed most to use PyObject uniformly, since the blobs allocated are predictably small in most cases, and obmalloc is generally faster than system mallocs then.
  If extension modules in real life prove as sloppy as Python's front end, we'll have to revert the objimpl.h + pymem.h part of this patch. Note that no problems will show up in a debug build (all calls still go thru obmalloc then). Problems will show up only in a release build, most likely segfaults.
* Use macro versions instead of function versions when we already know the type. (Neal Norwitz, 2006-03-20, 1 file, -1/+3)
  This will hopefully get rid of some Coverity warnings, be a hint to developers, and be marginally faster. Some asserts were added when the type is currently known, but depends on values from another function.
* Fix crashing bug in tokenizer, when tokenizing files with non-ASCII bytes but without a specified encoding: (Thomas Wouters, 2006-03-02, 1 file, -0/+5)
  decoding_fgets() (and decoding_feof()) can return NULL and fiddle with the 'tok' struct, making tok->buf NULL. This is okay in the other cases of calls to decoding_*(), it seems, but not in this one.
  This should get a test added, somewhere, but the testsuite doesn't seem to test encoding anywhere (although plenty of tests use it.)
  It seems to me that decoding errors in other places in the code (like at the start of a token, instead of in the middle of one) make the code end up adding small integers to NULL pointers, but happen to check for error states before using the calculated new pointers. I haven't been able to trigger any other crashes, in any case.
  I would nominate this file for a complete rewrite for Py3k. The whole decoding trick is too bolted-on for my tastes.
* Patch #1440601: Add col_offset attribute to AST nodes. (Martin v. Löwis, 2006-03-01, 1 file, -0/+5)
* Change non-ASCII warning into a SyntaxError. (Martin v. Löwis, 2006-02-28, 1 file, -10/+6)
* Use Py_ssize_t to count the length. (Martin v. Löwis, 2006-02-16, 1 file, -1/+1)
* Merge ssize_t branch. (Martin v. Löwis, 2006-02-15, 1 file, -10/+10)
* Fix SF bug #1072182, problems with signed characters. (Neal Norwitz, 2005-12-19, 1 file, -1/+1)
  Most of these can be backported.
* Fix Bug #1378022, UTF-8 files with a leading BOM crashed the interpreter. (Neal Norwitz, 2005-12-18, 1 file, -0/+6)
  Needs backport.
* Fix some more memory leaks. (Neal Norwitz, 2005-11-16, 1 file, -6/+11)
  Call error_ret() in decode_str(). It was called in some other places, but seemed inconsistent. It is safe to call PyTokenizer_Free() after calling error_ret().
* Free coding spec (cs) if there was an error to prevent mem leak. Maybe backport candidate. (Neal Norwitz, 2005-10-21, 1 file, -0/+3)
* - Fix segfault with invalid coding. (Neal Norwitz, 2005-10-02, 1 file, -1/+4)
  - SF Bug #772896, unknown encoding results in MemoryError, which is not helpful.
  I will only backport the segfault fix. I'll let Anthony decide if he wants the other changes backported. I will do the backport if asked.
* Apply SF patch #1101726: Fix buffer overrun in tokenizer.c when a source file with a PEP 263 encoding declaration results in long decoded line. (Walter Dörwald, 2005-07-12, 1 file, -27/+45)