| Commit message | Author | Age | Files | Lines |
| |
This patch fixes a problem on AIX with the signed int case code in
getargs.c, after Trent Mick's intervention about MIN/MAX overflow
checks. The AIX compiler/optimizer generates bogus code with the
default flags "-g -O" causing test_builtin to fail: int("10", 16) <>
16L. Swapping the two checks in the signed int code makes the problem
go away.
Also, make the error messages fit in 80 char lines in the
source.
|
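For illustration, a minimal sketch of the kind of signed int conversion
involved; the function, variable names, and messages here are assumed,
not copied from getargs.c, and the comment marks the two range checks
whose order was swapped:

    #include <limits.h>
    #include "Python.h"

    static char *
    convert_signed_int(PyObject *arg, int *p)
    {
        long ival = PyInt_AsLong(arg);
        if (ival == -1 && PyErr_Occurred())
            return "not an integer";
        /* These are "the two checks"; reordering them sidesteps the
           bogus code the AIX optimizer produced under -g -O. */
        if (ival < INT_MIN)
            return "signed integer is less than minimum";
        if (ival > INT_MAX)
            return "signed integer is greater than maximum";
        *p = (int)ival;
        return NULL;   /* success */
    }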
|
|
|
|
|
| |
Avoid calling the dealloc function, previously triggered with
DECREF(inst). This caused a segfault in PyDict_GetItem, called with a
NULL dict, whenever creating inst->in_dict fails under low-memory
conditions.
|
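A minimal sketch of the fixed low-memory path, assuming a layout like
PyInstanceObject's; the helper name is made up and the surrounding
initialization is elided rather than copied from classobject.c:

    #include "Python.h"

    static PyObject *
    instance_new_sketch(void)
    {
        PyInstanceObject *inst = PyObject_NEW(PyInstanceObject,
                                              &PyInstance_Type);
        if (inst == NULL)
            return NULL;
        /* (initialization of in_class etc. elided) */
        inst->in_dict = PyDict_New();
        if (inst->in_dict == NULL) {
            /* Py_DECREF(inst) here would run the dealloc function,
               which ends up calling PyDict_GetItem on the NULL
               in_dict and segfaults; free the half-built object
               directly instead. */
            PyObject_DEL(inst);
            return NULL;
        }
        return (PyObject *)inst;
    }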
| |
The depth field was never decremented inside w_object(), and it was
never initialized in PyMarshal_WriteObjectToFile().
This caused imports from .pyc files to fail mysteriously when the .pyc
file was written by the broken code -- w_object() would bail out
early, but PyMarshal_WriteObjectToFile() doesn't check the error or
return an error code, and apparently the marshalling code doesn't call
PyErr_Check() either. (That's a separate patch if I feel like it.)
|
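In outline, the two missing pieces look roughly like this; WFILE is
simplified here and the writing logic is elided, so only the shape
follows Python/marshal.c:

    #include "Python.h"

    typedef struct {
        FILE *fp;
        int error;
        int depth;
    } WFILE;                 /* simplified stand-in for marshal.c's WFILE */

    static void
    w_object(PyObject *v, WFILE *p)
    {
        p->depth++;
        /* ... depth-limit check and the actual writing of v, which may
           recurse back into w_object() ... */
        p->depth--;          /* the decrement that was missing */
    }

    void
    PyMarshal_WriteObjectToFile(PyObject *x, FILE *fp)
    {
        WFILE wf;
        wf.fp = fp;
        wf.error = 0;
        wf.depth = 0;        /* the initialization that was missing */
        w_object(x, &wf);
    }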
| |
mislabeled.
(Using -c and then -e rearranges some comments, so I won't check that
in -- but it's a good test anyway.
Note that pindent is not perfect -- e.g. it doesn't know about
triple-quoted strings!)
|
| |
Problem:
A Python program can be completed and reformatted using
Tools/scripts/pindent.py. Unfortunately there is no option for removing
the generated "# end" tags. Although a few Python commands or a
"grep -v '# end '" can do wonders here, there are two drawbacks:
- not everyone has grep or the time to write a Python script
- neither approach checks whether the "# end" tags were used validly
Solution:
add an extra option "-e" (eliminate) to pindent.py
|
| |
Fix warnings in the 64-bit build of signalmodule.c.
- Though SIG_DFL and SIG_IGN are just small constants, they are cast
to function pointers, so the appropriate Python call is
PyLong_FromVoidPtr; that way the pointer value cannot overflow on
Win64, where sizeof(long) < sizeof(void*).
|
| |
This patch fixes cPickle.c for 64-bit platforms.
- The false assumption sizeof(long) == sizeof(void*) exists where
PyInt_FromLong is used to represent a pointer. The safe Python call
for this is PyLong_FromVoidPtr. (On platforms where the above
assumption *is* true, a PyInt is returned as before, so there is no
effective change.)
- Use size_t instead of int for some variables.
|
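The underlying pattern, shown with a made-up helper (cPickle.c itself
simply calls PyLong_FromVoidPtr where it previously called
PyInt_FromLong):

    #include "Python.h"

    /* Turn an object's address into a Python integer without truncation. */
    static PyObject *
    pointer_as_pyint(PyObject *ob)
    {
        /* Wrong on Win64: the cast chops the pointer to 32 bits,
           because sizeof(long) < sizeof(void *):
               return PyInt_FromLong((long)ob);
        */
        /* Safe everywhere: yields a PyInt when a long can hold the
           pointer and a PyLong when it cannot. */
        return PyLong_FromVoidPtr((void *)ob);
    }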
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
tests.
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
Added an example of using an HTTP POST request.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
| |
fromfile(), to hold fread() result.)
|
|
|
|
| |
to hold strlen() outcome).
|
| |
This patch fixes a possible overflow of the optional timeout
parameter for the select() function (selectmodule.c). This timeout is
passed in as a double and then truncated to an int. If the double is
sufficiently large you can get unexpected results as it
overflows. This patch raises OverflowError if the given select timeout
overflows.
[GvR: To my embarrassment, the original code was assuming an int could
always hold a million. Note that the overflow check doesn't test for
a very large *negative* timeout passed in -- but who in the world
would do such a thing?]
|
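A sketch of the conversion with the new check; the helper, variable
names, and the exact error message are assumed rather than quoted from
selectmodule.c:

    #include <limits.h>
    #include <sys/time.h>
    #include "Python.h"

    static int
    timeout_to_timeval(double timeout, struct timeval *tv)
    {
        long seconds;
        if (timeout > (double)LONG_MAX) {
            /* Previously this just truncated and overflowed.  (A hugely
               *negative* timeout is still not caught, per the note above.) */
            PyErr_SetString(PyExc_OverflowError, "timeout period too long");
            return -1;
        }
        seconds = (long)timeout;
        tv->tv_sec = seconds;
        tv->tv_usec = (long)((timeout - (double)seconds) * 1000000.0);
        return 0;
    }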
| |
Various small fixes to the builtin module to ensure no buffer
overflows.
- chunk #1:
Proper casting to ensure no truncation, and hence no surprises, in the
comparison.
- chunk #2:
The id() function guarantees a unique return value for different
objects. It does this by returning the pointer to the object. By
returning a PyInt, on Win64 (sizeof(long) < sizeof(void*)) the pointer
is truncated and the guarantee may be proven false. The appropriate
return function is PyLong_FromVoidPtr, which returns a PyLong when that
is necessary to return the pointer without truncation.
[GvR: note that this means that id() can now return a long on Win32
platforms. This *might* break some code...]
- chunk #3:
Ensure no overflow in raw_input(). Granted, the user would have to pass
in >2GB of data, but it *is* a possible buffer overflow condition.
|
|
|
|
| |
tableii & friends markup family.
|
| |
|
| |
|
| |
|
| |
|
| |
| |
are and are not turned into bound methods; some confusion was noted by
Andrew Dalke.
In particular, it has to be noted that functions located on the class
instance are not turned into any sort of method; only those found via
the underlying class are.
|
| |
As I really do not have anything better to do at the moment, I have written
a patch to Python/marshal.c that prevents Python from dumping core when
trying to marshal a stack-bustingly deep (or recursive) data structure.
It just throws an exception; even slightly clever handling of recursive
data is what pickle is for...
[Fred Drake:] Moved magic constant 5000 to a #define.
This closes SourceForge patch #100645.
|
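The guard amounts to something like the following sketch; the macro
name, the struct, and the error plumbing are assumed, and the real code
sits at the top of the recursive writer in Python/marshal.c.  (The
entry further up this log later adds the matching decrement and the
initialization of the depth field.)

    #include "Python.h"

    #define MAX_MARSHAL_DEPTH 5000      /* the magic constant, now a #define;
                                           the actual macro name is not quoted here */
    typedef struct {
        int error;
        int depth;
    } writer_state;                     /* simplified stand-in for marshal.c's WFILE */

    static void
    write_object_sketch(PyObject *v, writer_state *p)
    {
        if (++p->depth > MAX_MARSHAL_DEPTH) {
            p->error = 1;               /* caller turns this into an exception
                                           instead of letting the C stack blow up */
            return;
        }
        /* ... write v, recursing into write_object_sketch() for
           contained objects ... */
    }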
| |
Testing: test_array.py was also extended to check that one can set the
full range of values for each of the integral signed and unsigned
array types.
This closes SourceForge patch #100506.
|
| |
The cause: Relatively recent (last month) patches to getargs.c added
overflow checking to the PyArg_Parse*() integral formatters, thereby
restricting 'b' to unsigned char values and 'h', 'i', and 'l' to
signed integral values (i.e. if the incoming value is outside the
specified bounds you get an OverflowError; previously it silently
overflowed).
The problem: This broke the array module (as Fredrik pointed out)
because *its* formatters relied on the loose bounds checking that let
both signed and unsigned ranges pass through PyArg_Parse*()'s
formatters.
The fix: This patch fixes the array module to work with the stricter
bounds checking now in PyArg_Parse*().
How: If the type signature of a formatter in the arraymodule exactly
matches one in PyArg_Parse*(), then use that directly. If there is no
equivalent type signature in PyArg_Parse*() (e.g. there is no unsigned
int formatter in PyArg_Parse*()), then use the next larger one and do
some extra bounds checking in the array module.
This partially closes SourceForge patch #100506.
|
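As an illustration of the "use the next formatter up and bounds-check
in the module" approach, a hedged sketch for an array type that has no
exact PyArg_Parse*() formatter (unsigned short here; the helper name
and messages are assumed, not taken from arraymodule.c):

    #include <limits.h>
    #include "Python.h"

    static int
    HH_setitem(PyObject *v, unsigned short *where)
    {
        long x;
        if (!PyArg_Parse(v, "l;array item must be integer", &x))
            return -1;
        if (x < 0 || x > USHRT_MAX) {            /* the extra check done here */
            PyErr_SetString(PyExc_OverflowError,
                            "unsigned short is out of range");
            return -1;
        }
        *where = (unsigned short)x;
        return 0;
    }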
|
|
|
| |
Documentation updates related to the addition of openpty() and forkpty().
|
| |
Perfect hash table generator. Outputs a Python extension module
which provides access to the hash table (which is stored in static
C data) using custom code.
This module can currently only generate code for the ucnhash
module, but can easily be adapted to produce perfect hash tables
for other tasks where fast lookup in large tables is needed.
By Bill Tutt.
|
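As a toy illustration of the idea (this is not the generated ucnhash
code, and a real generator computes a far less trivial hash function):
for the fixed key set below, strlen(key) % 4 happens to be
collision-free, so lookup is one hash, one index into static C data,
and one confirming compare.

    #include <string.h>

    /* Each key is stored at its hash slot; one strcmp() confirms the hit. */
    static const char *keys[4] = { "elephant", "goose", "ox", "cat" };

    static int
    lookup(const char *key)
    {
        unsigned int idx = (unsigned int)strlen(key) % 4;  /* the "perfect hash" */
        if (strcmp(keys[idx], key) != 0)
            return -1;                                     /* not in the table */
        return (int)idx;                                   /* slot doubles as the value */
    }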
|
|
|
|
| |
Generator for the new ucnhash module (ucnhash.h|c). Uses perfect_hash.py
to create the ucnhash module.
|
|
|
|
|
|
| |
Utility extension module needed by perfect_hash.py
By Bill Tutt.
|
| |
Patch to the standard unicode-escape codec which dynamically
loads the Unicode name-to-ordinal mapping from the module
ucnhash.
By Bill Tutt.
|
|
|
|
| |
Added new ucnhash module by Bill Tutt.
|
|
|
|
| |
Added new ucnhash module.
|
|
|
|
| |
Updated test output.
|
|
|
|
|
| |
Added tests for the new Unicode character name support in the
standard unicode-escape codec.
|