New buffer overflow checks for formatting strings.
By Trent Mick.
|
errors in some of the hash algorithms. For example, in float_hash and
complex_hash a certain part of the value is not included in the hash
calculation. See Tim's, Guido's, and my discussion of this on
python-dev in May under the title "fix float_hash and complex_hash for
64-bit *nix".
(2) The hash algorithms that use pointers (e.g. func_hash, code_hash)
are universally not correct on Win64 (they assume that sizeof(long) ==
sizeof(void*)).
As well, this patch significantly cleans up the hash code. It adds the
two functions _Py_HashDouble and _PyHash_VoidPtr, which the various
hashing routines are changed to use.
These help maintain the hash function invariant (a == b) =>
(hash(a) == hash(b)). I have added Lib/test/test_hash.py and
Lib/test/output/test_hash to test this for some cases.
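For illustration (an editorial sketch, not part of the patch), the
invariant can be checked from Python:

    # (a == b) must imply hash(a) == hash(b).
    # 2, 2.0 and 2+0j compare equal, so they must hash equal:
    assert hash(2) == hash(2.0) == hash(2+0j)
    assert hash(-1.0) == hash(-1)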
|
Avoid calling the dealloc function, previously triggered with
DECREF(inst). This caused a segfault in PyDict_GetItem (called with a
NULL dict) whenever creating inst->in_dict failed under low-memory
conditions.
|
Patch to the standard unicode-escape codec which dynamically
loads the Unicode name to ordinal mapping from the module
ucnhash.
By Bill Tutt.
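For illustration (editorial sketch, not from the patch), the loaded
mapping is what resolves \N{...} escapes in Unicode literals:

    assert u"\N{LATIN SMALL LETTER A}" == u"a"
    assert ord(u"\N{GREEK SMALL LETTER ALPHA}") == 0x3B1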
|
Better error message for "1 in unicodestring". Submitted
by Andrew Kuchling.
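A sketch of the case in question (editorial; the exact message text is
not quoted in this log):

    u"a" in u"abc"   # valid membership test
    # 1 in u"abc"    # raises TypeError; the message now explains that
                     # a character is required as the left operand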
|
This patch modifies the type structures of objects that
participate in GC. The object's tp_basicsize is increased when
GC is enabled. GC information is prefixed to the object to
maintain binary compatibility. GC objects also define the
tp_flag Py_TPFLAGS_GC.
|
This patch adds the type methods traverse and clear necessary for GC
implementation.
|
Simplify find code; this is a performance improvement on at least some
platforms.
|
Fixed a bug in PyUnicode_Count() which would have caused a
core dump in case of substring coercion failure.
Synchronized .count() with the string method of the same name
to return len(s)+1 for s.count('').
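For illustration, the empty-substring convention (the empty string
matches at each of the len(s)+1 positions):

    s = u"abc"
    assert s.count(u"") == len(s) + 1   # 4: before each char and at the end
    assert u"".count(u"") == 1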
|
This patch introduces PySequence_Fast and PySequence_Fast_GET_ITEM,
and modifies the list.extend method to accept any kind of sequence.
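A sketch of the visible change (editorial illustration): list.extend
previously insisted on a list argument; now any sequence is accepted:

    l = [1, 2]
    l.extend((3, 4))     # a tuple now works
    l.extend(u"ab")      # so does any other sequence, e.g. a Unicode string
    assert l == [1, 2, 3, 4, u"a", u"b"]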
|
This patch fixes an optimisation mystery in _PyUnicodeNew causing segfaults
on AIX when the interpreter is compiled with -O.
|
The error message refers to "append", yet the operation in
question is "concat".
|
The following patch adds "sq_contains" support to rangeobject, and enables
the already-written support for sq_contains in listobject and tupleobject.
The rangeobject "contains" code should be a bit more efficient than the
current default "in" implementation ;-) It might not get used much, but it's
not that much to add.
listobject.c and tupleobject.c already had code for sq_contains, and the
proper struct member was set, but the PyType structure was not extended to
include tp_flags, so the object-specific code was not getting called (Go
ahead, test it ;-). I also did this for the immutable_list_type in
listobject.c, even though it is probably never used. Symmetry and all that.
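For illustration (editorial sketch; rangeobject is xrange at the
Python level):

    # With sq_contains in place, the range object answers membership
    # itself instead of falling back to the generic iteration protocol.
    assert 50 in xrange(100)
    assert not (100 in xrange(100))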
|
Added code so that .isXXX() testing returns 0 for empty strings.
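For illustration:

    # An empty string has no characters satisfying the predicate.
    assert u"".isdigit() == 0
    assert u"".isalpha() == 0
    assert u"".isspace() == 0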
|
Fixed a typo and removed a debug printf(). Thanks to Finn Bock
for finding these.
|
Reported by Mark Hammond.
|
Fixed %c formatting to check for one character arguments. Thanks
to Finn Bock for finding this bug.
Added a fix for bug PR#348 which originated from not resetting
the globals correctly in _PyUnicode_Fini().
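A sketch of the %c check (editorial; the exact exception text is not
recorded here):

    u"%c" % 88        # u'X': an integer ordinal is accepted
    u"%c" % u"x"      # a one-character string is accepted
    # u"%c" % u"xy"   # now raises an error instead of misformatting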
|
Change the default encoding to 'ascii' (it was previously
defined as UTF-8).
Note: The implementation still uses UTF-8 to implement
the buffer protocol, so C APIs will still see UTF-8. This
is on purpose: rather than fixing the Unicode implementation,
the C APIs should be made Unicode aware.
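For illustration (editorial sketch of the Python-level effect):

    import sys
    assert sys.getdefaultencoding() == "ascii"
    u"abc" + "def"     # implicit coercion of ASCII bytes still works
    # u"abc" + "\xe9"  # raises UnicodeError: byte is not valid ASCII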
|
This patch corrects bounds checking in PyLong_FromLongLong. Currently, it does
not check properly for negative values when deciding whether the incoming
value fits in a long or unsigned long. This results in possible silent
truncation of the value for very large negative values.
|
Removed PyErr_BadArgument() calls and replaced them with more useful
error messages.
|
M.-A. Lemburg <mal@lemburg.com>:
Fixed a core dump in PyUnicode_Format().
|
Added support for user settable default encodings. The
current implementation uses a per-process global which
defines the value of the encoding parameter in case it
is set to NULL (meaning: use the default encoding).
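A sketch of the Python-level knob (editorial assumption that the
sys-module interface accompanies this change; site.py may remove the
setter after start-up):

    import sys
    sys.setdefaultencoding("latin-1")   # sets the per-process default
    assert sys.getdefaultencoding() == "latin-1"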
|
is required" (we can't say more because we don't know in which context
it is called).
|
Fix the string methods that implement slice-like semantics with
optional args (count, find, endswith, etc.) to properly handle
indices outside [INT_MIN, INT_MAX]. Previously the "i" formatter
for PyArg_ParseTuple was used to get the indices. These could overflow.
This patch changes the string methods to use the "O&" formatter with
the slice_index() function from ceval.c which is used to do the same
job for Python code slices (e.g. 'abcabcabc'[0:1000000000L]).
|
Fix the string methods that implement slice-like semantics with
optional args (count, find, endswith, etc.) to properly handle
indices outside [INT_MIN, INT_MAX]. Previously the "i" formatter
for PyArg_ParseTuple was used to get the indices. These could overflow.
This patch changes the string methods to use the "O&" formatter with
the slice_index() function from ceval.c which is used to do the same
job for Python code slices (e.g. 'abcabcabc'[0:1000000000L]). slice_index()
is renamed _PyEval_SliceIndex() and is now exported. As well, the return
values for success/fail were changed to make slice_index directly
usable as required by the "O&" formatter.
[GvR: shouldn't a similar patch be applied to unicodeobject.c?]
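For illustration of the overflow class being fixed (editorial sketch):

    s = 'abcabcabc'
    assert s[0:1000000000L] == s                  # the slice case above
    assert s.count('abc', 0, 10000000000L) == 3   # end > INT_MAX now clamps
    assert s.find('c', 0, 2**40) == 2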
|
gave bogus results for chars in the range 128-255, because their
implementation was using signed characters. Fixed this by using
unsigned character pointers (as opposed to using Py_CHARMASK()).
|
strings _are_ valid!
|
For more comments, read the patches@python.org archives.
For documentation read the comments in mymalloc.h and objimpl.h.
(This is not exactly what Vladimir posted to the patches list; I've
made a few changes, and Vladimir sent me a fix in private email for a
problem that only occurs in debug mode. I'm also holding back on his
change to main.c, which seems unnecessary to me.)
|
a size of 0 *is* illegal.
|
Fixes the MBCS codec to work correctly with zero length strings.
|
Fixed \OOO interpretation for Unicode objects. \777 now
correctly produces the Unicode character with ordinal 511.
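For illustration:

    # Octal escapes in Unicode literals now cover the full \000-\777 range.
    assert u"\777" == unichr(511)   # U+01FF
    assert u"\101" == u"A"          # octal 101 == 65 == 'A'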
|
Doc strings can now be given as Unicode strings.
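For illustration:

    def f():
        u"A Unicode docstring, picked up as f.__doc__."
    assert type(f.__doc__) is type(u"")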
|
Fixed a reference leak in the allocator.
Renamed utf8_string to _PyUnicode_AsUTF8String() and made
it external for use by other parts of the interpreter.
|
The previous checkin (2.84) added a PyErr_Format call that made
raising an AttributeError much more expensive. In general this doesn't
matter, except that checks for __init__ and __del__ methods, where
exceptions are caught and cleared in C, also got much more expensive.
The fix is to split instance_getattr1 into two calls:
instance_getattr2 checks the instance and the class for the attribute
and returns it, or returns NULL on error. It does not raise an
exception.
instance_getattr1 does rexec checks, then calls instance_getattr2. It
raises an exception if instance_getattr2 returns NULL.
PyInstance_New and instance_dealloc now call instance_getattr2
directly.
|
Improvements:
- no longer needs any extra memory
- has no relationship to tstate
- works in debug mode
- can easily be modified for free threading (hi Greg:)
Side effects:
Trashcan does change the order of object destruction.
Preventing that would be quite an immense effort, as
my attempts have shown. This version always works
the same, with debug mode or not. The slightly
changed destruction order should therefore be no problem.
Algorithm:
While the old idea of delaying the destruction of some
objects at a certain recursion level was kept, we now
no longer allocate an object to hold these objects.
The delayed objects are instead chained together
via their ob_type field. The type is encoded via
ob_refcnt. When it comes to the destruction of the
chain of waiting objects, the topmost object is popped
off the chain and revived with type and refcount 1,
then it gets a normal Py_DECREF.
I am confident that this solution is near optimum
for minimizing side effects and code bloat.
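An editorial sketch of the situation the trashcan mechanism guards
against: tearing down a deeply nested container would otherwise
recurse once per level and can overflow the C stack:

    l = []
    for i in xrange(100000):
        l = [l]          # list nested 100000 levels deep
    # Dropping the last reference destroys the whole chain; trashcan
    # delays deallocations beyond a fixed recursion depth and drains
    # them iteratively via the chaining described above.
    del l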
|
_PyTuple_Resize(). In addition, a change suggested by Jeremy Hylton
to limit the size of the free lists is also merged into this patch.
Charles wrote initially:
"""
Test Case: run the following code:
    class Nothing:
        def __len__(self):
            return 5
        def __getitem__(self, i):
            if i < 3:
                return i
            else:
                raise IndexError, i
    def g(a, *b, **c):
        return
    for x in xrange(1000000):
        g(*Nothing())
and watch Python's memory use go up and up.
Diagnosis:
The analysis begins with the call to PySequence_Tuple at line 1641 in
ceval.c - the argument to g is seen to be a sequence but not a tuple,
so it needs to be converted from an abstract sequence to a concrete
tuple. PySequence_Tuple starts off by creating a new tuple of length
5 (line 1122 in abstract.c). Then at line 1149, since only 3 elements
were assigned, _PyTuple_Resize is called to make the 5-tuple into a
3-tuple. When we're all done the 3-tuple is decrefed, but rather than
being freed it is placed on the free_tuples cache.
The basic problem is that the 3-tuples are being added to the cache
but never picked up again, since _PyTuple_Resize doesn't make use of
the free_tuples cache. If you are resizing a 5-tuple to a 3-tuple and
there is already a 3-tuple in free_tuples[3], instead of using this
tuple, _PyTuple_Resize will realloc the 5-tuple to a 3-tuple. It
would be more efficient to use the existing 3-tuple and cache the
5-tuple.
By making _PyTuple_Resize aware of the free_tuples (just as
PyTuple_New is), we not only save a few calls to realloc, but also
prevent this misbehavior whereby tuples are being added to the
free_tuples list but never properly "recycled".
"""
And later:
"""
This patch replaces my submission of Sun, 16 Apr and addresses Jeremy
Hylton's suggestions that we also limit the size of the free tuple
list. I chose 2000 as the maximum number of tuples of any particular
size to save.
There was also a problem with the previous version of this patch
causing a core dump if Python was built with Py_TRACE_REFS. This is
fixed in the below version of the patch, which uses tupledealloc
instead of _Py_Dealloc.
"""
|
Note that comparisons of deeply nested objects can still dump core in
extreme cases.
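For illustration of the remaining limitation (editorial sketch):

    a = b = None
    for i in xrange(100000):
        a, b = [a], [b]
    # a == b compares recursively, one C stack frame per nesting
    # level, and in extreme cases like this can still crash.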
|
The maxsplit functionality in .splitlines() was replaced by the keepends
functionality which allows keeping the line end markers together
with the string.
Added support for '%r' % obj: this inserts repr(obj) rather
than str(obj).
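For illustration:

    assert u"one\ntwo\n".splitlines() == [u"one", u"two"]
    assert u"one\ntwo\n".splitlines(1) == [u"one\n", u"two\n"]  # keepends
    assert "%r" % u"abc" == "u'abc'"   # repr(obj), not str(obj)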
|
Added a few missing whitespace Unicode char mappings.
Thanks to Brian Hooper.