Convert Py_ssize_t using PyInt_FromSsize_t
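As an illustration of the conversion this entry describes, here is a minimal sketch of a Python 2.5-era C function that reports a Py_ssize_t value through PyInt_FromSsize_t() rather than truncating it through PyInt_FromLong(); the function name is hypothetical.

#include <Python.h>

/* Hypothetical helper: return an object's length to Python code.
 * PyInt_FromSsize_t() preserves the full Py_ssize_t range (Python 2.5+). */
static PyObject *
get_length(PyObject *self, PyObject *obj)
{
    Py_ssize_t n = PyObject_Length(obj);
    if (n < 0)
        return NULL;                  /* length lookup failed */
    return PyInt_FromSsize_t(n);
}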
|
http://mail.python.org/pipermail/python-dev/2006-February/060524.html
|
In C++, it's an error to pass a string literal to a char* function
without a const_cast(). Rather than require every C++ extension
module to put a cast around string literals, fix the API to state the
const-ness.
I focused on parts of the API where people usually pass literals:
PyArg_ParseTuple() and friends, Py_BuildValue(), PyMethodDef, the type
slots, etc. Predictably, there was a large set of functions that
needed to be fixed as a result of these changes. The most pervasive
change was to make the keyword args list passed to
PyArg_ParseTupleAndKeywords() a const char *kwlist[].
One cast was required as a result of the changes: A type object
mallocs the memory for its tp_doc slot and later frees it.
PyTypeObject says that tp_doc is const char *; but if the type was
created by type_new(), we know it is safe to cast to char *.
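A minimal sketch of what the const-correct API buys an extension author: string literals can go straight to PyArg_ParseTuple(), Py_BuildValue() and PyMethodDef without casts, in C as well as C++. The module, function and method names below are made up for illustration.

#include <Python.h>

/* Hypothetical method: parse (text, count) and hand the pair back.
 * The format strings, method name and docstring are all literals. */
static PyObject *
example_pair(PyObject *self, PyObject *args)
{
    const char *text;
    int count;

    if (!PyArg_ParseTuple(args, "si", &text, &count))
        return NULL;
    return Py_BuildValue("(si)", text, count);
}

static PyMethodDef example_methods[] = {
    {"pair", example_pair, METH_VARARGS,
     "Return the (text, count) pair."},          /* literal docstring */
    {NULL, NULL, 0, NULL}
};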
|
Prevents a collision pattern that occurs with nested tuples.
(Yitz Gale provided code that repeatably demonstrated the weakness.)
|
many more prime multipliers and that performs well on collision tests.
|
(Basic approach and test concept by Tim Peters.)
* Improved the hash to reduce collisions.
* Added the torture test to the test suite.
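The entries above all concern the tuple hash. What follows is a sketch of the xor-and-multiply scheme they describe; the constants match the CPython 2.4-era tuplehash, and the function is shown only to illustrate how updating the multiplier at every position pulls in many different multipliers.

#include <Python.h>

/* Sketch (Python 2.4-era, long-based arithmetic): combine element hashes
 * with xor-and-multiply, varying the multiplier per position. */
static long
tuplehash_sketch(PyObject **items, Py_ssize_t len)
{
    long x = 0x345678L;
    long mult = 1000003L;

    while (--len >= 0) {
        long y = PyObject_Hash(*items++);
        if (y == -1)
            return -1;                       /* unhashable element */
        x = (x ^ y) * mult;
        mult += (long)(82520L + len + len);  /* different multiplier per slot */
    }
    x += 97531L;
    if (x == -1)
        x = -2;                              /* -1 is reserved for errors */
    return x;
}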
|
off a new compiler warning under MSVC6).
|
the newly created tuples, but tuples added to the freelist are now cleared in
tupledealloc already (which is very cheap, because we are already
Py_XDECREF'ing all elements anyway).
Python should have a standard Py_ZAP macro like ZAP in pystate.c.
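A rough sketch of the clearing step the entry describes (not the actual tupledealloc source): each slot is nulled out in the same pass that releases it, so a tuple parked on the free list never keeps stale pointers alive.

#include <Python.h>

/* Clear a tuple's item slots while releasing them. */
static void
clear_items(PyTupleObject *op)
{
    Py_ssize_t i = Py_SIZE(op);
    while (--i >= 0) {
        Py_XDECREF(op->ob_item[i]);   /* needed anyway */
        op->ob_item[i] = NULL;        /* the cheap extra step */
    }
}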
|
and list.extend(). Factoring the inner loops to remove the constant
structure references and fixed offsets gives speedups ranging from
20% to 30%.
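An illustration of the kind of factoring the entry mentions (not the actual listobject.c code): the copy loop walks cached local pointers instead of re-reading the list object's fields and re-adding a fixed offset on every iteration.

#include <Python.h>

/* Copy n borrowed references into a list, starting at index `at`. */
static void
copy_into(PyListObject *self, Py_ssize_t at, PyObject **src, Py_ssize_t n)
{
    PyObject **dest = self->ob_item + at;   /* hoisted out of the loop */
    Py_ssize_t i;

    for (i = 0; i < n; i++) {
        PyObject *v = src[i];
        Py_INCREF(v);                       /* the list owns its items */
        dest[i] = v;
    }
}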
|
useful for rapidly building argument tuples without having to invoke the
more sophisticated machinery of Py_BuildValue().
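The subject line of this entry is not preserved here; PyTuple_Pack() matches the description, so the following usage sketch assumes that is the API in question.

#include <Python.h>

/* Build a 2-tuple of existing objects directly.
 * (PyTuple_Pack() is assumed to be the API the entry refers to.) */
static PyObject *
make_pair(PyObject *first, PyObject *second)
{
    /* Returns a new reference, or NULL on failure.  The slower
     * equivalent would be Py_BuildValue("(OO)", first, second). */
    return PyTuple_Pack(2, first, second);
}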
|
Reverted a Py2.3b1 change to iterators in subclasses of list and tuple.
They had been changed to use __getitem__ whenever it had been overridden
in the subclass.
This caused some usability and performance problems. Also, it was
inconsistent with the rest of Python, where many container methods
access the underlying object directly without first checking for
an overridden getter. Users needing a change in iterator behavior
should override it directly.
|
functions with different signatures.
|
As a side issue on this bug, it was noted that list and tuple iterators
used macros to directly access containers and would not recognize
__getitem__ overrides. If the method is overridden, the patch returns
a generic sequence iterator which calls the __getitem__ method; otherwise,
it returns a high-speed custom iterator with direct access to container
elements.
|
to more accurately describe what the function does.
Suggested by Thomas Wouters.
|
Factors out the common case of returning self.
|
types. The special handling for these can now be removed from save_newobj().
Add some testing for this.
Also add support for setting the 'fast' flag on the Python Pickler class,
which suppresses use of the memo.
|
Will backport.
|
out of the loop.
|
comments everywhere that bugged me: /* Foo is inlined */ instead of
/* Inline Foo */. Somehow the "is inlined" phrase always confused me
for half a second (thinking, "No it isn't" until I added the missing
"here"). The new phrase is hopefully unambiguous.
|
tupleobject.c. Makes the code in iterobject.c cleaner
and speeds up the general case by not checking for
tuples every time. SF Patch #592065.
|
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched
the static keyword when it was used with a forward declaration of a
static initialized structure. Standard C allows the forward declaration
with static, and we've decided to stop catering to broken C compilers.
(In fact, we expect that the compilers are all fixed eight years later.)
I'm leaving staticforward and statichere defined in object.h as
static. This is only for backwards compatibility with C extensions
that might still use it.
XXX I haven't updated the documentation.
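For illustration, the pattern in question is an ordinary C forward (tentative) declaration of a static structure that is initialized further down, the same shape as a forward-declared static PyTypeObject:

/* Standard C accepts this; only the broken compilers mentioned above
 * needed the staticforward/statichere spelling. */
struct node { int value; struct node *next; };

static struct node tail;                 /* forward declaration */
static struct node head = { 1, &tail };  /* may already refer to it */
static struct node tail = { 2, NULL };   /* the initialized definition */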
|
helper macros to something saner, and used them appropriately in other
files too, to reduce #ifdef blocks.
classobject.c, instance_dealloc(): One of my worst Python Memories is
trying to fix this routine a few years ago when COUNT_ALLOCS was defined
but Py_TRACE_REFS wasn't. The special-build code here is way too
complicated. Now it's much simpler. Difference: in a Py_TRACE_REFS
build, the instance is no longer in the doubly-linked list of live
objects while its __del__ method is executing, and that may be visible
via sys.getobjects() called from a __del__ method. Tough -- the object
is presumed dead while its __del__ is executing anyway, and not calling
_Py_NewReference() at the start allows enormous code simplification.
typeobject.c, call_finalizer(): The special-build instance_dealloc()
pain apparently spread to here too via cut-'n-paste, and this is much
simpler now too. In addition, I didn't understand why this routine
was calling _PyObject_GC_TRACK() after a resurrection, since there's no
plausible way _PyObject_GC_UNTRACK() could have been called on the
object by this point. I suspect it was left over from pasting the
instance_dealloc() code. Instead asserted that the object is still
tracked. Caution: I suspect we don't have a test that actually
exercises the subtype_dealloc() __del__-resurrected-me code.
|
When resizing a tuple, zero out the memory starting at the end of the
old tuple not at the beginning of the old tuple.
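A sketch of the corrected zeroing (illustrative, not the exact source): the cleared region starts at the end of the old items, so surviving elements are left alone.

#include <Python.h>
#include <string.h>

/* NULL out only the slots that were added by the resize. */
static void
zero_new_slots(PyObject **items, Py_ssize_t oldsize, Py_ssize_t newsize)
{
    if (newsize > oldsize)
        memset(items + oldsize, 0, (newsize - oldsize) * sizeof(PyObject *));
}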
|
Convert loops to memset()s.
|
[ 400998 ] experimental support for extended slicing on lists
somewhat spruced up and better tested than it was when I wrote it.
Includes docs & tests. The whatsnew section needs expanding, and arrays
should support extended slices -- later.
|
The fix makes it possible to call PyObject_GC_UnTrack() more than once
on the same object, and then moves the PyObject_GC_UnTrack() call to
*before* the trashcan code is invoked.
BUGFIX CANDIDATE!
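A sketch of the ordering described above, in the shape of a typical deallocator for a GC-tracked object (the object type is hypothetical; the trashcan macros use their older CPython spelling):

#include <Python.h>

typedef struct {
    PyObject_HEAD
    PyObject *payload;
} ExampleObject;

static void
example_dealloc(ExampleObject *op)
{
    PyObject_GC_UnTrack(op);          /* now safe even if already untracked */
    Py_TRASHCAN_SAFE_BEGIN(op)
    Py_XDECREF(op->payload);
    PyObject_GC_Del(op);
    Py_TRASHCAN_SAFE_END(op)
}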
|
out the for loop at the end intended to zero out new items wasn't
doing anything, because sv->ob_size was already equal to newsize. The
fix slightly refactors the function, introducing a variable oldsize
and doing away with sizediff (which was used only once), and using
oldsize and newsize consistently. I also added comments explaining
what the two for loops do. (Looking at the CVS annotation of this
function, it's no miracle a bug crept in -- this has been patched by
many different folks! :-)
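For illustration, the two loops the entry describes might look like this, with oldsize and newsize used consistently (a sketch, not the actual _PyTuple_Resize body):

#include <Python.h>

/* Release items dropped by a shrink; clear slots added by a grow. */
static void
adjust_items(PyTupleObject *v, Py_ssize_t oldsize, Py_ssize_t newsize)
{
    Py_ssize_t i;

    for (i = newsize; i < oldsize; i++) {      /* shrinking */
        Py_XDECREF(v->ob_item[i]);
        v->ob_item[i] = NULL;
    }
    for (i = oldsize; i < newsize; i++)        /* growing */
        v->ob_item[i] = NULL;
}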
|
many types were subclassable but had a xxx_dealloc function that
called PyObject_DEL(self) directly instead of deferring to
self->ob_type->tp_free(self). It is permissible to set tp_free in the
type object directly to _PyObject_Del, for non-GC types, or to
_PyObject_GC_Del, for GC types. Still, PyObject_DEL was a tad faster,
so I'm fearing that our pystone rating is going down again. I'm not
sure if doing something like

    void xxx_dealloc(PyObject *self)
    {
        if (PyXxxCheckExact(self))
            PyObject_DEL(self);
        else
            self->ob_type->tp_free(self);
    }

is any faster than always calling the else branch, so I haven't
attempted that -- however those types whose own dealloc is fancier
(int, float, unicode) do use this pattern.
|
Disable t[:], t*0, t*1 optimizations when t is of a tuple subclass type.
|
tupledealloc(): only feed the free list when the type is really a
tuple, not a subtype. Otherwise, use PyObject_GC_Del().
_PyTuple_Resize(): disallow using this for tuple subtypes.
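A sketch of the dispatch this entry describes (the free-list bookkeeping itself is omitted): only exact tuples are recycled, while instances of subtypes go through ordinary GC deallocation.

#include <Python.h>

static void
release_tuple(PyTupleObject *op)
{
    if (PyTuple_CheckExact(op)) {
        /* ... push op onto the size-indexed free list here ... */
    }
    else {
        PyObject_GC_Del(op);
    }
}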
|
Gave Python linear-time repr() implementations for dicts, lists, strings.
This means, e.g., that repr(range(50000)) is no longer 50x slower than
pprint.pprint() in 2.2 <wink>.
I don't consider this a bugfix candidate, as it's a performance boost.
Added _PyString_Join() to the internal string API. If we want that in the
public API, fine, but then it requires runtime error checks instead of
asserts.
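A sketch of the linear-time idea (written against the public PyUnicode_Join(); the commit's internal _PyString_Join() played the same role for 8-bit strings): collect each element's repr in a list and join once at the end, instead of growing one string piece by piece, which is quadratic overall.

#include <Python.h>

/* Build repr(element) for every element of seq, then join the pieces
 * with sep in a single pass. */
static PyObject *
join_reprs(PyObject *seq, PyObject *sep)
{
    PyObject *pieces, *joined;
    Py_ssize_t i, n;

    n = PySequence_Size(seq);
    pieces = PyList_New(0);
    if (n < 0 || pieces == NULL)
        goto error;
    for (i = 0; i < n; i++) {
        PyObject *item = PySequence_GetItem(seq, i);
        PyObject *r = item ? PyObject_Repr(item) : NULL;
        int ok = (r != NULL) && PyList_Append(pieces, r) == 0;

        Py_XDECREF(item);
        Py_XDECREF(r);
        if (!ok)
            goto error;
    }
    joined = PyUnicode_Join(sep, pieces);
    Py_DECREF(pieces);
    return joined;

error:
    Py_XDECREF(pieces);
    return NULL;
}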
|
suggestion (modulo style).
|
_PyTuple_Resize().
|
Instead of raising a SystemError, just create a new tuple of the desired
size.
This fixes (at least) SF bug #420343.
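A usage sketch of the function in question: _PyTuple_Resize() takes the address of the caller's reference precisely because, as the entry says, it may hand back a brand-new tuple rather than resize in place.

#include <Python.h>

static PyObject *
grow_tuple(void)
{
    PyObject *t = PyTuple_New(0);      /* starts as the shared empty tuple */
    if (t == NULL)
        return NULL;
    if (_PyTuple_Resize(&t, 5) < 0)
        return NULL;                   /* t was released and set to NULL */
    /* The five new slots are NULL and must be filled (PyTuple_SET_ITEM)
     * before the tuple is exposed to other code. */
    return t;
}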