Closes bug #885293 (thanks, Josiah Carlson).
|
patch.
|
Remove BAD_EXEC_PROTOTYPES (leftover from IRIX 4 demolition).
|
(Contributed by Greg Chapman.)
Since this only changes the error message, I doubt that it should be
backported.
|
separators on str.split() and str.rsplit().
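
A minimal illustration of the sep handling on the two methods
(standard string API; rsplit() is new in 2.4):

    s = "a::b::c::d"
    print(s.split("::"))      # ['a', 'b', 'c', 'd']
    print(s.rsplit("::", 1))  # ['a::b::c', 'd'] -- splits from the right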
|
Formerly, the length was only fetched from sequence objects.
Now, any object that reports its length can benefit from pre-sizing.
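
A sketch of the kind of object that now benefits; the class is made up
for illustration, but any non-sequence iterable that reports a length
qualifies:

    class Window(object):
        """Iterable that is not a sequence but reports its length."""
        def __init__(self, n):
            self.n = n
        def __iter__(self):
            return iter(range(self.n))
        def __len__(self):          # consulted for pre-sizing
            return self.n

    items = list(Window(1000))      # result storage allocated up front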
|
Formerly, the length was only fetched from sequence objects.
Now, any object that reports its length can benefit from pre-sizing.
On one sample timing, it gave a threefold speedup for list(s) where s
was a set object.
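
A timing of that flavour can be reproduced along these lines (numbers
vary by machine and build):

    from timeit import Timer
    t = Timer('list(s)', 's = set(range(10000))')
    print(min(t.repeat(3, 1000)))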
|
* Speed up intersection whenever PyDict_Next can be used.
|
The special-case code that was removed could return a value indicating
success but leave an exception set. test_fileinput failed in a debug
build as a result.
|
which can be reviewed via
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2003-12/1011.html
Duncan Booth investigated, and discovered that an "optimisation" was
in fact a pessimisation for small numbers of elements in a source list,
compared to not having the optimisation, although with large numbers
of elements in the source list the optimisation was quite beneficial.
He posted his change to comp.lang.python (but not to SF).
Further research has confirmed his assessment that the optimisation only
becomes a net win when the source list has more than 100 elements.
I also found that the optimisation could apply to tuples as well,
but the gains only arrive with source tuples larger than about 320
elements and are nowhere near as significant as the gains with lists
(~95% gain @ 10000 elements for lists, ~20% gain @ 10000 elements for
tuples), so I haven't proceeded with this.
The code, as it was, applied the optimisation to list subclasses as
well, and this also appears to be a net loss for all reasonably sized
sources (~80-100% for up to 100 elements, ~20% for more than 500
elements; I tested up to 10000 elements).
Duncan also suggested special casing empty lists, which I've extended
to all empty sequences.
On the basis that list_fill() is only ever called with a list for the
result argument, testing for the source being the destination now
happens before testing source types.
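
A sketch of the kind of measurement used to locate the crossover
point (sizes and gains will differ by machine and build):

    from timeit import Timer
    for n in (10, 100, 1000, 10000):
        t = Timer('list(src)', 'src = list(range(%d))' % n)
        print(n, min(t.repeat(3, 1000)))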
|
using specialized splitter for 1 char sep.
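
For reference, which calls hit the fast path:

    "a,b,c".split(",")       # 1-char sep: specialized splitter
    "a::b::c".split("::")    # longer sep: general algorithm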
|
bit by checking the value of UCHAR_MAX in Include/Python.h. There was a
check in Objects/stringobject.c. Remove that. (Note that we don't define
UCHAR_MAX if it's not defined, as the old test did.)
|
(Pointy hat goes to perky)
|
sorted() becomes a regular function instead of a classmethod.
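
As a plain built-in it is applied directly to any iterable:

    sorted("bca")                    # ['a', 'b', 'c']
    sorted((3, 1, 2))                # [1, 2, 3]
    sorted(range(5), reverse=True)   # [4, 3, 2, 1]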
|
SF feature request #801847.
Original patch is written by Sean Reifschneider.
|
* Use Py_RETURN_NONE everywhere.
* Fix up the firstpass check for the tp_print slot.
|
Simplifies and speeds up the code.
|
* Used the flag to optimize set.__contains__(), dict.__contains__(),
dict.__getitem__(), and list.__getitem__().
|
deallocating garbage pointers; saved_ob_item and empty_ob_item.
(Reviewed by Raymond Hettinger)
|
can run" bugs as discussed in
[ 848856 ] couple of new list.sort bugs
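
One of the scenarios involved is mutation of the list while it is
being sorted; since these fixes it is detected and reported rather
than crashing (the key= form needs 2.4+):

    data = list(range(10))
    try:
        # key mutates the list but still returns a comparable value
        data.sort(key=lambda x: (data.append(x), x)[1])
    except ValueError as e:
        print(e)    # list modified during sort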
|
and left shifts. (Thanks to Kalle Svensson for SF patch 849227.)
This addresses most of the remaining semantic changes promised by
PEP 237, except for repr() of a long, which still shows the trailing
'L'. The PEP appears to promise warnings for operations that
changed semantics compared to Python 2.3, but this is not
implemented; we've suffered through enough warnings related to
hex/oct literals and I think it's best to be silent now.
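
The visible effect at the prompt on a 32-bit 2.4 build (on 2.3 the
same expressions produced FutureWarnings and truncated results):

    hex(-1)    # '-0x1' -- signed result, no FutureWarning
    1 << 40    # 1099511627776L -- quietly widens to a long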
|
an exception raised by the key function.
(Suggested by Michael Hudson.)
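
A quick way to exercise that path; the list must come through intact
when a key computation fails (the key= form needs 2.4+):

    data = [3, "two", 1]
    try:
        data.sort(key=lambda x: x * 1.0)   # fails on the string
    except TypeError:
        pass
    print(data)    # [3, 'two', 1] -- left unchanged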
|
than PySequence_Contains() and more clearly applicable to dicts.
Apply the new function in setobject.c where __contains__ checking is
ubiquitous.
|
unsigned int (on a 32-bit machine), by adding an explicit 'u' to the
literal (a prime used to improve the hash function for frozenset).
|
* Add more tests.
* Refactor and neaten the code a bit.
* Rename union_update() to update().
* Improve the algorithms (making them closer to sets.py).
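
After the rename, the mutating union is spelled the same way as
dict.update():

    s = set("ab")
    s.update("bcd")    # formerly union_update()
    sorted(s)          # ['a', 'b', 'c', 'd']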
|
function.
* Add a better test for deepcopying.
* Add tests to show the __init__() function works like it does for list
and tuple. Add related test.
* Have shallow copies of frozensets return self. Add related test.
* Have frozenset(f) return f if f is already a frozenset. Add related test.
* Beefed up some existing tests.
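
The copy behaviour from the list above, as seen at the prompt:

    f = frozenset([1, 2, 3])
    frozenset(f) is f    # True -- returned as-is, no rebuild
    f.copy() is f        # True -- shallow copy returns self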
|
by the function object or by the method object, the function
object's attribute usually wins. Christian Tismer pointed out that
this is really a mistake, because this only happens for special
methods (like __reduce__) where the method object's version is
really more appropriate than the function's attribute. So from now
on, all method attributes will have precedence over function
attributes with the same name.
|
Brings the functionality back in line with sets.py.
|
(Requested by Alex Martelli.)
|
* Improve the hash function to increase the chance that distinct sets will
have distinct xor'd hash totals.
* Use PyDict_Merge where possible (it is faster than an equivalent iter/set
pair).
* Don't rebuild dictionaries where the input already has one.
|
Also SF patch 843455.
This is a critical bugfix.
I'll backport to 2.3 maint, but not beyond that. The bugs this fixes
have been there since weakrefs were introduced.