Fixed a half dozen ways in which general dict comparison could crash
Python (even cause Win98SE to reboot) in the presence of key and/or
value comparison routines that mutate the dict during dict comparison.
Bugfix candidate.
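
For illustration only (nothing below comes from the patch itself; the
class name is invented), the kind of adversarial key the fix has to
survive is one whose comparison mutates a dict while the two dicts are
being compared:

    # Hypothetical sketch: __eq__ empties the other dict in mid-comparison.
    # The point of the fix is that this must never crash the interpreter;
    # any ordinary result or exception is acceptable.
    class SaboteurKey:
        def __init__(self, victim):
            self.victim = victim
        def __hash__(self):
            return 42
        def __eq__(self, other):
            self.victim.clear()        # resize a dict during the comparison
            return True

    d1, d2 = {}, {}
    d1[SaboteurKey(d2)] = "value"
    d2[SaboteurKey(d1)] = "value"
    try:
        d1 == d2                       # must not take the interpreter down
    except Exception:
        pass                           # an ordinary exception is fine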
|
d1 == d2 and d1 != d2 now work even if the keys and values in d1 and d2
don't support comparisons other than ==, and testing dicts for equality
is faster now (especially when inequality obtains).
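
A minimal sketch of what this enables, assuming a value type (invented
here) that supports == but no ordering at all:

    class Token:
        def __init__(self, name):
            self.name = name
        def __eq__(self, other):
            return isinstance(other, Token) and self.name == other.name
        def __hash__(self):
            return hash(self.name)
        # no __lt__/__cmp__: only equality is available

    d1 = {"a": Token("x"), "b": Token("y")}
    d2 = {"a": Token("x"), "b": Token("y")}
    assert d1 == d2 and not (d1 != d2)   # only == on keys/values is needed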
|
NEEDS DOC CHANGES.
More AttributeErrors transmuted into TypeErrors, in test_b2.py, and,
again, this strikes me as a good thing.
This checkin completes the iterator generalization work that obviously
needed to be done. Can anyone think of others that should be changed?
|
NEEDS DOC CHANGES.
A few more AttributeErrors turned into TypeErrors, but in test_contains
this time.
The full story for instance objects is pretty much unexplainable, because
instance_contains() tries its own flavor of iteration-based containment
testing first, and PySequence_Contains doesn't get a chance at it unless
instance_contains() blows up. A consequence is that
    some_complex_number in some_instance
dies with a TypeError unless some_instance.__class__ defines __iter__ but
does not define __getitem__.
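
Leaving the instance_contains() wrinkle aside, the generalized
containment test itself is easy to picture; a rough sketch (the class is
invented) of "in" working through __iter__ alone:

    class Bag:
        def __init__(self, *items):
            self.items = list(items)
        def __iter__(self):            # no __contains__, no __getitem__
            return iter(self.items)

    b = Bag(1, 2, 3)
    assert 2 in b                      # containment falls back to iteration
    assert 5 not in b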
|
to string.join(), so that when the latter figures out in midstream that
it really needs unicode.join() instead, unicode.join() can actually get
all the sequence elements (i.e., there's no guarantee that the sequence
passed to string.join() can be iterated over *again* by unicode.join(),
so string.join() must not pass on the original sequence object anymore).
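
The constraint behind this is simply that an iterator can be walked only
once, so the original object cannot be handed to unicode.join() for a
second pass; a tiny sketch of that fact:

    elements = iter(["a", "b", "c"])
    first = list(elements)             # consumes the iterator: ["a", "b", "c"]
    second = list(elements)            # nothing left on a second pass: []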
|
because PySequence_Fast() started working for free as soon as
PySequence_Tuple() learned how to work with iterators. For some reason
unicode.join() still doesn't work, though.
|
NEEDS DOC CHANGES.
This one surprised me! While I expected tuple() to be a no-brainer, turns
out it's actually dripping with consequences:
1. It will *allow* the popular PySequence_Fast() to work with any iterable
object (code for that not yet checked in, but should be trivial).
2. It caused two std tests to fail. This is because some places used
PySequence_Tuple() (the C spelling of tuple()) as an indirect way to test
whether something *is* a sequence. But tuple() code only looked for the
existence of sq->item to determine that, and e.g. an instance passed
that test whether or not it supported the other operations tuple()
needed (e.g., __len__). So some things the tests *expected* to fail
with an AttributeError now fail with a TypeError instead. This looks
like an improvement to me; e.g., test_coercion used to produce 559
TypeErrors and 2 AttributeErrors, and now they're all TypeErrors. The
error details are more informative too, because the places calling this
were *looking* for TypeErrors in order to replace the generic tuple()
"not a sequence" msg with their own more specific text, and
AttributeErrors snuck by that.
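
A rough sketch of the user-visible effect (not taken from the test
suite): tuple() now accepts any iterable, and a non-iterable is rejected
with a TypeError:

    it = iter([1, 2, 3])               # has neither __len__ nor __getitem__
    assert tuple(it) == (1, 2, 3)

    try:
        tuple(3.14)                    # not a sequence and not iterable
    except TypeError:
        pass                           # rejected uniformly with TypeError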
|
free to do one!
|
NEEDS DOC CHANGES.
|
NEEDS DOC CHANGES.
Possibly contentious: The first time s.next() raises StopIteration (for
a given map argument s) is the last time map() *tries* s.next(). That
is, if other sequence args are longer, s will never again contribute
anything but None values to the result, even if trying s.next() again
could yield another result. This is the same behavior map() used to have
wrt IndexError, so it's the only way to be wholly backward-compatible.
I'm not a fan of letting StopIteration mean "try again later" anyway.
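
A Python 2.x sketch of that rule (the class is invented; Python 3's
map() simply stops at the shortest input instead of padding with None):

    class Flaky:
        # yields 1, raises StopIteration once, then would yield 3, 4, ...
        def __init__(self):
            self.calls = 0
        def __iter__(self):
            return self
        def next(self):                # Python 2 iterator protocol
            self.calls += 1
            if self.calls == 2:
                raise StopIteration    # a would-be "try again later"
            return self.calls

    pairs = map(None, Flaky(), "abcd")
    # [(1, 'a'), (None, 'b'), (None, 'c'), (None, 'd')]
    # the first StopIteration is final; Flaky is never asked again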
|
NEEDS DOC CHANGES.
|
filter() to no longer insist that len(seq) be defined.
NEEDS DOC CHANGES.
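
A small sketch of the relaxed requirement: any iterable will do, and no
len() is needed:

    evens = filter(lambda n: n % 2 == 0, iter(range(10)))
    # Python 2 returns a list here; Python 3 returns a lazy filter object.
    assert list(evens) == [0, 2, 4, 6, 8]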
|
to no longer insist that len(seq) be defined.
NEEDS DOC CHANGES.
This is meant to be a model for how other functions of this ilk (max,
filter, etc) can be generalized similarly. Feel encouraged to grab your
favorite and convert it!
Note some cute consequences:
    list(file) == file.readlines() == list(file.xreadlines())
    list(dict) == dict.keys()
    list(dict.iteritems()) == dict.items()
    list(xrange(i, j, k)) == range(i, j, k)
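
A rough sketch of the generalization (the Countdown class is invented):
an iterable with no __len__ and no __getitem__ that list() can now
consume, plus one of the consequences listed above:

    class Countdown:
        def __init__(self, n):
            self.n = n
        def __iter__(self):
            return self
        def next(self):                # Python 2 spelling of the protocol
            if self.n == 0:
                raise StopIteration
            self.n -= 1
            return self.n
        __next__ = next                # lets the sketch run on Python 3 too

    assert list(Countdown(3)) == [2, 1, 0]

    d = {"a": 1, "b": 2}
    assert list(d) == list(d.keys())   # the list(dict) == dict.keys() identity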
|
I believe Kevin Rodgers here! The old WINDOWS_LEAN_AND_MEAN has, AFAICT,
always been wrong.
|
Hopefully this is the last checkin for 2.1!
|
Greatly updated news for 2.1c1 (!).
|
prefix to the message lines.
|
modified from setup.py version "1.37" to support BeOS build.
Contributed by Donn Cave (SF patch 411830).
|
on c.l.py.
|
must now initialize the extra field used by the weak-ref machinery to
NULL themselves, to avoid having to require PyObject_INIT() to check
if the type supports weak references and do it there. This causes less
work to be done for all objects (the type object does not need to be
consulted to check for the Py_TPFLAGS_HAVE_WEAKREFS bit).
|
about these packages:
- distutils
- xml
|
Moshe for noticing!
|
Report the addition of the Tix module.
|
warnings.
|
order.
|