NEEDS DOC CHANGES
A few more AttributeErrors turned into TypeErrors, but in test_contains
this time.
The full story for instance objects is pretty much unexplainable, because
instance_contains() tries its own flavor of iteration-based containment
testing first, and PySequence_Contains doesn't get a chance at it unless
instance_contains() blows up. A consequence is that
    some_complex_number in some_instance
dies with a TypeError unless some_instance.__class__ defines __iter__ but
does not define __getitem__.
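For illustration only, here is a minimal sketch of the one case the message
says does work, a classic class that defines __iter__ and not __getitem__
(the class and values are invented):

    class Points:                       # hypothetical class, __iter__ only
        def __iter__(self):
            return iter([1j, 2j, 3j])

    print 2j in Points()                # containment goes through __iter__;
                                        # expected to print 1 (true)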
to string.join(), so that when the latter figures out in midstream that
it really needs unicode.join() instead, unicode.join() can actually get
all the sequence elements (i.e., there's no guarantee that the sequence
passed to string.join() can be iterated over *again* by unicode.join(),
so string.join() must not pass on the original sequence object anymore).
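A rough Python-level sketch of the situation (values invented): the argument
can be traversed only once, so whatever str.join has already consumed must be
handed to unicode.join in materialized form.

    pieces = iter(['a', u'b', 'c'])     # can be walked only once
    print ' '.join(pieces)              # expected: u'a b c', with the 'a'
                                        # already consumed by str.join still
                                        # making it into the result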
because PySequence_Fast() started working for free as soon as
PySequence_Tuple() learned how to work with iterators. For some reason
unicode.join() still doesn't work, though.
several of these turned up and got fixed during the iteration crusade.
NEEDS DOC CHANGES.
This one surprised me! While I expected tuple() to be a no-brainer, turns
out it's actually dripping with consequences:
1. It will *allow* the popular PySequence_Fast() to work with any iterable
object (code for that not yet checked in, but should be trivial).
2. It caused two std tests to fail. This is because some places used
PySequence_Tuple() (the C spelling of tuple()) as an indirect way to test
whether something *is* a sequence. But tuple() code only looked for the
existence of sq->item to determine that, and e.g. an instance passed
that test whether or not it supported the other operations tuple()
needed (e.g., __len__). So some things the tests *expected* to fail
with an AttributeError now fail with a TypeError instead. This looks
like an improvement to me; e.g., test_coercion used to produce 559
TypeErrors and 2 AttributeErrors, and now they're all TypeErrors. The
error details are more informative too, because the places calling this
were *looking* for TypeErrors in order to replace the generic tuple()
"not a sequence" msg with their own more specific text, and
AttributeErrors snuck by that.
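A rough Python-level sketch of what this buys (the Countdown class is
invented): tuple() now accepts any object with __iter__, even one that
defines neither __getitem__ nor __len__.

    class Countdown:                    # hypothetical iterable, no __len__
        def __init__(self, n):
            self.n = n
        def __iter__(self):
            return self
        def next(self):
            if self.n <= 0:
                raise StopIteration
            self.n = self.n - 1
            return self.n

    print tuple(Countdown(3))           # expected: (2, 1, 0)
    print tuple("abc")                  # still ('a', 'b', 'c')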
internals) so clients can be a lot dumber (wrt their knowledge).
free to do one!
NEEDS DOC CHANGES.
NEEDS DOC CHANGES.
Possibly contentious: The first time s.next() yields StopIteration (for
a given map argument s) is the last time map() *tries* s.next(). That
is, if other sequence args are longer, s will never again contribute
anything but None values to the result, even if trying s.next() again
could yield another result. This is the same behavior map() used to have
wrt IndexError, so it's the only way to be wholly backward-compatible.
I'm not a fan of letting StopIteration mean "try again later" anyway.
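A rough illustration of that rule with invented arguments, relying on
map(None, ...)'s usual None padding for shorter sequences:

    print map(None, iter([1, 2]), [10, 20, 30])
    # expected: [(1, 10), (2, 20), (None, 30)] -- the exhausted iterator
    # is never asked for another element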
at the same time as it did from PyObject_Init().
the code necessary to accomplish this is simpler and faster if confined to
the object implementations, so we only do this there.
This causes no behavioral changes beyond a (very slight) speedup.
void function.
need to be specified in the type structures independently. The flag
exists only for binary compatibility.
This is a "source cleanliness" issue and introduces no behavioral changes.
NEEDS DOC CHANGES.
When replacing the exception object, be sure we stuff the new value
in sys.last_value (which we already did for the original value).
annotations!
Also fixed a typo noted by Neil S.
appropriate next() method, and this is what people really want to do with
these objects in practice.
the support for containers and iteration.
objects but instead assume that they use the requested encoding.
This is needed on Windows to enable opening files by passing in
Unicode file names.
dictionary size was comparing ma_size, the hash table size, which is
always a power of two, rather than ma_used, which changes on each
insertion or deletion. Fixed this.
filter() to no longer insist that len(seq) be defined.
NEEDS DOC CHANGES.
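A minimal sketch of the new behavior (values invented): filter() can now
consume an object with no __len__ at all, such as an iterator.

    print filter(lambda x: x % 2, iter(range(10)))
    # expected: [1, 3, 5, 7, 9]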
Refactored some object initialization to be more reusable.
to no longer insist that len(seq) be defined.
NEEDS DOC CHANGES.
This is meant to be a model for how other functions of this ilk (max,
filter, etc) can be generalized similarly. Feel encouraged to grab your
favorite and convert it!
Note some cute consequences:
    list(file) == file.readlines() == list(file.xreadlines())
    list(dict) == dict.keys()
    list(dict.iteritems()) == dict.items()
    list(xrange(i, j, k)) == range(i, j, k)
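A runnable version of a few of those equivalences (the dict contents are
placeholders):

    d = {1: 'one', 2: 'two'}
    print list(d) == d.keys()                        # expected: 1
    print list(d.iteritems()) == d.items()           # expected: 1
    print list(xrange(0, 10, 3)) == range(0, 10, 3)  # expected: 1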
object's type didn't define tp_print, there were still cases where the
full "print uses str() which falls back to repr()" semantics weren't
honored. This resulted in
>>> print None
<None object at 0x80bd674>
>>> print type(u'')
<type object at 0x80c0a80>
Fixed this by always using the appropriate PyObject_Repr() or
PyObject_Str() call, rather than trying to emulate what they would do.
Also simplified PyObject_Str() to always fall back on PyObject_Repr()
when tp_str is not defined (rather than making an extra check for
instances with a __str__ method). And got rid of the special case for
strings.
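After the fix the same statements go through the real str()/repr() machinery
and print something like:

>>> print None
None
>>> print type(u'')
<type 'unicode'>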
course), so I can get rid of the special case for strings in
PyObject_Str().
objects.
Tests show that iteritems() is 5-10% faster than iterating over the
dict and extracting the value with dict[key].
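A rough sketch of the two idioms being compared (the dict is a placeholder):

    d = {'a': 1, 'b': 2}

    for key in d:
        value = d[key]                  # old idiom: a second lookup per key

    for key, value in d.iteritems():
        pass                            # new idiom: no extra lookup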
printing of instances not to look for __str__(). Fix this.
Directory containing
    Spam.py
    spam/__init__.py
Then "import Spam" caused a SystemError, because code checking for
the existence of "Spam/__init__.py" finds it on a case-insensitive
filesystem, but then bails because the directory it finds it in
doesn't match case, and then old code assumed that was still an error
even though it isn't anymore. Changed the code to just continue
looking in this case (instead of calling it an error). So
    import Spam
and
    import spam
both work now.
Also a 2.1 bugfix candidate (am I supposed to do something with those?).
Took away map()'s insistence that sequences support __len__, and cleaned
up the convoluted code that made it *look* like it really cared about
__len__ (in fact the old ->len field was only *used* as a flag bit, as
the main loop only looked at its sign bit, setting the field to -1 when
IndexError got raised; renamed the field to ->saw_IndexError instead).
Patch #419651: Metrowerks on Mac adds 0x itself
C std says %#x and %#X conversion of 0 do not add the 0x/0X base marker.
Metrowerks apparently does. Mark Favas reported the same bug under a
Compaq compiler on Tru64 Unix, but no other libc broken in this respect
is known (known to be OK under MSVC and gcc).
So just try the damn thing at runtime and see what the platform does.
Note that we've always had bugs here, but never knew it before because
a relevant test case didn't exist before 2.1.
Fix a very old flaw in PyObject_Print(). Amazing! When an object
type defines tp_str but not tp_repr, 'print x' to a real file
object would not call the tp_str slot but rather print a default style
representation: <foo object at 0x....>. This even though 'print x' to
a file-like-object would correctly call the tp_str slot.
Simply disabling the Tk event handling hook in _tkinter is not as nice, but at least it works.
The new test case demonstrates the bug. Be more careful in
symtable_resolve_free() to add a var to cells or frees only if it
won't be added under some other rule.
XXX Add new assertion that will catch this bug.