* Removed the ifilter() flag wart by splitting it into two simpler functions (see the sketch below).
* Fixed comment tabbing in the C code.
* Factored module start-up code into a loop.
Documentation:
* Rewrote the introduction.
* Added examples for the quantifiers.
* Simplified the Python equivalent for islice().
* Documented the split of ifilter().
Sets.py:
* Replaced old ifilter() usage with the new functions.

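For reference, a rough pure-Python sketch of the two simpler filters the
flag was split into (assuming the split produced ifilter() and
ifilterfalse(), in line with their documented pure-Python equivalents;
this is an illustration, not the C implementation):

    def ifilter(predicate, iterable):
        # Yield the items for which predicate(item) is true;
        # a predicate of None means "use the item's own truth value".
        if predicate is None:
            predicate = bool
        for x in iterable:
            if predicate(x):
                yield x

    def ifilterfalse(predicate, iterable):
        # The complementary filter: yield the items predicate() rejects.
        if predicate is None:
            predicate = bool
        for x in iterable:
            if not predicate(x):
                yield x

For example, list(ifilter(None, [0, 1, 2])) gives [1, 2], while
list(ifilterfalse(None, [0, 1, 2])) gives [0].
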
[ 678518 ] Another parsermodule validation error

of datetime does, accept instances of subclasses too.

__ne__ no longer complain if they don't know how to compare to the other
thing. If no meaningful way to compare is known, saying "not equal" is
sensible. This allows things like
    if adatetime in some_sequence:
and
    somedict[adatetime] = whatever
to work as expected even if some_sequence contains non-datetime objects,
or somedict non-datetime keys, because they only call __eq__.
It still complains (raises TypeError) for mixed-type comparisons in
contexts that require a total ordering, such as list.sort(), use as a
key in a BTree-based data structure, and cmp().

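An illustrative sketch of the behavior described above (Python 2-era
syntax; adatetime, some_sequence and somedict are just the placeholder
names from the message):

    from datetime import datetime

    adatetime = datetime(2003, 2, 1)
    some_sequence = ["spam", 42, adatetime]
    somedict = {adatetime: "whatever"}

    # __eq__/__ne__ now answer "not equal" for non-datetime operands,
    # so membership tests and dict lookups work with mixed contents:
    print adatetime in some_sequence     # True
    print somedict[adatetime]            # whatever

    # Comparisons that need a total ordering still raise TypeError:
    try:
        cmp(adatetime, 42)
    except TypeError:
        print "mixed-type ordering rejected"
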
Make length an int so we get the right value from
    PyArg_ParseTuple(args, "s#", &str, &length)
Will backport.

too hard to read.
* Simplified previous changes to izip() to make it easier to read.

* Fixed typo in exception message for times()
* Filled in missing times_traverse()
* Document reasons that imap() did not adopt a None fill-in feature
* Document that count(sys.maxint) will wrap-around on overflow
* Add overflow test to islice()
* Check that starmap()'s argument returns a tuple (see the example below)
* Verify that imap()'s tuple re-use is safe
* Make a similar tuple re-use (with safety check) for izip()

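A small usage illustration of the points above (standard itertools
behavior of the period; the values are arbitrary):

    import sys
    from itertools import count, islice, starmap

    # starmap() pulls argument tuples from its iterable and applies the
    # function to each one:
    print list(starmap(pow, [(2, 5), (3, 2)]))     # [32, 9]

    # count() increments a C long internally, so starting near sys.maxint
    # eventually wraps around, per the caveat documented above:
    print list(islice(count(sys.maxint - 1), 3))
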
Closes SF bug #680797.

guarantee to keep valid pointers in its slots.
tests: Moved ExtensionSaver from test_copy_reg into pickletester, and
use it both places. Once extension codes get assigned, it won't be
safe to overwrite them willy nilly in test suites, and ExtensionSaver
does a thorough job of undoing any possible damage.
Beefed up the EXT[124] tests a bit, to check the smallest and largest
codes in each opcode's range too.

Moved such EXT tests as currently exist from TempAbstractPickleTests to
AbstractPickleTests, so that test_cpickle runs them too.

by Michael Stone (mbrierst).
Python 2.1.4, 2.2.2 candidate.

signed/unsigned comparison warnings on the call to iconv().
Fix comment typos.
From SF patch #680146.

generate these opcodes.

this clarifies that they are part of an internal API (albeit shared
between pickle.py, copy_reg.py and cPickle.c).
I'd like to do the same for copy_reg.dispatch_table, but worry that it
might be used by existing code. This risk doesn't exist for the
extension registry.

Imported the extension-registry dicts from copy_reg.py, in preparation for
tackling EXT[124].

because it seems more consistent with the rest of the code.
cPickle_PyMapping_HasKey(): This extern function isn't used anywhere in
Python or Zope, so got rid of it.

extension implemented flush() was fixed. Scott also rewrote the
zlib test suite using the unittest module. (SF bug #640230 and
patch #678531.)
Backport candidate, I think.

readability.
load_bool(): Now that I know the intended difference between _PUSH and
_APPEND, used the right one.
Pdata_grow(): Squashed out a redundant overflow test.

overflow holes in Pdata_grow().

a function, then
    p->f(arg1, arg2, ...)
is semantically the same as
    (*p->f)(arg1, arg2, ...)
Changed all instances of the latter into the former. Given how often
the code embeds this kind of expression in an if test, the unnecessary
parens and dereferencing operator were a real drag on readability.

embedded assignments, for readability.

loops. Renamed DATA and BINDATA to DATA0 and DATA1. Included
disassemblies, but noted why we can't test them. Added XXX comment to
cPickle about a mysterious comment, where pickle and cPickle diverge
in how they number PUT indices.

to have an effect before protocol 3 is invented, so no test can be
written for this (yet).

the hitherto unknown (to me) noload() cPickle function, which is (a)
something we don't test at all, and (b) pickle.py doesn't have.

ElementDeclHandler by Expat.
Fixes SF bug #676990.

Assorted code cleanups; e.g., sizeof(char) is 1 by definition, so there's
no need to do things like multiply by sizeof(char) in hairy malloc
arguments. Fixed an undetected-overflow bug in readline_file().
longobject.c: Fixed a really stupid bug in the new _PyLong_NumBits.
pickle.py: Fixed stupid bug in save_long(): When proto is 2, it
wrote LONG1 or LONG4, but forgot to return then -- it went on to
append the proto 1 LONG opcode too.
Fixed equally stupid cancelling bugs in load_long1() and
load_long4(): they *returned* the unpickled long instead of pushing
it on the stack. The return values were ignored. Tests passed
before only because save_long() pickled the long twice.
Fixed bugs in encode_long().
Noted that decode_long() is quadratic-time despite our hopes,
because long(string, 16) is still quadratic-time in len(string).
It's hex() that's linear-time. I don't know a way to make decode_long()
linear-time in Python, short of maybe transforming the 256's-complement
bytes into marshal's funky internal format, and letting marshal decode
that. It would be more valuable to make long(string, 16) linear time.
pickletester.py: Added a global "protocols" vector so tests can try
all the protocols in a sane way. Changed test_ints() and test_unicode()
to do so. Added a new test_long(), but the tail end of it is disabled
because it "takes forever" under pickle.py (but runs very quickly under
cPickle: cPickle proto 2 for longs is linear-time).

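To make the decode_long() remark concrete, here is a minimal sketch of
that style of 256's-complement decoding (a Python 2-era illustration,
not the pickle.py source verbatim; the long(ashex, 16) step is what
keeps it quadratic-time):

    import binascii

    def decode_long_sketch(data):
        # Interpret `data` as a little-endian, 256's-complement integer.
        if not data:
            return 0L
        ashex = binascii.hexlify(data[::-1])   # linear-time
        n = long(ashex, 16)                    # quadratic in len(ashex)
        if ord(data[-1]) >= 0x80:              # sign bit of the top byte set
            n -= 1L << (len(data) * 8)         # => the value is negative
        return n

For example, decode_long_sketch('\xff\x00') is 255 and
decode_long_sketch('\xff') is -1.
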
code cleanups, and purged more references to text-vs-binary modes.

already <wink>.

removed woefully inadequate opcode docs and pointed to pickletools.py
instead.

functions. Reworked {time,datetime}_new() to do what their corresponding
setstates used to do in their state-tuple-input paths, but directly,
without constructing an object with throwaway state first. Tightened
the "is this a state tuple input?" paths to check the presumed state
string-length too, and to raise an exception if the optional second state
element isn't a tzinfo instance (IOW, check these paths for type errors
as carefully as the normal paths).

anymore either, so don't. This also allows us to get rid of obscure code
making __getnewargs__ identical to __getstate__ (hmm ... hope there
wasn't more to this than I realize!).

attr, and copy_reg.safe_constructors.

not the maze it was.

delta_reduce(): Simplified.