line fits in reasonable screen width.
Made the presence/absence of a semicolon after macros consistent.
removed the tricks).
Changed the ENTER/LEAVE_ZLIB macros so as not to create a new block (a
new block is neither necessary nor helpful).
Apparently this patch (rev 2.41) replaced all the good old "s#"
formats in PyArg_ParseTuple() with "S". Then it did
PyString_FromStringAndSize() to get back the values set up by the
"s#" format. It also incref'd and decref'd the string obtained by
"S" even though the argument tuple had a reference to it.
Replace PyString_AsString() calls with PyString_AS_STRING().
A good rule of thumb -- if you never check the return value of
PyString_AsString() to see if it's NULL, you ought to be using the
macro <wink>.
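For reference, a minimal sketch of the two approaches (the ":compress"
function name is illustrative):

    char *buf;
    int len;
    PyObject *str;

    /* "s#" extracts the buffer pointer and length directly;
       nothing new is allocated and no references change hands. */
    if (!PyArg_ParseTuple(args, "s#:compress", &buf, &len))
        return NULL;

    /* Alternatively, "S" yields a borrowed reference to the string
       object itself; the argument tuple keeps it alive, so no
       INCREF/DECREF pair is needed.  And because "S" guarantees a
       string, the unchecked macros are the right tool: */
    if (!PyArg_ParseTuple(args, "S:compress", &str))
        return NULL;
    buf = PyString_AS_STRING(str);
    len = PyString_GET_SIZE(str);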
Many functions used a local variable called return_error, which was
initialized to zero. If an error occurred, it was set to true. Most
of the later code paths were executed only if return_error was
false. Using goto for the error exits is clearer.
The code also seemed to be written under the curious assumption that
calling Py_DECREF() on a local variable would assign the variable to
NULL. As a result, most of the error-exit code paths returned an
object that had a reference count of zero instead of just returning
NULL. Fixed the code to explicitly assign NULL after the DECREF.
A bit more reformatting, but not much.
XXX Need a much better test suite for zlib, since the current tests
don't exercise any of this broken code.
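A sketch of the resulting pattern (the helper do_work() is
hypothetical), showing both points -- goto for the error exits, and
the explicit NULL assignment after the DECREF:

    static PyObject *
    example(PyObject *self, PyObject *args)
    {
        PyObject *retval = PyString_FromStringAndSize(NULL, 16);
        if (retval == NULL)
            goto error;
        if (do_work(retval) < 0)
            goto error;
        return retval;

     error:
        Py_XDECREF(retval);  /* Py_DECREF() does NOT set retval to NULL... */
        retval = NULL;       /* ...so assign it explicitly */
        return retval;
    }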
It sets a ZlibError exception, using the msg from the z_stream pointer
if one is available.
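A sketch of such a helper (the name zlib_error() and the message
format are assumptions based on the description):

    static void
    zlib_error(z_stream zst, int err, char *msg)
    {
        if (zst.msg == Z_NULL)
            PyErr_Format(ZlibError, "Error %d %s", err, msg);
        else
            PyErr_Format(ZlibError, "Error %d %s: %.200s",
                         err, msg, zst.msg);
    }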
When PyString_FromStringAndSize() and _PyString_Resize() fail, they
set an exception. There's no need to set a new exception.
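In other words, the failure branches can simply propagate (a sketch;
length and new_length are placeholders):

    PyObject *buf = PyString_FromStringAndSize(NULL, length);
    if (buf == NULL)
        return NULL;    /* exception already set; don't overwrite it */

    if (_PyString_Resize(&buf, new_length) < 0)
        return NULL;    /* ditto; _PyString_Resize() set the exception */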
Consistently indent 4 spaces.
Use whitespace around operators.
Put braces in the right places.
This changes Pythread_start_thread() to return the thread ID, or -1
for an error. (It's technically an incompatible API change, but I
doubt anyone calls it.)
Mostly by Toby Dickenson and Titus Brown.
Add an optional argument to a decompression object's decompress()
method. The argument specifies the maximum length of the return
value. If the uncompressed data exceeds this length, the excess data
is stored as the unconsumed_tail attribute. (Not to be confused with
unused_data, which is a separate issue.)
Difference from SF patch: Default value for unconsumed_tail is ""
rather than None. It's simpler if the attribute is always a string.
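A sketch of the bookkeeping inside decompress() (field names follow
the module's compobject struct; the details are assumptions):

    /* inflate() stopped because the output reached max_length; the
       input zlib did not consume becomes the unconsumed_tail. */
    Py_XDECREF(self->unconsumed_tail);
    self->unconsumed_tail = PyString_FromStringAndSize(
        (char *)self->zst.next_in, self->zst.avail_in);
    if (self->unconsumed_tail == NULL)
        return NULL;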
Added support for saving the names of the functions observed into the
profile log.
Added support for using the profiler to measure coverage without collecting
timing information (which is the slow part). Coverage logs can also be
substantially smaller than profiling logs where per-line information is
being collected.
Updated comments on the log format; corrected record type values in some
of the record descriptions.
Raise ValueError when an object contains an arbitrarily nested
reference to itself. (The previous fix just produced invalid
pickles.)
The solution is very much like Py_ReprEnter() and Py_ReprLeave():
fast_save_enter() and fast_save_leave() track the fast_container
limit and keep a fast_memo of objects currently being pickled.
The solution is moderately expensive for deeply nested structures,
but fast mode still seems to be faster than normal pickling, based
on tests with deeply nested lists.
Once FAST_LIMIT is exceeded, the new code is about twice as slow as
fast-mode code that doesn't check for recursion. It's still twice as
fast as the normal pickling code. In the absence of deeply nested
structures, I couldn't measure a difference.
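A sketch of the pair (simplified; the real helpers may differ in
detail):

    static int
    fast_save_enter(Picklerobject *self, PyObject *obj)
    {
        PyObject *key;

        /* Cheap until the nesting is suspiciously deep. */
        if (++self->fast_container < FAST_LIMIT)
            return 1;
        if (self->fast_memo == NULL &&
            (self->fast_memo = PyDict_New()) == NULL)
            return 0;
        key = PyLong_FromVoidPtr(obj);
        if (key == NULL)
            return 0;
        if (PyDict_GetItem(self->fast_memo, key) != NULL) {
            Py_DECREF(key);
            PyErr_SetString(PyExc_ValueError,
                "fast mode: can't pickle an object containing itself");
            return 0;
        }
        if (PyDict_SetItem(self->fast_memo, key, Py_None) < 0) {
            Py_DECREF(key);
            return 0;
        }
        Py_DECREF(key);
        return 1;
    }

    /* fast_save_leave() decrements fast_container and removes obj
       from fast_memo once the depth drops back below FAST_LIMIT. */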
initialize (or use or even know about :-).
To whoever changed a bunch of (PyCFunction) casts to
(PyNoArgsFunction) in PyMethodDef initializers: don't do that. The
cast is to shut the compiler up. The compiler wants the function
pointer initializer to be a PyCFunction.
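The pattern in question, sketched with hypothetical names:

    static PyObject *spam_doit(PyObject *self);   /* takes no arguments */

    static PyMethodDef spam_methods[] = {
        /* The ml_meth slot is declared as PyCFunction, so that is
           the cast that keeps the compiler quiet: */
        {"doit", (PyCFunction)spam_doit, METH_NOARGS},
        {NULL, NULL}
    };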
"for <var> in <testlist> may no longer be a single test followed by
a comma. This solves SF bug #431886. Note that if the testlist
contains more than one test, a trailing comma is still allowed, for
maximum backward compatibility; but this example is not:
[(x, y) for x in range(10), for y in range(10)]
^
The fix involved creating a new nonterminal 'testlist_safe' whose
definition doesn't allow the trailing comma if there's only one test:
testlist_safe: test [(',' test)+ [',']]
gcc defines both.
of calling external functions.
This still doesn't compile on Windows, but at least I have a shot at
fixing that now.
Still broken: GETTIMEOFDAY. This macro obviously isn't being defined
on Windows, so there are logic errors here I'd rather Fred untangled.
up GCC warnings.
a misunderstanding of the refcount behavior of the 'O' format code in
PyArg_ParseTuple() and Py_BuildValue(), respectively.
- pobj is only a borrowed reference, so should *not* be DECREF'ed at
the end. This was the cause of SF bug #470635.
- The Py_BuildValue() call would leak the object produced by
makesockaddr(). (I found this by eyeballing the code.)
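Sketches of the two rules (variable names are illustrative):

    /* 'O' in PyArg_ParseTuple() produces a borrowed reference: */
    PyObject *pobj;
    if (!PyArg_ParseTuple(args, "O", &pobj))
        return NULL;
    /* ... use pobj, but do NOT Py_DECREF() it at the end ... */

    /* 'O' in Py_BuildValue() adds a reference of its own, so a
       temporary new reference must be released or it leaks: */
    PyObject *tmp = PyString_FromString("example");   /* new reference */
    if (tmp == NULL)
        return NULL;
    PyObject *res = Py_BuildValue("O", tmp);
    Py_DECREF(tmp);   /* without this, tmp leaks */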
Add a fast_container member to Picklerobject. If fast is true, then
fast_container counts the depth of nested container calls. If the
depth exceeds FAST_LIMIT (2000), the fast flag is ignored and the
normal checks occur. This approach is much like the one used to
prevent stack overflow for comparisons and reprs of recursive objects
(e.g. [[...]]).
- Fast container used for save_list(), save_dict(), and
save_inst().
XXX Not clear which other save_xxx() functions should use it.
Make Picklerobject into a new-style type, using PyObject_GenericGetAttr()
and PyObject_GenericSetAttr().
- Use PyMemberDef for binary and fast members
- Use PyGetSetDef for persistent_id, inst_persistent_id, memo, and
PicklingError.
XXX Not all of these seem like they need to use getset, but it's
not clear why the old getattr() and setattr() had such odd
semantics. One change is that the getvalue() attribute will
exist on all Picklers, not just list-based picklers; I think
this is a more rational interface.
There is a long laundry list of other changes:
- Remove unused #defines for PyList_SET_ITEM() etc.
- Make some of the indentation consistent
- Replace uses of cPickle_PyMapping_HasKey() where the first
argument is self->memo with calls to PyDict_GetItem(), because
self->memo must be a dictionary.
- Don't bother to check if cPickle_PyMapping_HasKey() returns < 0,
because it can only return 0 or 1.
- Replace uses of PyObject_CallObject() with PyObject_Call(), when
we can guarantee that the argument tuple is really a tuple (see the
sketch after this list).
Performance impacts of these changes:
- 5% speedup for normal pickling
- No change to fast-mode pickling.
XXX Really need tests for all the features in cPickle that aren't in
pickle.
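For the PyObject_Call() item above, a minimal sketch (callable and
obj are hypothetical):

    PyObject *result, *argtuple;

    /* PyObject_CallObject(func, arg) tolerates arg == NULL or a
       non-tuple and normalizes it first.  PyObject_Call() skips that
       work, so it is safe only when arg is known to be a real tuple. */
    argtuple = Py_BuildValue("(O)", obj);
    if (argtuple == NULL)
        return NULL;
    result = PyObject_Call(callable, argtuple, NULL);
    Py_DECREF(argtuple);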
The platform requires 8-byte alignment for doubles, but the GC header
was 12 bytes and that threw off the natural alignment of the double
members of a subtype of complex. The fix puts the GC header into a
union with a double as the other member, to force no-looser-than
double alignment of GC headers. On boxes that require 8-byte alignment
for doubles, this may add pad bytes to the GC header accordingly; ditto
for platforms that *prefer* 8-byte alignment for doubles. On platforms
that don't care, it shouldn't change the memory layout (because the
size of the old GC header is certainly greater than the size of a double
on all platforms, so unioning with a double shouldn't change size or
alignment on such boxes).
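The shape of the fix (member names sketched; the real header may
differ):

    typedef union _gc_head {
        struct {
            union _gc_head *gc_next;  /* linked list of container objects */
            union _gc_head *gc_prev;
            int gc_refs;
        } gc;
        double dummy;  /* unused; forces no-looser-than-double alignment */
    } PyGC_Head;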
The former does the right thing on Windows, the latter does not.
Use #define X509_NAME_MAXLEN for server/issuer length on an SSL
object.
Update doc strings for socket.ssl() and ssl methods read() and
write().
PySSL_SSLwrite(): Check return value and raise exception on error.
Use int for len instead of size_t. (All the functions that len was
passed to or from expected an int!)
PySSL_SSLread(): Check return value of PyArg_ParseTuple()! More
robust checks of return values from SSL_read().
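A sketch of the write-side check (PySSL_SetError() is the helper
described in a later entry; self, data, and count are assumptions):

    int len = SSL_write(self->ssl, data, count);
    if (len <= 0)
        return PySSL_SetError(self, len);  /* uses SSL_get_error() info */
    return PyInt_FromLong((long)len);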
Change all the local names that start with SSL to start with PySSL.
The OpenSSL library defines lots of calls that start with "SSL_". The
calls for Python's SSL objects also started with "SSL_". This choice
made it really confusing to figure out which calls were to the library
and which calls were local to the file.
Add PySSL_SetError(), which sets an exception based on the information
from SSL_get_error(). This function will eventually replace all the
calls that set an error message based on the name of the call that
failed rather than the reason it failed. (Example: if SSL_connect()
failed, it used to report "SSL_connect error"; now it will offer a
specific message about why SSL_connect() failed.)
XXX It might be helpful to augment the error message generated
below with the name of the SSL function that generated the error.
I expect it's obvious most of the time.
Remove several unnecessary INCREFs in the module's constructor call.
PyDict_SetItem() and friends do the INCREF for you.
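A simplified sketch of such a helper (the exception object and struct
names are assumptions; the real function handles more cases):

    static PyObject *
    PySSL_SetError(PySSLObject *obj, int ret)
    {
        const char *errstr;

        switch (SSL_get_error(obj->ssl, ret)) {
        case SSL_ERROR_ZERO_RETURN:
            errstr = "TLS/SSL connection has been closed"; break;
        case SSL_ERROR_WANT_READ:
            errstr = "The operation did not complete (read)"; break;
        case SSL_ERROR_WANT_WRITE:
            errstr = "The operation did not complete (write)"; break;
        case SSL_ERROR_SSL:
            errstr = ERR_error_string(ERR_get_error(), NULL); break;
        default:
            errstr = "Invalid error code";
        }
        PyErr_SetString(PySSLErrorObject, errstr);
        return NULL;
    }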
In SSL_dealloc(), free/dealloc them only if they're non-NULL.
Fixes some obvious core dumps, but not sure yet if there are more
semantics to the SSL calls that would affect the dealloc.
XXX [1] These changes aren't tested very thoroughly, because regrtest
doesn't do any SSL tests. I've done some trivial tests on my own, but
don't really know how to use the key and cert files. In one case, an
SSL-level error causes Python to dump core. I'll get that fixed in the
next round of changes.
XXX [2] The checkin removes the x_attr member of the SSLObject struct.
I'm not sure if this is kosher for backwards compatibility at the
binary level. Perhaps it's safer to keep the member but keep it
assigned to NULL.
And the leaks?
newSSLObject() called PyDict_New(), stored the result in x_attr
without checking it, and later stored NULL in x_attr without doing
anything to the dict. So the dict always leaks. There is no further
reference to x_attr, so I just removed it completely.
The error cases in newSSLObject() passed the return value of
PyString_FromString() directly to PyErr_SetObject().
PyErr_SetObject() expects a borrowed reference, so the string leaked.
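The leak and its fix, sketched (the exception object and message text
are illustrative):

    PyObject *msg = PyString_FromString("SSL error");  /* new reference */
    PyErr_SetObject(SSLErrorObject, msg);  /* adds its own reference */
    Py_DECREF(msg);                        /* the missing release */

    /* Or, simpler, let the API manage the reference: */
    PyErr_SetString(SSLErrorObject, "SSL error");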
This simplifies the rounding in _PyObject_VAR_SIZE, makes it possible
to restore the pre-rounding calling sequence, and allows some nice
little simplifications in its callers. I'm still making it return a
size_t, though.
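Sketched from the description (the real macro may differ in detail),
the rounding looks like:

    /* Round basicsize + nitems*itemsize up to a multiple of the
       pointer size, and return it as a size_t. */
    #define _PyObject_VAR_SIZE(typeobj, nitems)            \
        (size_t)                                           \
        ( ( (typeobj)->tp_basicsize +                      \
            (nitems) * (typeobj)->tp_itemsize +            \
            (SIZEOF_VOID_P - 1)                            \
          ) & ~(size_t)(SIZEOF_VOID_P - 1) )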
As Guido suggested, this makes the new subclassing code substantially
simpler. But the mechanics of doing it w/ C macro semantics are a mess,
and _PyObject_VAR_SIZE has a new calling sequence now.
Question: The PyObject_NEW_VAR macro appears to be part of the public API.
Regardless of what it expands to, the notion that it has to round up the
memory it allocates is new, and extensions containing the old
_PyObject_VAR_SIZE macro expansion (which was embedded in the
PyObject_NEW_VAR expansion) won't do this rounding. But the rounding
isn't actually *needed* except for new-style instances with dict pointers
after a variable-length blob of embedded data. So my guess is that we do
not need to bump the API version for this (as the rounding isn't needed
for anything an extension can do unless it's recompiled anyway). What's
your guess?
pad memory to properly align the __dict__ pointer in all cases.
gcmodule.c/objimpl.h, _PyObject_GC_Malloc:
+ Added a "padding" argument so that this flavor of malloc can allocate
enough bytes for alignment padding (it can't know this is needed, but
its callers do).
typeobject.c, PyType_GenericAlloc:
+ Allocated enough bytes to align the __dict__ pointer.
+ Sped and simplified the round-up-to-PTRSIZE logic.
+ Added blank lines so I could parse the if/else blocks <0.7 wink>.
no way to talk the debugger into showing me how many bytes were being
allocated.
objects. This is now simply a shim to give weakref.py access to the
underlying implementation.
to make the SGI C compiler happier (bug #445960).
Patch from Steve Scott to add SIGBREAK support (unique to Windows).