| Commit message | Author | Age | Files | Lines |
|
Fix over-aggressive PyErr_Clear(). The same code fragment appears in
various guises in list.extend(), map(), filter(), zip(), and internally
in PySequence_Tuple().
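As a rough Python-level illustration (the iterator class here is mine, not from the commit): only StopIteration signals normal exhaustion, so any other exception raised by the iterable has to propagate out of list.extend() rather than being cleared.

    class Flaky(object):
        """Iterator that fails with a genuine error partway through."""
        def __init__(self):
            self.count = 0
        def __iter__(self):
            return self
        def __next__(self):
            self.count += 1
            if self.count < 3:
                return self.count
            raise ValueError("a real failure, not StopIteration")
        next = __next__          # Python 2 spelling

    items = [0]
    try:
        items.extend(Flaky())
    except ValueError:
        print("error propagated as it should")
    # items keeps whatever was appended before the failure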
|
and set.discard for handling keys that both inherit from set and
define their own __hash__() function.
* Fixed an O(n) performance issue with set.pop(), which should have been
an O(1) operation.
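A small sketch of the subclass case (class name and values are mine): a set subclass that defines its own __hash__() is a legitimate set element, and remove()/discard() have to look it up with that hash rather than quietly treating the key as a plain frozenset. The last two lines only exercise the expectation that pop() stays cheap.

    class HashableSet(set):
        """A set subclass that is itself hashable, so it can live inside other sets."""
        def __hash__(self):
            return hash(frozenset(self))

    outer = set()
    outer.add(HashableSet([1, 2, 3]))

    # The lookup must honor the subclass's own __hash__()/__eq__().
    outer.discard(HashableSet([1, 2, 3]))
    assert HashableSet([1, 2, 3]) not in outer

    # set.pop() should behave as an O(1) operation regardless of size.
    big = set(range(100000))
    while big:
        big.pop()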
|
"""
SF bug #1238681: freed pointer is used in longobject.c:long_pow().
In addition, long_pow() skipped a necessary (albeit extremely unlikely
to trigger) error check when converting an int modulus to long.
Alas, I was unable to write a test case that crashed due to either
cause.
"""
|
SF bug 1185883: PyObject_Realloc can't safely take over a block currently
managed by C, because it's possible for the block to be smaller than the
new requested size, and at the end of allocated VM. Trying to copy over
nbytes bytes to a Python small-object block can segfault then, and there's
no portable way to avoid this (we would have to know how many bytes
starting at p are addressable, and std C has no means to determine that).
|
Reverts revisions 1.26 and 1.27, and adds cycle testing.
|
Fix for the rather inaccurately titled bug
[ 1165306 ] Property access with decorator makes interpreter crash
Don't allow the creation of unbound methods with a NULL im_class, because
attempting to call such a method crashes the interpreter.
Backport candidate.
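A Python 2 sketch of the hole being closed (function and variable names are mine): constructing an unbound method with neither an instance nor a class used to succeed and then crash when called; after the fix the constructor itself refuses.

    import types

    def f(self):
        return "called"

    try:
        m = types.MethodType(f, None)        # Python 2: no instance and no class given
    except TypeError:
        print("rejected at creation time")   # behavior after the fix
    else:
        m()   # before the fix: an unbound method with NULL im_class, and a crash here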
|
numbers in PyLong_AsLongLong, and update test suite accordingly.
|
[ 1124295 ] Function's __name__ no longer accessible in restricted mode
which I introduced with a bit of mindless copy-paste when making
__name__ writable. You can't assign to __name__ in restricted mode,
which I'm going to pretend was intentional :)
|
and its usage in PyLocale_strcoll().
Clarify the documentation on this.
Thanks to Andreas Degert for pointing this out.
|
Support automatic pickling of dictionaries in instances of set subclasses.
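A quick illustration (subclass and attribute names are mine): the per-instance __dict__ of a set subclass should survive a pickle round trip along with the elements.

    import pickle

    class TaggedSet(set):
        pass

    s = TaggedSet([1, 2, 3])
    s.label = "example"                 # stored in the instance __dict__

    t = pickle.loads(pickle.dumps(s))
    assert t == set([1, 2, 3])
    assert t.label == "example"         # the instance dictionary round-trips too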
|
stderr. close() can fail if the user is out of quota, for example.
Fixes #959379.
|
In cyclic gc, clear weakrefs to unreachable objects before allowing any
Python code (weakref callbacks or __del__ methods) to run.
This is a critical bugfix, affecting all versions of Python since weakrefs
were introduced. I'll backport to 2.3.
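A small illustration of the guarantee (names are mine): by the time any weakref callback runs during cyclic collection, the reference has already been cleared, so Python code can no longer reach the dying object.

    import gc
    import weakref

    class Node(object):
        pass

    a, b = Node(), Node()
    a.other, b.other = b, a                 # an unreachable reference cycle
    observed = []
    wr = weakref.ref(a, lambda ref: observed.append(ref() is None))

    del a, b
    gc.collect()       # the collector clears weakrefs before running callbacks
    print(observed)    # expected: [True]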
|
exposed in header files. Fixed a few comments in these headers.
As we might have expected, writing down invariants systematically exposed a
(minor) bug. In this case, function objects have a writeable func_code
attribute, which could be set to code objects with the wrong number of
free variables. Calling the resulting function segfaulted the interpreter.
Added a corresponding test.
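A sketch of the new check (func_code is the Python 2 spelling; the function names are mine): assigning a code object whose free-variable count does not match the function's closure is rejected with an exception instead of being left to segfault later.

    def outer():
        x = 1
        def inner():
            return x                    # one free variable: x
        return inner

    def plain():
        return 42                       # no free variables

    f = outer()
    try:
        f.func_code = plain.func_code   # __code__ in later Pythons
    except ValueError:
        print("mismatched free variables rejected")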
|
_PyString_Resize() readied strings for mutation but did not invalidate
the cached hash value.
|
Python 2.3.x candidate.
|
Also, add a test case.
Formerly, the list_extend() code used several local variables to remember
its state across iterations. Since an iteration could call arbitrary
Python code, it was possible for the list state to be changed. The new
code uses dynamic structure references instead of C locals, so they are
always up to date.
After list_resize() is called, its size has been updated but the new
cells are filled with NULLs. These needed to be filled before arbitrary
iteration code was called; otherwise, that code could attempt to modify
a list that was in a semi-invalid state. The solution was to change
the ob_size field back to a value reflecting the actual number of valid
cells.
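A sketch of the kind of hostile iterator this is defending against (the class is mine, not the committed test): every call back into Python code may mutate the very list being extended, so extend() cannot trust sizes or pointers cached across iterations.

    class Shrinker(object):
        """Iterator that empties the target list on every step."""
        def __init__(self, target):
            self.target = target
            self.remaining = 3
        def __iter__(self):
            return self
        def __next__(self):
            if self.remaining == 0:
                raise StopIteration
            self.remaining -= 1
            del self.target[:]          # arbitrary Python code runs mid-extend
            return self.remaining
        next = __next__                 # Python 2 spelling

    data = list(range(10))
    data.extend(Shrinker(data))         # must stay crash-free, never exposing NULL cells
    print(data)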
|
buffers, compute the new buffer size based on the old buffer size.
Fixes SF bug #1034242.
|
'base'). Fixes SF bug #1033720. Move offset sanity checking to
buffer_from_memory().
|
platforms where that macro works, NaN compared to an int or long works
the same as NaN compared to a finite float.
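A small check of the intended behavior on IEEE-754 platforms (the values are arbitrary): a NaN compared to an int or long gives the same unordered answers as a NaN compared to a finite float.

    nan = float("nan")

    print(nan == 10 ** 50, nan < 10 ** 50, nan > 10 ** 50)   # False False False
    print(nan == 1.5, nan < 1.5, nan > 1.5)                  # the same three answers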
|
When an integer is compared to a float now, the int isn't coerced to float.
This avoids spurious overflow exceptions and insane results. This should
compute correct results, without raising spurious exceptions, in all cases
now -- although I expect that what happens when an int/long is compared to
a NaN is still a platform accident.
Note that we had potential problems here even with "short" ints, on boxes
where sizeof(long)==8. There's #ifdef'ed code here to handle that, but
I can't test it as intended. I tested it by changing the #ifdef to
trigger on my 32-bit box instead.
I suppose this is a bugfix candidate, but I won't backport it. It's
long-winded (for speed) and messy (because the problem is messy). Note
that this also depends on a previous 2.4 patch that introduced
_Py_SwappedOp[] as an extern.
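A couple of hedged examples of the resulting behavior (the numbers are mine): the comparison is carried out exactly instead of coercing the int/long to a double, so huge longs no longer overflow and a one-bit difference is no longer rounded away.

    print(10 ** 400 == 1e300)        # False; the old long -> float coercion raised OverflowError
    print(10 ** 400 > 1e300)         # True, decided without converting the long to a float

    big = 2 ** 60 + 1                # not exactly representable as a C double
    print(float(big) == 2.0 ** 60)   # True: converting to float rounds the low bit away
    print(big == 2.0 ** 60)          # False: the mixed comparison is exact
    print(big > 2.0 ** 60)           # True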
|
I need to define it again. Bite the bullet and define it once as an
extern, _Py_SwappedOp[].
|
Make PySequence_Check() and PyMapping_Check() handle NULL inputs. This
goes beyond what most of the other checks do, but it is nice defensive
programming and solves the OP's problem.
|
module type with silly arguments. (The exact name can be quibbled
over, if you care).
This was partially inspired by bug #1014215 and so on, but is also
just a good idea.
|
to make it clear that it is possible to pass None as the
separator argument to get the default "any whitespace" separator.
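For reference, a tiny demonstration (the strings are mine) of the difference between the default "any whitespace" separator and an explicit one:

    text = "  spam\teggs \n ham  "

    print(text.split(None))   # ['spam', 'eggs', 'ham'] -- runs of whitespace collapse
    print(text.split(" "))    # explicit separator: empty strings appear around the gaps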
|
The list resizing scheme only downsized when more than 16 elements were
removed in a single step: del a[100:120]. As a result, the list would
never shrink when popping elements off one at a time.
This patch makes it shrink whenever more than half of the space is unused.
Also, at Tim's suggestion, renamed _new_size to new_allocated. This makes
the code easier to understand.
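A rough way to observe the effect (sys.getsizeof arrived later than this commit and is used purely for illustration): popping elements one at a time should now let the allocated buffer shrink rather than staying at its high-water mark.

    import sys

    lst = list(range(10000))
    before = sys.getsizeof(lst)
    while lst:
        lst.pop()                 # remove one element per step
    after = sys.getsizeof(lst)
    print(before, after)          # 'after' should be far smaller than 'before'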
|
decoding incomplete input (when the input stream is temporarily exhausted).
codecs.StreamReader now implements buffering, which enables proper
readline support for the UTF-16 decoders. codecs.StreamReader.read()
has a new argument chars which specifies the number of characters to
return. codecs.StreamReader.readline() and codecs.StreamReader.readlines()
have a new argument keepends. Trailing "\n"s will be stripped from the lines
if keepends is false. Added C APIs PyUnicode_DecodeUTF8Stateful and
PyUnicode_DecodeUTF16Stateful.
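A short sketch of the new arguments (the sample text is mine): read() can be asked for an exact number of characters even though UTF-16 stores two bytes per code unit, and readline()/readlines() accept keepends.

    import codecs
    import io

    data = "alpha\nbeta\n".encode("utf-16")   # BOM plus two bytes per character
    reader = codecs.getreader("utf-16")(io.BytesIO(data))

    print(reader.read(chars=3))               # 'alp' -- exactly three characters
    print(reader.readline(keepends=False))    # 'ha'  -- rest of the line, newline stripped
    print(reader.readlines(keepends=False))   # ['beta']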
|
(Contributed by Dima Dorfman)
|
Bugfix candidate -- although long_pow() is so different now I doubt a
patch would apply to 2.3.
|
This checkin is adapted from part 2 (of 3) of Trevor Perrin's patch set.
BACKWARD INCOMPATIBILITY: SHIFT must now be divisible by 5. AFAIK,
nobody will care. long_pow() could be complicated to worm around that,
if necessary.
long_pow():
- BUGFIX: This leaked the base and power when the power was negative
(and so the computation delegated to float pow).
- Instead of doing right-to-left exponentiation, do left-to-right. This
is more efficient for small bases, which is the common case.
- In addition, if the exponent is large (more than FIVEARY_CUTOFF
digits), precompute [a**i % c for i in range(32)], and go left to
right 5 bits at a time.
l_divmod():
- The signature changed so that callers who don't want the quotient,
or don't want the remainder, can pass NULL in the slot they don't
want. This saves them from having to declare a variable for unwanted
stuff, and from remembering to decref it.
long_mod(), long_div(), long_classic_div():
- Adjust to new l_divmod() signature, and simplified as a result.
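A pure-Python sketch of the left-to-right 5-ary scheme described above (my own toy version, not the C code; it assumes a non-negative exponent and a positive modulus):

    def pow_5ary(a, b, c):
        """Compute a**b % c, consuming the exponent five bits at a time, left to right."""
        table = [1]
        for _ in range(31):
            table.append((table[-1] * a) % c)   # table[i] == a**i % c for i in range(32)

        digits = []                             # base-32 digits of b, most significant first
        while b:
            digits.append(b & 31)
            b >>= 5
        digits.reverse()

        result = 1
        for d in digits:
            for _ in range(5):
                result = (result * result) % c  # shift the exponent accumulated so far by 5 bits
            result = (result * table[d]) % c    # fold in the next 5-bit digit
        return result % c

    assert pow_5ary(3, 12345, 997) == pow(3, 12345, 997)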
|
This checkin is adapted from part 1 (of 3) of Trevor Perrin's patch set.
x_mul()
- sped a little by optimizing the C
- sped a lot (~2X) if it's doing a square; note that long_pow() squares
often
k_mul()
- more cache-friendly now if it's doing a square
KARATSUBA_CUTOFF
- boosted; gradeschool mult is quicker now, and it may have been too low
for many platforms anyway
KARATSUBA_SQUARE_CUTOFF
- new
- since x_mul is a lot faster at squaring now, the point at which
Karatsuba pays for squaring is much higher than for general mult
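For context, a toy Karatsuba in Python (my own sketch; CPython's k_mul() works on arrays of 15-bit digits and has the special squaring path described above):

    def karatsuba(x, y, cutoff_bits=64):
        """Multiply non-negative ints with three recursive multiplies instead of four."""
        if x.bit_length() <= cutoff_bits or y.bit_length() <= cutoff_bits:
            return x * y                              # "grade school" base case
        n = max(x.bit_length(), y.bit_length()) // 2
        xh, xl = x >> n, x & ((1 << n) - 1)
        yh, yl = y >> n, y & ((1 << n) - 1)
        hh = karatsuba(xh, yh, cutoff_bits)
        ll = karatsuba(xl, yl, cutoff_bits)
        mid = karatsuba(xh + xl, yh + yl, cutoff_bits) - hh - ll
        return (hh << (2 * n)) + (mid << n) + ll

    a, b = 3 ** 300, 7 ** 250
    assert karatsuba(a, b) == a * b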
|
need to convert str objects from the iterable to unicode. So, if
someone set the system default encoding to something nasty enough,
the conversion process could mutate the input iterable as a side
effect, and PySequence_Fast doesn't hide that from us if the input was
a list. In other words, we can't assume that the size of PySequence_Fast's
result is invariant across PyUnicode_FromObject() calls.
|
much to reduce the size of the code, but greatly improves its clarity.
It's also quicker in what's probably the most common case (the argument
iterable is a list). Against it, if the iterable isn't a list or a tuple,
a temp tuple is materialized containing the entire input sequence, and
that's a bigger temp memory burden. Yawn.
|
int. I sure wish MS would gripe about that! Whatever, note that the
statement above it guarantees that the cast loses no info.
|