of PyString_DecodeEscape(). This prevents a call to
_PyString_Resize() for the empty string, which would
result in a PyErr_BadInternalCall(), because the
empty string has more than one reference.
This closes SF bug http://www.python.org/sf/603937
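
A quick way to see why resizing the shared empty string must be avoided (a
minimal sketch in today's Python; sys.getrefcount shows the sharing):

    import sys

    s = ""
    # the empty string is a shared singleton, so its reference count is far
    # above 1; resizing it in place would corrupt every other holder, which
    # is exactly what _PyString_Resize() guards against with
    # PyErr_BadInternalCall()
    print(sys.getrefcount(s))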
|
Now all non-mutating dict methods are in the proxy also.
Inspired by SF bug #602232.
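
A sketch of the behaviour in today's Python, where types.MappingProxyType
plays the role of the internal dict proxy (the modern spelling; the original
patch was against the C-level dictproxy):

    import types

    d = {"a": 1}
    proxy = types.MappingProxyType(d)

    # non-mutating dict methods are available through the proxy ...
    print(list(proxy.keys()), proxy.get("a"), "a" in proxy)

    # ... but anything mutating is refused
    try:
        proxy["b"] = 2
    except TypeError as exc:
        print(exc)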
|
when given its own type as an argument.
|
possible. This always called PyUnicode_Check() and PyString_Check(),
at least one of which would call PyType_IsSubtype(). Also, this would
call PyString_Size() on known string objects.
|
Because all built-in tests return bools now, this is the most common
path!
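
For instance, the built-in comparison and membership tests really do produce
genuine bools, which is why this path dominates (a trivial check in modern
Python):

    print(type(1 < 2))               # <class 'bool'>
    print(type("a" in "abc"))        # <class 'bool'>
    print(type(isinstance(1, int)))  # <class 'bool'>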
|
always returns a bool, so avoid calling PyObject_IsTrue() in that
case.
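
A Python rendering of that shortcut (a sketch; the real fast path lives in
the C eval loop and tests identity against the Py_True/Py_False singletons):

    def fast_is_true(obj):
        # identity checks against the two bool singletons: no call needed
        if obj is True:
            return 1
        if obj is False:
            return 0
        # general path, the moral equivalent of PyObject_IsTrue()
        return 1 if obj else 0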
|
wrong thing for a unicode subclass when there were zero string
replacements. The example given in the SF bug report was only one way
to trigger this; replacing a string of length >= 2 that's not found is
another. The code would actually write outside allocated memory if the
replacement string was longer than the search string.
(I wonder how many more of these are lurking? The unicode code base
is full of wonders.)
Bugfix candidate; this same bug is present in 2.2.1.
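
The subclass case is easy to reproduce; a sketch in modern Python, where the
analogous code path exists for str subclasses:

    class MyStr(str):
        pass

    s = MyStr("hello")
    r = s.replace("xy", "Z")  # length-2 search string that is not found
    # zero replacements: the result must still be a plain str built from
    # the subclass's data, not a mis-sized or mis-typed object
    print(type(r), r)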
|
the string/unicode method .replace() with a zero-length first argument.
Inyeol contributed tests for this too.
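
The zero-length pattern now has the obvious "match at every gap" meaning; a
quick check in modern Python:

    print("abc".replace("", "-"))  # -a-b-c-
    print("".replace("", "x"))     # x
    print("ab".count(""))          # 3: matches at each boundary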
|
SHIFT and MASK, and widen digit. One problem is that code of the form
    digit << small_integer
implicitly assumes that the result fits in an int or unsigned int
(platform-dependent, but "int sized" in any case), since digit is
promoted "just" to int or unsigned via the usual integer promotions.
But if digit is typedef'ed as unsigned int, this loses information.
The cure for this is just to cast digit to twodigits first.
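
The pitfall is easy to model in Python by masking to the relevant C widths
(a sketch; INT_BITS, TWODIGITS_BITS and the digit value are illustrative,
not the real longobject.c constants):

    INT_BITS = 32           # width the shift is computed at after promotion
    TWODIGITS_BITS = 64     # width of the twodigits type
    d = 0x40000001          # a digit with high bits in use

    def shift_as_int(d, n):
        # what "digit << n" computes if only promoted to unsigned int
        return (d << n) & ((1 << INT_BITS) - 1)

    def shift_as_twodigits(d, n):
        # what it computes after casting digit to twodigits first
        return (d << n) & ((1 << TWODIGITS_BITS) - 1)

    print(hex(shift_as_int(d, 4)))        # 0x10: the top bits are lost
    print(hex(shift_as_twodigits(d, 4)))  # 0x400000010: information kept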
|
These were reported and fixed by Inyeol Lee in SF bug 595350. The
endswith() bug was already fixed in 2.3, but this adds some more test
cases.
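
A few typical edge cases such tests exercise (illustrative only; the exact
cases live in SF bug 595350 and the test suite):

    assert "hello".startswith("he", 0, 2)
    assert not "hello".startswith("hello!", 0, 5)
    assert "hello".endswith("lo", 3)
    assert "hello".endswith("llo", -4)   # negative start indices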
|
signed vs unsigned).
|
interning. I modified Oren's patch significantly, but the basic idea
and most of the implementation is unchanged. Interned strings created
with PyString_InternInPlace() are now mortal, and you must keep a
reference to the resulting string around; use the new function
PyString_InternImmortal() to create immortal interned strings.
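
At the Python level the same machinery is reachable via intern()
(sys.intern() today); a sketch of the observable behaviour:

    import sys

    a = sys.intern("".join(["py", "thon"]))  # a string built at run time
    b = sys.intern("python")
    print(a is b)  # True: both now refer to the single interned copy

    # with this change the interned copy is mortal: drop all references
    # and it can be collected like any other string; immortality requires
    # the C-level PyString_InternImmortal()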
|
comments everywhere that bugged me: /* Foo is inlined */ instead of
/* Inline Foo */. Somehow the "is inlined" phrase always confused me
for half a second (thinking, "No it isn't" until I added the missing
"here"). The new phrase is hopefully unambiguous.
|
to _PyType_Lookup().
|
Should save 4% on slot lookups.
|
Move some debugging checks inside Py_DEBUG.
They were causing cache misses according to cachegrind.
|
This causes a modest speedup.
|
expensive and overly general PyObject_IsInstance(), call
PyObject_TypeCheck(), which is a macro that often avoids a call, and if
it does make a call, calls the much more efficient PyType_IsSubtype().
This saved 6% on a benchmark for slot lookups.
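
The distinction, rendered in Python (a sketch: "type(x) is T" mirrors the
macro's inline exact-type test, issubclass the PyType_IsSubtype call, while
isinstance corresponds to the general PyObject_IsInstance):

    def type_check(obj, tp):
        t = type(obj)
        # exact match first: the common case, no function call in the macro
        if t is tp:
            return True
        # otherwise walk the base classes, as PyType_IsSubtype does
        return issubclass(t, tp)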
|
com_error() is static in Python/compile.c.
|
-- replace them with the slightly faster PyObject_Call(o, a, NULL). (The
difference is that the latter requires a to be a tuple; the former
allows other values and wraps them in a tuple if necessary; it
involves two more levels of C function calls to accomplish all that.)
|
to inner scope, too.
|
rigorous instead of hoping for testing not to turn up counterexamples.
Call me heretical, but despite that I'm wholly confident in the proof,
and have done it two different ways now, I still put more faith in
testing ...
|
normalized result, so there is no point in normalizing it again. The number
of test+branches was also excessive.
|
[ 587993 ] SET_LINENO killer
Remove SET_LINENO. Tracing is now supported by inspecting co_lnotab.
Many sundry changes to document and adapt to this change.
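
Line numbers are recovered from the code object instead; for example,
dis.findlinestarts() decodes the offset-to-line mapping that co_lnotab
carried (newer CPythons store it in co_linetable, but the interface is the
same):

    import dis

    def f():
        x = 1
        y = 2
        return x + y

    # (bytecode offset, line number) pairs, decoded from the code object
    print(list(dis.findlinestarts(f.__code__)))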
|
copy the metatype from the base, the base actually has one!
|
al*bl "always fit": it's actually trivial given what came before.
|
ah*bh and al*bl. This is much easier than explaining why that's true
for (ah+al)*(bh+bl), and follows directly from the simple part of the
(ah+al)*(bh+bl) explanation.
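
The identity being discussed, checked numerically (names follow the
longobject.c comments; B is the base power at the split point):

    B = 10 ** 4
    a, b = 12345678, 87654321
    ah, al = divmod(a, B)
    bh, bl = divmod(b, B)

    t1 = ah * bh
    t2 = al * bl
    t3 = (ah + al) * (bh + bl) - t1 - t2   # equals ah*bl + al*bh

    assert a * b == t1 * B * B + t3 * B + t2  # three multiplies suffice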
|
space is no longer needed, so removed the code. It was only possible when
a degenerate (ah->ob_size == 0) split happened, but after that fix went
in I added k_lopsided_mul(), which saves the body of k_mul() from seeing
a degenerate split. So this removes code, and adds a honking long comment
block explaining why spilling out of bounds isn't possible anymore. Note:
if we end up spilling out of bounds anyway <wink>, an assert in v_iadd()
is certain to trigger.
|
(rev. 2.86). The other type is only disqualified from sq_repeat when
it has the CHECKTYPES flag. This means that for extension types that
only support "old-style" numeric ops, such as Zope 2's ExtensionClass,
sq_repeat still trumps nb_multiply.
|
is an *unsigned* long.
|
k_mul() when inputs have vastly different sizes, and a little more
efficient when they're close to a factor of 2 out of whack.
I consider this done now, although I'll set up some more correctness
tests to run overnight.
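
The idea, sketched over Python ints (illustrative only; the real
k_lopsided_mul() works on digit vectors): slice the long operand into pieces
the size of the short one, so every partial product is a balanced multiply
that Karatsuba handles well.

    def lopsided_mul(a, b):
        # ensure b is the shorter operand
        if a.bit_length() < b.bit_length():
            a, b = b, a
        if b == 0:
            return 0
        chunk = b.bit_length()
        mask = (1 << chunk) - 1
        total, shift = 0, 0
        while a:
            # each (a & mask) is about the size of b: a balanced multiply
            total += ((a & mask) * b) << shift
            a >>= chunk
            shift += chunk
        return total

    big = 2 ** 1000 + 12345
    assert lopsided_mul(big, 987654321) == big * 987654321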
|
multiply via Ctrl+C could cause a NULL-pointer dereference due to
the assert.
|
the good one <wink>. Also checked in a test-aid by mistake.
|
cases, overflow the allocated result object by 1 bit. In such cases,
it would have been brought back into range if we subtracted al*bl and
ah*bh from it first, but I don't want to do that because it hurts cache
behavior. Instead we just ignore the excess bit when it appears -- in
effect, this is forcing unsigned mod BASE**(asize + bsize) arithmetic
in a case where that doesn't happen all by itself.
|
1. You can now have __dict__ and/or __weakref__ in your __slots__
(before only __weakref__ was supported). This is treated
differently than before: it merely sets a flag that the object
should support the corresponding magic.
2. Dynamic types now always have descriptors __dict__ and __weakref__
thrust upon them. If the type in fact does not support one or the
other, that descriptor's __get__ method will raise AttributeError.
3. (This is the reason for all this; it fixes SF bug 575229, reported
by Cesar Douady.) Given this code:
class A(object): __slots__ = []
class B(object): pass
class C(A, B): __slots__ = []
the class object for C was broken; its size was less than that of
B, and some descriptors on B could cause a segfault. C now
correctly inherits __weakref__ and __dict__ from B, even though A
is the "primary" base (C.__base__ is A).
4. Some code cleanup, and a few comments added.
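
The bug case from point 3, reconstructed in modern Python (where it now
behaves; the attribute and weakref checks illustrate points 1 and 2):

    import weakref

    class A(object): __slots__ = []
    class B(object): pass
    class C(A, B): __slots__ = []

    c = C()
    c.attr = 1                    # C inherits __dict__ support from B
    print(c.__dict__)             # {'attr': 1}
    print(weakref.ref(c)() is c)  # True: weak references work as well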
|
algorithm. MSVC 6 wasn't impressed <wink>.
Something odd: the x_mul algorithm appears to get substantially worse
than quadratic time as the inputs grow larger:
bits in each input    x_mul time    k_mul time
------------------    ----------    ----------
             15360          0.01          0.00
             30720          0.04          0.01
             61440          0.16          0.04
            122880          0.64          0.14
            245760          2.56          0.40
            491520         10.76          1.23
            983040         71.28          3.69
           1966080        459.31         11.07
That is, x_mul is perfectly quadratic-time until a little burp at
2.56->10.76, and after that goes to hell in a hurry. Under Karatsuba,
doubling the input size "should take" 3 times longer instead of 4, and
that remains the case throughout this range. I conclude that my "be nice
to the cache" reworkings of k_mul() are paying off.
|
correct now, so added some final comments, did some cleanup, and enabled
it for all long-int multiplies. The KARAT envar no longer matters,
although I left some #if 0'ed code in there for my own use (temporary).
k_mul() is still much slower than x_mul() if the inputs have very
different sizes, and that still needs to be addressed.
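
For reference, the recursion k_mul() implements, sketched over Python ints
(cutoff and splitting are illustrative; the real code splits on digit counts
and falls back to x_mul below the cutoff):

    def k_mul(x, y, cutoff_bits=512):
        n = max(x.bit_length(), y.bit_length())
        if n <= cutoff_bits:
            return x * y                   # schoolbook regime (x_mul)
        half = n // 2
        mask = (1 << half) - 1
        ah, al = x >> half, x & mask
        bh, bl = y >> half, y & mask
        t1 = k_mul(ah, bh)
        t2 = k_mul(al, bl)
        t3 = k_mul(ah + al, bh + bl) - t1 - t2
        return (t1 << (2 * half)) + (t3 << half) + t2

    a, b = 2 ** 2000 + 3, 2 ** 1999 + 7
    assert k_mul(a, b) == a * b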