Commit messages

separators on str.split() and str.rsplit().
bit by checking the value of UCHAR_MAX in Include/Python.h. There was a
check in Objects/stringobject.c. Remove that. (Note that, unlike the old
test, we don't define UCHAR_MAX if it's not already defined.)
SF feature request #801847.
Original patch written by Sean Reifschneider.
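For context, rsplit() splits from the right while split() splits from the left;
a quick illustration (not taken from the patch itself):

    >>> 'a,b,c'.rsplit(',', 1)
    ['a,b', 'c']
    >>> 'a,b,c'.split(',', 1)
    ['a', 'b,c']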
and left shifts. (Thanks to Kalle Svensson for SF patch 849227.)
This addresses most of the remaining semantic changes promised by
PEP 237, except for repr() of a long, which still shows the trailing
'L'. The PEP appears to promise warnings for operations that
changed semantics compared to Python 2.3, but this is not
implemented; we've suffered through enough warnings related to
hex/oct literals and I think it's best to be silent now.
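A rough illustration of the new semantics, assuming Python 2.4 behaviour (not
output recorded in this commit):

    >>> hex(-1)        # previously wrapped to the platform-dependent '0xffffffff'
    '-0x1'
    >>> 1 << 100       # left shifts now return a long instead of losing bits
    1267650600228229401496703205376L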
This closes SF bug #827260.
Backported to 2.3.
Adding missing support for '%F'.
Will backport to 2.3.1.
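A minimal illustration, assuming '%F' formats the same way as '%f':

    >>> '%F' % 1.5
    '1.500000'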
* Doc - add doc for when functions were added
* UserString
* string object methods
* string module functions
'chars' is used for the last parameter everywhere.
These changes will be backported, since some of the changes had already
been made but were inconsistent.
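For reference, 'chars' is the optional argument naming the set of characters
removed by the strip() family; a small illustrative example:

    >>> '   spacious   '.strip()
    'spacious'
    >>> 'www.example.com'.strip('cmowz.')
    'example'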
instead of raising a TypeError. (From SF patch #710127)
Add tests to verify this is fixed.
Add various tests for '%c' % int.
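A brief illustration of '%c' with an integer argument (illustrative, not taken
from the patch):

    >>> '%c' % 65
    'A'
    >>> '%c' % 'A'
    'A'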
types. The special handling for these can now be removed from save_newobj().
Add some testing for this.
Also add support for setting the 'fast' flag on the Python Pickler class,
which suppresses use of the memo.
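A minimal sketch of the Python-level 'fast' flag, assuming the modern pickle
module (whose Pickler still exposes the same attribute):

    import io, pickle

    buf = io.BytesIO()
    p = pickle.Pickler(buf, protocol=2)
    p.fast = True            # suppress the memo; shared and recursive objects are no longer tracked
    p.dump(['spam', 'spam'])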
and unicode
Patch by Christopher Blunck.
a single character. Shaves another 10% off the running time by avoiding
the lg2(N) loops and cache effects for the other cases.
Christian Tismer pointed out the high cost of the loop overhead and
function call overhead for 'c' * n where n is large. Accordingly,
the new code only makes lg2(n) loops.
Interestingly, 'c' * 1000 * 1000 ran a bit faster with the old code. At some
point, the loop and function call overhead became cheaper than invalidating
the cache with lengthy memcpys. But for more typical sizes of n, the new
code runs much faster and for larger values of n it runs only a bit slower.
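A rough Python sketch of the doubling idea, for illustration only (the real
code operates on the raw character buffer with memcpy):

    def repeat(s, n):
        # Build s * n with about log2(n) concatenations instead of n:
        # keep doubling what has been produced, then top up the rest.
        if n <= 0:
            return ''
        result, produced = s, 1
        while produced * 2 <= n:
            result += result                      # double the buffer
            produced *= 2
        # the remaining (n - produced) copies already fit inside result
        return result + result[:len(s) * (n - produced)]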
Python 2.2.x backport candidate. (This bug has been around since
Python 1.6.)
Obtain cleaner code and a system-wide
performance boost by using the fast, pre-parsed
PyArg_UnpackTuple function instead of the PyArg_ParseTuple
function, which is driven by a format string.
When mwh added extended slicing, strings and unicode became mappings.
Thus, the 'dict' variable got set, which prevented an error when doing:
newstr = 'format without a percent' % string_value
This fix raises an exception again when there are no format specifiers
and % is applied to a string value.
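Illustration of the restored behaviour (output from a current interpreter;
a mapping argument still passes through):

    >>> 'format without a percent' % 'some string'
    Traceback (most recent call last):
      ...
    TypeError: not all arguments converted during string formatting
    >>> 'format without a percent' % {'unused': 1}
    'format without a percent'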
'%2147483647d' % -123 segfaults. This was because an integer overflow
in a comparison caused the string resize to be skipped. After fixing
the overflow, this could call _PyString_Resize() with a negative size,
so I (1) test for that and raise MemoryError instead; (2) also added a
test for negative newsize to _PyString_Resize(), raising SystemError
as for all bad arguments.
An identical bug existed in unicodeobject.c, of course.
Will backport to 2.2.2.
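For reference, the crashing expression was simply a huge field width; shown
here as an illustration, not output from a particular build:

    '%2147483647d' % -123    # width of 2**31 - 1; now raises (e.g. MemoryError) instead of segfaulting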
Also fixed an error message -- "%s argument has non-string str()"
doesn't make sense for %r, so the error message now differentiates
between %s and %r.
because PyObject_Repr() and PyObject_Str() ensure that this can never
happen. Added a helpful comment instead.
sees a Unicode argument. Unfortunately this test was also executed
for %r, because %s and %r share almost all of their code. This meant
that, if u is a unicode object while repr(u) is an 8-bit string
containing ASCII characters, '%r' % u is a *unicode* string containing
only ASCII characters!
Fixed by executing the test only for %s.
Also fixed an error message -- "%s argument has non-string str()"
doesn't make sense for %r, so the error message now differentiates
between %s and %r.
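A Python 2 illustration of the symptom (an assumed example, not taken from
the commit):

    >>> u = u'abc'
    >>> type('%r' % u)    # was <type 'unicode'> before the fix
    <type 'str'>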
Two of these were real bugs.
of PyString_DecodeEscape(). This prevents a call to
_PyString_Resize() for the empty string, which would
result in a PyErr_BadInternalCall(), because the
empty string has more than one reference.
This closes SF bug http://www.python.org/sf/603937
possible. This always called PyUnicode_Check() and PyString_Check(),
at least one of which would call PyType_IsSubtype(). Also, this would
call PyString_Size() on known string objects.
the string/unicode method .replace() with a zero-length first argument.
Inyeol contributed tests for this too.
These were reported and fixed by Inyeol Lee in SF bug 595350. The
endswith() bug was already fixed in 2.3, but this adds some more test
cases.
interning. I modified Oren's patch significantly, but the basic idea
and most of the implementation are unchanged. Interned strings created
with PyString_InternInPlace() are now mortal, and you must keep a
reference to the resulting string around; use the new function
PyString_InternImmortal() to create immortal interned strings.
comments everywhere that bugged me: /* Foo is inlined */ instead of
/* Inline Foo */. Somehow the "is inlined" phrase always confused me
for half a second (thinking, "No it isn't" until I added the missing
"here"). The new phrase is hopefully unambiguous.
com_error() is static in Python/compile.c.
currently return inconsistent results for ints and longs; in
particular: hex/oct/%u/%o/%x/%X of negative short ints, and x<<n that
either loses bits or changes sign. (No warnings for repr() of a long,
though that will also change to lose the trailing 'L' eventually.)
This introduces some warnings in the test suite; I'll take care of
those later.
string of longer than 1 character.
will trigger splitting on any whitespace.
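Illustration of whitespace splitting with no separator (or an explicit None):

    >>> '  one\ttwo   three '.split()
    ['one', 'two', 'three']
    >>> '  one\ttwo   three '.split(None)
    ['one', 'two', 'three']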
'%g' % '1'
'%d' % '1'
Add a test for these conditions.
Fix the test so that if no exception is raised, it is treated as a failure.
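Expected behaviour after the fix (the exact exception wording varies across
versions):

    >>> '%d' % '1'
    Traceback (most recent call last):
      ...
    TypeError: %d format: a number is required, not str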
The staticforward define was needed to support certain broken C
compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched the
static keyword when it was used with a forward declaration of a static
initialized structure. Standard C allows the forward declaration with
static, and we've decided to stop catering to broken C compilers. (In
fact, we expect that the compilers are all fixed eight years later.)
I'm leaving staticforward and statichere defined in object.h as
static. This is only for backwards compatibility with C extensions
that might still use it.
XXX I haven't updated the documentation.
helper macros to something saner, and used them appropriately in other
files too, to reduce #ifdef blocks.
classobject.c, instance_dealloc(): One of my worst Python Memories is
trying to fix this routine a few years ago when COUNT_ALLOCS was defined
but Py_TRACE_REFS wasn't. The special-build code here is way too
complicated. Now it's much simpler. Difference: in a Py_TRACE_REFS
build, the instance is no longer in the doubly-linked list of live
objects while its __del__ method is executing, and that may be visible
via sys.getobjects() called from a __del__ method. Tough -- the object
is presumed dead while its __del__ is executing anyway, and not calling
_Py_NewReference() at the start allows enormous code simplification.
typeobject.c, call_finalizer(): The special-build instance_dealloc()
pain apparently spread to here too via cut-'n-paste, and this is much
simpler now too. In addition, I didn't understand why this routine
was calling _PyObject_GC_TRACK() after a resurrection, since there's no
plausible way _PyObject_GC_UNTRACK() could have been called on the
object by this point. I suspect it was left over from pasting the
instance_dealloc() code. Instead asserted that the object is still
tracked. Caution: I suspect we don't have a test that actually
exercises the subtype_dealloc() __del__-resurrected-me code.
Handle negative indices similar to slices.
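As an illustration of slice-style negative indices, using find() here (an
assumed example; the exact methods touched are not shown above):

    >>> 'hello'.find('l', -3)    # a start of -3 is treated as len('hello') - 3 == 2
    2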