Commit messages:
- … tests.
- Implement sys.maxunicode.
Explicitly wrap around upper/lower computations for wide Py_UNICODE.
When decoding large characters with UTF-8, represent expected test
results using the \U notation.
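A minimal sketch of what this looks like at the Python level, assuming a Python 2-era interpreter (u'' literals, str.decode); the specific code point U+10002 is just an example:

```python
import sys

# sys.maxunicode reports the configured character width:
# 0xFFFF on a narrow (UCS-2) build, 0x10FFFF on a wide (UCS-4) build.
print(sys.maxunicode)

# Expected results for large characters are written with \U notation,
# which works on either build (a narrow build stores a surrogate pair).
expected = u"\U00010002"
assert "\xf0\x90\x80\x82".decode("utf-8") == expected  # UTF-8 for U+10002
```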
- … when checking surrogates.
- Add configure option --enable-unicode.
Add config.h macros Py_USING_UNICODE, PY_UNICODE_TYPE, Py_UNICODE_SIZE,
SIZEOF_WCHAR_T.
Define Py_UCS2.
Encode and decode large UTF-8 characters into single Py_UNICODE values
for wide Unicode types; likewise for UTF-16.
Remove test whether sizeof Py_UNICODE is two.
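As a rough illustration of the wide-build behaviour described above, where the decoders can deliver one Py_UNICODE value per character instead of a surrogate pair: the helper names below are hypothetical, only the surrogate arithmetic is the point.

```python
def combine_surrogates(hi, lo):
    # A UTF-16 high/low surrogate pair folded into one code point,
    # which a wide (UCS-4) Py_UNICODE can hold directly.
    return 0x10000 + ((hi - 0xD800) << 10) + (lo - 0xDC00)

def split_surrogates(cp):
    # The reverse step, needed when encoding to UTF-16 or on narrow builds.
    cp -= 0x10000
    return 0xD800 + (cp >> 10), 0xDC00 + (cp & 0x3FF)

assert combine_surrogates(0xD800, 0xDC02) == 0x10002
assert split_surrogates(0x10002) == (0xD800, 0xDC02)
```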
- … sizeof(int)
- … "mapping" object, specifically one that supports PyMapping_Keys() and
PyObject_GetItem(). This allows you to say e.g. {}.update(UserDict())
We keep the special case for concrete dict objects, although that
seems moderately questionable. OTOH, the code exists and works, so
why change that?
.update()'s docstring already claims that D.update(E) implies calling
E.keys() so it's appropriate not to transform AttributeErrors in
PyMapping_Keys() to TypeErrors.
Patch eyeballed by Tim.
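A small usage sketch of the behaviour described above; the Pairs class is made up for illustration and provides only the two operations the commit message names (keys() and item access).

```python
class Pairs:
    """A minimal 'mapping': just keys() plus __getitem__."""
    def keys(self):
        return ["a", "b"]
    def __getitem__(self, key):
        return key.upper()

d = {"x": 1}
d.update(Pairs())        # no longer needs a real dict argument
assert d == {"x": 1, "a": "A", "b": "B"}
```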
- … unicodeobject.h, which forces sizeof(Py_UNICODE) == sizeof(Py_UCS4).
(This may be good enough for platforms that don't have a 16-bit type; the UTF-16 codecs don't work, though.)
- … sizeof(Py_UNICODE) >= sizeof(long). Also changed surrogate expansion
to work if sizeof(Py_UNICODE) > 2.
- … HAVE_USABLE_WCHAR_T
- … the next free valuestack slot, not to the base (in America, stacks push
and pop at the top -- they mutate at the bottom in Australia <wink>).
eval_frame(): assert that f_stacktop isn't NULL upon entry.
frame_dealloc(): avoid ordered pointer comparisons involving f_stacktop
when f_stacktop is NULL.
- Bugfix candidate in inspect.py: it was referencing "self" outside of
a method.
- i_divmod: New and simpler algorithm. Old one returned gibberish on most
boxes when the numerator was -sys.maxint-1. Oddly enough, it worked in the
release (not debug) build on Windows, because the compiler optimized away
some tricky sign manipulations that were incorrect in this case.
Makes you wonder <wink> ...
Bugfix candidate.
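A Python model of the kind of algorithm described. Hedged: Python ints cannot overflow, so this shows only the floor-correction step, not the -sys.maxint-1 overflow itself, and the function name mirrors the C one while the body is purely illustrative.

```python
def i_divmod(x, y):
    # Start from a quotient/remainder pair that truncates toward zero
    # (what C division yields), then adjust once when the remainder's
    # sign differs from the divisor's, giving Python's floor semantics:
    # x == xdivy*y + xmody, with xmody taking the sign of y.
    xdivy = abs(x) // abs(y)
    if (x < 0) != (y < 0):
        xdivy = -xdivy
    xmody = x - xdivy * y
    if xmody and (xmody < 0) != (y < 0):
        xmody += y
        xdivy -= 1
    return xdivy, xmody

assert i_divmod(-7, 2) == divmod(-7, 2) == (-4, 1)
assert i_divmod(7, -2) == divmod(7, -2) == (-4, -1)
```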
- … #if/#error constructs.
- Gave Python linear-time repr() implementations for dicts, lists, strings.
This means, e.g., that repr(range(50000)) is no longer 50x slower than
pprint.pprint() in 2.2 <wink>.
I don't consider this a bugfix candidate, as it's a performance boost.
Added _PyString_Join() to the internal string API. If we want that in the
public API, fine, but then it requires runtime error checks instead of
asserts.
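The Python-level shape of the linear-time approach (a sketch, not the C code): collect the pieces in a list and join once, rather than growing a string piece by piece, which is quadratic overall.

```python
def list_repr(items):
    pieces = [repr(x) for x in items]      # one pass to build the parts
    return "[" + ", ".join(pieces) + "]"   # one join, linear in total size

assert list_repr([1, 2, 3]) == "[1, 2, 3]"
```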
- … some code for longer than needed.
- … significant digits sign bits. Again no change in semantics.
- … is allocated than needed (used to allocate 80 bytes of digit space no
matter how small the long input). This also runs faster, at least on 32-
bit boxes.
- … semantic change, but a bit clearer and may help a really stupid compiler
avoid pointless runtime length conversions.
- … truly needed; usually saves a little time, but no change in semantics.
- … outside the function's control, but is crucial to correct operation.
- Replaced PyLong_{As,From}{Unsigned,}LongLong guts with calls
to _PyLong_{As,From}ByteArray.
_testcapimodule.c:
Added strong tests of PyLong_{As,From}{Unsigned,}LongLong.
Fixes SF bug #432552 PyLong_AsLongLong() problems.
Possible bugfix candidate, but the fix relies on code added to longobject
to support the new q/Q structmodule format codes.
- … clarity. Should have no effect visible to callers.
- This completes the q/Q project.
longobject.c _PyLong_AsByteArray: The original code had a gross bug:
the most-significant Python digit doesn't necessarily have SHIFT
significant bits, and you really need to count how many copies of the sign
bit it has, or else spurious overflow errors result.
test_struct.py: This now does exhaustive std q/Q testing at, and on both
sides of, all relevant power-of-2 boundaries, both positive and negative.
NEWS: Added brief dict news while I was at it.
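In the spirit of the boundary testing described above, a tiny standalone check using the struct module's standard-size q/Q codes; the particular values are just examples.

```python
import struct

big = 2**63 - 1                     # largest value a signed q can hold
assert struct.unpack("<q", struct.pack("<q", big))[0] == big
assert struct.unpack("<Q", struct.pack("<Q", 2**64 - 1))[0] == 2**64 - 1

try:
    struct.pack("<q", 2**63)        # one past the boundary must fail
except (struct.error, OverflowError):
    pass
else:
    raise AssertionError("expected an overflow error")
```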
- … _PyLong_FromByteArray
_PyLong_AsByteArray
Untested and probably buggy -- they compile OK, but nothing calls them
yet. Will soon be called by the struct module, to implement x-platform
'q' and 'Q'.
If other people have uses for them, we could move them into the public API.
See longobject.h for usage details.
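For a feel of what the two helpers do, here is a Python-level analogue using int.to_bytes/int.from_bytes (modern built-ins, not the C API); the wrapper names are invented for illustration.

```python
def as_byte_array(n, length):
    # little-endian, two's-complement byte array of fixed length
    return n.to_bytes(length, "little", signed=True)

def from_byte_array(data):
    return int.from_bytes(data, "little", signed=True)

assert from_byte_array(as_byte_array(-2**63, 8)) == -2**63
```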
- … case of objects with equal types which support tp_compare. Give
type objects a tp_compare function.
Also add c<0 tests before a few PyErr_Occurred tests.
- … frequently used, and in particular this allows dropping the last
remaining obvious time-waster in the crucial lookdict() and
lookdict_string() functions. Other changes consist mostly of changing
"i < ma_size" to "i <= ma_mask" everywhere.
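The point of storing the mask, sketched in Python with hypothetical values: with a power-of-two table size, slot selection is a single AND.

```python
size = 8                  # dict tables are always a power of two
mask = size - 1           # stored as ma_mask

h = hash("some key")
slot = h & mask           # replaces a modulo by the table size in the hot path
assert 0 <= slot <= mask  # loops likewise run while i <= mask
```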
- … be possible to provoke unbounded recursion now, but leaving that to someone
else to provoke and repair.
Bugfix candidate -- although this is getting harder to backstitch, and the
cases it's protecting against are mondo contrived.
- This code is likely to get even hairier to squash core dumps due to
mutating comparisons, and it's hard enough to follow without that.
- … Bugfix candidate.
- … converting to string.
Critical bugfix candidate -- if you take this seriously <wink>.
- … true reason for allocating the tuple before checking the dict size.
- … code, less memory. Tests have uncovered no drawbacks. Christian and
Vladimir are the other two people who have burned many brain cells on the
dict code in recent years, and they like the approach too, so I'm checking
it in without further ado.
- … suggestion (modulo style).
- … _PyTuple_Resize().
- Instead of raising a SystemError, just create a new tuple of the desired
size.
This fixes (at least) SF bug #420343.
- … instead of multiplication to generate the probe sequence. The idea is
recorded in Python-Dev for Dec 2000, but that version is prone to rare
infinite loops.
The value is in getting *all* the bits of the hash code to participate;
and, e.g., this speeds up querying every key in a dict with keys
[i << 16 for i in range(20000)] by a factor of 500. Should be equally
valuable in any bad case where the high-order hash bits were getting
ignored.
Also wrote up some of the motivations behind Python's ever-more-subtle
hash table strategy.
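A sketch of the shift-based recurrence this refers to, in the shape later documented in dictobject.c's comments; the helper name and constants here are illustrative, not the exact C code.

```python
PERTURB_SHIFT = 5

def probe_slots(h, mask, count=8):
    # Each step folds more of the hash into the index via 'perturb',
    # so even keys like (i << 16) that collide in the low bits spread out.
    i = h & mask
    perturb = h & 0xFFFFFFFF          # treat the hash as an unsigned word
    for _ in range(count):
        yield i & mask
        i = 5 * i + perturb + 1
        perturb >>= PERTURB_SHIFT

print(list(probe_slots(hash(1 << 16), mask=7)))
```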
- … now takes any iterable argument, not only sequences.
NEEDS DOC CHANGES -- but I don't think we settled on a concise way to
say this stuff.
- … multi-argument list.append(1, 2, 3) (as opposed to .append((1,2,3))).
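For reference, the two spellings side by side; in today's Python the multi-argument form simply raises TypeError.

```python
lst = []
lst.append((1, 2, 3))       # append a single tuple object

try:
    lst.append(1, 2, 3)     # the old multi-argument form
except TypeError:
    pass

assert lst == [(1, 2, 3)]
```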
- … resizing.
Accurate timings are impossible on my Win98SE box, but this is obviously
faster even on this box for reasonable list.append() cases. I give
credit for this not to the resizing strategy but to getting rid of integer
multiplication and division (in favor of shifting) when computing the
rounded-up size.
For unreasonable list.append() cases, Win98SE now displays linear behavior
for one-at-time appends up to a list with about 35 million elements. Then
it dies with a MemoryError, due to fatally fragmented *address space*
(there's plenty of VM available, but by this point Win9X has broken user
space into many distinct heaps none of which has enough contiguous space
left to resize the list, and for whatever reason Win9x isn't coalescing
the dead heaps). Before the patch it got a MemoryError for the same
reason, but once the list reached about 2 million elements.
Haven't yet tried on Win2K but have high hopes extreme list.append()
will be much better behaved now (NT & Win2K didn't fragment address space,
but suffered obvious quadratic-time behavior before as lists got large).
For other systems I'm relying on common sense: replacing integer * and /
by << and >> can't plausibly hurt, the number of function calls hasn't
changed, and the total operation count for reasonably small lists is about
the same (while the operations are cheaper now).
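A sketch of the shift-only round-up idea, reconstructed from the description above and not necessarily the exact listobject.c formula: the granularity is a power of two that grows with n, so no multiplication or division is needed.

```python
def roundupsize(n):
    # Pick a power-of-two granularity (2**nbits) that grows with n,
    # then round n up to the next multiple of it, using shifts only.
    nbits = 3
    n2 = n >> 5
    while n2:
        n2 >>= 3
        nbits += 3
    return ((n >> nbits) + 1) << nbits

assert roundupsize(0) == 8
assert roundupsize(100) > 100 and roundupsize(100) % 8 == 0
```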
- Use new _PyString_Eq in lookdict_string.
- … they're entirely full. Not a question of correctness, but of temporarily
misplaced common sense.
- dictresize() was too aggressive about never ever resizing small dicts.
If a small dict is entirely full, it still needs to be rebuilt, even though
it won't actually be resized, in order to purge old dummy entries and thus
create at least one virgin slot (lookdict assumes at least one such slot
exists).
Also took the opportunity to add some high-level comments to dictresize.
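A toy model of why rebuilding helps even at the same size; the data layout and placement here are deliberately simplified, and real lookdict re-probes by hash.

```python
DUMMY = object()                       # marker left behind by deletions

def rebuild_same_size(table):
    # Copy live entries into a fresh table of the same size; dummies are
    # dropped, so at least one truly empty (virgin) slot reappears and the
    # probe loop has something to terminate on.
    new = [None] * len(table)
    for entry in table:
        if entry is not None and entry is not DUMMY:
            new[new.index(None)] = entry   # toy placement, not real probing
    return new

full_of_dummies = ["a", DUMMY, "b", DUMMY]     # no empty slot left
assert None in rebuild_same_size(full_of_dummies)
```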