Commit messages
|
This change implements a new bytecode compiler, based on a
transformation of the parse tree to an abstract syntax defined in
Parser/Python.asdl.
The compiler implementation is not complete, but it is in stable
enough shape to run the entire test suite, except for two disabled
tests.
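A rough sketch of the pipeline this describes, using the modern ast
module (a later, public descendant of the Python.asdl work, not
something this commit itself added):

    import ast

    # source -> abstract syntax tree -> bytecode
    tree = ast.parse("x = 1 + 2")
    print(ast.dump(tree))                      # the abstract-syntax form
    code = compile(tree, "<example>", "exec")  # AST -> bytecode
    exec(code)                                 # binds x to 3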
|
type lookups: whitespace and linebreak.
These lookup tables are from the Python 1.6 version, with the addition
of the 205F code point, which has since been added to Unicode as a
whitespace code point.
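The effect is visible from Python; U+205F (MEDIUM MATHEMATICAL SPACE)
now classifies as whitespace:

    # U+205F is the code point the whitespace table gained.
    print(u"\u205f".isspace())        # True
    # The linebreak table is what drives splitlines():
    print(u"a\u2028b".splitlines())   # ['a', 'b'] (U+2028 LINE SEPARATOR)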
|
Will backport
|
Backport candidate.
|
PyUnicode_DecodeCharmap() now accepts a unicode string as the mapping
argument, which is used as a mapping table.
This code isn't used by any of the codecs yet.
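On current builds the same mapping-table form can be seen through
codecs.charmap_decode(), where each input byte indexes into the string;
a small sketch:

    import codecs

    table = u"abc"   # byte 0 -> 'a', byte 1 -> 'b', byte 2 -> 'c'
    print(codecs.charmap_decode(b"\x02\x00\x01", "strict", table))
    # ('cab', 3)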
|
enabled.
|
segfault when a class contains a non-list value in the (undocumented)
special attribute __slotnames__.
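A sketch of the failure mode; on current CPython builds the bogus value
is rejected with a TypeError instead of crashing (protocol-2 pickling is
one path that consults __slotnames__):

    import pickle

    class Weird(object):
        __slotnames__ = "not-a-list"   # bogus value for the undocumented attribute

    try:
        pickle.dumps(Weird(), 2)       # protocol 2 consults __slotnames__
    except TypeError as exc:
        print(exc)                     # raised cleanly; formerly this could segfault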
|
http://mail.python.org/pipermail/python-dev/2002-July/026876.html
|
represented as a C int, raise OverflowError.
(Forward port from 2.4.2; the patch to classobject.c was already in
but needed a correction in the error message text.)
|
containing a value that doesn't fit in a C int, raise OverflowError
rather than truncating silently (and having a 50% chance of hitting the
"it should be >= 0" error).
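This change (like the forward-ported one above) makes an over-large
__len__ result fail loudly; a minimal illustration:

    class Huge(object):
        def __len__(self):
            return 2 ** 100            # far larger than any C integer type

    try:
        len(Huge())
    except OverflowError as exc:
        print(exc)                     # raised instead of silently truncating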
|
Inspired by Armin Rigo's suggestion to do the same with dictionaries.
|
subclasses should be substituted as-is and not have tp_str called on
them.
|
about illegal code points. The codec now supports PEP 293 style error
handlers.
(This is a variant of Nik Haldimann's patch that detects truncated data.)
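The truncated first line does not name the codec, but Nik Haldimann's
patch targeted UTF-7; assuming that, this is the kind of behavior PEP 293
handlers enable:

    data = b"+IK"                            # truncated UTF-7 shift sequence
    try:
        data.decode("utf-7")                 # strict: raises UnicodeDecodeError
    except UnicodeDecodeError as exc:
        print(exc)
    print(data.decode("utf-7", "replace"))   # handler substitutes U+FFFD instead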
|
(fixes bug #1119418)
|
already been computed.
* Apply a GET_SIZE() macro.
|
Fix over-aggressive PyErr_Clear(). The same code fragment appears in
various guises in list.extend(), map(), filter(), zip(), and internally
in PySequence_Tuple().
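The visible consequence of the fix: an error raised inside the iterable
now propagates instead of being cleared and mistaken for end-of-iteration.
For example:

    def broken():
        yield 1
        raise RuntimeError("boom")

    items = [0]
    try:
        items.extend(broken())   # the iterator's error must propagate,
    except RuntimeError as exc:  # not be swallowed as exhaustion
        print(exc)               # boom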
|
* set_merge() cannot assume that the table doesn't resize during iteration.
* convert some unnecessary tests to asserts -- they were necessary in
dictobject.c because PyDict_Next() is a public function. The same is
not true for set_next().
* re-arrange the order of functions to more closely match the order
in dictobject.c. This makes it much easier to compare the two
and ought to simplify any issues of maintaining both.
|
* Use set_next() for looping in issubset() and frozenset_hash().
* Re-order the presentation of cmp and hash functions.
|
was never called during interpreter shutdown GC, so the f_back!=NULL
assertion was correct. Now that generators get close()d during GC,
the assertion was being triggered because the generator close() was being
called as the top-level frame. However, nothing is actually broken by
this; it's just that the condition was unexpected in previous Python
versions.
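What shutdown GC now does, in miniature: close() the generator, which
runs its pending finally block in a frame with no caller:

    def gen():
        try:
            yield 1
        finally:
            print("cleanup ran")

    g = gen()
    next(g)      # suspend inside the try block
    g.close()    # runs the finally clause; the frame is top-level here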
|
a frozenset conversion when the initial search attempt fails with a
TypeError and the key is some type of set. Add a testcase.
* Eliminate a duplicate if-stmt.
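This is the retry that lets an unhashable set be looked up inside
another set; for example:

    s = {frozenset([1, 2])}
    # The set {1, 2} is unhashable, so the first lookup fails with
    # TypeError internally and is retried with a frozenset copy of the key:
    print({1, 2} in s)    # True
    s.discard({1, 2})     # the same retry applies to discard()/remove()
    print(s)              # set()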
|
unicode instance if the argument is not an instance of basestring and
calling __str__ on the argument returns a unicode instance.
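Under Python 2, where this applies, the behavior looks like this
(unicode() passes a unicode result from __str__ straight through):

    # Python 2 semantics:
    class Phrase(object):
        def __str__(self):
            return u"caf\xe9"

    print(type(unicode(Phrase())))   # <type 'unicode'>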
|
s|=s, s&=s, s-=s, or s^=s). Add related tests (see the sketch after
this list).
* Improve names for several variables and functions.
* Provide alternate table access functions (next, contains, add, and discard)
that work with an entry argument instead of just a key. This improves
set-vs-set operations because we already have a hash value for each key
and can avoid unnecessary calls to PyObject_Hash(). Provides a 5% to 20%
speed-up for quick-hashing elements like strings and integers. Provides
much more substantial improvements for slow-hashing elements like tuples
or objects defining a custom __hash__() function.
* Have difference operations resize() when 1/5 of the elements are dummies.
Formerly, it was 1/6. The new ratio triggers less frequently and only
in cases where it can resize more quickly and with greater benefit. The
right answer is probably either 1/4, 1/5, or 1/6. Picked the middle value
for an even trade-off between resize time and the space/time costs of
dummy entries.
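The in-place self-operations from the first bullet, which the new code
must handle without mutating the table mid-iteration:

    s = {1, 2, 3}
    s -= s          # self-difference: must safely empty the set
    print(s)        # set()

    s = {1, 2, 3}
    s ^= s          # self symmetric-difference: also empties it
    print(s)        # set()

    s = {1, 2, 3}
    s |= s          # self union and self intersection leave it unchanged
    s &= s
    print(s)        # {1, 2, 3}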
|
* Bring in free list from dictionary code.
* Improve several comments.
* Differencing can leave many dummy entries. If more than
1/6 are dummies, then resize them away.
* Factor-out common code with new macro, PyAnySet_CheckExact.
|
* Have issubset() control its own loop instead of using set_next_internal().
|
has already done the job.
* Use a macro form of PyErr_Occurred() inside the set_lookkey() function.
|
* Give set_lookkey_string() a fast alternate path when no dummy entries
are present.
* Have set_swap_bodies() reset the hash field to -1 whenever either of
the bodies is not a frozenset. Maintains the invariant of regular sets
always having -1 in the hash field; otherwise, any mutation would make
the hash value invalid.
* Use an entry pointer to simplify the code in frozenset_hash().
|
dictobject.c.
* Have frozenset_hash() use entry->hash instead of re-computing each
individual hash with PyObject_Hash(o).
* Finalize the dummy entry before a system exit.
|
method still needs to support string exceptions, and allow None for the
third argument. Documentation updates are needed, too.
|
- Handle both frozenset() and frozenset([]).
- Do not use singleton for frozenset subclasses.
- Finalize the singleton.
- Add test cases.
* Factor-out set_update_internal() from set_update(). Simplifies the
code for several internal callers.
* Factor constant expressions out of loop in set_merge_internal().
* Minor comment touch-ups.
|
part also.
|
data structure instead of using dictionaries. Reduces memory consumption
by 1/3 and provides modest speed-ups for most set operations.
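A rough way to compare footprints on any current build (exact numbers
vary widely by version and platform, so the 1/3 figure above may not
reproduce):

    import sys

    s = set(range(1000))
    d = dict.fromkeys(range(1000))
    print(sys.getsizeof(s))   # the dedicated set table
    print(sys.getsizeof(d))   # a dict holding the same keys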
|
In addition, long_pow() skipped a necessary (albeit extremely unlikely
to trigger) error check when converting an int modulus to long.
Alas, I was unable to write a test case that crashed due to either
cause.
Bugfix candidate.
|
[ 1229429 ] missing Py_DECREF in PyObject_CallMethod
Add a test in test_enumerate, which is a bit random, but suffices
(reversed_new calls PyObject_CallMethod under some circumstances).
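The reversed_new path in question is easy to reach from Python, since
reversed() dispatches to __reversed__ via PyObject_CallMethod:

    class Countdown(object):
        def __reversed__(self):          # invoked through PyObject_CallMethod
            return iter((1, 2, 3))

    print(list(reversed(Countdown())))   # [1, 2, 3]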
|
managed by C, because it's possible for the block to be smaller than the
new requested size, and at the end of allocated VM. Trying to copy over
nbytes bytes to a Python small-object block can segfault then, and there's
no portable way to avoid this (we would have to know how many bytes
starting at p are addressable, and std C has no means to determine that).
Bugfix candidate. Should be backported to 2.4, but I'm out of time.
|
float y = x;
when x is a double. Go figure.
|
Hex longs now print with lowercase letters like their int counterparts.
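For example (under Python 2 the long result also carried a trailing 'L'):

    print(hex(3735928559))     # 0xdeadbeef -- lowercase, like int formatting
    print("%x" % 3735928559)   # deadbeef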