| Commit message |
|
predicates
|
CFStrings are in better shape, but Unicode support and automatic conversion to/from Python strings remain to be done.
|
Python interpreter.
This change adds two new C-level APIs: PyEval_SetProfile() and
PyEval_SetTrace(). These can be used to install profile and trace
functions implemented in C, which can operate at much higher speeds
than Python-based functions. The overhead for calling a C-based
profile function is a very small fraction of a percent of the overhead
involved in calling a Python-based function.
The machinery required to call a Python-based profile or trace
function has been moved to sysmodule.c, where sys.setprofile() and
sys.settrace() simply become users of the new interface.
As a side effect, SF bug #436058 is fixed; there is no longer a
_PyTrace_Init() function to declare.
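The new C hooks themselves cannot be exercised from pure Python, but sys.setprofile() and sys.settrace(), which this change turns into thin users of the same interface, take callbacks of the same (frame, event, arg) shape. A minimal Python-level sketch for orientation only; the profiler and work names are illustrative, not part of the change:

    import sys

    def profiler(frame, event, arg):
        # Receives the same (frame, event, arg) triple that a C-level
        # Py_tracefunc installed via PyEval_SetProfile() would see,
        # but pays the full cost of a Python call on every event.
        if event in ("call", "return"):
            print(event, frame.f_code.co_name)
        # The return value is ignored for profile functions; trace
        # functions set with sys.settrace() return a local trace
        # function instead.

    def work():
        return sum(range(10))

    sys.setprofile(profiler)
    work()
    sys.setprofile(None)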
|
Python interpreter.
This change adds two new C-level APIs: PyEval_SetProfile() and
PyEval_SetTrace(). These can be used to install profile and trace
functions implemented in C, which can operate at much higher speeds
than Python-based functions. The overhead for calling a C-based
profile function is a very small fraction of a percent of the overhead
involved in calling a Python-based function.
The machinery required to call a Python-based profile or trace
function has been moved to sysmodule.c, where sys.setprofile() and
sys.settrace() simply become users of the new interface.
|
tests.
|
that required explicitly calling LazyList.clear() in the two tests that
use LazyList (I added a LazyList Fibonacci generator too).
A real bitch: the extremely inefficient first version of the 2-3-5 test
*looked* like a slow leak on Win98SE, but it wasn't "really": it generated
so many results that the heap grew over 4Mb (tons of frames! the number
of frames grows exponentially in that test). Then the Win98SE malloc() started
fragmenting the address space by allocating more and more heaps, and visible
memory use grew very slowly while the disk thrashed like mad.
Printing fewer results (i.e., keeping the heap burden under 4Mb) made
that illusion vanish.
Looks like there's no hope for plugging the LazyList leaks automatically
short of adding frameobjects and genobjects to gc. OTOH, they're very
easy to break by hand, and they're the only *kind* of plausibly realistic
leaks I've been able to provoke.
Dilemma.
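For orientation, here is a rough sketch of the kind of memoizing LazyList plus self-referential Fibonacci generator the message describes; the class and names are illustrative reconstructions, not the actual test code. The LazyList holds the generator (and therefore its frame), while the generator's frame refers back to the LazyList, so the cycle has to be broken by hand with clear():

    class LazyList:
        def __init__(self, g):
            self.sofar = []   # memoized results
            self.gen = g      # keeps the generator, and so its frame, alive

        def __getitem__(self, i):
            sofar = self.sofar
            while i >= len(sofar):
                sofar.append(next(self.gen))
            return sofar[i]

        def clear(self):
            # Break the LazyList <-> generator-frame cycle by hand.
            self.__dict__.clear()

    def fibgen():
        # Refers to the global `fib` LazyList below, which in turn holds
        # this generator -- a reference cycle gc did not track back then.
        yield 1
        yield 2
        i = 0
        while True:
            yield fib[i] + fib[i + 1]
            i += 1

    fib = LazyList(fibgen())
    print([fib[n] for n in range(10)])   # [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
    fib.clear()                          # explicit cleanup, as the tests must do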
|
Implement sys.maxunicode.
Explicitly wrap around upper/lower computations for wide Py_UNICODE.
When decoding large characters with UTF-8, represent expected test
results using the \U notation.
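On a present-day wide (UCS-4) build this is directly observable from Python; a small illustrative check (the particular character chosen is arbitrary):

    import sys

    # 0x10FFFF on a wide (UCS-4) build; old narrow builds reported 0xFFFF.
    print(hex(sys.maxunicode))

    # Characters outside the 16-bit range are spelled with the \U notation,
    # as in the updated test expectations.
    ch = "\U0001D11E"                 # MUSICAL SYMBOL G CLEF
    print(len(ch), hex(ord(ch)))      # 1 0x1d11e on a wide build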
|
- the correct range for the error message is range(0x110000) (see the quick
check below);
- put the 4-byte Unicode-size code inside the same else branch as the
2-byte code, rather than generating unreachable code in the 2-byte case;
- don't hide the 'else' behind the '}'.
(I would prefer that in 4-byte mode, any value should be accepted, but
reasonable people can argue about that, so I'll put that off.)
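The same bound is visible at the Python level through chr(); a quick illustrative check of the range(0x110000) limit mentioned in the first item:

    ch = chr(0x10FFFF)        # the last valid code point
    print(hex(ord(ch)))       # 0x10ffff
    try:
        chr(0x110000)         # one past the end of range(0x110000)
    except ValueError as exc:
        print(exc)            # e.g. "chr() arg not in range(0x110000)"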
|
when checking surrogates.
|
SIZEOF_SHORT by hand here.
Also added a dynamic check that SIZEOF_SHORT is correct for the platform (in
_testcapimodule).
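The real check lives in _testcapimodule at the C level; a rough Python-level analogue of the same sanity check, assuming (as the surrounding code does) that a short is two bytes on supported platforms:

    import ctypes
    import struct

    # Both should agree with the configure-time SIZEOF_SHORT value,
    # which is 2 on the platforms CPython supports.
    assert struct.calcsize("h") == 2
    assert ctypes.sizeof(ctypes.c_short) == 2
    print("short is", struct.calcsize("h"), "bytes")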
|
not writable -- too dangerous!) from Python code.
|
Add configure option --enable-unicode.
Add config.h macros Py_USING_UNICODE, PY_UNICODE_TYPE, Py_UNICODE_SIZE,
SIZEOF_WCHAR_T.
Define Py_UCS2.
Encode and decode large UTF-8 characters into single Py_UNICODE values
for wide Unicode types; likewise for UTF-16.
Remove the test of whether sizeof(Py_UNICODE) is two.
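One visible consequence: a character above U+FFFF is a single code point (a single Py_UNICODE value on a wide build), even though UTF-16 still stores it as a surrogate pair in the byte stream. An illustrative round trip:

    ch = "\U00010000"                  # first code point beyond the BMP
    assert len(ch) == 1                # one code point on a wide build

    data = ch.encode("utf-16-be")
    print(data.hex())                  # d800dc00 -- a surrogate pair on the wire
    assert data.decode("utf-16-be") == ch

    assert ch.encode("utf-8").decode("utf-8") == ch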
|
such as the Core Foundation ones.
|
real functionality yet, but method chains seem to work, and so do Retain/Release semantics.
|
Makes it much easier to find references via dumb editor search (former
"frame" in particular was near-hopeless).
|
sizeof(int)
|
mapping objects as an argument.
|
a non-dictionary mapping object. Include tests for several expected
failure modes.
|
"mapping" object, specifically one that supports PyMapping_Keys() and
PyObject_GetItem(). This allows you to say e.g. {}.update(UserDict())
We keep the special case for concrete dict objects, although that
seems moderately questionable. OTOH, the code exists and works, so
why change that?
.update()'s docstring already claims that D.update(E) implies calling
E.keys() so it's appropriate not to transform AttributeErrors in
PyMapping_Keys() to TypeErrors.
Patch eyeballed by Tim.
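A small illustrative example of the behaviour described above; the Mapping class is a made-up stand-in for any object providing keys() and __getitem__, and UserDict is imported from its modern home in collections:

    from collections import UserDict

    class Mapping:
        # Not a dict subclass: it provides only the two operations the
        # message says update() relies on -- keys() and item access.
        def __init__(self, pairs):
            self._data = dict(pairs)
        def keys(self):
            return list(self._data)
        def __getitem__(self, key):
            return self._data[key]

    d = {}
    d.update(UserDict(a=1))        # the {}.update(UserDict()) case from above
    d.update(Mapping({"b": 2}))    # any keys()/__getitem__ object works
    print(d)                       # {'a': 1, 'b': 2}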
|
wrt surrogates. (This extends the valid range from 65535 to 1114111.)
|
HAVE_USABLE_WCHAR_T
|
unicodeobject.h, which forces sizeof(Py_UNICODE) == sizeof(Py_UCS4).
(This may be good enough for platforms that don't have a 16-bit
type; the UTF-16 codecs don't work, though.)
|
sizeof(Py_UNICODE) >= sizeof(long). Also changed surrogate expansion
to work if sizeof(Py_UNICODE) > 2.
|
HAVE_USABLE_WCHAR_T
|
package to be loaded from a PYD resource.
|
Not anymore <wink>. Pure hack. Doesn't fix any other "if 0:" glitches.
|
Iterators list for bringing it up!
|
generators. (An alternative would be to create a new "yield" debugger event, but that involves many more changes, and might break Bdb subclasses.)
|
Add a temporary driver to help track down remaining leak(s).
|
clearing a shallow copy that _run_examples() itself makes can't hurt anything.
|