Commit messages
|
A new API (only accessible from C) to interrupt a thread by sending it
an exception. This is not always effective, but might help some people.
Requested by Just van Rossum and Alex Martelli. It is intentional
that you have to write your own C extension to call it from Python.
Docs will have to wait.
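Since the entry leaves the details to the reader, here is a minimal sketch of the kind of extension it implies, assuming the API in question is PyThreadState_SetAsyncExc() (a thread id plus an exception class); the module and function names below are invented:

    #include "Python.h"

    /* Hypothetical wrapper exposing the C-only thread-interrupt API.
       Assumes the API referred to is PyThreadState_SetAsyncExc(). */
    static PyObject *
    interrupt_thread(PyObject *self, PyObject *args)
    {
        long thread_id;
        PyObject *exc;

        if (!PyArg_ParseTuple(args, "lO:interrupt_thread", &thread_id, &exc))
            return NULL;
        /* Returns the number of thread states modified (0 or 1). */
        return PyInt_FromLong(PyThreadState_SetAsyncExc(thread_id, exc));
    }

    static PyMethodDef interrupt_methods[] = {
        {"interrupt_thread", interrupt_thread, METH_VARARGS,
         "Asynchronously raise an exception in the thread with the given id."},
        {NULL, NULL, 0, NULL}
    };

    PyMODINIT_FUNC
    init_interrupt(void)
    {
        Py_InitModule("_interrupt", interrupt_methods);
    }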
|
between CF objects and their Python representation. Fixes 734695.
|
two fixed bits, positions 15 and 16. That is fine; why should these
be elsewhere?
|
the purpose. Increased my claim to two bits, hoping that nobody
will complain about it. I'm taking the highest two bits, whatever
the integer word size may be.
|
The compiler was resetting the list comprehension tmpname counter for
each function, but the symtable was using the same counter for the
entire module. Repaired by moving tmpname into the symtable entry.
Bugfix candidate.
|
The presence of this bit controls whether there are special fields
for non-recursive calls.
|
riscospath.extsep, and use os.extsep throughout.
|
the terminal encoding on Windows and Unix.
|
there or where to find it.
|
The additional code complexity and new NOP opcode were not worth it.
|
* Can now test for basic blocks.
* Optimize inverted comparisons.
* Optimize unary_not followed by a conditional jump.
* Added a new opcode, NOP, to keep code size constant.
* Applied NOP to previous transformations where appropriate.
Note, the NOP would not be necessary if other functions were
added to re-target jump addresses and update the co_lnotab mapping.
That would yield slightly faster and cleaner bytecode at the
expense of optimizer simplicity and of keeping it decoupled
from the line-numbering structure.
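As an illustration only (this is not the actual compile.c code), the unary_not transformation with a size-preserving NOP could look roughly like this:

    #include "Python.h"
    #include "opcode.h"   /* UNARY_NOT, JUMP_IF_FALSE, JUMP_IF_TRUE, NOP */

    /* Simplified sketch: rewrite "UNARY_NOT; JUMP_IF_FALSE x" as
       "NOP; JUMP_IF_TRUE x" in a bytecode buffer.  The real optimizer must
       also stay inside a basic block and verify that both the jump target
       and the fall-through merely POP_TOP the tested value. */
    static void
    fold_unary_not(unsigned char *codestr, int codelen)
    {
        int i;
        for (i = 0; i < codelen - 3; i++) {
            if (codestr[i] == UNARY_NOT && codestr[i + 1] == JUMP_IF_FALSE) {
                codestr[i] = NOP;              /* keeps code size constant */
                codestr[i + 1] = JUMP_IF_TRUE; /* invert the jump sense */
            }
        }
    }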
|
new line.
New private API function _Py_PrintReferenceAddresses(): Prints only the
addresses and refcnts of the live objects. This is always safe to call,
because it has no dependence on Python's C API.
Py_Finalize(): If envar PYTHONDUMPREFS is set, call (the new)
_Py_PrintReferenceAddresses() right before dumping final pymalloc stats.
We can't print the reprs of the objects here because too much of the
interpreter has been shut down. You need to correlate the addresses
displayed here with the object reprs printed by the earlier
PYTHONDUMPREFS call to _Py_PrintReferences().
|
New functions:
unsigned long PyInt_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyInt_AsUnsignedLongLongMask(PyObject *);
unsigned long PyLong_AsUnsignedLongMask(PyObject *);
unsigned PY_LONG_LONG PyLong_AsUnsignedLongLongMask(PyObject *);
New and changed format codes:
b    unsigned char       0..UCHAR_MAX
B    unsigned char       none **
h    unsigned short      0..USHRT_MAX
H    unsigned short      none **
i    int                 INT_MIN..INT_MAX
I *  unsigned int        0..UINT_MAX
l    long                LONG_MIN..LONG_MAX
k *  unsigned long       none
L    long long           LLONG_MIN..LLONG_MAX
K *  unsigned long long  none
Notes:
* New format codes.
** Changed from previous "range-and-a-half" to "none"; the
range-and-a-half checking wasn't particularly useful.
New test test_getargs2.py, to verify all this.
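A short sketch of how an extension might use the new codes together with one of the Mask functions (the function name here is invented):

    #include "Python.h"

    /* Illustrative only: parse an unsigned long with the new 'k' code and
       reduce a second, arbitrary integer with PyInt_AsUnsignedLongMask().
       Neither path range-checks; out-of-range values are reduced modulo
       ULONG_MAX+1, which is the point of the Mask variants. */
    static PyObject *
    masked_and(PyObject *self, PyObject *args)
    {
        unsigned long flags, other;
        PyObject *obj;

        if (!PyArg_ParseTuple(args, "kO:masked_and", &flags, &obj))
            return NULL;
        other = PyInt_AsUnsignedLongMask(obj);
        if (other == (unsigned long)-1 && PyErr_Occurred())
            return NULL;
        return PyLong_FromUnsignedLong(flags & other);
    }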
|
- Call this in Py_Finalize().
- Expand the Misc/NEWS text on PY_LONG_LONG.
|
work. This includes some more code that used to be part of pgen in
the main parser; I'm okay with that. I'll see if the Windows build
needs work next.
|
recursively.
- pdb has a new command, "debug", which lets you step through
arbitrary code from the debugger's (pdb) prompt.
|
out whether __del__ exists, without executing any Python-level code.
|
Arranged that all the objects exposed by __builtin__ appear in the list
of all objects. I basically peed away two days tracking down a mystery
leak in sys.gettotalrefcount() in a ZODB app (== tons of code), because
the object leaking the references didn't appear in the sys.getobjects(0)
list. The object happened to be False. Now False is in the list, along
with other popular & previously missing leak candidates (like None).
Alas, we still don't have a choke point covering *all* Python objects,
so the list of all objects may still be incomplete.
|
_Py_AddToAllObjects() that simply inserts an object at the front of
the doubly-linked list of all objects. Changed PyType_Ready() (the
closest thing we've got to a choke point for type objects) to call
that.
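A hedged sketch of what a statically allocated object now has to do to show up in that list (the singleton and function below are invented, and _Py_AddToAllObjects() only exists in Py_TRACE_REFS builds):

    #include "Python.h"

    /* Illustrative only: statically allocated objects never pass through
       _Py_NewReference(), so in a Py_TRACE_REFS build they must be linked
       into the all-objects list explicitly, as PyType_Ready() now does for
       type objects. */
    static PyObject my_singleton = {
        PyObject_HEAD_INIT(&PyBaseObject_Type)
    };

    void
    register_my_singleton(void)
    {
    #ifdef Py_TRACE_REFS
        /* Second argument 0: don't re-link an object that already appears
           to be in the list (assumed meaning of the "force" flag). */
        _Py_AddToAllObjects(&my_singleton, 0);
    #endif
    }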
|
refactoring to get all the duplicates of this delicate code out of the
cPickle and struct modules.
|
variables to store internal data. As a result, any attempts to use the
unicode system with multiple active interpreters, or successive
interpreter executions, would fail.
Now that information is stored into members of the PyInterpreterState
structure.
|
to more accurately describe what the function does.
Suggested by Thomas Wouters.
|
Factors out the common case of returning self.
|
classes defined by Python code using a class statement) is now
exported from object.h as PyHeapTypeObject. (SF patch #696193.)
|
Remove prototype and doc. Backport candidate.
|
the #ifdef HAVE_NCURSES_H: the same problem exists on OSX 10.1 with
a fink-installed curses (which uses curses.h as the include file name).
|
called something else!). I can't imagine removing the prototype is
going to hurt, but put it back if *you* can.
|
instead of a plain PyObject *. (SF patch #686601 by Ben Laurie.)
|
with an indented code block but no newline would raise SyntaxError.
This would have been a four-line change in parsetok.c... Except
codeop.py depends on this behavior, so a compilation flag had to be
invented that causes the tokenizer to revert to the old behavior;
this required extra changes to 2 .h files, 2 .c files, and 2 .py
files. (Fixes SF bug #501622.)
|
Incorporated nnorwitz's comment re. Py_USING_UNICODE.
|
-DCALL_PROFILE: Count the number of function calls executed.
When this symbol is defined, the ceval mainloop and helper functions
count the number of function calls made. It keeps detailed statistics
about what kind of object was called and whether the call hit any of
the special fast paths in the code.
Optimization:
When we take the fast_function() path, which seems to be taken for
most function calls, and there is minimal frame setup to do, avoid
calling PyEval_EvalCodeEx(). That function does a lot of work to
handle keyword args and star args, free variables, generators, etc.
The inlined version simply allocates the frame and copies the
argument values into the frame.
The optimization gets a little help from compile.c which adds a
CO_NOFREE flag to code objects that don't have free variables or cell
variables. This change allows fast_function() to get into the fast
path with fewer tests.
I measure a couple of percent speedup in pystone with this change, but
there's surely more that can be done.
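A rough sketch (not the actual ceval.c code) of the kind of eligibility test that lets a call take the inlined path:

    #include "Python.h"
    #include "compile.h"   /* PyCodeObject, CO_* flags */

    /* Illustrative only: a call can skip PyEval_EvalCodeEx() roughly when
       the code object wants a plain new-locals frame, has no free or cell
       variables (CO_NOFREE), is not a generator, and the caller passes
       exactly co_argcount positional arguments with no keyword arguments
       and no defaults to fill in. */
    static int
    call_is_simple(PyCodeObject *co, int argcount, int nkwargs,
                   PyObject *defaults)
    {
        return (co->co_flags & (CO_NEWLOCALS | CO_NOFREE)) ==
                   (CO_NEWLOCALS | CO_NOFREE)
            && !(co->co_flags & CO_GENERATOR)
            && argcount == co->co_argcount
            && nkwargs == 0
            && defaults == NULL;
    }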
|
__module__ is the string name of the module the function was defined
in, just like __module__ of classes. In some cases, particularly for
C functions, the __module__ may be None.
Change PyCFunction_New() from a function to a macro, but keep an
unused copy of the function around so that we don't change the binary
API.
Change pickle's save_global() to use whichmodule() if __module__ is
None, but add the __module__ logic to whichmodule() since it might be
used outside of pickle.
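For C functions, the practical consequence can be sketched like this (names invented):

    #include "Python.h"

    /* Illustrative only: a PyCFunction created without an owning module
       gets __module__ == None, which is why pickle's save_global() now
       falls back to whichmodule() in that case. */
    static PyObject *
    noop(PyObject *self, PyObject *args)
    {
        Py_INCREF(Py_None);
        return Py_None;
    }

    static PyMethodDef noop_def = {"noop", noop, METH_VARARGS, NULL};

    PyObject *
    make_bare_cfunction(void)
    {
        /* PyCFunction_New is now a macro, but call sites are unchanged. */
        PyObject *func = PyCFunction_New(&noop_def, NULL);
        if (func != NULL) {
            PyObject *mod = PyObject_GetAttrString(func, "__module__");
            /* mod is expected to be Py_None here. */
            Py_XDECREF(mod);
        }
        return func;
    }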
|
needs of pickling longs. Backed off to a definition that's much easier
to understand. The pickler will have to work a little harder, but other
uses are more likely to be correct <0.5 wink>.
_PyLong_Sign(): New teensy function to characterize a long, as to <0, ==0,
or >0.
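For example (sketch only), a pickler can branch on the sign without converting the long first:

    #include "Python.h"

    /* Illustrative only: _PyLong_Sign() takes a long object and cannot
       fail, returning -1, 0 or +1. */
    static const char *
    classify_long(PyObject *v)
    {
        switch (_PyLong_Sign(v)) {
        case -1: return "negative";
        case 0:  return "zero";
        default: return "positive";
        }
    }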
|
start for the C implementation of new pickle LONG1 and LONG4 opcodes (the
linear-time way to pickle a long is to call _PyLong_AsByteArray, but
the caller has no idea how big an array to allocate, and correct
calculation is a bit subtle).
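A hedged sketch of the intended sizing logic, assuming the new helper being introduced is _PyLong_NumBits() (the truncated text above does not name it):

    #include "Python.h"

    /* Hedged sketch: size the buffer for _PyLong_AsByteArray(), assuming
       the new helper is _PyLong_NumBits(); one extra bit is reserved for
       the sign. */
    static unsigned char *
    long_to_bytes(PyObject *v, size_t *len)
    {
        size_t nbits, nbytes;
        unsigned char *buf;

        nbits = _PyLong_NumBits(v);
        if (nbits == (size_t)-1 && PyErr_Occurred())
            return NULL;
        nbytes = nbits / 8 + 1;        /* room for the sign bit */
        buf = (unsigned char *)PyMem_Malloc(nbytes);
        if (buf == NULL) {
            PyErr_NoMemory();
            return NULL;
        }
        if (_PyLong_AsByteArray((PyLongObject *)v, buf, nbytes,
                                1 /* little-endian */, 1 /* signed */) < 0) {
            PyMem_Free(buf);
            return NULL;
        }
        *len = nbytes;
        return buf;
    }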
|
Gernot Hillier added more detail to the internal API documentation.