| Commit message | Author | Age | Files | Lines |
|
iterable object. I'm not sure how that got overlooked before!
Got rid of the internal _PySequence_IterContains, introduced a new
internal _PySequence_IterSearch, and rewrote all the iteration-based
"count of", "index of", and "is the object in it or not?" routines to
just call the new function. I suppose it's slower this way, but the
code duplication was getting depressing.
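
A rough sketch of the consolidation, in Python terms (the names and
the operation flag are illustrative, not the actual C code):

    CONTAINS, INDEX, COUNT = range(3)

    def iter_search(seq, obj, operation):
        # One iteration loop serves all three queries, which is the
        # point of the single internal helper described above.
        count = 0
        i = 0
        for item in iter(seq):
            if item == obj:
                if operation == CONTAINS:
                    return 1
                if operation == INDEX:
                    return i
                count = count + 1
            i = i + 1
        if operation == CONTAINS:
            return 0
        if operation == INDEX:
            raise ValueError("sequence.index(x): x not in sequence")
        return count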
|
the base classes is not a classic class, and its class (the metaclass)
is callable, call the metaclass to do the deed.
One effect of this is that, when mixing classic and new-style classes
amongst the bases of a class, it doesn't matter whether the first base
class is a classic class or not: you will always get the error
"TypeError: metatype conflict among bases". (Formerly, with a classic
class first, you'd get "TypeError: PyClass_New: base must be a class".)
Another effect is that multiple inheritance from ExtensionClass.Base,
with a classic class as the first class, transfers control to the
ExtensionClass.Base class. This is what we need for SF #443239 (and
also for running Zope under 2.2a4, before ExtensionClass is replaced).
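
The base scan amounts to "the most derived metatype wins"; a rough
Python rendering of that rule (hypothetical helper name, the real
logic lives in C):

    def winning_metatype(metatype, bases):
        winner = metatype
        for base in bases:
            candidate = type(base)
            if issubclass(winner, candidate):
                continue          # winner is already at least as derived
            if issubclass(candidate, winner):
                winner = candidate
                continue
            raise TypeError("metatype conflict among bases")
        return winner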
|
corresponding "getitem" operation (sq_item or mp_subscript) is
implemented. I realize that "sequence-ness" and "mapping-ness" are
poorly defined (and the tests may still be wrong for user-defined
instances, which always have both slots filled), but I believe that a
sequence that doesn't support its getitem operation should not be
considered a sequence. All other operations are optional though.
For example, the ZODB BTree tests crashed because PySequence_Check()
returned true for a dictionary! (In 2.2, the dictionary type has a
tp_as_sequence pointer, but the only field filled is sq_contains, so
you can write "if key in dict".) With this fix, all standalone ZODB
tests succeed.
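
The Python-visible consequence is easy to check, via
operator.isSequenceType() (which wraps PySequence_Check()):

    import operator

    d = {"key": "value"}
    assert "key" in d                      # sq_contains still works
    assert not operator.isSequenceType(d)  # but a dict is not a sequence
    assert operator.isSequenceType([])     # a real sequence still is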
|
a->tp_mro. If a doesn't have a usable tp_mro, it's considered a
subclass only of itself or of 'object'.
This one fix is enough to prevent the ExtensionClass test suite from
dumping core, but that doesn't say much (it's a rather small test
suite). Also note that for ExtensionClass-defined types, a different
subclass test may be needed. But I haven't checked whether
PyType_IsSubtype() is actually used in situations where this matters
-- probably it doesn't, since we also don't check for classic classes.
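
In Python terms, the new test is roughly an MRO scan (a sketch, not
the C code):

    def is_subtype(a, b):
        mro = getattr(a, "__mro__", None)
        if mro is not None:
            return b in mro
        # no mro: a is a subtype only of itself or of object
        return b is a or b is object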
|
has more bits than the numerator by more than can be counted in a
C int (yes, that's unlikely, and no, I'm not adding a test case with
a 2-gigabit long).
|
instead.
|
Curious: the MS docs say stati64 etc. are supported even on Win95, but
Win95 doesn't support a filesystem that allows partitions > 2 Gb.
test_largefile: This was opening its test file in text mode. I have no
idea how that worked under Win64, but it sure needs binary mode on Win98.
BTW, on Win98 test_largefile runs quickly (under a second).
|
While not even documented, they were clearly part of the C API;
there's no great difficulty in supporting them, and it has the cool
effect of not requiring any changes to ExtensionClass.c.
|
requires that errno ever get set, and it looks like glibc is already
playing that game. New rules:
+ Never use HUGE_VAL. Use the new Py_HUGE_VAL instead.
+ Never believe errno. If overflow is the only thing you're interested in,
use the new Py_OVERFLOWED(x) macro. If you're interested in any libm
errors, use the new Py_SET_ERANGE_IF_OVERFLOW(x) macro, which attempts
to set errno the way C89 said it worked.
Unfortunately, none of these are reliable, but they work on Windows and I
*expect* under glibc too.
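
The overflow test itself is tiny; a Python sketch of the idea (the
C macro also has to cope with errno, so this is only the core check):

    HUGE = 1e300 * 1e300        # overflows to an IEEE infinity

    def overflowed(x):
        # a libm result that overflowed comes back as +/- infinity
        return x == HUGE or x == -HUGE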
|
I believe this works on Linux (tested both on a system with large file
support and one without it), and it may work on Solaris 2.7.
The changes are twofold:
(1) The configure script now boldly tries to set the two symbols that
are recommended (for Solaris and Linux), and then tries a test
script that does some simple seeking without writing.
(2) The _portable_{fseek,ftell} functions are a little more systematic
in how they try the different large file support options: first
try fseeko/ftello, but only if off_t is large; then try
fseek64/ftell64; then try hacking with fgetpos/fsetpos.
I'm keeping my fingers crossed. The meaning of the
HAVE_LARGEFILE_SUPPORT macro is not at all clear.
I'll see if I can get it to work on Windows as well.
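
The configure-time probe is essentially "seek far, don't write"; in
Python the same experiment looks something like this (the scratch
filename is arbitrary):

    import os

    name = "tmp-largefile-probe"
    f = open(name, "wb")
    f.write("z")
    f.close()
    f = open(name, "rb")
    f.seek(2147483649L)              # past the 2 GB mark, no writing
    assert f.tell() == 2147483649L
    f.close()
    os.unlink(name)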
|
system:
SCO_ATAN2_BUG, SCO_ACCEPT_BUG, and STRICT_SYSV_CURSES.
Work around a bug in the SCO UnixWare atan2() implementation.
|
overflow. Needs testing on Linux (test_long.py and test_long_future.py
especially).
|
__builtin__.dir(). Moved the guts from bltinmodule.c to object.c.
|
e.g., (1L << 40000)/(1L << 40001) returns 0.5, not Inf or NaN or whatever.
|
the fiddling is simply due to the fact that no caller of PyLong_AsDouble
ever checked for failure (so that's fixing old bugs). PyLong_AsDouble is much
faster for big inputs now too, but that's more of a happy consequence
than a design goal.
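
The user-visible payoff shows up through float(), which calls
PyLong_AsDouble (assuming the overflow detection mentioned in the
older commit below is in place):

    huge = 1L << 10000          # far beyond any IEEE double
    try:
        float(huge)
    except OverflowError:
        pass                    # failure is now detected and reported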
|
division, and this makes sense. Add -Qwarnall to warn for all
classic divisions, as required by the fixdiv.py tool.
|
but will be the foundation for Good Things:
+ Speed PyLong_AsDouble.
+ Give PyLong_AsDouble the ability to detect overflow.
+ Make true division of long/long nearly as accurate as possible (no
spurious infinities or NaNs).
+ Return non-insane results from math.log and math.log10 when passing a
long that can't be approximated by a double better than HUGE_VAL.
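
For instance, math.log on a long too big for a double can work from
the scaled representation instead of overflowing (illustrative
numbers; float(x) itself would fail here):

    import math

    x = 1L << 40000
    print math.log(x)       # roughly 40000 * log(2), about 27725.887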
|
integer types, and y must be >= 0. See discussion at
http://sf.net/tracker/index.php?func=detail&aid=457066&group_id=5470&atid=105470
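
A quick illustration of the constraint on three-argument pow():

    assert pow(3, 4, 5) == 1        # 81 % 5
    try:
        pow(3, -1, 5)               # negative exponent: rejected
    except TypeError:
        pass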
|
mapping object", in the same sense dict.update(x) requires of x (that x
has a keys() method and a getitem).
Questionable: The other type constructors accept a keyword argument, so I
did that here too (e.g., dictionary(mapping={1:2}) works). But type_call
doesn't pass the keyword args to the tp_new slot (it passes NULL), it only
passes them to the tp_init slot, so getting at them required adding a
tp_init slot to dicts. Looks like that makes the normal case (i.e., no
args at all) a little slower (the time it takes to call dict.tp_init and
have it figure out there's nothing to do).
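
For instance (using the type name as these alphas spell it; any
object with keys() and a getitem qualifies):

    class Pairs:
        # a minimal "mapping object" in the dict.update() sense
        def keys(self):
            return [1, 3]
        def __getitem__(self, key):
            return key * 10

    d = dictionary(Pairs())
    assert d[1] == 10 and d[3] == 30
    assert dictionary(mapping={1: 2}) == {1: 2}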
|
PEP 238. Changes:
- add a new flag variable Py_DivisionWarningFlag, declared in
pydebug.h, defined in object.c, set in main.c, and used in
{int,long,float,complex}object.c. When this flag is set, the
classic division operator issues a DeprecationWarning message.
- add a new API PyRun_SimpleStringFlags() to match
PyRun_SimpleString(). The main() function calls this so that
commands run with -c can also benefit from -Dnew.
- While I was at it, I changed the usage message in main() somewhat:
alphabetized the options, split it into *four* parts to fit in under
512 bytes (not that I still believe this is necessary -- doc strings
elsewhere are much longer), and perhaps most visibly, don't display
the full list of options on each command line error. Instead, the
full list is only displayed when -h is used, and otherwise a brief
reminder of -h is displayed. When -h is used, write to stdout so
that you can do `python -h | more'.
Notes:
- I don't want to use the -W option to control whether the classic
division warning is issued or not, because the machinery to decide
whether to display the warning or not is very expensive (it involves
calling into the warnings.py module). You can use -Werror to turn
the warnings into exceptions though.
- The -Dnew option doesn't select future division for all of the
program -- only for the __main__ module. I don't know if I'll ever
change this -- it would require changes to the .pyc file magic
number to do it right, and a more global notion of compiler flags.
- You can usefully combine -Dwarn and -Dnew: this gives the __main__
module new division, and warns about classic division everywhere
else.
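
For reference, the behavior being selected (the future statement is
the per-module equivalent of what -Dnew does for __main__):

    from __future__ import division

    print 1 / 2         # 0.5   (true division)
    print 1 // 2        # 0     (explicit floor division)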
|
xxx_subtype_new() functions are OK, but I goofed up in this one. :-( )
|
type and obj properties. The "bogus super object" message is gone --
this will now just raise an AttributeError.
|
object at %p>".
|
__dict__ slot for string subtypes.
subtype_dealloc(): properly use _PyObject_GetDictPtr() to get the
(potentially negative) dict offset. Don't copy things into local
variables that are used only once.
type_new(): properly calculate a negative dict offset when tp_itemsize
is nonzero. The __dict__ attribute, if present, is now a calculated
attribute rather than a structure member.
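
The observable effect: instances of str subtypes can now carry
instance attributes.

    class TaggedString(str):
        pass

    s = TaggedString("abc")
    s.tag = 42                  # stored via the calculated __dict__
    assert s == "abc" and s.tag == 42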
|
tupledealloc(): only feed the free list when the type is really a
tuple, not a subtype. Otherwise, use PyObject_GC_Del().
_PyTuple_Resize(): disallow using this for tuple subtypes.
|
long_subtype_new(): fix a typo (type->ob_size instead of
tmp->ob_size).
|
"should have" been added here when they were added to lists).
|
If tp_itemsize of the basetype is nonzero, only allow empty __slots__
(declaring that no __dict__ should be added), and don't add a weakref
offset.
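
Concretely, for a variable-size base such as tuple:

    class Point(tuple):
        __slots__ = []          # empty: allowed, suppresses __dict__

    try:
        class Bad(tuple):
            __slots__ = ['x']   # nonzero tp_itemsize: rejected
    except TypeError:
        pass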
|
This uses a slightly wimpy and wasteful approach, but it works. :-)
|
In particular, the second argument can now be a subclass of the first
as well (normally it must be an instance though).
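
Assuming this refers to super()'s second argument, the subclass form
is what class-side calls need:

    class A(object):
        def who(cls):
            return "A"
        who = classmethod(who)

    class B(A):
        def who(cls):
            return "B+" + super(B, cls).who()
        who = classmethod(who)

    assert B.who() == "B+A"
    assert super(B, B).who() == "A"   # second argument is a class here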
|