path: root/Objects
Commit message (Author, Date; files changed, lines -removed/+added)
* PyObject_Dump(): Use %p format to print the address of the pointer. (Barry Warsaw, 2001-01-23; 1 file, -2/+4)
  PyGC_Dump(): Wrap this in a #ifdef WITH_CYCLE_GC.
* A few miscellaneous helpers. (Barry Warsaw, 2001-01-23; 1 file, -2/+26)
  PyObject_Dump(): New function that is useful when debugging Python's C runtime. In something like gdb it can be a pain to get some useful information out of PyObject*'s. This function prints the str() of the object to stderr, along with the object's refcount and hex address.
  PyGC_Dump(): Similar to PyObject_Dump() but knows how to cast from the garbage collector prefix back to the PyObject* structure.
  [See Misc/gdbinit for some useful gdb hooks]
  none_dealloc(): Rather than SEGV if we accidentally decref None out of existence, we assign None's and NotImplemented's destructor slot to this function, which just calls abort().
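  As a rough illustration of what such a dump helper does (a standalone sketch written against the present-day C API for convenience, not the code from this checkin; dump_object() is a hypothetical name):

      #include <Python.h>
      #include <stdio.h>

      /* Print str(op), the refcount and the address to stderr, in the
       * spirit of the PyObject_Dump() helper described above. */
      static void
      dump_object(PyObject *op)
      {
          if (op == NULL) {
              fprintf(stderr, "NULL\n");
              return;
          }
          PyObject *s = PyObject_Str(op);
          if (s == NULL) {
              PyErr_Clear();
              fprintf(stderr, "object  : <str() failed>\n");
          }
          else {
              const char *t = PyUnicode_AsUTF8(s);
              fprintf(stderr, "object  : %s\n", t ? t : "<unprintable>");
              Py_DECREF(s);
          }
          fprintf(stderr, "refcount: %ld\n", (long)Py_REFCNT(op));
          fprintf(stderr, "address : %p\n", (void *)op);
      }

      int
      main(void)
      {
          Py_Initialize();
          PyObject *x = PyLong_FromLong(42);
          dump_object(x);
          Py_DECREF(x);
          Py_Finalize();
          return 0;
      }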
* New special case in comparisons: None is smaller than any other object (unless the object's type overrides this comparison). (Guido van Rossum, 2001-01-22; 1 file, -0/+6)
* Once again, numeric-smelling objects compare smaller than non-numeric ones. (Guido van Rossum, 2001-01-22; 1 file, -2/+17)
* forgot to check in the new makeunicodedata.py script (Fredrik Lundh, 2001-01-21; 1 file, -1/+1)
* Remove a smelly export. (Neil Schemenauer, 2001-01-21; 1 file, -1/+1)
* Better error message if ucnhash cannot be found (obscure attribute errors aren't that helpful), or doesn't contain what's expected from it. Also tweaked the test script so it compiles even if ucnhash is missing. (Fredrik Lundh, 2001-01-20; 1 file, -3/+8)
* Tim chastens: (Barry Warsaw, 2001-01-20; 1 file, -1/+5)
  Barry, that comment belongs in the code, not in the checkin msg. The code *used* to do this correctly (as you well know, since you & I went thru considerable pain to fix this the first time). However, because the *reason* for the convolution wasn't recorded in the code as a comment, somebody threw it all away the first time it got reworked.
  c-code-isn't-often-self-explanatory-ly y'rs - tim
  default_3way_compare(): Stick the checkin message from 2.110 in a comment.
* default_3way_compare(): When comparing the pointers, they must be cast to integer types (i.e. Py_uintptr_t, our spelling of C9X's uintptr_t). ANSI specifies that pointer compares other than == and != to non-related structures are undefined. This quiets an Insure portability warning. (Barry Warsaw, 2001-01-20; 1 file, -2/+2)
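  A small free-standing illustration of the portability point (not the actual default_3way_compare() code; C99's uintptr_t stands in for Py_uintptr_t here):

      #include <stdint.h>

      /* Ordering two unrelated pointers directly is undefined behaviour in
       * ANSI C, so compare their values as unsigned integers instead. */
      static int
      compare_addresses(const void *v, const void *w)
      {
          uintptr_t vv = (uintptr_t)v;
          uintptr_t ww = (uintptr_t)w;
          return (vv < ww) ? -1 : (vv > ww) ? 1 : 0;
      }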
* Application and elaboration of patch #103305 to fix core dumps when del'ing func.func_dict. (Barry Warsaw, 2001-01-19; 1 file, -30/+40)
  I took the opportunity to also clean up some other nits with the code, namely core dumps when del'ing func_defaults and KeyError instead of AttributeError when del'ing a non-existent function attribute. Specifically:
  func_memberlist: Move func_dict and __dict__ into here instead of special casing them in the setattro and getattro methods. I don't remember why I took them out of here before I first uploaded the PEP 232 patch. :/
  func_getattro(): No need to special case __dict__/func_dict since they're now in the func_memberlist and PyMember_Get() should Do The Right Thing (i.e. transforms NULL values into Py_None).
  func_setattro(): Document the intended behavior of del'ing or setting to None one of the special func_* attributes. I.e.:
    func_code - can only be set to a code object. It can't be del'd or set to None.
    func_defaults - can be del'd. Can only be set to None or a tuple.
    func_dict - can be del'd. Can only be set to None or a dictionary.
  Fix core dumps and incorrect exceptions as described above. Also, if we're del'ing an arbitrary function attribute but func_dict is NULL, don't create func_dict before discovering that we'll get an AttributeError anyway.
* refactored the unicodeobject/ucnhash interface, to hide the implementation details inside the ucnhash module. (Fredrik Lundh, 2001-01-19; 1 file, -103/+39)
  also cleaned up the unicode copyright blurb a little; Secret Labs' internal revision history isn't that interesting...
* Derivative of patch #102549, "simpler, faster(!) implementation of string.join". (Tim Peters, 2001-01-19; 1 file, -38/+52)
  Also fixes two long-standing bugs (present in 2.0):
    1. .join() didn't check that the result size fit in an int.
    2. string.join(s) when len(s)==1 returned s[0] regardless of s[0]'s type; e.g., "".join([3]) returned 3 (overly optimistic optimization).
  I resisted a keen temptation to make .join() apply str() automagically.
* Rich comparisons fallout: instance_hash() should check for both __cmp__ and __eq__ absent before deciding to do a quickie based on the object address. (Tim Peters discovered this.) (Guido van Rossum, 2001-01-18; 1 file, -7/+14)
* Rich comparisons fallout: PyObject_Hash() should check for both tp_compare and tp_richcompare NULL before deciding to do a quickie based on the object address. (Tim Peters discovered this.) (Guido van Rossum, 2001-01-18; 1 file, -1/+1)
* Changes to recursive-object comparisons, having to do with a test case I found where rich comparison of unequal recursive objects gave unintuitive results. (Guido van Rossum, 2001-01-18; 1 file, -107/+137)
  In a discussion with Tim, where we discovered that our intuition on when a<=b should be true was failing, we decided to outlaw ordering comparisons on recursive objects. (Once we have fixed our intuition and designed a matching algorithm that's practical and reasonable to implement, we can allow such orderings again.)
  - Refactored the recursive-object comparison framework; more is now done in the support routines so less needs to be done in the calling routines (even at the expense of slowing it down a bit -- this should normally never be invoked, it's mostly just there to avoid blowing up the interpreter).
  - Changed the framework so that the comparison operator used is also stored. (The dictionary now stores triples (v, w, op) instead of pairs (v, w).)
  - Changed the nesting limit to a more reasonable small 20; this only slows down comparisons of very deeply nested objects (unlikely to occur in practice), while speeding up comparisons of recursive objects (previously, this would first waste time and space on 500 nested comparisons before it would start detecting recursion).
  - Changed rich comparisons for recursive objects to raise a ValueError exception when recursion is detected for ordering operators (<, <=, >, >=).
  Unrelated change:
  - Moved PyObject_Unicode() to just under PyObject_Str(), where it belongs. MAL's patch must've inserted it at a random spot between two functions in the file -- between two helpers for rich comparison...
* Move distributed and duplicated config for stat() and fstat() into pyport.h. (Tim Peters, 2001-01-18; 1 file, -20/+0)
* Use rich comparisons to fulfill an old wish: complex numbers now raise exceptions when compared using <, <=, > or >=. (Guido van Rossum, 2001-01-18; 1 file, -49/+84)
  NOTE: This is a tentative change: this means that cmp() involving complex numbers will raise an exception when the numbers differ, and that in turn means that e.g. dictionaries and certain other compounds (e.g. UserLists) containing complex numbers can't be compared either. So we'll have to decide whether this is acceptable. The alpha test cycle is a good time to keep an eye on this!
* Rich comparisons: (Guido van Rossum, 2001-01-18; 1 file, -118/+45)
  - Use PyObject_RichCompareBool() when comparing keys; this makes the error handling cleaner.
  - There were two implementations for dictionary comparison, an old one (#ifdef'ed out) and a new one. Got rid of the old one, which was abandoned years ago.
  - In the characterize() function, part of dictionary comparison, use PyObject_RichCompareBool() to compare keys and values instead. But continue to use PyObject_Compare() for comparing the final (deciding) elements.
  - Align the comments in the type struct initializer.
  Note: I don't implement rich comparison for dictionaries -- there doesn't seem to be much to be gained. (The existing comparison already decides that shorter dicts are always smaller than longer dicts.)
* Same treatment as listobject.c: (Guido van Rossum, 2001-01-18; 1 file, -43/+104)
  - tuplecontains(): call RichCompare(Py_EQ).
  - Get rid of tuplecompare(), in favor of new tuplerichcompare() (a clone of list_compare()).
  - Aligned the comments for large struct initializers.
* Fix a leak in instance_coerce(). This was introduced by Neil's earlier coercion changes, not by rich comparisons. (Guido van Rossum, 2001-01-17; 1 file, -2/+0)
  When a coercion function returns 1 (meaning it cannot do it), it should not INCREF the arguments. When no __coerce__() method was found, instance_coerce() originally returned 0, pretending it did it. Neil changed the return value to 1, more accurately reflecting that it didn't do anything, but forgot to take out the two INCREF calls.
* Convert to rich comparisons: (Guido van Rossum, 2001-01-17; 1 file, -90/+163)
  - sort's docompare() calls RichCompare(Py_LT).
  - list_contains(), list_index(), listcount(), listremove() call RichCompare(Py_EQ).
  - Get rid of list_compare(), in favor of new list_richcompare(). The latter does some nice shortcuts, like when == or != is requested, it first compares the lengths for trivial accept/reject. Then it goes over the items until it finds an index where the items differ; then it does more shortcut magic to minimize the number of additional comparisons.
  - Aligned the comments for large struct initializers.
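  A minimal sketch of the RichCompare(Py_EQ) membership pattern these routines now use, written as a generic helper over the abstract sequence API rather than the actual listobject.c code:

      #include <Python.h>

      /* Walk a sequence and ask PyObject_RichCompareBool(..., Py_EQ) for
       * each item.  Returns 1 if found, 0 if not, -1 on error. */
      static int
      sequence_contains_eq(PyObject *seq, PyObject *value)
      {
          Py_ssize_t i, n = PySequence_Size(seq);
          if (n < 0)
              return -1;
          for (i = 0; i < n; i++) {
              PyObject *item = PySequence_GetItem(seq, i);  /* new reference */
              if (item == NULL)
                  return -1;
              int cmp = PyObject_RichCompareBool(item, value, Py_EQ);
              Py_DECREF(item);
              if (cmp != 0)              /* found (1) or error (-1) */
                  return cmp;
          }
          return 0;
      }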
* Deal properly (?) with comparing recursive datastructures. (Guido van Rossum, 2001-01-17; 1 file, -62/+69)
  - Use the compare nesting level and in-progress dictionary properly in PyObject_RichCompare().
  - Change the in-progress code to use static variables instead of globals (both the nesting level and the key for the thread dict were globals but have no reason to be globals; the key can even be a function-static variable in get_inprogress_dict()).
  - Rewrote try_rich_to_3way_compare() to benefit from the similarity of the three cases, making it table-driven.
  - In try_rich_to_3way_compare(), test for EQ before LT and GT. This turns out essential when comparing recursive UserList instances; with the old code, these would recurse into rich comparison three times for each nesting level up to NESTING_LIMIT/2, making the total number of calls in the order of 3**(NESTING_LIMIT/2)!
  NOTE: I'm not 100% comfortable with this. It works for the standard test suite (which compares a few trivial recursive data structures only), but I'm not sure that the in-progress dictionary is used properly by the rich comparison code. Jeremy suggested that maybe the operation should be included in the dict. Currently I presume that objects in the dict are equal unless proven otherwise, and I set the outcome for the rich comparison accordingly: true for operators EQ, LE, GE, and false for the other three. But Jeremy seems to think that there may be counter-examples where this doesn't do the right thing.
* This patch adds a new builtin unistr() which behaves like str() except that it always returns Unicode objects. (Marc-André Lemburg, 2001-01-17; 2 files, -0/+48)
  A new C API PyObject_Unicode() is also provided.
  This closes patch #101664.
  Written by Marc-Andre Lemburg. Copyright assigned to Guido van Rossum.
* Rich comparisons fall-out: (Guido van Rossum, 2001-01-17; 1 file, -14/+1)
  - Get rid of float_cmp().
  - Renamed Py_TPFLAGS_NEWSTYLENUMBER to Py_TPFLAGS_CHECKTYPES.
* Rich comparisons fall-out: (Guido van Rossum, 2001-01-17; 1 file, -17/+1)
  - Get rid of long_cmp().
  - Renamed Py_TPFLAGS_NEWSTYLENUMBER to Py_TPFLAGS_CHECKTYPES.
* Rich comparisons fall-out: (Guido van Rossum, 2001-01-17; 1 file, -14/+1)
  - Get rid of int_cmp().
  - Renamed Py_TPFLAGS_NEWSTYLENUMBER to Py_TPFLAGS_CHECKTYPES.
* Rich comparisons fall-out: (Guido van Rossum, 2001-01-17; 1 file, -4/+4)
  - Renamed Py_TPFLAGS_NEWSTYLENUMBER to Py_TPFLAGS_CHECKTYPES.
  - Use PyObject_RichCompareBool() in PySequence_Contains().
* Rich comparisons. (Guido van Rossum, 2001-01-17; 1 file, -146/+278)
  - Got rid of instance_cmp(); refactored instance_compare().
  - Added instance_richcompare() which calls __lt__() etc.
  Some unrelated stuff mixed in:
  - Aligned comments in various large struct initializers.
  - Better test to avoid recursion if __coerce__ returns self as the first argument (this is an unrelated fix by Neil Schemenauer!).
  - Style nit: don't use Py_DECREF(Py_NotImplemented); use Py_DECREF(result) -- it just looks better. :-)
* Rich comparisons. Refactored internal routine do_cmp() and added APIs PyObject_RichCompare() and PyObject_RichCompareBool(). (Guido van Rossum, 2001-01-17; 1 file, -74/+293)
  XXX Note: the code that checks for deeply nested rich comparisons is bogus -- it assumes the two objects are always identical, rather than using the same logic as PyObject_Compare(). I'll fix that later.
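  For reference, a minimal usage sketch of the two APIs from C: PyObject_RichCompare() returns the comparison result as a new object reference, while PyObject_RichCompareBool() folds the truth test into one call and returns an int.

      #include <Python.h>

      /* Returns 1 if v <= w (computed as v < w or v == w),
       * 0 otherwise, -1 on error. */
      static int
      demo_rich_compare(PyObject *v, PyObject *w)
      {
          PyObject *res = PyObject_RichCompare(v, w, Py_LT);   /* v < w */
          if (res == NULL)
              return -1;                /* exception set, e.g. unorderable */
          int lt = PyObject_IsTrue(res);
          Py_DECREF(res);
          if (lt < 0)
              return -1;

          /* The Bool variant returns 1 (true), 0 (false) or -1 (error). */
          int eq = PyObject_RichCompareBool(v, w, Py_EQ);      /* v == w */
          if (eq < 0)
              return -1;
          return lt || eq;
      }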
* Rationalizing the fallback code for portable fseek -- this is all much simpler if we use fgetpos and fsetpos, rather than trying to mess with platform-specific TELL64 alternatives. (Guido van Rossum, 2001-01-16; 1 file, -26/+12)
  Of course, this hasn't been tested on a 64-bit platform, so I may have to withdraw this -- but I'm hopeful, and Trent Mick supports this patch!
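  The portable save/restore-position pattern the message refers to, as a generic ANSI C sketch (not the fileobject.c code):

      #include <stdio.h>

      /* Save and restore a file position with fgetpos()/fsetpos() instead
       * of platform-specific 64-bit tell/seek variants.  fpos_t is opaque,
       * so this works even where long is too narrow for the offset. */
      static int
      peek_first_char(FILE *fp, int *ch)
      {
          fpos_t pos;
          if (fgetpos(fp, &pos) != 0)   /* remember where we are */
              return -1;
          *ch = fgetc(fp);
          if (fsetpos(fp, &pos) != 0)   /* go back there afterwards */
              return -1;
          return 0;
      }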
* Added checks to prevent PyUnicode_Count() from dumping core in case the parameters are out of bounds, and fixed error handling for .count(), .startswith() and .endswith() for the case of mixed string/Unicode objects. (Marc-André Lemburg, 2001-01-16; 2 files, -19/+45)
  This patch adds Python style index semantics to PyUnicode_Count() indices (including the special handling of negative indices).
  The patch is an extended version of patch #103249 submitted by Michael Hudson (mwh) on SF. It also includes new test cases.
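  A usage sketch of the Python-style index semantics described above (negative indices count from the end, and out-of-range values are clipped); the signature is shown with Py_ssize_t as in present-day headers:

      #include <Python.h>

      /* Count occurrences of substr in roughly the last 10 characters of
       * str; -10 is clipped for shorter strings, like s[-10:] in Python. */
      static Py_ssize_t
      count_in_tail(PyObject *str, PyObject *substr)
      {
          return PyUnicode_Count(str, substr, -10, PY_SSIZE_T_MAX);
      }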
* Committing PEP 232, function attribute feature, approved by Guido. Closes SF patch #103123. (Barry Warsaw, 2001-01-15; 2 files, -15/+113)
  funcobject.h:
    PyFunctionObject: add the func_dict slot.
  funcobject.c:
    PyFunction_New(): Initialize the func_dict slot to NULL.
    func_getattr(): Rename to func_getattro() and change the signature. It's more efficient to use attro methods and dig the C string out than it is to re-convert a C string to a PyString. Also, add support for getting the __dict__ (a.k.a. func_dict) attribute, and for getting an arbitrary function attribute.
    func_setattr(): Rename to func_setattro() and change the signature for the same reason. Also add support for setting __dict__ (a.k.a. func_dict) and any arbitrary function attribute.
    func_dealloc(): Be sure to DECREF the func_dict slot.
    func_traverse(): Be sure to traverse func_dict too.
    PyFunction_Type: make the necessary func_?etattro() changes.
  classobject.c:
    instancemethod_memberlist: Add __dict__.
    instancemethod_setattro(): New method to set arbitrary attributes on methods (really the underlying im_func). Raise TypeError when the instance is bound or when you're trying to set one of the reserved im_* attributes.
    instancemethod_getattr(): Renamed to instancemethod_getattro() since that's what it really is. Also, added support for getting arbitrary attributes through the im_func.
    PyMethod_Type: Do the ?etattr{,o} dance.
* SF patch #103158 by Greg Ball: Don't do unsafe arithmetic in xrange object. (Guido van Rossum, 2001-01-15; 1 file, -10/+80)
  This fixes potential overflows in xrange()'s internal calculations on 64-bit platforms. The fix is complicated because the sq_length slot function can only return an int; we want to support xrange(sys.maxint), which is a 64-bit quantity on most 64-bit platforms (except Win64). The solution is hacky but the best possible: when the range is that long, we can use it in a for loop but we can't ask for its length (nor can we actually iterate beyond 2**31-1, because the sq_item slot function has the same restrictions on its arguments). Fixing those restrictions is a project for another day...
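  A generic sketch of the kind of overflow-safe arithmetic this is about (not the rangeobject.c code): compute the length of a forward range without overflowing, and report whether it still fits in an int, the sq_length limit mentioned above.

      #include <limits.h>

      /* Number of items in lo, lo+step, ... < hi (step > 0 assumed).
       * Returns 1 if the count fits in an int, 0 if it does not. */
      static int
      range_length(long lo, long hi, long step, unsigned long *len)
      {
          if (step <= 0 || hi <= lo) {
              *len = 0;                  /* empty or unsupported range */
              return 1;
          }
          /* Do the subtraction in unsigned arithmetic: done signed,
           * hi - lo can overflow on a 64-bit box. */
          unsigned long diff = (unsigned long)hi - (unsigned long)lo;
          *len = diff / (unsigned long)step
                 + (diff % (unsigned long)step != 0);
          return *len <= (unsigned long)INT_MAX;
      }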
* Speed getline_via_fgets(), by supplying two "fast paths", although one is faster than the other. (Tim Peters, 2001-01-15; 1 file, -54/+81)
  Should be faster for Mark Favas's 254-character mail log lines, and *is* 3-4% quicker for my test case with much shorter lines (but they're typical of *my* text files, and I'm tired of optimizing for everyone else at my expense <wink> -- in fact, the only one who loses here is Guido ...).
* Use the "MS" getline hack (fgets()) by default on non-get_unlocked platforms. See NEWS for details. (Tim Peters, 2001-01-15; 1 file, -30/+47)
* Jeff Epler's patch adding an xreadlines() method. (It just imports the xreadlines module and lets it do its thing.) (Guido van Rossum, 2001-01-09; 1 file, -1/+25)
* Tsk, tsk, tsk. Treat FreeBSD the same as the other BSDs when defining a fallback for TELL64. Fixes SF Bug #128119. (Guido van Rossum, 2001-01-09; 1 file, -1/+1)
* Fix a silly bug in float_pow. Sorry Tim. (Neil Schemenauer, 2001-01-08; 1 file, -1/+1)
* A few reformats; no logic changes. (Tim Peters, 2001-01-08; 1 file, -9/+8)
* Let's hope that three time's a charm... (Guido van Rossum, 2001-01-08; 1 file, -3/+3)
  Tim discovered another "bug" in my get_line() code: while the comments said that n<0 was invalid, it was in fact still called with n<0 (when PyFile_GetLine() was called with n<0). In that case it fortunately executed the same code as for n==0. Changed the comment to admit this fact, and changed Tim's MS speed hack code to use 'n <= 0' as the criteria for the speed hack.
* Fiddled ms_getline_hack after talking w/ Guido: made clearer that the code duplication is to let us get away without a realloc whenever possible; boosted the init buf size (the cutoff at which we *can* get away without a realloc) from 100 to 200 so that more files can enjoy this boost; and allowed other threads to run in all cases. (Tim Peters, 2001-01-08; 1 file, -65/+67)
  The last two cost something, but not significantly: in my fat test case, less than a 1% slowdown total. Since my test case has a great many short lines, that's probably the worst slowdown, too. While the logic barely changed, there were lots of edits. This also gets rid of the reference to fp->_cnt, so the last platform assumption being made here is that fgets doesn't overwrite bytes capriciously (== beyond the terminating null byte it must write).
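  The general shape of the fgets() approach discussed in the last few entries -- the common case (a line that fits in the initial buffer) needs no realloc, only longer lines grow the buffer -- as a standalone sketch, not the fileobject.c code:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>

      /* Read one line from fp into a malloc'ed buffer (caller frees).
       * Returns NULL on EOF with no data, or on memory failure. */
      static char *
      read_line(FILE *fp)
      {
          size_t cap = 200, used = 0;   /* initial size, cf. the 100->200 boost */
          char *buf = malloc(cap);
          if (buf == NULL)
              return NULL;
          while (fgets(buf + used, (int)(cap - used), fp) != NULL) {
              used += strlen(buf + used);
              if (used > 0 && buf[used - 1] == '\n')
                  return buf;           /* complete line: the fast path */
              if (feof(fp))
                  return buf;           /* last line without a newline */
              char *bigger = realloc(buf, cap *= 2);   /* slow path: grow */
              if (bigger == NULL) {
                  free(buf);
                  return NULL;
              }
              buf = bigger;
          }
          if (used > 0)
              return buf;               /* EOF hit after partial data */
          free(buf);
          return NULL;
      }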
* MS Win32 .readline() speedup, as discussed on Python-Dev. This is a tricky variant that never needs to "search from the right". (Tim Peters, 2001-01-07; 1 file, -15/+184)
  Also fixed unlikely memory leak in get_line, if string size overflows INTMAX. Also new std test test_bufio to make sure .readline() works.
* Tim noticed that I had botched get_line_raw(). Looking again, I realized that this behavior is already present in PyFile_GetLine(), which is the only place that needs it. A little refactoring of that function made get_line_raw() redundant. (Guido van Rossum, 2001-01-07; 1 file, -47/+30)
* This patch adds a new feature to the builtin charmap codec: (Marc-André Lemburg, 2001-01-06; 1 file, -8/+48)
  The mapping dictionaries can now contain 1-n mappings, meaning that character ordinals may be mapped to strings or Unicode objects, e.g. 0x0078 ('x') -> u"abc", causing the ordinal to be replaced by the complete string or Unicode object instead of just one character.
  Another feature introduced by the patch is that of mapping ordinals to the empty string. This allows removing characters.
  The patch is different from patch #103100 in that it does not cause a performance hit for the normal use case of 1-1 mappings.
  Written by Marc-Andre Lemburg, copyright assigned to Guido van Rossum.
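  A usage sketch of the 1-n mapping feature from C, written against the present-day C API (PyUnicode_DecodeCharmap and friends); error handling is trimmed for brevity and this is not the unicodeobject.c code itself:

      #include <Python.h>

      /* Byte 'x' expands to "abc", byte 'y' maps to the empty string and
       * is thereby removed, so the input "xy" decodes to "abc". */
      static PyObject *
      decode_with_charmap(void)
      {
          PyObject *mapping = PyDict_New();
          if (mapping == NULL)
              return NULL;

          PyObject *key = PyLong_FromLong('x');
          PyObject *val = PyUnicode_FromString("abc");
          PyDict_SetItem(mapping, key, val);
          Py_XDECREF(key);
          Py_XDECREF(val);

          key = PyLong_FromLong('y');
          val = PyUnicode_FromString("");
          PyDict_SetItem(mapping, key, val);
          Py_XDECREF(key);
          Py_XDECREF(val);

          PyObject *result = PyUnicode_DecodeCharmap("xy", 2, mapping, "strict");
          Py_DECREF(mapping);
          return result;
      }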
* Restructured get_line() for clarity and speed. (Guido van Rossum, 2001-01-05; 1 file, -66/+59)
  - The raw_input() functionality is moved to a separate function.
  - Drop GNU getline() in favor of getc_unlocked(), which exists on more platforms (and is even a tad faster on my system).
* Changes for PEP 208. PyObject_Compare has been rewritten. Instances no longer get special treatment. The Py_NotImplemented type is here as well. (Neil Schemenauer, 2001-01-04; 1 file, -118/+139)
* Make long a new style number type. Sequence repeat is now done here as well. (Neil Schemenauer, 2001-01-04; 1 file, -76/+262)
* Make int a new style number type. Sequence repeat is now done here as well. (Neil Schemenauer, 2001-01-04; 1 file, -64/+116)
* Make float a new style number type. (Neil Schemenauer, 2001-01-04; 1 file, -42/+108)
* Make instances a new style number type. See PEP 208 for details. (Neil Schemenauer, 2001-01-04; 1 file, -184/+268)
  Instance types no longer get special treatment from abstract.c, so more number methods have to be implemented.