path: root/Objects
* Use Py_CLEAR(). Add unrelated test. (Raymond Hettinger, 2004-09-28; 1 file, -1/+1)
* Checkin Tim's fix to an error discussed on python-dev. (Raymond Hettinger, 2004-09-26; 1 file, -10/+20)
  Also, add a testcase. Formerly, the list_extend() code used several local variables
  to remember its state across iterations. Since an iteration could call arbitrary
  Python code, it was possible for the list state to be changed. The new code uses
  dynamic structure references instead of C locals, so they are always up to date.
  After list_resize() is called, the size has been updated but the new cells are
  filled with NULLs. These needed to be filled before arbitrary iteration code was
  called; otherwise, that code could attempt to modify a list that was in a
  semi-invalid state. The solution was to change the ob_size field back to a value
  reflecting the actual number of valid cells.
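  A minimal Python-level sketch of the hazard described above (not part of the patch;
  the Nasty class and values are made up for illustration):

      # An iterator whose next() runs arbitrary Python code that mutates the very
      # list being extended; list.extend() must stay correct anyway.
      class Nasty:
          def __init__(self, target):
              self.target = target
              self.count = 0
          def __iter__(self):
              return self
          def __next__(self):
              if self.count >= 3:
                  raise StopIteration
              self.count += 1
              del self.target[:]        # arbitrary code runs mid-extend
              return self.count

      data = [1, 2, 3]
      data.extend(Nasty(data))          # must not crash or expose NULL cells
      print(data)                       # [3]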
* Remove 'extern' declaration for _Py_SwappedOp. (Brett Cannon, 2004-09-25; 1 file, -1/+1)
* Ensure negative offsets cannot be passed to buffer(). (Neil Schemenauer, 2004-09-24; 1 file, -2/+15)
  When composing buffers, compute the new buffer size based on the old buffer size.
  Fixes SF bug #1034242.
* Fix buffer offset calculation (need to compute it before changing 'base').
  (Neil Schemenauer, 2004-09-24; 1 file, -11/+7)
  Fixes SF bug #1033720. Move offset sanity checking to buffer_from_memory().
* float_richcompare(): Use the new Py_IS_NAN macro to ensure that, on platforms where
  that macro works, NaN compared to an int or long works the same as NaN compared to
  a finite float. (Tim Peters, 2004-09-23; 1 file, -11/+9)
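  A quick sketch of the behavior this aims at (the values are arbitrary examples):

      # NaN compares unequal, and unordered, against ints just as it does against
      # finite floats.
      nan = float("nan")
      print(nan == 10, nan < 10, nan > 10)        # False False False
      print(nan == 10.0, nan < 10.0, nan > 10.0)  # False False False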
* SF bug #513866: Float/long comparison anomaly. (Tim Peters, 2004-09-23; 1 file, -8/+206)
  When an integer is compared to a float now, the int isn't coerced to float. This
  avoids spurious overflow exceptions and insane results. This should compute correct
  results, without raising spurious exceptions, in all cases now -- although I expect
  that what happens when an int/long is compared to a NaN is still a platform accident.
  Note that we had potential problems here even with "short" ints, on boxes where
  sizeof(long)==8. There's #ifdef'ed code here to handle that, but I can't test it as
  intended. I tested it by changing the #ifdef to trigger on my 32-bit box instead.
  I suppose this is a bugfix candidate, but I won't backport it. It's long-winded (for
  speed) and messy (because the problem is messy). Note that this also depends on a
  previous 2.4 patch that introduced _Py_SwappedOp[] as an extern.
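  A small illustration of the intended semantics (the particular numbers are arbitrary,
  not from the patch):

      # A huge int compared to a float is no longer coerced to float first, so there
      # is no spurious OverflowError and the ordering is exact.
      print(10 ** 400 > 1e308)             # True, no OverflowError
      print(2 ** 53 + 1 > float(2 ** 53))  # True: exact comparison, not float rounding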
* A static swapped_op[] array was defined in 3 different C files, & I think I need to
  define it again. Bite the bullet and define it once as an extern, _Py_SwappedOp[].
  (Tim Peters, 2004-09-23; 3 files, -12/+6)
* Patch #1024670: Support int objects in PyLong_AsUnsignedLong[Mask].
  (Martin v. Löwis, 2004-09-20; 1 file, -0/+11)
* SF bug #1030557: PyMapping_Check crashes when argument is NULL.
  (Raymond Hettinger, 2004-09-19; 1 file, -2/+2)
  Make PySequence_Check() and PyMapping_Check() handle NULL inputs. This goes beyond
  what most of the other checks do, but it is nice defensive programming and solves
  the OP's problem.
* Initialize sep and seplen to suppress warning from gcc. (Skip Montanaro, 2004-09-16; 1 file, -3/+3)
* Add a missing line continuation character. (Thomas Heller, 2004-09-15; 1 file, -1/+1)
* Make the word "module" appear in the error string for calling the module type with
  silly arguments. (Michael W. Hudson, 2004-09-14; 1 file, -1/+1)
  (The exact name can be quibbled over, if you care.) This was partially inspired by
  bug #1014215 and so on, but is also just a good idea.
* Move a comment back to its rightful location. (Michael W. Hudson, 2004-09-14; 1 file, -2/+2)
* Make the hint about the None default less ambiguous. (Walter Dörwald, 2004-09-14; 1 file, -1/+1)
* Enhance the docstrings for unicode.split() and string.split() to make it clear that
  it is possible to pass None as the separator argument to get the default "any
  whitespace" separator. (Walter Dörwald, 2004-09-14; 1 file, -2/+2)
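  For example (an arbitrary sample string, not from the patch):

      s = "  one\t two   three  "
      print(s.split(None))   # ['one', 'two', 'three'] -- any run of whitespace
      print(s.split())       # same as above
      print(s.split(" "))    # ['', '', 'one\t', 'two', '', '', 'three', '', '']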
* SF #1022910: Conserve memory with list.pop(). (Raymond Hettinger, 2004-09-12; 1 file, -8/+11)
  The list resizing scheme only downsized when more than 16 elements were removed in a
  single step: del a[100:120]. As a result, the list would never shrink when popping
  elements off one at a time. This patch makes it shrink whenever more than half of
  the space is unused. Also, at Tim's suggestion, renamed _new_size to new_allocated.
  This makes the code easier to understand.
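  A rough Python model of the new policy (a sketch of the idea only, not the actual C
  code; the should_shrink name is made up):

      def should_shrink(used, allocated):
          # Give memory back once fewer than half of the allocated slots are in use.
          return allocated > 0 and used < allocated // 2

      print(should_shrink(100, 128))  # False: more than half still in use
      print(should_shrink(40, 128))   # True: downsize on the next resize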
* Typo fix: 'comparisions' is not a word. (Andrew M. Kuchling, 2004-09-10; 1 file, -1/+1)
* SF patch #998993: The UTF-8 and the UTF-16 stateful decoders now support decoding
  incomplete input (when the input stream is temporarily exhausted).
  (Walter Dörwald, 2004-09-07; 1 file, -23/+57)
  codecs.StreamReader now implements buffering, which enables proper readline support
  for the UTF-16 decoders. codecs.StreamReader.read() has a new argument chars which
  specifies the number of characters to return. codecs.StreamReader.readline() and
  codecs.StreamReader.readlines() have a new argument keepends. Trailing "\n"s will be
  stripped from the lines if keepends is false. Added C APIs
  PyUnicode_DecodeUTF8Stateful and PyUnicode_DecodeUTF16Stateful.
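  A short usage sketch of the new arguments (io.BytesIO and the UTF-8 reader are used
  purely as an example):

      import codecs, io

      reader = codecs.getreader("utf-8")(io.BytesIO("abc\ndef\n".encode("utf-8")))
      print(reader.read(chars=2))             # 'ab' -- exactly two characters
      print(reader.readline(keepends=False))  # 'c'  -- trailing "\n" stripped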
* SF patch #1020188: Use Py_CLEAR where necessary to avoid crashes.
  (Raymond Hettinger, 2004-09-01; 3 files, -14/+6)
  (Contributed by Dima Dorfman.)
* long_pow(): Fix more instances of leaks in error cases. (Tim Peters, 2004-08-30; 1 file, -3/+3)
  Bugfix candidate -- although long_pow() is so different now I doubt a patch would
  apply to 2.3.
* SF patch 936813: fast modular exponentiation. (Tim Peters, 2004-08-30; 1 file, -107/+185)
  This checkin is adapted from part 2 (of 3) of Trevor Perrin's patch set.

  BACKWARD INCOMPATIBILITY: SHIFT must now be divisible by 5. AFAIK, nobody will care.
  long_pow() could be complicated to worm around that, if necessary.

  long_pow():
  - BUGFIX: This leaked the base and power when the power was negative (and so the
    computation delegated to float pow).
  - Instead of doing right-to-left exponentiation, do left-to-right. This is more
    efficient for small bases, which is the common case.
  - In addition, if the exponent is large (more than FIVEARY_CUTOFF digits),
    precompute [a**i % c for i in range(32)], and go left to right 5 bits at a time.

  l_divmod():
  - The signature changed so that callers who don't want the quotient, or don't want
    the remainder, can pass NULL in the slot they don't want. This saves them from
    having to declare a vrbl for unwanted stuff, and remembering to decref it.

  long_mod(), long_div(), long_classic_div():
  - Adjusted to the new l_divmod() signature, and simplified as a result.
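  A Python sketch of the left-to-right, 5-bits-at-a-time idea (an illustration of the
  algorithm, not the C implementation; pow_5ary is a made-up name and b >= 0, c > 1
  are assumed):

      def pow_5ary(a, b, c):
          # Precompute a**i % c for i in range(32), i.e. every possible 5-bit window.
          table = [1] * 32
          for i in range(1, 32):
              table[i] = table[i - 1] * a % c
          result = 1
          top = (b.bit_length() + 4) // 5 * 5   # round bit count up to a multiple of 5
          for shift in range(top - 5, -5, -5):  # scan the exponent left to right
              for _ in range(5):
                  result = result * result % c  # square 5 times per window
              result = result * table[(b >> shift) & 0x1F] % c
          return result

      print(pow_5ary(3, 123456789, 1000000007) == pow(3, 123456789, 1000000007))  # True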
* SF patch 936813: fast modular exponentiation. (Tim Peters, 2004-08-29; 1 file, -21/+79)
  This checkin is adapted from part 1 (of 3) of Trevor Perrin's patch set.

  x_mul():
  - sped a little by optimizing the C
  - sped a lot (~2X) if it's doing a square; note that long_pow() squares often
  k_mul():
  - more cache-friendly now if it's doing a square
  KARATSUBA_CUTOFF:
  - boosted; gradeschool mult is quicker now, and it may have been too low for many
    platforms anyway
  KARATSUBA_SQUARE_CUTOFF:
  - new; since x_mul is a lot faster at squaring now, the point at which Karatsuba
    pays for squaring is much higher than for general mult
* PyUnicode_Join(): Bozo Alert. While this is chugging along, it may need to convert
  str objects from the iterable to unicode. (Tim Peters, 2004-08-27; 1 file, -0/+12)
  So, if someone set the system default encoding to something nasty enough, the
  conversion process could mutate the input iterable as a side effect, and
  PySequence_Fast doesn't hide that from us if the input was a list. IOW, can't assume
  the size of PySequence_Fast's result is invariant across PyUnicode_FromObject() calls.
* PyUnicode_Join(): Rewrote to use PySequence_Fast(). (Tim Peters, 2004-08-27; 1 file, -126/+96)
  This doesn't do much to reduce the size of the code, but greatly improves its
  clarity. It's also quicker in what's probably the most common case (the argument
  iterable is a list). Against it, if the iterable isn't a list or a tuple, a temp
  tuple is materialized containing the entire input sequence, and that's a bigger temp
  memory burden. Yawn.
* PyUnicode_Join(): Missed a spot where I intended a cast from size_t to int.
  (Tim Peters, 2004-08-27; 1 file, -1/+1)
  I sure wish MS would gripe about that! Whatever, note that the statement above it
  guarantees that the cast loses no info.
* PyUnicode_Join(): Two primary aims. (Tim Peters, 2004-08-27; 1 file, -40/+120)
  1. u1.join([u2]) is u2
  2. Be more careful about C-level int overflow.
  Since PySequence_Fast() isn't needed to achieve #1, it's not used -- but the code
  could sure be simpler if it were.
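  A tiny demonstration of aim #1 (the string value is arbitrary; the identity holds
  for exact str objects):

      s = "hello"
      print(", ".join([s]) is s)   # True: joining a single exact string returns it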
* Fix docstring typo. (Raymond Hettinger, 2004-08-25; 1 file, -1/+1)
* Stop producing or using OverflowWarning. (Tim Peters, 2004-08-25; 1 file, -36/+4)
  PEP 237 thought this would happen in 2.3, but nobody noticed it still was getting
  generated (the warning was disabled by default). OverflowWarning and
  PyExc_OverflowWarning should be removed for 2.5, and I left notes all over saying so.
* SF Patch #1007087: Return new string for single subclass joins (Bug #1001011).
  (Raymond Hettinger, 2004-08-23; 1 file, -12/+8)
  (Patch contributed by Nick Coghlan.) Now joining string subtypes will always return
  a string. Formerly, if there was only one item, it was returned unchanged.
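  For instance (MyStr is a hypothetical subclass used only for illustration):

      class MyStr(str):
          pass

      result = "".join([MyStr("abc")])
      print(type(result) is str)   # True: a plain string, not MyStr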
* Fix repr for negative imaginary part. Fixes #1013908. (Martin v. Löwis, 2004-08-22; 1 file, -2/+6)
* Patch #980082: Missing INCREF in PyType_Ready. (Martin v. Löwis, 2004-08-18; 1 file, -1/+3)
* SF patch #1005778: Fix seg fault if list object is modified during list.index().
  (Neal Norwitz, 2004-08-13; 1 file, -3/+1)
  Backport candidate.
* This is my patch [ 1004703 ] Make func_name writable. (Michael W. Hudson, 2004-08-12; 1 file, -4/+32)
  Plus fixing a couple of nits in the documentation changes spotted by MvL, and a
  Misc/NEWS entry.
* Previous commit was viewed as "perverse". Changed to just cast the unused variable
  to void. (Brett Cannon, 2004-08-08; 1 file, -1/+3)
  Thanks to Sjoerd Mullender for the suggested change.
* Bug 1003935: xrange overflows. (Tim Peters, 2004-08-08; 1 file, -1/+16)
  Added XXX comment about why the undocumented PyRange_New() API function is too
  broken to be worth the considerable pain of repairing.
  Changed range_new() to stop using PyRange_New(). This fixes a variety of bogus
  errors. Nothing in the core uses PyRange_New() now.
  Documented that xrange() is intended to be simple and fast, and that CPython
  restricts its arguments, and length of its result sequence, to native C longs.
  Added some tests that failed before the patch, and repaired a test that relied on a
  bogus OverflowError getting raised.
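  A one-line Python 2 illustration of the documented restriction (2 ** 63 is just an
  example of a value too large for a signed C long on common platforms):

      xrange(2 ** 63)   # raises OverflowError instead of silently misbehaving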
* Trimmed trailing whitespace. (Tim Peters, 2004-08-08; 1 file, -5/+5)
* This was quite a dark bug in my recent in-place string concatenation hack: it would
  resize *interned* strings in-place! (Armin Rigo, 2004-08-07; 1 file, -1/+2)
  This occurred because their reference counts do not have their expected value --
  stringobject.c hacks them. Mea culpa.
* Fixed some compiler warnings. (Armin Rigo, 2004-08-07; 1 file, -2/+2)
* Subclasses of string can no longer be interned. (Jeremy Hylton, 2004-08-07; 1 file, -22/+12)
  The semantics of interning were not clear here -- a subclass could be mutable, for
  example -- and had bugs. Explicitly interning a subclass of string via intern() will
  raise a TypeError. Internal operations that attempt to intern a string subclass will
  have no effect.
  Added a few tests to test_builtin that include the old buggy code and verify that
  calls like PyObject_SetAttr() don't fail. Perhaps these tests should have gone in
  test_string.
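  For example (MyStr is a hypothetical subclass; shown with the Python 3 spelling,
  where intern() lives in sys):

      import sys

      class MyStr(str):
          pass

      sys.intern("plain")        # fine: exact strings can be interned
      sys.intern(MyStr("nope"))  # raises TypeError: subclasses cannot be interned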
* SF bug #1004669: Type returned from .keys() is not checked. (Raymond Hettinger, 2004-08-07; 1 file, -0/+5)
* SF #989185: Drop unicode.iswide() and unicode.width() and add
  unicodedata.east_asian_width(). (Hye-Shik Chang, 2004-08-04; 3 files, -437/+299)
  You can still implement your own simple width() function using it like this:

      def width(u):
          w = 0
          for c in unicodedata.normalize('NFC', u):
              cwidth = unicodedata.east_asian_width(c)
              if cwidth in ('W', 'F'):
                  w += 2
              else:
                  w += 1
          return w
* Be more careful about maintaining the invariants. (Fred Drake, 2004-08-03; 1 file, -3/+25)
  It was actually possible that the callback-less flavors of the ref or proxy could
  have been added during GC, so we don't want to replace them.
* Repair the same thinko in two places about handling of _Py_RefTotal in the case of
  __del__ resurrecting an object. (Michael W. Hudson, 2004-08-03; 2 files, -10/+12)
  This makes the apparent reference leaks in test_descr go away (which I expected) and
  also kills off those in test_gc (which is more surprising but less so once you
  actually think about it a bit).
* Tweak previous patch to silence a warning about the unused left value in the comma
  expression in listpop() that was being returned. (Brett Cannon, 2004-08-03; 1 file, -1/+1)
  Still essentially unused (as it is meant to be), but now the compiler thinks it is
  worth *something* by having it incremented.
* Add a missing decref. (Michael W. Hudson, 2004-08-02; 1 file, -0/+1)
* list_ass_slice(): Document the obscure new intent that deleting a slice of no more
  than 8 elements cannot fail. (Tim Peters, 2004-07-31; 1 file, -8/+16)
  listpop(): Take advantage of that its calls to list_resize() and list_ass_slice()
  can't fail. This is assert'ed in a debug build now, but in an icky way. That is, you
  can't say:

      assert(some_call() >= 0);

  because then some_call() won't occur at all in a release build. So it has to be a
  big pile of #ifdefs on Py_DEBUG (yuck), or the pleasant:

      status = some_call();
      assert(status >= 0);

  But in that case, compilers may whine in a release build, because status appears
  unused then. I'm not certain the ugly trick I used here will convince all compilers
  to shut up about status (status is always "used" now, as the first (ignored) clause
  in a comma expression).
* list_ass_slice(): The difference between "recycle" and "recycled" was impossible to
  remember, so renamed one to something obvious. (Tim Peters, 2004-07-31; 1 file, -17/+10)
  Headed off potential signed-vs-unsigned compiler complaints I introduced by changing
  the type of a vrbl to unsigned. Removed the need for the tedious explanation about
  "backward pointer loops" by looping on an int instead.
* Armin asked for a list_ass_slice review in his checkin, so here's the result.
  (Tim Peters, 2004-07-31; 1 file, -26/+44)
  list_resize(): Document the intent. Code is increasingly relying on subtle aspects
  of its behavior, and they deserve to be spelled out.
  list_ass_slice(): A bit more simplification, by giving it a common error exit and
  initializing more values.
  Be clearer in comments about what "size" means (# of elements? # of bytes?). While
  the number of elements in a list slice must fit in an int, there's no guarantee that
  the number of bytes occupied by the slice will. That malloc() and memmove() take
  size_t arguments is a hint about that <wink>. So changed to use size_t where
  appropriate.
  ihigh - ilow should always be >= 0, but we never asserted that. We do now.
  The loop decref'ing the recycled slice had a subtle insecurity: C doesn't guarantee
  that a pointer one slot *before* an array will compare "less than" to a pointer
  within the array (it does guarantee that a pointer one beyond the end of the array
  compares as expected). This was actually an issue in KSR's C implementation, so
  isn't purely theoretical. Python probably has other "go backwards" loops with a
  similar glitch. list_clear() is OK (it marches an integer backwards, not a pointer).
* This is a reorganization of list_ass_slice(). (Armin Rigo, 2004-07-30; 1 file, -22/+20)
  It should probably be reviewed, though I tried to be very careful. This is a slight
  simplification, and it adds a new feature: a small stack-allocated "recycled" array
  for the cases when we don't remove too many items.
  It allows PyList_SetSlice() to never fail if:
    * you are sure that the object is a list; and
    * you either do not remove more than 8 items, or clear the list.
  This makes a number of other places in the source code correct again -- there are
  some places that delete a single item without checking for MemoryErrors raised by
  PyList_SetSlice(), or that clear the whole list, and sometimes the context doesn't
  allow an error to be propagated.