path: root/Objects/longobject.c
Commit message [Author, Date, Files, Lines]
...
* _PyLong_Sign(): remove an assert that needed a variable ndigits that [Guido van Rossum, 2003-02-03, 1 file, -3/+2]
| | | | | | | | | wasn't used outside the assert (and hence caused a compiler warning about an unused variable in NDEBUG mode). The assert wasn't very useful any more. _PyLong_NumBits(): moved the calculation of ndigits after asserting that v != NULL.
* long_from_binary_base(): Sped this a little by computing the # of bits [Tim Peters, 2003-02-02, 1 file, -6/+6]
| | | | needed outside the first loop.
* Tightened a too-generous assert. [Tim Peters, 2003-02-02, 1 file, -1/+1]
|
* long(string, base) now takes time linear in len(string) when base is a [Tim Peters, 2003-02-02, 1 file, -15/+108]
| | | | | power of 2. Enabled the tail end of test_long() in pickletester.py because it no longer takes forever when run from test_pickle.py.
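To illustrate why a power-of-2 base admits linear-time conversion, here is a minimal sketch, not the code from this commit (hex_to_digits is an invented name; the 15-bit SHIFT matches longobject.c's digit size at the time): each character supplies a fixed number of bits, so the whole string can be packed into the internal digits in one pass, with no multi-precision multiplication by the base.

    #include <stdio.h>
    #include <string.h>

    #define SHIFT 15                  /* bits per internal digit */
    #define MASK  ((1 << SHIFT) - 1)

    /* Pack a hex string into SHIFT-bit digits, least significant digit first,
     * scanning from the least significant character. */
    static int hex_to_digits(const char *s, unsigned short *dig, int maxdigits)
    {
        unsigned long accum = 0;      /* bit accumulator */
        int bits = 0;                 /* number of valid bits in accum */
        int n = 0;                    /* digits produced so far */
        int i = (int)strlen(s);

        while (--i >= 0) {
            char c = s[i];
            int v = (c >= '0' && c <= '9') ? c - '0'
                  : (c >= 'a' && c <= 'f') ? c - 'a' + 10
                  : (c >= 'A' && c <= 'F') ? c - 'A' + 10 : -1;
            if (v < 0)
                return -1;            /* not a hex digit */
            accum |= (unsigned long)v << bits;
            bits += 4;                /* each hex character adds exactly 4 bits */
            while (bits >= SHIFT) {
                if (n >= maxdigits)
                    return -1;
                dig[n++] = (unsigned short)(accum & MASK);
                accum >>= SHIFT;
                bits -= SHIFT;
            }
        }
        if (accum && n < maxdigits)   /* leftover bits form the top digit */
            dig[n++] = (unsigned short)accum;
        return n;
    }

    int main(void)
    {
        unsigned short dig[8];
        int n = hex_to_digits("1234abcd", dig, 8);
        for (int i = n - 1; i >= 0; i--)
            printf("%04x ", (unsigned)dig[i]);   /* prints 2469 2bcd */
        printf("\n");
        return 0;
    }

For a base that is not a power of 2, each new character requires multiplying the value accumulated so far by the base, which is where the quadratic behaviour of the general case comes from.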
* cPickle.c: Full support for the new LONG1 and LONG4. Added comments. [Tim Peters, 2003-02-02, 1 file, -2/+2]
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Assorted code cleanups; e.g., sizeof(char) is 1 by definition, so there's no need to do things like multiply by sizeof(char) in hairy malloc arguments. Fixed an undetected-overflow bug in readline_file(). longobject.c: Fixed a really stupid bug in the new _PyLong_NumBits. pickle.py: Fixed stupid bug in save_long(): When proto is 2, it wrote LONG1 or LONG4, but forgot to return then -- it went on to append the proto 1 LONG opcode too. Fixed equally stupid cancelling bugs in load_long1() and load_long4(): they *returned* the unpickled long instead of pushing it on the stack. The return values were ignored. Tests passed before only because save_long() pickled the long twice. Fixed bugs in encode_long(). Noted that decode_long() is quadratic-time despite our hopes, because long(string, 16) is still quadratic-time in len(string). It's hex() that's linear-time. I don't know a way to make decode_long() linear-time in Python, short of maybe transforming the 256's-complement bytes into marshal's funky internal format, and letting marshal decode that. It would be more valuable to make long(string, 16) linear time. pickletester.py: Added a global "protocols" vector so tests can try all the protocols in a sane way. Changed test_ints() and test_unicode() to do so. Added a new test_long(), but the tail end of it is disabled because it "takes forever" under pickle.py (but runs very quickly under cPickle: cPickle proto 2 for longs is linear-time).
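For context, the LONG1/LONG4 payload is the value's 256's-complement (two's-complement, base-256) representation, least significant byte first. A minimal decoding sketch, restricted to values that fit in 64 bits (decode_long_bytes is a made-up name; the real unpickler builds an arbitrary-precision long instead):

    #include <stdio.h>

    /* Decode a little-endian 256's-complement byte string, assuming 0 <= n <= 8
     * so the value fits in 64 bits. */
    static long long decode_long_bytes(const unsigned char *p, int n)
    {
        unsigned long long u = 0;
        if (n == 0)
            return 0;                        /* the empty string encodes 0 */
        for (int i = n - 1; i >= 0; --i)     /* most significant byte first */
            u = (u << 8) | p[i];
        if ((p[n - 1] & 0x80) && n < 8)      /* top bit set: sign-extend */
            u |= ~0ULL << (8 * n);
        return (long long)u;                 /* relies on two's-complement conversion */
    }

    int main(void)
    {
        unsigned char pos[] = { 0xff, 0x00 };    /* 255, padded so it reads as positive */
        unsigned char neg[] = { 0x00, 0xff };    /* -256 */
        printf("%lld %lld\n", decode_long_bytes(pos, 2), decode_long_bytes(neg, 2));
        return 0;
    }

The padding byte is why 255 encodes to two bytes: as a single byte, 0xff would read back as -1.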
* Squash compiler wng about signed/unsigned comparison mismatch. [Tim Peters, 2003-01-31, 1 file, -1/+1]
|
* _PyLong_NumBits(): The definition of this was too specific to the quirky [Tim Peters, 2003-01-31, 1 file, -8/+17]
| | | | | | | | | needs of pickling longs. Backed off to a definition that's much easier to understand. The pickler will have to work a little harder, but other uses are more likely to be correct <0.5 wink>. _PyLong_Sign(): New teensy function to characterize a long, as to <0, ==0, or >0.
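The "teensy" sign query maps directly onto CPython's sign-magnitude layout, in which the sign of a long is carried by its size field. A toy sketch of that idea (toy_long and toy_sign are invented names, not the commit's code):

    #include <stdio.h>

    struct toy_long {
        int size;                 /* < 0, == 0 or > 0, like ob_size */
        unsigned short *digits;   /* magnitude, least significant digit first */
    };

    /* Characterize the value as -1, 0 or +1 by looking only at the size field. */
    static int toy_sign(const struct toy_long *v)
    {
        return v->size == 0 ? 0 : (v->size < 0 ? -1 : 1);
    }

    int main(void)
    {
        struct toy_long neg = { -3, NULL }, zero = { 0, NULL }, pos = { 2, NULL };
        printf("%d %d %d\n", toy_sign(&neg), toy_sign(&zero), toy_sign(&pos));
        return 0;
    }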
* Implement appropriate __getnewargs__ for all immutable subclassable builtin [Guido van Rossum, 2003-01-29, 1 file, -1/+12]
| | | | | | | | types. The special handling for these can now be removed from save_newobj(). Add some testing for this. Also add support for setting the 'fast' flag on the Python Pickler class, which suppresses use of the memo.
* Added new private API function _PyLong_NumBits. This will be used at the [Tim Peters, 2003-01-28, 1 file, -0/+35]
| | | | | | | start for the C implementation of new pickle LONG1 and LONG4 opcodes (the linear-time way to pickle a long is to call _PyLong_AsByteArray, but the caller has no idea how big an array to allocate, and correct calculation is a bit subtle).
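A sketch of the sizing problem this solves, shown for a value that fits in a machine word (numbits here is a stand-in, not _PyLong_NumBits itself): once the bit length is known, the number of bytes _PyLong_AsByteArray needs for a 256's-complement encoding follows directly.

    #include <stdio.h>

    /* Number of bits needed to represent v; 0 for v == 0 by convention. */
    static unsigned int numbits(unsigned long v)
    {
        unsigned int n = 0;
        while (v) {
            ++n;
            v >>= 1;
        }
        return n;
    }

    int main(void)
    {
        unsigned long v = 1000;                 /* 0b1111101000: 10 bits */
        unsigned int nbits = numbits(v);
        /* Two's-complement needs room for a sign bit, hence the extra byte:
         * 10 bits -> 2 bytes. */
        unsigned int nbytes = nbits / 8 + 1;
        printf("bits=%u bytes=%u\n", nbits, nbytes);
        return 0;
    }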
* Consolidate the int and long sequence repeat code. Before the change, [Neil Schemenauer, 2002-12-30, 1 file, -19/+0]
| | | | integers checked for integer overflow but longs did not.
* Change int() so that passing a string, unicode, float or long argument [Walter Dörwald, 2002-11-19, 1 file, -9/+20]
| | | | | | | that is outside the integer range no longer raises OverflowError, but returns a long object instead. This fixes SF bug http://www.python.org/sf/635115
* Make int("...") return a long if an int would overflow. [Walter Dörwald, 2002-11-06, 1 file, -8/+10]
| | | | | | Also remove the 512 character limitation for int(u"...") and long(u"..."). This closes SF bug #629989.
* replace thread state objects' ticker and checkinterval fields with two [Skip Montanaro, 2002-09-03, 1 file, -4/+2]
| | | | | | | | | | globals, _Py_Ticker and _Py_CheckInterval. This also implements Jeremy's shortcut in Py_AddPendingCall that zeroes out _Py_Ticker. This allows the test in the main loop to only test a single value. The gory details are at http://python.org/sf/602191
* long_format(), long_lshift(): Someone on c.l.py is trying to boost [Tim Peters, 2002-08-20, 1 file, -2/+2]
| | | | | | | | | | | | SHIFT and MASK, and widen digit. One problem is that code of the form digit << small_integer implicitly assumes that the result fits in an int or unsigned int (platform-dependent, but "int sized" in any case), since digit is promoted "just" to int or unsigned via the usual integer promotions. But if digit is typedef'ed as unsigned int, this loses information. The cure for this is just to cast digit to twodigits first.
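A compact illustration of the promotion trap and the cure (the typedefs here are stand-ins chosen for the example; in longobject.c the digit and twodigits types are configurable, which is exactly what the commit guards against):

    #include <stdio.h>

    typedef unsigned int digit;            /* suppose digit were widened to int size */
    typedef unsigned long long twodigits;  /* a type at least twice as wide */

    int main(void)
    {
        digit d = 0x80000000u;             /* high bit set */
        int shift = 4;

        /* Wrong: the shift happens in unsigned int, so the high bits are lost
         * before the result is widened (assuming 32-bit unsigned int). */
        twodigits bad = d << shift;

        /* Right: cast to the wide type first, then shift. */
        twodigits good = (twodigits)d << shift;

        printf("bad  = 0x%llx\n", (unsigned long long)bad);   /* 0x0         */
        printf("good = 0x%llx\n", (unsigned long long)good);  /* 0x800000000 */
        return 0;
    }

The shift in the "bad" line is performed entirely in unsigned int, so the high bits are gone before the wide assignment can save them; casting first keeps every bit.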
* Illustrating by example one good reason not to trust a proof <wink>. [Tim Peters, 2002-08-15, 1 file, -3/+3]
|
* k_mul() comments: In honor of Dijkstra, made the proof that "t3 fits" [Tim Peters, 2002-08-15, 1 file, -34/+37]
| | | | | | | rigorous instead of hoping for testing not to turn up counterexamples. Call me heretical, but despite that I'm wholly confident in the proof, and have done it two different ways now, I still put more faith in testing ...
* long_mul(): Simplified exit code. In particular, k_mul() returns a [Tim Peters, 2002-08-15, 1 file, -9/+3]
| | | | | normalized result, so no point to normalizing it again. The number of test+branches was also excessive.
* k_mul() comments: Simplified the simplified explanation of why ah*bh and [Tim Peters, 2002-08-14, 1 file, -6/+3]
| | | | al*bl "always fit": it's actually trivial given what came before.
* k_mul() comments: Explained why there's always enough room to subtract [Tim Peters, 2002-08-14, 1 file, -0/+7]
| | | | | | ah*bh and al*bl. This is much easier than explaining why that's true for (ah+al)*(bh+bl), and follows directly from the simple part of the (ah+al)*(bh+bl) explanation.
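For reference, the identity these comments reason about is the standard Karatsuba split, written here in LaTeX with B the digit base and n the split point used for the ah/al and bh/bl halves:

    \begin{aligned}
    a  &= a_h B^{n} + a_l, \qquad b = b_h B^{n} + b_l,\\
    ab &= a_h b_h\,B^{2n}
          + \bigl((a_h + a_l)(b_h + b_l) - a_h b_h - a_l b_l\bigr)B^{n}
          + a_l b_l .
    \end{aligned}

The three multiplications k_mul actually performs are a_h b_h, a_l b_l and (a_h + a_l)(b_h + b_l); everything else is shifts, additions and the two subtractions these comments justify.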
* Fixed error in new comment. [Tim Peters, 2002-08-13, 1 file, -4/+3]
|
* k_mul(): The fix for (ah+al)*(bh+bl) spilling 1 bit beyond the allocated [Tim Peters, 2002-08-13, 1 file, -8/+44]
| | | | | | | | | | space is no longer needed, so removed the code. It was only possible when a degenerate (ah->ob_size == 0) split happened, but after that fix went in I added k_lopsided_mul(), which saves the body of k_mul() from seeing a degenerate split. So this removes code, and adds a honking long comment block explaining why spilling out of bounds isn't possible anymore. Note: if we end up spilling out of bounds anyway <wink>, an assert in v_iadd() is certain to trigger.
* Fix comment for PyLong_AsUnsignedLong() to say that the return value [Guido van Rossum, 2002-08-13, 1 file, -1/+1]
| | | | is an *unsigned* long.
* k_lopsided_mul(): This allocated more space for bslice than necessary. [Tim Peters, 2002-08-12, 1 file, -1/+1]
|
* Added new function k_lopsided_mul(), which is much more efficient than [Tim Peters, 2002-08-12, 1 file, -10/+72]
| | | | | | | | k_mul() when inputs have vastly different sizes, and a little more efficient when they're close to a factor of 2 out of whack. I consider this done now, although I'll set up some more correctness tests to run overnight.
* k_mul(): Moved an assert down. In a debug build, interrupting a [Tim Peters, 2002-08-12, 1 file, -2/+2]
| | | | | multiply via Ctrl+C could cause a NULL-pointer dereference due to the assert.
* k_mul(): Heh -- I checked in two fixes for the last problem. Only keep [Tim Peters, 2002-08-12, 1 file, -2/+2]
| | | | the good one <wink>. Also checked in a test-aid by mistake.
* k_mul(): White-box testing turned up that (ah+al)*(bh+bl) can, in rare [Tim Peters, 2002-08-12, 1 file, -3/+11]
| | | | | | | | | cases, overflow the allocated result object by 1 bit. In such cases, it would have been brought back into range if we subtracted al*bl and ah*bh from it first, but I don't want to do that because it hurts cache behavior. Instead we just ignore the excess bit when it appears -- in effect, this is forcing unsigned mod BASE**(asize + bsize) arithmetic in a case where that doesn't happen all by itself.
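Restating the commit's reasoning as a formula: because a < BASE^asize and b < BASE^bsize, the true product always fits in the allocated digits,

    0 \le ab < \mathrm{BASE}^{\,asize + bsize}
    \qquad\Longrightarrow\qquad
    ab \bmod \mathrm{BASE}^{\,asize + bsize} = ab ,

so as long as every intermediate step is carried out modulo that many digits (the "unsigned mod" arithmetic mentioned above), a temporarily spilled bit that gets dropped at the top of the buffer cannot change the final answer.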
* x_mul(): Made life easier for C optimizers in the "grade school" [Tim Peters, 2002-08-12, 1 file, -4/+5]
| algorithm. MSVC 6 wasn't impressed <wink>. Something odd: the x_mul algorithm appears to get substantially worse than quadratic time as the inputs grow larger:
|
|   bits in each input    x_mul time    k_mul time
|   ------------------    ----------    ----------
|                15360          0.01          0.00
|                30720          0.04          0.01
|                61440          0.16          0.04
|               122880          0.64          0.14
|               245760          2.56          0.40
|               491520         10.76          1.23
|               983040         71.28          3.69
|              1966080        459.31         11.07
|
| That is, x_mul is perfectly quadratic-time until a little burp at 2.56->10.76, and after that goes to hell in a hurry. Under Karatsuba, doubling the input size "should take" 3 times longer instead of 4, and that remains the case throughout this range. I conclude that my "be nice to the cache" reworkings of k_mul() are paying.
* k_mul() and long_mul(): I'm confident that the Karatsuba algorithm is [Tim Peters, 2002-08-12, 1 file, -9/+30]
| | | | | | | | correct now, so added some final comments, did some cleanup, and enabled it for all long-int multiplies. The KARAT envar no longer matters, although I left some #if 0'ed code in there for my own use (temporary). k_mul() is still much slower than x_mul() if the inputs have very different sizes, and that still needs to be addressed.
* k_mul: Rearranged computation for better cache use. Ignored overflow [Tim Peters, 2002-08-12, 1 file, -60/+50]
| | | | | | | | | (it's possible, but should be harmless -- this requires more thought, and allocating enough space in advance to prevent it requires exactly as much thought, to know exactly how much that is -- the end result certainly fits in the allocated space -- hmm, but that's really all the thought it needs! borrows/carries out of the high digits really are harmless).
* x_mul(): This failed to normalize its result. [Tim Peters, 2002-08-12, 1 file, -6/+18]
| | | | | | | | | | | | | | | | | | | | | | k_mul(): This didn't allocate enough result space when one input had more than twice as many bits as the other. This was partly hidden by the fact that x_mul() didn't normalize its result. The Karatsuba recurrence is pretty much hosed if the inputs aren't roughly the same size. If one has at least twice as many bits as the other, we get a degenerate case where the "high half" of the smaller input is 0. Added a special case for that, for speed, and although it helped, this can still be much slower than the "grade school" method. It seems to take a really wild imbalance to trigger that; e.g., a 2**22-bit input times a 1000-bit input on my box runs about twice as slow under k_mul as under x_mul. This still needs to be addressed. I'm also not sure that allocating a->ob_size + b->ob_size digits is enough, given that this is computing k = (ah+al)*(bh+bl) instead of k = (ah-al)*(bl-bh); i.e., it's certainly enough for the final result, but it's vaguely possible that adding in the "artificially" large k may overflow that temporarily. If so, an assert will trigger in the debug build, but we'll probably compute the right result anyway(!).
* Introduced helper functions v_iadd and v_isub, for in-place digit-vector [Tim Peters, 2002-08-12, 1 file, -29/+75]
| | | | | | | addition and subtraction. Reworked the tail end of k_mul() to use them. This saves oodles of one-shot longobject allocations (this is a triply- recursive routine, so saving one allocation in the body saves 3**n allocations at depth n; we actually save 2 allocations in the body).
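A sketch of in-place digit-vector addition in the spirit of v_iadd (the 15-bit digit size and the vector_iadd name are assumptions for the example, not the commit's code): y is added into x in place and the carry out of the top digit is returned to the caller.

    #include <stdio.h>

    typedef unsigned short digit;
    #define SHIFT 15
    #define MASK  ((1 << SHIFT) - 1)

    /* x[0..m-1] += y[0..n-1], n <= m, least significant digit first;
     * returns the carry out of x[m-1]. */
    static digit vector_iadd(digit *x, int m, const digit *y, int n)
    {
        digit carry = 0;
        int i;

        for (i = 0; i < n; ++i) {
            carry += x[i] + y[i];          /* at most 2*(2^15-1)+1 < 2^16 */
            x[i] = carry & MASK;
            carry >>= SHIFT;               /* 0 or 1 */
        }
        for (; carry && i < m; ++i) {      /* propagate the carry upward */
            carry += x[i];
            x[i] = carry & MASK;
            carry >>= SHIFT;
        }
        return carry;
    }

    int main(void)
    {
        digit x[3] = { MASK, MASK, 1 };    /* little-endian digit vector */
        digit y[1] = { 1 };
        digit carry = vector_iadd(x, 3, y, 1);
        printf("%u %u %u carry=%u\n",
               (unsigned)x[0], (unsigned)x[1], (unsigned)x[2], (unsigned)carry);
        return 0;                          /* prints 0 0 2 carry=0 */
    }

v_isub is the mirror image with a borrow instead of a carry; because both work in place on digits the caller already owns, the tail of k_mul no longer has to allocate a temporary long object for every add and subtract, which is where the saved allocations come from.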
* k_mul(): Repaired another typo in another comment. [Tim Peters, 2002-08-12, 1 file, -1/+1]
|
* k_mul(): Repaired typo in comment. [Tim Peters, 2002-08-12, 1 file, -1/+1]
|
* Cautious introduction of a patch that started from [Tim Peters, 2002-08-12, 1 file, -85/+264]
| | | | | | | | SF 560379: Karatsuba multiplication. Lots of things were changed from that. This needs a lot more testing, for correctness and speed, the latter especially when bit lengths are unbalanced. For now, the Karatsuba code gets invoked if and only if envar KARAT exists.
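The algebraic trick behind the patch, shown at a fixed size where it stays short (a toy sketch using 16-bit halves of 32-bit operands; the patch itself works on vectors of digits and recurses): three multiplications replace the schoolbook method's four.

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* One level of Karatsuba with 16-bit halves, so every intermediate fits
     * comfortably in 64 bits. */
    static uint64_t karatsuba32(uint32_t a, uint32_t b)
    {
        uint64_t ah = a >> 16, al = a & 0xFFFFu;
        uint64_t bh = b >> 16, bl = b & 0xFFFFu;

        uint64_t t1 = ah * bh;                  /* high product     */
        uint64_t t2 = al * bl;                  /* low product      */
        uint64_t t3 = (ah + al) * (bh + bl);    /* combined product */

        /* a*b = t1*2^32 + (t3 - t1 - t2)*2^16 + t2: three multiplies, not four. */
        return (t1 << 32) + ((t3 - t1 - t2) << 16) + t2;
    }

    int main(void)
    {
        uint32_t a = 0xDEADBEEFu, b = 0x0BADF00Du;
        printf("%" PRIu64 " %" PRIu64 "\n",
               karatsuba32(a, b), (uint64_t)a * b);   /* the two must agree */
        return 0;
    }

Applied recursively to n-digit numbers, the three half-size products give the O(n^1.585) running time; the engineering in the commits above this one is largely about where the temporaries live and what happens when the (ah+al)*(bh+bl) term grows one bit longer than its neighbours.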
* staticforward bites the dust. [Jeremy Hylton, 2002-07-17, 1 file, -1/+2]
| | | | | | | | | | | | | | | The staticforward define was needed to support certain broken C compilers (notably SCO ODT 3.0, perhaps early AIX as well) that botched the static keyword when it was used with a forward declaration of a static initialized structure. Standard C allows the forward declaration with static, and we've decided to stop catering to broken C compilers. (In fact, we expect that the compilers are all fixed eight years later.) I'm leaving staticforward and statichere defined in object.h as static. This is only for backwards compatibility with C extensions that might still use it. XXX I haven't updated the documentation.
* Undef MIN and MAX before defining them, to avoid warnings on certain [Guido van Rossum, 2002-07-13, 1 file, -0/+2]
| | | | platforms.
* Patch #568124: Add doc string macros. [Martin v. Löwis, 2002-06-13, 1 file, -2/+2]
|
* Clarify return value of PyLong_AsLongLong(). [Jeremy Hylton, 2002-04-23, 1 file, -1/+1]
| | | | | | The function is documented to return -1 on error. If res was < 0, it returned res. It wasn't clear that the invariant was res < 0 iff res == -1.
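The convention being documented is the usual CPython one: -1 is both a legal return value and the error marker, so callers must also consult the error indicator. A sketch of the caller side (get_longlong is a made-up wrapper, and the snippet assumes an extension or embedding context where Python.h is available):

    #include <Python.h>

    /* Returns 0 and stores the value on success, or -1 with a Python
     * exception already set on failure. */
    static int get_longlong(PyObject *obj, long long *out)
    {
        long long v = PyLong_AsLongLong(obj);
        if (v == -1 && PyErr_Occurred())
            return -1;             /* a real error; bare -1 may just be the value */
        *out = v;
        return 0;
    }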
* PyObject_Del can now be used as a function designator. [Neil Schemenauer, 2002-04-12, 1 file, -1/+1]
|
* Patch #494047: removes 64-bit ?: to cope on plan9. [Martin v. Löwis, 2002-03-09, 1 file, -2/+10]
|
* _PyLong_Copy(): was creating a copy of the absolute value, but should [Tim Peters, 2002-03-02, 1 file, -1/+1]
| | | | | | copy the sign too. Added a test to test_descr to ensure that it does. Bugfix candidate.
* Patch #508038: Do not use a type as a variable name. [Martin v. Löwis, 2002-02-16, 1 file, -3/+3]
|
* long_true_divide(): decref its converted arguments. test_long_future.py [Tim Peters, 2001-11-04, 1 file, -2/+5]
| | | | | run in an infinite loop no longer grows. Thanks to Neal Norwitz for determining that test leaked!
* Enable GC for new-style instances. This touches lots of files, since [Guido van Rossum, 2001-10-05, 1 file, -1/+2]
| | | | | | | | | | | | | | | | | | | | | | many types were subclassable but had a xxx_dealloc function that called PyObject_DEL(self) directly instead of deferring to self->ob_type->tp_free(self). It is permissible to set tp_free in the type object directly to _PyObject_Del, for non-GC types, or to _PyObject_GC_Del, for GC types. Still, PyObject_DEL was a tad faster, so I'm fearing that our pystone rating is going down again. I'm not sure if doing something like void xxx_dealloc(PyObject *self) { if (PyXxxCheckExact(self)) PyObject_DEL(self); else self->ob_type->tp_free(self); } is any faster than always calling the else branch, so I haven't attempted that -- however those types whose own dealloc is fancier (int, float, unicode) do use this pattern.
* SF [#466125] PyLong_AsLongLong works for any integer. [Tim Peters, 2001-09-30, 1 file, -1/+7]
| | | | | | Generalize PyLong_AsLongLong to accept int arguments too. The real point is so that PyArg_ParseTuple's 'L' code does too. That code was undocumented (AFAICT), so documented it.
* Add additional coercion support for "self subtypes" to int, long, [Guido van Rossum, 2001-09-19, 1 file, -0/+5]
| | | | float (compare the recent checkin to complex). Added tests for these.
* A fix for SF bug #461546 (bug in long_mul). [Guido van Rossum, 2001-09-15, 1 file, -10/+14]
| | | | | | | | | Both int and long multiplication are changed to be more careful in their assumptions about when one of the arguments is a sequence: the assumption that at least one of the arguments must be an int (or long, respectively) is still held, but the assumption that these don't smell like sequences is no longer true: a subtype of int or long may well have a sequence-repeat thingie!
* based upon a suggestion in c.l.py, this slight expansion of the [Skip Montanaro, 2001-09-13, 1 file, -1/+1]
| | | | OverflowError message seems reasonable.
* long_invert(): tiny speed and space optimization. [Tim Peters, 2001-09-11, 1 file, -2/+1]
|