Commit messages
|
|
|
|
Clean up floating point issues by adding true division and float constants.
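
A small illustration of the kind of change this describes (hypothetical, not the actual diff): with two integer operands, classic division truncates, so spelling constants as floats (or relying on true division) keeps the arithmetic in floating point.

    # Hypothetical before/after for the constant clean-up:
    print(3 / 4)      # 0 under classic integer division, 0.75 under true division
    print(3 / 4.0)    # 0.75 either way, thanks to the float constant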
|
|
|
|
|
|
on 32-bit systems on 64-bit systems, and vice versa. As a consequence
of the change, Random pickles created by Python 2.6 cannot be loaded
in Python 2.5.
|
|
|
|
Needs to be backported.
|
|
|
|
|
|
|
|
versus generator period. While this was a real weakness of the
older WH generator for lists with just a few dozen elements,
and so could potentially bite the naive ;-), the Twister should
show excellent behavior up to at least 600 elements.
Module docstring: reflowed some jarringly short lines.
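
As a rough illustration of the permutations-versus-period trade-off discussed above (assuming the Twister's period of 2**19937-1), the crossover point can be computed directly:

    # Largest list length whose number of distinct permutations (n!) still
    # fits within the Mersenne Twister's period; beyond that, some
    # permutations of the list can never come out of shuffle().
    period = 2 ** 19937 - 1
    n, perms = 1, 1
    while perms * (n + 1) <= period:
        n += 1
        perms *= n        # perms == n! after this line
    print(n)              # roughly 2080 elements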
|
|
|
work, this time by ugly brute force.
|
|
|
Fix the hit and miss style of testing for sets and dicts.
|
|
|
|
|
|
Renamed the new generator at Trevor's recommendation.
The name HardwareRandom suggested a bit more than it
delivered (no radioactive decay detectors or such).
|
|
|
you leap" approach. Makes the early call to os.urandom() unnecessary.
|
|
|
|
* trap NotImplementedError raised by os.urandom calls when not available
on a particular system.
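
A sketch of the trap-and-fall-back pattern being described, assuming os.urandom() raises NotImplementedError where no OS entropy source exists (the helper name default_seed is illustrative):

    import time
    from binascii import hexlify as _hexlify
    from os import urandom as _urandom

    def default_seed():
        # Prefer OS entropy; fall back to the clock on platforms where
        # os.urandom() is not implemented.
        try:
            return int(_hexlify(_urandom(16)), 16)
        except NotImplementedError:
            return int(time.time() * 256)   # keep fractional seconds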
|
|
|
|
|
|
|
|
|
ldexp. Both methods are exact, and return the same results. Turns out
multiplication is a few (but just a few) percent faster on my box.
They're both significantly faster than using struct with a Q format
to convert bytes to a 64-bit long (struct.unpack() appears to lose due
to the tuple creation/teardown overhead), and calling _hexlify is
significantly faster than doing bytes.encode('hex'). So we appear to
have hit a local minimum (wrt speed) here.
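
A sketch of the two equivalent conversions being timed (the helper names mirror the module's style but are shown here only for illustration):

    from binascii import hexlify as _hexlify
    from math import ldexp
    from os import urandom as _urandom

    RECIP_BPF = 2 ** -53

    def random_by_multiplication():
        # 7 bytes give 56 random bits; dropping the low 3 leaves 53, and
        # multiplying by 2**-53 maps them exactly onto [0.0, 1.0).
        return (int(_hexlify(_urandom(7)), 16) >> 3) * RECIP_BPF

    def random_by_ldexp():
        # The same value computed with ldexp; both forms are exact.
        return ldexp(int(_hexlify(_urandom(7)), 16) >> 3, -53)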
|
|
|
|
|
|
components without division and without roundoff error for properly
sized mantissas (i.e. on systems with 53 or more mantissa bits per
float). Eliminates the previous implementation's rounding bias as
aptly demonstrated by Tim Peters.
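
Sketched in Python, the two-component construction reads roughly as follows (genrand32 stands in for any source of uniform 32-bit words):

    def twister_random(genrand32):
        # Combine the top 27 bits and top 26 bits of two 32-bit outputs
        # into a 53-bit numerator.  The scale factor 1/2**53 is an exact
        # power of two, so every result is an exact multiple of 2**-53
        # and the conversion adds no rounding bias.
        a = genrand32() >> 5                    # upper 27 bits
        b = genrand32() >> 6                    # upper 26 bits
        return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0)  # 2**26, 2**53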
|
|
|
|
* Use it for seeding when it is available.
* Provide an alternate generator based on it.
|
|
|
|
Add a comment to make the traceback less mysterious.
|
|
|
|
|
|
|
* Added C coded getrandbits(k) method that runs in linear time.
* Call the new method from randrange() for ranges >= 2**53.
* Adds a warning for generators not defining getrandbits() whenever they
have a call to randrange() with too large of a population.
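
One way these pieces fit together, sketched rather than quoted from the module: for very wide ranges, drawing whole integers from getrandbits() and rejecting out-of-range values stays uniform, whereas scaling random() carries only 53 bits of precision.

    def randbelow(n, getrandbits):
        # Uniform integer in range(n) built from getrandbits(k) alone;
        # out-of-range draws are rejected, so no bias is introduced and
        # the expected number of tries is under two.
        k = n.bit_length()
        r = getrandbits(k)
        while r >= n:
            r = getrandbits(k)
        return r

    # e.g. randbelow(10**20, random.getrandbits) samples uniformly from a
    # range far wider than 2**53 (import random first).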
|
|
|
|
|
|
|
|
|
|
|
random.sample() uses one of two algorithms depending on the ratio of the
sample size to the population size. One of the algorithms accepted any
iterable population argument so long as it defined __len__(). The other
had a stronger requirement that the population argument be indexable.
While it met the documentation specifications which insisted that the
population argument be a sequence, it made random.sample() less usable
with sets. So, the second algorithm was modified to coerce non-indexable
iterables and dictionaries into a tuple before proceeding.
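
A hedged sketch of that coercion (the wrapper name is made up for illustration): the selection-based algorithm needs positional indexing, so a set or dict is snapshotted into a tuple before sampling.

    import random

    def sample_any(population, k):
        # Non-indexable iterables such as sets and dicts are converted to
        # a tuple so that positional selection works on them.
        if not hasattr(population, "__getitem__"):
            population = tuple(population)
        return random.sample(population, k)

    print(sample_any({"red", "green", "blue", "amber"}, 2))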
|
|
|
|
|
|
|
The default seed is time.time().
Multiplied by 256 before truncating so that fractional seconds are used.
This way, two successive calls to random.seed() are much more likely
to produce different sequences.
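
In other words (a one-line sketch of the arithmetic, not necessarily the exact expression used): multiplying by 256 before truncation keeps roughly eight bits of sub-second resolution in the seed.

    import time

    # int(time.time()) changes only once per second; the * 256 factor
    # folds fractional seconds in, so nearby seed() calls differ more often.
    seed = int(time.time() * 256)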
|
|
|
|
|
* Implement __reduce__() to support pickling.
* Add a test case to prove a successful roundtrip through pickle.
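
A minimal round-trip check along those lines (the seed value and the number of draws are arbitrary):

    import pickle
    import random

    gen = random.Random(8675309)
    clone = pickle.loads(pickle.dumps(gen))
    # A restored generator must continue with exactly the same sequence.
    assert [gen.random() for _ in range(10)] == [clone.random() for _ in range(10)]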
|
|
|
explaining what's wrong with the two simpler variants.
|
|
|
|
|
some of this code because it was useless, and (worse) could return a long
instead of int (in Zope that's important, because a long can't be used
as a key in an IOBTree or IIBTree).
|
|
|
|
The docs were fine but the "int=int" in the function call was both
ugly and confusing. Moved it inside the body of the function definition.
|
|
|
|
* Use Sets module to more clearly articulate a couple of tests.
|
|
|
|
|
|
|
The range of u=random() is [0,1), so log(u) and 1/x can fail.
Fix by setting u=1-random() or by reselecting for a usable value.
Will backport.
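
For the log() case, the u=1-random() form of the fix looks roughly like this (the reselection alternative simply redraws until a usable value appears):

    import math
    import random

    def expovariate(lambd):
        # random() lies in [0.0, 1.0): it can return exactly 0.0 but never
        # 1.0, so 1.0 - random() keeps log() away from log(0).
        return -math.log(1.0 - random.random()) / lambd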
|
|
|
|
|
|
It was once available so that faster generators could be substituted. Now,
that is less necessary and preferably done via subclassing.
Also, clarified and shortened the comments for sample().
|
|
|
core generator for random.py.
|
|
|
|
|
Replace "type(0)" with "int".
Replace "while 1" with "while True"
|
|
|
|
|
|
|
Added design notes in comments.
Used better variable names.
Eliminated the unsavory "pool[-k:]" which was an aspiring bug (for k==0).
Used if/else to show the two algorithms in parallel style.
Added one more test assertion.
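
A quick demonstration of the k == 0 corner case called out above:

    pool = ["a", "b", "c", "d"]
    # For k == 0, pool[-k:] is pool[-0:], i.e. pool[0:]: the whole list
    # comes back instead of the empty selection that was intended.
    print(pool[-0:])    # ['a', 'b', 'c', 'd']
    print(pool[4:])     # [] -- what an empty tail slice actually looks like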
|
|
|
|
Used for random sampling without replacement.
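
Assuming this refers to random.sample(), an example use (population and sample size are arbitrary):

    import random

    # Three distinct picks; no element can be chosen twice and the
    # population itself is left unchanged.
    print(random.sample(range(10), 3))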
|
|
|
|
|
Loosened the acceptable 'start' and 'stop' arguments so that any
Python (bounded) ints can be used. So, e.g., randrange(-sys.maxint-1,
sys.maxint) no longer blows up.