TAL/PageTemplate package for Zope. This only needed a little boilerplate
change; the tests themselves are unchanged.
derived from but not quite compatible with that of sgmllib, so it's a
new file. I suppose it needs documentation, and htmllib needs to be
changed to use this instead of sgmllib, and sgmllib needs to be
declared obsolete. But that can all be done later.
This code was first published as part of TAL (part of Zope Page
Templates), but that was strongly based on sgmllib anyway. Authors
are Fred Drake and Guido van Rossum.
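A quick usage sketch of the parser interface (written against the modern
module name html.parser; in the 2.x line the module is simply HTMLParser,
and the subclass below is purely illustrative):

from html.parser import HTMLParser

class TitleGrabber(HTMLParser):
    # Collect the character data inside the <title> element.
    def __init__(self):
        HTMLParser.__init__(self)
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

p = TitleGrabber()
p.feed("<html><head><title>Hello</title></head><body>...</body></html>")
print(p.title)    # -> Hello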
but it still can't have any syntax errors. Went a little too fast
there, Jack? :-)
build for the runtime model you are currently using for the interpreter.
codec files to codecs.py and added logic so that multi mappings
in the decoding maps now result in mappings to None (undefined mapping)
in the encoding maps.
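A rough sketch of that inversion rule (the maps below are made up; the real
codec tables map byte values to Unicode ordinals, but the principle is the
same):

decoding_map = {0x41: "A", 0x42: "A", 0x43: "C"}   # two bytes decode to "A"

encoding_map = {}
for byte, char in decoding_map.items():
    if char in encoding_map:
        encoding_map[char] = None    # ambiguous reverse mapping -> undefined
    else:
        encoding_map[char] = byte

# encoding_map is now {"A": None, "C": 0x43}: encoding "A" is undefined.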
after commas that didn't have any).
and introduces a new method .decode().
The major change is that strg.encode() will no longer try to convert
Unicode returns from the codec into a string, but instead pass along
the Unicode object as-is. The same is now true for all other codec
return types. The underlying C APIs were changed accordingly.
Note that even though this has the potential to break existing code,
the chances are low: conversion from Unicode previously took place
using the default encoding, which is normally set to ASCII, rendering
this auto-conversion mechanism useless for most Unicode encodings.
The good news is that you can now use .encode() and .decode() with
much greater ease, and that the door is open to making the builtin
codecs more accessible.
As demonstration of the new feature, the patch includes a few new
codecs which allow string to string encoding and decoding (rot13,
hex, zip, uu, base64).
Written by Marc-Andre Lemburg. Copyright assigned to the PSF.
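For example, under the 2.x semantics this change targets, the new
string-to-string codecs can be driven straight from the string methods
(in Python 3 the same transforms live behind codecs.encode()/decode()):

>>> "hello".encode("rot13")
'uryyb'
>>> "hello".encode("hex")
'68656c6c6f'
>>> "68656c6c6f".decode("hex")
'hello'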
*are* obsolete; three variables and the maketrans() function are not
(yet) obsolete.
Add a compensating warnings.filterwarnings() call to test_strop.py.
Add this to the NEWS.
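The compensating filter is essentially of this shape (the message pattern
and the call to strop.upper() are illustrative, 2.x-only code):

import warnings

# Keep the new DeprecationWarning from the obsolete strop functions out of
# the test output.
warnings.filterwarnings("ignore", ".*strop.*obsolete.*", DeprecationWarning)

import strop
strop.upper("spam")    # 'SPAM', with the warning suppressed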
the regression test is run in verbose mode.
elements when crunching a list, dict or tuple. Now takes linear time
instead -- huge speedup for even moderately large containers, and the
code is notably simpler too.
Added some basic "is the output correct?" tests to test_pprint.
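The general shape of that kind of speedup (not necessarily the exact change
made here) is to stop rebuilding the growing result for every element and
instead collect pieces and join them once:

def crunch(items):
    # Quadratic: each pass copies the whole result built so far.
    #   s = ""
    #   for x in items:
    #       s = s + repr(x) + ", "
    # Linear: build the pieces separately and join them once at the end.
    return ", ".join(repr(x) for x in items)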
dealing with the file system. As discussed on python-dev and in patch 410465.
The comment following used to say:
/* We use ~hash instead of hash, as degenerate hash functions, such
as for ints <sigh>, can have lots of leading zeros. It's not
really a performance risk, but better safe than sorry.
12-Dec-00 tim: so ~hash produces lots of leading ones instead --
what's the gain? */
That is, there was never a good reason for doing it. And to the contrary,
as explained on Python-Dev last December, it tended to make the *sum*
(i + incr) & mask (which is the first table index examined in case of
collision) the same "too often" across distinct hashes.
Changing to the simpler "i = hash & mask" reduced the number of string-dict
collisions (== the number of times we go around the lookup for-loop) from about
6 million to 5 million during a full run of the test suite (these are
approximate because the test suite does some random stuff from run to run).
The number of collisions in non-string dicts also decreased, but not as
dramatically.
Note that this may, for a given dict, change the order (wrt previous
releases) of entries exposed by .keys(), .values() and .items(). A number
of std tests suffered bogus failures as a result. For dicts keyed by
small ints, or (less so) by characters, the order is much more likely to be
in increasing order of key now; e.g.,
>>> d = {}
>>> for i in range(10):
... d[i] = i
...
>>> d
{0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9}
>>>
Unfortunately, people may latch on to that in small examples and draw a
bogus conclusion.
test_support.py
Moved test_extcall's sortdict() into test_support, made it stronger,
and imported sortdict into other std tests that needed it.
test_unicode.py
Excluded cp875 from the "roundtrip over range(128)" test, because
cp875 doesn't have a well-defined inverse for unicode("?", "cp875").
See Python-Dev for excruciating details.
Cookie.py
Changed various output functions to sort dicts before building
strings from them.
test_extcall
Fiddled the expected-result file. This remains sensitive to native
dict ordering, because, e.g., if there are multiple errors in a
keyword-arg dict (and test_extcall sets up many cases like that), the
specific error Python complains about first depends on native dict
ordering.
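For reference, a helper like the sortdict() mentioned above can be as small
as this (a sketch, not necessarily the exact code now in test_support):

def sortdict(d):
    # Repr of d with items in sorted key order, so test output no longer
    # depends on native dict ordering.
    items = sorted(d.items())
    return "{" + ", ".join("%r: %r" % pair for pair in items) + "}"

print(sortdict({3: "c", 1: "a", 2: "b"}))    # {1: 'a', 2: 'b', 3: 'c'}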
Add a comment elsewhere making clear an assumption in the code.
a bare except clause.
so only catch that specific exception.
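The pattern, in general terms (the operation and exception type here are
only illustrative):

import os

try:
    os.unlink("some-temp-file")
except OSError:    # catch just the expected failure ...
    pass           # ... rather than a bare "except:" that hides everything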
Remove unused import of "sys".
If the file TESTFN exists before we start, try to remove it.
Add spaces around the = in some assignments.
A Mystery: test_mutants ran amazingly slowly even before dictobject.c
"got fixed". I don't have a clue as to why. dict comparison was and
remains linear-time in the size of the dicts, and test_mutants only tries
100 dict pairs, of size averaging just 50. So "it should" run in less than
an eyeblink; but it takes at least a second on this 800MHz box.
and wrap the body in try/finally to ensure TESTFN gets cleaned up no
matter what.
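The shape of that cleanup, with a stand-in for the shared test filename and
a dummy test body:

import os

TESTFN = "@test"    # stand-in for the test suite's shared scratch filename

def test_something():
    # Remove any stale file left behind by an earlier, crashed run.
    if os.path.exists(TESTFN):
        os.unlink(TESTFN)
    f = open(TESTFN, "w")
    try:
        f.write("data")    # the real test body goes here
    finally:
        f.close()
        os.unlink(TESTFN)  # TESTFN gets cleaned up no matter what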
both weakref.Weak*Dictionary classes.
This closes SF bug #416480.
Fixed a half dozen ways in which general dict comparison could crash
Python (even cause Win98SE to reboot) in the presence of key and/or
value comparison routines that mutate the dict during dict comparison.
Bugfix candidate.
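The sort of hostile object involved looks roughly like this (illustrative
only; the point is that comparing the dicts must not crash the interpreter,
whatever result it returns):

class EvilKey:
    # Equality mutates a dict from the outside while that dict is being
    # compared.
    victim = None
    def __hash__(self):
        return 0
    def __eq__(self, other):
        if EvilKey.victim is not None:
            EvilKey.victim.clear()    # mutate mid-comparison
        return isinstance(other, EvilKey)

d1 = {EvilKey(): 1}
d2 = {EvilKey(): 1}
EvilKey.victim = d2
print(d1 == d2)    # must not crash, whatever it returns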
*not* define O_RDWR; get that from the os module.
meaning infinity -- but at least warn about it in the code! I pissed
away a couple hours on this today, and don't wish the same on the next
in line.
Bugfix candidate.
means "replace everything". But the string module, string.replace()
and test_string.py believe a 0 count means "replace nothing".
"Nothing" wins, strop loses.
Bugfix candidate.
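The winning semantics, as str.replace() still behaves today:

>>> "aaa".replace("a", "b", 0)    # count of 0: replace nothing
'aaa'
>>> "aaa".replace("a", "b")       # no count: replace everything
'bbb'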
Platform blew up on "123".replace("123", ""). Michael Hudson pinned the
blame on platform malloc(0) returning NULL.
This is a candidate for all bugfix releases.
needed now that fcntl exports the constants.
warning. This is similar to the TERMIOS backward compatibility module.
and using the constants defined there instead of FCNTL.
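Typical shape of the updated code, taking O_* flags from os and the F_*
constants from fcntl itself instead of the old FCNTL module (the file and
flag choices are illustrative):

import fcntl, os

fd = os.open("/dev/null", os.O_RDWR)      # O_RDWR comes from os, not FCNTL
flags = fcntl.fcntl(fd, fcntl.F_GETFL)    # F_GETFL/F_SETFL come from fcntl
fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
os.close(fd)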
cannot be determined.
Pseudo-fix for SF bug #420724
Store floats and doubles to full precision in marshal.
Test that floats read from .pyc/.pyo closely match those read from .py.
Declare PyFloat_AsString() in floatobject header file.
Add new PyFloat_AsReprString() API function.
Document the functions declared in floatobject.h.
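With full precision stored, a float should survive the dump/load round trip;
for instance (current marshal makes this exact):

>>> import marshal
>>> x = 1.0 / 3.0
>>> marshal.loads(marshal.dumps(x)) == x
True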
d1 == d2 and d1 != d2 now work even if the keys and values in d1 and d2
don't support comparisons other than ==, and testing dicts for equality
is faster now (especially when inequality obtains).
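For instance, values that only implement equality no longer get in the way
(the class below is illustrative):

class EqOnly:
    # Supports == but deliberately nothing else, so ordering comparisons
    # would fail if dict comparison ever tried them.
    def __init__(self, v):
        self.v = v
    def __eq__(self, other):
        return isinstance(other, EqOnly) and self.v == other.v
    def __hash__(self):
        return hash(self.v)

d1 = {1: EqOnly("a"), 2: EqOnly("b")}
d2 = {1: EqOnly("a"), 2: EqOnly("b")}
print(d1 == d2)    # True  -- only == on the values is ever needed
print(d1 != d2)    # False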