author | Martin Panter <vadmium+py@gmail.com> | 2015-11-20 08:13:35 (GMT) |
---|---|---|
committer | Martin Panter <vadmium+py@gmail.com> | 2015-11-20 08:13:35 (GMT) |
commit | e99e97762cc75ad94056275ddcae9c84d63a3412 (patch) | |
tree | f16040197bf0122ce6aded03018b4ee6fee03679 /Misc | |
parent | d13cade3817cf452b7c98e9df96b15c6da68201a (diff) | |
Issue #25626: Change zlib to accept Py_ssize_t and cap to UINT_MAX

The underlying zlib library stores sizes in “unsigned int”. The corresponding
Python parameters are all sizes of buffers filled in by zlib, so it is okay
to reduce higher values to the UINT_MAX internal cap. OverflowError is still
raised for sizes that do not fit in Py_ssize_t.

Sizes are now limited to Py_ssize_t rather than unsigned long, because Python
byte strings cannot be larger than Py_ssize_t. Previously this could result
in a SystemError on 32-bit platforms.

This resolves a regression in the gzip module when reading more than UINT_MAX
or LONG_MAX bytes in one call, introduced by revision 62723172412c.
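
A minimal sketch of the user-visible effect described above (not part of this commit). It assumes a 64-bit build, where 2**32 fits in Py_ssize_t but exceeds UINT_MAX, so the oversized size arguments are accepted and capped internally instead of raising an error:

```python
import zlib

original = b"example data" * 1000
data = zlib.compress(original)

# Each call below passes a buffer-size limit larger than UINT_MAX.  With this
# change the values are capped to UINT_MAX inside the zlib module; previously
# they could raise OverflowError (or lead to SystemError on 32-bit builds).

# zlib.decompress(): third positional argument is bufsize
out = zlib.decompress(data, zlib.MAX_WBITS, 2**32)
assert out == original

# zlib.Decompress.decompress(): second argument is max_length
d = zlib.decompressobj()
part = d.decompress(data, 2**32)

# zlib.Decompress.flush(): argument is length
tail = d.flush(2**32)
assert part + tail == original
```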
Diffstat (limited to 'Misc')
-rw-r--r-- | Misc/NEWS | 7 |
1 file changed, 7 insertions, 0 deletions
diff --git a/Misc/NEWS b/Misc/NEWS
@@ -77,6 +77,13 @@ Core and Builtins
 Library
 -------
 
+- Issue #25626: Change three zlib functions to accept sizes that fit in
+  Py_ssize_t, but internally cap those sizes to UINT_MAX. This resolves a
+  regression in 3.5 where GzipFile.read() failed to read chunks larger than 2
+  or 4 GiB. The change affects the zlib.Decompress.decompress() max_length
+  parameter, the zlib.decompress() bufsize parameter, and the
+  zlib.Decompress.flush() length parameter.
+
 - Issue #25583: Avoid incorrect errors raised by os.makedirs(exist_ok=True)
   when the OS gives priority to errors such as EACCES over EEXIST.