39 files changed, 690 insertions, 423 deletions
diff --git a/Doc/Makefile b/Doc/Makefile index cb56ea9..13411f2 100644 --- a/Doc/Makefile +++ b/Doc/Makefile @@ -185,7 +185,7 @@ serve: # for development releases: always build autobuild-dev: make update - make dist SPHINXOPTS='-A daily=1' + make dist SPHINXOPTS='-A daily=1 -A versionswitcher=1' # for stable releases: only build if not in pre-release stage (alpha, beta, rc) autobuild-stable: diff --git a/Doc/library/functions.rst b/Doc/library/functions.rst index b7d7e08..31d8cf1 100644 --- a/Doc/library/functions.rst +++ b/Doc/library/functions.rst @@ -1353,29 +1353,25 @@ are always available. They are listed here in alphabetical order. .. function:: type(object) + type(name, bases, dict) .. index:: object: type - Return the type of an *object*. The return value is a type object and - generally the same object as returned by ``object.__class__``. + + With one argument, return the type of an *object*. The return value is a + type object and generally the same object as returned by ``object.__class__``. The :func:`isinstance` built-in function is recommended for testing the type of an object, because it takes subclasses into account. - With three arguments, :func:`type` functions as a constructor as detailed - below. - - -.. function:: type(name, bases, dict) - :noindex: - Return a new type object. This is essentially a dynamic form of the - :keyword:`class` statement. The *name* string is the class name and becomes the - :attr:`__name__` attribute; the *bases* tuple itemizes the base classes and - becomes the :attr:`__bases__` attribute; and the *dict* dictionary is the - namespace containing definitions for class body and becomes the :attr:`__dict__` - attribute. For example, the following two statements create identical - :class:`type` objects: + With three arguments, return a new type object. This is essentially a + dynamic form of the :keyword:`class` statement. The *name* string is the + class name and becomes the :attr:`__name__` attribute; the *bases* tuple + itemizes the base classes and becomes the :attr:`__bases__` attribute; + and the *dict* dictionary is the namespace containing definitions for class + body and becomes the :attr:`__dict__` attribute. For example, the + following two statements create identical :class:`type` objects: >>> class X: ... a = 1 diff --git a/Doc/library/idle.rst b/Doc/library/idle.rst index 6bd1898..5f28a99 100644 --- a/Doc/library/idle.rst +++ b/Doc/library/idle.rst @@ -154,27 +154,56 @@ The rest of this menu lists the names of all open windows; select one to bring it to the foreground (deiconifying it if necessary). -Debug menu (in the Python Shell window only) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Debug menu +^^^^^^^^^^ + +* in the Python Shell window only Go to file/line - look around the insert point for a filename and linenumber, open the file, and - show the line. + Look around the insert point for a filename and line number, open the file, + and show the line. Useful to view the source lines referenced in an + exception traceback. -Open stack viewer - show the stack traceback of the last exception +Debugger + Run commands in the shell under the debugger. -Debugger toggle - Run commands in the shell under the debugger +Stack viewer + Show the stack traceback of the last exception. -JIT Stack viewer toggle - Open stack viewer on traceback +Auto-open Stack Viewer + Open stack viewer on traceback. .. 
index:: single: stack viewer single: debugger +Edit context menu +^^^^^^^^^^^^^^^^^ + +* Right-click in Edit window (Control-click on OS X) + +Set Breakpoint + Sets a breakpoint. Breakpoints are only enabled when the debugger is open. + +Clear Breakpoint + Clears the breakpoint on that line. + +.. index:: + single: Set Breakpoint + single: Clear Breakpoint + single: breakpoints + + +Shell context menu +^^^^^^^^^^^^^^^^^^ + +* Right-click in Python Shell window (Control-click on OS X) + +Go to file/line + Same as in Debug menu. + + Basic editing and navigation ---------------------------- diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index 20174c5..78a2799 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -2137,13 +2137,13 @@ pairs within braces, for example: ``{'jack': 4098, 'sjoerd': 4127}`` or ``{4098: replaces the value from the positional argument. To illustrate, the following examples all return a dictionary equal to - ``{"one": 1, "two": 2}``:: + ``{"one": 1, "two": 2, "three": 3}``:: - >>> a = dict(one=1, two=2) - >>> b = dict({'one': 1, 'two': 2}) - >>> c = dict(zip(('one', 'two'), (1, 2))) - >>> d = dict([['two', 2], ['one', 1]]) - >>> e = {"one": 1, "two": 2} + >>> a = dict(one=1, two=2, three=3) + >>> b = {'one': 1, 'two': 2, 'three': 3} + >>> c = dict(zip(['one', 'two', 'three'], [1, 2, 3])) + >>> d = dict([('two', 2), ('one', 1), ('three', 3)]) + >>> e = dict({'three': 3, 'one': 1, 'two': 2}) >>> a == b == c == d == e True diff --git a/Doc/library/unittest.rst b/Doc/library/unittest.rst index 72a3a7b..4422533 100644 --- a/Doc/library/unittest.rst +++ b/Doc/library/unittest.rst @@ -99,9 +99,10 @@ need to derive from a specific class. The script :file:`Tools/unittestgui/unittestgui.py` in the Python source distribution is a GUI tool for test discovery and execution. This is intended largely for ease of use - for those new to unit testing. For production environments it is recommended that - tests be driven by a continuous integration system such as `Hudson <http://hudson-ci.org/>`_ - or `Buildbot <http://buildbot.net/trac>`_. + for those new to unit testing. For production environments it is + recommended that tests be driven by a continuous integration system such as + `Buildbot <http://buildbot.net/trac>`_, `Jenkins <http://jenkins-ci.org>`_ + or `Hudson <http://hudson-ci.org/>`_. .. _unittest-minimal-example: diff --git a/Doc/reference/compound_stmts.rst b/Doc/reference/compound_stmts.rst index 003224b..6889176 100644 --- a/Doc/reference/compound_stmts.rst +++ b/Doc/reference/compound_stmts.rst @@ -442,8 +442,7 @@ A function definition defines a user-defined function object (see section decorator: "@" `dotted_name` ["(" [`parameter_list` [","]] ")"] NEWLINE dotted_name: `identifier` ("." 
`identifier`)* parameter_list: (`defparameter` ",")* - : ( "*" [`parameter`] ("," `defparameter`)* - : [, "**" `parameter`] + : ( "*" [`parameter`] ("," `defparameter`)* ["," "**" `parameter`] : | "**" `parameter` : | `defparameter` [","] ) parameter: `identifier` [":" `expression`] diff --git a/Doc/tools/sphinxext/layout.html b/Doc/tools/sphinxext/layout.html index db4a386..4b16b3f 100644 --- a/Doc/tools/sphinxext/layout.html +++ b/Doc/tools/sphinxext/layout.html @@ -3,18 +3,26 @@ <li><img src="{{ pathto('_static/py.png', 1) }}" alt="" style="vertical-align: middle; margin-top: -1px"/></li> <li><a href="http://www.python.org/">Python</a>{{ reldelim1 }}</li> - <li><a href="{{ pathto('index') }}">{{ shorttitle }}</a>{{ reldelim1 }}</li> + <li> + {%- if versionswitcher is defined %} + <span class="version_switcher_placeholder">{{ release }}</span> + <a href="{{ pathto('index') }}">Documentation</a>{{ reldelim1 }} + {%- else %} + <a href="{{ pathto('index') }}">{{ shorttitle }}</a>{{ reldelim1 }} + {%- endif %} + </li> {% endblock %} {% block extrahead %} <link rel="shortcut icon" type="image/png" href="{{ pathto('_static/py.png', 1) }}" /> {% if not embedded %}<script type="text/javascript" src="{{ pathto('_static/copybutton.js', 1) }}"></script>{% endif %} + {% if versionswitcher is defined and not embedded %}<script type="text/javascript" src="{{ pathto('_static/version_switch.js', 1) }}"></script>{% endif %} {{ super() }} {% endblock %} {% block footer %} <div class="footer"> © <a href="{{ pathto('copyright') }}">Copyright</a> {{ copyright|e }}. <br /> - The Python Software Foundation is a non-profit corporation. + The Python Software Foundation is a non-profit corporation. <a href="http://www.python.org/psf/donations/">Please donate.</a> <br /> Last updated on {{ last_updated|e }}. diff --git a/Doc/tools/sphinxext/static/version_switch.js b/Doc/tools/sphinxext/static/version_switch.js new file mode 100644 index 0000000..cc7be1c --- /dev/null +++ b/Doc/tools/sphinxext/static/version_switch.js @@ -0,0 +1,66 @@ +(function() { + 'use strict'; + + var all_versions = { + '3.4': 'dev (3.4)', + '3.3': '3.3', + '3.2': '3.2', + '2.7': '2.7', + '2.6': '2.6' + }; + + function build_select(current_version, current_release) { + var buf = ['<select>']; + + $.each(all_versions, function(version, title) { + buf.push('<option value="' + version + '"'); + if (version == current_version) + buf.push(' selected="selected">' + current_release + '</option>'); + else + buf.push('>' + title + '</option>'); + }); + + buf.push('</select>'); + return buf.join(''); + } + + function patch_url(url, new_version) { + var url_re = /\.org\/(\d|py3k|dev|((release\/)?\d\.\d[\w\d\.]*))\//, + new_url = url.replace(url_re, '.org/' + new_version + '/'); + + if (new_url == url && !new_url.match(url_re)) { + // python 2 url without version? 
+ new_url = url.replace(/\.org\//, '.org/' + new_version + '/'); + } + return new_url; + } + + function on_switch() { + var selected = $(this).children('option:selected').attr('value'); + + var url = window.location.href, + new_url = patch_url(url, selected); + + if (new_url != url) { + // check beforehand if url exists, else redirect to version's start page + $.ajax({ + url: new_url, + success: function() { + window.location.href = new_url; + }, + error: function() { + window.location.href = 'http://docs.python.org/' + selected; + } + }); + } + } + + $(document).ready(function() { + var release = DOCUMENTATION_OPTIONS.VERSION; + var version = release.substr(0, 3); + var select = build_select(version, release); + + $('.version_switcher_placeholder').html(select); + $('.version_switcher_placeholder select').bind('change', on_switch); + }); +})(); diff --git a/Doc/tutorial/inputoutput.rst b/Doc/tutorial/inputoutput.rst index 73143be..1324359 100644 --- a/Doc/tutorial/inputoutput.rst +++ b/Doc/tutorial/inputoutput.rst @@ -256,9 +256,10 @@ default being UTF-8). ``'b'`` appended to the mode opens the file in :dfn:`binary mode`: now the data is read and written in the form of bytes objects. This mode should be used for all files that don't contain text. -In text mode, the default is to convert platform-specific line endings (``\n`` -on Unix, ``\r\n`` on Windows) to just ``\n`` on reading and ``\n`` back to -platform-specific line endings on writing. This behind-the-scenes modification +In text mode, the default when reading is to convert platform-specific line +endings (``\n`` on Unix, ``\r\n`` on Windows) to just ``\n``. When writing in +text mode, the default is to convert occurrences of ``\n`` back to +platform-specific line endings. This behind-the-scenes modification to file data is fine for text files, but will corrupt binary data like that in :file:`JPEG` or :file:`EXE` files. Be very careful to use binary mode when reading and writing such files. diff --git a/Doc/tutorial/stdlib2.rst b/Doc/tutorial/stdlib2.rst index a9ae871..2265cd0 100644 --- a/Doc/tutorial/stdlib2.rst +++ b/Doc/tutorial/stdlib2.rst @@ -257,9 +257,9 @@ applications include caching objects that are expensive to create:: >>> import weakref, gc >>> class A: ... def __init__(self, value): - ... self.value = value + ... self.value = value ... def __repr__(self): - ... return str(self.value) + ... return str(self.value) ... >>> a = A(10) # create a reference >>> d = weakref.WeakValueDictionary() diff --git a/Lib/cgitb.py b/Lib/cgitb.py index 7b52c8e..6da40e8 100644 --- a/Lib/cgitb.py +++ b/Lib/cgitb.py @@ -293,14 +293,19 @@ class Hook: if self.logdir is not None: suffix = ['.txt', '.html'][self.format=="html"] (fd, path) = tempfile.mkstemp(suffix=suffix, dir=self.logdir) + try: file = os.fdopen(fd, 'w') file.write(doc) file.close() - msg = '<p> %s contains the description of this error.' % path + msg = '%s contains the description of this error.' % path except: - msg = '<p> Tried to save traceback to %s, but failed.' % path - self.file.write(msg + '\n') + msg = 'Tried to save traceback to %s, but failed.' % path + + if self.format == 'html': + self.file.write('<p>%s</p>\n' % msg) + else: + self.file.write(msg + '\n') try: self.file.flush() except: pass diff --git a/Lib/idlelib/NEWS.txt b/Lib/idlelib/NEWS.txt index 3160c74..f234b64 100644 --- a/Lib/idlelib/NEWS.txt +++ b/Lib/idlelib/NEWS.txt @@ -21,6 +21,9 @@ What's New in IDLE 3.2.4? 
- Issue #14018: Update checks for unstable system Tcl/Tk versions on OS X to include versions shipped with OS X 10.7 and 10.8 in addition to 10.6. +- Issue #15853: Prevent IDLE crash on OS X when opening Preferences menu + with certain versions of Tk 8.5. Initial patch by Kevin Walzer. + What's New in IDLE 3.2.3? ========================= diff --git a/Lib/idlelib/configDialog.py b/Lib/idlelib/configDialog.py index 2701b42..434114e 100644 --- a/Lib/idlelib/configDialog.py +++ b/Lib/idlelib/configDialog.py @@ -821,10 +821,9 @@ class ConfigDialog(Toplevel): fontWeight=tkFont.BOLD else: fontWeight=tkFont.NORMAL - size=self.fontSize.get() - self.editFont.config(size=size, - weight=fontWeight,family=fontName) - self.textHighlightSample.configure(font=(fontName, size, fontWeight)) + newFont = (fontName, self.fontSize.get(), fontWeight) + self.labelFontSample.config(font=newFont) + self.textHighlightSample.configure(font=newFont) def SetHighlightTarget(self): if self.highlightTarget.get()=='Cursor': #bg not possible diff --git a/Lib/idlelib/help.txt b/Lib/idlelib/help.txt index 7bfd2ca..c179555 100644 --- a/Lib/idlelib/help.txt +++ b/Lib/idlelib/help.txt @@ -80,7 +80,7 @@ Shell Menu (only in Shell window): Debug Menu (only in Shell window): Go to File/Line -- look around the insert point for a filename - and linenumber, open the file, and show the line + and line number, open the file, and show the line Debugger (toggle) -- Run commands in the shell under the debugger Stack Viewer -- Show the stack traceback of the last exception Auto-open Stack Viewer (toggle) -- Open stack viewer on traceback @@ -92,7 +92,7 @@ Options Menu: Startup Preferences may be set, and Additional Help Sources can be specified. - On MacOS X this menu is not present, use + On OS X this menu is not present, use menu 'IDLE -> Preferences...' instead. --- Code Context -- Open a pane at the top of the edit window which @@ -120,6 +120,15 @@ Help Menu: --- (Additional Help Sources may be added here) +Edit context menu (Right-click / Control-click in Edit window): + + Set Breakpoint -- Sets a breakpoint (when debugger open) + Clear Breakpoint -- Clears the breakpoint on that line + +Shell context menu (Right-click / Control-click in Shell window): + + Go to file/line -- Same as in Debug menu + ** TIPS ** ========== @@ -222,7 +231,7 @@ Python Shell window: Alt-p retrieves previous command matching what you have typed. Alt-n retrieves next. - (These are Control-p, Control-n on the Mac) + (These are Control-p, Control-n on OS X) Return while cursor is on a previous command retrieves that command. Expand word is also useful to reduce typing. diff --git a/Lib/test/regrtest.py b/Lib/test/regrtest.py index 29f2bf0..e098522 100755 --- a/Lib/test/regrtest.py +++ b/Lib/test/regrtest.py @@ -489,10 +489,10 @@ def main(tests=None, testdir=None, verbose=0, quiet=False, next_single_test = alltests[alltests.index(selected[0])+1] except IndexError: next_single_test = None - # Remove all the tests that precede start if it's set. + # Remove all the selected tests that precede start if it's set. 
if start: try: - del tests[:tests.index(start)] + del selected[:selected.index(start)] except ValueError: print("Couldn't find starting test (%s), using all tests" % start) if randomize: diff --git a/Lib/test/test_bz2.py b/Lib/test/test_bz2.py index be35580..715468a 100644 --- a/Lib/test/test_bz2.py +++ b/Lib/test/test_bz2.py @@ -1,6 +1,6 @@ #!/usr/bin/env python3 from test import support -from test.support import TESTFN +from test.support import TESTFN, _4G, bigmemtest, findfile import unittest from io import BytesIO @@ -25,6 +25,9 @@ class BaseTest(unittest.TestCase): DATA = b'BZh91AY&SY.\xc8N\x18\x00\x01>_\x80\x00\x10@\x02\xff\xf0\x01\x07n\x00?\xe7\xff\xe00\x01\x99\xaa\x00\xc0\x03F\x86\x8c#&\x83F\x9a\x03\x06\xa6\xd0\xa6\x93M\x0fQ\xa7\xa8\x06\x804hh\x12$\x11\xa4i4\xf14S\xd2<Q\xb5\x0fH\xd3\xd4\xdd\xd5\x87\xbb\xf8\x94\r\x8f\xafI\x12\xe1\xc9\xf8/E\x00pu\x89\x12]\xc9\xbbDL\nQ\x0e\t1\x12\xdf\xa0\xc0\x97\xac2O9\x89\x13\x94\x0e\x1c7\x0ed\x95I\x0c\xaaJ\xa4\x18L\x10\x05#\x9c\xaf\xba\xbc/\x97\x8a#C\xc8\xe1\x8cW\xf9\xe2\xd0\xd6M\xa7\x8bXa<e\x84t\xcbL\xb3\xa7\xd9\xcd\xd1\xcb\x84.\xaf\xb3\xab\xab\xad`n}\xa0lh\tE,\x8eZ\x15\x17VH>\x88\xe5\xcd9gd6\x0b\n\xe9\x9b\xd5\x8a\x99\xf7\x08.K\x8ev\xfb\xf7xw\xbb\xdf\xa1\x92\xf1\xdd|/";\xa2\xba\x9f\xd5\xb1#A\xb6\xf6\xb3o\xc9\xc5y\\\xebO\xe7\x85\x9a\xbc\xb6f8\x952\xd5\xd7"%\x89>V,\xf7\xa6z\xe2\x9f\xa3\xdf\x11\x11"\xd6E)I\xa9\x13^\xca\xf3r\xd0\x03U\x922\xf26\xec\xb6\xed\x8b\xc3U\x13\x9d\xc5\x170\xa4\xfa^\x92\xacDF\x8a\x97\xd6\x19\xfe\xdd\xb8\xbd\x1a\x9a\x19\xa3\x80ankR\x8b\xe5\xd83]\xa9\xc6\x08\x82f\xf6\xb9"6l$\xb8j@\xc0\x8a\xb0l1..\xbak\x83ls\x15\xbc\xf4\xc1\x13\xbe\xf8E\xb8\x9d\r\xa8\x9dk\x84\xd3n\xfa\xacQ\x07\xb1%y\xaav\xb4\x08\xe0z\x1b\x16\xf5\x04\xe9\xcc\xb9\x08z\x1en7.G\xfc]\xc9\x14\xe1B@\xbb!8`' DATA_CRLF = b'BZh91AY&SY\xaez\xbbN\x00\x01H\xdf\x80\x00\x12@\x02\xff\xf0\x01\x07n\x00?\xe7\xff\xe0@\x01\xbc\xc6`\x86*\x8d=M\xa9\x9a\x86\xd0L@\x0fI\xa6!\xa1\x13\xc8\x88jdi\x8d@\x03@\x1a\x1a\x0c\x0c\x83 \x00\xc4h2\x19\x01\x82D\x84e\t\xe8\x99\x89\x19\x1ah\x00\r\x1a\x11\xaf\x9b\x0fG\xf5(\x1b\x1f?\t\x12\xcf\xb5\xfc\x95E\x00ps\x89\x12^\xa4\xdd\xa2&\x05(\x87\x04\x98\x89u\xe40%\xb6\x19\'\x8c\xc4\x89\xca\x07\x0e\x1b!\x91UIFU%C\x994!DI\xd2\xfa\xf0\xf1N8W\xde\x13A\xf5\x9cr%?\x9f3;I45A\xd1\x8bT\xb1<l\xba\xcb_\xc00xY\x17r\x17\x88\x08\x08@\xa0\ry@\x10\x04$)`\xf2\xce\x89z\xb0s\xec\x9b.iW\x9d\x81\xb5-+t\x9f\x1a\'\x97dB\xf5x\xb5\xbe.[.\xd7\x0e\x81\xe7\x08\x1cN`\x88\x10\xca\x87\xc3!"\x80\x92R\xa1/\xd1\xc0\xe6mf\xac\xbd\x99\xcca\xb3\x8780>\xa4\xc7\x8d\x1a\\"\xad\xa1\xabyBg\x15\xb9l\x88\x88\x91k"\x94\xa4\xd4\x89\xae*\xa6\x0b\x10\x0c\xd6\xd4m\xe86\xec\xb5j\x8a\x86j\';\xca.\x01I\xf2\xaaJ\xe8\x88\x8cU+t3\xfb\x0c\n\xa33\x13r2\r\x16\xe0\xb3(\xbf\x1d\x83r\xe7M\xf0D\x1365\xd8\x88\xd3\xa4\x92\xcb2\x06\x04\\\xc1\xb0\xea//\xbek&\xd8\xe6+t\xe5\xa1\x13\xada\x16\xder5"w]\xa2i\xb7[\x97R \xe2IT\xcd;Z\x04dk4\xad\x8a\t\xd3\x81z\x10\xf1:^`\xab\x1f\xc5\xdc\x91N\x14$+\x9e\xae\xd3\x80' + with open(findfile("testbz2_bigmem.bz2"), "rb") as f: + DATA_BIGMEM = f.read() + if has_cmdline_bunzip2: def decompress(self, data): pop = subprocess.Popen("bunzip2", shell=True, @@ -44,6 +47,7 @@ class BaseTest(unittest.TestCase): def decompress(self, data): return bz2.decompress(data) + class BZ2FileTest(BaseTest): "Test BZ2File type miscellaneous methods." 
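The bigmem tests added to this file exercise compression and decompression of payloads at and beyond the 4 GiB mark; the round trip they perform looks like the following small-scale sketch (the payload size here is arbitrary, chosen so the example runs quickly, unlike the _4G sizes used by the tests):

    import bz2

    text = b"a" * 100000       # stand-in for the multi-gigabyte payload
    data = bz2.compress(text)
    assert bz2.decompress(data) == text

    # The incremental API driven by BZ2CompressorTest.testBigmem:
    compressor = bz2.BZ2Compressor()
    data = compressor.compress(text) + compressor.flush()
    assert bz2.BZ2Decompressor().decompress(data) == text
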
@@ -313,6 +317,17 @@ class BZ2CompressorTest(BaseTest): data += bz2c.flush() self.assertEqual(self.decompress(data), self.TEXT) + @bigmemtest(size=_4G, memuse=1.25) + def testBigmem(self, size): + text = b"a" * size + bz2c = bz2.BZ2Compressor() + data = bz2c.compress(text) + bz2c.flush() + del text + text = self.decompress(data) + self.assertEqual(len(text), size) + self.assertEqual(text.strip(b"a"), b"") + + class BZ2DecompressorTest(BaseTest): def test_Constructor(self): self.assertRaises(TypeError, BZ2Decompressor, 42) @@ -351,6 +366,13 @@ class BZ2DecompressorTest(BaseTest): text = bz2d.decompress(self.DATA) self.assertRaises(EOFError, bz2d.decompress, b"anything") + @bigmemtest(size=_4G, memuse=1.25, dry_run=False) + def testBigmem(self, unused_size): + # Issue #14398: decompression fails when output data is >=2GB. + text = bz2.BZ2Decompressor().decompress(self.DATA_BIGMEM) + self.assertEqual(len(text), _4G) + self.assertEqual(text.strip(b"\0"), b"") + class FuncTest(BaseTest): "Test module functions" @@ -374,6 +396,22 @@ class FuncTest(BaseTest): # "Test decompress() function with incomplete data" self.assertRaises(ValueError, bz2.decompress, self.DATA[:-10]) + @bigmemtest(size=_4G, memuse=1.25) + def testCompressBigmem(self, size): + text = b"a" * size + data = bz2.compress(text) + del text + text = self.decompress(data) + self.assertEqual(len(text), size) + self.assertEqual(text.strip(b"a"), b"") + + @bigmemtest(size=_4G, memuse=1.25, dry_run=False) + def testDecompressBigmem(self, unused_size): + # Issue #14398: decompression fails when output data is >=2GB. + text = bz2.decompress(self.DATA_BIGMEM) + self.assertEqual(len(text), _4G) + self.assertEqual(text.strip(b"\0"), b"") + def test_main(): support.run_unittest( BZ2FileTest, diff --git a/Lib/test/test_codecs.py b/Lib/test/test_codecs.py index f342d88..42d0da3 100644 --- a/Lib/test/test_codecs.py +++ b/Lib/test/test_codecs.py @@ -645,6 +645,8 @@ class UTF8Test(ReadTest): self.assertEqual(b"abc\xed\xa0\x80def".decode("utf-8", "surrogatepass"), "abc\ud800def") self.assertTrue(codecs.lookup_error("surrogatepass")) + with self.assertRaises(UnicodeDecodeError): + b"abc\xed\xa0".decode("utf-8", "surrogatepass") class UTF7Test(ReadTest): encoding = "utf-7" diff --git a/Lib/test/test_gdb.py b/Lib/test/test_gdb.py index fb8261b..6d96550 100644 --- a/Lib/test/test_gdb.py +++ b/Lib/test/test_gdb.py @@ -19,39 +19,57 @@ except OSError: # This is what "no gdb" looks like. There may, however, be other # errors that manifest this way too. raise unittest.SkipTest("Couldn't find gdb on the path") -gdb_version_number = re.search(b"^GNU gdb [^\d]*(\d+)\.", gdb_version) -if int(gdb_version_number.group(1)) < 7: +gdb_version_number = re.search(b"^GNU gdb [^\d]*(\d+)\.(\d)", gdb_version) +gdb_major_version = int(gdb_version_number.group(1)) +gdb_minor_version = int(gdb_version_number.group(2)) +if gdb_major_version < 7: raise unittest.SkipTest("gdb versions before 7.0 didn't support python embedding" " Saw:\n" + gdb_version.decode('ascii', 'replace')) +# Location of custom hooks file in a repository checkout. +checkout_hook_path = os.path.join(os.path.dirname(sys.executable), + 'python-gdb.py') + +def run_gdb(*args, **env_vars): + """Runs gdb in --batch mode with the additional arguments given by *args. + + Returns its (stdout, stderr) decoded from utf-8 using the replace handler. 
+ """ + if env_vars: + env = os.environ.copy() + env.update(env_vars) + else: + env = None + base_cmd = ('gdb', '--batch') + if (gdb_major_version, gdb_minor_version) >= (7, 4): + base_cmd += ('-iex', 'add-auto-load-safe-path ' + checkout_hook_path) + out, err = subprocess.Popen(base_cmd + args, + stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env, + ).communicate() + return out.decode('utf-8', 'replace'), err.decode('utf-8', 'replace') + # Verify that "gdb" was built with the embedded python support enabled: -cmd = "--eval-command=python import sys; print sys.version_info" -p = subprocess.Popen(["gdb", "--batch", cmd], - stdout=subprocess.PIPE) -gdbpy_version, _ = p.communicate() -if gdbpy_version == b'': +gdbpy_version, _ = run_gdb("--eval-command=python import sys; print sys.version_info") +if not gdbpy_version: raise unittest.SkipTest("gdb not built with embedded python support") -# Verify that "gdb" can load our custom hooks -p = subprocess.Popen(["gdb", "--batch", cmd, - "--args", sys.executable], - stdout=subprocess.PIPE, stderr=subprocess.PIPE) -__, gdbpy_errors = p.communicate() -if b"auto-loading has been declined" in gdbpy_errors: - msg = "gdb security settings prevent use of custom hooks: %s" - raise unittest.SkipTest(msg % gdbpy_errors) +# Verify that "gdb" can load our custom hooks. In theory this should never +# fail, but we don't handle the case of the hooks file not existing if the +# tests are run from an installed Python (we'll produce failures in that case). +cmd = ['--args', sys.executable] +_, gdbpy_errors = run_gdb('--args', sys.executable) +if "auto-loading has been declined" in gdbpy_errors: + msg = "gdb security settings prevent use of custom hooks: " + raise unittest.SkipTest(msg + gdbpy_errors.rstrip()) def gdb_has_frame_select(): # Does this build of gdb have gdb.Frame.select ? - cmd = "--eval-command=python print(dir(gdb.Frame))" - p = subprocess.Popen(["gdb", "--batch", cmd], - stdout=subprocess.PIPE) - stdout, _ = p.communicate() - m = re.match(br'.*\[(.*)\].*', stdout) + stdout, _ = run_gdb("--eval-command=python print(dir(gdb.Frame))") + m = re.match(r'.*\[(.*)\].*', stdout) if not m: raise unittest.SkipTest("Unable to parse output from gdb.Frame.select test") - gdb_frame_dir = m.group(1).split(b', ') - return b"'select'" in gdb_frame_dir + gdb_frame_dir = m.group(1).split(', ') + return "'select'" in gdb_frame_dir HAS_PYUP_PYDOWN = gdb_has_frame_select() @@ -61,21 +79,6 @@ class DebuggerTests(unittest.TestCase): """Test that the debugger can debug Python.""" - def run_gdb(self, *args, **env_vars): - """Runs gdb with the command line given by *args. 
- - Returns its stdout, stderr - """ - if env_vars: - env = os.environ.copy() - env.update(env_vars) - else: - env = None - out, err = subprocess.Popen( - args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env, - ).communicate() - return out.decode('utf-8', 'replace'), err.decode('utf-8', 'replace') - def get_stack_trace(self, source=None, script=None, breakpoint=BREAKPOINT_FN, cmds_after_breakpoint=None, @@ -132,7 +135,7 @@ class DebuggerTests(unittest.TestCase): # print ' '.join(args) # Use "args" to invoke gdb, capturing stdout, stderr: - out, err = self.run_gdb(*args, PYTHONHASHSEED='0') + out, err = run_gdb(*args, PYTHONHASHSEED='0') # Ignore some noise on stderr due to the pending breakpoint: err = err.replace('Function "%s" not defined.\n' % breakpoint, '') @@ -149,6 +152,11 @@ class DebuggerTests(unittest.TestCase): 'Do you need "set solib-search-path" or ' '"set sysroot"?\n', '') + err = err.replace('warning: Could not load shared library symbols for ' + 'linux-gate.so.1.\n' + 'Do you need "set solib-search-path" or ' + '"set sysroot"?\n', + '') # Ensure no unexpected error messages: self.assertEqual(err, '') diff --git a/Lib/test/test_import.py b/Lib/test/test_import.py index 0f8f1f5..b10f350 100644 --- a/Lib/test/test_import.py +++ b/Lib/test/test_import.py @@ -20,12 +20,23 @@ from test.support import ( from test import script_helper +def _files(name): + return (name + os.extsep + "py", + name + os.extsep + "pyc", + name + os.extsep + "pyo", + name + os.extsep + "pyw", + name + "$py.class") + +def chmod_files(name): + for f in _files(name): + try: + os.chmod(f, 0o600) + except OSError as exc: + if exc.errno != errno.ENOENT: + raise + def remove_files(name): - for f in (name + ".py", - name + ".pyc", - name + ".pyo", - name + ".pyw", - name + "$py.class"): + for f in _files(name): unlink(f) rmtree('__pycache__') @@ -122,6 +133,45 @@ class ImportTests(unittest.TestCase): remove_files(TESTFN) unload(TESTFN) + def test_rewrite_pyc_with_read_only_source(self): + # Issue 6074: a long time ago on posix, and more recently on Windows, + # a read only source file resulted in a read only pyc file, which + # led to problems with updating it later + sys.path.insert(0, os.curdir) + fname = TESTFN + os.extsep + "py" + try: + # Write a Python file, make it read-only and import it + with open(fname, 'w') as f: + f.write("x = 'original'\n") + # Tweak the mtime of the source to ensure pyc gets updated later + s = os.stat(fname) + os.utime(fname, (s.st_atime, s.st_mtime-100000000)) + os.chmod(fname, 0o400) + m1 = __import__(TESTFN) + self.assertEqual(m1.x, 'original') + # Change the file and then reimport it + os.chmod(fname, 0o600) + with open(fname, 'w') as f: + f.write("x = 'rewritten'\n") + unload(TESTFN) + m2 = __import__(TESTFN) + self.assertEqual(m2.x, 'rewritten') + # Now delete the source file and check the pyc was rewritten + unlink(fname) + unload(TESTFN) + if __debug__: + bytecode_name = fname + "c" + else: + bytecode_name = fname + "o" + os.rename(imp.cache_from_source(fname), bytecode_name) + m3 = __import__(TESTFN) + self.assertEqual(m3.x, 'rewritten') + finally: + chmod_files(TESTFN) + remove_files(TESTFN) + unload(TESTFN) + del sys.path[0] + def test_imp_module(self): # Verify that the imp module can correctly load and find .py files # XXX (ncoghlan): It would be nice to use support.CleanImport diff --git a/Lib/test/test_unicode.py b/Lib/test/test_unicode.py index 19b06a0..000ae6a 100644 --- a/Lib/test/test_unicode.py +++ b/Lib/test/test_unicode.py @@ -906,6 +906,21 @@ class 
UnicodeTest(string_tests.CommonTest, self.assertRaises(ValueError, '{}'.format_map, 'a') self.assertRaises(ValueError, '{a} {}'.format_map, {"a" : 2, "b" : 1}) + def test_format_huge_precision(self): + format_string = ".{}f".format(sys.maxsize + 1) + with self.assertRaises(ValueError): + result = format(2.34, format_string) + + def test_format_huge_width(self): + format_string = "{}f".format(sys.maxsize + 1) + with self.assertRaises(ValueError): + result = format(2.34, format_string) + + def test_format_huge_item_number(self): + format_string = "{{{}:.6f}}".format(sys.maxsize + 1) + with self.assertRaises(ValueError): + result = format_string.format(2.34) + def test_format_auto_numbering(self): class C: def __init__(self, x=100): @@ -990,6 +1005,18 @@ class UnicodeTest(string_tests.CommonTest, self.assertEqual('%f' % INF, 'inf') self.assertEqual('%F' % INF, 'INF') + @support.cpython_only + def test_formatting_huge_precision(self): + from _testcapi import INT_MAX + format_string = "%.{}f".format(INT_MAX + 1) + with self.assertRaises(ValueError): + result = format_string % 2.34 + + def test_formatting_huge_width(self): + format_string = "%{}f".format(sys.maxsize + 1) + with self.assertRaises(ValueError): + result = format_string % 2.34 + def test_startswith_endswith_errors(self): for meth in ('foo'.startswith, 'foo'.endswith): with self.assertRaises(TypeError) as cm: diff --git a/Lib/test/test_urllib.py b/Lib/test/test_urllib.py index c6f6f61..3fc499e 100644 --- a/Lib/test/test_urllib.py +++ b/Lib/test/test_urllib.py @@ -268,6 +268,41 @@ Content-Type: text/html; charset=iso-8859-1 finally: self.unfakehttp() + def test_missing_localfile(self): + # Test for #10836 + with self.assertRaises(urllib.error.URLError) as e: + urlopen('file://localhost/a/file/which/doesnot/exists.py') + self.assertTrue(e.exception.filename) + self.assertTrue(e.exception.reason) + + def test_file_notexists(self): + fd, tmp_file = tempfile.mkstemp() + tmp_fileurl = 'file://localhost/' + tmp_file.replace(os.path.sep, '/') + try: + self.assertTrue(os.path.exists(tmp_file)) + with urlopen(tmp_fileurl) as fobj: + self.assertTrue(fobj) + finally: + os.close(fd) + os.unlink(tmp_file) + self.assertFalse(os.path.exists(tmp_file)) + with self.assertRaises(urllib.error.URLError): + urlopen(tmp_fileurl) + + def test_ftp_nohost(self): + test_ftp_url = 'ftp:///path' + with self.assertRaises(urllib.error.URLError) as e: + urlopen(test_ftp_url) + self.assertFalse(e.exception.filename) + self.assertTrue(e.exception.reason) + + def test_ftp_nonexisting(self): + with self.assertRaises(urllib.error.URLError) as e: + urlopen('ftp://localhost/a/file/which/doesnot/exists.py') + self.assertFalse(e.exception.filename) + self.assertTrue(e.exception.reason) + + def test_userpass_inurl(self): self.fakehttp(b"HTTP/1.0 200 OK\r\n\r\nHello!") try: diff --git a/Lib/test/test_urllib2net.py b/Lib/test/test_urllib2net.py index 5fcb4cb..e2b1d95 100644 --- a/Lib/test/test_urllib2net.py +++ b/Lib/test/test_urllib2net.py @@ -156,12 +156,12 @@ class OtherNetworkTests(unittest.TestCase): ## self._test_urls(urls, self._extra_handlers()+[bauth, dauth]) def test_urlwithfrag(self): - urlwith_frag = "http://docs.python.org/glossary.html#glossary" + urlwith_frag = "http://docs.python.org/2/glossary.html#glossary" with support.transient_internet(urlwith_frag): req = urllib.request.Request(urlwith_frag) res = urllib.request.urlopen(req) self.assertEqual(res.geturl(), - "http://docs.python.org/glossary.html#glossary") + 
"http://docs.python.org/2/glossary.html#glossary") def test_custom_headers(self): url = "http://www.example.com" diff --git a/Lib/test/test_wsgiref.py b/Lib/test/test_wsgiref.py index a08f66b..08f8d9a 100644 --- a/Lib/test/test_wsgiref.py +++ b/Lib/test/test_wsgiref.py @@ -39,9 +39,6 @@ class MockHandler(WSGIRequestHandler): pass - - - def hello_app(environ,start_response): start_response("200 OK", [ ('Content-Type','text/plain'), @@ -63,28 +60,6 @@ def run_amock(app=hello_app, data=b"GET / HTTP/1.0\n\n"): return out.getvalue(), err.getvalue() - - - - - - - - - - - - - - - - - - - - - - def compare_generic_iter(make_it,match): """Utility to compare a generic 2.1/2.2+ iterator with an iterable @@ -122,10 +97,6 @@ def compare_generic_iter(make_it,match): raise AssertionError("Too many items from .__next__()", it) - - - - class IntegrationTests(TestCase): def check_hello(self, out, has_length=True): @@ -195,8 +166,6 @@ class IntegrationTests(TestCase): out) - - class UtilityTests(TestCase): def checkShift(self,sn_in,pi_in,part,sn_out,pi_out): @@ -235,11 +204,6 @@ class UtilityTests(TestCase): util.setup_testing_defaults(kw) self.assertEqual(util.request_uri(kw,query),uri) - - - - - def checkFW(self,text,size,match): def make_it(text=text,size=size): @@ -258,7 +222,6 @@ class UtilityTests(TestCase): it.close() self.assertTrue(it.filelike.closed) - def testSimpleShifts(self): self.checkShift('','/', '', '/', '') self.checkShift('','/x', 'x', '/x', '') @@ -266,7 +229,6 @@ class UtilityTests(TestCase): self.checkShift('/a','/x/y', 'x', '/a/x', '/y') self.checkShift('/a','/x/', 'x', '/a/x', '/') - def testNormalizedShifts(self): self.checkShift('/a/b', '/../y', '..', '/a', '/y') self.checkShift('', '/../y', '..', '', '/y') @@ -280,7 +242,6 @@ class UtilityTests(TestCase): self.checkShift('/a/b', '/x//', 'x', '/a/b/x', '/') self.checkShift('/a/b', '/.', None, '/a/b', '') - def testDefaults(self): for key, value in [ ('SERVER_NAME','127.0.0.1'), @@ -300,7 +261,6 @@ class UtilityTests(TestCase): ]: self.checkDefault(key,value) - def testCrossDefaults(self): self.checkCrossDefault('HTTP_HOST',"foo.bar",SERVER_NAME="foo.bar") self.checkCrossDefault('wsgi.url_scheme',"https",HTTPS="on") @@ -310,7 +270,6 @@ class UtilityTests(TestCase): self.checkCrossDefault('SERVER_PORT',"80",HTTPS="foo") self.checkCrossDefault('SERVER_PORT',"443",HTTPS="on") - def testGuessScheme(self): self.assertEqual(util.guess_scheme({}), "http") self.assertEqual(util.guess_scheme({'HTTPS':"foo"}), "http") @@ -318,10 +277,6 @@ class UtilityTests(TestCase): self.assertEqual(util.guess_scheme({'HTTPS':"yes"}), "https") self.assertEqual(util.guess_scheme({'HTTPS':"1"}), "https") - - - - def testAppURIs(self): self.checkAppURI("http://127.0.0.1/") self.checkAppURI("http://127.0.0.1/spam", SCRIPT_NAME="/spam") @@ -446,15 +401,6 @@ class TestHandler(ErrorHandler): raise # for testing, we want to see what's happening - - - - - - - - - class HandlerTests(TestCase): def checkEnvironAttrs(self, handler): @@ -495,7 +441,6 @@ class HandlerTests(TestCase): h=TestHandler(); h.setup_environ() self.assertEqual(h.environ['wsgi.url_scheme'],'http') - def testAbstractMethods(self): h = BaseHandler() for name in [ @@ -504,7 +449,6 @@ class HandlerTests(TestCase): self.assertRaises(NotImplementedError, getattr(h,name)) self.assertRaises(NotImplementedError, h._write, "test") - def testContentLength(self): # Demo one reason iteration is better than write()... 
;) @@ -596,7 +540,6 @@ class HandlerTests(TestCase): "\r\n".encode("iso-8859-1")+MSG)) self.assertIn("AssertionError", h.stderr.getvalue()) - def testHeaderFormats(self): def non_error_app(e,s): @@ -656,40 +599,27 @@ class HandlerTests(TestCase): b"data", h.stdout.getvalue()) -# This epilogue is needed for compatibility with the Python 2.5 regrtest module + def testCloseOnError(self): + side_effects = {'close_called': False} + MSG = b"Some output has been sent" + def error_app(e,s): + s("200 OK",[])(MSG) + class CrashyIterable(object): + def __iter__(self): + while True: + yield b'blah' + raise AssertionError("This should be caught by handler") + def close(self): + side_effects['close_called'] = True + return CrashyIterable() + + h = ErrorHandler() + h.run(error_app) + self.assertEqual(side_effects['close_called'], True) + def test_main(): support.run_unittest(__name__) if __name__ == "__main__": test_main() - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -# the above lines intentionally left blank diff --git a/Lib/test/testbz2_bigmem.bz2 b/Lib/test/testbz2_bigmem.bz2 Binary files differnew file mode 100644 index 0000000..c9a4616 --- /dev/null +++ b/Lib/test/testbz2_bigmem.bz2 diff --git a/Lib/urllib/request.py b/Lib/urllib/request.py index d6f9f9a..eb45c7e 100644 --- a/Lib/urllib/request.py +++ b/Lib/urllib/request.py @@ -1300,9 +1300,9 @@ class FileHandler(BaseHandler): else: origurl = 'file://' + filename return addinfourl(open(localfile, 'rb'), headers, origurl) - except OSError as msg: + except OSError as exp: # users shouldn't expect OSErrors coming from urlopen() - raise URLError(msg) + raise URLError(exp) raise URLError('file not on local host') def _safe_gethostbyname(host): @@ -1361,8 +1361,8 @@ class FTPHandler(BaseHandler): headers += "Content-length: %d\n" % retrlen headers = email.message_from_string(headers) return addinfourl(fp, headers, req.full_url) - except ftplib.all_errors as msg: - exc = URLError('ftp error: %s' % msg) + except ftplib.all_errors as exp: + exc = URLError('ftp error: %r' % exp) raise exc.with_traceback(sys.exc_info()[2]) def connect_ftp(self, user, passwd, host, port, dirs, timeout): @@ -1662,7 +1662,6 @@ class URLopener: if proxy_bypass(realhost): host = realhost - #print "proxy via http:", host, selector if not host: raise IOError('http error', 'no host given') if proxy_passwd: @@ -1753,7 +1752,7 @@ class URLopener: def open_file(self, url): """Use local file or FTP depending on form of URL.""" if not isinstance(url, str): - raise URLError('file error', 'proxy support for file protocol currently not implemented') + raise URLError('file error: proxy support for file protocol currently not implemented') if url[:2] == '//' and url[2:3] != '/' and url[2:12].lower() != 'localhost/': raise ValueError("file:// scheme is supported only on localhost") else: @@ -1768,7 +1767,7 @@ class URLopener: try: stats = os.stat(localname) except OSError as e: - raise URLError(e.errno, e.strerror, e.filename) + raise URLError(e.strerror, e.filename) size = stats.st_size modified = email.utils.formatdate(stats.st_mtime, usegmt=True) mtype = mimetypes.guess_type(url)[0] @@ -1782,23 +1781,22 @@ class URLopener: return addinfourl(open(localname, 'rb'), headers, urlfile) host, port = splitport(host) if (not port - and socket.gethostbyname(host) in (localhost() + thishost())): + and socket.gethostbyname(host) in ((localhost(),) + thishost())): urlfile = file if file[:1] == '/': urlfile = 'file://' + file elif file[:2] == './': raise ValueError("local file url may start 
with / or file:. Unknown url of type: %s" % url) return addinfourl(open(localname, 'rb'), headers, urlfile) - raise URLError('local file error', 'not on local host') + raise URLError('local file error: not on local host') def open_ftp(self, url): """Use FTP protocol.""" if not isinstance(url, str): - raise URLError('ftp error', 'proxy support for ftp protocol currently not implemented') + raise URLError('ftp error: proxy support for ftp protocol currently not implemented') import mimetypes - from io import StringIO host, path = splithost(url) - if not host: raise URLError('ftp error', 'no host given') + if not host: raise URLError('ftp error: no host given') host, port = splitport(host) user, host = splituser(host) if user: user, passwd = splitpasswd(user) @@ -1847,13 +1845,13 @@ class URLopener: headers += "Content-Length: %d\n" % retrlen headers = email.message_from_string(headers) return addinfourl(fp, headers, "ftp:" + url) - except ftperrors() as msg: - raise URLError('ftp error', msg).with_traceback(sys.exc_info()[2]) + except ftperrors() as exp: + raise URLError('ftp error %r' % exp).with_traceback(sys.exc_info()[2]) def open_data(self, url, data=None): """Use "data" URL.""" if not isinstance(url, str): - raise URLError('data error', 'proxy support for data protocol currently not implemented') + raise URLError('data error: proxy support for data protocol currently not implemented') # ignore POSTed data # # syntax of data URLs: @@ -2184,7 +2182,7 @@ class ftpwrapper: conn, retrlen = self.ftp.ntransfercmd(cmd) except ftplib.error_perm as reason: if str(reason)[:3] != '550': - raise URLError('ftp error', reason).with_traceback( + raise URLError('ftp error: %d' % reason).with_traceback( sys.exc_info()[2]) if not conn: # Set transfer mode to ASCII! @@ -2196,7 +2194,7 @@ class ftpwrapper: try: self.ftp.cwd(file) except ftplib.error_perm as reason: - raise URLError('ftp error', reason) from reason + raise URLError('ftp error: %d' % reason) from reason finally: self.ftp.cwd(pwd) cmd = 'LIST ' + file @@ -2212,13 +2210,7 @@ class ftpwrapper: return (ftpobj, retrlen) def endtransfer(self): - if not self.busy: - return self.busy = 0 - try: - self.ftp.voidresp() - except ftperrors(): - pass def close(self): self.keepalive = False @@ -2470,7 +2462,6 @@ elif os.name == 'nt': test = test.replace("*", r".*") # change glob sequence test = test.replace("?", r".") # change glob char for val in host: - # print "%s <--> %s" %( test, val ) if re.match(test, val, re.I): return 1 return 0 diff --git a/Lib/wsgiref/handlers.py b/Lib/wsgiref/handlers.py index 67064a6..63d5993 100644 --- a/Lib/wsgiref/handlers.py +++ b/Lib/wsgiref/handlers.py @@ -174,11 +174,13 @@ class BaseHandler: in the event loop to iterate over the data, and to call 'self.close()' once the response is finished. 
""" - if not self.result_is_file() or not self.sendfile(): - for data in self.result: - self.write(data) - self.finish_content() - self.close() + try: + if not self.result_is_file() or not self.sendfile(): + for data in self.result: + self.write(data) + self.finish_content() + finally: + self.close() def get_scheme(self): diff --git a/Makefile.pre.in b/Makefile.pre.in index 91e28d2..44c1f15 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -280,7 +280,7 @@ AST_ASDL= $(srcdir)/Parser/Python.asdl ASDLGEN_FILES= $(srcdir)/Parser/asdl.py $(srcdir)/Parser/asdl_c.py # XXX Note that a build now requires Python exist before the build starts -ASDLGEN= @DISABLE_ASDLGEN@ $(srcdir)/Parser/asdl_c.py +ASDLGEN= @ASDLGEN@ $(srcdir)/Parser/asdl_c.py ########################################################################## # Python @@ -699,6 +699,7 @@ Mark Mc Mahon Gordon McMillan Caolan McNamara Andrew McNamara +Jeff McNeil Craig McPheeters Lambert Meertens Bill van Melle @@ -959,6 +960,7 @@ Jiwon Seo Joakim Sernbrant Roger Serwy Jerry Seutter +Pete Sevander Denis Severson Ian Seyer Ha Shao @@ -1067,6 +1069,7 @@ Richard Townsend Laurence Tratt John Tromp Jason Trowbridge +Brent Tubbs Anthony Tuininga Erno Tukia David Turner @@ -10,6 +10,12 @@ What's New in Python 3.2.4 Core and Builtins ----------------- +- Issue #14700: Fix buggy overflow checks when handling large precisions and + widths in old-style and new-style formatting. + +- Issue #6074: Ensure cached bytecode files can always be updated by the + user that created them, even when the source file is read-only. + - Issue #14783: Improve int() docstring and switch docstrings for str(), range(), and slice() to use multi-line signatures. @@ -129,6 +135,22 @@ Core and Builtins Library ------- +- Issue #12890: cgitb no longer prints spurious <p> tags in text + mode when the logdir option is specified. + +- Issue #16250: Fix URLError invocation with proper args. + +- Issue #16305: Fix a segmentation fault occurring when interrupting + math.factorial. + +- Issue #14398: Fix size truncation and overflow bugs in the bz2 module. + +- Issue #16220: wsgiref now always calls close() on an iterable response. + Patch by Brent Tubbs. + +- Issue #16270: urllib may hang when used for retrieving files via FTP by using + a context manager. Patch by Giampaolo Rodola'. + - Issue #16176: Properly identify Windows 8 via platform.platform() - Issue #16114: The subprocess module no longer provides a misleading @@ -579,6 +601,8 @@ Tests Build ----- +- Issue #16262: fix out-of-src-tree builds, if mercurial is not installed. + - Issue #15923: fix a mistake in asdl_c.py that resulted in a TypeError after 2801bf875a24 (see #15801). @@ -615,6 +639,9 @@ Build Documentation ------------- +- Issue #8040: added a version switcher to the documentation. Patch by + Yury Selivanov. + - Issue #16115: Improve subprocess.Popen() documentation around args, shell, and executable arguments. 
diff --git a/Modules/bz2module.c b/Modules/bz2module.c index a671e8d..4795965 100644 --- a/Modules/bz2module.c +++ b/Modules/bz2module.c @@ -41,23 +41,8 @@ typedef fpos_t Py_off_t; #define MODE_READ_EOF 2 #define MODE_WRITE 3 -#define BZ2FileObject_Check(v) (Py_TYPE(v) == &BZ2File_Type) - -#ifdef BZ_CONFIG_ERROR - -#if SIZEOF_LONG >= 8 -#define BZS_TOTAL_OUT(bzs) \ - (((long)bzs->total_out_hi32 << 32) + bzs->total_out_lo32) -#elif SIZEOF_LONG_LONG >= 8 -#define BZS_TOTAL_OUT(bzs) \ - (((PY_LONG_LONG)bzs->total_out_hi32 << 32) + bzs->total_out_lo32) -#else -#define BZS_TOTAL_OUT(bzs) \ - bzs->total_out_lo32 -#endif - -#else /* ! BZ_CONFIG_ERROR */ +#ifndef BZ_CONFIG_ERROR #define BZ2_bzRead bzRead #define BZ2_bzReadOpen bzReadOpen @@ -72,8 +57,6 @@ typedef fpos_t Py_off_t; #define BZ2_bzDecompressInit bzDecompressInit #define BZ2_bzDecompressEnd bzDecompressEnd -#define BZS_TOTAL_OUT(bzs) bzs->total_out - #endif /* ! BZ_CONFIG_ERROR */ @@ -90,11 +73,7 @@ typedef fpos_t Py_off_t; #define RELEASE_LOCK(obj) #endif -/* Bits in f_newlinetypes */ -#define NEWLINE_UNKNOWN 0 /* No newline seen, yet */ -#define NEWLINE_CR 1 /* \r newline seen */ -#define NEWLINE_LF 2 /* \n newline seen */ -#define NEWLINE_CRLF 4 /* \r\n newline seen */ +#define MIN(X, Y) (((X) < (Y)) ? (X) : (Y)) /* ===================================================================== */ /* Structure definitions. */ @@ -228,6 +207,20 @@ Util_NewBufferSize(size_t currentsize) return currentsize + (currentsize >> 3) + 6; } +static int +Util_GrowBuffer(PyObject **buf) +{ + size_t size = PyBytes_GET_SIZE(*buf); + size_t new_size = Util_NewBufferSize(size); + if (new_size > size) { + return _PyBytes_Resize(buf, new_size); + } else { /* overflow */ + PyErr_SetString(PyExc_OverflowError, + "Unable to allocate buffer - output too large"); + return -1; + } +} + /* This is a hacked version of Python's fileobject.c:get_line(). 
*/ static PyObject * Util_GetLine(BZ2FileObject *f, int n) @@ -1418,20 +1411,16 @@ static PyObject * BZ2Comp_compress(BZ2CompObject *self, PyObject *args) { Py_buffer pdata; - char *data; - int datasize; - int bufsize = SMALLCHUNK; - PY_LONG_LONG totalout; + size_t input_left; + size_t output_size = 0; PyObject *ret = NULL; bz_stream *bzs = &self->bzs; int bzerror; if (!PyArg_ParseTuple(args, "y*:compress", &pdata)) return NULL; - data = pdata.buf; - datasize = pdata.len; - if (datasize == 0) { + if (pdata.len == 0) { PyBuffer_Release(&pdata); return PyBytes_FromStringAndSize("", 0); } @@ -1443,41 +1432,51 @@ BZ2Comp_compress(BZ2CompObject *self, PyObject *args) goto error; } - ret = PyBytes_FromStringAndSize(NULL, bufsize); + ret = PyBytes_FromStringAndSize(NULL, SMALLCHUNK); if (!ret) goto error; - bzs->next_in = data; - bzs->avail_in = datasize; - bzs->next_out = BUF(ret); - bzs->avail_out = bufsize; + bzs->next_in = pdata.buf; + bzs->avail_in = MIN(pdata.len, UINT_MAX); + input_left = pdata.len - bzs->avail_in; - totalout = BZS_TOTAL_OUT(bzs); + bzs->next_out = BUF(ret); + bzs->avail_out = PyBytes_GET_SIZE(ret); for (;;) { + char *saved_next_out; + Py_BEGIN_ALLOW_THREADS + saved_next_out = bzs->next_out; bzerror = BZ2_bzCompress(bzs, BZ_RUN); + output_size += bzs->next_out - saved_next_out; Py_END_ALLOW_THREADS + if (bzerror != BZ_RUN_OK) { Util_CatchBZ2Error(bzerror); goto error; } - if (bzs->avail_in == 0) - break; /* no more input data */ + if (bzs->avail_in == 0) { + if (input_left == 0) + break; /* no more input data */ + bzs->avail_in = MIN(input_left, UINT_MAX); + input_left -= bzs->avail_in; + } if (bzs->avail_out == 0) { - bufsize = Util_NewBufferSize(bufsize); - if (_PyBytes_Resize(&ret, bufsize) < 0) { - BZ2_bzCompressEnd(bzs); - goto error; + size_t buffer_left = PyBytes_GET_SIZE(ret) - output_size; + if (buffer_left == 0) { + if (Util_GrowBuffer(&ret) < 0) { + BZ2_bzCompressEnd(bzs); + goto error; + } + bzs->next_out = BUF(ret) + output_size; + buffer_left = PyBytes_GET_SIZE(ret) - output_size; } - bzs->next_out = BUF(ret) + (BZS_TOTAL_OUT(bzs) - - totalout); - bzs->avail_out = bufsize - (bzs->next_out - BUF(ret)); + bzs->avail_out = MIN(buffer_left, UINT_MAX); } } - if (_PyBytes_Resize(&ret, - (Py_ssize_t)(BZS_TOTAL_OUT(bzs) - totalout)) < 0) + if (_PyBytes_Resize(&ret, output_size) < 0) goto error; RELEASE_LOCK(self); @@ -1501,33 +1500,34 @@ You must not use the compressor object after calling this method.\n\ static PyObject * BZ2Comp_flush(BZ2CompObject *self) { - int bufsize = SMALLCHUNK; + size_t output_size = 0; PyObject *ret = NULL; bz_stream *bzs = &self->bzs; - PY_LONG_LONG totalout; int bzerror; ACQUIRE_LOCK(self); if (!self->running) { - PyErr_SetString(PyExc_ValueError, "object was already " - "flushed"); + PyErr_SetString(PyExc_ValueError, "object was already flushed"); goto error; } self->running = 0; - ret = PyBytes_FromStringAndSize(NULL, bufsize); + ret = PyBytes_FromStringAndSize(NULL, SMALLCHUNK); if (!ret) goto error; bzs->next_out = BUF(ret); - bzs->avail_out = bufsize; - - totalout = BZS_TOTAL_OUT(bzs); + bzs->avail_out = PyBytes_GET_SIZE(ret); for (;;) { + char *saved_next_out; + Py_BEGIN_ALLOW_THREADS + saved_next_out = bzs->next_out; bzerror = BZ2_bzCompress(bzs, BZ_FINISH); + output_size += bzs->next_out - saved_next_out; Py_END_ALLOW_THREADS + if (bzerror == BZ_STREAM_END) { break; } else if (bzerror != BZ_FINISH_OK) { @@ -1535,21 +1535,20 @@ BZ2Comp_flush(BZ2CompObject *self) goto error; } if (bzs->avail_out == 0) { - bufsize = 
Util_NewBufferSize(bufsize); - if (_PyBytes_Resize(&ret, bufsize) < 0) - goto error; - bzs->next_out = BUF(ret); - bzs->next_out = BUF(ret) + (BZS_TOTAL_OUT(bzs) - - totalout); - bzs->avail_out = bufsize - (bzs->next_out - BUF(ret)); + size_t buffer_left = PyBytes_GET_SIZE(ret) - output_size; + if (buffer_left == 0) { + if (Util_GrowBuffer(&ret) < 0) + goto error; + bzs->next_out = BUF(ret) + output_size; + buffer_left = PyBytes_GET_SIZE(ret) - output_size; + } + bzs->avail_out = MIN(buffer_left, UINT_MAX); } } - if (bzs->avail_out != 0) { - if (_PyBytes_Resize(&ret, - (Py_ssize_t)(BZS_TOTAL_OUT(bzs) - totalout)) < 0) + if (output_size != PyBytes_GET_SIZE(ret)) + if (_PyBytes_Resize(&ret, output_size) < 0) goto error; - } RELEASE_LOCK(self); return ret; @@ -1714,18 +1713,14 @@ static PyObject * BZ2Decomp_decompress(BZ2DecompObject *self, PyObject *args) { Py_buffer pdata; - char *data; - int datasize; - int bufsize = SMALLCHUNK; - PY_LONG_LONG totalout; + size_t input_left; + size_t output_size = 0; PyObject *ret = NULL; bz_stream *bzs = &self->bzs; int bzerror; if (!PyArg_ParseTuple(args, "y*:decompress", &pdata)) return NULL; - data = pdata.buf; - datasize = pdata.len; ACQUIRE_LOCK(self); if (!self->running) { @@ -1734,55 +1729,65 @@ BZ2Decomp_decompress(BZ2DecompObject *self, PyObject *args) goto error; } - ret = PyBytes_FromStringAndSize(NULL, bufsize); + ret = PyBytes_FromStringAndSize(NULL, SMALLCHUNK); if (!ret) goto error; - bzs->next_in = data; - bzs->avail_in = datasize; - bzs->next_out = BUF(ret); - bzs->avail_out = bufsize; + bzs->next_in = pdata.buf; + bzs->avail_in = MIN(pdata.len, UINT_MAX); + input_left = pdata.len - bzs->avail_in; - totalout = BZS_TOTAL_OUT(bzs); + bzs->next_out = BUF(ret); + bzs->avail_out = PyBytes_GET_SIZE(ret); for (;;) { + char *saved_next_out; + Py_BEGIN_ALLOW_THREADS + saved_next_out = bzs->next_out; bzerror = BZ2_bzDecompress(bzs); + output_size += bzs->next_out - saved_next_out; Py_END_ALLOW_THREADS + if (bzerror == BZ_STREAM_END) { - if (bzs->avail_in != 0) { + self->running = 0; + input_left += bzs->avail_in; + if (input_left != 0) { Py_DECREF(self->unused_data); self->unused_data = - PyBytes_FromStringAndSize(bzs->next_in, - bzs->avail_in); + PyBytes_FromStringAndSize(bzs->next_in, input_left); + if (self->unused_data == NULL) + goto error; } - self->running = 0; break; } if (bzerror != BZ_OK) { Util_CatchBZ2Error(bzerror); goto error; } - if (bzs->avail_in == 0) - break; /* no more input data */ + if (bzs->avail_in == 0) { + if (input_left == 0) + break; /* no more input data */ + bzs->avail_in = MIN(input_left, UINT_MAX); + input_left -= bzs->avail_in; + } if (bzs->avail_out == 0) { - bufsize = Util_NewBufferSize(bufsize); - if (_PyBytes_Resize(&ret, bufsize) < 0) { - BZ2_bzDecompressEnd(bzs); - goto error; + size_t buffer_left = PyBytes_GET_SIZE(ret) - output_size; + if (buffer_left == 0) { + if (Util_GrowBuffer(&ret) < 0) { + BZ2_bzDecompressEnd(bzs); + goto error; + } + bzs->next_out = BUF(ret) + output_size; + buffer_left = PyBytes_GET_SIZE(ret) - output_size; } - bzs->next_out = BUF(ret); - bzs->next_out = BUF(ret) + (BZS_TOTAL_OUT(bzs) - - totalout); - bzs->avail_out = bufsize - (bzs->next_out - BUF(ret)); + bzs->avail_out = MIN(buffer_left, UINT_MAX); } } - if (bzs->avail_out != 0) { - if (_PyBytes_Resize(&ret, - (Py_ssize_t)(BZS_TOTAL_OUT(bzs) - totalout)) < 0) + if (output_size != PyBytes_GET_SIZE(ret)) + if (_PyBytes_Resize(&ret, output_size) < 0) goto error; - } RELEASE_LOCK(self); PyBuffer_Release(&pdata); @@ -1929,10 +1934,10 @@ 
static PyObject * bz2_compress(PyObject *self, PyObject *args, PyObject *kwargs) { int compresslevel=9; + int action; Py_buffer pdata; - char *data; - int datasize; - int bufsize; + size_t input_left; + size_t output_size = 0; PyObject *ret = NULL; bz_stream _bzs; bz_stream *bzs = &_bzs; @@ -1943,8 +1948,6 @@ bz2_compress(PyObject *self, PyObject *args, PyObject *kwargs) kwlist, &pdata, &compresslevel)) return NULL; - data = pdata.buf; - datasize = pdata.len; if (compresslevel < 1 || compresslevel > 9) { PyErr_SetString(PyExc_ValueError, @@ -1953,11 +1956,7 @@ bz2_compress(PyObject *self, PyObject *args, PyObject *kwargs) return NULL; } - /* Conforming to bz2 manual, this is large enough to fit compressed - * data in one shot. We will check it later anyway. */ - bufsize = datasize + (datasize/100+1) + 600; - - ret = PyBytes_FromStringAndSize(NULL, bufsize); + ret = PyBytes_FromStringAndSize(NULL, SMALLCHUNK); if (!ret) { PyBuffer_Release(&pdata); return NULL; @@ -1965,10 +1964,12 @@ bz2_compress(PyObject *self, PyObject *args, PyObject *kwargs) memset(bzs, 0, sizeof(bz_stream)); - bzs->next_in = data; - bzs->avail_in = datasize; + bzs->next_in = pdata.buf; + bzs->avail_in = MIN(pdata.len, UINT_MAX); + input_left = pdata.len - bzs->avail_in; + bzs->next_out = BUF(ret); - bzs->avail_out = bufsize; + bzs->avail_out = PyBytes_GET_SIZE(ret); bzerror = BZ2_bzCompressInit(bzs, compresslevel, 0, 0); if (bzerror != BZ_OK) { @@ -1978,38 +1979,53 @@ bz2_compress(PyObject *self, PyObject *args, PyObject *kwargs) return NULL; } + action = BZ_RUN; + for (;;) { + char *saved_next_out; + Py_BEGIN_ALLOW_THREADS - bzerror = BZ2_bzCompress(bzs, BZ_FINISH); + saved_next_out = bzs->next_out; + bzerror = BZ2_bzCompress(bzs, action); + output_size += bzs->next_out - saved_next_out; Py_END_ALLOW_THREADS + if (bzerror == BZ_STREAM_END) { break; - } else if (bzerror != BZ_FINISH_OK) { + } else if (bzerror != BZ_RUN_OK && bzerror != BZ_FINISH_OK) { BZ2_bzCompressEnd(bzs); Util_CatchBZ2Error(bzerror); PyBuffer_Release(&pdata); Py_DECREF(ret); return NULL; } + if (action == BZ_RUN && bzs->avail_in == 0) { + if (input_left == 0) { + action = BZ_FINISH; + } else { + bzs->avail_in = MIN(input_left, UINT_MAX); + input_left -= bzs->avail_in; + } + } if (bzs->avail_out == 0) { - bufsize = Util_NewBufferSize(bufsize); - if (_PyBytes_Resize(&ret, bufsize) < 0) { - BZ2_bzCompressEnd(bzs); - PyBuffer_Release(&pdata); - return NULL; + size_t buffer_left = PyBytes_GET_SIZE(ret) - output_size; + if (buffer_left == 0) { + if (Util_GrowBuffer(&ret) < 0) { + BZ2_bzCompressEnd(bzs); + PyBuffer_Release(&pdata); + return NULL; + } + bzs->next_out = BUF(ret) + output_size; + buffer_left = PyBytes_GET_SIZE(ret) - output_size; } - bzs->next_out = BUF(ret) + BZS_TOTAL_OUT(bzs); - bzs->avail_out = bufsize - (bzs->next_out - BUF(ret)); + bzs->avail_out = MIN(buffer_left, UINT_MAX); } } - if (bzs->avail_out != 0) { - if (_PyBytes_Resize(&ret, (Py_ssize_t)BZS_TOTAL_OUT(bzs)) < 0) { - ret = NULL; - } - } - BZ2_bzCompressEnd(bzs); + if (output_size != PyBytes_GET_SIZE(ret)) + _PyBytes_Resize(&ret, output_size); /* Sets ret to NULL on failure. 
+    BZ2_bzCompressEnd(bzs);
     PyBuffer_Release(&pdata);
     return ret;
 }
@@ -2025,9 +2041,8 @@ static PyObject *
 bz2_decompress(PyObject *self, PyObject *args)
 {
     Py_buffer pdata;
-    char *data;
-    int datasize;
-    int bufsize = SMALLCHUNK;
+    size_t input_left;
+    size_t output_size = 0;
     PyObject *ret;
     bz_stream _bzs;
     bz_stream *bzs = &_bzs;
@@ -2035,15 +2050,13 @@ bz2_decompress(PyObject *self, PyObject *args)
     if (!PyArg_ParseTuple(args, "y*:decompress", &pdata))
         return NULL;
 
-    data = pdata.buf;
-    datasize = pdata.len;
-    if (datasize == 0) {
+    if (pdata.len == 0) {
         PyBuffer_Release(&pdata);
         return PyBytes_FromStringAndSize("", 0);
     }
 
-    ret = PyBytes_FromStringAndSize(NULL, bufsize);
+    ret = PyBytes_FromStringAndSize(NULL, SMALLCHUNK);
     if (!ret) {
         PyBuffer_Release(&pdata);
         return NULL;
@@ -2051,10 +2064,12 @@ bz2_decompress(PyObject *self, PyObject *args)
 
     memset(bzs, 0, sizeof(bz_stream));
 
-    bzs->next_in = data;
-    bzs->avail_in = datasize;
+    bzs->next_in = pdata.buf;
+    bzs->avail_in = MIN(pdata.len, UINT_MAX);
+    input_left = pdata.len - bzs->avail_in;
+
     bzs->next_out = BUF(ret);
-    bzs->avail_out = bufsize;
+    bzs->avail_out = PyBytes_GET_SIZE(ret);
 
     bzerror = BZ2_bzDecompressInit(bzs, 0, 0);
     if (bzerror != BZ_OK) {
@@ -2065,9 +2080,14 @@ bz2_decompress(PyObject *self, PyObject *args)
     }
 
     for (;;) {
+        char *saved_next_out;
+
         Py_BEGIN_ALLOW_THREADS
+        saved_next_out = bzs->next_out;
         bzerror = BZ2_bzDecompress(bzs);
+        output_size += bzs->next_out - saved_next_out;
         Py_END_ALLOW_THREADS
+
         if (bzerror == BZ_STREAM_END) {
             break;
         } else if (bzerror != BZ_OK) {
@@ -2078,33 +2098,37 @@ bz2_decompress(PyObject *self, PyObject *args)
             return NULL;
         }
         if (bzs->avail_in == 0) {
-            BZ2_bzDecompressEnd(bzs);
-            PyErr_SetString(PyExc_ValueError,
-                            "couldn't find end of stream");
-            PyBuffer_Release(&pdata);
-            Py_DECREF(ret);
-            return NULL;
-        }
-        if (bzs->avail_out == 0) {
-            bufsize = Util_NewBufferSize(bufsize);
-            if (_PyBytes_Resize(&ret, bufsize) < 0) {
+            if (input_left == 0) {
                 BZ2_bzDecompressEnd(bzs);
+                PyErr_SetString(PyExc_ValueError,
+                                "couldn't find end of stream");
                 PyBuffer_Release(&pdata);
+                Py_DECREF(ret);
                 return NULL;
             }
-            bzs->next_out = BUF(ret) + BZS_TOTAL_OUT(bzs);
-            bzs->avail_out = bufsize - (bzs->next_out - BUF(ret));
+            bzs->avail_in = MIN(input_left, UINT_MAX);
+            input_left -= bzs->avail_in;
         }
-    }
-
-    if (bzs->avail_out != 0) {
-        if (_PyBytes_Resize(&ret, (Py_ssize_t)BZS_TOTAL_OUT(bzs)) < 0) {
-            ret = NULL;
+        if (bzs->avail_out == 0) {
+            size_t buffer_left = PyBytes_GET_SIZE(ret) - output_size;
+            if (buffer_left == 0) {
+                if (Util_GrowBuffer(&ret) < 0) {
+                    BZ2_bzDecompressEnd(bzs);
+                    PyBuffer_Release(&pdata);
+                    return NULL;
+                }
+                bzs->next_out = BUF(ret) + output_size;
+                buffer_left = PyBytes_GET_SIZE(ret) - output_size;
+            }
+            bzs->avail_out = MIN(buffer_left, UINT_MAX);
         }
     }
+
+    if (output_size != PyBytes_GET_SIZE(ret))
+        _PyBytes_Resize(&ret, output_size);  /* Sets ret to NULL on failure. */
+
     BZ2_bzDecompressEnd(bzs);
     PyBuffer_Release(&pdata);
-
     return ret;
 }
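Not part of the patch itself, just an illustrative aside: libbzip2's bz_stream counts avail_in and avail_out in unsigned int, so the rewritten bz2module.c functions above feed input and drain output in chunks of at most UINT_MAX bytes while tracking the full lengths in size_t variables (input_left, output_size). A minimal standalone sketch of that chunking bookkeeping, using hypothetical names such as process_chunk and feed_all rather than the real libbzip2 API, might look like this:

    #include <limits.h>
    #include <stddef.h>
    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    /* Hypothetical stand-in for one BZ2_bzCompress()/BZ2_bzDecompress() call:
       like libbzip2, it can only see up to UINT_MAX bytes at a time. */
    static void process_chunk(const char *buf, unsigned int len)
    {
        printf("processing %u bytes\n", len);
        (void)buf;
    }

    /* Feed an arbitrarily large buffer in UINT_MAX-sized pieces, mirroring
       the next_in/avail_in/input_left bookkeeping in the hunks above. */
    static void feed_all(const char *data, size_t total_len)
    {
        const char *next_in = data;
        size_t input_left = total_len;

        while (input_left > 0) {
            unsigned int avail_in = (unsigned int)MIN(input_left, (size_t)UINT_MAX);
            process_chunk(next_in, avail_in);
            next_in += avail_in;
            input_left -= avail_in;
        }
    }

    int main(void)
    {
        static const char data[] = "example payload";
        feed_all(data, sizeof data - 1);
        return 0;
    }

The MIN(remaining, UINT_MAX) clamp in the sketch is the same one applied to every avail_in and avail_out assignment in the hunks above.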
diff --git a/Modules/mathmodule.c b/Modules/mathmodule.c
index 2c4cc73..142eca4 100644
--- a/Modules/mathmodule.c
+++ b/Modules/mathmodule.c
@@ -1330,14 +1330,13 @@ factorial_odd_part(unsigned long n)
         Py_DECREF(outer);
         outer = tmp;
     }
-
-    goto done;
+    Py_DECREF(inner);
+    return outer;
 
   error:
     Py_DECREF(outer);
-  done:
     Py_DECREF(inner);
-    return outer;
+    return NULL;
 }
 
 /* Lookup table for small factorial values */
diff --git a/Objects/longobject.c b/Objects/longobject.c
index b9a0d85..3a675c4 100644
--- a/Objects/longobject.c
+++ b/Objects/longobject.c
@@ -926,6 +926,13 @@ _PyLong_AsByteArray(PyLongObject* v,
 PyObject *
 PyLong_FromVoidPtr(void *p)
 {
+#if SIZEOF_VOID_P <= SIZEOF_LONG
+    /* special-case null pointer */
+    if (!p)
+        return PyLong_FromLong(0);
+    return PyLong_FromUnsignedLong((unsigned long)(Py_uintptr_t)p);
+#else
+
 #ifndef HAVE_LONG_LONG
 #   error "PyLong_FromVoidPtr: sizeof(void*) > sizeof(long), but no long long"
 #endif
@@ -936,6 +943,7 @@ PyLong_FromVoidPtr(void *p)
     if (!p)
         return PyLong_FromLong(0);
     return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG)(Py_uintptr_t)p);
+#endif /* SIZEOF_VOID_P <= SIZEOF_LONG */
 
 }
diff --git a/Objects/stringlib/formatter.h b/Objects/stringlib/formatter.h
index 4fdc62d..139b56c 100644
--- a/Objects/stringlib/formatter.h
+++ b/Objects/stringlib/formatter.h
@@ -73,7 +73,7 @@ static int
 get_integer(STRINGLIB_CHAR **ptr, STRINGLIB_CHAR *end,
             Py_ssize_t *result)
 {
-    Py_ssize_t accumulator, digitval, oldaccumulator;
+    Py_ssize_t accumulator, digitval;
     int numdigits;
     accumulator = numdigits = 0;
     for (;;(*ptr)++, numdigits++) {
@@ -83,19 +83,17 @@ get_integer(STRINGLIB_CHAR **ptr, STRINGLIB_CHAR *end,
         if (digitval < 0)
             break;
         /*
-           This trick was copied from old Unicode format code.  It's cute,
-           but would really suck on an old machine with a slow divide
-           implementation.  Fortunately, in the normal case we do not
-           expect too many digits.
+           Detect possible overflow before it happens:
+
+             accumulator * 10 + digitval > PY_SSIZE_T_MAX if and only if
+             accumulator > (PY_SSIZE_T_MAX - digitval) / 10.
         */
-        oldaccumulator = accumulator;
-        accumulator *= 10;
-        if ((accumulator+10)/10 != oldaccumulator+1) {
+        if (accumulator > (PY_SSIZE_T_MAX - digitval) / 10) {
             PyErr_Format(PyExc_ValueError,
                          "Too many decimal digits in format string");
             return -1;
         }
-        accumulator += digitval;
+        accumulator = accumulator * 10 + digitval;
     }
     *result = accumulator;
     return numdigits;
 }
diff --git a/Objects/stringlib/string_format.h b/Objects/stringlib/string_format.h
index 6c7adcb..c46bdc2 100644
--- a/Objects/stringlib/string_format.h
+++ b/Objects/stringlib/string_format.h
@@ -197,7 +197,6 @@ get_integer(const SubString *str)
 {
     Py_ssize_t accumulator = 0;
     Py_ssize_t digitval;
-    Py_ssize_t oldaccumulator;
     STRINGLIB_CHAR *p;
 
     /* empty string is an error */
@@ -209,19 +208,17 @@ get_integer(const SubString *str)
         if (digitval < 0)
             return -1;
         /*
-           This trick was copied from old Unicode format code.  It's cute,
-           but would really suck on an old machine with a slow divide
-           implementation.  Fortunately, in the normal case we do not
-           expect too many digits.
+           Detect possible overflow before it happens:
+
+             accumulator * 10 + digitval > PY_SSIZE_T_MAX if and only if
+             accumulator > (PY_SSIZE_T_MAX - digitval) / 10.
         */
-        oldaccumulator = accumulator;
-        accumulator *= 10;
-        if ((accumulator+10)/10 != oldaccumulator+1) {
+        if (accumulator > (PY_SSIZE_T_MAX - digitval) / 10) {
            PyErr_Format(PyExc_ValueError,
                         "Too many decimal digits in format string");
            return -1;
         }
-        accumulator += digitval;
+        accumulator = accumulator * 10 + digitval;
     }
     return accumulator;
 }
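Not part of the patch, just an illustrative aside: both get_integer() hunks above replace the old multiply-then-divide-back trick with a bound check performed before the multiplication, relying on the identity that accumulator * 10 + digitval exceeds the maximum exactly when accumulator > (maximum - digitval) / 10. A small self-contained sketch of the same check, with long and LONG_MAX standing in for Py_ssize_t and PY_SSIZE_T_MAX, could read:

    #include <limits.h>
    #include <stdio.h>

    /* Standalone version of the pre-multiplication overflow check used in the
       get_integer() hunks above; long/LONG_MAX replace Py_ssize_t/PY_SSIZE_T_MAX,
       and -1 is returned instead of raising ValueError. */
    static int append_digit(long *accumulator, int digitval)
    {
        if (*accumulator > (LONG_MAX - digitval) / 10)
            return -1;              /* accumulator * 10 + digitval would overflow */
        *accumulator = *accumulator * 10 + digitval;
        return 0;
    }

    int main(void)
    {
        const char *s = "99999999999999999999";   /* more digits than long can hold */
        long acc = 0;

        for (; *s >= '0' && *s <= '9'; s++) {
            if (append_digit(&acc, *s - '0') < 0) {
                puts("Too many decimal digits");
                return 1;
            }
        }
        printf("parsed %ld\n", acc);
        return 0;
    }

The unicodeobject.c hunks that follow apply the same pre-check to the width and prec accumulators in PyUnicode_Format().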
diff --git a/Objects/unicodeobject.c b/Objects/unicodeobject.c
index 1dd3a85..3ef9c9b 100644
--- a/Objects/unicodeobject.c
+++ b/Objects/unicodeobject.c
@@ -9648,7 +9648,7 @@ PyObject *PyUnicode_Format(PyObject *format,
                 c = *fmt++;
                 if (c < '0' || c > '9')
                     break;
-                if ((width*10) / 10 != width) {
+                if (width > (PY_SSIZE_T_MAX - ((int)c - '0')) / 10) {
                     PyErr_SetString(PyExc_ValueError,
                                     "width too big");
                     goto onError;
@@ -9683,7 +9683,7 @@ PyObject *PyUnicode_Format(PyObject *format,
                 c = *fmt++;
                 if (c < '0' || c > '9')
                     break;
-                if ((prec*10) / 10 != prec) {
+                if (prec > (INT_MAX - ((int)c - '0')) / 10) {
                     PyErr_SetString(PyExc_ValueError,
                                     "prec too big");
                     goto onError;
diff --git a/Python/codecs.c b/Python/codecs.c
index c7f4a9c..90f1cf6 100644
--- a/Python/codecs.c
+++ b/Python/codecs.c
@@ -821,9 +821,10 @@ PyCodec_SurrogatePassErrors(PyObject *exc)
         /* Try decoding a single surrogate character. If
            there are more, let the codec call us again. */
         p += start;
-        if ((p[0] & 0xf0) == 0xe0 ||
-            (p[1] & 0xc0) == 0x80 ||
-            (p[2] & 0xc0) == 0x80) {
+        if (strlen(p) > 2 &&
+            ((p[0] & 0xf0) == 0xe0 ||
+             (p[1] & 0xc0) == 0x80 ||
+             (p[2] & 0xc0) == 0x80)) {
             /* it's a three-byte code */
             ch = ((p[0] & 0x0f) << 12) + ((p[1] & 0x3f) << 6) + (p[2] & 0x3f);
             if (ch < 0xd800 || ch > 0xdfff)
diff --git a/Python/import.c b/Python/import.c
index beb0eec..f655e51 100644
--- a/Python/import.c
+++ b/Python/import.c
@@ -1172,15 +1172,21 @@ write_compiled_module(PyCodeObject *co, char *cpathname, struct stat *srcstat)
     FILE *fp;
     char *dirpath;
     time_t mtime = srcstat->st_mtime;
+    int saved;
 #ifdef MS_WINDOWS   /* since Windows uses different permissions */
     mode_t mode = srcstat->st_mode & ~S_IEXEC;
+    /* Issue #6074: We ensure user write access, so we can delete it later
+     * when the source file changes. (On POSIX, this only requires write
+     * access to the directory, on Windows, we need write access to the file
+     * as well)
+     */
+    mode |= _S_IWRITE;
 #else
     mode_t mode = srcstat->st_mode & ~S_IXUSR & ~S_IXGRP & ~S_IXOTH;
     mode_t dirmode = (srcstat->st_mode |
                       S_IXUSR | S_IXGRP | S_IXOTH |
                       S_IWUSR | S_IWGRP | S_IWOTH);
 #endif
-    int saved;
 
     /* Ensure that the __pycache__ directory exists. */
     dirpath = rightmost_sep(cpathname);
diff --git a/configure b/configure
--- a/configure
+++ b/configure
@@ -645,8 +645,8 @@ MKDIR_P
 INSTALL_DATA
 INSTALL_SCRIPT
 INSTALL_PROGRAM
-HAS_PYTHON
-DISABLE_ASDLGEN
+PYTHON
+ASDLGEN
 HAS_HG
 HGBRANCH
 HGTAG
@@ -5281,16 +5281,17 @@ else
 fi
 
-DISABLE_ASDLGEN=""
-# Extract the first word of "python", so it can be a program name with args.
-set dummy python; ac_word=$2
+for ac_prog in python$PACKAGE_VERSION python3 python
+do
+  # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
 $as_echo_n "checking for $ac_word... " >&6; }
-if ${ac_cv_prog_HAS_PYTHON+:} false; then :
+if ${ac_cv_prog_PYTHON+:} false; then :
   $as_echo_n "(cached) " >&6
 else
-  if test -n "$HAS_PYTHON"; then
-  ac_cv_prog_HAS_PYTHON="$HAS_PYTHON" # Let the user override the test.
+  if test -n "$PYTHON"; then
+  ac_cv_prog_PYTHON="$PYTHON" # Let the user override the test.
 else
 as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
 for as_dir in $PATH
@@ -5299,7 +5300,7 @@ do
   test -z "$as_dir" && as_dir=.
     for ac_exec_ext in '' $ac_executable_extensions; do
   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-    ac_cv_prog_HAS_PYTHON="found"
+    ac_cv_prog_PYTHON="$ac_prog"
     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
     break 2
   fi
@@ -5307,22 +5308,26 @@ done
 done
 IFS=$as_save_IFS
 
-  test -z "$ac_cv_prog_HAS_PYTHON" && ac_cv_prog_HAS_PYTHON="not-found"
 fi
 fi
-HAS_PYTHON=$ac_cv_prog_HAS_PYTHON
-if test -n "$HAS_PYTHON"; then
-  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $HAS_PYTHON" >&5
-$as_echo "$HAS_PYTHON" >&6; }
+PYTHON=$ac_cv_prog_PYTHON
+if test -n "$PYTHON"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTHON" >&5
+$as_echo "$PYTHON" >&6; }
 else
   { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
 $as_echo "no" >&6; }
 fi
 
-if test $HAS_HG != found -o $HAS_PYTHON != found
-then
-  DISABLE_ASDLGEN="@echo hg: $HAS_HG, python: $HAS_PYTHON! cannot run \$(srcdir)/Parser/asdl_c.py #"
+  test -n "$PYTHON" && break
+done
+test -n "$PYTHON" || PYTHON="not-found"
+
+if $PYTHON = not-found; then
+  ASDLGEN="@echo python: $PYTHON! cannot run \$(srcdir)/Parser/asdl_c.py #"
+else
+  ASDLGEN="$PYTHON"
 fi
diff --git a/configure.ac b/configure.ac
index f2951a2..3e05304 100644
--- a/configure.ac
+++ b/configure.ac
@@ -867,12 +867,12 @@ else
     HGBRANCH=""
 fi
 
-AC_SUBST(DISABLE_ASDLGEN)
-DISABLE_ASDLGEN=""
-AC_CHECK_PROG(HAS_PYTHON, python, found, not-found)
-if test $HAS_HG != found -o $HAS_PYTHON != found
-then
-    DISABLE_ASDLGEN="@echo hg: $HAS_HG, python: $HAS_PYTHON! cannot run \$(srcdir)/Parser/asdl_c.py #"
+AC_SUBST(ASDLGEN)
+AC_CHECK_PROGS(PYTHON, python$PACKAGE_VERSION python3 python, not-found)
+if $PYTHON = not-found; then
+    ASDLGEN="@echo python: $PYTHON! cannot run \$(srcdir)/Parser/asdl_c.py #"
+else
+    ASDLGEN="$PYTHON"
 fi