path: root/Lib/test/dtracedata/call_stack.py
author     Gregory P. Smith <greg@krypto.org>    2022-12-11 00:17:39 (GMT)
committer  GitHub <noreply@github.com>           2022-12-11 00:17:39 (GMT)
commit     2e279e85fece187b6058718ac7e82d1692461e26 (patch)
tree       c0c187ef473fde7f9a9ba0f5ac8f92ade79d02fc /Lib/test/dtracedata/call_stack.py
parent     1bb68ba6d9de6bb7f00aee11d135123163f15887 (diff)
gh-88500: Reduce memory use of `urllib.parse.unquote` (#96763)
`urllib.parse.unquote_to_bytes` and `urllib.parse.unquote` could both generate `O(len(string))` intermediate `bytes` or `str` objects while computing the unquoted result, depending on the input. As Python objects are relatively large, this could consume a lot of RAM.

This switches the implementation to use an expanding `bytearray` and a generator internally instead of the precomputed `split()`-style operations.

Microbenchmarks with antagonistic inputs such as `mess = "\u0141%%%20a%fe"*1000` show this is 10-20% slower for `unquote` and `unquote_to_bytes`, and no different for typical inputs that are short or contain little Unicode or % escaping. The functions are already quite fast, so this is not a significant cost, and the slowdown scales linearly with input size, as expected.

Memory usage was observed manually using `/usr/bin/time -v` on `python -m timeit` runs over larger inputs; unit-testing memory consumption is difficult and does not seem worthwhile. Observed memory usage is ~1/2 for `unquote()` and <1/3 for `unquote_to_bytes()` using `python -m timeit -s 'from urllib.parse import unquote, unquote_to_bytes; v="\u0141%01\u0161%20"*500_000' 'unquote_to_bytes(v)'` as the test.
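For illustration, here is a minimal sketch of the accumulate-into-one-`bytearray` idea the message describes. This is not the code from the commit; the names `_HEX_TO_BYTE` and `_unquote_sketch` are hypothetical:

```python
# Hypothetical sketch, not the actual CPython implementation: decode %XX
# escapes by appending into a single expanding bytearray rather than
# building O(len(string)) intermediate bytes/str objects.

_HEX_TO_BYTE = {
    (a + b).encode(): bytes.fromhex(a + b)
    for a in "0123456789ABCDEFabcdef"
    for b in "0123456789ABCDEFabcdef"
}

def _unquote_sketch(data: bytes) -> bytes:
    """Decode percent-escapes in *data* using one growing bytearray."""
    result = bytearray()
    i = 0
    while i < len(data):
        if data[i:i + 1] == b"%":
            decoded = _HEX_TO_BYTE.get(data[i + 1:i + 3])
            if decoded is not None:
                result += decoded  # append the single decoded byte
                i += 3
                continue
        result.append(data[i])  # ordinary byte, or '%' with no valid escape
        i += 1
    return bytes(result)

print(_unquote_sketch(b"abc%20def%fe"))  # b'abc def\xfe'
```

Because the `bytearray` grows amortized in place, peak memory stays close to the size of the final result instead of the sum of all intermediate chunks.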
Diffstat (limited to 'Lib/test/dtracedata/call_stack.py')
0 files changed, 0 insertions, 0 deletions