path: root/doc/lz4_manual.html
author: Yann Collet <Cyan4973@users.noreply.github.com> 2019-04-23 17:18:40 (GMT)
committer: GitHub <noreply@github.com> 2019-04-23 17:18:40 (GMT)
commit: 398e36c756a3067de8e2b35dd380baef040dfe0d (patch)
tree: fe7d22f46d7345bf1316a91c2eedad4765f997f1 /doc/lz4_manual.html
parent: 131896ab9d4fc9b8c606616327ed223d5d86472b (diff)
parent: f665291e6cb651cb084bf9450a071ae0fd494782 (diff)
download: lz4-1.9.1.zip, lz4-1.9.1.tar.gz, lz4-1.9.1.tar.bz2
Merge pull request #692 from lz4/dev
v1.9.1
Diffstat (limited to 'doc/lz4_manual.html')
-rw-r--r--  doc/lz4_manual.html  33
1 file changed, 15 insertions(+), 18 deletions(-)
diff --git a/doc/lz4_manual.html b/doc/lz4_manual.html
index 356a60d..3a9e0db 100644
--- a/doc/lz4_manual.html
+++ b/doc/lz4_manual.html
@@ -1,10 +1,10 @@
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
-<title>1.9.0 Manual</title>
+<title>1.9.1 Manual</title>
</head>
<body>
-<h1>1.9.0 Manual</h1>
+<h1>1.9.1 Manual</h1>
<hr>
<a name="Contents"></a><h2>Contents</h2>
<ol>
@@ -454,21 +454,18 @@ union LZ4_streamDecode_u {
</p></pre><BR>
-<pre><b>LZ4_DEPRECATED("This function is deprecated and unsafe. Consider using LZ4_decompress_safe() instead") LZ4LIB_API
-int LZ4_decompress_fast (const char* src, char* dst, int originalSize);
-LZ4_DEPRECATED("This function is deprecated and unsafe. Consider using LZ4_decompress_safe_continue() instead") LZ4LIB_API
-int LZ4_decompress_fast_continue (LZ4_streamDecode_t* LZ4_streamDecode, const char* src, char* dst, int originalSize);
-LZ4_DEPRECATED("This function is deprecated and unsafe. Consider using LZ4_decompress_safe_usingDict() instead") LZ4LIB_API
-int LZ4_decompress_fast_usingDict (const char* src, char* dst, int originalSize, const char* dictStart, int dictSize);
-</b><p> These functions used to be a bit faster than LZ4_decompress_safe(),
- but situation has changed in recent versions.
- Now, `LZ4_decompress_safe()` is as fast and sometimes even faster than `LZ4_decompress_fast()`.
- Moreover, LZ4_decompress_safe() is protected vs malformed input, while `LZ4_decompress_fast()` is not, making it a security liability.
+<pre><b></b><p> These functions used to be faster than LZ4_decompress_safe(),
+  but the situation has changed: they are now slower than LZ4_decompress_safe().
+ This is because LZ4_decompress_fast() doesn't know the input size,
+ and therefore must progress more cautiously in the input buffer to not read beyond the end of block.
+ On top of that `LZ4_decompress_fast()` is not protected vs malformed or malicious inputs, making it a security liability.
As a consequence, LZ4_decompress_fast() is strongly discouraged, and deprecated.
- Last LZ4_decompress_fast() specificity is that it can decompress a block without knowing its compressed size.
- Note that even that functionality could be achieved in a more secure manner if need be,
- though it would require new prototypes, and adaptation of the implementation to this new use case.
+ The last remaining LZ4_decompress_fast() specificity is that
+ it can decompress a block without knowing its compressed size.
+ Such functionality could be achieved in a more secure manner,
+ by also providing the maximum size of input buffer,
+ but it would require new prototypes, and adaptation of the implementation to this new use case.
Parameters:
originalSize : is the uncompressed size to regenerate.
@@ -477,9 +474,9 @@ int LZ4_decompress_fast_usingDict (const char* src, char* dst, int originalSize,
The function expects to finish at block's end exactly.
If the source stream is detected malformed, the function stops decoding and returns a negative result.
note : LZ4_decompress_fast*() requires originalSize. Thanks to this information, it never writes past the output buffer.
- However, since it doesn't know its 'src' size, it may read an unknown amount of input, and overflow input buffer.
- Also, since match offsets are not validated, match reads from 'src' may underflow.
- These issues never happen if input data is correct.
+ However, since it doesn't know its 'src' size, it may read an unknown amount of input, past input buffer bounds.
+ Also, since match offsets are not validated, match reads from 'src' may underflow too.
+ These issues never happen if input (compressed) data is correct.
But they may happen if input data is invalid (error or intentional tampering).
As a consequence, use these functions in trusted environments with trusted data **only**.
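The migration the manual recommends can be sketched as a round-trip through `LZ4_compress_default()` and `LZ4_decompress_safe()`. This is a minimal illustration, not text from the manual; it assumes liblz4 is installed and the program is linked with `-llz4`, and the buffer sizes are illustrative.

```c
/* Sketch: safe decompression instead of the deprecated LZ4_decompress_fast().
 * Assumes liblz4 headers/library are available (compile with -llz4). */
#include <stdio.h>
#include <string.h>
#include <lz4.h>

int main(void)
{
    const char src[] = "LZ4 round-trip: safe decompression validates bounds.";
    const int srcSize = (int)sizeof(src);

    char compressed[128];  /* comfortably larger than this tiny input */
    const int cSize = LZ4_compress_default(src, compressed, srcSize,
                                           (int)sizeof(compressed));
    if (cSize <= 0) return 1;

    char decompressed[sizeof(src)];
    /* Unlike LZ4_decompress_fast(), the compressed size (cSize) is passed in,
     * so the decoder never reads past the end of the input buffer. */
    const int dSize = LZ4_decompress_safe(compressed, decompressed, cSize,
                                          (int)sizeof(decompressed));
    if (dSize < 0) {  /* negative result => malformed or truncated input */
        fprintf(stderr, "decompression failed\n");
        return 1;
    }

    if (dSize == srcSize && memcmp(src, decompressed, srcSize) == 0)
        printf("round-trip ok\n");
    return 0;
}
```

Note that `LZ4_decompress_safe()` returns the number of bytes written on success, and a negative value on malformed input, matching the error convention described in the manual text above.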