path: root/lib/lz4.c
Commit log (newest first); each entry: commit message (author, date, files changed, lines -/+)
* Fix Dict Size Test in `LZ4_compress_fast_continue()` (W. Felix Handte, 2018-12-05, 1 file, -4/+2)
      Dictionaries don't need to be > 4 bytes, they need to be >= 4 bytes. This test was overly
      conservative. Also removes the test in `LZ4_attach_dictionary()`.
* Don't Attach Very Small Dictionaries (W. Felix Handte, 2018-12-04, 1 file, -1/+3)
      Fixes a mismatch in behavior between loading a very small (<= 4 bytes) non-contiguous
      dictionary into the context (via `LZ4_loadDict()`) versus attaching it with
      `LZ4_attach_dictionary()`. Before this patch, the divergence could be reproduced by running
      ```
      make -C tests fuzzer MOREFLAGS="-m32"
      tests/fuzzer -v -s1239 -t3146
      ```
      Making sure these two paths behave identically is an easy way to test the correctness of the
      attach path, so it's desirable that this remain an unpolluted, high-signal test.
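For context, a minimal sketch of the two dictionary paths being compared here, using the public streaming API. `LZ4_attach_dictionary()` may require `LZ4_STATIC_LINKING_ONLY` depending on the lz4 version; the buffer contents and sizes are illustrative, not taken from the test itself.
```
#define LZ4_STATIC_LINKING_ONLY   /* LZ4_attach_dictionary() is an experimental API */
#include <lz4.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char dict[] = "abc";                 /* a very small (3-byte) dictionary */
    const char src[]  = "abcabcabcabc";
    char dst1[128], dst2[128];

    /* Path 1: load the dictionary directly into the working context */
    LZ4_stream_t* ctx1 = LZ4_createStream();
    LZ4_loadDict(ctx1, dict, (int)(sizeof(dict)-1));
    int s1 = LZ4_compress_fast_continue(ctx1, src, dst1, (int)(sizeof(src)-1), (int)sizeof(dst1), 1);

    /* Path 2: load the dictionary into a separate context, then attach it */
    LZ4_stream_t* dictCtx = LZ4_createStream();
    LZ4_loadDict(dictCtx, dict, (int)(sizeof(dict)-1));
    LZ4_stream_t* ctx2 = LZ4_createStream();
    LZ4_attach_dictionary(ctx2, dictCtx);
    int s2 = LZ4_compress_fast_continue(ctx2, src, dst2, (int)(sizeof(src)-1), (int)sizeof(dst2), 1);

    /* The point of these commits: both paths should behave identically for tiny dictionaries */
    printf("load: %d bytes, attach: %d bytes, identical: %s\n",
           s1, s2, (s1 == s2 && !memcmp(dst1, dst2, (size_t)s1)) ? "yes" : "no");

    LZ4_freeStream(ctx1); LZ4_freeStream(ctx2); LZ4_freeStream(dictCtx);
    return 0;
}
```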
* Enable amalgamation of lz4hc.c and lz4.c (Bing Xu, 2018-11-16, 1 file, -1/+14)
* Some followups and renamings (Oleg Khabinov, 2018-10-01, 1 file, -7/+8)
* Rename initCheck to dirtyContext and use it in LZ4_resetStream_fast() to check if full reset is needed. (Oleg Khabinov, 2018-09-28, 1 file, -8/+32)
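As a usage note, a small sketch of the pattern this flag supports: cheap reuse of an already-initialized stream between compressions, with the renamed dirty-context flag letting `LZ4_resetStream_fast()` fall back to a full reset only when needed. Depending on the lz4 version these functions may sit behind `LZ4_STATIC_LINKING_ONLY`; the helper below and its parameters are illustrative.
```
#define LZ4_STATIC_LINKING_ONLY
#include <lz4.h>

/* Compress many independent buffers while reusing one stream context.
 * LZ4_resetStream_fast() is cheap when the context is known clean; the
 * dirty-context flag makes it degrade to a full reset when required. */
static void compress_many(const char* const inputs[], const int sizes[], int count,
                          char* dst, int dstCapacity)
{
    LZ4_stream_t* ctx = LZ4_createStream();   /* fully initialized once */
    for (int i = 0; i < count; i++) {
        LZ4_resetStream_fast(ctx);            /* cheap reset between uses */
        (void)LZ4_compress_fast_continue(ctx, inputs[i], dst, sizes[i], dstCapacity, 1);
        /* ... consume dst ... */
    }
    LZ4_freeStream(ctx);
}
```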
* Merge pull request #578 from lz4/support128bit (Yann Collet, 2018-09-26, 1 file, -11/+14)
      Support for 128bit pointers like AS400
| * increase size of LZ4 contexts for 128-bit systems (Yann Collet, 2018-09-18, 1 file, -1/+2)
| * use byU32 mode for any pointer > 32-bit (Yann Collet, 2018-09-14, 1 file, -10/+12)
      including 128-bit, like IBM AS-400
* | tried to clean another bunch of cppcheck warnings (Yann Collet, 2018-09-19, 1 file, -4/+5)
      The "funny" thing with cppcheck is that no 2 versions give the same list of warnings.
      On Mac, I'm using v1.81, which had all warnings fixed. On Travis CI, it's v1.61, and it
      complains about a dozen more/different things. On Linux, it's v1.72, and it finds a
      completely different list of a half dozen warnings. Some of these seem to be
      bugs/limitations in cppcheck itself. The Travis CI version, v1.61, seems unable to
      understand %zu correctly, and seems to assume it means %u.
* | fixed minor cppcheck warnings in lib (Yann Collet, 2018-09-18, 1 file, -198/+200)
* clarify constant MFLIMIT (Yann Collet, 2018-09-11, 1 file, -4/+5)
      and separate it from MATCH_SAFEGUARD_DISTANCE. While both constants have the same value,
      they do not serve the same purpose, hence should not be confused.
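To illustrate the distinction, a sketch with the values as found in lz4.c around this time (shown for illustration, not as the authoritative definitions):
```
/* Sketch of the two constants this commit separates. They happen to share
 * the value 12, but they guard different things, hence the distinct names. */
#define MINMATCH        4
#define WILDCOPYLENGTH  8

/* Encoder-side parsing restriction: the last match must start at least
 * MFLIMIT bytes before the end of the block. */
#define MFLIMIT         12

/* Decoder-side safeguard: minimum distance from the end of the output buffer
 * below which a match can no longer be wild-copied safely. */
#define MATCH_SAFEGUARD_DISTANCE   ((2*WILDCOPYLENGTH) - MINMATCH)   /* == 12 */
```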
* fixed minor warning in fuzzer.c (Yann Collet, 2018-09-10, 1 file, -6/+8)
      added a few more comments and assert()
* restored nullifying output (Yann Collet, 2018-09-10, 1 file, -1/+5)
      to counter a possible (offset==0)
* removed temporary debug traces (Yann Collet, 2018-09-10, 1 file, -2/+0)
* fixed fuzzer test (Yann Collet, 2018-09-08, 1 file, -4/+6)
      and removed one blind copy, since there is no longer a guarantee that at least 4 bytes are
      still available in the output buffer
* first sketch for a byte-accurate partial decoder (Yann Collet, 2018-09-07, 1 file, -47/+79)
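The public entry point for this is `LZ4_decompress_safe_partial()`. A small usage sketch (buffer sizes illustrative), showing the byte-accurate behavior this work aims for, i.e. decoding stops once the requested number of bytes has been produced:
```
#include <lz4.h>
#include <stdio.h>

int main(void)
{
    const char src[] = "byte-accurate partial decoding: byte-accurate partial decoding";
    char compressed[256];
    char partial[16];

    int csize = LZ4_compress_default(src, compressed, (int)(sizeof(src)-1), (int)sizeof(compressed));
    if (csize <= 0) return 1;

    /* Ask for only the first 10 decoded bytes; dstCapacity is sizeof(partial). */
    int dsize = LZ4_decompress_safe_partial(compressed, partial, csize, 10, (int)sizeof(partial));
    if (dsize < 0) return 1;

    printf("decoded %d bytes: %.*s\n", dsize, dsize, partial);
    return 0;
}
```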
* updated API documentation (Yann Collet, 2018-09-07, 1 file, -2/+2)
* Also Fix Appveyor Cast Warning (W. Felix Handte, 2018-05-22, 1 file, -1/+1)
* Remove #define-rename of `LZ4_decompress_safe_forceExtDict` (W. Felix Handte, 2018-05-22, 1 file, -8/+8)
* Test Linking C-Compiled Library and C++-Compiled Tests (W. Felix Handte, 2018-05-22, 1 file, -0/+15)
* small extDict : fixed side-effect (Yann Collet, 2018-05-06, 1 file, -3/+6)
      don't fix dictionaries of size 0. Setting dictEnd == source triggers prefix mode, thus
      removing the possibility to use the CDict.
* fixed frametest error (Yann Collet, 2018-05-06, 1 file, -1/+12)
      The error can be reproduced using the following command:
      ./frametest -v -i100000000 -s1659 -t31096808
      It's actually a bug in the streaming LZ4 API, when starting a new stream and providing a
      first chunk of size < MINMATCH, in which case the chunk becomes a dictionary. No hash was
      generated and stored, but the chunk is accessible since the default position 0 points to
      dictStart, and position 0 is still within MAX_DISTANCE. The next attempt to read 32 bits
      from position 0 then fails. The issue would have been mitigated by starting from index
      64 KB, effectively eliminating position 0 as too far away. The proper fix is to reject
      such a "dictionary" as too small, which is what this patch does.
* fix comments / indentation (Cyan4973, 2018-05-03, 1 file, -10/+8)
      as requested by @terrelln
* introduce LZ4_decoderRingBufferSize() (Yann Collet, 2018-05-02, 1 file, -7/+26)
      fuzzer: fix and robustify ring buffer tests
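A sketch of the intended use: size a decoding ring buffer with the new function, then decode successive dependent blocks in place with the streaming decoder. The framing (`next_block()`) is a hypothetical callback, error handling is minimal, and depending on the lz4 version the function may sit behind `LZ4_STATIC_LINKING_ONLY`.
```
#define LZ4_STATIC_LINKING_ONLY   /* may be needed depending on lz4 version */
#include <lz4.h>
#include <stdlib.h>

/* Hypothetical: fetch the next compressed block, return its size or 0 at end of stream. */
extern int next_block(char* cmpBuf, int cmpCapacity);

int decode_stream(int maxBlockSize)
{
    int const ringSize = LZ4_decoderRingBufferSize(maxBlockSize);
    char* const ring = (char*)malloc((size_t)ringSize);
    char* const cmp  = (char*)malloc((size_t)LZ4_compressBound(maxBlockSize));
    LZ4_streamDecode_t* const dctx = LZ4_createStreamDecode();
    int offset = 0;

    LZ4_setStreamDecode(dctx, NULL, 0);
    for (;;) {
        int const cSize = next_block(cmp, LZ4_compressBound(maxBlockSize));
        if (cSize <= 0) break;
        /* wrap when the next block might not fit before the end of the ring */
        if (offset + maxBlockSize > ringSize) offset = 0;
        int const dSize = LZ4_decompress_safe_continue(dctx, cmp, ring + offset, cSize, maxBlockSize);
        if (dSize < 0) break;
        /* ... consume dSize bytes at ring + offset ... */
        offset += dSize;
    }
    LZ4_freeStreamDecode(dctx); free(ring); free(cmp);
    return 0;
}
```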
* simplify shortcut (Yann Collet, 2018-05-02, 1 file, -55/+22)
* Merge pull request #527 from svpv/fastDec (Yann Collet, 2018-04-30, 1 file, -25/+82)
      lz4.c: two-stage shortcut for LZ4_decompress_generic
| * lz4.c: two-stage shortcut for LZ4_decompress_generic (Alexey Tourbin, 2018-04-28, 1 file, -25/+82)
* | Merge pull request #515 from svpv/refactorDec (Yann Collet, 2018-04-29, 1 file, -49/+114)
      lz4.c: refactor the decoding routines
| * lz4.c: fixed the LZ4_decompress_fast_continue case (Alexey Tourbin, 2018-04-27, 1 file, -2/+22)
      The change is very similar to that of the LZ4_decompress_safe_continue case. The only
      reason I make this a separate change is to ensure that the fuzzer, after it's been
      enhanced, can detect the flaw in LZ4_decompress_fast_continue, and that the change indeed
      fixes the flaw.
| * lz4.c: fixed the LZ4_decompress_safe_continue case (Alexey Tourbin, 2018-04-26, 1 file, -17/+36)
      The previous change broke decoding with a ring buffer. That's because I didn't realize
      that the "double dictionary mode" was possible, i.e. that the decoding routine can look
      both at the first part of the dictionary passed as prefix and the second part passed via
      dictStart+dictSize.

      So this change introduces the LZ4_decompress_safe_doubleDict helper, which handles this
      "double dictionary" situation. (This is a bit of a misnomer, there is only one dictionary,
      but I can't think of a better name, and perhaps the designation is not all too bad.) The
      helper is used only once, in LZ4_decompress_safe_continue, it should be inlined with
      LZ4_FORCE_O2_GCC_PPC64LE attached to LZ4_decompress_safe_continue.

      (Also, in the helper functions, I change the dictStart parameter type to "const void*",
      to avoid a cast when calling helpers. In the helpers, the upcast to "BYTE*" is still
      required, for compatibility with C++.)

      So this fixes the case of LZ4_decompress_safe_continue, and I'm surprised by the fact that
      the fuzzer is now happy and does not detect a similar problem with
      LZ4_decompress_fast_continue. So before fixing LZ4_decompress_fast_continue, the next
      logical step is to enhance the fuzzer.
| * lz4.c: refactor the decoding routines (Alexey Tourbin, 2018-04-25, 1 file, -53/+79)
      I noticed that LZ4_decompress_generic is sometimes instantiated with an identical set of
      parameters, or (what's worse) with subtly different sets of parameters. For example,
      LZ4_decompress_fast_withPrefix64k is instantiated as follows:

      return LZ4_decompress_generic(source, dest, 0, originalSize, endOnOutputSize, full, 0,
                                    withPrefix64k, (BYTE*)dest - 64 KB, NULL, 64 KB);

      while the equivalent withPrefix64k call in LZ4_decompress_usingDict_generic passes 0 for
      the last argument instead of 64 KB. It turns out that there is no difference in this case:
      if you change 64 KB to 0 KB in LZ4_decompress_fast_withPrefix64k, you get the same binary
      code. Moreover, because it's been clarified that LZ4_decompress_fast doesn't check match
      offsets, it is now obvious that both of these fast/withPrefix64k instantiations are simply
      redundant. Exactly because LZ4_decompress_fast doesn't check offsets, it serves well with
      any prefixed dictionary.

      There's a difference, though, with LZ4_decompress_safe_withPrefix64k. It also passes 64 KB
      as the last argument, and if you change that to 0, as in LZ4_decompress_usingDict_generic,
      you get a completely different binary code. It seems that passing 0 enables offset
      checking:

      const int checkOffset = ((safeDecode) && (dictSize < (int)(64 KB)));

      However, the resulting code seems to run a bit faster. How come enabling extra checks can
      make the code run faster? Curiouser and curiouser! This needs extra study. Currently I take
      the view that the dictSize should be set to non-zero when nothing else will do, i.e. when
      passing the external dictionary via dictStart. Otherwise, lowPrefix betrays just enough
      information about the dictionary.

      * * *

      Anyway, with this change, I instantiate all the necessary cases as functions with
      distinctive names, which also take fewer arguments and are therefore less error-prone.
      I also make the functions non-inline. (The compiler won't inline the functions because they
      are used more than once. Hence I attach LZ4_FORCE_O2_GCC_PPC64LE to the instances while
      removing it from the callers.) The number of instances is now reduced from 18
      (safe+fast+partial+4*continue+4*prefix+4*dict+2*prefix64+forceExtDict) down to 7
      (safe+fast+partial+2*prefix+2*dict).

      The size of the code is not the only issue here. Separate helper functions are much more
      amenable to profile-guided optimization: it is enough to profile only a few basic
      functions, while the other less-often used functions, such as LZ4_decompress_*_continue,
      will benefit automatically.

      This is the list of LZ4_decompress* functions in liblz4.so, sorted by size. Exported
      functions are marked with a capital T.

      $ nm -S lib/liblz4.so | grep -wi T | grep LZ4_decompress | sort -k2
      0000000000016260 0000000000000005 T LZ4_decompress_fast_withPrefix64k
      0000000000016dc0 0000000000000025 T LZ4_decompress_fast_usingDict
      0000000000016d80 0000000000000040 T LZ4_decompress_safe_usingDict
      0000000000016d10 000000000000006b T LZ4_decompress_fast_continue
      0000000000016c70 000000000000009f T LZ4_decompress_safe_continue
      00000000000156c0 000000000000059c T LZ4_decompress_fast
      0000000000014a90 00000000000005fa T LZ4_decompress_safe
      0000000000015c60 00000000000005fa T LZ4_decompress_safe_withPrefix64k
      0000000000002280 00000000000005fa t LZ4_decompress_safe_withSmallPrefix
      0000000000015090 000000000000062f T LZ4_decompress_safe_partial
      0000000000002880 00000000000008ea t LZ4_decompress_fast_extDict
      0000000000016270 0000000000000993 t LZ4_decompress_safe_forceExtDict
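As an illustration of the instantiation pattern described above (names, parameters, and bodies are simplified placeholders, not the actual lz4.c code), the idea is one generic, always-inlined worker plus a few named, non-inline specializations, so constant arguments fold away and each instance can be profiled on its own:
```
/* Simplified sketch of "one generic inline worker, several named instances". */
typedef enum { noDict = 0, withPrefix64k, usingExtDict } dict_directive;

static inline int decompress_generic(const char* src, char* dst, int srcSize, int dstCapacity,
                                     dict_directive dict, const char* dictStart, int dictSize)
{
    /* ... single template-like implementation, specialized by the constant 'dict' ... */
    (void)src; (void)dst; (void)srcSize; (void)dstCapacity;
    (void)dict; (void)dictStart; (void)dictSize;
    return 0;
}

/* Non-inline named instances: fewer arguments, distinctive names, each one a
 * separate profile-guided-optimization target. */
int decompress_safe(const char* s, char* d, int ss, int dc)
{ return decompress_generic(s, d, ss, dc, noDict, 0, 0); }

int decompress_safe_withPrefix64k(const char* s, char* d, int ss, int dc)
{ return decompress_generic(s, d, ss, dc, withPrefix64k, d - 64*1024, 64*1024); }

int decompress_safe_usingExtDict(const char* s, char* d, int ss, int dc,
                                 const char* dictStart, int dictSize)
{ return decompress_generic(s, d, ss, dc, usingExtDict, dictStart, dictSize); }
```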
* | Merge pull request #519 from lz4/fdParser (Yann Collet, 2018-04-27, 1 file, -3/+3)
      Faster decoding speed
| * | ensure favorDecSpeed is properly initialized (Yann Collet, 2018-04-27, 1 file, -3/+3)
      also:
      - fix a potential malloc error
      - proper use of ALLOC macro inside lz4hc
      - update html API doc
* | | Merge _destSize Compress Variant into LZ4_compress_generic() (W. Felix Handte, 2018-04-26, 1 file, -190/+66)
* | Merge pull request #511 from lz4/decFast (Yann Collet, 2018-04-24, 1 file, -3/+5)
      Fixed performance issue with LZ4_decompress_fast()
| * | re-ordered parenthesis (Cyan4973, 2018-04-24, 1 file, -2/+3)
      to avoid mixing && and & as suggested by @terrelln
| * | disable shortcut for LZ4_decompress_fast() (Cyan4973, 2018-04-23, 1 file, -3/+4)
      improving speed
* | | Merge pull request #507 from lz4/clangPerf (Yann Collet, 2018-04-23, 1 file, -5/+11)
      fixed lz4_fast clang performance
| * | fixed incorrect comment (Cyan4973, 2018-04-21, 1 file, -3/+3)
| * | fixed clang performance in lz4_fast (Yann Collet, 2018-04-21, 1 file, -3/+9)
      The simple change from `matchIndex+MAX_DISTANCE < current` towards
      `current - matchIndex > MAX_DISTANCE` is enough to generate a 10% performance drop under
      clang. Quite massive. (I missed it as my eyes were concentrated on gcc performance at that
      time.) The second version is more robust, because it also survives a situation where
      `matchIndex > current` due to overflows. The first version requires matchIndex not to
      overflow, hence `assert()` conditions were added. The only case where this can happen is
      with dictCtx compression, when the dictionary context is not initialized before loading
      the dictionary. So it's enough to always initialize the context while loading the
      dictionary.
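The two comparison forms discussed above, as a standalone sketch (MAX_DISTANCE value as used by the block format; function and variable names are illustrative):
```
#include <stdint.h>
#include <assert.h>

#define MAX_DISTANCE 65535   /* maximum match offset in the LZ4 block format */

/* Form kept for speed: assumes matchIndex + MAX_DISTANCE cannot overflow.
 * The assert() documents the assumption that matchIndex never exceeds current,
 * which initializing the dictCtx before loading the dictionary guarantees. */
int too_far_fast(uint32_t matchIndex, uint32_t current)
{
    assert(matchIndex <= current);
    return matchIndex + MAX_DISTANCE < current;
}

/* More robust form: also behaves correctly if matchIndex > current after an
 * overflow, but was measured ~10% slower under clang per this commit. */
int too_far_robust(uint32_t matchIndex, uint32_t current)
{
    return current - matchIndex > MAX_DISTANCE;   /* wraps to a huge value if matchIndex > current */
}
```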
* | Fix compilation error and assert. (Nick Terrell, 2018-04-23, 1 file, -1/+1)
* | Fix input size validation edge cases (Nick Terrell, 2018-04-23, 1 file, -2/+6)
      The bug is a read up to 2 bytes past the end of the buffer. There are three cases for this
      bug, one for each test case added:
      * An empty input causes `token = *ip++` to read one byte too far.
      * A one byte input with `(token >> ML_BITS) == RUN_MASK` causes one extra byte to be read
        without validation. This could be combined with the first bug to cause 2 extra bytes to
        be read.
      * The case pointed out in issue #508, where `ip == iend` at the beginning of the loop after
        taking the shortcut.
      Benchmarks show no regressions on clang or gcc-7 on both my mac and devserver.
      Fixes #508.
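A sketch of the kind of bounds checks involved (simplified and illustrative, not the actual lz4.c patch): every read of a token or of an extended length byte is validated against the end of the input before dereferencing.
```
#include <stddef.h>

#define ML_BITS  4
#define RUN_MASK ((1U << ML_BITS) - 1)

/* Simplified: read a token and its optional extended literal length without
 * ever dereferencing past iend. Returns -1 on truncated input. */
int read_literal_length(const unsigned char** ipPtr, const unsigned char* iend, size_t* lengthPtr)
{
    const unsigned char* ip = *ipPtr;
    if (ip >= iend) return -1;                 /* empty input: no token to read */
    unsigned const token = *ip++;
    size_t length = token >> ML_BITS;
    if (length == RUN_MASK) {
        unsigned s;
        do {
            if (ip >= iend) return -1;         /* each extra length byte must exist */
            s = *ip++;
            length += s;
        } while (s == 255);
    }
    *ipPtr = ip; *lengthPtr = length;
    return 0;
}
```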
* Merge pull request #503 from lz4/l120 (Yann Collet, 2018-04-19, 1 file, -24/+70)
      minor length reduction of several large lines
| * modified indentation for consistency (Yann Collet, 2018-04-19, 1 file, -17/+33)
| * minor length reduction of several large lines (Yann Collet, 2018-04-18, 1 file, -23/+53)
* | Merge pull request #502 from lhacc1/dev (Yann Collet, 2018-04-19, 1 file, -0/+4)
      Wrap likely/unlikely macros with #ifndef
| * Wrap likely/unlikely macros with #ifndef (Dmitrii Rodionov, 2018-04-18, 1 file, -0/+4)
      It prevents a redefinition error when a project using lz4 has its own likely/unlikely
      macros.
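The guard pattern in question looks roughly like this (a sketch following lz4.c's conventions; the `expect()` helper name is the one used there):
```
/* Only define likely/unlikely if the including project hasn't already done so,
 * which avoids macro-redefinition errors in amalgamated or embedded builds. */
#if defined(__GNUC__) || defined(__INTEL_COMPILER)
#  define expect(expr,value)    (__builtin_expect((expr),(value)))
#else
#  define expect(expr,value)    (expr)
#endif

#ifndef likely
#define likely(expr)     expect((expr) != 0, 1)
#endif
#ifndef unlikely
#define unlikely(expr)   expect((expr) != 0, 0)
#endif
```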
* | fixed LZ4_compress_fast_extState_fastReset() in 32-bit mode (Yann Collet, 2018-04-17, 1 file, -2/+2)
* | fix dictDelta setting error (Yann Collet, 2018-04-17, 1 file, -1/+1)
      wrong test
* | fix matchIndex overflow (Yann Collet, 2018-04-17, 1 file, -12/+4)
      can happen with dictCtx