path: root/lib
Commit message | Author | Age | Files | Lines
* Fix AIX errors/warnings (Norm Green, 2019-04-17, 1 file, -0/+6)
|
* ensure consistent definition and usage of FREEMEM (Yann Collet, 2019-04-16, 2 files, -6/+6)
|   as suggested by @sloutsky in #671
* simplified output_directive (Yann Collet, 2019-04-15, 1 file, -15/+17)
|
* fix comma for pedantic (Yann Collet, 2019-04-15, 1 file, -1/+1)
|
* unified limitedOutput_directive (Yann Collet, 2019-04-15, 2 files, -35/+27)
|   between lz4.c and lz4hc.c. It was left in a strange state after the "amalgamation" patch. Now only 3 directives remain, with the same name across both implementations and a single place of definition. This might allow some light simplification due to the reduced number of possible states.
* decompress*_fast() functions do not generate deprecation warnings (Yann Collet, 2019-04-15, 1 file, -13/+14)
|   they are classified as deprecated in the API documentation (lz4.h) but do not yet trigger a warning, to give existing applications time to move away. Also, the _fast() variants are still ~5% faster than the _safe() ones after Dave's patch.
* fixed lz4frame with linked blocks (Yann Collet, 2019-04-15, 1 file, -11/+9)
|   when one block was not compressible, it would tag the context as `dirty`, resulting in compression automatically bailing out of all future blocks, making the rest of the frame uncompressible.
* Merge branch 'dev' of github.com:Cyan4973/lz4 into dev (Yann Collet, 2019-04-13, 3 files, -16/+16)
|\
| * fix minor Visual warning (Yann Collet, 2019-04-12, 1 file, -2/+2)
| |   yet another overly cautious overflow-risk flag, raised while overflow is actually impossible, due to a previous test just one line above. Changed the cast position, just to please the thing.
| * fixed minor Visual warnings (Yann Collet, 2019-04-12, 2 files, -14/+14)
| |   since Visual 2017, it worries about potential overflows which are actually impossible. Replaced (c * a) by (c ? a : 0). The compiler will likely replace the * by a cmov. Probably harmless for performance.
* | fixed incorrect assertion condition (Yann Collet, 2019-04-13, 1 file, -1/+1)
|/    output can use the full length of the output buffer
* updated doc to underline difference between block and frame (Yann Collet, 2019-04-12, 3 files, -16/+24)
|   as this is a very frequent source of confusion for new users.
* improved documentation for LZ4 dictionary compression (Yann Collet, 2019-04-11, 2 files, -4/+27)
|
* introduce LZ4_DISTANCE_MAX build macro (Yann Collet, 2019-04-11, 3 files, -23/+35)
|   makes it possible to generate LZ4-compressed blocks with a controlled maximum offset (necessarily <= 65535). This could be useful for compatibility with decoders using a very limited memory budget (<64 KB). Answers #154.
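The invariant behind this macro can be illustrated with a small stand-alone sketch. All names below (`MY_DISTANCE_MAX`, `offset_fits_window`) are hypothetical stand-ins, not lz4 source: when every match offset in a block is capped at build time, a decoder only ever needs that much history.

```c
#include <stddef.h>

/* Stand-in for a build-time cap such as -DLZ4_DISTANCE_MAX=4096
 * (hypothetical sketch, not the lz4 implementation). */
#define MY_DISTANCE_MAX 4096

/* An LZ4 match offset is always >= 1; the block format caps it at 65535,
 * and a custom build may cap it lower still. A memory-constrained decoder
 * can reject anything beyond its configured window. */
static int offset_fits_window(unsigned offset)
{
    return (offset >= 1) && (offset <= MY_DISTANCE_MAX);
}
```

A decoder built this way could keep only a `MY_DISTANCE_MAX`-byte ring of history instead of the full 64 KB window.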
* Merge pull request #663 from lz4/headerSize (Yann Collet, 2019-04-10, 2 files, -57/+99)
|\   made LZ4F_getHeaderSize() public
| * made LZ4F_getHeaderSize() public (Yann Collet, 2019-04-10, 2 files, -57/+99)
| |
* | added versions in comments (Yann Collet, 2019-04-10, 2 files, -2/+7)
|/
* fixed loadDictHC (Yann Collet, 2019-04-09, 1 file, -10/+18)
|   by making a full initialization instead of a fast reset.
* re-enable LZ4_resetStreamHC() (Yann Collet, 2019-04-09, 1 file, -1/+1)
|   towards deprecation, but still available and fully supported
* modified LZ4_initStreamHC() to look like LZ4_initStream() (Yann Collet, 2019-04-09, 4 files, -40/+79)
|   it is now a pure initializer, for statically allocated states. It can initialize any memory area, and because of this, it requires a size.
* check some more initialization results (Yann Collet, 2019-04-08, 1 file, -1/+5)
|   ensure they are not NULL.
* removed LZ4_stream_t alignment test on Visual (Yann Collet, 2019-04-08, 1 file, -0/+8)
|   it fails in x86 32-bit mode: Visual reports an alignment of 8 bytes (even with alignof()) but actually only aligns LZ4_stream_t on 4 bytes. The alignment check then fails, resulting in missed initialization.
* LZ4_initStream() checks alignment restriction (Yann Collet, 2019-04-08, 2 files, -7/+17)
|   updated associated documentation
* added comment on initStream + _extState_ (Yann Collet, 2019-04-05, 1 file, -4/+8)
|   as suggested by @felixhandte
* created LZ4_initStream() (Yann Collet, 2019-04-05, 4 files, -53/+60)
|   - promoted LZ4_resetStream_fast() to stable
|   - moved LZ4_resetStream() into deprecated, but without triggering a compiler warning
|   - updated all sources to no longer rely on LZ4_resetStream()
|   note: the LZ4_initStream() proposal is slightly different: it is able to initialize any buffer, provided that it's large enough. To this end, it accepts a void*, and returns an LZ4_stream_t*.
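The initializer pattern described above (accept any buffer as void*, validate it, hand back a typed pointer) can be sketched stand-alone. `my_state_t` and `my_initState` are hypothetical stand-ins, not the lz4 API; the real LZ4_initStream also checks alignment, as a later commit notes.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in state type (hypothetical, much smaller than LZ4_stream_t). */
typedef struct { uint32_t hashTable[16]; uint32_t currentOffset; } my_state_t;

/* Pure initializer: takes any memory area plus its size, refuses it if
 * too small or misaligned, zeroes the state, returns NULL on failure. */
static my_state_t* my_initState(void* buffer, size_t size)
{
    if (buffer == NULL) return NULL;
    if (size < sizeof(my_state_t)) return NULL;          /* large enough? */
    if (((uintptr_t)buffer % sizeof(uint32_t)) != 0)     /* aligned enough? */
        return NULL;
    memset(buffer, 0, sizeof(my_state_t));               /* full init, not a fast reset */
    return (my_state_t*)buffer;
}
```

Requiring the size argument is what lets the function safely initialize statically allocated or caller-provided memory.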
* fixed strict ISO C90 (Yann Collet, 2019-04-05, 1 file, -1/+1)
|
* created LZ4_initStreamHC() (Yann Collet, 2019-04-05, 3 files, -143/+190)
|   - promoted LZ4_resetStreamHC_fast() to stable
|   - moved LZ4_resetStreamHC() to deprecated (but do not generate a warning yet)
|   - updated doc, to highlight the difference between init and reset
|   - switched all invocations of LZ4_resetStreamHC() to LZ4_initStreamHC()
|   - misc: ensure `make all` also builds /tests
* make `_fast*()` decoder generate a deprecation warning (Yann Collet, 2019-04-04, 2 files, -9/+25)
|   updated modification
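The staged deprecation visible across these commits (documented as deprecated first, compiler warning enabled later) is typically done with a macro gate. A hedged sketch, with stand-in names (`MY_DEPRECATED`, `my_old_decoder`) rather than the actual lz4.h macro, which additionally handles MSVC and older GCC versions:

```c
/* Hypothetical sketch of a staged deprecation macro (not lz4 source).
 * By default the annotation is a no-op: the API is documented as
 * deprecated but callers get no warning yet. Defining
 * MY_ENABLE_DEPRECATION_WARNINGS turns the warning on. */
#ifdef MY_ENABLE_DEPRECATION_WARNINGS
#  if defined(__GNUC__) || defined(__clang__)
#    define MY_DEPRECATED(msg) __attribute__((deprecated(msg)))
#  else
#    define MY_DEPRECATED(msg)
#  endif
#else
#  define MY_DEPRECATED(msg)  /* no warning yet, to give applications time */
#endif

MY_DEPRECATED("consider the _safe() variant instead")
static int my_old_decoder(void) { return 42; }  /* still fully supported */
```

The function stays fully functional either way; only the compile-time diagnostics change.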
* moved LZ4_decompress_fast*() into deprecated section (Yann Collet, 2019-04-04, 2 files, -24/+30)
|
* fixed an old bug in LZ4F_flush() (Yann Collet, 2019-04-03, 2 files, -66/+109)
|   which remained undetected so far, as it requires a fairly large number of conditions to be triggered, starting with enabling block checksums, which are disabled by default and whose usage is known to be extremely rare.
* fixed doc (Yann Collet, 2019-04-03, 1 file, -7/+10)
|   and bumped version number to v1.9.0
* moved _destSize() into "stable API" status (Yann Collet, 2019-04-03, 1 file, -41/+41)
|   as requested in #642
* minor comments and reformatting (Yann Collet, 2019-04-03, 1 file, -12/+17)
|
* fixed minor conversion warnings (Yann Collet, 2019-04-03, 1 file, -14/+10)
|
* created LZ4_FAST_DEC_LOOP build macro (Yann Collet, 2019-04-02, 2 files, -9/+31)
|
* fixed a few minor conversion warnings (Yann Collet, 2019-04-02, 1 file, -20/+22)
|
* Merge pull request #652 from vtorri/dev (Yann Collet, 2019-03-03, 1 file, -1/+1)
|\   Allow installation of lz4 for Windows 10 with MSYS2
| * Allow installation of lz4 for Windows 10 with MSYS2 (Vincent Torri, 2019-03-03, 1 file, -1/+1)
| |
* | Merge pull request #645 from djwatson/optimize_decompress_generic (Yann Collet, 2019-02-12, 1 file, -19/+245)
|\ \   Optimize decompress generic
| * | decompress_generic: Limit fastpath to x86 (Dave Watson, 2019-02-11, 1 file, -3/+9)
| | |   The new fastpath currently shows a regression on Qualcomm ARM chips. Restrict it to x86 for now.
| * | decompress_generic: Add fastpath for small offsets (Dave Watson, 2019-02-08, 1 file, -19/+59)
| | |   For small offsets of size 1, 2, 4 and 8, we can set a single uint64_t and then use it to do a memset() variation. In particular, this makes the somewhat-common RLE (offset 1) case about 2-4x faster than the previous implementation: we avoid not only the load blocked by store, but also avoid the loads entirely.
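The small-offset technique can be sketched stand-alone. This is a hypothetical illustration of the idea, not the lz4 source, and `fill_small_offset` is a stand-in name: because offsets 1, 2, 4 and 8 all divide 8, the repeating pattern fits exactly in one 8-byte word, so the overlapped match copy becomes repeated whole-word stores, like a wider memset().

```c
#include <stdint.h>
#include <string.h>

/* Expand `length` bytes of an overlapped match whose offset is 1, 2, 4
 * or 8 (any divisor of 8). The bytes just before dst hold the pattern. */
static void fill_small_offset(uint8_t* dst, size_t offset, size_t length)
{
    const uint8_t* const src = dst - offset;  /* start of repeating pattern */
    uint8_t pattern[8];
    size_t i;
    for (i = 0; i < 8; i++)                   /* replicate pattern to 8 bytes */
        pattern[i] = src[i % offset];
    while (length >= 8) {                     /* bulk: one 8-byte store each */
        memcpy(dst, pattern, 8);
        dst += 8;
        length -= 8;
    }
    memcpy(dst, pattern, length);             /* tail stays in phase, since
                                                 offset divides 8 */
}
```

For the RLE case (offset 1) this replaces a byte-at-a-time overlapped copy, where every load is blocked by the store just before it, with independent word stores.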
| * | decompress_generic: Unroll loops a bit more (Dave Watson, 2019-02-08, 1 file, -7/+7)
| | |   Generally we want our wildcopy loops to look like the memcpy loops from our libc, but without the final byte-copy checks. We can unroll a bit to make long copies even faster. The only catch is that this affects the value of FASTLOOP_SAFE_DISTANCE.
| * | decompress_generic: remove msan write (Dave Watson, 2019-02-08, 1 file, -5/+0)
| | |   This store is also causing load-blocked-by-store issues; remove it. The msan warning will have to be fixed another way if it is still an issue.
| * | decompress_generic: re-add fastpath (Dave Watson, 2019-02-08, 1 file, -4/+19)
| | |   This is the remainder of the original 'shortcut'. If true, we can avoid the loop in LZ4_wildCopy, and directly copy instead.
| * | decompress_generic: drop partial copy check in fast loop (Dave Watson, 2019-02-08, 1 file, -15/+0)
| | |   We've already checked that we are more than FASTLOOP_SAFE_DISTANCE away from the end, so this branch can never be true; we will have already jumped to the second decode loop.
| * | decompress_generic: Optimize literal copies (Dave Watson, 2019-02-08, 1 file, -12/+21)
| | |   Use LZ4_wildCopy16 for variable-length literals. For literal counts that fit in the flag byte, copy directly. We can also omit oend checks, for roughly the same reason as the previous shortcut: we check once that both match length and literal length fit within FASTLOOP_SAFE_DISTANCE, including wildcopy distance.
| * | decompress_generic: optimize match copy (Dave Watson, 2019-02-08, 1 file, -23/+28)
| | |   Add an LZ4_wildCopy16 that will wildcopy, potentially smashing up to 16 bytes, and use it for match copy. On x64, this avoids many blocked loads due to store forwarding, similar to issue #411.
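The shape of such a wildcopy can be sketched in a few lines. This is a hedged stand-in (`my_wildCopy16`), not the lz4 implementation: it copies in whole 16-byte chunks with no per-byte end check, and may therefore "smash" up to 15 bytes past `dstEnd`, so the caller must guarantee that much slack in both buffers.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy at least (dstEnd - dst) bytes in unconditional 16-byte chunks.
 * May write up to 15 bytes past dstEnd; caller guarantees the slack. */
static void my_wildCopy16(uint8_t* dst, const uint8_t* src,
                          const uint8_t* dstEnd)
{
    do {
        memcpy(dst, src, 16);   /* fixed-size copy: compiles to wide loads/stores */
        dst += 16;
        src += 16;
    } while (dst < dstEnd);
}
```

Dropping the tail byte-copy checks is exactly what distinguishes this from a libc memcpy loop; the safety margin is paid for once, via FASTLOOP_SAFE_DISTANCE, instead of per iteration.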
| * | decompress_generic: Add a loop fastpath (Dave Watson, 2019-02-08, 1 file, -5/+153)
| | |   Copy the main loop, and change checks such that op is always less than oend-SAFE_DISTANCE. Currently these checks are added for the literal copy length and for the match copy length. Otherwise the first loop is exactly the same as the second. Follow-on diffs will optimize the first copy loop based on this new requirement. I also tried instead making a separate inlineable function for the copy loop (similar to the existing partialDecode flags, etc.), but I think the changes might be significant enough to warrant doubling the code, instead pulling out common functionality into separate functions. This is the basic transformation that will allow several following optimisations.
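The two-loop transformation described above can be reduced to a minimal sketch. Names here are stand-ins (`two_loop_copy`, `SAFE_DIST` playing the role of FASTLOOP_SAFE_DISTANCE), and the real decoder does far more per iteration; the point is only the control structure: a fast loop that runs while the output cursor is guaranteed to stay a safe distance from the end, then a careful byte-exact loop for the tail.

```c
#include <stddef.h>
#include <string.h>

#define SAFE_DIST 64  /* stand-in for FASTLOOP_SAFE_DISTANCE */

static void two_loop_copy(unsigned char* op, const unsigned char* ip,
                          unsigned char* const oend)
{
    /* fast loop: big unconditional chunks, only while op stays more than
     * SAFE_DIST away from oend, so overshooting room is guaranteed */
    while ((size_t)(oend - op) > SAFE_DIST) {
        memcpy(op, ip, 16);
        op += 16;
        ip += 16;
    }
    /* careful loop: exact, bounds-respecting copies for the last bytes */
    while (op < oend)
        *op++ = *ip++;
}
```

Because the fast loop's precondition makes its bounds checks trivially true, the follow-on commits in this series can delete those checks from it entirely.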
| * | decompress_generic: Refactor variable length fields (Dave Watson, 2019-02-08, 1 file, -12/+35)
| | |   Make a helper function to read variable lengths for literals and match length.
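Such a helper can be sketched from the LZ4 block format, where a 4-bit token field of 15 means extra length bytes follow, each adding its value, until a byte different from 255 terminates the sequence. `read_variable_length` below is a hypothetical stand-in for the helper this commit introduces, returning just the extension beyond the token field:

```c
#include <stddef.h>
#include <stdint.h>

/* Read LZ4's variable-length extension: sum bytes while they equal 255,
 * stopping after the first byte below 255. Advances *ip past the field. */
static size_t read_variable_length(const uint8_t** ip)
{
    size_t length = 0;
    uint8_t b;
    do {
        b = **ip;
        (*ip)++;
        length += b;       /* each byte adds its value to the total */
    } while (b == 255);    /* 255 means "more length bytes follow" */
    return length;
}
```

Centralizing this decoding lets both the literal-length and match-length paths share one implementation instead of duplicating the loop.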
* | | Eliminate optimize attribute warning with clang on PPC64LE (Jeremy Maitin-Shepard, 2019-02-04, 1 file, -1/+1)
|/ /