| Commit message (Collapse) | Author | Age | Files | Lines |
ensure it's not NULL.
it fails in x86 32-bit mode:
Visual reports an alignment of 8 bytes (even with alignof()),
but actually only aligns LZ4_stream_t on 4 bytes.
The alignment check then fails, resulting in missed initialization.
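The check described above boils down to a pointer-alignment test. A minimal sketch of such a test (`is_aligned` is illustrative, not the internal LZ4 helper):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative alignment test: returns 1 if `ptr` is aligned to
 * `align` bytes (align must be a power of two). Under MSVC in x86
 * 32-bit mode, a type reported as 8-byte aligned may in practice
 * only receive 4-byte alignment, so a check like this can fail
 * even for a correctly declared LZ4_stream_t. */
static int is_aligned(const void *ptr, size_t align)
{
    return ((uintptr_t)ptr & (align - 1)) == 0;
}
```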
not liked by mingw
level down 5->4
size down 6G->5G
updated associated documentation
as suggested by @felixhandte
- promoted LZ4_resetStream_fast() to stable
- moved LZ4_resetStream() to deprecated, but without triggering a compiler warning
- update all sources to no longer rely on LZ4_resetStream()
note : LZ4_initStream() proposal is slightly different :
it's able to initialize any buffer, provided that it's large enough.
To this end, it accepts a void*, and returns an LZ4_stream_t*.
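The contract described above can be sketched in a self-contained analogue (the `stream_t` type and `stream_init` name are hypothetical stand-ins, not the real lz4.h declarations): accept any caller-provided buffer as a void*, validate its size and alignment, and hand back a typed pointer.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for LZ4_stream_t. */
typedef struct { uint32_t table[16]; uint32_t dictSize; } stream_t;

/* Analogue of the LZ4_initStream() contract described above:
 * take any buffer via void*, check that it is large enough (and
 * suitably aligned), zero it, and return it as a typed pointer,
 * or NULL when the buffer cannot be used. */
static stream_t *stream_init(void *buffer, size_t size)
{
    if (buffer == NULL) return NULL;
    if (size < sizeof(stream_t)) return NULL;                        /* too small */
    if (((uintptr_t)buffer % _Alignof(stream_t)) != 0) return NULL;  /* misaligned */
    memset(buffer, 0, sizeof(stream_t));
    return (stream_t *)buffer;
}
```

Returning the typed pointer (rather than a status code) lets callers chain the call while still being able to detect failure via NULL.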
Deprecated LZ4_decompress_fast*() functions
updated modification
moved _destSize() into "stable API" status
which remained undetected so far,
as it requires a fairly large number of conditions to be triggered,
starting with enabling Block checksum, which is disabled by default,
and whose usage is known to be extremely rare.
and bumped version number for v1.9.0
as requested in #642
LZ4_FAST_DEC_LOOP macros
Allow installation of lz4 for Windows 10 with MSYS2
Optimize decompress generic
New fastpath currently shows a regression on qualcomm
arm chips. Restrict it to x86 for now
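A sketch of how such an architecture restriction is typically expressed (the exact condition LZ4 uses may differ; the macros tested below are the usual compiler-predefined ones):

```c
/* Enable the new fast decode loop only on x86/x86-64, where it helps;
 * on the ARM chips mentioned above it regressed, so leave it off. */
#if defined(__x86_64__) || defined(_M_X64) || defined(__i386__) || defined(_M_IX86)
#  define LZ4_FAST_DEC_LOOP 1
#else
#  define LZ4_FAST_DEC_LOOP 0
#endif
```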
For small offsets of size 1, 2, 4 and 8, we can set a single uint64_t,
and then use it to do a memset() variation. In particular, this makes
the somewhat-common RLE (offset 1) about 2-4x faster than the previous
implementation - we avoid not only the load blocked by store, but also
avoid the loads entirely.
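A self-contained sketch of the idea (hypothetical `copy_small_offset`, not the actual LZ4 code): since offsets 1, 2, 4, and 8 all divide 8, the result of the overlapping copy is just an 8-byte pattern repeated, which can be stored in chunks without re-reading bytes that were just written.

```c
#include <stdint.h>
#include <string.h>

/* For offsets 1, 2, 4, or 8, replicate the bytes at (op - offset)
 * into an 8-byte pattern, then store that pattern memset()-style.
 * Because offset divides 8, each 8-byte chunk lines up with the
 * repeating pattern, so no load depends on a recent store. */
static void copy_small_offset(uint8_t *op, size_t offset, size_t length)
{
    uint8_t pat[8];
    size_t i, done;
    const uint8_t *src = op - offset;
    for (i = 0; i < 8; i++) pat[i] = src[i % offset];
    for (done = 0; done + 8 <= length; done += 8) memcpy(op + done, pat, 8);
    for (; done < length; done++) op[done] = pat[done % 8];
}
```

For offset 1 (RLE) this degenerates to filling with a single byte, which is the case the message reports as 2-4x faster.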
Generally we want our wildcopy loops to look like the
memcpy loops from our libc, but without the final byte copy checks.
We can unroll a bit to make long copies even faster.
The only catch is that this affects the value of FASTLOOP_SAFE_DISTANCE.
This store is also causing load-blocked-by-store issues, remove it.
The msan warning will have to be fixed another way if it is still an issue.
This is the remainder of the original 'shortcut'. If true, we can avoid
the loop in LZ4_wildCopy, and directly copy instead.
We've already checked that we are more than FASTLOOP_SAFE_DISTANCE
away from the end, so this branch can never be true, we will have
already jumped to the second decode loop.
Use LZ4_wildCopy16 for variable-length literals. For literal counts that
fit in the flag byte, copy directly. We can also omit oend checks for
roughly the same reason as the previous shortcut: We check once that both
match length and literal length fit in FASTLOOP_SAFE_DISTANCE, including
wildcopy distance.
Add an LZ4_wildCopy16, that will wildcopy, potentially smashing up
to 16 bytes, and use it for match copy. On x64, this avoids many
blocked loads due to store forwarding, similar to issue #411.
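A sketch of what such a wildcopy looks like (hypothetical `wild_copy16`; the real helper differs in detail): it rounds the copy up to whole 16-byte strides, so the caller must guarantee slack past `length` in both buffers.

```c
#include <stdint.h>
#include <string.h>

/* Copy at least `length` bytes (length > 0) from src to dst in
 * 16-byte strides. May read and write up to 15 bytes beyond
 * src + length and dst + length; the caller guarantees that much
 * slack. The fixed-size memcpy(dst, src, 16) compiles to one or
 * two wide register moves, with no per-byte bounds checks. */
static void wild_copy16(uint8_t *dst, const uint8_t *src, size_t length)
{
    uint8_t *end = dst + length;
    do {
        memcpy(dst, src, 16);
        dst += 16;
        src += 16;
    } while (dst < end);
}
```

Keeping src at least 16 bytes behind dst in a match copy avoids the store-to-load forwarding stalls the message refers to.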
Copy the main loop, and change checks such that op is always less
than oend-SAFE_DISTANCE. Currently these are added for the literal
copy length check, and for the match copy length check.
Otherwise the first loop is exactly the same as the second. Follow on
diffs will optimize the first copy loop based on this new requirement.
I also tried instead making a separate inlineable function for the copy
loop (similar to the existing partialDecode flags, etc.), but I think the
changes might be significant enough to warrant doubling the code,
instead pulling out common functionality into separate functions.
This is the basic transformation that will allow several following optimisations.
Make a helper function to read variable lengths for literals and
match length.
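The LZ4 format encodes lengths that overflow the token nibble as a run of extra bytes: each byte is added to the total, and a byte of 255 means another byte follows. A sketch of such a helper (name and signature illustrative, without the real decoder's bounds checks):

```c
#include <stddef.h>
#include <stdint.h>

/* Read an LZ4-style variable-length field: sum bytes until one
 * is below 255. Advances *pp past the consumed length bytes.
 * A production version must also check against the input end. */
static size_t read_variable_length(const uint8_t **pp)
{
    size_t length = 0;
    uint8_t b;
    do {
        b = *(*pp)++;
        length += b;
    } while (b == 255);
    return length;
}
```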
Build fixed by removing unavailable project
Eliminate optimize attribute warning with clang on PPC64LE
meson: Add -DLZ4_DLL_EXPORT=1 to build dynamic lib on Windows
Thanks @nacho for pointing it out.
Travis: Clean up .travis.yml
meson: Small improvements