| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
| |
as suggested by @sloutsky in #671
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
between lz4.c and lz4hc.c,
which was left in a strange state after the "amalgamation" patch.
Now only 3 directives remain,
with the same name across both implementations
and a single place of definition.
This might allow some light simplification, due to the reduced number of possible states.
|
|
|
|
|
|
|
|
|
| |
they are classified as deprecated in the API documentation (lz4.h)
but do not yet trigger a warning,
to give existing applications time to move away.
Also, the _fast() variants are still ~5% faster than the _safe() ones
after Dave's patch.
|
|
|
|
|
|
|
| |
when one block was not compressible,
it would tag the context as `dirty`,
resulting in compression automatically bailing out of all future blocks,
making the rest of the frame incompressible.
|
|\ |
|
| |
| |
| |
| |
| |
| | |
yet an overly cautious overflow-risk flag is raised,
while overflow is actually impossible, due to the test just one line above.
Changed the cast position, just to appease the compiler.
|
| |
| |
| |
| |
| |
| |
| |
| | |
since Visual 2017,
it worries about a potential overflow, which is actually impossible.
Replaced (c * a) by (c ? a : 0).
The compiler will likely replace the `*` with a cmov.
Probably harmless for performance.
|
|/
|
|
| |
output can use the full length of the output buffer
|
|
|
|
| |
as this is a very frequent source of confusion for new users.
|
| |
|
|
|
|
|
|
|
|
|
|
| |
makes it possible to generate LZ4-compressed blocks
with a controlled maximum offset (necessarily <= 65535).
This could be useful for compatibility with decoders
using a very limited memory budget (<64 KB).
Answers #154
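If I read this change correctly, the offset cap surfaces as a compile-time setting; the sketch below assumes the `LZ4_DISTANCE_MAX` macro and a plain library build target (both are assumptions from this log, not confirmed by it):

```shell
# Assumed build-time cap on match offsets (must be <= 65535).
# Blocks produced by such a build can be decoded with a history
# window of only LZ4_DISTANCE_MAX bytes.
make liblz4.a CFLAGS="-O2 -DLZ4_DISTANCE_MAX=4095"
```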
|
|\
| |
| | |
made LZ4F_getHeaderSize() public
|
| | |
|
|/ |
|
|
|
|
|
| |
by making a full initialization
instead of a fast reset.
|
|
|
|
| |
towards deprecation, but still available and fully supported
|
|
|
|
|
| |
it is now a pure initializer, for statically allocated states.
It can initialize any memory area, and because of this, requires the size.
|
|
|
|
| |
ensure it's not NULL.
|
|
|
|
|
|
|
| |
it fails on x86 in 32-bit mode:
Visual reports an alignment of 8 bytes (even with alignof()),
but actually only aligns LZ4_stream_t on 4 bytes.
The alignment check then fails, resulting in missed initialization.
|
|
|
|
| |
updated associated documentation
|
|
|
|
| |
as suggested by @felixhandte
|
|
|
|
|
|
|
|
|
|
| |
- promoted LZ4_resetStream_fast() to stable
- moved LZ4_resetStream() into the deprecated section, but without triggering a compiler warning
- updated all sources to no longer rely on LZ4_resetStream()
note: the LZ4_initStream() proposal is slightly different:
it's able to initialize any buffer, provided that it's large enough.
To this end, it accepts a void*, and returns an LZ4_stream_t*.
|
| |
|
|
|
|
|
|
|
|
| |
- promoted LZ4_resetStreamHC_fast() to stable
- moved LZ4_resetStreamHC() to deprecated (but does not generate a warning yet)
- updated doc, to highlight the difference between init and reset
- switched all invocations of LZ4_resetStreamHC() over to LZ4_initStreamHC()
- misc: ensure `make all` also builds /tests
|
|
|
|
| |
updated modification
|
| |
|
|
|
|
|
|
|
| |
which remained undetected so far,
as it requires a fairly large number of conditions to be triggered,
starting with enabling block checksums, which are disabled by default,
and whose usage is known to be extremely rare.
|
|
|
|
| |
and bumped version number to v1.9.0
|
|
|
|
| |
as requested in #642
|
| |
|
| |
|
| |
|
| |
|
|\
| |
| | |
Allow installation of lz4 for Windows 10 with MSYS2
|
| | |
|
|\ \
| | |
| | | |
Optimize decompress generic
|
| | |
| | |
| | |
| | |
| | | |
The new fastpath currently shows a regression on Qualcomm
ARM chips. Restrict it to x86 for now.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
For small offsets of size 1, 2, 4 and 8, we can set a single uint64_t,
and then use it to do a memset() variation. In particular, this makes
the somewhat-common RLE case (offset 1) about 2-4x faster than the previous
implementation - we avoid not only the load-blocked-by-store stall, but
the loads entirely.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Generally we want our wildcopy loops to look like the
memcpy loops from our libc, but without the final byte copy checks.
We can unroll a bit to make long copies even faster.
The only catch is that this affects the value of FASTLOOP_SAFE_DISTANCE.
|
| | |
| | |
| | |
| | |
| | | |
This store is also causing load-blocked-by-store issues, remove it.
The msan warning will have to be fixed another way if it is still an issue.
|
| | |
| | |
| | |
| | |
| | | |
This is the remainder of the original 'shortcut'. If true, we can avoid
the loop in LZ4_wildCopy, and directly copy instead.
|
| | |
| | |
| | |
| | |
| | |
| | | |
We've already checked that we are more than FASTLOOP_SAFE_DISTANCE
away from the end, so this branch can never be true, we will have
already jumped to the second decode loop.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Use LZ4_wildCopy16 for variable-length literals. For literal counts that
fit in the flag byte, copy directly. We can also omit oend checks for
roughly the same reason as the previous shortcut: We check once that both
match length and literal length fit in FASTLOOP_SAFE_DISTANCE, including
wildcopy distance.
|
| | |
| | |
| | |
| | |
| | |
| | | |
Add an LZ4_wildCopy16, that will wildcopy, potentially smashing up
to 16 bytes, and use it for match copy. On x64, this avoids many
blocked loads due to store forwarding, similar to issue #411.
|
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
Copy the main loop, and change the checks such that op is always less
than oend-SAFE_DISTANCE. Currently these are added for the literal
copy length check, and for the match copy length check.
Otherwise the first loop is exactly the same as the second. Follow-on
diffs will optimize the first copy loop based on this new requirement.
I also tried making a separate inlineable function for the copy
loop (similar to the existing partialDecode flags, etc.), but I think the
changes might be significant enough to warrant doubling the code, pulling
common functionality out into separate functions instead.
This is the basic transformation that will allow several following optimisations.
|
| | |
| | |
| | |
| | |
| | | |
Make a helper function to read variable lengths for literals and
match length.
|
|/ / |
|