which is not correct when using LZ4_HC with a dictionary and starting from a low address (< 0x10000).

it was a fairly complex scenario, involving source files > 64K, some extraordinary conditions related to a specific layout of ranges of zeroes, and only triggered at level 9.

Add --fast command to CLI

negative compression level
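
(For context, a minimal sketch of the library-side counterpart: LZ4_compress_fast() exposes an acceleration factor, onto which the CLI's --fast=# argument presumably maps; the function and buffer names below are illustrative.)
```
#include "lz4.h"

/* Illustrative sketch: fast mode on the library side.  LZ4_compress_fast()
 * trades compression ratio for speed through its acceleration argument;
 * the CLI's --fast=3 presumably corresponds to acceleration 3. */
int compress_with_fast_mode(const char* src, int srcSize,
                            char* dst, int dstCapacity)
{
    int const acceleration = 3;   /* higher = faster, lower ratio */
    return LZ4_compress_fast(src, dst, srcSize, dstCapacity, acceleration);
}
```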

Fixes #549.

Allow overriding uname when cross-compiling

When cross-compiling, for example from Darwin to Linux, it might be useful to override the uname output to force Linux and create Linux libraries instead of Darwin libraries.

lz4 1.8.2 works fine on Haiku and passes all tests.

Speed optimization for optimal parser

which measurably improves speed on levels 9+

also: reserved PA for levels 9+ (instead of 8+). In most cases, speed is lower, and the compression benefit is not worth it.
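
(For reference, a minimal sketch through the public HC entry point; the buffer handling is illustrative, and the level 9 threshold is taken from the message above.)
```
#include "lz4hc.h"

/* Minimal sketch: the same call at two levels.  Per the message above,
 * the pattern analyzer (PA) only engages at level 9 and higher. */
void compare_hc_levels(const char* src, int srcSize,
                       char* dst, int dstCapacity,
                       int* sizeLevel8, int* sizeLevel9)
{
    *sizeLevel8 = LZ4_compress_HC(src, dst, srcSize, dstCapacity, 8);  /* no PA */
    *sizeLevel9 = LZ4_compress_HC(src, dst, srcSize, dstCapacity, 9);  /* PA active */
}
```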

the trade-off is not good for the regular HC parser: compression is a little bit better, but the speed cost is too large in comparison.

Only enabled when searching forward. Note: it slightly improves compression ratio, but measurably decreases speed. Trade-off to analyse.

when combining both PA and CS optimizations

slower than expected: the Pattern Analyzer and Chain Swapper work slower when both are activated. Reasons unclear.

greatly improves speed compared to non-accelerated, especially for slower files. On my laptop, with -b12:
```
calgary.tar : 4.3 MB/s => 9.0 MB/s
enwik7 : 10.2 MB/s => 13.3 MB/s
silesia.tar : 4.0 MB/s => 8.7 MB/s
```
Note: this is the simplified version, without handling of dictionaries, external buffers, or the pattern analyzer. The current `dev` branch on these samples gives:
```
calgary.tar : 4.2 MB/s
enwik7 : 9.7 MB/s
silesia.tar : 3.5 MB/s
```
Interestingly, it's slower, presumably due to the handling of dictionaries.

simplified match finder: only searching forward and within the current buffer, for easier testing of optimizations.

Fix frametest error

don't fix dictionaries of size 0: setting dictEnd == source triggers prefix mode, thus removing the possibility to use a CDict.

The error can be reproduced using the following command:
./frametest -v -i100000000 -s1659 -t31096808
It's actually a bug in the streaming LZ4 API: when starting a new stream and providing a first chunk to complete with size < MINMATCH, the chunk becomes a dictionary. No hash was generated and stored, but the chunk is accessible, as default position 0 points to dictStart, and position 0 is still within MAX_DISTANCE. Then, the next attempt to read 32 bits from position 0 fails. The issue would have been mitigated by starting from index 64 KB, effectively eliminating position 0 as too far away. The proper fix is to eliminate such a "dictionary" as too small, which is what this patch does.
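
(Below is a hedged sketch of the shape of that scenario through the public streaming API; the chunk sizes and names are illustrative, and the real reproduction is the frametest command above.)
```
#include <string.h>
#include "lz4.h"

/* Illustrative sketch of the failing pattern: a first chunk smaller than
 * MINMATCH (4 bytes) becomes the stream's "dictionary", even though it was
 * too small for any hash to be generated and stored. */
void tiny_first_chunk(void)
{
    LZ4_stream_t stream;
    char chunk1[3];                         /* size < MINMATCH */
    char chunk2[1024];
    char dst[LZ4_COMPRESSBOUND(1024)];

    memset(chunk1, 'a', sizeof(chunk1));
    memset(chunk2, 'b', sizeof(chunk2));
    LZ4_resetStream(&stream);

    /* chunk1 becomes history: position 0 points to dictStart and stays
     * within MAX_DISTANCE, yet no hash entry covers it */
    LZ4_compress_fast_continue(&stream, chunk1, dst, (int)sizeof(chunk1),
                               (int)sizeof(dst), 1);
    /* before the fix, compressing the next chunk could attempt a 32-bit
     * read from position 0 of that too-small dictionary */
    LZ4_compress_fast_continue(&stream, chunk2, dst, (int)sizeof(chunk2),
                               (int)sizeof(dst), 1);
}
```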

* Uninstall didn't remove the pkg-config file correctly.
* Fix `mandir`.
* Allow overriding either upper- or lower-case location variables, but always use the lower-case variables.
* Add a test case that ensures overriding both upper- and lower-case variables is the same, and that the directory is empty after uninstall.

LZ4F: Only Reset the LZ4_stream_t when Init'ing a Streaming Block

Faster decoding speed

as requested by @terrelln

the initial intention was to update the lz4f ring buffer strategy, but lz4f doesn't use a ring buffer. Instead, it uses the destination buffer as much as possible, and merely copies just what's required to preserve history into its own buffer, at the end. Pretty efficient. This patch just clarifies a few comments and adds some assert(). It's built on top of #528. It also updates the documentation.
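
(A conceptual sketch of that strategy, with a hypothetical helper name rather than the actual lz4f code: decode directly into the caller's buffer, then keep only the last 64 KB window as history for the next block.)
```
#include <string.h>

#define HISTORY_SIZE (64 * 1024)   /* LZ4 match window: 64 KB */

/* Hypothetical helper, not the actual lz4f code: after decoding directly
 * into the caller's destination buffer, copy just the tail required as
 * history for the next block into the context's own tmp buffer. */
static void saveHistory(char* tmpBuffer, const char* dst, size_t produced)
{
    size_t const toSave = (produced < HISTORY_SIZE) ? produced : HISTORY_SIZE;
    memcpy(tmpBuffer, dst + produced - toSave, toSave);
}
```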

shaves one more kilobyte from silesia.tar

fuzzer: fix and robustify ring buffer tests

fix lz4hc -BD non-determinism

lowLimit -> lowestMatchIndex

to reduce confusion: dictLowLimit => dictStart

related to chain table update

restrictions for ring buffer

lz4.c: two-stage shortcut for LZ4_decompress_generic

lib/Makefile: show commands with V=1

`make V=1` will now show the commands executed to build the library. A similar technique is used in e.g. linux/Makefile. The bulk of this change was produced with the following vim command:
:g!/^\t@echo\>/s/^\t@/\t\$(Q)/