User:StefanoZacchiroli/Content deduplication
Some experiments on deduplicating contents at sub-file granularity.
Datasets
Linux kernel, Git repo
- origin: git.kernel.org, on 2018-01-06
- 1,653,941 content blobs, for a total of 19 GB (compressed)
- original size (uncompressed): 55.89 GB
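One way to reproduce the blob count and uncompressed size above is git cat-file's batch mode over a local clone; a minimal sketch (not necessarily how the figures were obtained, and the clone path linux.git is a placeholder):

 # Count blobs and sum their uncompressed sizes in a local clone.
 import subprocess

 out = subprocess.run(
     ["git", "-C", "linux.git", "cat-file", "--batch-all-objects",
      "--batch-check=%(objecttype) %(objectsize)"],
     capture_output=True, text=True, check=True,
 ).stdout
 blob_sizes = [int(size)
               for otype, size in (line.split() for line in out.splitlines())
               if otype == "blob"]
 print(f"{len(blob_sizes)} blobs, {sum(blob_sizes) / 1e9:.2f} GB uncompressed")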
Rabin fingerprints
- Approach: use Rabin fingerprints
- Implementation: swh-dedup-blocks.py
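In outline, the chunker slides a fixed-size window over each blob, maintains a rolling Rabin fingerprint of the window, and cuts a chunk boundary whenever the fingerprint matches a fixed pattern modulo the target average chunk size, clamping chunk lengths between the configured minimum and maximum. The following minimal Python sketch illustrates the idea with the test 1 parameters below; it is not the actual swh-dedup-blocks.py code, and the 64-bit modulus and the exact boundary test (fp % AVG_SIZE == AVG_SIZE - 1) are assumptions made for the example.

 PRIME = 3                # polynomial base (the "prime" parameter)
 MODULUS = 1 << 64        # keep fingerprints to 64 bits (assumption)
 WINDOW_SIZE = 48 * 1024  # sliding window size ("window_size")
 MIN_SIZE = 2 * 1024      # minimum chunk size
 AVG_SIZE = 8 * 1024      # target average chunk size
 MAX_SIZE = 64 * 1024     # maximum chunk size

 def chunks(data: bytes):
     """Yield (offset, length) pairs of content-defined chunks of data."""
     pow_out = pow(PRIME, WINDOW_SIZE - 1, MODULUS)  # weight of the outgoing byte
     fp = 0     # rolling fingerprint of the bytes currently in the window
     start = 0  # offset where the current chunk began
     for i, byte in enumerate(data):
         if i >= WINDOW_SIZE:  # slide: drop the byte leaving the window...
             fp = (fp - data[i - WINDOW_SIZE] * pow_out) % MODULUS
         fp = (fp * PRIME + byte) % MODULUS  # ...and mix in the new byte
         size = i - start + 1
         # cut on a fingerprint match, respecting min/max chunk sizes
         if (size >= MIN_SIZE and fp % AVG_SIZE == AVG_SIZE - 1) or size >= MAX_SIZE:
             yield start, size
             start = i + 1
     if start < len(data):  # trailing chunk, if any
         yield start, len(data) - start

Because boundaries depend only on the bytes in the window, inserting or deleting data near the start of a blob shifts at most a few chunk boundaries; unchanged regions keep producing the same chunks, which can then be stored only once.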
test 1
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 KB
- chunk size (min/avg/max): 2 KB / 8 KB / 64 KB
Results:
- average chunk size (effective): 9.37 KB
- dedup chunk size (uncompressed): 19.87 GB (35.55% of the original size)
test 2
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 KB
- chunk size (min/avg/max): 512 B / 2 KB / 8 KB
Results:
- average chunk size (effective): 5.07 KB
- dedup chunk size (uncompressed): 16.19 GB (28.96% of the original size)
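As a sanity check, the reported percentages are the deduplicated sizes relative to the 55.89 GB uncompressed original, up to rounding of the GB figures:

 ORIGINAL_GB = 55.89
 for test, dedup_gb in (("test 1", 19.87), ("test 2", 16.19)):
     print(f"{test}: {dedup_gb / ORIGINAL_GB:.2%}")
 # -> test 1: 35.55%, test 2: 28.97% (reported as 28.96%; rounding artifact)

Net effect: lowering the target average chunk size from 8 KB to 2 KB improves deduplication by about 6.6 points of the original size, at the cost of nearly twice as many chunks to track (effective averages of 9.37 KB vs 5.07 KB).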