User:StefanoZacchiroli/Content deduplication
Some experiments on deduplicating contents at sub-file granularity.
Datasets
Linux kernel, Git repo
- origin: git.kernel.org (as of 2018-01-06)
- 1,653,941 content blobs, for a total of 19 GB (compressed)
- original size (uncompressed): 55.89 GB
Rabin fingerprints
- Approach: content-defined chunking driven by Rabin fingerprints (sketched below)
- Implementation: swh-dedup-blocks.py
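
The following is a minimal, self-contained sketch of how such a Rabin-fingerprint chunker can work: a rolling fingerprint over a fixed-size window declares a cut point whenever it hits a boundary condition, subject to minimum/maximum chunk sizes. This is an illustration only; make_chunker, the modulus, and the boundary test fp % avg_size == 0 are assumptions, not the actual swh-dedup-blocks.py code.

  def make_chunker(prime, window_size, min_size, avg_size, max_size,
                   modulus=1 << 61):
      """Return a content-defined chunking function (a sketch)."""
      # Factor used to drop the byte leaving the rolling window.
      pow_out = pow(prime, window_size - 1, modulus)

      def chunk(data):
          """Yield content-defined chunks of `data` (a bytes object)."""
          fp = 0      # rolling Rabin fingerprint of the current window
          start = 0   # offset where the current chunk begins
          for i, byte in enumerate(data):
              if i - start >= window_size:
                  # Slide the window: remove the oldest byte's contribution.
                  fp = (fp - data[i - window_size] * pow_out) % modulus
              fp = (fp * prime + byte) % modulus
              size = i - start + 1
              # Cut when the fingerprint satisfies the boundary condition
              # (expected once every avg_size bytes on random input),
              # within the min/max size bounds.
              if size >= max_size or (size >= min_size and fp % avg_size == 0):
                  yield data[start:i + 1]
                  start, fp = i + 1, 0   # restart at the next chunk
          if start < len(data):
              yield data[start:]         # trailing chunk, may be < min_size

      return chunk

Because cut points depend only on local content, an insertion or deletion in a blob shifts at most the chunks around the edit, which is what makes sub-file deduplication across similar blobs effective.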
test 1
Dataset: linux.git
Rabin fingerprint parameters (see the instantiation below):
- prime: 3
- window_size: 48 KB
- chunk size (min/avg/max): 2 KB / 8 KB / 64 KB
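
In terms of the sketch above, this configuration would be instantiated as follows (hypothetical code; sizes in bytes):

  chunk = make_chunker(prime=3,
                       window_size=48 * 1024,   # 48 KB rolling window
                       min_size=2 * 1024,       # 2 KB
                       avg_size=8 * 1024,       # 8 KB target average
                       max_size=64 * 1024)      # 64 KB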
Results (see the accounting sketch below):
- average chunk size (effective): 9.37 KB
- dedup chunk size (uncompressed): 19.87 GB (35.55%)
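
Here and in the tests below, the percentage is relative to the original uncompressed size of 55.89 GB (19.87 / 55.89 ≈ 35.55%), and the effective average chunk size is presumably the total uncompressed size divided by the number of chunks produced. A plausible way to compute both figures, assuming chunks are deduplicated by their SHA1 (an assumption; the actual bookkeeping in swh-dedup-blocks.py may differ):

  import hashlib

  def dedup_stats(blobs, chunk):
      """Compute (effective average chunk size, dedup size, dedup ratio)
      over an iterable of bytes blobs, using chunking function `chunk`."""
      total_size = 0   # uncompressed size of all blobs
      n_chunks = 0     # chunks produced, duplicates included
      unique = {}      # chunk SHA1 -> chunk size
      for blob in blobs:
          total_size += len(blob)
          for c in chunk(blob):
              n_chunks += 1
              unique.setdefault(hashlib.sha1(c).hexdigest(), len(c))
      dedup_size = sum(unique.values())
      return (total_size / n_chunks,    # average chunk size (effective)
              dedup_size,               # dedup chunk size (uncompressed)
              dedup_size / total_size)  # fraction of the original size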
test 2
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 KB
- chunk size (min/avg/max): 512 B / 2 KB / 8 KB
Results:
- average chunk size (effective): 2.86 KB
- dedup chunk size (uncompressed): 9.09 GB (16.26%)
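
Compared to test 1, shrinking the chunk size bounds from 2 KB / 8 KB / 64 KB to 512 B / 2 KB / 8 KB improves deduplication from 35.55% to 16.26% of the original size, at the cost of over three times as many chunks to hash, store, and index (effective average 9.37 KB vs. 2.86 KB).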
test 3
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 KB
- chunk size (min/avg/max): 512 B / 1 KB / 8 KB
Results:
- average chunk size (effective):
- dedup chunk size (uncompressed):