User:StefanoZacchiroli/Content deduplication
Revision as of 21:43, 16 January 2018
Some experiments on deduplicating contents at sub-file granularity.
Datasets
Linux kernel, Git repo
- origin: git.kernel.org, on 2018-01-06
- 1,653,941 content blobs, for a total of 19 GB (compressed)
- original size (uncompressed): 55.89 GB
Rabin fingerprint chunking
- Approach: use Rabin fingerprints as in LBFS
- Implementation: swh-dedup-blocks.py
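The LBFS-style approach can be sketched as follows. This is an illustrative sketch, not the actual swh-dedup-blocks.py implementation: the hash modulus, the boundary mask, and the 48-byte window length are assumptions; the min/avg/max sizes mirror test 1 below.

```python
import hashlib

# Illustrative LBFS-style content-defined chunking with a polynomial
# rolling hash. Parameters below are assumptions, not the values used
# by swh-dedup-blocks.py, except where they mirror the tests.
PRIME = 3                 # rolling hash base ("prime" parameter)
WINDOW_SIZE = 48          # rolling window length, in bytes (assumed)
MODULUS = 1 << 30         # hash modulus (assumed)
MIN_SIZE = 2 * 1024       # 2 KB minimum chunk size (test 1)
AVG_SIZE = 8 * 1024       # 8 KB target average chunk size (test 1)
MAX_SIZE = 64 * 1024      # 64 KB maximum chunk size (test 1)
MASK = AVG_SIZE - 1       # cut where hash & MASK == MASK:
                          # ~1 boundary every AVG_SIZE bytes on average

# factor used to remove the contribution of the byte leaving the window
POW_OUT = pow(PRIME, WINDOW_SIZE - 1, MODULUS)

def chunks(data: bytes):
    """Yield content-defined chunks of data, in order."""
    start = 0             # offset where the current chunk begins
    h = 0                 # rolling hash over the last WINDOW_SIZE bytes
    for i in range(len(data)):
        if i - start >= WINDOW_SIZE:
            # slide the window: drop the outgoing byte's contribution
            h = (h - data[i - WINDOW_SIZE] * POW_OUT) % MODULUS
        h = (h * PRIME + data[i]) % MODULUS
        size = i - start + 1
        if (size >= MIN_SIZE and (h & MASK) == MASK) or size >= MAX_SIZE:
            yield data[start:i + 1]
            start, h = i + 1, 0   # restart the hash for the next chunk
    if start < len(data):
        yield data[start:]        # trailing chunk, possibly < MIN_SIZE

def dedup_size(blobs):
    """Total size of the unique chunks across all blobs (uncompressed)."""
    seen = {}
    for blob in blobs:
        for c in chunks(blob):
            seen[hashlib.sha1(c).hexdigest()] = len(c)
    return sum(seen.values())
```

Because boundaries depend only on window content, an insertion in one blob shifts at most the chunks around the edit; identical regions elsewhere still produce identical chunks and deduplicate.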
test 1
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 KB
- chunk size (min/avg/max): 2 KB / 8 KB / 64 KB
Results:
- average chunk size (effective): 9.37 KB
- dedup chunk size (uncompressed): 19.87 GB (35.55%)
test 2
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 KB
- chunk size (min/avg/max): 512 B / 2 KB / 8 KB
Results:
- average chunk size (effective): 2.86 KB
- dedup chunk size (uncompressed): 9.09 GB (16.26%)
test 3
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 KB
- chunk size (min/avg/max): 512 B / 1 KB / 8 KB
Results:
- average chunk size (effective): 1.72 KB
- dedup chunk size (uncompressed): 6.49 GB (11.60%)
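As a quick sanity check on tests 1–3, each reported percentage is the deduplicated chunk size divided by the original uncompressed size (55.89 GB); small discrepancies come from the sizes being rounded to two decimals. The trend is that smaller target chunk sizes buy a better dedup ratio at the cost of many more chunks.

```python
# Recompute the dedup percentages of tests 1-3 from the figures above.
# Sizes in GB, uncompressed; minor drift vs. the reported values is
# rounding of the two-decimal inputs.
ORIGINAL = 55.89  # linux.git, uncompressed

for test, dedup in [(1, 19.87), (2, 9.09), (3, 6.49)]:
    print(f"test {test}: {dedup / ORIGINAL:.2%}")
```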
test 4
TODO
contents: 4111582
chunks: 164052312
average chunk size: 1597.60
total content size: 308154402601
total chunk size: 262090652269 (85.05%)

real 22m20,334s
user 15m56,586s
sys 2m49,606s
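The raw byte counts above are arithmetically self-consistent: the reported average chunk size equals the total chunk size divided by the number of chunks, and the percentage is the chunk-size/content-size ratio. A quick check:

```python
# Derive the reported test 4 figures from the raw counts above.
n_chunks = 164_052_312
content_size = 308_154_402_601   # total content size, bytes
chunk_size = 262_090_652_269     # total chunk size, bytes

print(chunk_size / n_chunks)               # ~1597.6, as reported
print(f"{chunk_size / content_size:.2%}")  # ~85.05%, as reported
```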