User:StefanoZacchiroli/Content deduplication
Some experiments on deduplicating contents at sub-file granularity.
Datasets
Linux kernel, Git repo
- origin: git.kernel.org, on 2018-01-06
- 1,653,941 content blobs, for a total of 19 GB (compressed)
- original size (uncompressed): 55.89 GB
Random sample 0.1%
- random 0.1% Bernoulli sample of Software Heritage contents, taken on 2018-01-09 (see the sampling sketch after this list)
- 4,111,582 content blobs, for a total of 155 GB (compressed)
- original size (uncompressed): 286.99 GB
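How the sample was drawn is not detailed here. A Bernoulli sample keeps each content independently with probability 0.001, so the sample size itself is a random variable. A minimal sketch, where the function name and the `items` iterator over content identifiers are illustrative, not taken from the actual setup:

    import random

    def bernoulli_sample(items, rate=0.001, seed=None):
        """Keep each item independently with probability `rate`."""
        rng = random.Random(seed)
        return [item for item in items if rng.random() < rate]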
Rabin fingerprint chunking
- Approach: content-defined chunking using Rabin fingerprints (https://en.wikipedia.org/wiki/Rabin_fingerprint), as in LBFS; see the sketch below
- Implementation: swh-dedup-blocks.py (https://forge.softwareheritage.org/source/snippets/browse/master/zack/swh-dedup/)
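In LBFS-style content-defined chunking, a fingerprint is computed over a sliding window of the last window_size bytes, and a chunk boundary is declared wherever the fingerprint hits a fixed residue modulo the average chunk size, subject to the min/max bounds. The sketch below uses a Rabin-Karp-style polynomial rolling hash with the parameter names of the tests that follow; the modulus and the magic residue are illustrative assumptions, and swh-dedup-blocks.py remains the authoritative implementation.

    PRIME = 3          # polynomial base, as in the tests below
    WINDOW_SIZE = 48   # rolling window, in bytes
    MODULUS = 1 << 30  # illustrative modulus to keep the fingerprint bounded
    POW_W = pow(PRIME, WINDOW_SIZE, MODULUS)  # weight of the byte leaving the window

    def chunk(data, min_size=2048, avg_size=8192, max_size=65536):
        """Split `data` (bytes) into content-defined chunks.

        A cut point is declared when the rolling fingerprint of the last
        WINDOW_SIZE bytes hits a fixed "magic" residue modulo avg_size,
        subject to the min/max chunk size bounds.
        """
        chunks = []
        start = 0  # offset where the current chunk begins
        fp = 0     # rolling fingerprint of the last WINDOW_SIZE bytes
        for i, byte in enumerate(data):
            # slide the window right by one byte
            fp = (fp * PRIME + byte) % MODULUS
            if i >= WINDOW_SIZE:
                fp = (fp - data[i - WINDOW_SIZE] * POW_W) % MODULUS
            length = i - start + 1
            if (length >= min_size and fp % avg_size == avg_size - 1) \
                    or length >= max_size:
                chunks.append(data[start:i + 1])
                start = i + 1
        if start < len(data):
            chunks.append(data[start:])  # trailing (possibly short) chunk
        return chunks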
test 1
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 bytes
- chunk size (min/avg/max): 2 KB / 8 KB / 64 KB
Results:
- average chunk size (effective): 9.37 KB
- dedup chunk size (uncompressed): 19.87 GB (35.55%)
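For clarity: "average chunk size (effective)" is the total uncompressed size divided by the number of chunks, and "dedup chunk size" is the cumulative size of the unique chunks left after deduplication, with the percentage taken relative to the original uncompressed size (19.87 / 55.89 ≈ 35.55% here). A sketch of the measurement, assuming SHA1 as the chunk identifier (an assumption; any collision-resistant hash would do):

    import hashlib

    def dedup_stats(blobs, chunker):
        """Chunk every blob and measure deduplication.

        `blobs` is an iterable of byte strings, `chunker` a function such
        as chunk() above. Returns (effective average chunk size, total
        size of unique chunks, unique size over original size).
        """
        unique = {}      # chunk digest -> chunk length in bytes
        total_size = 0   # original, uncompressed
        n_chunks = 0
        for blob in blobs:
            total_size += len(blob)
            for c in chunker(blob):
                n_chunks += 1
                unique[hashlib.sha1(c).digest()] = len(c)
        dedup_size = sum(unique.values())
        return total_size / n_chunks, dedup_size, dedup_size / total_size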
test 2
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 bytes
- chunk size (min/avg/max): 512 B / 2 KB / 8 KB
Results:
- average chunk size (effective): 2.86 KB
- dedup chunk size (uncompressed): 9.09 GB (16.26%)
test 3
Dataset: linux.git
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 bytes
- chunk size (min/avg/max): 512 B / 1 KB / 8 KB
Results:
- average chunk size (effective): 1.72 KB
- dedup chunk size (uncompressed): 6.49 GB (11.60%)
test 4
Dataset: random sample 0.1%
Rabin fingerprint parameters:
- prime: 3
- window_size: 48 bytes
- chunk size (min/avg/max): 512 B / 1 KB / 8 KB
Results:
- average chunk size (effective): 1.56 KB
- dedup chunk size (uncompressed): 244.09 GB (85.05%)
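A hypothetical end-to-end run of the two sketches above, with test 4's chunking parameters (the toy blobs are stand-ins, not Software Heritage data):

    from functools import partial

    # Toy stand-ins for real content blobs.
    blobs = [bytes(100_000), b"hello world" * 20_000]

    chunker = partial(chunk, min_size=512, avg_size=1024, max_size=8192)
    avg, dedup_size, ratio = dedup_stats(blobs, chunker)
    print(f"avg chunk: {avg:.0f} B, dedup size: {dedup_size} B ({ratio:.2%})")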
References
- LBFS (A. Muthitacharoen, B. Chen, D. Mazières, "A Low-bandwidth Network File System", SOSP 2001)
- Hailstorm
- DCT (https://pdfs.semanticscholar.org/5231/82a9a66f96241d4bab0b92cc92e9cbda22c7.pdf)
- ScanCode tokenizer (https://github.com/nexB/scancode-toolkit/blob/ba5be09e54f57bd519239fd81746ebc4d0f9af9c/src/licensedcode/tokenize.py#L188)
- Pcompress chunking (https://moinakg.wordpress.com/2013/06/22/high-performance-content-defined-chunking/), linked from casync (http://0pointer.net/blog/casync-a-tool-for-distributing-file-system-images.html)