User:StefanoZacchiroli/Content deduplication

From Software Heritage Wiki
Latest revision as of 14:32, 4 May 2018

Some experiments on deduplicating contents at sub-file granularity.

Datasets

Linux kernel, Git repo

  • origin: git.kernel.org, on 2018-01-06
  • 1,653,941 content blobs, for a total of 19 GB (compressed)
  • original size (uncompressed): 55.89 GB

Random sample 0.1%

  • random 0.1% Bernoulli sample of Software Heritage contents, taken on 2018-01-09
  • 4,111,582 content blobs, for a total of 155 GB (compressed)
  • original size (uncompressed): 286.99 GB
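The exact sampling procedure is not documented here; one reproducible way to draw a Bernoulli sample at a fixed rate (sketched below as an assumption, not as the method actually used) is to hash each blob identifier and keep the blob when the hash, mapped to [0, 1), falls below the sampling rate:

```python
import hashlib

SAMPLE_RATE = 0.001  # 0.1% Bernoulli sampling

def in_sample(blob_id: str) -> bool:
    """Deterministic Bernoulli trial: hash the blob identifier to a
    number in [0, 1) and keep the blob if it falls below the rate."""
    digest = hashlib.sha1(blob_id.encode()).digest()
    # interpret the first 8 bytes as an integer in [0, 2**64)
    x = int.from_bytes(digest[:8], "big") / 2**64
    return x < SAMPLE_RATE

# toy identifiers; real blob ids would be Software Heritage hashes
ids = [f"blob-{i}" for i in range(100_000)]
sample = [b for b in ids if in_sample(b)]
print(len(sample))  # close to 100 in expectation
```

Hashing the identifier (rather than calling a random number generator) makes the sample deterministic, so it can be re-drawn identically later.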

Rabin fingerprint chunking

test 1

Dataset: linux.git

Rabin fingerprint parameters:

  • prime: 3
  • window_size: 48 bytes
  • chunk size (min/avg/max): 2 KB / 8 KB / 64 KB

Results:

  • average chunk size (effective): 9.37 KB
  • dedup chunk size (uncompressed): 19.87 GB (35.55%)
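The scheme above can be sketched as a rolling-hash cut-point search: a fixed-size window of bytes is hashed as the stream is scanned, and a chunk boundary is declared whenever the hash matches a fixed bit pattern, subject to the min/max size bounds. The polynomial rolling hash below is a simplified stand-in for the actual Rabin fingerprint implementation used in these tests (whose code is not given here); the prime, window size, and size bounds mirror the test 1 parameters, and the boundary mask is derived from the 8 KB average target.

```python
# Simplified content-defined chunking with a polynomial rolling hash,
# in the spirit of Rabin fingerprinting (not the exact implementation
# behind these tests). Parameters mirror test 1.
PRIME = 3
WINDOW = 48                                 # rolling window, in bytes
MIN_SIZE, AVG_SIZE, MAX_SIZE = 2 * 1024, 8 * 1024, 64 * 1024
MASK = AVG_SIZE - 1                         # hash & MASK == 0 ~ every AVG_SIZE bytes
MOD = 1 << 64
POW = pow(PRIME, WINDOW, MOD)               # weight of the byte leaving the window

def chunk(data: bytes) -> list[bytes]:
    """Split `data` into content-defined chunks."""
    chunks = []
    start, h = 0, 0
    for i in range(len(data)):
        h = (h * PRIME + data[i]) % MOD
        if i - start >= WINDOW:
            # slide the window: remove the contribution of the byte
            # that just fell out of the last WINDOW bytes
            h = (h - data[i - WINDOW] * POW) % MOD
        size = i - start + 1
        if size >= MAX_SIZE or (size >= MIN_SIZE and h & MASK == 0):
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])          # trailing chunk may be short
    return chunks

import random
random.seed(0)
data = bytes(random.randrange(256) for _ in range(200_000))
pieces = chunk(data)
assert b"".join(pieces) == data              # chunking is lossless
print(len(pieces), sum(map(len, pieces)) // len(pieces))
```

Because cut points depend only on window contents, inserting or deleting bytes shifts boundaries only locally: the stream re-synchronizes at the next cut point, which is what makes chunk-level deduplication effective across file versions.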

test 2

Dataset: linux.git

Rabin fingerprint parameters:

  • prime: 3
  • window_size: 48 bytes
  • chunk size (min/avg/max): 512 B / 2 KB / 8 KB

Results:

  • average chunk size (effective): 2.86 KB
  • dedup chunk size (uncompressed): 9.09 GB (16.26%)

test 3

Dataset: linux.git

Rabin fingerprint parameters:

  • prime: 3
  • window_size: 48 bytes
  • chunk size (min/avg/max): 512 B / 1 KB / 8 KB

Results:

  • average chunk size (effective): 1.72 KB
  • dedup chunk size (uncompressed): 6.49 GB (11.60%)

test 4

Dataset: random sample 0.1%

Rabin fingerprint parameters:

  • prime: 3
  • window_size: 48 bytes
  • chunk size (min/avg/max): 512 B / 1 KB / 8 KB

Results:

  • average chunk size (effective): 1.56 KB
  • dedup chunk size (uncompressed): 244.09 GB (85.05%)
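The "dedup chunk size" figures above are, presumably, the total size of the set of distinct chunks; divided by the original uncompressed size, they give the space left after chunk-level deduplication. A sketch of that bookkeeping, assuming chunks are identified by a cryptographic hash of their content (the helper names and the toy fixed-size chunker are illustrative, not from the experiments):

```python
import hashlib

def dedup_stats(blobs, chunker):
    """Return (original_size, dedup_size): total bytes across all blobs
    vs. total bytes of distinct chunks, identified by SHA1."""
    seen = set()
    original = dedup = 0
    for blob in blobs:
        original += len(blob)
        for c in chunker(blob):
            digest = hashlib.sha1(c).digest()
            if digest not in seen:
                seen.add(digest)
                dedup += len(c)
    return original, dedup

# toy corpus: two "files" sharing a common 10 KB prefix
blobs = [b"A" * 10_000 + b"B" * 10_000,
         b"A" * 10_000 + b"C" * 10_000]
# illustrative fixed 1 KB chunker (the experiments use Rabin chunking)
fixed = lambda b: [b[i:i + 1024] for i in range(0, len(b), 1024)]
orig, dedup = dedup_stats(blobs, fixed)
print(orig, dedup, f"{100 * dedup / orig:.2f}%")
```

The same function works with any chunker, so the percentages in the tests above correspond to plugging a Rabin chunker into this kind of accounting over the whole dataset.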

References

  • DCT: https://pdfs.semanticscholar.org/5231/82a9a66f96241d4bab0b92cc92e9cbda22c7.pdf
  • ScanCode tokenizer: https://github.com/nexB/scancode-toolkit/blob/ba5be09e54f57bd519239fd81746ebc4d0f9af9c/src/licensedcode/tokenize.py#L188
  • Pcompres chunking: https://moinakg.wordpress.com/2013/06/22/high-performance-content-defined-chunking/ (linked from casync: http://0pointer.net/blog/casync-a-tool-for-distributing-file-system-images.html)