User:StefanoZacchiroli/Content deduplication

Some experiments on deduplicating contents at sub-file granularity.

Datasets

Linux kernel, Git repo

  • origin: git.kernel.org, on 2018-01-06
  • 1,653,941 content blobs, for a total of 19 GB (compressed)
  • original size (uncompressed): 55.89 GB
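
The blob count and uncompressed size above can be checked against a local mirror of linux.git, for instance with a small Python script on top of git cat-file (a sketch, not the tooling actually used for these experiments; note that --batch-all-objects also lists unreachable objects):

 # Hypothetical reproduction of the blob figures above: enumerate all objects
 # in the object database of a local clone and sum the sizes of the blobs.
 # `git cat-file --batch-check --batch-all-objects` prints one
 # "<sha1> <type> <size>" line per object.
 import subprocess
 
 def blob_stats(repo_path: str):
     out = subprocess.run(
         ["git", "-C", repo_path, "cat-file",
          "--batch-check", "--batch-all-objects"],
         check=True, capture_output=True, text=True,
     ).stdout
     count, total = 0, 0
     for line in out.splitlines():
         sha1, objtype, size = line.split()
         if objtype == "blob":
             count += 1
             total += int(size)
     return count, total
 
 if __name__ == "__main__":
     n, size = blob_stats("linux.git")   # path to the local clone (assumption)
     print(f"{n} blobs, {size / 2**30:.2f} GiB uncompressed")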

Rabin fingerprints

test 1

Dataset: linux.git

Rabin fingerprint parameters:

  • prime: 3
  • window_size: 48 KB
  • chunk size (min/avg/max): 2 KB / 8 KB / 64 KB

Results:

  • average chunk size (effective): 9.37 KB
  • dedup chunk size (uncompressed): 19.87 GB (35.55%)
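
For reference, a minimal sketch of the kind of Rabin-fingerprint, content-defined chunking being tested, instantiated with test 1's parameters. The cut rule (cut where the low bits of the fingerprint are all ones, with mask = average − 1) and the 64-bit modular arithmetic are assumptions, not a description of the actual chunker used to obtain these numbers:

 # Sketch of Rabin-style content-defined chunking with test 1's parameters.
 # The cut criterion and the 64-bit arithmetic are assumptions; the window
 # size is taken literally from the parameter list above (48 KB).
 PRIME = 3
 WINDOW_SIZE = 48 * 1024          # sliding window length, in bytes
 MIN_CHUNK = 2 * 1024             # test 1: 2 KB
 AVG_CHUNK = 8 * 1024             # test 1: 8 KB target average
 MAX_CHUNK = 64 * 1024            # test 1: 64 KB
 MASK = AVG_CHUNK - 1             # cut when the low bits of the hash are all 1s
 MOD = 1 << 64                    # keep the rolling hash within 64 bits
 POW_OUT = pow(PRIME, WINDOW_SIZE, MOD)  # weight of the byte leaving the window
 
 def chunks(data: bytes):
     """Yield content-defined chunks of `data`."""
     h = 0
     start = 0
     for i, b in enumerate(data):
         # Roll the hash: add the incoming byte, drop the outgoing one.
         h = (h * PRIME + b) % MOD
         if i >= WINDOW_SIZE:
             h = (h - data[i - WINDOW_SIZE] * POW_OUT) % MOD
         length = i - start + 1
         if length >= MAX_CHUNK or (length >= MIN_CHUNK and (h & MASK) == MASK):
             yield data[start:i + 1]
             start = i + 1
     if start < len(data):
         yield data[start:]       # trailing chunk, possibly shorter than MIN_CHUNK

Tests 2 and 3 only change the min/avg/max chunk size constants.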

test 2

Dataset: linux.git

Rabin fingerprint parameters:

  • prime: 3
  • window_size: 48 KB
  • chunk size (min/avg/max): 512 B / 2 KB / 8 KB

Results:

  • average chunk size (effective): 5.07 KB
  • dedup chunk size (uncompressed): 16.19 GB (28.96%)
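
The "dedup chunk size" figures above measure the total size of distinct chunks, i.e. each repeated chunk is counted only once. Below is a self-contained sketch of that measurement, assuming an iterable of blob contents and a chunking function such as chunks() above (names are illustrative, not the actual scripts used); "average chunk size (effective)" is read here as total blob size divided by the number of chunks produced:

 # Sketch of the dedup measurement: chunk every blob, hash each chunk,
 # and count each distinct chunk's size only once.
 import hashlib
 from typing import Callable, Iterable, Iterator
 
 def dedup_stats(blobs: Iterable[bytes],
                 chunker: Callable[[bytes], Iterator[bytes]]):
     seen = set()               # SHA-1 digests of chunks already counted
     original = 0               # total uncompressed size of all blobs
     deduped = 0                # total size of distinct chunks
     n_chunks = 0
     for blob in blobs:
         original += len(blob)
         for chunk in chunker(blob):
             n_chunks += 1
             digest = hashlib.sha1(chunk).digest()
             if digest not in seen:
                 seen.add(digest)
                 deduped += len(chunk)
     return {
         "average chunk size (effective)": original / n_chunks if n_chunks else 0,
         "dedup chunk size (uncompressed)": deduped,
         "dedup ratio": deduped / original if original else 0,
     }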

test 3

Dataset: linux.git

Rabin fingerprint parameters:

  • prime: 3
  • window_size: 48 KB
  • chunk size (min/avg/max): 512 B / 1 KB / 8 KB

Results:

  • average chunk size (effective):
  • dedup chunk size (uncompressed):
