The Linux File-System Defragmentation Project

Main Page | Theory | Changelog | Download | Sample Output | Screenshots | FAQ

Linux file-systems DO fragment over time!

Partial references:
1. File Layout and File System Performance
2. File System Aging — Increasing the Relevance of File System Benchmarks
3. Workload-Specific File System Benchmarks

Defragmentation: The most thorough way to defragment is always dump-and-restore, which is why the best defragmentation tool on Windows is arguably *Norton Ghost*. But a full dump-and-restore means downtime for your system. So the central concept of "defragfs" is: analyze file-system fragmentation online and report it, and let you decide which files are worth an online dump-and-restore (a simple copy-and-replace). Simple it is, effective it is.
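The per-file online "dump-and-restore" above can be sketched as a tiny shell routine. This is a minimal sketch, not part of defragfs itself; `FILE` is a hypothetical placeholder, and the atomic rename assumes the copy lands on the same file system:

```shell
# Rewriting the file forces the file system to allocate
# fresh, mostly contiguous extents for it.
FILE=1.avi
cp -p -- "$FILE" "$FILE.defrag"   # copy into newly allocated blocks
mv -- "$FILE.defrag" "$FILE"      # rename back over the original
# verify the new extent count (filefrag comes with e2fsprogs)
command -v filefrag >/dev/null && filefrag -- "$FILE"
```

Within one file system, `mv` is a rename, so the replacement is atomic; note, however, that hard links to the old inode and handles held by programs with the file open are not carried over.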

Example:

1. The file 1.avi (~700 MB) was downloaded with aMule; the file system is Reiser3 at 70% usage.
#hdparm -t /dev/hda
/dev/hda:
Timing buffered disk reads: 90 MB in 3.05 seconds = 29.48 MB/sec

2. Before:
#filefrag 1.avi
1.avi: 48044 extents found
#time cat 1.avi>/dev/null
real 3m1.478s
user 0m0.020s
sys 0m2.086s

3. After:
#cp 1.avi 2.avi
#filefrag 2.avi
2.avi: 128 extents found
#time cat 2.avi>/dev/null
real 0m25.329s
user 0m0.012s
sys 0m1.329s

4. Conclusion:
After re-allocation by the file system itself, the fragment count dropped to about 1/375 of the original (48,044 extents down to 128), and the read sped up about 7 times (3m1s down to 25s, close to the ~24s a fully sequential read of 700 MB would take at 29.48 MB/s). In practice this means far fewer disk seeks while the movie is playing.
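The "analyze and report" half of the concept can be sketched as a short pipeline. This is only an illustrative sketch, assuming filefrag(8) from e2fsprogs is installed; the directory argument and the top-20 cutoff are arbitrary choices, not defragfs's actual output format:

```shell
# Report the most fragmented regular files under a directory,
# worst first, by parsing the extent counts filefrag prints
# ("path: N extents found").
DIR=${1:-.}
find "$DIR" -type f -print0 |
  xargs -0 filefrag 2>/dev/null |
  awk -F': ' '{ gsub(/ extent.*/, "", $2); print $2, $1 }' |
  sort -rn | head -n 20
```

From such a report you can pick out the few badly fragmented files that deserve a copy-and-replace, instead of rewriting the whole partition.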

512MB(90% full) | 768MB(60% full) | 1024MB(40% full) | Defragmentation

Results after defragmentation coming soon...