ntfsclone can be useful for making backups, taking an exact snapshot of an NTFS filesystem to restore later, or for developers to test NTFS read/write functionality and to troubleshoot or investigate users' issues using the clone, without the risk of destroying the original filesystem.
The clone is an exact, sector-by-sector copy of the original NTFS filesystem, so it can also be mounted just like the original. For example, if you clone to a file and the kernel has loopback device and NTFS support, then the file can be mounted as
mount -t ntfs -o loop ntfsclone.img /mnt/ntfsclone
bzip2 compresses large sparse files much better than gzip, but it is also much slower. Moreover, neither of them handles large sparse files efficiently during uncompression, from a disk space usage point of view. A possible workaround is to pipe the uncompressed stream through cp, for example this way:

bunzip2 -c image.bz2 | cp --sparse=always /proc/self/fd/0 image

At present the most efficient way, both speed- and space-wise, to compress and uncompress large sparse files with common tools is using tar with the options -S (handle sparse files "efficiently") and -j (filter the archive through bzip2). Although tar still reads and analyses the entire file, it doesn't pass the large blocks containing only zeros on to the filters, and it also avoids needlessly writing large amounts of zeros to the disk. But since tar can't create an archive from the standard input, you can't do this in-place by just reading ntfsclone's standard output.
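The effect of the cp --sparse=always trick can be seen on a small synthetic file; the file names and sizes below are illustrative only and have nothing to do with ntfsclone itself:

```shell
# Create a 1 MiB file of literal zeros, then re-copy it with cp punching
# holes, as in the bunzip2 | cp --sparse=always workaround above.
dd if=/dev/zero of=dense.img bs=1024 count=1024 status=none
cp --sparse=always dense.img sparse.img
ls -l dense.img sparse.img   # identical apparent sizes
du -k dense.img sparse.img   # sparse.img occupies far fewer disk blocks
```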
The metadata-only image compresses very well, usually to no more than 1-3 MB, so it is relatively easy to send to NTFS experts for investigation.
In this mode of ntfsclone, NONE of the user's data is saved, including the resident user data embedded in the metadata. Everything is filled with zeros. Moreover, all file timestamps, and the deleted and unused spaces inside the metadata, are filled with zeros as well. Thus this mode is unsuitable, for example, for forensic analysis.
Please note that filenames are not wiped out. They might contain sensitive information, so think twice before sending such an image to anybody.
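Because filenames survive in a metadata-only image, it can be worth inspecting what such an image would reveal before passing it on. One possible check, assuming GNU strings is available (NTFS stores names as 16-bit little-endian characters, hence the -e l option; ntfsmeta.img is an illustrative filename):

```shell
# List human-readable 16-bit little-endian strings (e.g. NTFS filenames)
# found in a metadata-only image before sending it to anybody.
strings -e l ntfsmeta.img | less
```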
Cloning an NTFS volume to a sparse image file
ntfsclone --output ntfs.img /dev/hda1

Restoring a clone image to its original partition
ntfsclone --overwrite /dev/hda1 ntfs.img

Space and speed-wise the most efficient way to compress a clone image
tar -cjSf ntfs.img.tar.bz2 ntfs.img

Uncompressing a tar archived clone image
tar -xjSf ntfs.img.tar.bz2

In-place compressing an NTFS volume. Note, gzip is usually at least 2-4 times faster but it also creates bigger compressed files.
ntfsclone -o - /dev/hda1 | bzip2 -c > ntfs.img.bz2

Restoring an NTFS volume from a compressed image
bunzip2 -c ntfs.img.bz2 | dd of=/dev/hda1

Backing up an NTFS volume to a remote host, using ssh default compression
ntfsclone -o - /dev/hda1 | ssh -C host 'bzip2 -c9 > ntfs.img.bz2'

Cloning an NTFS volume to a remote host, using ssh default compression (type everything on one line)
ntfsclone -o - /dev/hda1 | \
ssh -C host 'cat | cp --sparse=always /proc/self/fd/0 ntfs.img'

Cloning a remote NTFS volume to the local filesystem via ssh using a custom compression level (type everything on one line). Speed-wise the optimal compression level depends on your network, disk and CPU speed and saturation.
ssh host 'ntfsclone -o - /dev/hda1 | gzip -2c' | \
gunzip -c | cp --sparse=always /proc/self/fd/0 ntfs.img

Packing NTFS metadata for NTFS experts
ntfsclone --metadata --output ntfsmeta.img /dev/hda1
tar -cjSf ntfsmeta.img.tar.bz2 ntfsmeta.img
Sometimes ntfsclone may appear to be frozen if the clone is on ReiserFS, and even CTRL-C won't stop it. This is not a bug in ntfsclone; it is due to ReiserFS being extremely inefficient at creating large sparse files and not handling signals during this operation. This ReiserFS problem was improved in kernel 2.4.22. XFS, JFS and ext3 don't have this problem.