Comments on How to clone disks with Linux dd command
In this tutorial we walk through a practical example of the Linux dd command that system administrators can use to migrate or clone a Windows or Linux operating system from a larger HDD, partitioned in an MBR or GPT layout, to a smaller SSD.
Comments
I did this yesterday on a physical disk connected to my Fedora 26 desktop via a USB toaster-type box. The disk was 1TB. Installed on the disk were a sysprep'ed Windows 10 (1 partition) and Fedora 26 (3 partitions, all xfs). I didn't try to shrink anything. ddrescue wrote out a 932GB .img file and it took a few hours.
Then I ran the xz command on the .img file with 6 threads... and it took about 3 hours but it compressed the 932GB .img file down to 71GB. Of course a lot depends on how much software you have installed within the OSes on the disk.
The extra steps you gave probably made it go faster.
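For reference, the workflow described above roughly corresponds to something like the following; the device name, file names and thread count are placeholders rather than the exact options used:
sudo ddrescue /dev/sdX disk.img disk.map
xz -T6 disk.img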
After step one, you can use gnome-disk-utility 3.24.0 (UDisks 2.1.8, built against 2.1.8) to clone the disk and then restore it to any drive or partition you want with the same application. All of that is done in a GUI interface. Thanks.
Hi,
Your description of the bs switch is misleading. It is not block size, but buffer size and is not related to the block size of the media. Using bs=512 or bs=2k results in a very slow copy. Using bs=64M or larger dramatically speeds up the copy.
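To illustrate the difference with a rough sketch (/dev/sdX and /dev/sdY are placeholders for the actual source and destination devices):
sudo dd if=/dev/sdX of=/dev/sdY bs=512    ## slow: tiny transfers
sudo dd if=/dev/sdX of=/dev/sdY bs=64M    ## much faster: large buffered transfers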
Cheers,
Effrafax.
I'm pretty sure the variable bs is referring to block size, not buffer size. From the dd manual: https://www.freebsd.org/cgi/man.cgi?dd(1)
bs=n Set both input and output block size to n bytes, superseding the ibs and obs operands. If no conversion values other than noerror, notrunc or sync are specified, then each input block is copied to the output as a single block without any aggregation of short blocks.
Does anyone know how to NIC boot a Linux ISO?
On recent versions of dd you can monitor the cloning process with status=progress. Cheers
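A minimal example of that flag in use (device names are placeholders; the flag needs a reasonably recent GNU coreutils, 8.24 or newer if memory serves):
sudo dd if=/dev/sdX of=/dev/sdY bs=64M status=progress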
Why "Linux dd command"? dd is not part of the kernel but of GNU coreutils.
Hello, I have several Linux installs (Arch Linux and Ubuntu) in EFI mode with Secure Boot deactivated. I usually have two partitions: a /efi partition in FAT32 of one gigabyte, and one of 15-20 GB for root; the disk is entirely GPT and I have no separate /home partition. Could you indicate the steps to clone the /efi (FAT32) partition and the root partition? Thank you, Hector
GParted > copy partition > update GRUB :)
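For the "update GRUB" step mentioned above, the usual commands on the two distributions named in the question would be roughly as follows (exact paths depend on how GRUB was installed):
sudo update-grub                              ## Ubuntu/Debian wrapper
sudo grub-mkconfig -o /boot/grub/grub.cfg     ## Arch Linux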
I am planning to clone an internal 320 GB HDD to an external 1 TB HDD using the dd command through Ubuntu installed on yet another external HDD of 80 GB.
I have some doubts.
1. Will the partition table and boot loader be copied from the source HDD to the external 1 TB HDD?
2. What will happen to the remaining ~700 GB portion of the 1 TB HDD?
Cloned a 500GB Western Digital D HDD with Windows 7 to a 500GB Samsung SSD in Linux Mint 19.1. Following the instructions above, I just hooked up the SATA cable to the new drive, which was recognized easily, then cloned the original, then swapped the cable and internal mount position from the old drive to the new one once it was completed. It only took about 45 minutes tops to complete the entire process; the disk was only 70% full.
dd is a block-by-block copy. A drive that is 0% full and a drive that is 100% full will take the same length of time to copy this way.
Could I use this method to dd a TurnKey Linux or another type of Linux that is designed to work off of a USB drive to a hard drive or SSD?
OMG THANK YOU !!!!
You saved me, so thank you! Every site I saw was talking about the complexity of cloning a drive; some tell you to use paid software, others to use Clonezilla... I tried Clonezilla but it failed because I have "MBR and GPT" partitions, so I found your method, tried it and it works perfectly!!!
I managed to clone my 500GB HDD (shrunk to 80GB, just C: and the "system reserved" part) to my new 480 GB SSD without any problem in some 30 minutes following your instructions!
Again, I thank you so much for these great explanations and examples, you've done a very helpful job!
As others have said, dd has the ability to display progress, so you could in theory remove the requirement of piping it through pv. pv does however give you a percentage complete and compute an ETA (assuming you provide it the correct value after "-s"). You could do the same manually with the information provided by dd's progress output (speed and amount completed), but why compute something manually when passing it through pv costs very little (a couple of extra CPU cycles)?

My big issue with the article, which you should look to fix, is how it changes the block size using bs while piping between "dd if" and "dd of". You should try to set the block size on either side of the pipe to be the same size, or larger on the write side. Ideally you want to use the block size most efficient for your destination disk. When you do "dd if=/dev/<drive> of=/dev/<drive>", setting bs does this for both the read and the write automatically. When you do "dd if=/dev/<drive> bs=<size> | dd of=/dev/<drive>", you will lose a lot of transfer speed and spend a lot of CPU time converting the input data back into the default 512-byte block size before writing it out.

In testing, I saw no performance gain when changing bs only on the "input" side of the pipe, getting only 40MB/s with block sizes of 512, 4096 and 65536. top showed my "output" dd running at 60% CPU; monitoring disk IO with atop and dstat showed inconsistent throughput from the read side, with the drive fluctuating between 50% busy and idle, while the write/out side was pegged at 100% busy. I'm copying from a WD Red 5400-5900rpm SATA disk rated for 150MB/s to a Seagate Exos 7200rpm SAS disk rated for 225MB/s, so even accepting that sequential writes are more intensive than sequential reads, I would not expect to be IO bound by the write side, though that's what the stats were indicating.

When I modified the "output" side of the pipe to include "bs=" and used a block size greater than or equal to the block size coming from the input side, my throughput jumped to 180MB/s and CPU dropped to 12-13%. Monitoring disk IO showed consistent read and write throughput on both sides; my read disk was consistently at 100% busy while my write-side disk fluctuated between 50% and 100%. At 180MB/s I'm IO bound by the read-side disk more so than the write side, which is more in line with expectation. In fact, at 180MB/s I'm running at about 120% of the expected sustained throughput of the WD Red drive while running at about 80% of the expected throughput of the Seagate Exos drive. This would explain why the write drive fluctuates between 50% and 100%: every few seconds it has to wait for data from the WD Red drive (my Exos drive can write in 4 seconds what it takes the Red drive 5 seconds to read, so every 2 seconds the Exos drive can slow down to 50% for 1 second and still write out the same amount of data as the Red drive has offered it).
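A sketch of what this comment recommends, with bs set on both sides of the pipe (/dev/sdX and /dev/sdY are placeholders, and the value after -s should match your actual source disk size):
sudo dd if=/dev/sdX bs=64M | pv -s 500G | sudo dd of=/dev/sdY bs=64M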
## check source disks and partitions by:
udisksctl status
## then run:
sudo dd bs=400M conv=sync,noerror status=progress if=/dev/hda of=/dev/hdb
dd worked flawlessly to clone a complete boot HD with NTFS partitions to an SSD. The MiniTool ShadowMaker clone feature running on MS Windows 7 failed. GNU/Linux saved my Windows ass :-)
Hello, very nice manual. I have a question about example 2:
command in text is: sudo dd if=/dev/sda bs=4096 count=2481920 conv=sync,noerror | pv -s 9G |sudo dd of=/dev/sdb
but in image 4.png the bs and count values are swapped: bs=2481920 count=4096...
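For what it's worth, both orderings copy the same total amount of data, since 4096 × 2481920 = 10,165,944,320 bytes, roughly 9.5 GiB, which is what the pv -s 9G estimate approximates; only the block size used per read/write differs between the command in the text and the one in the image.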
How come nobody mentioned partclone?
I would only use dd if there's no other choice, because it's very slow and inefficient.
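For anyone curious, partclone is filesystem-aware and copies only the used blocks. A rough sketch for an ext4 partition (device and file names are placeholders; pick the partclone variant matching your filesystem, e.g. partclone.ntfs for NTFS):
sudo partclone.ext4 -c -s /dev/sdX1 -o sdX1.img     ## clone partition to an image file
sudo partclone.ext4 -r -s sdX1.img -o /dev/sdY1     ## restore the image to another partition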
Actually, dd stands for "device-to-device copy". It was originally used for writing data on magnetic tape drives with a given block size. Newer kinds of devices have fixed block sizes that are hidden from the user, but for magtape you must specify the block size, and dd did that with the bs= option. You can tell the extreme age of the command, as it is the only one in the entire Unix bestiary of basic commands left that uses xx=value rather than -xx value.
--jh--