The ability to make backup clones of an OS, as on Linux and macOS (e.g., Carbon Copy Cloner, SuperDuper, ChronoSync), offers wonderful peace of mind. For years I've struggled with this seeming not to be available for Windows / Windows Server, and recently I asked a few different companies whether they offered a product that could do it. On macOS, popular disk cloning titles include Carbon Copy Cloner and SuperDuper. On Linux, Clonezilla provides a command-line and GUI-based disk cloning solution. BSD, Linux, and macOS also provide the low-level terminal command dd, which writes raw data directly to disk.

Before I start, I would like to clarify that this step-by-step tutorial applies not only to duplicating hard drives that have a Linux OS on them. You can clone pretty much any drive: what is on the hard disk is irrelevant; it could be Windows, Mac OS, Linux, just data, etc. There are just a few basic things that need to be in place:
- The target drive should be the same size or bigger than the source disk drive.
- Have a Linux Live CD or a Linux bootable USB drive or some other way of booting into Linux (we will be using Ubuntu’s Live CD for this tutorial).
- Access to the internet.
- There is a presumption that you know how to install a hard drive.
Making an exact copy of a hard drive (or any drive for that matter: CD, DVD, USB, etc.) is very easy and quick with Linux. One of the most popular commands for this on Linux is dd. It is a very powerful utility that was originally developed for the UNIX operating system and now ships by default on every Linux distribution. It makes a bit-for-bit copy of the data and does not care about cylinders, partitions, or files. Here is an example of a dd command that would make an exact copy of one disk to another:
dd if=/dev/sda of=/dev/sdb bs=64k
The bs option specifies the block size. It can be omitted, but specifying a larger value speeds up the process, since the default block size is only 512 bytes. dd is a very effective and powerful command, but it is not well suited to copying a failing or failed disk, because it is not designed to read around and recover bad sectors.
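Since dd will clone a plain file just as readily as a device, you can rehearse the copy-and-verify workflow safely on ordinary files first. The filenames below are purely illustrative:

```shell
# create a small "source disk" image to practice on (16 x 64k = 1 MiB)
dd if=/dev/zero of=source.img bs=64k count=16 2>/dev/null
# clone it bit for bit, just as you would with /dev/sda -> /dev/sdb
dd if=source.img of=clone.img bs=64k 2>/dev/null
# the checksums should match if the copy is exact
cksum source.img clone.img
```

The same `if=`/`of=`/`bs=` arguments apply unchanged when the source and target are real block devices.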
A number of other open source programs (dd variants) have been developed since dd that address situations where a drive has bad sectors, and they perform faster and more efficiently than dd. Some of these are dd_rescue, dd_rhelp, and GNU ddrescue. GNU ddrescue is the one I would recommend if you want to clone a drive: it works both for perfectly good drives that you would like to clone and for failed drives that you want to recover data from.
Install the new drive.
The new drive should be of the same or bigger size. You might have to get the BIOS to recognize the new disk, though in most cases that is not necessary. After you have put the drive in, boot into Linux from another device; an Ubuntu Live CD is perfect for that. You can download an ISO image from the Ubuntu website.
Now you have to find out what the drives’ logical names are. Open up a terminal window: Accessories -> Terminal or Alt + F2, then type in gnome-terminal and hit Enter.
In the terminal window type sudo lshw -C disk:
In my case I have two disks: disk:0 and disk:1. The logical name of disk:0 is /dev/sda and the one for the second disk is /dev/sdb. Make a note of that; in your case it might be different. Identify which drive will be the source and which one the copy. There are two things in the above output that will help you do that: the product and the size. You can also use the command sudo fdisk -l, which will show you the hard drives and their partitions.
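On most systems, lsblk (part of util-linux, installed by default on recent Ubuntu releases) gives a compact view of the same information:

```shell
# one line per physical disk: device name, size, and model string
lsblk -d -o NAME,SIZE,MODEL
```

The NAME column corresponds to the logical names (sda, sdb, ...) used throughout this tutorial.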

Prepare the target drive.
Now that you have identified the target drive you need to put an initial partition on it. In the terminal window you have opened execute:
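The command that belongs here is presumably the following, with /dev/sdb as the target drive identified in the previous step. Double-check the device name; pointing cfdisk at the wrong disk will destroy its partition table:

```shell
# interactive partitioner; run against the TARGET drive only
sudo cfdisk /dev/sdb
```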
The cfdisk program will start; type W and then yes to confirm. This is simple enough, but you could also use the GParted program that comes with Ubuntu to do the same.
Install the GNU ddrescue program
Before you can install ddrescue you need to enable the Universe software repository. Go to System -> Administration -> Software Sources and check the box next to “Community-maintained Open Source software (universe)“. Close the window. It will ask you whether you want to refresh the list of software; go ahead and agree. After it finishes, you can install ddrescue by running this in the terminal window:
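On Ubuntu, GNU ddrescue ships in a package named gddrescue (the installed binary is simply ddrescue), so the command referenced here would be:

```shell
# installs GNU ddrescue; the binary it provides is called ddrescue
sudo apt-get install gddrescue
```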
Clone the disk.
Now you are ready to clone the drive by executing ddrescue. Specify the source disk first and then the target disk. You can use the -v option to be able to see the progress of the operation:
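Following the device names from the examples above (verify them on your own system first!), the invocation would look like the sketch below. The logfile argument is optional but worth using, and its name here is just illustrative; with current versions of ddrescue, -f is required to allow writing to a block device:

```shell
# -f: allow overwriting a block device; -v: show progress
# rescue.log lets an interrupted copy be resumed where it left off
sudo ddrescue -f -v /dev/sda /dev/sdb rescue.log
```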
Make sure you get the order of the drives right, or you could overwrite the old drive with the empty new one and lose all the data!
Depending on the size of your source drive this operation could take a couple of hours or even more. Once it finishes the new drive will be an exact copy of the old one. You can run a quick check on the file systems of the new drive:
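Assuming the new drive's first partition is /dev/sdb1 (adjust the number to match your layout), the check would look like this; run it while the partition is unmounted:

```shell
# -f forces a check even if the filesystem is marked clean
sudo fsck -f /dev/sdb1
```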
If the new drive is bigger than the old one you need to extend the partition(s) on it or create another one to make use of the rest of the space. The GParted program that comes with Ubuntu is ideal for this.
Once you are done, remove the old drive and boot from the new disk.
Credit: Ubuntu Kung Fu. Published article from the book.
From OS X Scientific Computing
There are two types of filesystem disk formats we need to worry about: HFS+, which is the default OS X filesystem format, and UFS, which is the 'normal' unix file system format. All of the standard unix utilities were originally designed to cope with UFS. They will work on HFS+, but unless they are explicitly 'resource-fork-aware' (as is the case with those distributed in 10.4.X and above), they will break what are called 'resource forks', a structure unique to HFS+. So, for example, if you use normal unix utilities to back up or copy a Carbon application or a file that has an icon, you may find that the application is broken or the icon disappears. In some cases this could create a genuine problem; at the very least it will lead to annoyances like losing the information that tells OS X which application to use to open a particular file.
Resource Forks and Metadata
Some Unix utilities play nice
Time Machine does automatic hourly backups
As of 10.5, OS X comes with a free, incremental backup system called Time Machine. The GUI is a serious piece of eye-candy:
As Apple's own backup utility, it knows how to handle resource forks. One unusual feature is that it uses hard links to directories to create a browsable incrementally backed up file system.
The only additional requirement is a dedicated (usually external) hard drive. A 1TB drive is now comparatively inexpensive and will suit most people's needs.
The backup routine can be customized to omit directories on your hard drive. Omitting /sw might be a good idea, for example: the incremental changes would accumulate very rapidly, there is generally no need to save older increments, and the whole installation should be replaceable with relative ease.
A further description of Time Machine's features is available on Apple's website.
Time Machine Editor
It is hard to imagine that this won't be sufficient for most people's needs. However, one annoyance I find is that it wants to do incremental backups every single hour when it is activated, and currently there is no convenient way to change this backup interval.
Fortunately, a free third-party application called TimeMachineEditor provides a simple interface that permits the user to customize the frequency of backups.
Carbon Copy Cloner
Before Time Machine appeared, this was my favorite GUI-based backup system. It is still quite useful, especially for making a clone of a drive. Its engine is based on a resource-respecting emulation of rsync, called psync, and Apple's nicely-named ditto. Newer versions of Apple's rsync should now be safe to use.
Carbon Copy Cloner is a donation-ware GUI wrapper for psync and ditto. It allows you to make a bootable clone of your startup disk and makes automated updates extremely straightforward and easy to implement. The author insists that academic users not pay for this.
Backing up HFS+ filesystems with resource forks
(This is a more comprehensive backup procedure in that you can do everything this way.)
If you want to back up OS X files, applications, etc, to another OS X HFS+ disk, either on your machine, or remotely, you can:
A. Use Time Machine, or mount the other computer or external drive and manually drag and drop files in a Finder window. This will give you exact copies of your files on the target backup HFS+ formatted disk.
B. Use OSX-specific copying programs like the newer cp, tar, mv, rsync, or the older CpMac, psync, or ditto, as described above. Each of these procedures will give you exact copies of your files on the target backup HFS+ formatted disk. psync allows you to do incremental backups, copying only what has changed between the source and target directories.
C. Use OSX-specific archiving software like hfspax or hfstar. These create a compressed archive of your files that you can copy and store on ANY unix disk and that can then be expanded on an HFS+ formatted disk.
D. Use Carbon Copy Cloner.
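As a sketch of option B, ditto (bundled with OS X) copies a directory tree while preserving resource forks and HFS+ metadata; the paths here are illustrative, and on very old systems the -rsrc flag may need to be given explicitly:

```shell
# recursively copy a folder to a mounted backup volume,
# preserving resource forks and metadata (the default on 10.4+)
ditto ~/Documents /Volumes/Backup/Documents
```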
Backing up normal unix files
This is a more portable backup procedure that should be filesystem-independent.
If you want to back up 'normal' unix files (i.e., most crystallographic software, input and output, all ascii text, everything in /sw and /usr/local), you can either use the above tools, or you can use the standard unix equivalents (cp, tar, etc.) that reside within (for example) fink. You can use these procedures to back up everything; in other words, you can't hurt a normal unix file by backing it up with all of your other OS X files. However, this may be overkill, so you might prefer to back up normal unix files using the normal procedures. The advantage of the latter is that you can then unpack and read these files on any unix file system. That is why I am including them both.
Again, this procedure will only back up normal unix files correctly. It will not honor resource forks.

If, for example, you have installed fink, you will have a 'normal' version of tar that resides in /sw/bin/tar.
The tar in /usr/bin/tar will respect resource forks, but if you are backing up something that you wish to unpack on (for example) a Linux system, you might want to strip off the resource forks. You can either explicitly omit all files of the form
or use /sw/bin/tar explicitly:
Fink in fact puts its own tar at the head of the path, so be careful to use the right version of tar for the right occasion. I usually alias tar to /usr/bin/tar, i.e.,
so that by default, I will be using a tar that respects resource forks. When in doubt, it is better to include them than to strip them off.
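The exclude approach above can be sketched like this, using GNU tar (on OS X, that would be /sw/bin/tar; here plain tar is used so the demo is self-contained). The "._*" pattern matches AppleDouble files, which is the on-disk form resource forks take on non-HFS+ filesystems; the filenames are illustrative:

```shell
# make a scratch tree containing a data file plus a fake AppleDouble file
mkdir -p demo
touch demo/file.txt demo/._file.txt
# GNU tar: exclude the "._*" AppleDouble files that carry resource forks
tar czf stripped.tar.gz --exclude '._*' demo
# only demo/ and demo/file.txt should be listed
tar tzf stripped.tar.gz
```

Conversely, the alias mentioned above would simply be `alias tar=/usr/bin/tar`, so that the fork-respecting tar is used unless another one is named explicitly.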
