Easier way to clone a disk



Today, to clone a disk, I fail it out of the array and rebuild onto another disk. That leaves the new disk in the array, and the clone in my hands is the old disk I take away. Keeping the original disk in the array instead is a messy process of doing a New Config, meticulously re-assigning all the disks, and putting the old disk back in its old slot. Very messy and prone to user error.

 

If I want to make a clone of a disk just like the above, but without ending up with a new disk in the array, is there any easier way to do it than failing and rebuilding?

 

Somebody tell me there is a clone disk plugin that I haven't heard about yet...

Link to comment

Never heard of a clone disk plugin for unRAID, probably because most people settle for backing up the contents of their disks rather than duplicating details that rarely matter.

 

If I were to make a clone, for example to experiment with potentially invasive file recovery, I'd simply use dd. Pro: no need to break the array. Con: not for the faint of heart; it's easy to cause a monumental mess.

 

Why do you want to clone disks?

Link to comment

You can use dd, but always double-check the destination disk: pick the wrong one and you'll lose its data. Also stop the array before cloning, since cloning a disk with possibly open files is not a good idea.

 

dd if=/dev/sdX of=/dev/sdY bs=32M status=progress

 

X=source

Y=destination
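
If you're unsure which letter is which, a couple of read-only checks help confirm both devices before running dd (column output and by-id names will vary by system):

lsblk -o NAME,SIZE,MODEL,SERIAL   # match size/model/serial against the physical drive labels
ls -l /dev/disk/by-id/            # the by-id symlinks show which sdX each serial currently maps to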

 

OK, trying this on unRAID 6.1.7 I get:

 

root@Tower3:/boot/bunker# dd if=/dev/sdg of=/dev/sdi bs=32M status=progress
dd: invalid status flag: ‘progress’
Try 'dd --help' for more information.
root@Tower3:/boot/bunker#

 

 

Link to comment
  • 2 months later...

The status flag probably isn't supported on v6.1; it works on v6.2. You can use the command without it, there just won't be any progress report: when you get the cursor back, it's done.

 

dd if=/dev/sdX of=/dev/sdY bs=32M
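
If you want some feedback on the older version anyway, GNU dd prints its transfer statistics when it receives a USR1 signal, so from a second console you can do this (assuming only one dd process is running):

kill -USR1 $(pidof dd)   # dd reports bytes copied so far on its own console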

 

Could you mount a disk from another server, or does dd only work on disks internal to the server?

Link to comment

Could you mount a disk from another server, or does dd only work on disks internal to the server?

 

Don't know the answer to that, I'm sure someone else does.

Only marginally more help, but dd is just a way to copy bits from one file to another. It can operate on any file, so if you can figure out a way to make a remote disk show up as a writeable file on your box, dd can operate on it. The thing is, I don't know how gracefully it handles the connection dropping, so error recovery is probably going to be a manual affair. It's not meant to be used that way, AFAIK.

 

http://unix.stackexchange.com/questions/132797/how-to-dd-a-remote-disk-using-ssh-on-local-machine-and-save-to-a-local-disk
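
The gist of that link is piping dd through ssh. A minimal sketch, assuming root ssh access between the servers and with the device names as placeholders:

# run on the destination server: pull an image of the remote disk onto a local disk
ssh root@server1 "dd if=/dev/sdX bs=32M" | dd of=/dev/sdY bs=32M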

Link to comment

Could you mount a disk from another server, or does dd only work on disks internal to the server?

 

Don't know the answer to that, I'm sure someone else does.

Only marginally more help, but dd is just a way to copy bits from one file to another. It can operate on any file, so if you can figure out a way to make a remote disk show up as a writeable file on your box, dd can operate on it. The thing is, I don't know how gracefully it handles the connection dropping, so error recovery is probably going to be a manual affair. It's not meant to be used that way, AFAIK.

 

http://unix.stackexchange.com/questions/132797/how-to-dd-a-remote-disk-using-ssh-on-local-machine-and-save-to-a-local-disk

 

Thanks for the ideas. My other server is on the same gigabit LAN, and I rsync daily via an NFS mount like:

 

mkdir /mnt/s1disk1
mount -t nfs server1:/mnt/disk1/ /mnt/s1disk1
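
followed by an rsync along these lines (the exact flags here are just an example of a typical mirror):

rsync -av --delete /mnt/s1disk1/ /mnt/disk1/   # mirror server1's disk1 onto this server's disk1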

 

Cloning a disk directly from one server to another would be a timesaver for me, and it would allow me to convert the disks over to XFS at the same time.

Link to comment

dd is intended to be used on drives in the same system. You can use it over the wire, but doing dd across the wire is slow... you have to use netcat or ssh (netcat is unencrypted and faster, and you can pipe the stream through bzip2; ssh is secure, but slow).
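
For example, a netcat transfer might look like this (untested sketch; the port, hostname, and device names are placeholders, and some nc builds use a slightly different listen syntax):

# on the destination server: listen, decompress, write to the target disk
nc -l -p 1234 | bzip2 -d | dd of=/dev/sdY bs=32M

# on the source server: read the disk, compress, send
dd if=/dev/sdX bs=32M | bzip2 -c | nc destination-server 1234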

Link to comment

dd is intended to be used on drives in the same system. You can use it over the wire, but doing dd across the wire is slow... you have to use netcat or ssh (netcat is unencrypted and faster, and you can pipe the stream through bzip2; ssh is secure, but slow).

 

Thanks for the confirmation. I suspected this was a fool's gold request ("yeah, you could make it work, BUT...", or more simply put, "sorry, it doesn't work that way...").

 

Just having the dd option, even if you have to do it in the same server, is still a major improvement. Thanks all...

Link to comment

dd is intended to be used on drives in the same system.

Don't know the answer to that, I'm sure someone else does.

dd is just a way to copy bits from one file to another.

I followed bubbaq's advice that the safest way is to do it with internal disks.

 

I have 2 servers where the backup server is a disk-for-disk match to my main server. My main server is XFS now, but the backup server is still RFS. It is time to convert the backup server to XFS by cloning the disks in the main server and installing them in the backup server.

 

Using 3 spare disks, I created 3 XFS disks from the main server, while the array was stopped, using:

 

dd if=/dev/sdX of=/dev/sdY bs=32M

 

I then pulled them from the main server and installed them in the backup server with its array stopped, and used the same command to write over the existing 3 disks in that array. The intent was to end up with 3 newly converted XFS disks in the backup server. The problem was that once the RFS disks were overwritten with XFS and I tried to start the array, it wouldn't start; it claimed the disks were unmountable (unRAID 6.1.9).

 

I then rebooted and they were still unmountable. I finally did a New Config, which allowed me to add them back in, and they showed up as expected, converted to XFS. I will now build fresh parity.

 

Is it expected behavior for the disks to show up unmountable when the complete disk is rewritten from RFS to XFS?

Link to comment

You probably had the disks' filesystem set to ReiserFS; after doing the New Config it was reset to the default (auto).

 

The default file system is now set to XFS. Not sure what it would have been before doing the New Config.

 

Could be RFS, as this was a system that ran on v5 for a long time before upgrading to v6 only a few months ago.

Link to comment

dd if=/dev/sdX of=/dev/sdY bs=32M

 

After converting a number of disks from RFS to XFS via dd, I have this situation. After dd finished, the disk was pulled, moved to another server, hot-plugged, and mounted via Unassigned Devices. The other times I rebooted after doing an RFS->XFS conversion via dd, but since this is a different server that was running processes I didn't want to interrupt, I did it this way.

 

Now, I know hot-plugging a drive is not fully supported, but I have done it many, many times without issue. The disk does not mount, and we get the following log entries from Unassigned Devices...

 

Jun 21 23:28:49 Adding disk '/dev/sdg1'...
Jun 21 23:28:49 Mount drive command: /sbin/mount -t xfs -o rw,noatime,nodiratime '/dev/sdg1' '/mnt/disks/WDC_6128'
Jun 21 23:28:49 Mount of '/dev/sdg1' failed. Error message: mount: wrong fs type, bad option, bad superblock on /dev/sdg1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so

 

and syslog has the following hot-plug events, and then the mount error:

 

Jun 21 23:42:26 Kim kernel: ata4: exception Emask 0x10 SAct 0x0 SErr 0x4090000 action 0xe frozen
Jun 21 23:42:26 Kim kernel: ata4: irq_stat 0x00400040, connection status changed
Jun 21 23:42:26 Kim kernel: ata4: SError: { PHYRdyChg 10B8B DevExch }
Jun 21 23:42:26 Kim kernel: ata4: hard resetting link
Jun 21 23:42:27 Kim kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jun 21 23:42:32 Kim kernel: ata4: hard resetting link
Jun 21 23:42:32 Kim kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jun 21 23:42:32 Kim kernel: ata4: limiting SATA link speed to 1.5 Gbps
Jun 21 23:42:37 Kim kernel: ata4: hard resetting link
Jun 21 23:42:37 Kim kernel: ata4: SATA link down (SStatus 0 SControl 310)
Jun 21 23:42:37 Kim kernel: ata4.00: disabled
Jun 21 23:42:37 Kim kernel: ata4: EH complete
Jun 21 23:42:37 Kim kernel: ata4.00: detaching (SCSI 5:0:0:0)
Jun 21 23:42:37 Kim kernel: sd 5:0:0:0: [sdg] Synchronizing SCSI cache
Jun 21 23:42:37 Kim kernel: sd 5:0:0:0: [sdg] Synchronize Cache(10) failed: Result: hostbyte=0x04 driverbyte=0x00
Jun 21 23:42:37 Kim kernel: sd 5:0:0:0: [sdg] Stopping disk
Jun 21 23:42:37 Kim kernel: sd 5:0:0:0: [sdg] Start/Stop Unit failed: Result: hostbyte=0x04 driverbyte=0x00
Jun 21 23:42:48 Kim kernel: ata1: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
Jun 21 23:42:48 Kim kernel: ata1: irq_stat 0x00000040, connection status changed
Jun 21 23:42:48 Kim kernel: ata1: SError: { CommWake DevExch }
Jun 21 23:42:48 Kim kernel: ata1: hard resetting link
Jun 21 23:42:53 Kim kernel: ata1: link is slow to respond, please be patient (ready=0)
Jun 21 23:42:57 Kim kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
Jun 21 23:42:57 Kim kernel: ata1.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded
Jun 21 23:42:57 Kim kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Jun 21 23:42:57 Kim kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Jun 21 23:42:57 Kim kernel: ata1.00: ATA-9: WDC WD30EZRX-00DC0B0, WD-WMC1T0416128, 80.00A80, max UDMA/133
Jun 21 23:42:57 Kim kernel: ata1.00: 5860533168 sectors, multi 0: LBA48 NCQ (depth 31/32), AA
Jun 21 23:42:57 Kim kernel: ata1.00: ACPI cmd ef/10:06:00:00:00:00 (SET FEATURES) succeeded
Jun 21 23:42:57 Kim kernel: ata1.00: ACPI cmd f5/00:00:00:00:00:00 (SECURITY FREEZE LOCK) filtered out
Jun 21 23:42:57 Kim kernel: ata1.00: ACPI cmd b1/c1:00:00:00:00:00 (DEVICE CONFIGURATION OVERLAY) filtered out
Jun 21 23:42:57 Kim kernel: ata1.00: configured for UDMA/133
Jun 21 23:42:57 Kim kernel: ata1: EH complete
Jun 21 23:42:57 Kim kernel: scsi 2:0:0:0: Direct-Access ATA WDC WD30EZRX-00D 0A80 PQ: 0 ANSI: 5
Jun 21 23:42:57 Kim kernel: sd 2:0:0:0: Attached scsi generic sg7 type 0
Jun 21 23:42:57 Kim kernel: sd 2:0:0:0: [sdg] 5860533168 512-byte logical blocks: (3.00 TB/2.73 TiB)
Jun 21 23:42:57 Kim kernel: sd 2:0:0:0: [sdg] 4096-byte physical blocks
Jun 21 23:42:57 Kim kernel: sd 2:0:0:0: [sdg] Write Protect is off
Jun 21 23:42:57 Kim kernel: sd 2:0:0:0: [sdg] Mode Sense: 00 3a 00 00
Jun 21 23:42:57 Kim kernel: sd 2:0:0:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jun 21 23:42:57 Kim kernel: sdg: sdg1
Jun 21 23:42:57 Kim kernel: sd 2:0:0:0: [sdg] Attached SCSI disk
Jun 21 23:43:28 Kim kernel: XFS (sdg1): Filesystem has duplicate UUID 822e2574-c2ee-4fcf-a00b-38a4acbc4865 - can't mount
Jun 21 23:13:49 Kim kernel: XFS (sdg1): Filesystem has duplicate UUID 822e2574-c2ee-4fcf-a00b-38a4acbc4865 - can't mount

 

I suppose a reboot is in order, but I have never had to do that before. Moving the disk back to the original server where it was converted via dd, it hot-plugs and mounts via Unassigned Devices as a new XFS disk without issue. However, on the original server it seems to conflict with disk4 (disk4 was the source of the dd copy, so this makes sense): disk4 shows unmountable if this disk is mounted via Unassigned Devices. Unmounting this disk allows the array to start and disk4 is good, but then this disk won't mount.

 

Jun 22 16:12:49 Server1 kernel: XFS (sdb1): Filesystem has duplicate UUID 822e2574-c2ee-4fcf-a00b-38a4acbc4865 - can't mount

 

I guess I need to understand XFS better.  Puzzling...
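
At least the clash is easy to confirm: blkid prints each partition's UUID, and both copies report the same one here (device name as on my server):

blkid /dev/sdb1   # UUID="822e2574-c2ee-4fcf-a00b-38a4acbc4865" TYPE="xfs"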

 

Link to comment

Post in the unassigned devices plugin thread, maybe they can help.

 

It appears that I am running into this issue; do you agree?

 

http://unix.stackexchange.com/questions/12858/how-to-change-partition-uuid-2-same-uuid

 

And now the UUID change process is not working...

 

root@Server1:~# xfs_admin -U generate /dev/sdb1
xfs_admin: only 'rewrite' supported on V5 fs

 

This UUID change limitation is acknowledged here:

 

https://bugzilla.redhat.com/show_bug.cgi?id=1233220
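
For what it's worth, it sounds from that bug report like newer kernels and xfsprogs (4.3+, via a new metadata UUID feature) do allow this on V5 filesystems, so on a more recent system the usual sequence should work (same device name as above; the filesystem must be unmounted):

umount /dev/sdb1                  # the UUID can only be changed while unmounted
xfs_admin -U generate /dev/sdb1   # writes a fresh random UUID
xfs_admin -u /dev/sdb1            # lowercase -u just prints the current UUID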

Link to comment

The fact that a cloned disk has the same UUID and can't be mounted together with the source is expected; the same happens with an unRAID rebuild. I can't see how you would get the same UUID on a different server, though. Coincidence, or did you clone the same disk twice?

 

As for changing the UUID, I've never tried that, but according to the link you posted it looks like it's not currently possible with XFS.

Link to comment

dd if=/dev/sdX of=/dev/sdY bs=32M

 

After converting a number of disks from RFS to XFS via dd, I have this situation. After dd finished, the disk was pulled, moved to another server, hot-plugged, and mounted via Unassigned Devices. The other times I rebooted after doing an RFS->XFS conversion via dd, but since this is a different server that was running processes I didn't want to interrupt, I did it this way.

 

I might have misunderstood you, but you can't change the filesystem from RFS to XFS with dd. The target disk will have the same filesystem as the source disk.

Link to comment

I might have misunderstood you, but you can't change the filesystem from RFS to XFS with dd. The target disk will have the same filesystem as the source disk.

 

From the OP's earlier posts it appears that he's got two servers, the contents of the disks being the same but the main one having XFS-formatted disks and the backup one having ReiserFS-formatted disks. He's cloning the disks from the main server one at a time and inserting the clones into the backup server in place of the equivalent ReiserFS disk. It isn't the way I'd go about converting from ReiserFS to XFS or how I'd maintain a backup server, but as an experiment it's interesting to watch. I wouldn't attempt the hot-plugging either.

 

EDIT: There's more in this thread: https://lime-technology.com/forum/index.php?topic=49938.0

Link to comment

I might have misunderstood you, but you can't change the filesystem from RFS to XFS with dd. The target disk will have the same filesystem as the source disk.

 

From the OP's earlier posts it appears that he's got two servers, the contents of the disks being the same but the main one having XFS-formatted disks and the backup one having ReiserFS-formatted disks. He's cloning the disks from the main server one at a time and inserting the clones into the backup server in place of the equivalent ReiserFS disk. It isn't the way I'd go about converting from ReiserFS to XFS or how I'd maintain a backup server, but as an experiment it's interesting to watch. I wouldn't attempt the hot-plugging either.

 

EDIT: There's more in this thread: https://lime-technology.com/forum/index.php?topic=49938.0

 

Basically correct. I have several backup servers replicating the main server. These are kept up to date daily or weekly with rsync, and each disk is replicated separately (for example, disk1 on each server is the same size and has the same contents). However, the painful and involved process required to convert the main server to XFS was not repeated for the backup servers. I figured that, for a one-time event, it would be OK to simply clone the disks in the main server and swap them into the backup servers.

 

Initially, I would fail a drive in the main server, replace the disk with a spare, and rebuild, causing a clone to be made. That clone could then make its way into one of the backup servers. Since I often didn't want the spare drive to remain in the main server, I would pull it and do a New Config, which meant all drives had to be carefully re-assigned to their proper slots in unRAID. That is a huge opportunity for user error, so I was looking for a better way to clone an XFS disk.

 

Finally, this "dd" method was pointed out by @johnnie.black

 

Works perfectly, provided you understand the limitations. The fact that you cannot currently change the UUID of an XFS drive is troubling: if I ever need to yank a drive and put it into another server due to some sort of disaster, I will have a few UUID challenges to overcome.

Link to comment

I can't help wondering why it's so important for the contents of the disks to be identical across the servers, and why you would ever consider moving a disk from one server to another like that. The alternative, letting a failed disk rebuild in situ from parity, is surely better in almost every respect. Just curious; if it works for you then no problem.

 

Link to comment
