Migrating from Inoperable Synology SHR Volume to Unraid



Noob in need of help.

 

I am new to the community and have just completed my first server build after our Synology 916+ died during COVID-19. I decided to switch from Synology products for many reasons, but I am not a Linux expert. So now the rubber meets the road.

 

I was able to configure and install Unraid with the Space Invaderone guides and finally got a small volume up and running (no cache or parity yet). 

 

Now I need to mount and begin the transfer of my Synology SHR volume's content to the new Unraid volume. I have 14 drives mounted in a Supermicro 846 chassis; 7 are part of the SHR volume and the remaining 7 are unassigned. Currently, the Synology SHR drives are listed as "unassigned devices" and I have made them read-only to keep things safe. The SHR volume mostly contains movies, so I am not too fussed if something goes wrong during transfer, but I would like to do it as safely and quickly as possible.

 

I planned to mount the Synology drives in an Ubuntu VM, as detailed in Synology's recovery instructions (https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC), then expand the Unraid volume and rsync the important shares over to it.
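For what it's worth, my reading of that tutorial is that once the drives are visible inside the VM it comes down to roughly the following (my own summary, not verbatim from Synology, and the volume group / logical volume names will differ per box):

# inside the Ubuntu VM, as root
apt-get update && apt-get install -y mdadm lvm2

# assemble the Synology md arrays and activate any LVM volumes sitting on top of them
mdadm -Asf && vgchange -ay

# find the data volume (lvs/lsblk will show it), then mount it read-only
mkdir -p /mnt/synology
mount -o ro /dev/vg1000/lv /mnt/synology   # /dev/vg1000/lv is just a typical SHR name, check lvs first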

 

Is this possible? Does the community have a better method?

 

As mentioned in the title, the Synology 916+ is inoperable, so I don't have access to that device anymore, and the standalone desktop PC we have running Windows 10 does not have the SATA ports or chassis to support the drives that are already physically mounted in the Supermicro 4U chassis.

 

Help please! So much appreciated!!

 

Stay safe, everyone!

 

Supermicro X9DR3-F Version 0123456789

BIOS: American Megatrends Inc. Version 3.3. Dated: 07/12/2018

CPU: Intel® Xeon® CPU E5-2650 v2 @ 2.60GHz

HVM: Enabled

IOMMU: Enabled

Cache: 512 KiB, 2048 KiB, 20480 KiB, 512 KiB, 2048 KiB, 20480 KiB

Memory: 64 GiB DDR3 Multi-bit ECC (max. installable capacity 192 GiB)

Network: bond0: fault-tolerance (active-backup), mtu 1500

 eth0: interface down

 eth1: 1000 Mbps, full duplex, mtu 1500

 eth2: interface down

Kernel: Linux 4.19.107-Unraid x86_64

OpenSSL: 1.1.1d

Uptime: 0 days, 08:18:13

7 hours ago, Madman2012 said:

I planned to mount the Synology drives in an Ubuntu VM, as detailed in Synology's recovery instructions

Setting up an Ubuntu VM and passing the physical drives through shouldn't be a big deal.

 

Create a share in Unraid where you want to copy your data to. Depending on the amount of data you want to recover, you need enough free space on the Unraid array. Having the array up and running with parity already configured is the way I would go. Keep in mind, no data disk in the array can be bigger than the parity drive, so don't start with, let's say, a 1TB parity + 1TB + 500GB + 250GB array if you want to add bigger drives to the array later anyway.

 

Set up the VM first without any drives from the Synology attached. Make sure it is working and has access to the Unraid shares. Then shut down the VM and attach the drives to it. You have a couple of options: you can either pass through the whole controller the drives are connected to (if you have a separate RAID card or HBA), or you can pass the drives through by-id. Let me explain.

 

First, check the ID/name of the drives under Unassigned Devices. Make sure they are not mounted and make a note of every drive that is a member of the Synology SHR RAID. For example:

[screenshot: the drive listed under Unassigned Devices]

Open up a terminal, navigate to /dev/disk/by-id/ and list all the devices:
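It's just a plain ls, nothing Unraid-specific:

ls -l /dev/disk/by-id/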

[screenshot: listing of /dev/disk/by-id/ in the terminal]

 

You should see all drives and partitions in your Unraid box. In my example you see 2 entries for the drive we wanna use. 

 

"ata-ST2000LM007-1R8174_WDZ5J72F" is the full device

"ata-ST2000LM007-1R8174_WDZ5J72F-part1" is only one partition of the disk.

 

You need the full drive passed through to the VM. Make notes for all the drives you need and go to the next step. Shut down the VM and edit the template.

Click the small + button below the already existing vdisk and add the first disk. Change it from "Auto" to "Manual" and enter the full path

 

[screenshot: vdisk location set to Manual with the full /dev/disk/by-id/ path entered]
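For reference, each manually added device ends up in the VM's XML as a block-device disk entry roughly like this (a sketch using the example drive from above, not copied from a real template; the target letter, cache mode and bus depend on the template and on how many disks you already have):

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/disk/by-id/ata-ST2000LM007-1R8174_WDZ5J72F'/>
  <target dev='hdc' bus='sata'/>
</disk>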

 

Again, don't pass through individual partitions; only use the full disk ID. Repeat the steps and add all the SHR drive members. If you start your VM, you should have access to all the drives and can use the Synology tutorial to recover the data and copy it to the Unraid share.
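Once the SHR volume is mounted read-only inside the VM and the Unraid share is mounted too (SMB/NFS or the 9p share from the template), the copy itself can be a plain rsync, for example (both paths here are placeholders for wherever you mounted things):

rsync -avh --progress /mnt/synology/Movies/ /mnt/unraid-media/Movies/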

 


Thank you for this! I finally got the VM up and running, but I cannot pass the drives through yet...

 

Now the issue I am having is getting the settings that pass the physical drives through to the VM to persist. When I follow your instructions above, I hit the Update button and it just sits there with "Updating..." but never gives a confirmation, and if I refresh, the settings didn't persist. I deleted the VM and started over the same way, and I am having the same issue with the settings not being saved. I have even restarted the server and tried different browsers. Not sure what is going on ATM. Screenshot below.

 

 

Any advice? I sincerely appreciate the community's help on this project. I am starting to get a feel for the power of Unraid but wish I could get this array mounted and transferred.

[screenshot attached: Capture.JPG, the VM edit page stuck on "Updating..."]


OK, so I tried to edit the XML file to mount the drives. It appeared to work for the first drive, but after I added the others I now get a different error when trying to start the VM. Below is the log. I would kindly appreciate the help, as I am trying to recover the data from this device and use Unraid going forward.

 

Quote

-device ide-hd,bus=ide.3,drive=libvirt-7-format,id=sata0-0-3,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST6000VN0021-1ZA17Z_S4D0LSW7","node-name":"libvirt-6-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-6-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-6-storage"}' \
-device ide-hd,bus=ide.4,drive=libvirt-6-format,id=sata0-0-4,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST6000VN0021-1ZA17Z_S4D0PBKR","node-name":"libvirt-5-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-5-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-5-storage"}' \
-device ide-hd,bus=ide.5,drive=libvirt-5-format,id=sata0-0-5,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST6000VN0021-1ZA17Z_Z4D1ECBT","node-name":"libvirt-4-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-4-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-4-storage"}' \
-device ide-hd,bus=ide.6,drive=libvirt-4-format,id=sata0-0-6,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST4000DM000-1F2168_Z301XY3E","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
-device ide-hd,bus=ide.7,drive=libvirt-3-format,id=sata0-0-7,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST4000DM000-1F2168_Z300X4MA","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-hd,bus=ide.8,drive=libvirt-2-format,id=sata0-0-8,write-cache=on \
-blockdev '{"driver":"host_device","filename":"/dev/disk/by-id/ata-ST4000DM000-1F2168_Z300X9RS","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-hd,bus=ide.9,drive=libvirt-1-format,id=sata0-0-9,write-cache=on \
-fsdev local,security_model=passthrough,id=fsdev-fs0,path=/mnt/user/Media/ \
-device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=Media,bus=pci.1,addr=0x0 \
-netdev tap,fd=36,id=hostnet0,vhost=on,vhostfd=37 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:92:93:93,bus=pci.3,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=38,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=2 \
-vnc 0.0.0.0:0,websocket=5700 \
-k en-us \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
-device usb-host,hostbus=1,hostaddr=3,id=hostdev0,bus=usb.0,port=1 \
-device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2020-05-01 19:48:13.804+0000: Domain id=28 is tainted: high-privileges
2020-05-01 19:48:13.804+0000: Domain id=28 is tainted: host-cpu
char device redirected to /dev/pts/1 (label charserial0)
2020-05-01T19:48:13.937495Z qemu-system-x86_64: -device ide-hd,bus=ide.6,drive=libvirt-4-format,id=sata0-0-6,write-cache=on: Bus 'ide.6' not found

Here is the XML:

Quote


 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Ubuntu</name>
  <uuid>bc82ea8c-20a7-50b0-4432-06c988b1ce06</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
  </metadata>
  <memory unit='KiB'>7864320</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='16'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='17'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='18'/>
    <vcpupin vcpu='6' cpuset='3'/>
    <vcpupin vcpu='7' cpuset='19'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/bc82ea8c-20a7-50b0-4432-06c988b1ce06_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>

 

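From reading around, I suspect the "Bus 'ide.6' not found" line at the end of the log means the emulated SATA controller only has ports 0-5, so once the vdisk plus seven passed-through drives all sit on the same controller, the later ones overflow onto ports that don't exist. If that's right, I'm guessing the extra drives need a second SATA controller (or a different bus) in the XML, something like this (a guess on my part, untested; drive ID taken from the log above):

<controller type='sata' index='1'/>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/disk/by-id/ata-ST4000DM000-1F2168_Z300X9RS'/>
  <target dev='hdk' bus='sata'/>
  <address type='drive' controller='1' bus='0' target='0' unit='0'/>
</disk>

Can anyone confirm whether that is the right way to go?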

 
