JasenHicks Posted April 12, 2021

26 minutes ago, JorgeB said:
use -L

Cool, got that done. Re-ran the UUID change, but it gave an error:

Apr 12 05:20:28 Tower unassigned.devices: Error: shell_exec(/usr/sbin/xfs_admin -U generate /dev/sdd1) took longer than 1s!
Apr 12 05:20:28 Tower unassigned.devices: Changing disk '/dev/sdd' UUID. Result: command timed out
JorgeB Posted April 12, 2021

Try manually:

xfs_admin -U generate /dev/sdd1
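(If you want to double-check that it took, blkid should print the new UUID for the partition - assuming the device node is still /dev/sdd1:)

# Confirm the partition now reports a fresh UUID
blkid /dev/sdd1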
JasenHicks Posted April 12, 2021

12 minutes ago, JorgeB said:
Try manually: xfs_admin -U generate /dev/sdd1

God dang, you are a hero. That worked. Now, the hard part?

1. Start the array
2. Mount the ZAD8EYTY drive
3. rsync between the two?

Should I do that in the Unraid terminal?
JorgeB Posted April 12, 2021

16 minutes ago, JasenHicks said:
Cool, got that done. Re-ran the UUID change, but it gave an error ...

@dlandon Looks like you need to make the timeout for this longer.

2 minutes ago, JasenHicks said:
1. Start the array 2. Mount the ZAD8EYTY drive 3. rsync between the two? Should I do that in the Unraid terminal?

Yep, you can use for example:

rsync -av /mnt/disks/name_of_UD_disk/ /mnt/diskX/

Replace X with the correct disk number.
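(One optional sanity check before committing: rsync's -n flag does a dry run. A sketch using the same example paths as above - nothing is copied until you drop the -n:)

# Dry run: lists what would be transferred without writing anything
rsync -avn /mnt/disks/name_of_UD_disk/ /mnt/diskX/

# If the list looks right, run it for real, with per-file progress
rsync -av --progress /mnt/disks/name_of_UD_disk/ /mnt/diskX/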
JasenHicks Posted April 12, 2021

Is the name_of_UD_disk the stand-alone disk we just did all the XFS repair on?
JorgeB Posted April 12, 2021

1 minute ago, JasenHicks said:
Is the name_of_UD_disk the stand-alone disk we just did all the XFS repair on?

Nope, click the + sign; it's shown below that. Note that you can change the name before mounting the disk by clicking on it.
JasenHicks Posted April 12, 2021

Sorry, I should have been more clear. Apologies... Is the first mounted disk in the rsync command the disk I just ran the XFS repair on, and the second disk the disk on the array I want to sync it to (in my case, disk 7)?
JasenHicks Posted April 12, 2021

I'm sorry for the silly questions back and forth and am truly grateful for the assistance. I re-read the post and was like "DUH" - UD = Unassigned Device.

Ran the command:

rsync -av /mnt/disks/oldSeagate/ /mnt/disk7/

Output (error) below:

Linux 4.19.107-Unraid.
Last login: Mon Apr 12 05:29:17 -0700 2021 on /dev/pts/0.
root@Zeus:~# rsync -av /mnt/disks/oldSeagate/ /mnt/disk7/
sending incremental file list
rsync: ERROR: cannot stat destination "/mnt/disk7/": Input/output error (5)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(642) [Receiver=3.1.3]
root@Zeus:~#
JasenHicks Posted April 12, 2021 Author Share Posted April 12, 2021 zeus-diagnostics-20210412-2307.zip Quote Link to comment
JorgeB Posted April 12, 2021

Fix the filesystem on disk7, then try again. With the array in maintenance mode:

xfs_repair -v /dev/md7

If it asks for -L, use it.
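(If you'd rather see what it would do first, xfs_repair also has a read-only mode - same device as above:)

# No-modify mode: reports problems but writes nothing
xfs_repair -n /dev/md7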
JasenHicks Posted April 12, 2021

Cool, just stopped the array, went into maintenance mode, and ran the command. Once it's done, I'll go back to non-maintenance mode and re-run the rsync.
JasenHicks Posted April 12, 2021

OMG... rsync is doing something. @JorgeB - I could kiss you on the face right now! I'll refrain until this is done and actually works or something. Seriously, let me know how I can send you something as a token of my gratitude.
JasenHicks Posted April 13, 2021

@JorgeB - you are the man. The only thing that seems to still need attention is my cache pool.
JorgeB Posted April 13, 2021

According to that, there are two missing pool devices.
JasenHicks Posted April 13, 2021

Not sure why. They are all in the system. I bet I dorked something up when adding them back; just not sure what.
JorgeB Posted April 13, 2021

They should have a blue icon. Try this:

1. Unassign all cache devices.
2. Start the array to make Unraid "forget" the current cache config.
3. Stop the array.
4. Reassign all cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device).
5. Start the array.
JasenHicks Posted April 13, 2021

OK, did the following:

1. Stopped the array.
2. Unassigned all the cache drives. It wouldn't let me start the array, though.
3. Reassigned the cache drives. Started up the array.
4. Now we have GREEN BUBBLES next to each cache drive, but still have the "Unmountable: No File System" on the NVMe drive.
JorgeB Posted April 13, 2021

Grab current diags, reboot, grab new diags, and post both here.
JasenHicks Posted April 13, 2021 Author Share Posted April 13, 2021 Current.zeus-diagnostics-20210414-0540.zip Quote Link to comment
JasenHicks Posted April 13, 2021 Author Share Posted April 13, 2021 Post Reboot Diags. zeus-diagnostics-20210414-0554.zip Quote Link to comment
JorgeB Posted April 14, 2021

When you started the array before the first screenshot, i.e., when both additional cache devices had a blue icon, they were wiped:

Apr 13 01:31:05 Zeus emhttpd: shcmd (453): /sbin/wipefs -a /dev/sdc1
Apr 13 01:31:05 Zeus root: /dev/sdc1: 8 bytes were erased at offset 0x00010040 (btrfs): 5f 42 48 52 66 53 5f 4d
Apr 13 01:31:05 Zeus emhttpd: shcmd (454): /sbin/wipefs -a /dev/sdd1
Apr 13 01:31:06 Zeus root: /dev/sdd1: 8 bytes were erased at offset 0x00010040 (btrfs): 5f 42 48 52 66 53 5f 4d
Apr 13 01:31:06 Zeus emhttpd: cache uuid: 9ac8c9e6-3103-4734-a27f-a043b30a7659

There should have been an "all data on this device will be deleted at array start" warning in red in front of both. Now you can try this; with some luck it might work. With the array stopped, type on the console:

btrfs-select-super -s 1 /dev/sdc1

then

btrfs-select-super -s 1 /dev/sdd1

Now start the array.
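(If you want to verify the recovery - a sketch, assuming the usual btrfs-progs tools are available on the console:)

# Optional read-only check of each device before starting the array
btrfs check --readonly /dev/sdc1
btrfs check --readonly /dev/sdd1

# Once the array is up, confirm all pool members appear under one UUID
btrfs filesystem show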
JasenHicks Posted April 14, 2021

33 minutes ago, JorgeB said:
With the array stopped, type on the console: btrfs-select-super -s 1 /dev/sdc1 ... Now start the array.

You are a god damn hero. Thank you! I seem to be fully up and running again.