dopeytree Posted January 12: I followed the steps in the manual for removing a drive. First perform a balance to change to a single drive. It confirmed it was in single disk mode. I thought that meant it was then OK to stop the array and remove the drive. When I restarted the array it says there is no file system and it cannot read data on either disk. There is no critical data on it, but what steps should I have taken?
JorgeB Posted January 12:
16 minutes ago, dopeytree said: First perform a balance to change to a single drive.
Can you post a link to where you saw that? You can only remove a device from a redundant pool; the pool is then automatically converted to single (if there's only one device remaining).
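For anyone following along, a quick way to confirm whether a pool really is redundant before pulling a device is to check its allocation profiles. This is just a generic btrfs sketch; the /mnt/cache mount point is an assumption, substitute your own pool name:

# a redundant pool reports "Data, RAID1" and "Metadata, RAID1" here;
# "single" or "RAID0" means removing a device will lose data
btrfs filesystem df /mnt/cache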
dopeytree Posted January 12 (Author): https://wiki.unraid.net/Manual/Storage_Management#Removing_disks_from_a_multi-device_pool
I think I've probably killed the data, which is OK as all the appdata is backed up to the array. How is one supposed to remove a drive from a 2-drive cache pool? Pool TWO has 2 drives in raid1. I want to eliminate the mirror and run in single mode. Is it better to copy the data to another drive and wipe both drives? The system did hang during one of the BTRFS balance checks, but that was actually when it was checking another pool, not this one. Thanks for your help 🙂
dopeytree Posted January 12 (Author):
root@moulin-rouge:~# blkid
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/loop1: TYPE="squashfs"
/dev/sdf1: UUID="72c4582b-ce97-49cb-b904-4e2c05073dda" UUID_SUB="30473b54-e841-4935-9880-ea16cc627683" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="506db2dc-01"
/dev/nvme0n1p1: UUID="24e861c6-e86f-4b55-b120-6ad8847c8d97" UUID_SUB="2f33bc93-9362-490f-a71e-04c42a03d74b" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="506db2db-01"
/dev/sdd1: UUID="aaa46ac2-3000-4696-b0eb-cea90fb5ea17" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="998d4295-3b12-4efe-a724-bd27751fe6f4"
/dev/sdb1: UUID="6c8c21d2-3659-46bd-b2bc-7a32eca49098" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="0ec34a92-1dfd-48a3-9dbe-adf6bd457e17"
/dev/sdg1: UUID="72c4582b-ce97-49cb-b904-4e2c05073dda" UUID_SUB="ca0c75b3-f370-4caf-988c-b6001af533be" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="506db2d2-01"
/dev/loop0: TYPE="squashfs"
/dev/sde1: UUID="39879032-0914-4994-b0cb-53c9fd59f0b1" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="32000865-aad5-4bfe-ba13-02573c2916c0"
/dev/sdc1: UUID="ffafdb22-0f4d-49bf-b29c-e7521e488a3e" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="2d2ef932-f44d-37fc-375d-6a90c02b990f"
/dev/md2: UUID="aaa46ac2-3000-4696-b0eb-cea90fb5ea17" BLOCK_SIZE="512" TYPE="xfs"
/dev/md3: UUID="39879032-0914-4994-b0cb-53c9fd59f0b1" BLOCK_SIZE="512" TYPE="xfs"
/dev/md1: UUID="ffafdb22-0f4d-49bf-b29c-e7521e488a3e" BLOCK_SIZE="512" TYPE="xfs"
/dev/nvme1n1p1: PARTUUID="506db2d0-01"
root@moulin-rouge:~#
JorgeB Posted January 12:
6 hours ago, dopeytree said: First perform a balance to change to a single drive.
And where do you see this? It explicitly says: (the relevant section of the manual was quoted here)
JorgeB Posted January 12:
8 minutes ago, dopeytree said: How is one supposed to remove a drive from a 2-drive cache pool?
If it's a raid1 pool, just stop the array, unassign the device you want to remove, and start the array.
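For reference, outside of the Unraid GUI the equivalent manual steps look roughly like the sketch below; it is a generic btrfs sequence, not necessarily what Unraid does internally, and the /mnt/cache mount point and /dev/sdX1 device name are placeholders. The key point is that the device remove step is what migrates data off the disk; a convert-to-single balance on its own leaves data spread across both devices:

# drop redundancy so one device can hold everything
# (-f is needed because metadata redundancy is being reduced)
btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache
# migrate all data off the device being removed, then detach it
btrfs device remove /dev/sdX1 /mnt/cache
# optionally restore duplicated metadata on the remaining device
btrfs balance start -mconvert=dup /mnt/cache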
JorgeB Posted January 12: If you didn't reboot yet, post the diags; the pool might still be salvageable.
dopeytree Posted January 12 (Author):
4 minutes ago, JorgeB said: If it's a raid1 pool, just stop the array, unassign the device you want to remove, and start the array.
OK thanks. It said disk missing and wouldn't start the array, so I must have missed a step, or just messed it up with the balance. I suppose I don't really know what the highlighted bit in yellow means; could we word it with an example in the manual? Diags attached. moulin-rouge-diagnostics-20230112-1654.zip
JorgeB Posted January 12:
19 minutes ago, dopeytree said: I suppose I don't really know what the highlighted bit in yellow means; could we word it with an example in the manual?
It just means the pool needs to be redundant, for example raid1; it won't work for a single profile or raid0 pool. The diags are from after rebooting, so assuming nvme1n1 was the other pool member, with the array stopped type:
btrfs-select-super -s 1 /dev/nvme1n1p1
Then, without starting the array, post the output of:
btrfs fi show
P.S. btrfs is detecting data corruption on multiple devices; unless these are old errors that were never reset, you likely have a RAM problem.
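If you want to see whether those corruption counters are historical or still climbing, btrfs keeps per-device error statistics. A minimal sketch, assuming the pool ends up mounted at /mnt/cache again:

# show read/write/corruption error counters for every device in the pool
btrfs device stats /mnt/cache
# once the cause is fixed, zero the counters so any new errors stand out
btrfs device stats -z /mnt/cache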
itimpi Posted January 12:
8 hours ago, dopeytree said: I followed the steps in the manual for removing a drive. First perform a balance to change to a single drive. It confirmed it was in single disk mode. I thought that meant it was then OK to stop the array and remove the drive. When I restarted the array it says there is no file system and it cannot read data on either disk. There is no critical data on it, but what steps should I have taken?
The instructions say to make sure that it is in a redundant mode BEFORE you remove the drive. From your description, this is not what you did?
dopeytree Posted January 12 (Author): I don't really understand... it says to scroll down to balance. It should then say to scroll back up, just stop the array, and remove the drive. I'm sure I did try this and it wouldn't let me restart the array, but perhaps that was after I did something else. "BTRFS can add and remove devices online, and freely convert between RAID levels after the file system has been created." - I guess this doesn't mean single disk mode? In what use case would someone use the single-drive balance? Is that what you'd do after removing the mirrored cache? Anyway, thanks for the guidance, guys. Am running memtest. You can't easily make the UEFI image on a Mac anymore; I probably could have done it on the Steam Deck in desktop mode. Anyway, my brother imaged the USB stick on Windows. Will run those commands once memtest finishes.
JorgeB Posted January 13:
12 hours ago, dopeytree said: In what use case would someone use the single-drive balance? Is that what you'd do after removing the mirrored cache?
No, that is done automatically after you remove a drive (if only one remains). You might want to do it manually if, for example, you have two devices and want to use their full space for storage instead of having a mirror.
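In that second case (two devices, no mirror, full capacity usable) the manual equivalent is a balance that converts the data profile. A rough sketch, assuming the pool is mounted at /mnt/cache; keeping metadata as raid1 here is a deliberate choice, not a requirement:

# spread data across both devices without mirroring it, so their full capacity is usable
btrfs balance start -dconvert=single /mnt/cache
# verify the resulting profiles
btrfs filesystem df /mnt/cache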
dopeytree Posted January 13 (Author): Something strange is happening. The server name has changed to the default Tower when it should be moulin-rouge. What causes this? Ran memtest overnight. I've read some people run the memory test for 24hrs or beyond, but the free UEFI one stopped after doing the four test types, so I'm not sure if there's a difference between the memtest website version compared to the Unraid bundled version. The result is that memtest said it passed, but I will try some other RAM to be safe. Here's the output of the commands requested above:
root@Tower:~# btrfs-select-super -s 1 /dev/nvme1n1p1
using SB copy 1, bytenr 67108864
root@Tower:~# btrfs fi show
Label: none  uuid: 72c4582b-ce97-49cb-b904-4e2c05073dda
    Total devices 2 FS bytes used 889.57GiB
    devid 1 size 1.86TiB used 869.03GiB path /dev/sdg1
    devid 2 size 1.86TiB used 869.03GiB path /dev/sdf1
Label: none  uuid: 24e861c6-e86f-4b55-b120-6ad8847c8d97
    Total devices 2 FS bytes used 1.11TiB
    devid 2 size 1.86TiB used 516.03GiB path /dev/nvme1n1p1
    devid 3 size 1.82TiB used 630.03GiB path /dev/nvme0n1p1
root@Tower:~#
I think this indicates I made the NVMe drives unsynced? The other pool (sdg1 & sdf1) has disconnected, but I am able to mount both of its drives with the Unassigned Devices plugin. The NVMe drives are not mountable with the Unassigned Devices plugin.
itimpi Posted January 13:
4 minutes ago, dopeytree said: The server name has changed to the default Tower when it should be moulin-rouge. What causes this?
This will happen if Unraid cannot read the configuration information off the flash drive. You are likely to get better-informed feedback if you attach your system's diagnostics zip file to your next post in this thread, taken while this problem is being encountered.
dopeytree Posted January 13 (Author): I see it has the FSCK0000.REC files etc., so hmm, OK, we are back to a bad USB stick... I had that a few months ago when initialising the server; we ended up replacing the USB3 Samsung drive with a USB2 drive and everything was good until about last week. If I download the USB stick backup from My Servers, it would be cool to also be able to jump back a week. The current one is from yesterday, 24 hours ago, and I think the stick corruption happened today, as I was still logging in with moulin-rouge.local yesterday, so actually all OK. Anyway, diags attached. Many thanks. Also, there's a small bug in the code on the forum here to do with colour: sometimes if you copy and paste, it chooses the wrong colour. It needs something that says if the background is light use a dark colour, and vice versa for a dark background. Or it could be a Safari bug. tower-diagnostics-20230113-1502.zip
JorgeB Posted January 13:
50 minutes ago, dopeytree said:
Label: none  uuid: 24e861c6-e86f-4b55-b120-6ad8847c8d97
    Total devices 2 FS bytes used 1.11TiB
    devid 2 size 1.86TiB used 516.03GiB path /dev/nvme1n1p1
    devid 3 size 1.82TiB used 630.03GiB path /dev/nvme0n1p1
The pool is back. To use it you first need to reset it: unassign all members from this pool, start the array, stop the array, re-assign both pool devices and start the array.
dopeytree Posted January 13 (Author): Ace, thanks @JorgeB. Both pools are back up. Panic over. Although I'm still going to try different RAM and a different USB stick for stability. Please could you just talk me through my original intention, which was to remove the mirror on the NVMe pool and instead have 2 NVMe pools - 3 pools total: one pool for each NVMe drive (2), then the existing pool two, which is mirrored and stays mirrored. Do I need to do anything to check the NVMe drives are up to date, as it sounds like one has more data than the other?
2 size 1.86TiB used 516.03GiB path /dev/nvme1n1p1
3 size 1.82TiB used 630.03GiB path /dev/nvme0n1p1
Before then stopping the array, removing the NVMe disk from the cache pool, and restarting the array with a single NVMe. Thanks
JorgeB Posted January 13:
7 minutes ago, dopeytree said: Please could you just talk me through my original intention, which was to remove the mirror on the NVMe pool and instead have
Please post current diags so I can see the pool profile in use; if it's using the single profile (in part or fully), it's normal for one device to have more data.
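If you want to check the profile yourself, btrfs can report how data and metadata are allocated across the pool members, which also explains uneven per-device usage under the single profile. A sketch, assuming the pool is mounted at /mnt/cache:

# shows the Data/Metadata profiles (RAID1, single, ...) and how much of each sits on every device
btrfs filesystem usage /mnt/cache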
dopeytree Posted January 13 (Author): Current diagnostics. tower-diagnostics-20230113-1502.zip
dopeytree Posted January 13 (Author): With array started: tower-diagnostics-20230113-1859.zip
JorgeB Posted January 13: If you still want to remove a device from that pool: first balance the pool to raid1; when that is done, stop the array, unassign the device you want to remove (leave it unassigned for now), start the array, and wait for the balance to finish. When done, you can stop the array and create a new pool with the old device.
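For the first step, the underlying operation is a profile-converting balance. A sketch of the generic btrfs commands, assuming the pool is mounted at /mnt/cache; in Unraid this is normally driven from the pool's Balance options in the GUI rather than the shell:

# mirror data and metadata across both devices again
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
# a convert balance can take a while; check progress with
btrfs balance status /mnt/cache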
dopeytree Posted January 13 (Author): Thanks so much @JorgeB
dopeytree Posted January 13 (Author): 'Perform full balance' or 'convert to raid1'?
dopeytree Posted January 14 (Author): I can't get it to let me start the array with 1 drive removed from the pool.
root@Tower:~# btrfs fi show
Label: none  uuid: 24e861c6-e86f-4b55-b120-6ad8847c8d97
    Total devices 2 FS bytes used 1.12TiB
    devid 2 size 1.86TiB used 1.13TiB path /dev/nvme1n1p1
    devid 3 size 1.82TiB used 1.13TiB path /dev/nvme0n1p1
Label: none  uuid: 72c4582b-ce97-49cb-b904-4e2c05073dda
    Total devices 2 FS bytes used 898.84GiB
    devid 1 size 1.86TiB used 910.03GiB path /dev/sdg1
    devid 2 size 1.86TiB used 910.03GiB path /dev/sdf1
Label: none  uuid: 5179e4f0-91cc-4afb-a038-692b2e3afe1c
    Total devices 1 FS bytes used 468.00KiB
    devid 1 size 10.00GiB used 126.38MiB path /dev/loop2
root@Tower:~#
tower-diagnostics-20230114-0419.zip