**eXorQue** — Posted May 2

I had a configuration like so:

**Array Devices**

| Device | Identification | Temp. | FS |
|---|---|---|---|
| Parity | unassigned | | |
| Parity 2 | WDC_WD40EZAX-00C8UB0_WD-WX12A82ED2EV - 4 TB (sdg) | 26 C | |
| Disk 1 | ST2000DM001-1CH164_Z1E6HF1W - 2 TB (sdd) | 30 C | xfs |
| Disk 2 | WDC_WD40EZAX-00C8UB0_WD-WX92D622NRV2 - 4 TB (sdb) | 24 C | xfs |
| Disk 3 | WDC_WD40EZAX-00C8UB0_WD-WX42D6213VCY - 4 TB (sdc) | 24 C | xfs |

Slots: 5

**Pool Devices**

| Device | Identification | Temp. |
|---|---|---|
| Cache | Samsung_SSD_860_QVO_1TB_S4CZNF0MB47942W - 1 TB (sdf) | 24 C |
| Cache 2 | ADATA_SX8200PNP_2N3529191NRH - 1 TB (nvme0n1) | |

I wanted to switch Disk 1 to a new disk, similar to what I did with the parity disk and Disks 2 and 3. So I stopped the array, shut down, moved the cables from Disk 1 to the new 4 TB disk, and started everything back up.

Problem encountered: after starting the array, the configuration was as follows. **Note that Disk 1 changed accordingly, *but* Cache was gone.**

**Array Devices**

| Device | Identification | Temp. | FS |
|---|---|---|---|
| Parity | unassigned | | |
| Parity 2 | WDC_WD40EZAX-00C8UB0_WD-WX12A82ED2EV - 4 TB (sdg) | 26 C | |
| Disk 1 | WDC_WD40EZAX-00C8UB0_WD-WX42D62F1K5E - 4 TB (sde) | 30 C | xfs |
| Disk 2 | WDC_WD40EZAX-00C8UB0_WD-WX92D622NRV2 - 4 TB (sdb) | 24 C | xfs |
| Disk 3 | WDC_WD40EZAX-00C8UB0_WD-WX42D6213VCY - 4 TB (sdc) | 24 C | xfs |

Slots: 5

**Pool Devices**

| Device | Identification |
|---|---|
| Cache | unassigned |
| Cache 2 | ADATA_SX8200PNP_2N3529191NRH - 1 TB (nvme0n1) |

I'm not sure what to do:

* When attaching the original cache disk, it now says: "All existing data on this device will be OVERWRITTEN when array is Started".
* I'm not sure whether the new 4 TB disk will be rebuilt. Whether I select the original 2 TB disk or the new 4 TB disk, in both cases it says: "New Device".

Attached the diagnostics: supermicro-diagnostics-20240502-0931_anonymized.zip
**JorgeB** — Posted May 2 · Solution

Unassign disk1 for now, and also unassign the other cache device. Start the array, then stop it to reset the pool. Re-assign both cache devices and start the array; the pool should now import. If it does, stop the array and re-assign disk1 to rebuild. If not, post new diags.
**eXorQue** — Posted May 2 · Author

When I unassign the cache devices, can I mount them to see what their contents are? I have my PhotoPrism import sitting on the cache. Also, where can I see whether the mover has run lately?

@JorgeB Now, after unassigning "Cache 2", I see the notice "Start will remove the missing cache disk and then bring the array on-line". Why wasn't this shown when "Cache 1" was missing? Is it possible that the two cache pool devices are duplicates (mirrors) of each other? If so, could the remaining one still be holding my old cache data?
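For inspecting an unassigned pool member from the console without assigning it to anything, a read-only degraded btrfs mount is one option. This is a sketch only, not a documented Unraid procedure: the device name `/dev/sdf1` is taken from the original Cache assignment above, the mount point is arbitrary, and `degraded` is needed because the other pool member is absent.

```shell
# Temporary mount point (path is arbitrary)
mkdir -p /temp_cache

# Mount one member of the two-device btrfs pool read-only;
# 'degraded' allows mounting with the second device missing,
# 'ro' guarantees nothing on the disk is modified
mount -o ro,degraded /dev/sdf1 /temp_cache

# Look around, then unmount before re-assigning the device
ls /temp_cache
umount /temp_cache
```

Since the mount is read-only, this does not interfere with later re-importing the pool.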
**JorgeB** — Posted May 2

> 19 minutes ago, eXorQue said:
> Now, after unassigning the "Cache 2" I see the notice "Start will remove the missing cache disk and then bring the array on-line". Why wasn't this shown when "Cache 1" was missing?

That's normal; just do what I posted above.
**eXorQue** — Posted May 2 · Author (edited)

@JorgeB The cache pool seems to be fine from what I can see now, and the rebuild is in progress for Disk 1. However, Docker doesn't start: "Docker Service failed to start."

Diags attached: supermicro-diagnostics-20240502-2124.zip

Edit: maybe a useful addition: I see that there's a rootshare/cache/system/docker/docker.img on the cache pool.

Edited May 2 by eXorQue
**JorgeB** — Posted May 2

```
May 2 21:13:09 supermicro kernel: BTRFS info (device sdf1): bdev /dev/sdf1 errs: wr 1521310, rd 486, flush 147793, corrupt 1821, gen 0
```

This shows that one of the pool devices dropped offline in the past. Run a correcting scrub on the pool and make sure there are no unrecoverable errors. If that succeeds, run a balance to raid1, since the pool currently has dual profiles.
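The scrub and balance described above can be run from the Unraid GUI (the pool device's Scrub and Balance sections), but the equivalent `btrfs` CLI calls look roughly like this. A sketch only: the mount point `/mnt/cache` is an assumption based on the default pool name.

```shell
# Start a scrub; with two copies (raid1), correctable errors are
# repaired from the good mirror
btrfs scrub start /mnt/cache

# Check progress and the final summary; the thing to verify is that
# the uncorrectable error count is 0
btrfs scrub status /mnt/cache

# Convert both data and metadata to the raid1 profile, clearing the
# leftover dual-profile state from the dropped device
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# Confirm only raid1 profiles remain
btrfs filesystem df /mnt/cache
```

A scrub on a 1 TB pool can take a while; `scrub status` can be polled at any time without interrupting it.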
**eXorQue** — Posted May 3 · Author (edited)

@JorgeB First of all, thank you for helping me so far. Everything seems to be working out great.

I've executed both the scrub and the balance. The disks seem to be fine, and a bunch of errors were corrected. I then tried to start Docker again by disabling and re-enabling the service in Settings > Docker, which again gave me the message "Docker Service failed to start." in the Docker tab. I then tried this command, but got the following error:

```
/etc/rc.d/rc.docker start
no image mounted at /var/lib/docker
```

Could it be that the docker.img isn't mounted? What should I do?

Edit: added diags: supermicro-diagnostics-20240503-0944.zip

Edited May 3 by eXorQue (added diags)
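Whether the image is actually mounted can be checked from the console. A sketch, assuming Unraid's usual loopback setup and the docker.img path reported earlier in the thread:

```shell
# A healthy docker.img is attached as a loop device...
losetup -a | grep docker

# ...and that loop device is mounted at /var/lib/docker
mount | grep /var/lib/docker

# If both come back empty (matching the rc.docker error above),
# check that the image file itself exists on the pool
# (path assumed from the rootshare/cache/... location mentioned earlier)
ls -lh /mnt/cache/system/docker/docker.img
```

An image file that exists but refuses to mount usually points at corruption inside docker.img itself rather than a missing file.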
**JorgeB** — Posted May 3

The Docker image is corrupt; delete and recreate it:

https://docs.unraid.net/unraid-os/manual/docker-management/#re-create-the-docker-image-file

Also see below if you have any custom Docker networks:

https://docs.unraid.net/unraid-os/manual/docker-management/#docker-custom-networks
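The linked docs do this through Settings > Docker (disable the service, tick the delete-image checkbox, re-enable). For reference, the console equivalent is roughly the following; this is an assumption-laden sketch, not the documented route, and the docker.img path is taken from the location mentioned earlier in the thread.

```shell
# Stop the docker service before touching the image
/etc/rc.d/rc.docker stop

# Remove the corrupt image file; everything inside it (container
# layers) is lost, but templates and appdata are stored elsewhere
rm /mnt/cache/system/docker/docker.img

# Start the service again; Unraid creates a fresh, empty docker.img
/etc/rc.d/rc.docker start
```

Containers installed from Community Applications can then be re-added with their settings intact via Apps > Previous Apps.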
**eXorQue** — Posted May 3 · Author

@JorgeB Thanks, everything works again. I had to recreate the Docker containers that I build from source on my dev machine, but other than that, everything works.