jonnyczi (Author) Posted February 9, 2020

Hey guys, long-time Unraid user here. I ran into something that doesn't make sense. I haven't added or removed any drives in a very long time, yet Cache 1 states "too many missing/misplaced devices" and wants to be formatted, even though its file system still shows as btrfs:

Feb 9 16:18:38 Tower emhttpd: cache uuid: 0b1c567a-f0dc-4b7e-a313-8d098ed56c16
Feb 9 16:18:38 Tower emhttpd: cache TotDevices: 2
Feb 9 16:18:38 Tower emhttpd: cache NumDevices: 2
Feb 9 16:18:38 Tower emhttpd: cache NumFound: 2
Feb 9 16:18:38 Tower emhttpd: cache NumMissing: 0
Feb 9 16:18:38 Tower emhttpd: cache NumMisplaced: 3
Feb 9 16:18:38 Tower emhttpd: cache NumExtra: 0
Feb 9 16:18:38 Tower emhttpd: cache LuksState: 0
Feb 9 16:18:38 Tower emhttpd: /mnt/cache mount error: Too many missing/misplaced devices
Feb 9 16:18:38 Tower emhttpd: shcmd (1310): umount /mnt/cache
Feb 9 16:18:38 Tower root: umount: /mnt/cache: not mounted.
Feb 9 16:18:38 Tower emhttpd: shcmd (1310): exit status: 32
Feb 9 16:18:38 Tower emhttpd: shcmd (1311): rmdir /mnt/cache

So I have 2x 250GB cache drives. I also have 3x 4TB drives: two as parity and one as data. I needed more space and wanted to encrypt my data, so I decided to repurpose my parity 2 drive as a data drive. I stopped the array, went to New Config, chose to preserve cache assignments, and clicked Apply. I changed the default disk format from btrfs to btrfs encrypted. Then I reassigned parity 1 and disk 1 the way they were before and added the old parity 2 drive as disk 2. Of course, my cache drives were still in their respective places. I then chose a keyfile and clicked Start. Disk 2 didn't have a file system and required a format, so I went to the bottom of the Main page and saw the option to format disk 2. No other disk was listed there, so I formatted it. All was good, so I went to disable auto-start on all my docker containers (the docker image file and appdata are all on the cache drive).
Then I stopped and restarted the array, which is where I got my cache drive issue. I thought that if one of the cache drives wasn't working I would still be able to see the data, since the drives are mirrored, but I don't see the data. Also, disk 2 was formatted, but now it has an unlocked yellow icon and the tooltip says "Device to be formatted", even though the file system shown on the drive is btrfs. I really need to fix my cache drive issue more than anything else. I didn't even touch the cache drives, and they were still working after using the New Config function and formatting disk 2; I know this because my dockers were all working. I mounted my cache drive in read-only mode and my data is there, but I still need to get it back into the array, hopefully without moving data around.

PS: I have the license for 12 drives, so I'm in the clear there, although I have plugged in many drives over the years, if that makes a difference. Thanks a bunch, guys!
JorgeB Posted February 9, 2020

Please post the diagnostics: Tools -> Diagnostics
jonnyczi (Author) Posted February 9, 2020

Unfortunately I rebooted since this happened, but here is what I've got right now. Is there a log history? I also posted something in the first post. This is the line that I don't get; I only ever had these same two cache drives:

Feb 9 16:18:38 Tower emhttpd: cache NumMisplaced: 3

syslog.txt
JorgeB Posted February 9, 2020

Something weird is going on where the array devices are being considered part of the pool. Am I right in understanding that the array devices (disk1 and disk2) don't have any data on them and are empty for now?
JorgeB Posted February 9, 2020

You appear to have multiple disks with the same UUID. Disconnect (or wipe) any disks currently not being used, like sde. It also looks like disks 1 and 2 have the same UUID; this won't work, so you need to reformat one of them. After doing that, post new diags (not just the syslog).
jonnyczi (Author) Posted February 9, 2020

disk1 is full of data, and disk2 was in the process of being changed from parity2 to disk2, so it's empty. It does look like Unraid thinks they were in the cache pool before and now they are not. I think I stumbled on a bug here. If I mount the problem cache drive as read-only in Unassigned Devices, all the files are there.
JorgeB Posted February 9, 2020

Yes, the pool is fine; the problem is the duplicated btrfs UUIDs, which are confusing Unraid:

Feb 9 17:02:55 Tower root: WARNING: adding device /dev/sde1 gen 27591023 but found an existing device /dev/sdd1 gen 27591412
Feb 9 17:02:55 Tower root: ERROR: cannot scan /dev/sde1: File exists
Feb 9 17:02:55 Tower root: WARNING: adding device /dev/sdf1 gen 27591023 but found an existing device /dev/sdd1 gen 27591412
Feb 9 17:02:55 Tower root: ERROR: cannot scan /dev/sdf1: File exists
Feb 9 17:02:55 Tower root: WARNING: adding device /dev/md2 gen 27591023 but found an existing device /dev/md1 gen 27591412
Feb 9 17:02:55 Tower root: ERROR: cannot scan /dev/md2: File exists
Feb 9 17:02:55 Tower root: Scanning for Btrfs filesystems

The pool should mount correctly once you fix those; disconnecting or wiping sde and reformatting disk2 should do it.
JorgeB Posted February 9, 2020

It's probably best to wipe disk2 before re-formatting:

wipefs -a /dev/sdX1

then

wipefs -a /dev/sdX
jonnyczi (Author) Posted February 9, 2020

Sure, I will get that sorted and post back. Thank you so much for your help! All of the disks have been in the system for many months now. sde is actually my parity1 drive that I moved out of the array after running New Config, so that I could move data faster with unBalance in order to encrypt disk1. The only drive that changed was parity2, which is being moved and formatted as btrfs encrypted. When the array started with disk2, Unraid asked me to format it, so I did. Then, upon restarting the array, is when I got the cache drive problem. I didn't know whether UUIDs ever change on hard drives, but there are supposed to be so many that it's basically impossible to duplicate them. What are the chances? 🤣
JorgeB Posted February 9, 2020

Quote:
I didn't know whether UUIDs ever change on hard drives, but there are supposed to be so many that it's basically impossible to duplicate them

It basically is, but parity will have the same one if it was used with a single-device array.
jonnyczi (Author) Posted February 9, 2020

Thank you so much johnnie.black! That fixed it!

Quote:
It basically is, but parity will have the same one if it was used with a single-device array.

Is this in the documentation somewhere? Because that really got me. On a side note, I have been providing a keyfile (a photograph) on array start, but disk2 states "Unmountable: Volume not encrypted". Unraid gave the format option, but that didn't change anything. Something doesn't look quite right.

Feb 9 20:08:23 Tower root: Starting diskload
Feb 9 20:08:23 Tower emhttpd: Mounting disks...
Feb 9 20:08:23 Tower emhttpd: shcmd (11025): /sbin/btrfs device scan
Feb 9 20:08:23 Tower root: WARNING: adding device /dev/sde1 gen 27591023 but found an existing device /dev/sdd1 gen 27591501
Feb 9 20:08:23 Tower root: ERROR: cannot scan /dev/sde1: File exists
Feb 9 20:08:23 Tower root: Scanning for Btrfs filesystems
Feb 9 20:08:23 Tower emhttpd: shcmd (11026): mkdir -p /mnt/disk1
Feb 9 20:08:23 Tower emhttpd: shcmd (11027): mount -t btrfs -o noatime,nodiratime /dev/md1 /mnt/disk1
Feb 9 20:08:23 Tower kernel: BTRFS info (device md1): disk space caching is enabled
Feb 9 20:08:23 Tower kernel: BTRFS info (device md1): has skinny extents
Feb 9 20:08:37 Tower emhttpd: shcmd (11028): btrfs filesystem resize max /mnt/disk1
Feb 9 20:08:37 Tower root: Resize '/mnt/disk1' of 'max'
Feb 9 20:08:37 Tower emhttpd: shcmd (11029): mkdir -p /mnt/disk2
Feb 9 20:08:37 Tower kernel: BTRFS info (device md1): new size for /dev/md1 is 4000786976768
Feb 9 20:08:37 Tower emhttpd: /mnt/disk2 mount error: Volume not encrypted
Feb 9 20:08:37 Tower emhttpd: shcmd (11030): umount /mnt/disk2
Feb 9 20:08:37 Tower root: umount: /mnt/disk2: not mounted.
Feb 9 20:08:37 Tower emhttpd: shcmd (11030): exit status: 32
Feb 9 20:08:37 Tower emhttpd: shcmd (11031): rmdir /mnt/disk2
Feb 9 20:08:37 Tower emhttpd: shcmd (11032): mkdir -p /mnt/cache
Feb 9 20:08:38 Tower emhttpd: mount_pool: ERROR: cannot scan /dev/sde1: File exists
Feb 9 20:08:38 Tower emhttpd: cache uuid: 0b1c567a-f0dc-4b7e-a313-8d098ed56c16
Feb 9 20:08:38 Tower emhttpd: cache TotDevices: 2
Feb 9 20:08:38 Tower emhttpd: cache NumDevices: 2
Feb 9 20:08:38 Tower emhttpd: cache NumFound: 2
Feb 9 20:08:38 Tower emhttpd: cache NumMissing: 0
Feb 9 20:08:38 Tower emhttpd: cache NumMisplaced: 1
Feb 9 20:08:38 Tower emhttpd: cache NumExtra: 0
Feb 9 20:08:38 Tower emhttpd: cache LuksState: 0
Feb 9 20:08:38 Tower emhttpd: shcmd (11033): mount -t btrfs -o noatime,nodiratime,degraded -U 0b1c567a-f0dc-4b7e-a313-8d098ed56c16 /mnt/cache
Feb 9 20:08:38 Tower kernel: BTRFS info (device sdb1): allowing degraded mounts
Feb 9 20:08:38 Tower kernel: BTRFS info (device sdb1): disk space caching is enabled
Feb 9 20:08:38 Tower kernel: BTRFS info (device sdb1): has skinny extents
Feb 9 20:08:38 Tower kernel: BTRFS info (device sdb1): enabling ssd optimizations
JorgeB Posted February 9, 2020

2 minutes ago, jonnyczi said:
Feb 9 20:08:23 Tower root: WARNING: adding device /dev/sde1 gen 27591023 but found an existing device /dev/sdd1 gen 27591501

Is this still the same sde disk? It's still causing problems.

4 minutes ago, jonnyczi said:
On a side note, I have been providing a keyfile (a photograph) on array start, but disk2 states "Unmountable: Volume not encrypted". Unraid gave the format option, but that didn't change anything. Something doesn't look quite right.

Encryption is outside my wheelhouse, but that error suggests it's not accepting/recognizing the key.
jonnyczi (Author) Posted February 9, 2020

Looking at it again, it looks like my disk1 (sdd) and cache1 (sde) both have the same UUID, even though they have been running together for a long time. Is that possible?
JorgeB Posted February 9, 2020

Not sure if that's a duplicate UUID issue; please post the complete diags, or I can't see which disk is which. Also, please post the output of blkid.
jonnyczi Posted February 9, 2020 Author Share Posted February 9, 2020 /dev/loop0: TYPE="squashfs" /dev/loop1: TYPE="squashfs" /dev/sda1: LABEL="UNRAID" UUID="5249-83B7" TYPE="vfat" /dev/sdb1: UUID="0b1c567a-f0dc-4b7e-a313-8d098ed56c16" UUID_SUB="bdaf68b2-1077-4d73-9f79-66a978ae6a90" TYPE="btrfs" /dev/sdc1: UUID="0b1c567a-f0dc-4b7e-a313-8d098ed56c16" UUID_SUB="37b13129-7539-43b1-9c32-5367273a1b98" TYPE="btrfs" /dev/sdd1: UUID="f9b2c107-3b00-4ff5-bae5-2140e6db2314" UUID_SUB="4ec50007-10d0-4de9-b2bf-c793a6168b57" TYPE="btrfs" PARTUUID="adb9d840-8b94-4b72-b27d-225193802d5d" /dev/sde1: UUID="f9b2c107-3b00-4ff5-bae5-2140e6db2314" UUID_SUB="4ec50007-10d0-4de9-b2bf-c793a6168b57" TYPE="btrfs" PARTUUID="b7cc4bde-8906-4d69-ad30-185e5f291241" /dev/sdg1: UUID="45704a3d-73a8-4d3c-8fff-63c304e424f8" TYPE="ext4" PARTLABEL="primary" PARTUUID="ab7b71f6-036c-4096-94db-3dd7353e333a" /dev/md1: UUID="f9b2c107-3b00-4ff5-bae5-2140e6db2314" UUID_SUB="4ec50007-10d0-4de9-b2bf-c793a6168b57" TYPE="btrfs" /dev/loop2: UUID="288d3b2e-7fe7-4604-b4c0-999207f03d35" UUID_SUB="93c6a399-6006-42c6-b5e7-8a508268d2a0" TYPE="btrfs" /dev/loop3: UUID="cc89a9fc-a9c2-45e9-85d4-0a961199eb31" UUID_SUB="6ff694a5-9b3f-4513-8592-a090097b1236" TYPE="btrfs" /dev/sdf1: PARTUUID="bf3964db-c06a-4c5f-a011-37e0b6e56d82" syslog.txt Quote Link to comment
JorgeB Posted February 9, 2020

sde isn't cache1; it's an unassigned disk that is still connected and not wiped, and it has the same UUID as disk1. It's also confusing the cache pool code, since Unraid thinks it's part of the pool (because to btrfs it looks like part of a pool).
jonnyczi Posted February 9, 2020 Author Share Posted February 9, 2020 Oh I just went back and yes you are right sde isn't cache1 but my unassigned parity1. I just wiped it and I don't have any more errors except for one in syslog. My array started automatically when rebooting and didn't ask me for the keyfile. Feb 9 20:57:53 Tower emhttpd: /mnt/disk2 mount error: Volume not encrypted I will do more research on that. Sorry about that. I wasn't very vigilant. Thank you so much @johnnie.black Quote Link to comment
jonnyczi (Author) Posted February 10, 2020

In case other people run into the encryption issue: one problem was that the keyfile I was using was too large. I'm not sure what the maximum size is, but 8.2MB was too large. Also, to format and encrypt a new drive I needed to place the keyfile manually at /root/keyfile. The contents of the /root directory are in RAM, so the keyfile gets deleted when the machine is shut off or rebooted.
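Editor's aside: the staging step described above can be sketched as a small shell function. The source path and the 8 MB ceiling are assumptions drawn only from this post (the poster found 8.2 MB too large; Unraid's actual limit isn't stated in the thread), and the paths in the comments are placeholders, not documented locations:

```shell
# Sketch: copy a LUKS keyfile into place before array start. /root lives
# in RAM on Unraid, so this would have to be repeated after every reboot
# (e.g. from the flash drive's 'go' script -- an assumption, not from the
# thread). The 8 MB guard reflects the poster's 8.2 MB photo being rejected.
stage_keyfile() {
    src="$1"   # e.g. /boot/config/my.keyfile (hypothetical stored location)
    dst="$2"   # where it will be read from, e.g. /root/keyfile
    size=$(stat -c%s "$src") || return 1
    if [ "$size" -ge $((8 * 1024 * 1024)) ]; then
        echo "keyfile is ${size} bytes; ~8 MB proved too large for the poster" >&2
        return 1
    fi
    cp "$src" "$dst" && chmod 600 "$dst"
}
# Real use (paths are this sketch's assumption):
#   stage_keyfile /boot/config/my.keyfile /root/keyfile
```

Called from a boot script, it would fail loudly on an oversized keyfile instead of leaving the array waiting on a key it cannot use.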
pjneder Posted May 6, 2020

@johnnie.black I think I have hit a similar issue to the one covered in this thread. I'm reading it but am not exactly sure what to do, so I will tread carefully and await some advice. Attached is my anonymized diags output.

Basics of my sequence:

1. Plugged in 2x new 4TB WD Red drives, enumerated as sdi and sdj, and pre-cleared both.
2. Stopped the array, unassigned sde (3TB) from parity, assigned sdi as parity, started, and rebuilt parity.
3. Stopped the array, unassigned sdl (1TB), added sdj to the array, started, and rebuilt sdj.
4. Went to zero out the 3TB sde using preclear and started having problems.
5. Did a clean restart; then the cache went unmountable and I started seeing these "cannot scan" errors.

Oh yeah, all my dockers are gone now since the cache pool didn't start. Hoping to recover the pool and not have to start over. Any pointers would be appreciated! Thanks!

unRaid-pjneder-diags.zip
JorgeB Posted May 6, 2020

The unassigned disks are confusing Unraid; since they are btrfs and are clones of existing disks, it thinks they are part of the pool. Just disconnect (or wipe) them and the pool should mount normally.
pjneder Posted May 6, 2020

Thanks, @johnnie.black!!! Running wipefs -a on sde and sdl was the trick to managing that. I will hopefully remember that in the future. I still have one more 1TB drive to retire, and I'll slot my other 3TB into that spot, so I will be able to avoid repeating the mistake.

Curious: if one immediately uses the FORMAT button on the unassigned device, will that fix this as well? In other words, is there any other recommended way to remove the drive from the config without physically disconnecting it? My thinking was that I was keeping it there in case something happened during the migration and I needed it as a replacement. Thanks!
JorgeB Posted May 6, 2020

12 minutes ago, pjneder said:
Curious, if one immediately uses the FORMAT button on the unassigned device, will that fix this as well?

Yep, but you'll need to first delete the existing partition and then format; since it's a new filesystem it won't confuse Unraid, even if it's still btrfs.
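Editor's note, not from the thread: for anyone nervous about pointing wipefs at the wrong device, its behavior can be rehearsed on a throwaway file-backed image first. This sketch uses a swap signature as a stand-in for btrfs (mkswap works on small plain files, while mkfs.btrfs needs a much larger device); both mkswap and wipefs are standard util-linux tools:

```shell
# Rehearse `wipefs -a` on a disposable image instead of a real /dev/sdX.
img=$(mktemp)
truncate -s 1M "$img"            # empty 1 MiB image file
mkswap "$img" >/dev/null 2>&1    # stamp a filesystem-style signature on it
before=$(wipefs "$img")          # with no -a, wipefs only lists signatures
wipefs -a "$img" >/dev/null      # erase every signature, as done on sde/sdl
after=$(wipefs "$img")           # nothing left to report: empty output
rm -f "$img"
```

The same `wipefs -a` invocation, pointed at the real /dev/sdX1 and then /dev/sdX as described earlier in the thread, is what removes the stale btrfs signature so Unraid stops treating the disk as a pool member.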