ToXicreloadz Posted September 9, 2022
I just tore my server down to clean everything out, and upon starting it back up I was unable to mount my cache drive pool. I then updated my server from version 6.9.2 to 6.10.3. After trying to mount the drives I said screw it, wiped the drives, and formatted them. I even went into Tools and did a new config. I have changed out the cables to my drives and tried multiple things, but I cannot get the drives to mount in a pool. Please help; here are my logs. tower-syslog-20220909-1533.zip
ToXicreloadz Posted September 9, 2022 (Author)
tower-diagnostics-20220909-1538.zip
ToXicreloadz Posted September 9, 2022 (Author)
I seem to always have the unsolvable errors xD. Last motherboard caught fire lol
ToXicreloadz Posted September 10, 2022 (Author)
More info: I am getting an Unraid error and cannot mount more than one drive in my cache pool. Not sure what went wrong. I disassembled my server, cleaned it, and put it back together, and then could not remount my cache pool. Upon rebooting from the GUI, the server would hang with a USB error. I then replaced the USB key, updating to the latest version of Unraid, and that fixed the boot hang. I then wiped and formatted all of my cache drives and still cannot mount more than one cache drive in a pool without getting an invalid pool config or no pool UUID. I also have an error on my parity drive stating ERROR: cannot scan /dev/sdf1: Input/output error.
ToXicreloadz Posted September 10, 2022 (Author)
Trying to run an extended self-test on the parity drive, but it has been at 10% for a while.
ToXicreloadz Posted September 10, 2022 (Author)
I've also changed the cables to the parity drive and all SSD cache drives.
ToXicreloadz Posted September 10, 2022 (Author)
Great troubleshooting Toxic! Keep it up. "Rambles on to self"
ToXicreloadz Posted September 10, 2022 (Author)
Ready to throw the motherboard, RAM, and CPU out the window and start with a newer build.
itimpi Posted September 10, 2022
4 hours ago, ToXicreloadz said: Trying to run an extended self-test on the parity drive but has been at 10% for a while.
The extended check only updates every 10%. The check will typically take around 2 hours per TB, so that can give you an estimate as to when you will see the percentage update.
JorgeB Posted September 10, 2022 (Solution)
Sep 9 15:01:30 Tower emhttpd: /mnt/cache ERROR: cannot scan /dev/sdf1: Input/output error
This is the problem: sdf is parity, and because you have an odd number of btrfs array devices, parity looks like it has an invalid btrfs filesystem, which causes issues during the btrfs device scan. It's a known issue, but there's no fix for now. Workarounds are to use xfs only for the array, use encrypted btrfs, or have an even number of btrfs array devices; that last option usually works, but not always.
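(Editor's note, not part of the original reply.) The diagnosis above boils down to a stray btrfs signature being visible on the parity partition. As a rough sketch of how you could see what the device scan sees, `wipefs -n` (no-act mode) reports any filesystem signatures libblkid detects without erasing anything. The demo below plants the btrfs superblock magic in a scratch file so it is safe to run anywhere; on a real system you would point `wipefs -n` at the suspect partition instead (e.g. /dev/sdf1 from the log above).

```shell
# Create a 1 MiB scratch file standing in for a disk partition.
dd if=/dev/zero of=/tmp/fakedisk.img bs=1M count=1 status=none

# Plant the btrfs superblock magic ("_BHRfS_M") where libblkid probes for it:
# the superblock sits at 64 KiB and the magic 64 bytes into it (byte 65600).
printf '_BHRfS_M' | dd of=/tmp/fakedisk.img bs=1 seek=65600 conv=notrunc status=none

# -n / --no-act: only report detected signatures, never wipe them.
wipefs -n /tmp/fakedisk.img
```

This is why a drive that never held a mountable btrfs filesystem can still confuse the pool scan: the scan keys off the signature, not a valid filesystem.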
JorgeB Posted September 10, 2022
37 minutes ago, JorgeB said: it's a known issue but there's no fix for now
Just saw the new test release, and there's a fix for this issue, so if you wait for v6.11-rc5, which should be released soon, you will be able to use the pool with the current array config.
ToXicreloadz Posted September 10, 2022 (Author)
1 hour ago, JorgeB said: Sep 9 15:01:30 Tower emhttpd: /mnt/cache ERROR: cannot scan /dev/sdf1: Input/output error
Can't I just convert my array drives over to XFS one at a time? Granted, I would have to rebuild the drive, but is it safe to rebuild a drive with parity showing this error? Tower root: ERROR: cannot scan /dev/sdf1: Input/output error
ToXicreloadz Posted September 10, 2022 (Author)
2 hours ago, JorgeB said: Sep 9 15:01:30 Tower emhttpd: /mnt/cache ERROR: cannot scan /dev/sdf1: Input/output error
Isn't this just an issue with the parity drive? How does it affect the cache pool at all?
trurl Posted September 10, 2022
53 minutes ago, ToXicreloadz said: convert my array drives all over to XFS one at a time? Granted it I would have to rebuild the drive
Rebuild can't change the filesystem; you would have to reformat.
ToXicreloadz Posted September 10, 2022 (Author)
40 minutes ago, trurl said: Rebuild can't change filesystem, you would have to reformat.
That's not a problem. I just need to know if it's safe to do so with the current status of the parity drive.
JorgeB Posted September 10, 2022
1 hour ago, ToXicreloadz said: How does this affect the cache pool at all?
Because until now the error, caused by parity appearing to have a btrfs filesystem, interferes with the pool device scan; those errors will be ignored starting with rc5.
16 minutes ago, ToXicreloadz said: I just need to know if it's safe to do so with the current status of the parity drive.
It is, assuming parity is valid; the other issue won't affect the array.
trurl Posted September 10, 2022
3 hours ago, ToXicreloadz said: That's not a problem. I just need to know if it's safe to do so with the current status of the parity drive.
Just so we're completely clear on this, since you mentioned rebuilding and are still thinking about parity in relation to this: after formatting to a new filesystem, you can't rebuild the data that was on the disk.
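(Editor's note, not part of the original reply.) trurl's point is that converting a disk's filesystem means moving the data yourself: copy everything off, reformat, copy it back — parity rebuild plays no part. A minimal sketch of that flow, demonstrated on temp directories so it is safe to run anywhere (the paths are illustrative assumptions, not from the thread; on a real Unraid array /tmp/olddisk would be something like /mnt/disk1, and you would typically copy with rsync -a):

```shell
# Stand-ins for the disk being converted and a scratch location with enough space.
mkdir -p /tmp/olddisk /tmp/scratch
echo "irreplaceable data" > /tmp/olddisk/file.txt

# 1. Copy everything off the disk being converted (cp -a shown for portability;
#    rsync -a is the usual choice on a real system).
cp -a /tmp/olddisk/. /tmp/scratch/

# 2. On Unraid: stop the array, change the disk's filesystem to xfs, start the
#    array, and format the now-unmountable disk. Simulated here by emptying it.
rm -rf /tmp/olddisk/*

# 3. Copy the data back onto the freshly formatted xfs disk.
cp -a /tmp/scratch/. /tmp/olddisk/
```

The key consequence: once step 2 has run, the old btrfs contents are gone for good, so step 1 must be verified complete before you format.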
ToXicreloadz Posted September 11, 2022 (Author)
I will add another hard drive into the array as a btrfs drive and see if that resolves the issue. Fingers crossed. Thanks for your help.
ToXicreloadz Posted September 12, 2022 (Author)
On 9/10/2022 at 6:00 AM, JorgeB said: Sep 9 15:01:30 Tower emhttpd: /mnt/cache ERROR: cannot scan /dev/sdf1: Input/output error
Once I added another btrfs drive into the array and rebooted, the issue was resolved.