Aquamac Posted March 10, 2021 (edited) I recently upgraded my Unraid server to 6.9 and added 2 SSDs as a cache pool. Ideally, I wanted to create a RAID 0 pool for my cache and a RAID 1 pool for my appdata storage. The server has locked up 3 times in a 3-day period, each requiring a hard restart and leading to a parity check. I've had Docker containers lock up, array stalls, and the UI fail to respond. Looking at the logs, it appears I'm having write issues to the new apps cache. I broke the cache pool and ran a preclear on both new drives; both tests completed without error. Thanks in advance. jupiter-diagnostics-20210310-0700.zip jupiter-syslog-20210310-0648.zip
Squid Posted March 10, 2021 Reseat the cabling to all drives. You probably slightly disturbed them.
Aquamac Posted March 10, 2021 Author Quoting Squid: "Reseat the cabling to all drives. You probably slightly disturbed them." Hey Squid, thanks for the quick response. I already tried this but will try again. The issue seems to manifest overnight. FYI, all drives are in cages installed in the server's hot-swap slots in a dedicated rack, so they aren't susceptible to being bumped.
Aquamac Posted March 18, 2021 Author I wanted to update this thread in case anyone else is experiencing the same issue described in my original post above.

My configuration:
- Various mechanical drives make up the array
- 1 cache pool (RAID 0, 2 Samsung 850 EVO SSDs) for caching to the server <--- this pool works without issue, even when used as appdata storage
- 1 cache pool (1 Samsung 870 EVO SSD) for appdata <--- this pool causes errors when both 870s are set up as RAID
- Various SSDs as unassigned devices

The only time I experience drive errors is when the appdata cache pool is configured as RAID (either RAID 0 or RAID 1). As long as the appdata pool is made up of a single drive, everything works as expected. I've tested each 870 EVO individually in the appdata pool, using both btrfs and XFS, without issue. I would really like to use both 870s in a RAID 1 configuration for my appdata cache but can't get past the drive errors. Any direction would be appreciated.
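[Editor's note] One way to pin down which pool member is failing is btrfs's per-device error counters (`btrfs device stats /mnt/<pool>` on the server). The sketch below only parses sample output through `awk` to flag non-zero counters; the device names and numbers are illustrative, not taken from the actual diagnostics.

```shell
#!/bin/sh
# Sample output in the shape produced by `btrfs device stats <mount>`.
# These values are made up for illustration.
cat > /tmp/dev-stats.txt <<'EOF'
[/dev/sdg1].write_io_errs    152
[/dev/sdg1].read_io_errs     0
[/dev/sdh1].write_io_errs    0
[/dev/sdh1].read_io_errs     0
EOF

# Print only counters that are greater than zero — a rising
# write_io_errs on one member points at that drive, cable, or port.
awk '$2 > 0 {print $1, $2}' /tmp/dev-stats.txt
```

If only one of the two 870s ever shows non-zero counters, that narrows the fault to that drive's path rather than the RAID configuration itself.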
JorgeB Posted March 18, 2021 The log is full of timeout errors; this is a hardware problem, though it might be worse with a RAID config. If it's an option, try connecting the SSDs to the Intel SATA ports.
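[Editor's note] The timeout errors JorgeB mentions typically appear in the kernel log as ATA exceptions. A quick filter like the one below surfaces them; the sample lines are illustrative of the usual pattern, not copied from the attached syslog.

```shell
#!/bin/sh
# Sample syslog excerpt showing the typical ATA timeout pattern.
# These lines are fabricated examples, not from the real diagnostics.
cat > /tmp/sample-syslog.txt <<'EOF'
Mar 10 06:40:01 jupiter kernel: ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Mar 10 06:40:01 jupiter kernel: ata3.00: failed command: WRITE FPDMA QUEUED
Mar 10 06:40:31 jupiter kernel: ata3.00: qc timeout (cmd 0xec)
EOF

# Pull out the timeout-related kernel messages. On a live server you
# would point this at /var/log/syslog instead of the sample file.
grep -E 'timeout|failed command|frozen' /tmp/sample-syslog.txt
```

The `ata3.00` prefix identifies the SATA port, which helps confirm whether moving the SSDs to different ports (e.g. the Intel ones) changes which link is timing out.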