UNRAID UNSTABLE AFTER UPGRADE TO 6.9 AND ADDITION OF NEW APPDATA CACHE



I recently upgraded my Unraid server to 6.9 and added two SSDs as a cache pool.  Ideally, I wanted a RAID 0 pool for my cache and a RAID 1 pool for my appdata storage.  The server has locked up three times in three days, each time requiring a hard restart and triggering a parity check.  I've seen Docker containers lock up, the array stall, and the web UI stop responding.  Looking at the logs, it appears I'm having write errors on the new appdata cache.  I broke up the cache pool and ran a preclear on both new drives; both tests completed without error.
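For anyone triaging a similar lockup, a quick first pass is to grep the captured syslog for btrfs error lines before rebuilding anything. A minimal sketch, where the file path and the log lines are illustrative placeholders, not taken from the attached diagnostics:

```shell
# Sketch: count btrfs error lines in a saved copy of the syslog.
# /tmp/sample-syslog.txt and its contents are placeholder examples.
cat > /tmp/sample-syslog.txt <<'EOF'
Mar 10 06:48:01 jupiter kernel: BTRFS error (device sdg1): bdev /dev/sdg1 errs: wr 12, rd 0, flush 0
Mar 10 06:48:02 jupiter kernel: mdcmd: check resumed
EOF
grep -c 'BTRFS error' /tmp/sample-syslog.txt
```

On a live server you would point `grep` at the real syslog (or the copy inside the diagnostics zip) instead of the sample file.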

 

Thanks in advance

jupiter-diagnostics-20210310-0700.zip jupiter-syslog-20210310-0648.zip

Edited by Aquamac
1 minute ago, Squid said:

Reseat the cabling to all drives.  You probably slightly disturbed them.

Hey Squid, thanks for the quick response.  I already tried this but will try again.  The issue seems to manifest overnight.  FYI, all drives are in cages installed in the server's hot-swap slots in a dedicated rack, so they aren't susceptible to being bumped.

  • 2 weeks later...
On 3/10/2021 at 10:46 AM, Aquamac said:

I recently upgraded my Unraid server to 6.9 and added 2 SSD's as a cache pool. [...]


I wanted to update this thread in case anyone else is experiencing the same issue as described above. 

 

My configuration:

Various mechanical drives make up the array

1 cache pool (RAID 0, two Samsung 850 EVO SSDs) for caching to the server <--- this pool works without issue, even when used as appdata storage

1 cache pool (one Samsung 870 EVO SSD) for appdata <--- this pool causes errors when both 870s are set up as RAID

Various SSDs as unassigned devices

 

The only time I experience drive errors is when the appdata cache pool is configured as RAID (either RAID 0 or RAID 1).  As long as the appdata cache pool consists of a single drive, everything works as expected.  I've tested each 870 EVO individually in the appdata pool, with both btrfs and xfs, without issue.  I would really like to use both 870s in a RAID 1 configuration for my appdata cache but can't get past the drive errors.  Any direction would be appreciated.
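Before re-creating the two-drive pool, it may help to confirm whether btrfs itself has recorded per-device error counters. A minimal sketch that flags any nonzero counter; the device names and values in the here-doc are placeholders, and on a live server you would pipe in real `btrfs device stats <mountpoint>` output instead:

```shell
# Flag nonzero btrfs per-device error counters.
# The here-doc is placeholder output, not from this server; on the
# live box you would run:  btrfs device stats /mnt/appdata | awk ...
cat <<'EOF' | awk '$2 != 0 { print "nonzero:", $1, "=", $2 }'
[/dev/sdg1].write_io_errs    3
[/dev/sdg1].read_io_errs     0
[/dev/sdh1].write_io_errs    0
EOF
```

A persistent nonzero `write_io_errs` on only one device would point at that drive, its cable, or its backplane slot rather than at the RAID profile itself.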

