Whole array goes un-writable (cache pool related)



Hey peeps.

 

I have an 18-drive array with 2 parity disks. I added a 120GB SSD cache disk and set my dockers and shares to use the cache, no problem. I then added another 480GB SSD and assigned it to the cache (the pool jumped to 600GB), and it changed to raid1 automatically after about 15 minutes or so (the pool dropped to 300GB).

The system says everything is fine, but now when I hit roughly the 112GB mark on the cache pool (previously it just bypassed the cache onto the main array disks until the daily mover scheduler kicked in), the whole array suddenly goes un-writable. SMB, FTP and docker filesystems all fail to write and throw errors. Unraid still reports everything as fine, nothing shows in the log, and it still says I have 190GB odd of cache space left. When I hit the manual "Move now" button, 30 seconds later everything is writable again.

It's done this twice now (I've only hit that mark during a day twice), so I've deduced it down to the cache pool causing the problem. I'm not sure if I've done something wrong in setup or whether this is a bug. I don't mind dropping the cache down to the 480GB alone and seeing if the problem persists.

If I have 120GB and 480GB SSDs in a cache pool but the size says 300GB, am I OK to just stop the array, un-assign the 120GB and start the array again? Or do I need to set all the shares to Yes/No first and move everything off the pool, like a sole cache disk removal/swap?

Any other suggestions I can try? (I'm also not sure where to find the cache pool settings to change to striped mode.)

Cheers, Ben
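P.S. For the sake of anyone reading this later: I believe the real allocatable space of a btrfs pool can be checked from a terminal with something like the below (assuming the pool is mounted at the usual /mnt/cache path), though I'm not sure I'm reading the output right:

# show per-device allocation and the estimated free space for the current profile
btrfs filesystem usage /mnt/cache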


Ah. Thanks mate. Yeah, I figured a mirrored backup across two different-sized drives is impossible, after reading back my post. I just let Unraid do its thing without any intervention. I actually wanted to set up the drives as raid0 in the first place. I'll do that now and I'm sure it will work fine again. Cheers.

Also, forgot to mention I'm on v6.7.

1 hour ago, BiGs said:

I actually wanted to setup the drives as raid0 in the first place.

Probably what you really want is "single", not raid0. Or just forget about the smaller SSD and save the port for another array disk later.

 

Single vs raid0, from that 2nd link I gave:

On 7/18/2016 at 4:46 AM, johnnie.black said:

Single: requires 1 device only, it's also the only way of using all the space from different-size devices, btrfs's way of doing a JBOD spanned volume, no performance gains vs a single disk or RAID1.

-dconvert=single -mconvert=raid1

RAID0: requires 2 devices, best performance, no redundancy, if used with different-size devices only 2 x the capacity of the smallest device will be available, even if the reported space is larger.

-dconvert=raid0 -mconvert=raid1
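If you'd rather run it from a terminal than the GUI balance box, the full command form would be something along these lines (assuming the pool is mounted at the usual /mnt/cache path):

# convert data to single so both devices' space is usable, keep metadata mirrored
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache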

You're right, single mode is what I want. I made my way over to the cache balance form with the options command box (the link seems to only be available from the dashboard page of the GUI). I copy-pasted the option and nothing happened; I then copy-pasted it via Notepad and still nothing happened. I then manually typed it as per the FAQ note and it immediately started doing work (strange, aye). So I guess I'll wait for it to do its thing and maybe reboot after to see the 600GB. Cheers Con.
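In case it helps anyone following this later, I believe you can also keep an eye on the balance from a terminal with something like the below (again assuming the standard /mnt/cache mount point), rather than refreshing the GUI:

# shows whether the balance is still running and roughly how far along it is
btrfs balance status /mnt/cache

# once it's done, the Data line should report "single"
btrfs filesystem df /mnt/cache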

 

P.S. I went into detail with the balance fix for the sake of the help log, for others.

Edited by BiGs
