Array taking 30+ minutes to start up


Arcaeus


This morning I wanted to change the name of the binhex-deluge Docker container to just Deluge. I opened the settings, changed the name, and hit save. The operation failed, and Deluge wouldn't start up again. I tried stopping all of my Docker containers and restarting them, but they wouldn't restart either. After waiting for the parity sync to finish, I rebooted the server in the hope that would fix the problem.

 

The server rebooted, and all the disks mounted after a minute or two. However, the array is taking forever to start up: it's been about 40 minutes so far with no change. It has never taken this long before, and I'm trying to figure out why.

 

I haven't rebooted again in case you need the diagnostics files, and figured I'd reach out before doing anything else for that reason.

 

Thoughts?

1 minute ago, Arcaeus said:

So that ties into another issue I'm having here: I'm working on getting the mover log working at the moment.

You might want to set a reasonable value for Minimum Free Space under Settings -> Global Share Settings. The normal recommendation is something like twice the size of the largest file you expect to write to the array. When the free space on the cache drive gets below the Minimum Free Space value, Unraid starts writing files directly to the array, bypassing the cache. This is slower than writing to the cache, but it avoids the errors in the first place.
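To make that rule concrete, here is a minimal Python sketch of the decision as described above. It is only an illustration, not Unraid's actual implementation, and the function and parameter names are made up:

```python
def choose_target(cache_free_bytes, minimum_free_bytes):
    """Illustrative only: mimics the cache-vs-array decision described above.

    Unraid compares the cache drive's free space against the Minimum Free
    Space setting before the write starts; the size of the incoming file
    is never considered.
    """
    if cache_free_bytes > minimum_free_bytes:
        return "cache"  # enough headroom: the write goes to the cache drive
    return "array"      # below the threshold: the cache is bypassed
```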

  • 2 weeks later...
1 hour ago, Arcaeus said:

@itimpi Ok, that sounds like a good idea. I'm transferring some big files with 4k video and all, so I'd like to set it to 50GB or so. I do have some 80GB UHD movies, but setting it to 160GB would, I think, just bypass the cache more often than I'd like.

 

Right now it's set to 5,000,000. Is that in bytes or what?

It is always a trade-off between the cost of allowing for the worst possible case, and using a lower value that handles the vast majority of cases but not all of them.

 

The key point to realise is that once you start writing a file, Unraid has already chosen the target drive; it does not take the size of the file into account when picking it, and will simply error out when free space reaches zero. As an example, suppose you had 60GB free and chose to write the 80GB file you mentioned: Unraid would choose the cache drive (as it has more than the 50GB free that you intend to set) and then error out after writing 60GB, which fills the drive. If you had instead been writing a 50GB file, it would have succeeded and left you with 10GB free. At that point you would be below the Minimum Free Space value, and subsequent writes would bypass the cache.
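Plugging those numbers into the hypothetical choose_target sketch from the earlier post shows why the 80GB write is accepted and then fails:

```python
GB = 1024 ** 3

minimum_free = 50 * GB  # the 50GB threshold planned above
cache_free = 60 * GB    # free space on the cache drive in the example

# 60GB free is above the 50GB minimum, so the cache is chosen
# regardless of whether the incoming file is 50GB or 80GB.
print(choose_target(cache_free, minimum_free))  # -> cache

# After a successful 50GB write only 10GB remains, which is below
# the minimum, so the next write goes straight to the array.
print(choose_target(10 * GB, minimum_free))     # -> array
```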

 

As for the meaning of the setting, turn on Help in the Unraid UI; that should make clear what values you can enter and what they mean.
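For illustration only, assuming a bare number in that box is interpreted as KB (which is how older Unraid releases documented it; confirm via the Help for your version), the current value of 5,000,000 works out to roughly 5GB, well below the 50GB you are aiming for:

```python
# Assumption: a bare Minimum Free Space number is read as KB --
# confirm this against the Help text for your Unraid version.
current_setting_kb = 5_000_000
print(current_setting_kb / 1024 ** 2, "GB")  # roughly 4.77 GB

# Twice the largest expected file (80GB) would suggest 160GB;
# the 50GB compromise discussed above sits between the two.
```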
