Unraid is not using my cache pool settings. Cache seems to be bypassed.


I have a new Unraid install, 6.9.2. I have 16 drives and 2 parity drives in the array, and two 1TB SSDs in a RAID 0 cache pool. I have been moving lots of data over and this has been working great: I fill up the cache, then run Mover to move it to the array. I've moved 23TB so far. Today, copying files over, it is not using my cache at all; it is writing directly to the array. The cache is online and I can see its file structure.

 

My data share is set to "Yes: Cache" and has not been changed. I am at a complete loss here. I've restarted, taken the array offline, and changed the share's cache setting to none and then back again, with no change. The cache is basically empty, and if I run Mover, it scans, completes in about a minute, and doesn't seem to move anything.

 

Any thoughts?

Link to comment
3 minutes ago, Scott Balkum said:

I have been moving lots of data over and this has been working great: I fill up the cache, then run Mover to move it to the array.

It would make more sense, and might even be quicker, to copy all this data directly to the array and save yourself the trouble of running Mover. Another approach to speed up the initial data load is to not build parity until the load is finished.

 

As for your current situation, attach diagnostics to your NEXT post in this thread.

 

 

Link to comment
14 minutes ago, trurl said:

It would make more sense, and might even be quicker, to copy all this data directly to the array and save yourself the trouble of running Mover. Another approach to speed up the initial data load is to not build parity until the load is finished.

 

 

 

Well, when copying files directly to the Unraid array, I end up with a write speed of about 7-8 MB/s, which is extremely low. Writing to the cache I get 100-200 MB/s. And oddly, Mover gets about 40 MB/s to the array, so the net result is that it is much faster to copy to the cache and run Mover than to copy directly.

 

Out of sheer curiosity, I tried copying to the array from another machine, and it did successfully go to the cache. So I restarted the original machine, and now it is writing to the cache. I can't explain that at all, but at least it seems to be working now.

 

Not sure if you still need my diagnostics, since this is working now?

Link to comment

I am running a Robocopy script that copies over about 10TB at a time, which obviously fills up the 2TB cache pool. I then stop Robocopy and run Mover. When you rerun Robocopy, it verifies each file and won't write or overwrite a file that is already there and the same or newer. So when you run the script again after filling the cache and moving, it verifies all those files, which takes about 5 minutes or so, and then writes to the cache normally. It has been working well for the 23TB I've copied so far. But now it isn't.

 

Robocopy will show on the screen the files it is skipping.
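
For reference, the script's Robocopy call looks something along these lines (the paths and retry switches here are placeholders, not my exact script; /zb is the switch that matters later):

robocopy D:\data \\TOWER\data /e /zb /r:1 /w:1

/e copies subdirectories including empty ones, /zb uses restartable mode with a fallback to backup mode, and /r:1 /w:1 keep a locked file from stalling the whole run. Since Robocopy only copies files that are missing or different at the destination, rerunning the same command after Mover has emptied the cache just re-verifies and skips everything already moved.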

Link to comment

OK, I have discovered the issue, and it is not Unraid at all. Robocopy was having conflicts with the file dates and rewriting them to 1980. This appears to be a common issue when copying between NTFS and a Linux filesystem with Robocopy. I found the affected files, deleted all the 1980-dated ones, and reran without the /zb switch and with the /fft switch added. This resolved the issue, and it is now writing properly to the cache at 200 MB/s.
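
The corrected call ends up looking roughly like this (again, the paths and other switches are placeholders; the actual change was dropping /zb and adding /fft):

robocopy D:\data \\TOWER\data /e /fft /r:1 /w:1

/fft makes Robocopy compare timestamps with FAT-style two-second precision, so the small rounding differences between NTFS and the Linux filesystem no longer make it treat already-copied files as changed.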

Link to comment
