Cache Drives Not Being Used after System Swaps



Hello folks,

I've attached diagnostics.

 

So, I have two Unraid servers - one named Node and the other named Monolith, after their case designs. Node is my main server and Monolith acts as my backup. I administer both from a Windows machine.

Once upon a time the builds of these servers were different, but I recently swapped the cases between the two, along with the arrays, cache drives, etc. It's kind of like swapping the USB sticks between servers, where each USB install expects to see different hardware than what it finds.

Anyway, things are humming along reasonably well after I fixed some small things. But I don't believe either server is using the cache drives now. The diagnostics I've attached are for Monolith, as I know it's not using the cache drive for writes to the "Storage" share (I bounced a 10 GB file to the share while it was running a parity check to see whether it would write to cache, and it didn't).

Any advice on what to fix?

Thanks, and I'll be eagerly watching this topic for any help.

monolith-diagnostics-20220805-1103.zip

19 minutes ago, trurl said:

If a file already exists, it will be replaced on the drive where it currently exists.

Sorry, I don't quite follow. Could you elaborate? Thanks.

 

If you're referring to the 10 GB file I dumped, I dumped it to the root of the share (it didn't exist there before).

Have my logs revealed anything useful?

 

And for more info, my writes from Windows to the server are around 35 MB/s (since no cache is in play). When the cache drives were working, I could get close to saturating the link - around 100 MB/s.

10 minutes ago, JorgeB said:

Share config looks OK. Please try creating a new test share with Use cache = Yes, leave all the other settings at default, then see if there's any difference.
 

 

Thank you for the suggestion - I created a new test share (TESTTEST) and dumped the same 10 GB file to it via Windows Explorer. Writes averaged about 45 MB/s and didn't use the cache at all. I checked the cache drives right afterward and there was still no change on them...the file was written directly to the array.

 

I'm wondering if Unraid associates cache drives with hardware IDs or something and is still seeing the old IDs even though I've tried to update them. It's as if Unraid sees the cache drives on the Main tab but doesn't have them correctly associated, so it doesn't use them.

 

Thanks again for the help.

15 hours ago, BKTEK said:

I'm wondering if UNRaid associated cache drives with hardware IDs or something and it's seeing the old IDs even though I've tried to update them.

I don't see how that's possible. Let's start with the basics - type this in the console:

 

touch /mnt/user/Storage/cache_test

 

Then post the output of:

 

find /mnt -name cache_test

 


/mnt/user/Storage/cache_test
/mnt/user0/Storage/cache_test
/mnt/disk4/Storage/cache_test
/mnt/rootshare/user.Monolith/Storage/cache_test

 

I didn't realize the find hadn't completed its run earlier - this is the completed output. The file only shows up on disk4 and the user-share paths, not under any cache mount, so the write bypassed the pool. Thanks for the tip.


Also worth noting: I just tried a bunch of different things, and none of them made the cache drives work (I have two).

*marked all shares as cache=yes (after turning off Docker and VMs), then ran Mover, then rebooted

*detached/deleted the cache pools and rebuilt the pools as new (with the same names)

*detached/deleted AGAIN and rebuilt using new names. Interestingly, the old cache names were still referenced in the shares, but I went in and changed them to the new names. No luck, though - it didn't change anything

 

I observed that Windows reports copying the files over SMB at about 112 MB/s (saturation) for the first 4 GB of the file, then throughput drops off dramatically to 40 MB/s (and yes, I checked both the server and the Windows box, and both are at full 1 Gb/s full duplex).
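As a quick sanity check (my own back-of-the-envelope math, not from the diagnostics), 112 MB/s is roughly what a saturated gigabit link can deliver, so the network itself seems fine:

```python
# Back-of-the-envelope check: what can a 1 Gb/s link deliver?
link_bits_per_s = 1_000_000_000               # gigabit Ethernet
raw_mb_s = link_bits_per_s / 8 / 1_000_000    # 125.0 MB/s before overhead

# Ethernet/IP/TCP/SMB overhead typically eats roughly 8-12%,
# so observing ~110-115 MB/s means the link is effectively saturated.
print(raw_mb_s)
```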

 

I don't know what else to try, and both servers are having this problem. Again, I swapped the hardware around (see first post), and maybe that's the cause. I wonder whether starting over from scratch would solve it. I could try that on my backup server, Monolith, and see what happens.


I also checked the cache drives in Unraid to see what's on them, and they remain at 0 folders, 0 files. Both during and after the write there is nothing - no space used on the drives, no changes at all.

 

Is the only/best solution to start over from scratch, which I probably should have done in the first place?

 

Thank you again to anyone helping.


Okay, one more odd observation - on my main server I decided to do the whole "mark shares as cache=yes" thing and run Mover so it bounces the files over to the array. I have about 100 GB on a cache drive, and Mover is indeed moving the files (the little bars are going down). I know the array is writing parity while this happens, but it's been moving files for maybe 4 hours now and has moved only 10 GB or less. At this rate it will take 40 hours to move 100 GB from the cache to the array, which seems absurd.
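The arithmetic behind that estimate (just my own rough numbers):

```python
# Rough mover-throughput estimate from the numbers above:
# ~10 GB moved in ~4 hours, ~100 GB total sitting on the cache.
moved_gb = 10
elapsed_hours = 4
total_gb = 100

rate_mb_s = moved_gb * 1024 / (elapsed_hours * 3600)  # effective MB/s
eta_hours = total_gb * elapsed_hours / moved_gb       # hours for all 100 GB

print(f"{rate_mb_s:.2f} MB/s, about {eta_hours:.0f} hours total")
```

Well under 1 MB/s effective, which is why the 40-hour figure comes out.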

 

Anyone have any thoughts?


I still cannot see why the cache is not being used - possibly some old setting is conflicting or not taking effect. I would suggest backing up the flash drive, creating a new one, then restoring only the things you need, like the key, disk assignments (super.dat), network config, etc. Avoid restoring anything that relates to shares, then reconfigure them.
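A sketch of what the backup step could look like from the console - this assumes the stock flash mount at /boot and uses a hypothetical "Storage" share to hold the backup, so adjust paths to your setup:

```shell
# Back up the entire flash config before rebuilding the flash drive.
# Assumes the flash is mounted at /boot (the Unraid default) and that
# /mnt/user/Storage exists to hold the backup - adjust as needed.
mkdir -p /mnt/user/Storage/flash-backup
cp -r /boot/config /mnt/user/Storage/flash-backup/

# After recreating the flash drive, restore only the essentials, e.g.:
#   config/super.dat       (disk assignments)
#   config/network.cfg     (network settings)
# and deliberately skip share-related files such as config/share.cfg
# and the config/shares/ folder, then reconfigure shares in the GUI.
```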


Just a thought: it might be worth deleting (or renaming) the config/share.cfg file on the flash drive, in case it contains values from an old Unraid release (one that predates multiple pools) that are causing problems. You can then use Settings -> Global Share Settings to have a new one generated. No idea if it will help, but it cannot hurt to try.


I had another thought. Is it possible that the root share available in the Unassigned Devices plugin bypasses cache usage over SMB? Previously I was using the method detailed by SpaceInvaderOne to create a root share, but found the plugin simpler to set up. Is it interfering?

 

Another reason I abandoned the old method is that it would incorrectly report array space in Windows. 

 

The reason I thought of this is that along the way I've heard a lot of people say to be careful with root shares when copying files across shares, drives, arrays, etc. I wonder if I'm somehow doing something wrong there.

 

Finally, can someone detail a few simple tests I can perform to verify whether my cache drives are being used correctly? Thanks, and sorry for the spam of messages - I'm just thinking out loud at this point.

Edited by BKTEK

Update - I removed the Unassigned Devices root share and set up a simple SMB share instead. Then I copied a test file over and voila: the cache did exactly what it was supposed to do. So I'm now thinking about starting a new thread to figure out what's going on with the plugin's implementation of root shares, because it definitely broke my setup.
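For anyone wanting to repeat the check, this is roughly what I did - a sketch only; "Storage" and the pool name "cache" are my setup, so adjust the names for yours:

```shell
# Probe whether a user share actually lands on the cache pool.
# Assumes a share named "Storage" and a pool named "cache" - adjust.
touch /mnt/user/Storage/cache_probe

# See where the file physically landed:
find /mnt -name cache_probe
# Healthy "Use cache: Yes" behavior: a /mnt/cache/Storage/cache_probe
# path appears. Only /mnt/diskN/Storage/ paths means the cache was bypassed.

# Clean up afterwards:
rm /mnt/user/Storage/cache_probe
```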

11 minutes ago, trurl said:

If you try to do anything with rootshare, no user share settings can apply. If you try to do something with a specific user share, user share settings should apply.

 

Were you trying to write to rootshare?

I've never been clear on the distinction between root shares and user shares. I see in the GUI that user shares are what I've created, but I'm confused about what exactly a root share is. From a layman's point of view (mine), the root share created by the plugin seems like a one-stop solution to see all of my shares in one place, which is much better than mapping 8 shares across 2 servers to multiple Windows boxes. That would be probably 10-16 network drives mapped per machine and would make me want to die.

 

So when you ask whether I'm trying to write to the root share: what I did was map the root share to Windows as drive Z:, then go into it and sync some files to one of the shares that appears inside it. But it won't use the cache that way.

 

I'm pretty sure the manual root-share technique outlined by SpaceInvaderOne did honor the user share settings (cache drives). Either way, the two methods are implemented very differently: the SpaceInvaderOne method seemed to use my user share settings but didn't accurately reflect free space across the array, while the Unassigned Devices plugin shows accurate free space across the array but bypasses the user share settings.

 

If you meant writing files to the root of the root share itself - no.

 

Thanks for the help.

