xokia (September 27, 2023)

If I want something to run primarily on the cache drive, I have it set as primary and the HDD as secondary. Do I have this set up correctly, or should the mover go the other direction? My thinking is that with this config nothing would get put on the HDD unless the cache drive fills. Is that correct? But then there's no backup of the cache? My intuition says I want things to run primarily on the cache but write through to the HDD at some point (whether that's hourly or daily).
JorgeB (September 27, 2023)

The way it's set, the mover will move the data from the array to the cache.

> xokia: I'm thinking with this config nothing would get put on the HDD unless the cache drive fills, is that correct?

Correct, if the minimum free space is correctly set.
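The "minimum free space" behavior JorgeB mentions can be sketched roughly like this. This is an illustrative model of the overflow decision, not Unraid's actual code, and the numbers are made up:

```shell
# Rough model of Unraid's per-share overflow decision (illustration only).
# If the pool's free space is below the share's "minimum free space"
# setting, new writes for the share go to the secondary storage (the
# array) instead of the pool.
choose_target() {
  free_kb=$1      # free space currently on the pool, in KB
  min_free_kb=$2  # the share's "minimum free space" setting, in KB
  if [ "$free_kb" -ge "$min_free_kb" ]; then
    echo "cache"
  else
    echo "array"
  fi
}

choose_target 80000000 50000000   # plenty of room left on the pool
choose_target 10000000 50000000   # below the floor, writes overflow
```

The practical takeaway is that the setting should be larger than the biggest file you expect to write, because the target is chosen before the file's final size is known.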
xokia (September 27, 2023)

> JorgeB: The way it's set the mover will move the data from array to cache. Correct, if the minimum free space is correctly set.

Is there a way to write through to the HDD, say once daily, but still operate out of the cache? Do I just set the mover in the other direction? But if it moves the data, does it then no longer operate out of the cache?
JorgeB (September 27, 2023)

> xokia: Do I just set the mover the other direction?

Yes.

> xokia: But if it moves the data then does it no longer operate out of the cache?

Correct.
xokia (September 27, 2023)

So there's no way to write through to the HDD like a real write-through cache?
itimpi (September 27, 2023)

> xokia: So no way to write-through to HDD like a real write-through cache?

No, this is not a supported Unraid function. Having said that, I think ZFS ARC might provide something like that for ZFS pools, but I'm not sure. For the items you are most likely to keep permanently on a pool/cache, there are the appdata backup and VM backup plugins; they do not provide a write-through cache, but they do automate regular backups to the array.
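What those backup plugins automate can also be approximated by hand with a scheduled one-way copy, e.g. from a User Scripts cron job. A minimal sketch, with the caveats that the paths are made-up examples and that a real appdata backup should also stop the containers first so files aren't copied mid-write:

```shell
# Periodic one-way copy from the pool to a parity-protected location.
# This is a scheduled backup, NOT a true write-through cache: between
# runs, new writes exist only on the pool.
backup_share() {
  src=$1   # e.g. /mnt/cache/appdata/       (pool side, hypothetical)
  dest=$2  # e.g. /mnt/user0/backups/appdata/ (array side, hypothetical)
  mkdir -p "$dest"
  # -a preserves permissions/times; --delete mirrors removals too
  rsync -a --delete "$src" "$dest"
}
```

Scheduled daily (for example `0 3 * * * backup_share /mnt/cache/appdata/ /mnt/user0/backups/appdata/` via cron or the User Scripts plugin), this gives the "operate out of the cache, copy to the HDD once daily" behavior asked about above, at the cost of being up to a day behind.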
xokia (September 27, 2023)

> itimpi: No, this is not a supported Unraid function. [...]

I'm kind of wondering what the point of the "cache" is at all. Could you not just install the drive as a normal drive, operate out of it, and get parity protection at the same time? I guess if you were just looking to speed up network data transfers this "cache" drive makes some sense. I will need to rethink my setup. It's not really a cache; it's more of a temporary storage area, if I understand it correctly.
xokia (September 27, 2023)

If I want to pull one of my "cache" drives out and make it disk1, so it's just a normal disk but holds all the appdata and the rest of the Unraid system files, is that an easy task? I would like to move the existing disk1 down the line as some other drive.
wayner (September 27, 2023)

> xokia: Kinda wondering the point of the "cache" at all. [...]

I have to think that Dockers, VMs, etc. will run much more quickly from an SSD cache drive than from a spinning hard drive. Isn't that the purpose of the cache?
itimpi (September 27, 2023)

> xokia: Could you not just install the drive as a normal drive and operate out of the drive and get parity protection at the same time?

You get MUCH better write performance to drives in a pool (often 5-10 times faster) than to those in a parity-protected array, which is the normal reason for using a pool. Nothing says you HAVE to have or use a pool.
xokia (September 27, 2023)

> wayner: I have to think that dockers, VMs, etc, will run much more quickly from an SSD cache drive [...]

My cache drive was a 2TB NVMe. I can just use the NVMe as disk1 of the array and run all the appdata and VMs out of it. I do also have a 1TB SSD that I will repurpose as a cache drive to speed up network transfers. This assumes I am understanding things correctly.
itimpi (September 27, 2023)

> xokia: My cache drive was a 2TB NVME I can just use the NVME as disk1 of the array and run all the appdata and VM's out of the NVME

If you also have parity in the array, then the speed of the parity writes can still severely limit write speed to the SSD in the array, affecting the performance of Docker containers and/or VMs. There is also the fact that drives in the main array cannot be trimmed, which may lead to the SSD's performance deteriorating over time if you put it there.
xokia (September 27, 2023)

I guess I do not understand the purpose of the implemented "cache" drive, then. It's a "cache", but not really; it seems more like temporary storage. It will speed up data transfers (which seems to be the primary purpose), but it also adds a data-loss risk, since it's unprotected space. If you intend to run programs out of the cache, then you must never run the mover on that data. I'm also not sure I understand what happens once the cache fills: the data gets moved to the HDD, but then nothing will move it back, so you will be using the spinning HDD and not the cache. Only new data will land in the "cache", so performance could degrade over time unless you notice that a fill event happened and move the data back manually. Seems wacky, but I'm sure I am not understanding something. Do you folks have additional tutorials or documentation explaining the details of this feature and how to get the most out of it?
wayner (September 27, 2023)

To be honest, when you have a cache setting of "no" or "prefer" (using the old unRAID terminology), you are pretty much right: it isn't actually a cache. With "prefer" the data would never get moved off the pool, unless you run out of space on the primary cache drive.
xokia (September 29, 2023)

> wayner: To be honest when you have a cache setting of no or prefer (using the old unRAID terminology) then you are pretty much right, it isn't actually a cache. [...]

I don't think it's a cache in any sense of the word. I think it's probably the best term they could come up with to roughly describe what it is. It just makes it difficult for those who understand what a cache is to understand what this is 😄
CharmPeddler (April 9)

Just to chime in, @xokia, I'm also having a hard time with the terminology and what it implies the function will actually be. So far, IF I'm understanding things correctly:

The cache drive is used to separate shares from the array, because any time data is written to the array, parity slows things down (parity is updated while writing), assuming you have parity in place. Because a pool drive is not slowed down by parity, this lets us combine parity-free reading/writing with an SSD or NVMe drive, and use that drive for the data types where we would feel slowness the most (appdata, VMs).

At the same time, the "cache" function of a share located on this drive can in fact work as a true cache when it's set up so that incoming data from outside the server is written to this share first (it should have the fastest write speeds) and then "Moved" to the array at a later time. By the time the Mover is doing its thing, we humans would presumably not be affected by the slower speed, because we've already moved on with our lives.

I'm not yet 100% sure how to set this up. Please let me know if I'm wrong on any of these points. I wanted to lay this out because I've been having a hard time finding all this contextual information in any one place, and I want to avoid going down one path and later realizing that I should have started off in a different manner. Piecing this all together has been a bit of a struggle.
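The "write to the pool first, move to the array later" setup described above is the share whose cache setting is "yes" in the old terminology, or, in newer releases, primary storage = pool, secondary storage = array, mover direction pool to array. On disk that corresponds roughly to a fragment like the one below. The key names are as I understand Unraid 6.x share .cfg files, but the share name, pool name, and floor value are made-up examples, so verify against your own files under /boot/config/shares/ before relying on this:

```
# /boot/config/shares/incoming.cfg (illustrative fragment, names assumed)
shareUseCache="yes"      # new writes land on the pool; mover sends them to the array
shareCachePool="cache"   # which pool acts as primary storage for this share
shareFloor="50000000"    # minimum free space (KB) before writes overflow to the array
```

In practice these values are normally set from the share's page in the web GUI rather than by editing the file.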