Star_weaver Posted September 4, 2020
Wasn't sure what to use as the title, but anyway, I have 8 drives:
- 1x 240 GB SSD - cache
- 1x 600 GB SAS - parity
- 2x 600 GB SAS - main storage
- 4x 148 GB SAS - main storage
Is it safe to give 1 VM a 1TB drive, or will that break things? Is it worth me using the cache drive, given its size? I want it to be more of a drive things get dumped onto when downloading or transferring stuff, then pushed to main storage. Any tips I should know about with the cache system?
trurl Posted September 4, 2020
Each data disk in the parity array is an independent filesystem, so no individual file can span disks. Unraid user shares allow folders to span disks, but files cannot. Why are you using such small disks anyway?
FreeMan Posted September 4, 2020 (edited)
1 hour ago, Star_weaver said: Is it worth me using the cache drive due to its size
Until about 6 months ago, I ran my server first with a 250GB spinning cache drive, then a 240GB SSD. It housed all the dockers as well as serving as a cache. I've since upgraded by adding 2 more 240GB SSDs to the cache pool. I've grown my server from 2x 1TB drives to the current 52TB with one of these 3 cache configs, but almost all of it was with a single 240GB cache drive. I'd venture to say that since your drives are positively tiny (by modern standards), your 240GB cache will be more than sufficient.
1 hour ago, Star_weaver said: want it to be more of a drive things get dumped onto when downloading or transferring stuff then pushed to main storage?
That's the primary purpose of the cache. Using it as a home for the docker.img file is another common (and recommended) use.
Star_weaver Posted September 5, 2020
4 hours ago, trurl said: Why are you using such small disks anyway?
Mainly due to the fact that I don't have the option for 3.5 inch drives, and I'm not going back to using 2.5 inch data drives again
trurl Posted September 5, 2020
8 minutes ago, Star_weaver said: I'm not going back to using 2.5 inch data drives again
What size are these disks if not 3.5 or 2.5?
Star_weaver Posted September 5, 2020
3 hours ago, FreeMan said: I'd venture to say that since your drives are positively tiny (by modern standards), your 240GB cache will be more than sufficient.
I'm gonna go with the "screw cache" option. It filled up, I paused all my VMs, and the mover wouldn't clear the cache after I set the shares to "yes" on the cache option. I waited 3 hours after letting the mover work, and it put everything into read only, so I couldn't start any VM.
3 minutes ago, trurl said: What size are these disks if not 3.5 or 2.5?
Autocorrect strikes again, meant sata not data
trurl Posted September 5, 2020
1 hour ago, Star_weaver said: I'm gonna go with the screw cache option
Cache is a good place for the shares used by dockers/VMs (appdata, domains, system). If these are on the array, performance will be impacted by slower parity updates, and array disks will be kept spinning since there will be open files. If nothing else, use the cache for this purpose. But really, you have plenty of cache; you just need to figure out how to use it. We can help you work through all that.
FreeMan Posted September 5, 2020
18 hours ago, Star_weaver said: It filled up, I paused all my VMs, and the mover wouldn't clear the cache
If you've got 1 or 2TB of data to move over to the server all at once, you'll definitely blow out the cache and run into issues like this. Set your shares to not use cache while you're doing the initial population of data. At this point, where you're doing a massive fill, all your drives will be spinning as you write directly to the array and it calculates parity for you. After you've got the initial transfer done, you may choose to enable cache on one or more shares, as it makes sense for you. If you're writing 100GB or less per day to the server (you'll be doing significantly less if you're just saving pictures, Word docs, etc. to it), the cache makes sense because you'll write at basically the speed you can transfer data to a fast SSD. Writing to the slower array will then occur nightly, as the mover transfers data from the SSD cache to the spinners in the array. Again, though, this is entirely up to you.
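The rule of thumb above (the cache must comfortably hold a day's writes, and a one-off bulk fill should bypass it) can be sketched as a quick sanity check. The numbers here are hypothetical examples, not measurements; on a live Unraid box you would substitute real figures (e.g. free space reported by `df` for the cache pool mount) before trusting the answer.

```shell
#!/bin/sh
# Hypothetical sizes in GB -- replace with your own numbers.
cache_size_gb=240      # total cache pool size (single 240GB SSD in this thread)
daily_writes_gb=100    # expected routine writes per day
initial_copy_gb=1500   # one-off bulk transfer during initial population

# Routine use: daily writes should fit well within the cache,
# since the mover only flushes to the array once a night.
if [ "$daily_writes_gb" -lt "$cache_size_gb" ]; then
    echo "daily writes fit: enabling cache on the share makes sense"
else
    echo "daily writes exceed cache: write those shares directly to the array"
fi

# Initial fill: anything larger than the cache will overflow it
# before the mover can catch up.
if [ "$initial_copy_gb" -gt "$cache_size_gb" ]; then
    echo "bulk copy will overflow cache: set shares to not use cache first"
fi
```

With these example figures the script recommends caching routine writes but bypassing the cache for the 1.5TB initial fill, which matches the advice above.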