SliMat Posted December 10, 2020

Hi All

I recently noticed that the cache disk in my array is installed but not being used for anything at the moment. I have a server with the following disks making up the array:

- 1 x 2TB SAS
- 5 x 1.2TB SAS
- 2 x 1TB SAS
- 2 x 600GB SAS
- 6 x 500GB SAS

Also fitted are a 2TB SAS parity disk and a 1.2TB SAS cache disk.

The server is used for a few roles, but it is installed in a datacentre and isn't used as a file store for local users on a LAN. Primarily it is a web server, Plex server, Nextcloud server and a Windows SBS server for my mail, all running as a few VMs and a number of Docker containers.

Until a few days ago I thought the VMs and Docker containers were mounted on the cache disk, outside the array, so I run regular backups of the VMs to a backup share which is in the array. Having just completed a hardware upgrade, I have realised that the VMs and Docker containers are all in the array, so the 1.2TB 'cache' disk is just sitting there doing nothing. I am looking for some advice: should I move the VMs and Docker containers onto the cache drive and maintain regular backups in the array in case of a disk failure, or would you just get rid of the cache drive and add it to the array for more storage capacity? I'm not sure whether there would be any speed advantage to using the cache disk in this scenario.

So, does anyone have any thoughts on the best config for this setup? Especially if it will reduce wear on the disks in the array, as I suspect they are spun up most of the time with the VMs and Docker containers keeping them busy 🙄

The hardware is an HP ProLiant DL380p Gen8, 2 x Xeon E5-2670 (2.6GHz / 8-core) CPUs, 192GB PC3-12800R (1600MHz) RAM, Unraid Pro v6.8.3. All disks are standard spinning disks, not SSDs!

Many thanks
trurl Posted December 10, 2020

With the array started, go to Tools - Diagnostics and attach the complete diagnostics ZIP file to your NEXT post in this thread.
SliMat Posted December 10, 2020

Thanks trurl - attached.

dl380p-rack-diagnostics-20201210-1514.zip
trurl Posted December 10, 2020

You don't have any user shares configured to use cache.

If you had an SSD cache there would definitely be some benefit to having your Docker containers and VMs on SSD. Even without an SSD there would be some benefit in not having them impacted by slower parity writes.

With as many disks as you have, it might make more sense to have dual parity whether or not you have cache.
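The parity-write penalty trurl describes can be seen directly by timing a synced write to an array path versus a cache path. A minimal probe, assuming GNU dd: on a real server you would point TARGET_DIR at /mnt/disk1 and then at /mnt/cache (both standard Unraid mount points) and compare the two rates; the sketch defaults to a temp dir so it runs anywhere.

```shell
#!/bin/sh
# Rough write-throughput probe. Run once against an array disk
# (TARGET_DIR=/mnt/disk1/probe) and once against the cache
# (TARGET_DIR=/mnt/cache/probe) to see the parity-write overhead.
# Defaults to a temp dir so the sketch itself is runnable anywhere.
set -eu
TARGET_DIR=${TARGET_DIR:-$(mktemp -d)}
TESTFILE="$TARGET_DIR/write_probe.bin"

# conv=fdatasync forces the data to disk before dd reports a rate,
# so the number reflects the device, not the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

On a parity-protected array disk each write also costs a read-modify-write of the parity disk, which is why the array number is usually a fraction of the raw drive speed.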
SliMat Posted December 10, 2020

Thanks @trurl. Yes, as I said, the existing cache drive is doing nothing. Would I be right in thinking that the best setup would be to remove the current cache disk from the cache slot and add it to the array to gain another 1.2TB of storage, then add a 1TB SSD as a cache disk and move the Docker containers and VMs to the SSD cache drive? Then (when I can afford it) add another 2TB disk as parity 2?

As I mentioned, it would be nice to get the array to spin down from time to time, as having 18 disks permanently spun up is an unnecessary power drain!
trurl Posted December 10, 2020

2 minutes ago, SliMat said:
"add a 1TB SSD as a cache disk"

You probably don't need it to be nearly that large. If you are not caching user share writes then 200GB would be more than enough.

Go to Shares - User Shares and click the Compute All button at the bottom of the page. That will show you how much of each disk each user share is using, and also the total usage for each user share. It will probably take several minutes to produce all the results, and you may have to refresh the page. Post a screenshot of the results.
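What Compute All reports can be approximated from the shell: an Unraid user share is just a top-level folder that may exist on several array disks (mounted at /mnt/disk1, /mnt/disk2, ...), so summing that folder across the disk mounts gives the per-disk and total usage. A sketch, with the share name as a placeholder and fake disk mounts built under a temp dir so it runs anywhere:

```shell
#!/bin/sh
# Approximate Unraid's "Compute All": sum one share's usage across disks.
# On a real server MNT would be /mnt and SHARE an actual share name;
# here the demo fabricates two small "disks" in a temp dir.
set -eu
MNT=$(mktemp -d)
SHARE="backups"   # placeholder share name

mkdir -p "$MNT/disk1/$SHARE" "$MNT/disk2/$SHARE"
dd if=/dev/zero of="$MNT/disk1/$SHARE/a.img" bs=1M count=2 2>/dev/null
dd if=/dev/zero of="$MNT/disk2/$SHARE/b.img" bs=1M count=3 2>/dev/null

# Per-disk usage for the share, then the total (du reports KiB with -k).
total=0
for d in "$MNT"/disk*; do
    kib=$(du -sk "$d/$SHARE" | cut -f1)
    echo "$(basename "$d"): ${kib} KiB"
    total=$((total + kib))
done
echo "total: ${total} KiB"
```

Looping the same logic over every top-level folder on every disk reproduces the whole Compute All table.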
SliMat Posted December 10, 2020

11 minutes ago, trurl said:
"You probably don't need it to be nearly that large. If you are not caching user share writes then 200GB would be more than enough."

The reason I said a 1TB SSD cache drive is because I have a 120GB Windows SBS VM which, if moved to the cache drive, will use 120GB on its own! I am out at the moment, but will post the screenshot in about an hour.

Thanks
SliMat Posted December 10, 2020

1 hour ago, trurl said:
"Post a screenshot of the results."

@trurl here are the screenshots:
trurl Posted December 10, 2020

Looks like 1TB might be needed after all.
SliMat Posted December 11, 2020

2 hours ago, trurl said:
"Looks like 1TB might be needed after all."

👍🏻 Looks like a 1TB SSD is on order tomorrow, and then at least I get another 1.2TB in the array when I swap it over. Then a 2TB SAS on my Christmas list 😂

Incidentally, by moving the VMs and Docker containers off the array onto an SSD cache disk, will the disks in the array which aren't in use stand a chance of spinning down?

Thanks again
JonathanM Posted December 11, 2020

14 minutes ago, SliMat said:
"Incidentally, by moving the VMs and Docker containers off the array onto an SSD cache disk, will the disks in the array which aren't in use stand a chance of spinning down?"

Eventually, but not currently. SAS spindown is still a work in progress; hopefully there will be a fully fleshed-out solution when 6.9 is final.
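While native SAS spindown is still in progress, one manual workaround some users script is sending a SCSI STOP UNIT to a SAS drive with the sdparm utility. This is a hedged sketch only: /dev/sdX is a placeholder device name, sdparm may not be installed on every system, and the guards make the script a harmless no-op when either is missing.

```shell
#!/bin/sh
# Hedged manual SAS spindown sketch: issue SCSI STOP UNIT via sdparm.
# /dev/sdX is a placeholder -- substitute the actual SAS device node.
# Guards keep this a no-op on machines without sdparm or the device.
DEV=${DEV:-/dev/sdX}

if command -v sdparm >/dev/null 2>&1 && [ -b "$DEV" ]; then
    # STOP UNIT spins the drive down; any I/O, or --command=start, wakes it.
    sdparm --command=stop "$DEV"
else
    echo "sdparm or $DEV unavailable; skipping"
fi
```

Note that anything touching the disk (a read, a SMART poll, the array itself) will immediately spin it back up, which is why a scripted workaround is no substitute for proper spindown support in the OS.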
SliMat Posted December 11, 2020

Thanks @jonathanm - that makes more sense now, as on my SAS systems the disks never spin down, but on my SATA systems they do. I didn't realise that SAS had that limitation; hopefully 6.9 will fix this. I won't try any pre-release/beta builds as this system is a production box. I may even build a test box on a spare chassis to test before remotely installing 6.9.

Thanks
SliMat Posted December 11, 2020

16 hours ago, trurl said:
"With as many disks as you have it might make more sense to have dual parity whether or not you have cache."

I decided that a second parity disk was probably more important than the cache drive, so I ordered another 2TB SAS disk today (💲💲💲). Thanks for the advice.
SliMat Posted February 25, 2021

On 12/10/2020 at 6:25 PM, trurl said:
"With as many disks as you have it might make more sense to have dual parity whether or not you have cache."

Hi @trurl. I have finally got a second 2TB parity disk, which has been precleared, and I am able to get into my server tomorrow to install it. I just wanted to check, as there doesn't seem to be a lot of info about second parity disks: when I take the array offline to insert it, is there a "parity 2" option when I assign the drive? If so, I guess I just select this, spin the array up, and it should do everything else?

Thanks
trurl Posted February 25, 2021

With the array stopped, the parity2 slot is just below the parity slot.
SliMat Posted February 25, 2021

Thanks @trurl
ConnerVT Posted February 25, 2021

Just added a second parity drive two weeks ago. Easy peasy. Assign it, start the array, and it will start building the new parity. 6TB for me took about 10.5 hours.
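ConnerVT's figure of 6TB in 10.5 hours works out to roughly 160 MB/s, which is in line with the sustained sequential speed of a single large spinning disk, so a 2TB parity2 build should finish proportionally faster. The arithmetic, as a quick sketch (decimal TB, as drives are marketed):

```shell
#!/bin/sh
# Parity-build rate from ConnerVT's numbers: 6 TB in 10.5 hours.
set -eu
bytes=6000000000000                              # 6 TB, decimal
seconds=$(awk 'BEGIN { print 10.5 * 3600 }')     # 37800 s

rate=$(awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.1f", b / s / 1000000 }')
echo "${rate} MB/s"                              # ~158.7 MB/s

# A 2 TB parity disk building at the same rate:
hours=$(awk -v r="$rate" 'BEGIN { printf "%.1f", 2000000000000 / (r * 1000000) / 3600 }')
echo "~${hours} hours for 2 TB"                  # ~3.5 hours
```

In practice a parity build slows toward the inner tracks of the platters, so real times run a little longer than this flat-rate estimate.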
SliMat Posted February 25, 2021

Cheers @ConnerVT. I guessed that would be the case, but thought I'd better check the process in case there are any quirks. Nice to know, though.