bigjme Posted October 21, 2015

Hi everyone,

I have posted a few questions before purchasing my new system and have been able to find the answers myself after a lot of digging. Having found those, I am now down to just two questions which are driving me a little mad: setting up VM shares, and storage performance.

I will outline my HDD plan below; I am unsure how to achieve this, or whether it is even possible, so help would be huge on this front.

unRAID: 4TB WD Red - NAS drive
VM1: 120GB SSD + 4TB HDD
VM2: 120GB SSD + 2TB HDD

I don't plan on having any parity in the system, as each VM will write to its own drives, but I may add parity to the NAS drive later on.

I have heard people talking about the cache pool and storage pools, but can I allocate just those specific drives to a VM (preferably through the GUI drive manager rather than by modifying the VM XML config)? The OSes would be on the SSDs and the storage on the HDDs.

Or would I need to add both SSDs to the cache pool, making 240GB of space, and add the 2TB and 4TB drives to the storage pool without parity, allowing 6TB of space, splitting the storage over the two drives and the OSes over the two SSDs? (I presume this may slow things down but make it easier to expand the storage later?)

This is for a dual-VM gaming rig, so the speed of the drives is pretty important, which brings me to another question. unRAID has changed a lot in the past few months. I know that when I set up a VM, I would give it access to the SSD to install and the HDD to store the data, but it will store these in VM images if I am correct? These can be increased in size but never reduced? I am concerned that reading and writing to a VM image is going to seriously affect performance compared to reading and writing to the native drive. Am I worrying about nothing, or is this something I should be considering?
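On the vdisk worry: a raw image file is essentially a sparse file on the host filesystem, which is part of why the overhead versus the native drive is usually small, and why growing one is trivial while shrinking is not. A quick sketch of this behaviour (the /tmp path is a placeholder for illustration, not a real unRAID location):

```shell
# Create a 10 GiB sparse raw vdisk; blocks are only allocated as they are written.
truncate -s 10G /tmp/vdisk1.img

# Apparent size is 10G, but actual disk usage starts near zero:
du -h --apparent-size /tmp/vdisk1.img
du -h /tmp/vdisk1.img

# Growing the image later is a one-liner (the guest OS must then extend
# its own partition to use the new space).
truncate -s 20G /tmp/vdisk1.img

# Shrinking is the unsafe direction: truncating smaller would cut off
# data the guest filesystem has already placed near the end of the image.
```

This is also why "increase but never reduce" is the usual advice: nothing stops you truncating an image smaller, but doing so without shrinking the guest filesystem first will corrupt it.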
Also, if I set them up as shares, I should be able to see the entire image on my local network, meaning that if I need to back up the OS image and storage image for a VM, I should be able to shut down the VM and transfer the two files elsewhere; the same goes for when I decide to upgrade the size of the storage devices.

I know this is sort of a ramble, but I wanted to say everything I believe I understand and see if anyone can come along and point out where I am wrong, so I know what to look into more, or point me to some recommendations.

VM1 will be on 24/7 and never shut down unless needed, so I am looking to maybe pass 2 x 4TB drives to it for Steam games and a Windows network share + Emby server, rather than messing about with Dockers. Again, this concerns me: the VM image for the NAS data may be slowed due to its size, may add unneeded CPU strain, and may even cause a higher probability of data loss due to corruption, etc. I may be worrying about nothing, but I would rather be told than just hope.

My case is currently rather limited for storage, so I am limited to the 5 devices (hence no parity). I do, however, plan to get a 24-bay DAS box built later in the year, so I want to be able to move everything to that easily (this is when drive parity will be added).

I would really appreciate help on this, as I don't have the hardware to test this myself.

Many thanks,
Jamie
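The backup workflow described above (shut the VM down, copy the image files elsewhere) really is just file copying, since a stopped VM's state is its vdisk files. A self-contained sketch using throwaway paths under /tmp to stand in for the real share locations (on an actual unRAID box the vdisks would live under something like /mnt/cache/domains/, and you would stop the VM before copying):

```shell
# Stand-in layout: /tmp/domains/VM1 plays the role of the VM share,
# /tmp/backup plays the role of the backup destination.
mkdir -p /tmp/domains/VM1 /tmp/backup

# Fake vdisk so the example is self-contained.
printf 'not a real vdisk' > /tmp/domains/VM1/vdisk1.img

# With the VM shut down, backing up is a plain copy (rsync works just as well
# and is nicer for large images, since it can resume):
cp -a /tmp/domains/VM1/vdisk1.img "/tmp/backup/vdisk1-$(date +%F).img"
ls /tmp/backup
```

The same file-level copy works in reverse for a restore, or for moving the vdisks to a bigger drive later.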
saarg Posted October 21, 2015

I would use just one big SSD for both VMs and then use the array in unRAID for storage. I see no reason to dedicate a drive to each VM for storage. You then access the storage through the network, internally on unRAID. The network traffic will not leave the unRAID server, as the network bridge in unRAID is set up to handle this locally, so there should not be any performance hit doing this.

You could use one SSD for each VM's vdisk (OS) using the Unassigned Devices plugin, but I'm not sure if you would be able to autostart the VMs when unRAID boots. It might work or it might not; there is some info about this in various threads. I have seen someone passing through SSDs to the VM, but I haven't done it myself; if you search the forum you might find it.

The cache pool is RAID1, so you cannot decide which drive the VMs will be stored on.
bigjme Posted October 21, 2015 Author

I would prefer to do it via the disk manager, so I think a single drive per VM is now out of the question.

So would you recommend getting a newer single 240GB SSD for both OS images, adding that to the cache, and setting it as an OS share? And then not using my current 120GB SSDs at all? I can get a 240GB SSD for not too much, so I may be able to get two and see the extra read performance from the RAID1.

For the NAS drive, the two VMs will talk a little, but it will mostly be data going to other machines on the network. Right now I am looking at the following config, if this would be better:

Cache drive: 240GB SSD (or 2)
HDD pool: 2 x 4TB drives, 1 x 2TB drive

Giving a total of 120GB of SSD per OS and 10TB of storage between the two VMs until I do my drive upgrade. My main system (the one that will be virtualized) is currently using 4TB for Steam and my NAS, so the 10TB should be plenty for two instances of Steam games and the NAS.

Thanks for your help
saarg Posted October 21, 2015

RAID1 has no increased read performance; it just writes the same file to two disks. Check the link for more info about the cache pool in unRAID: http://lime-technology.com/exciting-new-developments-with-unraid-6/

If you do not want to use the plugin and test your luck, buying a 240GB disk is a better option.

Just thinking out loud a little now... If you use one of the VMs not that often and power it on only when needed (not started with unRAID), you could use one of the 120GB disks as the cache drive and put the always-on VM's vdisk there. Then you use the plugin to mount the other 120GB disk and use that drive for the other VM's vdisk. As long as you do not use the autostart feature on that VM, there is no risk of the plugin not mounting the disk before the VM tries to start. This way you can choose where to store the VMs' vdisks from the GUI.
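To illustrate what that last suggestion looks like at the XML level (since the OP mentioned the VM XML config): a vdisk stored on a plugin-mounted disk is just a file path in the VM's `<disk>` element. A hedged sketch only — the mount point `/mnt/disks/vm2_ssd` and the vdisk filename are made-up examples, and the driver/bus settings shown are one common choice rather than the only valid one:

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/disks/vm2_ssd/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
</disk>
```

Pointing the `source file` at whichever mount point the plugin creates is what places that VM's vdisk on that specific physical disk.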
bigjme Posted October 21, 2015 Author

Very good point. For me, a 240GB SSD is cheap, and it gives me an excuse to upgrade my current two. I'm from the UK, so a 240GB SSD is around £70 ($100) maximum, or I may even go for one of these M.2 drives?

One machine will be a minimal install of Windows 10, basic anti-virus, and drivers, with everything else on the storage drive. So in all honesty a 120GB allocation is probably way too much, whereas my main host tends to be using closer to the 100GB mark, so I would have a little bit of breathing room. The new motherboard I'm looking at will support two M.2 drives, so that will free up some space in my case as well. Would an M.2 be wasted at those speeds due to the VM images, or would it aid things due to its 2150MB/s maximum read? That should allow two VMs to basically never slow each other down, even on one drive.

Edit: So I think I may have misread what you were saying before in regard to the network traffic and the main storage share. I initially thought you meant that it would handle only the network traffic between the two VMs locally. What I did not realise was that it also means the storage share could contain both the NAS files and all the program files for each virtual machine. Each VM can then run the programs over the network share (by adding a network drive), which would actually just load directly from the HDDs, meaning the program files would not be located inside VM images but on the main array. This means that, on the rest of the network, all machines could see the NAS files and the entirety of the program files folder without me setting up specific network shares, which also makes this easier to back up. Am I understanding it a little better now?