lunarlyte Posted January 6, 2019
So, I have three SSDs in my array, 2 of which have VMs on them. I recently installed the Fix Common Problems app, and it says that I should remove the SSDs from my array. Can I do that without killing my VMs? I can reload them without much headache, but I'd rather not have to. Also, can I use disks that are not in the array for the main VM drive?
Squid Posted January 6, 2019
43 minutes ago, lunarlyte said: "Fix Common Problems App and it says that I should remove the SSDs from my array"
The reason for that error/warning is how some SSDs handle their background garbage collection. It may (or may not) invalidate parity.
lunarlyte (Author) Posted January 6, 2019
So, does that mean I should acknowledge the warning and move on? Or should I remove them from the array, and if so, do I need to reload my VMs?
Squid Posted January 6, 2019
I make no recommendations. FCP is simply reporting the "official line" and the reasoning behind it. (And Unraid is no different in that respect than many true RAID solutions.) Wait for someone else to chime in.
JonathanM Posted January 7, 2019
Beyond the garbage collection and parity sync issue, write performance is going to be limited by your parity drive. Are you using an SSD for parity? If not, you will notice a HUGE increase in VM performance by getting the vdisks off the parity-protected array. You can move the vdisk file to any disk accessible to Unraid; you just need to specify the full path in the VM configuration. Maybe if you describe your array more fully, we could make more useful suggestions.
lunarlyte (Author) Posted January 7, 2019
Is this sufficient, or do you need terminal output? I apologize, but I am not very fluent in Linux, so I would have to figure out the commands to show the array in the terminal. My VMs are on Disks 11 and 12.
Edited January 7, 2019 by lunarlyte
JonathanM Posted January 7, 2019
Edit the VM and copy the path listed beside the primary vdisk location for the VMs in question.
lunarlyte (Author) Posted January 7, 2019
17 minutes ago, JonathanM said: "Edit the VM and copy the path listed beside the primary vdisk location for the VMs in question."

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/disk11/Lunar DVR/vdisk1.img'/>
  <backingStore/>
  <target dev='hdc' bus='sata'/>
  <boot order='1'/>
  <alias name='sata0-0-2'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
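For comparison, if the vdisk lived in the stock /mnt/user/domains share instead of directly on disk11, only the <source> line of that definition would differ. This is a hypothetical version of the same snippet, assuming the default domains share layout:

```xml
<!-- Hypothetical: the same disk definition after the vdisk has been
     moved to the default domains user share -->
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/Lunar DVR/vdisk1.img'/>
  <target dev='hdc' bus='sata'/>
  <boot order='1'/>
</disk>
```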
JonathanM Posted January 7, 2019
That's some of the info I needed, but definitely not what I wanted to see. Apparently you manually specified disk11 as the location for your VM instead of using the default Unraid setup of /mnt/user/domains. The upshot is that instead of a couple of mouse clicks, you will have to do significantly more work to get the VM to where it needs to be. In a nutshell, you need to move the /Lunar DVR folder to the cache drive and edit the VM config to point to it. I recommend creating a folder in the root of disk11 called domains if there isn't already one there. Then go to the shares and set the domains share to cache:prefer. Working on disk11, move the /Lunar DVR folder and its contents inside the domains folder. Repeat for all other VM disk folders. Edit all the VM disk locations to read /mnt/user/domains/VM Folder/vdisk1.img. Run the mover. Check to make sure all the domains folders that were on disk11 and disk12 are now on the cache drive. Done.
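The folder moves described above can be sketched as shell commands. This is only a simulation under a temporary directory so it can be run safely anywhere; on the actual server the root would be /mnt, the folder names come from the post above, and the final hop from disk11 to the cache drive is done by the mover (with the domains share set to cache:prefer), not by hand:

```shell
set -eu

# Simulated root; on a real Unraid server this would be /mnt.
ROOT="$(mktemp -d)"

# Starting layout: the vdisk lives directly in the root of disk11.
mkdir -p "$ROOT/disk11/Lunar DVR"
touch "$ROOT/disk11/Lunar DVR/vdisk1.img"

# Step 1: create a domains folder in the root of disk11.
mkdir -p "$ROOT/disk11/domains"

# Step 2: move the VM folder (and its contents) inside domains.
# Repeat for every other VM disk folder.
mv "$ROOT/disk11/Lunar DVR" "$ROOT/disk11/domains/"

# The VM config would then point at /mnt/user/domains/Lunar DVR/vdisk1.img,
# and running the mover relocates the folder to the cache drive.
ls "$ROOT/disk11/domains/Lunar DVR"
```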
lunarlyte (Author) Posted January 7, 2019
I did manually designate the location because I wanted my VMs to run off an SSD. I guess I just didn't fully understand how Unraid worked under the hood. So, in order for it to work as I want it to, I need to have my main VM vdisks run from the cache drive, and there is no way for them to each run off their own SSD? Unless maybe I use multiple cache drives, which I'm sure is not recommended? After I move the vdisks to the cache drive, should I then remove the 3 SSDs from my array, and possibly take them out of the system completely?
itimpi Posted January 7, 2019
Running VMs off the cache is normally the easiest to set up and thus the recommended way. You can also run VMs off disks that are not part of the array and are managed via the Unassigned Devices plugin.
JonathanM Posted January 7, 2019
+1 to what itimpi said. Before you start changing anything, I recommend you check out Spaceinvader One's YouTube videos on Unraid and get some background education. An alternative to what I proposed would be to remove the SSDs from the array and mount them using Unassigned Devices as itimpi said, but that is more advanced, and if you do that you really need the background knowledge of how and why so you don't get yourself in a bind. What I suggested will put things back the way Unraid is configured out of the box, so the tutorials and general knowledge will line up exactly with what you see. How you have it set up now is far from ideal and results in severe performance penalties.
lunarlyte (Author) Posted January 7, 2019
Thanks for all of the info. I have started watching some of Spaceinvader One's videos, but only a few so far. I guess it will come with experience. This may take me a few days to research and implement these changes, but I will report back as soon as I finish up this maintenance. I guess I will just go with what's recommended and not apply my Windows mentality when it comes to RAID setups and such. I am a bit of a hardware snob and refuse to use a PC with spinning disks in it. LOL. My main machine has a 500GB NVMe and a 2TB SATA SSD, and every other machine I use for work has an SSD in it, so I am used to running machines that are a bit overkill. Unraid is slowly teaching me to think differently when it comes to allocating resources. Thanks again, and I will update soon.
JorgeB Posted January 7, 2019
If you end up copying the vdisks outside the array, use a utility that supports sparse files, like cp --sparse=always, or the vdisks will use all of their allocated space.
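A quick way to see what the tip above is about: the sketch below creates a hypothetical 100 MiB sparse file in a temp directory and copies it with cp --sparse=always. The copy keeps its full apparent size while allocating almost nothing on disk, whereas a copy tool that doesn't handle sparse files would write out all 100 MiB:

```shell
set -eu
DIR="$(mktemp -d)"

# Create a 100 MiB sparse file: apparent size 100 MiB, near-zero on disk.
# A vdisk image behaves the same way before the guest fills it.
truncate -s 100M "$DIR/vdisk1.img"

# Sparse-aware copy, as suggested in the post above.
cp --sparse=always "$DIR/vdisk1.img" "$DIR/copy-sparse.img"

# Apparent size is unchanged, but the blocks actually used stay near zero.
du -h --apparent-size "$DIR/copy-sparse.img"   # ~100M apparent
du -h "$DIR/copy-sparse.img"                   # ~0 actually allocated
```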
lunarlyte (Author) Posted January 7, 2019
I ended up using the Unassigned Devices plugin and was able to easily follow Spaceinvader One's tutorial on converting a vdisk to a physical disk, and it worked like a charm. I now have all three SSDs outside of my array and my 2 VMs each running on their own SSD. Thank you all for the help. It did help that I have Windows Hyper-V experience, but this is truly a different monster, at least in my world.
JonathanM Posted January 7, 2019
So, how much of a difference do you see in VM performance?
lunarlyte (Author) Posted January 7, 2019
Disk usage alone seems lower, and the VMs feel a little more responsive, but I wasn't really doing it because of a performance issue. I was only doing it because my server was telling me that I shouldn't have SSDs in my array. It may be a stupid reason, but I just hate seeing warnings and errors.