
[Solved] VMs Independent of Array Status


joelones


I'm not sure if this topic has been discussed already, or if it's supported at all, but I was wondering what the process is to create VMs that are independent of array status. As a case in point, if one were to create a pfSense VM, stopping the array would bring down the Internet, which is probably not something you want.

 

Assuming the VM data files are stored on a disk outside the array, it should work, correct? All admin commands to create/control the VM would need to be done via the command line, but I'm just wondering if someone has attempted this.

Link to comment

This works by using Unassigned Devices with a disk mounted outside the array. Setup can be done through the GUI.
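
Roughly, the command-line flow looks like this, assuming Unassigned Devices has the disk mounted at /mnt/disks/vmdisk (the mount name, image size, and domain name are all just example values):

# create the vdisk on the non-array disk
qemu-img create -f qcow2 /mnt/disks/vmdisk/pfsense.qcow2 20G

# define the VM from a hand-written libvirt XML, then control it manually
virsh define /mnt/disks/vmdisk/pfsense.xml
virsh start pfsense
virsh shutdown pfsense

# see what's defined/running
virsh list --all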

 

Correct, this will store the VM outside of the array; however, if you stop the array, unRAID sends a command to shut down all the VMs (even those on disks mounted outside the array). You'll see this in the syslog:

rc.unRAID[11167][17776]: Stopping libvirt...
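
If you want to confirm this on your own box, you can follow the syslog while you stop the array (stock unRAID log path; the exact message text may differ between releases):

tail -f /var/log/syslog | grep -i libvirt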

Link to comment

The issue here is that, right now, we stop libvirt when the array is stopped, which forces all running VMs to be shut down before unmounting the disks. If we didn't do that, VMs could prevent the array from stopping, which could be really bad (imagine a power outage: your UPS sends a signal to stop the array, but it can't because a VM is running off an array/cache pool device). To keep libvirt running when the array is stopped, we'd still have to force all the VMs that live on the array to shut down before stopping it. Now imagine you have an ISO mapped to a VM that lives on the array (under an isos share), but the VM itself lives on a non-array device. What do we do in that scenario?

There are lots of "what ifs" and edge cases to consider if we were to allow the array to be stopped independently of the VM manager. That said, this is a feature I think could be worthwhile in the future, but it's not as simple to implement as it looks once you start reviewing all the potential "gotchas."
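
As an illustration of how easily that ISO gotcha sneaks in, this rough sketch (not anything unRAID itself runs) lists every block device each running VM has attached; any source path under /mnt/user or /mnt/disk* would pin the array even if the vdisk itself lives elsewhere:

# list vdisk AND ISO sources for each running VM
for vm in $(virsh list --name); do
  echo "== $vm =="
  virsh domblklist "$vm"
done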

Link to comment

Perhaps you could differentiate between VMs created with the GUI and those created via the command line, such that libvirt is kept running and only VMs created via the GUI are issued the shutdown event. I see command-line creation of VMs as more of an advanced feature, the assumption being that the user accepts the risks and understands the potential pitfalls. It's a tad limiting to have VM support tied to the array, in my opinion, but I understand the implications.

Link to comment

The issue here is that, right now, we stop libvirt when the array is stopped, which forces all running VMs to be shut down before unmounting the disks. [...]

 

Oiy. I had considered VMs running off of the array, but not the potential for off-array VMs to have on-array ISOs mapped, or any number of other such possible issues. Haha, much potential for headache.

Link to comment

May I suggest simple settings for auto-start and stop? Drop-down choices for start would be { Auto-start at system start, Auto-start at array start, Auto-start off }, and drop-down choices for stop would be { Stop with system, Stop with array }. This would be intuitive, and you don't have to deal with any gotchas, as it is an obvious mistake by the user if an always-on VM is using ANY array storage. All you need is a simple statement that it's the user's responsibility to control related storage locations. I *think* implementation should be straightforward.
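
For comparison, libvirt already ships a per-domain flag in this spirit, though it only covers the "start when libvirtd starts" case and knows nothing about the array (domain name is an example):

virsh autostart pfsense            # start this VM whenever libvirtd starts
virsh autostart --disable pfsense  # clear the flag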

Link to comment

May I suggest simple settings for auto-start and stop? [...]

 

I agree. Most VMs should be stopped when the array is shut down, but if we want to run pfSense, we can't. Simply make the user responsible through a setting that defaults to "Yes, shut this VM down before stopping the array." If they choose "No, leave this VM running when stopping the array," they need to manage the data connections outside the array.

 

pfSense is the only edge case that I am aware of where bad things happen if the array gets shut down when you don't want it to be.

 

 

Link to comment

The dropdown idea seems like an elegant solution.

 

I can think of another use case (albeit unlikely): a user wants to use a VM to administer unRAID via the GUI, i.e., he/she doesn't have another machine capable of connecting to the GUI. He stops the array from within the VM, the VM shuts down, he's booted out, and now he's unable to bring it back up with the GUI. Granted, this scenario is improbable, as the majority of us here probably have more devices than clean underwear.

Link to comment

pfSense is the only edge case that I am aware of where bad things happen if the array gets shut down when you don't want it to be.

 

Thought I would chime in a bit on my experience with a pfSense VM. I ran a pfSense VM for about 6 months, and for the most part it worked really well. I hardly ever have my unRAID array stopped, so it wasn't too inconvenient.

Here is something you may find interesting. Let's say you are on a personal laptop/computer and you turn on unRAID. The pfSense VM will start once everything is loaded, and then your laptop/computer will get an IP address. From your laptop you can then connect to the unRAID web GUI (as expected), and if you then click Stop on the array, your laptop will lose connection to the outside interwebs, but you still have access to your internal network through the IP address. Meaning, you can still access the unRAID web GUI even with pfSense stopped, so you can make changes to your array, add a new disk, etc. The downside is that you have no access to the outside internet during this time, so you wouldn't want to keep the array stopped for very long. This is inconvenient, but like I said, it wasn't too bad because I didn't stop my array often. Anyway, thought I would share.

Link to comment

I can think of another use case (albeit unlikely): a user wants to use a VM to administer unRAID via the GUI [...]

 

We actually have an alternative solution for this problem, but we'll save the discussion on that for another day  :-X

 

May I suggest simple settings for auto-start and stop? [...]

 

There's more to it than that, and I would argue that if a user ends up in a situation where the array won't stop because of a configuration that allowed it, well, they are going to blame us for that. "Why did you let me configure a VM that makes it so my system can't shut down while it's running!"

 

Also, there are a number of things we prevent from being modified while the array is running (network settings, VM manager configuration, hostname, etc.) that would have to be retested. Example: what happens if the network bridge is enabled and then disabled while a VM that's using it is running?

 

The point is, there is no "simple" solution here. We'd have to make some decisions on what is and isn't acceptable, and we'd have to make numerous programming changes to the webGui to accommodate this. So yes, we want to figure this out, but I just wanted to highlight that there isn't a simple "just do this and you're done, son" fix. It will need to undergo a lot of testing.
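
On the bridge example above: a quick manual check (again, just a sketch, not something the webGui does) for which running VMs are attached to a given bridge before you touch it:

# the 'Source' column shows the bridge (e.g. br0) each interface uses
for vm in $(virsh list --name); do
  echo "== $vm =="
  virsh domiflist "$vm"
done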

Link to comment

pfSense is the only edge case that I am aware of where bad things happen if the array gets shut down when you don't want it to be.

 

...and if you then click Stop on the array, your laptop will lose connection to the outside interwebs, but you still have access to your internal network through the IP address...

 

Right, but if your laptop then needs to be rebooted for any reason, it won't get an IP address (because your pfSense VM is still down [array = down]), and so you can't connect to your unRAID in any way. The only option then is to give your laptop a manual IP address (and maybe gateway/DNS, etc.) so you can connect to unRAID again and start the array back up (and subsequently pfSense). Don't forget to undo your manual IP setup on the laptop afterwards.

 

But what if that downtime of the array takes longer than expected...?

Or what if your important VMs, like email/web servers, are also down? These VMs will mostly have their own associated drives (probably SSDs), so they needn't actually be down.
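
For reference, the manual fallback described above looks roughly like this on a Linux laptop (interface name, addresses, and gateway are all example values for a typical 192.168.1.0/24 LAN):

# give the laptop a temporary static address so unRAID stays reachable without DHCP
ip addr add 192.168.1.50/24 dev eth0
ip route add default via 192.168.1.1

# once the array and pfSense are back up, undo it
ip addr del 192.168.1.50/24 dev eth0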

Link to comment

Doesn't make life easier ;)

 

I'm in the middle of deciding on the best approach for building/running my new server (which will be put together this weekend).

 

At first I was totally sure I would build it like this: unRAID bare metal + 1-2 VMs (hosting stuff) + 1-2 Dockers (Plex) + maybe later a VM for OPNsense.

Now, with the confirmation that an array stop brings my VMs down (and that this situation will stay that way for a longer period of time), I'm in deep doubt about whether I should go this way. On the other hand, if I don't, it complicates my plans for how I wanted to deal with massive storage (and the benefits of unRAID).

 

Sometimes it seems too good to be true (the chosen solution) – now I've found the pitfall (in my usage scenario).

 

Link to comment

Doesn't make life easier ;) [...]

 

Quite honestly, I have been running unRAID plus VMs plus Dockers for a couple of weeks now. While I initially really disliked the idea of losing my VMs when spinning down my array, I have come to notice that once the array is up and configured the way I like it, it really does not come down. It has not bothered me in the least.

 

The use case where it would is pfSense. But I don't like the idea of running that on combined hardware anyway. I want my internet to be separate from my other systems in case of failure. I would hate to be without net/DHCP etc. if I lost, say, the power supply in my server.

Link to comment

The use case where it would is pfSense. [...]

 

Part of the reason why I'm still keeping ESXi around. What's more, many people have VoIP, so it would be awful having the internet down while the array was down. I've got pfSense on ESXi, and ESXi has been rock solid for 4+ years and is likely going to stay that way. It's just too bad my ESXi box is somewhat dated in terms of performance now...

Link to comment

Hi jonp - thanks for joining in.

 

I see, I see. But how about pre-use tasks like preclearing a disk (or disks)? Hopefully that doesn't require stopping the array? It would break all my hopes of using unRAID as a bare-metal hypervisor/solution.

Several hours of VM downtime for doing maintenance tasks in unRAID isn't doable/justifiable for me. Important VMs like hosting solutions would be down...  :(

 

And it makes me feel (and others think the same) that unRAID would improve massively in terms of professional usage possibilities if it had optional switches for letting VMs run independently of array status. In my case, the VMs would all be isolated from the array; they would just use cache SSDs or unassigned storage media. They just need to run as soon as unRAID has started up (OK, optionally with an on/off switch for maintenance and/or a startup order/delay). I would even go so far as to use unsupported hacks/switches/settings/XML, etc. to gain this ability (until you settle on a more general usage pattern/solution for this). I would happily spend brain power and help think through and discuss possible solutions.

 

Just give me hope here, please  ;)

 

Link to comment

Currently, preclear is done outside of official unRAID management, so array status doesn't matter to it. You will likely need to power down to add the new drive unless your motherboard, SATA controller, and drive caddies support hot-swap.
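
For example, the community preclear script works directly against the raw device, array started or not (the device name is an example; triple-check it, since preclearing wipes the disk):

# preclear an unassigned disk while the array stays up
preclear_disk.sh /dev/sdc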

 

The only times I have ever stopped my array in unRAID were to power down for hardware tasks or to reboot for OS updates. Over the past 2 years, that was 2 times for drive hardware and 9 times for unRAID 6 releases (6.0.1, 6.0.0, the 6rc series, and the 6.0 beta series).

In previous years, on the unRAID 5 series, I had multiple uptime stretches of 400+ days.

Link to comment
