Everything posted by -Daedalus

  1. When you create a VM, you specify the size and location of its primary vDisk. You can allocate as many additional vDisks as you like to a VM, either during setup or after the fact, and they can live anywhere: array, cache, or unassigned devices; it makes no difference. You can also resize vDisks (this must be done through the CLI), though obviously you'll need to grow the filesystem inside the guest OS afterwards. vDisks are thin-provisioned: they only take up as much space as the data they hold, and grow up to the specified size. If you prefer, you can also pass an entire physical disk through to a VM for its exclusive use.
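     As a rough illustration of the CLI resize, something like this (the path and size are just placeholders for your own vDisk):

        # grow an existing vDisk by 20 GiB (works for raw and qcow2; shrinking is much riskier)
        qemu-img resize /mnt/user/domains/Windows10/vdisk1.img +20G
        # then extend the partition/filesystem inside the guest (Disk Management, growpart, etc.)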
  2. I'm assuming this database corruption presents as a borked container? Because if that's the case, I'm on 6.7.2, with a cache pool, and zero issues. I've never experienced this (at least, so far as I can tell). server-diagnostics-20190903-0955.zip
  3. At present, when the '+' is clicked, a second NIC can be added, but there is no option to add a 3rd or 4th. The current VM config must be saved, then edited again, and so on until the desired number is reached. This is in contrast to vDisks, where any number can be assigned to a VM in one go (the '+' icon is present on each new vDisk, as well as the '-'). While we're at it, it would be lovely if we could select 'model type' from the UI as well. Manually changing ESX VMs to vmxnet3 can get tedious...
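     For anyone wondering, the manual change currently means editing the domain XML, roughly like this (the VM name and bridge are just examples):

        # open the VM's libvirt definition for editing
        virsh edit Windows10
        # then set the model inside the <interface> block, e.g.:
        #   <interface type='bridge'>
        #     <source bridge='br0'/>
        #     <model type='vmxnet3'/>    <!-- instead of 'virtio' or 'e1000' -->
        #   </interface>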
  4. It's not exactly a common use-case, but I'm pretty sure it can be done. To answer your questions: 1: Nothing cheap. 2: Yes. It's trivial to change CPU/memory, etc. GPU is a little trickier, but still simple. 3: They might, yes. NVIDIA's drivers are unified (mostly), so assuming you're using two relatively recent GPUs, in theory you should just be able to reboot it and everything will come up as you'd expect. 4: It's not natively done like that, but you can. 5: It's remote desktop. You won't be playing games with it, but it's fine for programming. Notes: 1: As C-Fu mentioned, ESXi is an option, but the free version has limitations, and its hardware passthrough can be pretty damn picky. More generally, have you considered just using WSL? If you're gaming on Windows anyway, why not use the Linux subsystem for programming in a terminal? 4: By default, you'd pick a location for the vDisk to be installed, and that would be that. If the only reason you want the VM's disks on the HDD is redundancy, then I'd suggest buying a second SSD and making a cache pool; that's what people usually do when running VMs/Docker on cache. If there's some other reason you want them on the HDD when inactive, then you could create the VM on the SSD and have a script move the vDisk to the HDD on shutdown and back to the SSD before startup (rough sketch below). This means you wouldn't be able to use the GUI to start the VM, but it would accomplish what you want.
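     Something along these lines, purely as a sketch; the VM name, share paths, and shutdown handling are all assumptions to adapt:

        #!/bin/bash
        # hypothetical wrapper: shuttle a VM's vDisk between SSD and HDD around start/stop
        VM="Win10-Dev"
        SSD="/mnt/cache/domains/$VM"
        HDD="/mnt/disk1/vm-archive/$VM"

        case "$1" in
          start)
            rsync -a --remove-source-files "$HDD/" "$SSD/"    # pull the vDisk back to fast storage
            virsh start "$VM"
            ;;
          stop)
            virsh shutdown "$VM"
            while virsh list --name | grep -qx "$VM"; do sleep 5; done   # wait for a clean shutdown
            rsync -a --remove-source-files "$SSD/" "$HDD/"    # park the vDisk on the array
            ;;
        esac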
  5. No reason. Just a minimalist thing, I suppose. Well, actually, it's also to try and prevent Windows from filling up images with junk. It seems to be like cats in that regard: it expands to fill all the available space in its container.
  6. I do this semi-regularly. I have base VM images that I use when deploying different OSes. Usually these are minimum size, and get expanded as required depending on the use-case. +1
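     In practice that's just something like this (the paths and size are placeholders, nothing official):

        # copy the base image sparsely so it stays thin, then grow it for the new VM
        cp --sparse=always /mnt/user/domains/base/debian-base.img /mnt/user/domains/NewVM/vdisk1.img
        qemu-img resize /mnt/user/domains/NewVM/vdisk1.img 50G
        # expand the partition/filesystem inside the guest after first boot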
  7. Moved away from Server 2012 R2 (and FlexRAID) to unRAID a few years ago. Overall, really happy with the move. Keep up the fantastic work guys!
  8. If you click and hold on any of the headers within a panel, you can re-order them (so you can have 'Docker Containers' below 'Virtual Machines', or 'Server' below 'Processor', for example). So far as I know, you can't move whole panels around, however.
  9. I knew this. I've no idea why I hadn't included that. Good shout, thanks very much!
  10. Thanks for the reply. I'll make sure to keep this backed up. Feature request: Specify backup location and frequency of .img in VM Manager.
  11. Basically, title. I've pretty much narrowed the hardware down to a faulty mobo, so we'll ignore that for now. Having restarted the server from a few unclean shutdowns (these construction workers don't like mentioning when they're going to shut off power...) my VMs have vanished. I understand this is most likely a corrupted libvirt.img. I've tried bouncing the VM service, and deleting and re-creating the .img. Weirdly, when I do the latter, only one VM shows up. To my knowledge it doesn't have anything unique about it, so that's just a curiosity. Anyway, long story short: is there any way to recover more gracefully from this, or am I going to have to manually recreate my VMs and link them back to the existing vDisks? Bonus question: can I back up libvirt.img with VMs running without any issues? (for future reference) syslog and diags attached. Thanks in advance for any tips. syslog server-diagnostics-20190801-2028.zip
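     For anyone hitting the same thing, a rough backup sketch that should make this easier to recover from next time; the paths are the usual Unraid defaults, so treat them as assumptions for your setup:

        # save each VM definition so it can be re-registered later with 'virsh define'
        mkdir -p /boot/vm-backup
        for vm in $(virsh list --all --name); do
            virsh dumpxml "$vm" > /boot/vm-backup/"$vm".xml
        done
        # copy the image itself (safest with the VM service stopped, so the copy is consistent)
        cp /mnt/user/system/libvirt/libvirt.img /boot/vm-backup/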
  12. +1 Would absolutely love to see this. I imagine it won't show up until 6.8 or later; registering interest nonetheless.
  13. Nope, they're all spinning. I've been running this same hardware for a few months now and haven't had issues previously. Also, the array starts fine in maintenance mode with the disks unmounted, so I don't think it's a power issue. Edit: 4U systems are bloody heavy... After some hardware troubleshooting, it looks like I've either got two defective DIMM slots, or an issue with the CPU's IMC. Doesn't seem to be an OS issue after all!
  14. Happened again. This time, I can't start the array without the server rebooting within a few seconds. I can start it in maintenance mode, but I'm not sure where the logs would be. The system was set to write the syslog to /mnt/user/logs, but obviously I can't get there in maintenance mode. There is a syslog in /boot/logs, but it doesn't look like there's much in it; attached anyway. Anyone able to offer advice on where to look for logs during the crash? logs.log
  15. You'd swear it was in an obscure place.... Thanks, I don't know how I managed to miss that, plain as day. Enabled. We'll see if this happens again. Cheers!
  16. Hi all, My server rebooted at some point between last night and this afternoon, cause unknown. I have diags attached, but I doubt they'll be much help considering there's nothing from before the reboot. I remember a request being made for unRAID to be its own logging server, but unless I'm mistaken this hasn't been implemented yet? It's an extremely humid and pretty hot day here, so it could have just been a thermal shutdown, but it would be nice to have an idea as to the cause. Any help would be great. server-diagnostics-20190530-1605.zip
  17. Bumping this as it's the first result I found while searching. Would it be possible to get an option for this in the UI? Maybe something about setting a default location under the Date/Time settings?
  18. That was a mighty fast reply! You're right, of course. I was initially going to say that setting a daily move through cron would mean that the mover would run during a parity check (for some reason I assumed the "don't move during parity" option was only for the x% move), but on reflection, that would be nonsensical. Dumb-ass satisfied. Carry on.
  19. Simple question, unless I'm missing something: it looks like with this plug-in enabled, the default daily mover operation gets overridden, correct? I always thought MT ran in addition to the schedule. For my use-case, I'd like the mover to still run daily, but also have this set to move at x%, just in case the cache gets full after a busy day. I realise I could force this via cron, but then it would also trigger during a parity check, which I would rather not happen. If I'm missing something obvious, please let me know. Otherwise, would it be possible to include this option in an update?
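     For reference, the cron workaround I mean would be something like the following, with a guard to skip it during parity operations; the mover path and the mdcmd check are assumptions, so double-check them on your system:

        # run daily from cron, but bail out if a parity check/rebuild is in progress
        if mdcmd status | grep -q "^mdResyncPos=0$"; then
            /usr/local/sbin/mover
        else
            echo "parity operation running, skipping mover" | logger -t mover-cron
        fi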
  20. +1 from me. Would be very handy.
  21. While the CPU pinning page is definitely an excellent step forward, I'd argue its use case is config, rather than monitoring. What they're asking for can be solved using top or cAdvisor or something, but it would be nice to have an output in the GUI somewhere of who is doing what. Resource monitoring is one area where unRAID definitely has room to grow, especially when you consider the system stats plug-in isn't included by default.
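     In the meantime, a quick way to see which containers are doing what from the terminal:

        # one-shot snapshot of per-container CPU and memory usage
        docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"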
  22. Thanks for the reply. I absolutely have put my +1 (more than once, heh) in that request. But what I'm asking about isn't creating a second cache pool from the terminal, but rather making a RAID1 using the BIOS first, then passing that to UD. Your option is probably slightly better in that it keeps everything managed within unRAID. I will have to have a think on which I'll go for. Do you know if trim is properly supported with NVMe drives created in a secondary pool like this? Maybe @johnnie.black does? Thanks again!
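     For checking whether discard/TRIM is actually getting through, something like this should work (the device and mount point are placeholders):

        lsblk --discard /dev/nvme0n1      # non-zero DISC-GRAN/DISC-MAX means the device advertises discard
        fstrim -v /mnt/disks/vm_pool      # manual trim of the mounted filesystem; errors out if unsupported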
  23. Quick one for any of the Threadripper users on here: has anyone tried passing through an NVMe RAID1 to unRAID? I have no idea if this would work or not. From what I can see, the only drivers AMD has for it are for Windows. I'm looking to have a SATA SSD for cache, Docker, etc. and a separate pool for VMs. I was thinking a motherboard-driven RAID1 passed through to UD would be an ideal solution for this, but I don't have any NVMe drives lying around. Has anyone tried this with any success? (ASRock Taichi here) Cheers!