scorcho99

Everything posted by scorcho99

  1. So I want to install an alternative to inotify-tools on my unraid server to use in bash scripts: https://github.com/tinkershack/fluffy/ I tried using the dev pack to build it (it needs make and some other dependencies that are listed), but I got stuck on a dependency that apparently wasn't on the list and also wasn't in the dev pack. I also had questions about how persistent this would be anyway, since unraid runs out of a ramdisk. Anyway, can you build something like this elsewhere and copy over the binary? Should I build it on Slackware, or does it even matter? My machines are all different versions of Linux Mint. I can't find the base Slackware version unraid uses as a download, to build against that.
  2. Someone reported some issues with the script I wrote in another thread. I pulled the one I've been using off the server and I think it corrected the bug. No warranties on this, I wouldn't say I'm a bash expert. Note: LVSAVEDIR and LVSNAPSHOTDIR are probably different on your machine, so change those to what you want.

     #!/bin/bash
     # S00slqemu.sh
     # Stops libvirt, moves the save and snapshot folders to the cache drive,
     # and creates symlinks back to them so saved states survive a reboot.

     LVSAVEDIR=/mnt/cache/VMs/qemu_nv/save
     LVSNAPSHOTDIR=/mnt/cache/VMs/qemu_nv/snapshot

     # Stop libvirt if it is running
     if [ -f /var/run/libvirt/libvirtd.pid ]; then
         /etc/rc.d/rc.libvirt stop
     fi

     if [ "$LVSAVEDIR" != "/var/lib/libvirt/qemu/save" ]; then
         if [ ! -L /var/lib/libvirt/qemu/save ]; then
             if [ ! -e "$LVSAVEDIR" ]; then
                 if [ ! -e /var/lib/libvirt/qemu/save ]; then
                     mkdir "$LVSAVEDIR"
                 else
                     mv /var/lib/libvirt/qemu/save "$LVSAVEDIR"
                 fi
             else
                 rm -r /var/lib/libvirt/qemu/save
             fi
         fi
         ln -s "$LVSAVEDIR" /var/lib/libvirt/qemu/save
     fi

     if [ "$LVSNAPSHOTDIR" != "/var/lib/libvirt/qemu/snapshot" ]; then
         if [ ! -L /var/lib/libvirt/qemu/snapshot ]; then
             if [ ! -e "$LVSNAPSHOTDIR" ]; then
                 if [ ! -e /var/lib/libvirt/qemu/snapshot ]; then
                     mkdir "$LVSNAPSHOTDIR"
                 else
                     mv /var/lib/libvirt/qemu/snapshot "$LVSNAPSHOTDIR"
                 fi
             else
                 rm -r /var/lib/libvirt/qemu/snapshot
             fi
         fi
         ln -s "$LVSNAPSHOTDIR" /var/lib/libvirt/qemu/snapshot
     fi

     /etc/rc.d/rc.libvirt start
  3. I'm using an MSI X470 Gaming Plus. But I was more looking to see if anyone had installed that card on any Ryzen board in their chipset-controlled slot and had it detect, for some peace of mind. I have some old PCIe cards that just will not detect on this board but detect fine on other platforms; they're all PCIe 1.0 devices.
  4. I'm thinking of buying one of these cards for my unraid box (real limited options for good sata cards with greater than 2 ports btw) and stumbled across this thread: https://forums.servethehome.com/index.php?threads/anyone-running-a-lsi-9211-8i-in-an-pcie-x4-slot.2942/ For my purposes, I'm going to run this in the 4x pcie 2.0 (16x physical) on my ryzen x470 board. I want to keep the other slots for graphics cards. So that thread gave me pause. There are no Ryzen examples there, so I was wondering if anyone is doing this successfully? I also thought maybe they needed to cover some of the pins on their card but that seems like a different issue.
  5. Pretty old post, sorry, I don't check these forums too often. It's possible that script is kind of buggy and I've updated it since, but I believe it at least mostly worked. This has been working well for me for quite a while. I'm not sure about the hibernate button, but through virt-manager, and I believe some scripts I use for backups, I can trigger the save of the VM state to disk. IIRC it's similar to a guest-driven hibernate but not quite the same. I vaguely remember giving up on guest-driven hibernate/save for some reason or other. The guest time will be incorrect upon restore of the VM save state, but it corrects itself within a minute or two, probably thanks to the NTP time server. One gotcha I recall: snapshots would not work with UEFI-based VMs. There was some bug or limitation in how nvram states were stored that prevented it from working, and last time I checked no one had fixed it. This didn't bother me, I just run everything with seabios.
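     For reference, the save/restore I'm describing is basically just virsh save and virsh restore, something along these lines (a rough sketch; the domain name and file path are made up, adjust them to your setup):

     # Save the running VM's memory state to a file and stop the guest
     virsh save Win10 /mnt/cache/VMs/backups/Win10.save

     # Later, bring it back exactly where it left off
     virsh restore /mnt/cache/VMs/backups/Win10.save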
  6. I'll have to check my script at home. I've been running that for a while. I suspect it just had a bug in it that I probably fixed at some point. Edit: I updated the original post with my new script; I think it corrects the error, or at least it's different.
  7. You can try with a seabios-based VM. The easiest way is to create a Windows 7 VM in the unraid web console, which will use seabios by default. I get a black screen on my Intel system if the nvidia card is primary with UEFI. I believe it's because the host boot "touches" UEFI, but what seabios uses is left intact.
  8. I use virt-manager with SPICE VMs. I don't have my notes in front of me, but I believe I used Space Invader One's youtube tutorial on virt-manager setup and then did something similar to modify the VM to use a SPICE display. AFAIK virt-manager is not available for Windows, however.
  9. I haven't tried figuring out how this works, but one thing that kind of stinks about KVM, and by extension unraid, is that the virtual display adapters don't support any 3D acceleration. You can use PCI passthrough or Intel GPUs to get something, but that requires dedicating hardware to the task. I know VMware allows some 3D work to be offloaded to the host graphics, and while the vmware svga adapter is available I don't think the host has the necessary hooks to use it. I'm not sure what virtualbox uses. But there is something in existence, at least for linux guests: https://virgil3d.github.io/ It would be nice if this could be made to work with unraid. Maybe the components are already there? I can't find much in the way of tutorials, but I gather it requires a Mesa-supported GPU on the host (so AMD or Intel graphics).
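     Outside of unraid, the virgl path is usually switched on with QEMU flags roughly like these (just a sketch, assuming a host with working Mesa/OpenGL drivers and a reasonably recent QEMU; the disk path is a placeholder):

     # Boot a Linux guest with the virtio GPU and virgl 3D acceleration
     qemu-system-x86_64 \
         -enable-kvm -m 4096 -smp 2 \
         -drive file=/path/to/guest.qcow2,format=qcow2 \
         -vga virtio \
         -display gtk,gl=on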
  10. I'm surprised anyone even attempted the thunderbolt cable and dock. Sounds completely bass-ackwards to me. In addition to being expensive, I thought thunderbolt had pretty strict cable length requirements. And after you've brought the video card to the location, the dock needing power... you almost should have just put a cheap PC there. I use long hdmi and USB cables, but my run isn't too bad (~20ft to wall outlets). A little tip: you can drill much smaller holes if you use micro/mini hdmi cables, because the plug is so small.
  11. I'd tend to blame Windows, like tr0910 suggested. You can use tools to see if it's using a lot of resources, but you could also install a linux VM and see if it fares any better. My VMs don't seem to spike power usage much unless they're actually doing something. I also symlinked my VM folder to my cache drive so I can "hibernate" VMs when they're not in use by storing their memory state to disk. By default unraid uses a ram drive for this area, so hibernating will still use ram, it just moves it around in different buckets. It ends up behaving like a suspend-to-ram sleep state instead of suspend-to-disk.
  12. .img is the default (only?) option when creating VMs through unraid's web manager. You can convert them to qcow2. I would, it's a lot faster than reinstalling Windows. btrfs snapshots are another level of snapshots, though:
     - a btrfs snapshot is a snapshot of the partition that the virtual disks reside on
     - qcow2 is a virtual disk format that supports snapshotting, among other things
     - VMs themselves can do snapshots that comprise information about the VM combined with a disk snapshot (like qcow2)
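     The conversion itself is just one command, something like this (a sketch; the vdisk path is only an example and the VM should be shut down first):

     # Convert the raw .img vdisk to qcow2, showing progress
     qemu-img convert -p -f raw -O qcow2 /mnt/user/domains/Win10/vdisk1.img /mnt/user/domains/Win10/vdisk1.qcow2

     Then point the VM's disk at the new file and change the driver type to qcow2 in the XML.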
  13. Your options are the HDMI cable and USB, possibly the over-ethernet versions of those, or you could get a Steam Link and use that (it won't be as good as the straight HDMI cable, but some people think it's fine).
  14. I'm bumping this since this is the only thread on this topic I found searching. gnif had a prototype working that did guest-to-guest copy, although I think it had some issues he hadn't worked through. So in theory you could have a linux VM with a graphics card that can view another windows VM's graphics card output. But I don't think it's there yet. I would definitely like to see this working in unraid some day. Edit: Actually it looks further along than I thought. From https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387/701 gnif:
  15. AFAIK none of the virtual graphics options offered by libvirt support 3D acceleration in the guest. There's supposed to be a newer virtio GPU in existence that uses virGL, but I'm not sure how to get it to work in unraid. The vmware vga is an option, but I'm not sure if it is able to interface with a linux host to pass off graphics work. I was searching for more information about this, but there doesn't seem to be much about how it works at all, much less whether it works to its full capacity when used under KVM instead of vmware. Both options seem like they would need the unraid host to be running a full stack of graphics drivers so that they would have something to plug into.
  16. I suppose I'm getting a bit off the original topic, though. I solved the symlink part to make the qemu components persistent; I'm really just planning a backup script at this point. So now I'm a little confused about permissions. But I'm not the best with linux permissions to start with. Does unraid use its own layer on top of the standard linux permissions? I did chown (with -R, the recursive option) to nobody, and I believe nobody:users. The permissions from ls -l looked the same as the other files in the directory, aside from the owner. IIRC I still had the problem, so I wasn't sure what was going on at that point. One related issue I noticed is that the permissions wouldn't "stick", since creation of a new .save file was done by root, which set the owner back to root. I'll try that. Googling, it's a custom script for fixing permissions included with unraid. Yeah, I noticed a file I plopped in there manually through an smb connection had my user name instead of nobody.
  17. Yeah, I can copy them as root but when I try to access them over smb I can't even read the files. I'm pretty sure the scripts run under root. I may just use another script to help the backup along.
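     Something along these lines is what I have in mind (untested sketch; the source paths match my earlier script and the backup destination is just an example):

     #!/bin/bash
     # Re-own the root-created save/snapshot files so they're readable over smb,
     # then copy them off to a share on the array
     chown -R nobody:users /mnt/cache/VMs/qemu_nv/save /mnt/cache/VMs/qemu_nv/snapshot
     rsync -a /mnt/cache/VMs/qemu_nv/save /mnt/cache/VMs/qemu_nv/snapshot /mnt/user/backups/vmstate/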
  18. Thanks, yeah, I ended up there anyway and did exactly that. Seems to work fine with array start. I'm not sure if shutting down libvirt like in the script will handle VMs that are running, but that would only be a potential problem if I set them to auto-start. The only remaining issue I have left is that I can't copy the save and snapshot files off because I don't have permissions. I can change them, but it doesn't seem to take, and whenever a new file is created it is set to root-owned anyway. It's a pretty minor issue honestly, I don't necessarily need this backed up, it would just be nice.
  19. So after doing some reading I came up with the following script, based on one I saw dmacias use for a similar purpose. It appears to work exactly how I want: snapshots and save states persist after reboots and are stored on the cache drive instead of in ram. I chose to only link the save and snapshot folders rather than trying the entire qemu directory. There are also nvram, ram, and dump folders that might be of interest, but I'll cross those bridges when I get to them. How do I ensure this script runs at the right time? If the go file points at a script in a /mnt/ location, will it work? And will it run after libvirt has been unpacked?

     #!/bin/bash
     # S00slqemu.sh
     # Stops libvirt and moves the save and snapshot folders, creating symlinks to them

     LVSAVEDIR=/mnt/cache/domains/save
     LVSNAPSHOTDIR=/mnt/cache/domains/snapshot

     if [ -f /var/run/libvirt/libvirtd.pid ]; then
         /etc/rc.d/rc.libvirt stop
     fi

     if [ "$LVSAVEDIR" != "/var/lib/libvirt/qemu/save" ]; then
         if [ ! -L /var/lib/libvirt/qemu/save ]; then
             if [ ! -e "$LVSAVEDIR" ]; then
                 mv /var/lib/libvirt/qemu/save "$LVSAVEDIR"
             else
                 rm -r /var/lib/libvirt/qemu/save
             fi
         fi
         ln -s "$LVSAVEDIR" /var/lib/libvirt/qemu/save
     fi

     if [ "$LVSNAPSHOTDIR" != "/var/lib/libvirt/qemu/snapshot" ]; then
         if [ ! -L /var/lib/libvirt/qemu/snapshot ]; then
             if [ ! -e "$LVSNAPSHOTDIR" ]; then
                 mv /var/lib/libvirt/qemu/snapshot "$LVSNAPSHOTDIR"
             else
                 rm -r /var/lib/libvirt/qemu/snapshot
             fi
         fi
         ln -s "$LVSNAPSHOTDIR" /var/lib/libvirt/qemu/snapshot
     fi

     /etc/rc.d/rc.libvirt start
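     What I was picturing for the go file is just a couple of lines like this (a sketch; the flash path for the script is made up, and whether libvirt is actually up by the time go runs is exactly what I'm unsure about):

     # added to /boot/config/go
     cp /boot/config/custom/S00slqemu.sh /usr/local/sbin/S00slqemu.sh
     chmod +x /usr/local/sbin/S00slqemu.sh
     /usr/local/sbin/S00slqemu.sh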
  20. So I followed Space Invader One's great guide to link unraid to virt-manager and have converted my disks to qcow2. Snapshots on the disks work fine. I also hibernate/save a VM so that I can go straight back to where I was instead of rebooting it. The issue I discovered is that snapshots disappear from virt-manager after the unraid host is rebooted. I also had a hibernate fail the other day. These things both have the same root cause, I believe, which is that the libvirt folder is stored in memory on unraid. This means that while the qcow2 disk snapshots still exist (I've confirmed this), the greater VM snapshot disappears and thus virt-manager can't see beyond that. And hibernating/saving the VM is actually silly because it just copies it from ram to a ram disk, which, if you don't have enough ram left, will obviously fail. So I think the simplest solution is to create symlinks of the relevant folders where snapshots and VM state information are stored to the cache drive. I'd probably then put those symlink commands in the go file. It seems like some users on here have done things like this, but is this a stupid idea for some reason I'm not aware of? I'm also a little unsure which folders I should symlink, or if it would be best to symlink /var/lib/libvirt in its entirety. I don't know where the VM state is stored, but I've seen the snapshots folder, so I think I've found that.
  21. So I had snapshots working great, or so I thought. I followed the youtube guide to link my unraid server to virt-manager on my PC. I converted the disks to qcow2. I set all my VMs to use seabios. I tested the snapshots: restoring, creating, deleting. Everything worked great. Then I rebooted the server yesterday to do a hardware upgrade and I was surprised to see no snapshots. I didn't initially think the reboot caused it, but I found this thread and it fits. I suspect that when I look with the qemu-img command I will see all the snapshots. Reading through this, what seems to happen is that unraid just rebuilds the VMs at startup, and the VM snapshot is lost because it only ever existed in volatile memory. The *disk* snapshot still exists, but the virt-manager gui doesn't look for those; it probably uses the same virsh snapshot-list command as Ti33700N was using. Basically, the disk snapshot is still there, but virt-manager and virsh snapshot-list don't key off of disk snapshots; VM snapshots are a superset of disk snapshots. That's not really so bad, only the disk snapshots honestly matter to me. The changes to the VM hardware/xml are just not frequent or complicated enough to be much use to me. If I had multiple virtual disks it would be different, but I currently do not. The main thing I'm missing is the nice simple GUI in virt-manager for working with the snapshots. Is there a way to keep the snapshots visible or restore them on reboot? Alternatively, does anyone know a simple gui tool to manage the disk snapshots?
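     For anyone else poking at this, the disk-level snapshots inside the qcow2 can at least be listed and rolled back from the command line, roughly like this (a sketch; the vdisk path is just an example):

     # List the internal snapshots stored in the qcow2 file
     qemu-img snapshot -l /mnt/user/domains/Win10/vdisk1.qcow2

     # Revert the disk to a named snapshot (with the VM shut down)
     qemu-img snapshot -a snap1 /mnt/user/domains/Win10/vdisk1.qcow2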
  22. If I create a VM and install Linux Mint Cinnamon, I get the warning that Cinnamon is running in software rendering mode every time I start it up. I've hitched things up through virt-manager, but none of the virtual GPU options I've tried seem to resolve this. Not even the vmware svga, although it's unclear to me if unraid can pass some graphics work to the host like vmware supposedly can. If there are no options here it's not a big deal, I'll probably just run MATE for that guest, but I was hoping to give the VM a little boost and take some load off the cpu.
  23. Did this ever happen? I gather LVM is part of the unraid kernel but not bcache?
  24. Any reason that this would be the case? I had an Intel iGPU as primary and a 750ti installed and created a new ubuntu VM, which uses EFI from the looks of it. No video out. But with a Windows 7 VM that uses seabios it seemed to work fine. Could it be the card lacks UEFI firmware?
  25. From reading around on btrfs, it looks like there is a "single" raid mode that concatenates the disks together to get their total size even if the disks are different sizes. Regular striping is more performant, but it will lose space with mismatched sizes. That's nice. But as best as I can tell that will still be a single giant volume in the end, which isn't exactly what I'm looking for. With no way to control which physical disk the data is going on, this isn't of great use to me. I could use single mode to glue a couple of my smaller ssds together and still get snapshots, I guess.
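     If I do go that route, creating the pool looks simple enough, something like this (a sketch; the device names are placeholders and this wipes whatever is on them):

     # Glue two mismatched SSDs into one btrfs volume: data concatenated ("single"), metadata mirrored
     mkfs.btrfs -d single -m raid1 /dev/sdX /dev/sdY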