Everything posted by scorcho99

  1. I haven't tried figuring out how this works, but one thing that kind of stinks about KVM, and by extension unraid, is that the virtual display adapters don't support any 3D acceleration. You can use PCI passthrough or Intel GPUs to get something, but that requires dedicating hardware to the task. I know VMware allows some 3D work to be offloaded to the host graphics, and while the vmware svga adapter is available, I don't think the host has the necessary hooks to use it. I'm not sure what VirtualBox uses. But there is something in existence, at least for linux guests: https://virgil3d.github.io/ It would be nice if this could be made to work with unraid. Maybe the components are already there? I can't find much in the way of tutorials, but I gather it requires a Mesa-supported GPU on the host (so AMD or Intel graphics). A rough idea of how it's normally invoked under plain QEMU is sketched below.
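     This is only a minimal sketch of how virgl is usually enabled when launching QEMU by hand, not through libvirt or the unraid web UI; it assumes the host QEMU build includes virglrenderer and that a working Mesa/OpenGL display is available on the host, neither of which I've confirmed on unraid. The memory size and disk path are just placeholders.

         # Sketch only: virtio GPU with virgl 3D acceleration, launched directly from QEMU.
         # Requires a QEMU build with virglrenderer and a host display with OpenGL.
         qemu-system-x86_64 \
           -enable-kvm -m 4096 -cpu host \
           -vga virtio \
           -display gtk,gl=on \
           -drive file=/mnt/cache/domains/mint/vdisk1.qcow2,format=qcow2,if=virtio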
  2. I'm surprised anyone even attempted the thunderbolt cable and dock. Sounds completely bass-ackwards to me. In addition to being expensive, I thought thunderbolt had pretty strict cable length requirements. And after you've brought the video card to the location, the dock needs power... you almost might as well just put a cheap PC there. I use long HDMI and USB cables, but my run isn't too bad (~20ft to wall outlets). A little tip: you can drill much smaller holes if you use micro/mini HDMI cables, because the plug is so small.
  3. I'd tend to blame Windows like tr0910 suggested. You can use tools to see if it's using a lot of resources, but you could also install a linux VM and see if it fares any better. My VMs don't seem to spike power usage much unless they're actually doing something. I also symlinked my vm folder to my cache drive so I can "hibernate" VMs when they're not in use by storing their memory state to disk. By default unraid uses a ram drive for this area, so hibernating will still use ram; it just moves it around in different buckets. It ends up behaving like a suspend-to-ram sleep state instead of suspend-to-disk. The "hibernate" I'm describing is sketched below.
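     For anyone curious, the "hibernate" is just libvirt's managed save; roughly like this (the domain name is made up):

         # Save the guest's memory state to disk and stop it. libvirt writes the state
         # file under /var/lib/libvirt/qemu/save, which I've symlinked to the cache drive.
         virsh managedsave Win10

         # Starting the domain later resumes it from the saved state automatically.
         virsh start Win10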
  4. .img is the default (only?) option when creating vms through unraid's web manager. You can convert them to qcow2; I would, it's a lot faster than reinstalling windows (roughly the one-liner below). btrfs snapshots are another level of snapshots, though: btrfs snapshots the partition that the virtual disks reside on, qcow2 is a virtual disk format that supports snapshotting among other things, and VMs themselves can do snapshots that combine information about the VM with a disk snapshot (like qcow2).
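     The conversion I mean is just qemu-img; the paths here are only examples, and the VM should be shut down first:

         # Convert a raw .img vdisk to qcow2, then point the VM's XML at the new file.
         qemu-img convert -p -f raw -O qcow2 \
           /mnt/cache/domains/Win10/vdisk1.img /mnt/cache/domains/Win10/vdisk1.qcow2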
  5. Your options are the HDMI cable and USB, or possibly the over-ethernet versions of those, or you could get a Steam Link and use that (it won't be as good as the straight HDMI cable, but some people think it's fine).
  6. I'm bumping this since this is the only thread on this topic I found searching. gnif had a prototype working that did guest-to-guest copy, although I think it had some issues he hadn't worked through. So in theory you could have a linux VM with a graphics card that can view another windows VM's graphics card output. But I don't think it's there yet. I would definitely like to see this working in unraid some day. Edit: Actually it looks further along than I thought. From https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387/701 gnif:
  7. AFAIK none of the virtual graphics options offered by libvirt support 3d acceleration in the guest. There's supposed to be a newer virtio GPU in existence that uses VirGL, but I'm not sure how to get it to work in unraid (a guess at how you'd at least select it is below). The vmware vga is an option, but I'm not sure if that is able to interface with a linux host to pass off graphics work. I was searching for more information about this, but there doesn't seem to be much about how it works at all, much less whether it works to its full capacity when used under kvm instead of vmware. Both options seem like they would need the unraid host to be running a full graphics driver stack so that they would have something to plug into.
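     An untested guess at the first step, switching a guest's video model to virtio (virt-xml ships with virt-manager/virt-install, which I run on my desktop rather than on the unraid host); even if it takes, the 3D part still depends on host-side Mesa/OpenGL support:

         # Guess only: set the video model of an existing guest to the virtio GPU.
         # Whether VirGL acceleration actually engages is a separate question.
         virt-xml Mint19 --edit --video virtio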
  8. I suppose I'm getting a bit off the original topic though. I solved the symlink to make qemu components persistent; I'm really just planning a backup script at this point. So, I'm now a little confused about permissions. But I'm not the best with linux permissions to start with. Does unraid use its own layer on top of the standard linux permissions? I did chown (with -R, the recursive option) to nobody, and I believe nobody:users (roughly what's shown below). The permissions from ls -l looked the same as the other files in the directory, aside from the owner. IIRC I still had the problem, so I wasn't sure what was going on at that point. One related issue was that I noticed the permissions wouldn't "stick", since creation of a new .save file was done by root and set the permissions back to root again. I'll try that. Googling shows it's a custom script for fixing permissions included with unraid. Yeah, I noticed a file I plopped in there manually through an smb connection had my user name instead of nobody.
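     For reference, the commands I mean are roughly these; the chmod line is just a guess at loosening group access, not something I've verified fixes the smb side, and the paths are where my symlinks point:

         # Re-own the relocated libvirt folders; libvirt recreates new .save files as
         # root, so this has to be re-run (or scripted) after new saves appear.
         chown -R nobody:users /mnt/cache/domains/save /mnt/cache/domains/snapshot
         chmod -R ug+rw /mnt/cache/domains/save /mnt/cache/domains/snapshot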
  9. Yeah, I can copy them as root but when I try to access them over smb I can't even read the files. I'm pretty sure the scripts run under root. I may just use another script to help the backup along.
  10. Thanks, yeah I ended up there anyway and did exactly that. Seems to work fine with array start. I'm not sure if shutting down libvirt like in the script will handle VMs that are running, but that would only be a potential problem if I set them to auto start. The only remaining issue I have left is that I can't copy the save and snapshot files off, because I don't have permissions. I can change them, but it doesn't seem to take, and whenever a new file is created it sets it to root-owned anyway. It's a pretty minor issue honestly; I don't necessarily need this backed up, it would just be nice.
  11. So after doing some reading I came up with the following script, based on one I saw dmacias use for a similar purpose. It appears to work exactly how I want: snapshots and save states persist after reboots and are stored on the cache drive instead of in ram. I chose to only link the save and snapshot folders rather than trying out the entire qemu directory. There are also nvram, ram and dump folders that might be of interest, but I'll cross those bridges when I get to them. How do I ensure this script runs at the right time? If I link to a /mnt/ location for a script in the go file, will it work? And will it run after libvirt has been unpacked?

      #!/bin/bash
      # S00slqemu.sh
      # Stops libvirt and moves the save and snapshot folders, creating symlinks to them.

      LVSAVEDIR=/mnt/cache/domains/save
      LVSNAPSHOTDIR=/mnt/cache/domains/snapshot

      # Stop libvirt if it is running
      if [ -f /var/run/libvirt/libvirtd.pid ]; then
          /etc/rc.d/rc.libvirt stop
      fi

      if [ "$LVSAVEDIR" != "/var/lib/libvirt/qemu/save" ]; then
          if [ ! -L /var/lib/libvirt/qemu/save ]; then
              if [ ! -e "$LVSAVEDIR" ]; then
                  mv /var/lib/libvirt/qemu/save "$LVSAVEDIR"
              else
                  rm -r /var/lib/libvirt/qemu/save
              fi
              ln -s "$LVSAVEDIR" /var/lib/libvirt/qemu/save
          fi
      fi

      if [ "$LVSNAPSHOTDIR" != "/var/lib/libvirt/qemu/snapshot" ]; then
          if [ ! -L /var/lib/libvirt/qemu/snapshot ]; then
              if [ ! -e "$LVSNAPSHOTDIR" ]; then
                  mv /var/lib/libvirt/qemu/snapshot "$LVSNAPSHOTDIR"
              else
                  rm -r /var/lib/libvirt/qemu/snapshot
              fi
              ln -s "$LVSNAPSHOTDIR" /var/lib/libvirt/qemu/snapshot
          fi
      fi

      /etc/rc.d/rc.libvirt start
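      What I was considering for the go file is just appending something like this; whether it runs late enough (after libvirt is unpacked) is exactly what I'm unsure about, and the script location is arbitrary:

          # /boot/config/go (appended at the end)
          bash /boot/config/custom/S00slqemu.sh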
  12. So I followed Space Invader One's great guide to link unraid to virt-manager, and have converted my disks to qcow2. Snapshots on the disks work fine. I also hibernate/save a VM so that I can go straight back to where I was instead of rebooting it. The issue I discovered is that snapshots disappear from virt-manager after the unraid host is rebooted. I also had a hibernate fail the other day. These things both have the same root cause, I believe, which is that the libvirt folder is stored in memory on unraid. This means that while the qcow2 disk snapshots still exist (I've confirmed this), the greater vm snapshot disappears, and thus virt-manager can't see beyond that. And hibernation/saving the VM is actually silly, because it just copies it from ram to a ram disk, which will obviously fail if you don't have enough ram left. So I think the simplest solution is to create a symlink of the relevant folders where snapshots and vm state information are stored to the cache drive. I'd probably then put that symlink command in the go file. It seems like some users on here have done things like this, but is this a stupid idea for some reason I'm not aware of? I'm also a little unsure which folders I should symlink, or if it would be best to symlink /var/lib/libvirt in its entirety. I don't know where the vm state is stored, but I've seen the snapshots folder so I think I've found that.
  13. So I had snapshots working great, or so I thought. I followed the youtube guide to link my unraid server to virt-manager on my PC. I converted the disks to qcow2. I set all my vms to use seabios. I tested the snapshots: restoring, creating, deleting. Everything worked great. Then I rebooted the server yesterday to do a hardware upgrade and I was surprised to see no snapshots. I didn't initially think the reboot caused it, but I found this thread and it fits. I suspect when I look with the qemu-img command I will see all the snapshots (the commands I mean are below). Reading through this, what seems to happen is unraid just rebuilds the VMs at startup, and the vm snapshot is lost because the vm definition only ever existed in volatile memory. The *disk* snapshot still exists, but the virt-manager gui doesn't look for those; it probably uses the same virsh snapshot-list command as Ti33700N was using. Basically, the disk snapshot is still there, but virt-manager and virsh snapshot-list don't key off of disk snapshots; what they track is a superset of the disk snapshot. That's not really so bad, only the disk snapshots honestly matter to me. The changes to the vm hardware/xml are just not frequent or complicated enough that they are much use to me. If I had multiple virtual disks it would be different, but I currently do not. The main thing I'm missing is the nice simple GUI in virt-manager for working with the snapshots. Is there a way to keep the snapshots visible or restore them on reboot? Alternatively, does anyone know a simple gui tool to manage the disk snapshots?
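      The qemu-img commands I'm referring to, with an example path; I haven't yet confirmed that reverting this way is a full replacement for the virt-manager GUI:

          # List the snapshots recorded inside the qcow2 itself (these survive reboots).
          qemu-img snapshot -l /mnt/cache/domains/Win10/vdisk1.qcow2

          # Revert the disk to a named snapshot (with the VM shut down).
          qemu-img snapshot -a snap1 /mnt/cache/domains/Win10/vdisk1.qcow2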
  14. If I create a VM and install Linux Mint Cinnamon, I get the warning on startup that Cinnamon is running in software rendering mode, every time I start up. I've hitched things up through virt-manager, but none of the virtual GPU options I've tried seem to resolve this. Even the vmware svga, although it's unclear to me whether unraid can pass some graphics work to the host like vmware supposedly can. If there are no options here it's not a big deal, I'll probably just run MATE for that guest, but I was hoping to give the VM a little boost and take some load off the cpu. The check I'm going by inside the guest is below.
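      Besides Cinnamon's own warning, this is roughly how I check the renderer inside the Mint guest; "llvmpipe" in the output means it's falling back to software rendering:

          # Run inside the guest; mesa-utils provides glxinfo.
          sudo apt install mesa-utils
          glxinfo | grep "OpenGL renderer"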
  15. Did this ever happen? I gather LVM is part of the unraid kernel but not bcache?
  16. Any reason that this would be the case? I had an Intel iGPU as primary and a 750ti installed, and created a new ubuntu VM, which uses EFI from the looks of it. No video out. But with a Windows 7 VM that uses seabios it seemed to work fine. Could it be the card's vBIOS lacks UEFI support? One way to check is sketched below.
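      One way I've seen suggested (not tried myself) to check whether the card's vBIOS contains a UEFI/GOP image is Alex Williamson's rom-parser tool; the PCI address here is an example, check lspci first:

          # Untested idea: dump the 750 Ti's ROM and look for a "type 3 (EFI)" image
          # in the rom-parser output.
          cd /sys/bus/pci/devices/0000:02:00.0
          echo 1 > rom
          cat rom > /tmp/750ti.rom
          echo 0 > rom
          ./rom-parser /tmp/750ti.rom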
  17. From reading around on btrfs, it looks like there is a "single" mode that concatenates the disks together to get their total size, even if the disks are different sizes. Regular striping is more performant, but it will lose space with mismatched sizes. That's nice. But as best as I can tell, that will still be a single giant volume in the end, which isn't exactly what I'm looking for. With no way to control which physical disk the data is going on, this isn't of great use to me. I could use the single mode to glue a couple of my smaller ssds together and still get snapshots, I guess.
  18. Huh, I thought the mover would move them to the main storage array. I'll play around with it tonight, but how do I set the cache pool up for single disks? I was under the impression it just does a btrfs mirror array by default. I've seen command line directions for changing the array type to striped, but not to separate disks (my guess at the single-profile equivalent is below).
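      Purely a guess on my part that the striped-conversion directions also accept the "single" profile and that this is safe on an unraid cache pool; back up first:

          # Convert the pool's data to "single" (concatenated, no striping) while
          # keeping metadata mirrored; run against the mounted pool.
          btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache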
  19. Great! Just to be clear, I also want a parity-protected storage array. But I'd also like the two additional cache disks to have user shares extended across them. Basically, I want to store vms on the cache share but be able to move them from an ssd to a hdd or back within the cache pool, without updating the VM's drive location.
  20. I gather that by default, at least, multiple disks in the cache pool go together as a btrfs raid 1 (btrfs-style raid 1, anyway) disk. I've seen it's possible to convert this to a striped disk as well. Another idea was presented in the linked thread of having the cache pool disks be in a single-disk / jbod mode instead. Basically, it would be like the unraid storage array, only without parity protection. Did this ever get implemented? And if so, how could I set it up?
  21. Well, I gather it works with btrfs raid 1 mirroring, and people have turned on raid 0 striping. But what jonp was suggesting was having it behave basically the same as the main storage array does currently, except without parity protection (user shares would extend across the different disks, but the disks would actually be separate rather than existing in an array). I haven't seen anything to suggest that mode works, is all.
  22. Actually, I guess all I would really need is a JBOD cache pool to do this. It seems jonp wanted this in addition to the default raid1 and optional raid0 based on what I read here: Did this ever become supported? Or is it even possible to setup?
  23. Thanks for the info on user shares with unassigned devices. That kind of throws cold water on the idea though!
  24. I have a 256GB SSD and another, larger hard drive. Unraid stores its VMs on a cache drive, or it can anyway; let's just call the unraid cache "VM storage" for now. What I'd really like is for the unraid VM storage to be an array of the hard disk with the SSD acting as a cache of the most frequently used data, kind of like Intel Smart Response (there's a flash-cache tech in windows that does the same thing). This doesn't seem like it's easy to do, but I was thinking what would be almost as good would be to mount both disks with a VM disk user share on both, and then store the most frequently used VMs on the SSD. The advantage here would be that when moving the vdisks between drives I wouldn't need to update the VM's disk location. I think I would need unassigned devices to do this. The cache pools seem like they just combine the disks and wouldn't work for this. So is that possible? Load the SSD in unassigned devices and just create a user share on it and the cache disk. Ideally I'd have a mover script, really just like the one that exists, except it would move the most frequently accessed files to the SSD and less accessed ones to the HDD. I could probably do this manually though, and it wouldn't be too much of a burden.
  25. I currently have an ESXi configuration where the virtual machines that run on it are all stored on an NFS share. The NFS server runs a script where all the VMs are shut down and the file system is snapshotted in a clean state. Then I start things back up and run a backup of the VMs from the snapshot. I like this configuration a lot because the VMs can keep chugging along while a backup runs. There are a lot of other things I don't like about ESXi, and I'm using unraid for general file storage already, so it would be nice if I could move to an unraid-only solution in the future. There doesn't seem to be a file system snapshot option in unraid, so I feel it would be a good bit harder to do. Either the same snapshot system on the cache drives or the ability to mount an external/remote NFS share and store the VM files on that share would get the job done for me (the workflow I'd be reproducing is sketched below). Is anything like this already possible?
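      To make it concrete, the ESXi/NFS workflow translated to an unraid cache drive would be something along these lines. It assumes the cache is btrfs and that the domains folder is (or can be made) a btrfs subvolume, which I haven't verified; the domain name and paths are only examples:

          # Quiesce, snapshot, resume, then back up from the read-only snapshot
          # while the VMs keep running.
          virsh shutdown Win10 && sleep 60
          btrfs subvolume snapshot -r /mnt/cache/domains /mnt/cache/domains_backup
          virsh start Win10
          rsync -a /mnt/cache/domains_backup/ /mnt/user/backups/vms/
          btrfs subvolume delete /mnt/cache/domains_backup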