Everything posted by scorcho99

  1. I reworked the script to instead copy and delete the original, since that was a workable solution for this case. I think I figured out why shred was so effective at triggering it all of a sudden: I was using the "-u" option. If you look at the help for shred, by default that renames the file before deleting it! (Rather, it says it "obfuscates the filename before unlinking".) I bet if I drop that option it will be OK. Good ideas on the disk shares; not sure I'll go that route, but I think it could be made to work.
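For anyone hitting the same thing, a minimal sketch of the copy-and-delete workaround (the function and path names here are my own hypothetical placeholders, not from the actual script):

```shell
#!/bin/bash
# Sketch of the copy-then-delete workaround: avoid the in-place rename/unlink
# (the suspected trigger) by copying to the new name first, then removing the
# original. Names are hypothetical placeholders.
safe_replace() {
    src="$1"
    dst="$2"
    cp -p -- "$src" "$dst" && rm -f -- "$src"
}
```

Dropping shred's -u flag (so it only overwrites, without the rename-and-unlink step) and doing the removal separately would be another way to sidestep the rename.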
  2. I'd rather not disable NFS since it's solving a problem I had with samba shares. I already disabled hardlinks a while back. Does it make sense that NFS is involved if nothing is interacting with any NFS shares at the time of failure? The share is mounted in a single VM that was not running. And actually, looking back, it's not even clear if I'd enabled NFS when I first had this problem. I don't use Tdarr.
  3. Well, this happened to me again today. I thought it was running the shred command that did it, but that isn't involved in today's case. I believe the trigger is again a script that uses "mv" to rename a file and then deletes it. I'll try removing that part of the action and seeing if it happens again. The shred command probably performs a similar action when unlinking a file name, so they might be the same type of cause. Attached are diagnostics in the degraded state. VMs seem to stay running if they were up, and I actually still have access to an unassigned device share. Not 100% sure this is the only case, but the files being changed are on a share on my cache drive, which is an SSD and uses btrfs. I only have one docker, a minecraft server, and I haven't actually run it in months. I have an NFS share; not sure that is relevant, but I only enabled that in the past couple months, and it sort of matches the timeline of when this started occurring. But the files are being edited through samba shares, not NFS.
  4. This issue seems to pop up for me and I hadn't seen it before. What I changed recently was I added a script that renames a file (a couple of times, actually) and then runs shred on it. Not sure if it's the shred or the rename. The file remains in the same directory; I'm just changing its extension. I mention this because I read another post where someone mentioned renames in the same directory in relation to this issue. I am using 6.8.3. I'm going to experiment with removing the mv (rename, really) command.
  5. Did you ever make any progress on this?
  6. Ran into this problem for the first time today. I was renaming and deleting some empty files on the cache drive when I lost access to all shares. I had a script that was adding and deleting in the same directory. I was using a samba share, but the script ran on unraid itself, so not over samba. I'm using 6.8.3. Weirdly (and fortunately) VMs were still running fine, and rebooting appears to have restored everything.
  7. I'm not sure if this is the same issue since all the hardware is different, but recently I was having crashes after reboots with an old AMD card, but only with the drivers installed in the VM. I didn't think it was the reset bug (although maybe in a way it is) since I'd used the card successfully for passthrough tests in the past. What actually solved the problem was ALSO passing through the HDMI audio device, which I had just left behind before since I didn't need it. I still don't use it, but it was stable after this point.
  8. Yeah, we have the GPU drivers built into unraid now, so I think we just need QEMU compiled with the right flags?
  9. If you're looking for a GUI to interact with VM snapshots today, you can use virt-manager with unraid. That's what I use and I like it a lot. Keep in mind you can't snapshot OVMF machines (that's not unraid's fault; it's an old bug/limitation with KVM that I'm surprised isn't fixed) and you'll need to change how the VMs are stored, because their states will be lost on reboot otherwise. (unraid by default essentially creates its VMs from scratch every time it boots, and it has no system to maintain snapshot information.) Certainly would appreciate some tools for btrfs though; I have to go all command line with that.
  10. I put mine as "at array first start". I remember having a lot of problems with the chained directories and can't remember if I figured it out. I had to clean them all out before I got things appearing to work OK again, though. I think the symlinks were messed up when the script was run too many times, so it kept creating redundant ones. It's been too long since I messed with this, unfortunately. If I were to improve the script, I would make it better handle checking whether the symlinks already exist.
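The check I had in mind would look something like this (just a sketch; the function name and paths are hypothetical placeholders, not from the original script):

```shell
#!/bin/bash
# Sketch: create a symlink only if nothing exists at the link path yet, so
# re-running the script doesn't stack redundant/chained links.
# Paths are hypothetical placeholders.
ensure_link() {
    target="$1"
    link="$2"
    # -L catches an existing symlink (even a broken one), -e anything else
    if [ -L "$link" ] || [ -e "$link" ]; then
        return 0
    fi
    ln -s -- "$target" "$link"
}
```

Running it a second time is then a no-op instead of creating a link inside the linked directory.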
  11. I would like to have wireguard set up so that certain VMs (and maybe some dockers, but I have no use case for that right now) connect to the internet ONLY through a VPN like mullvad, while other VMs and existing dockers connect without the VPN. I read the thread, but I was a little confused about whether that was possible by the end. Basically I want to offload VPN connections to a single configuration location instead of having to set it up on every potential client.
  12. Thanks, I have it working now. Is there a simple way to constrain the server to only my local network? I'm just playing around with it at this point. Perhaps it's fine, but I noticed the docker was aware of the WAN IP in the status page. Edit: Nevermind again. I think I'd have to set up my router to forward ports for this to work, so nothing to do.
  13. Hello, I get an error that the java path is missing when I try to start. I tried changing it from 8 (what it starts with) to 16, but it didn't make any difference. This is my only docker. The docker is installed on an unassigned devices disk, but not sure if that matters. Attached is a log. Thanks. Edit: I see there is a post at the top that probably covers this... log.txt
  14. @jwoolen I appreciate this thread since it seems like it's the first info I've seen about the iommu groups on rocketlake. Just to confirm, you do not have ACS override options enabled, is that correct? (In other words, these are the true groups?) If so, then Intel has finally fixed the broken ACS separation on the root ports. While rocketlake has gotten a lukewarm response from a lot of people, this and the increased lanes for the DMI and m.2 slots make it a much improved platform for passthrough.
  15. I don't know anything about this thread but was pinged because of my gvt-g thread. It's also been a while since I did that. "All" I had to do to get unraid to use gvt-g was rebuild the kernel with the gvt-g flags turned on, as they were disabled in the default build. The qemu version was fine as-is IIRC, although it was compiled without openGL support (not surprising), and openGL is required to allow the gvt-g slices to render to a dma buffer, which many people make use of with gvt-g. It still had use even without that feature.
  16. This is a resolved post, but I appreciate the data and resolution information. My hardware behaved similarly: on my MSI x470 I noticed that when I used the 3rd slot for VGA passthrough, some of my SATA disks in the array dropped out, which sounds like the issue you were having as well. My theory was that the iommu was having some sort of resource conflict or misdirection with devices running off the chipset. That I had to use the ACS override to force this configuration to work sort of supports that too; I've always thought people were a bit eager to solve the problem with the ACS override, since I've heard it has potential stability concerns. I only recall testing an older AMD GPU. I believe all tests were with unraid 6.8. Since 3rd slot passthrough was just a test and not required for my use case, I kept using the board, and it has worked fine with dual GPUs passed through off the CPU slots. I'm still running the ACS override option to pass through USB cards; it only seemed like the GPU mucked things up.
  17. I'm trying to troubleshoot a piece of hardware I had working in passthrough in the past which doesn't seem to work now. But I can't find installers for download for the 6.4 series or 6.5. Those are the versions I probably had it last working in. I tried moving an old installation backup to my new flash drive to test with, but since the backup is from a different flash drive, it refuses to give me a trial. I happened to have 6.3.5 installer files saved, but was surprised to find that the older installers seemed to be gone from any of the posts. Here's another question: if I have a working 6.3.5 trial install, can I just copy the bz files from the 6.4 install to get it up and running? I'm just trying to test one thing and move on.
  18. I like this idea. I don't even really see why btrfs needs to be involved, though. You just need the concept of a second simulated failed disk that has the file that failed the hash check. That dynamix file integrity plugin could probably be linked up to that. Make sure you keep "write corrections to parity disk" off. I suppose you could do it now by temporarily unplugging the disk, but that is hardly seamless.
  19. There's a third party (asmedia) PCIe-to-PCI bridge listed in your first post. So the device is actually a legacy PCI device with a converter chip, basically. Not a lot of people try to run legacy PCI stuff, but it does work sometimes at least. Maybe try binding the device to vfio at boot if it isn't already.
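On 6.8-era unraid, one common way to bind a device to vfio-pci at boot was via the kernel line in syslinux.cfg on the flash drive. This is only a sketch: the vendor:device ID below is a made-up placeholder, so substitute the real one shown for your device in Tools > System Devices.

```
label unRAID OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=abcd:1234 initrd=/bzroot
```

Note that with a PCIe-to-PCI bridge in the path, the legacy device usually shares an iommu group with the bridge, so check the grouping before expecting a clean bind.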
  20. No. You'd have to handle it in the guest somehow in that case, or perhaps some drive imaging software could be used when the VM was offline. It's the main reason I don't use direct device drive passthrough personally.
  21. @SpaceInvaderOne This is an old necro thread, so this might be old news. I run a virtual sound device on my unraid server (ac97) and use it fine through virt-manager with spice as the viewer. I found your post kind of amusing since I used your guide to set up the virt-manager link. Maybe the unraid devs added something since your post that made this possible. If that hadn't worked, I was probably going to use scream audio or some other network sound device, but it would be a lot clunkier of a workflow.
  22. Try installing the adapter+card in different PCIe slots. I've noticed IRQ sharing actually comes into play with legacy PCI devices and passthrough. It's also possible this device is simply not going to get along with PCI passthrough; not all of them do. I think there is also an option in VM settings for "allow unsafe interrupts". I've never used it and am not sure what it does, but it might be worth a shot.
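For reference, that "allow unsafe interrupts" toggle corresponds to a standard vfio kernel module option; if there's no GUI switch available, it can be set on the kernel line in syslinux.cfg. This is a sketch (the option name is the stock kernel one, but treat it with caution, since it relaxes interrupt-remapping isolation between the guest and host):

```
append vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
```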
  23. Wish they made some of those new cases in white. I'm not interested in building in any more black hole cases; can't see anything.
  24. @meep I came to the same conclusion on this as you did. The price and availability of working 4 port cards is kind of wonky, so rolling your own has some advantages. I use a 4 port mining riser which I bought for ~$15; they're pretty easy to find all over. I seem to have better luck with ones using the asmedia chipset, which is what most of them use. Advantages are that they're easier to find and more flexible: you can mix and match devices (including SATA controllers and different types of USB controllers, even video cards). Also you can install 4-port cards instead of needing to use USB hubs; most 4-controller cards only give you one port per controller. Disadvantages are that if you're buying all the controller cards from scratch, it's not much cheaper than the all-in-one card, and it takes up a lot of space and creates a wiring mess, so you'll often need a custom case or a case with a lot of expansion slots. And when using USB 3.0 you have a bandwidth bottleneck: I've never seen an (affordable) mining riser that does a 4x pcie to 4qty 1x pcie split; they're all 1x to 4qty 1x pcie splits. The bottleneck doesn't really matter to me since I really only use USB 2.0 and 1.1 devices.
  25. So to back up the VMs on my cache drive I have a userscript that runs. It shuts down all running VMs, creates a read-only btrfs snapshot of the VM directory, and then boots them all back up for minimal downtime. Then the backup runs off the snapshot. This has worked fine for a few years. A second share appears in the share list (VMs_backup in my case) whenever the snapshot is active; I can't remember if I actively set this up or it happened automatically. Anyway, I suddenly cannot access this share from anywhere. The base share (VMs) still works fine; it is only the snapshot share that I cannot access. I confirmed that the snapshots are still happening, and that the folder is accessible on the server and has the right contents (through the command line). The share permissions in unraid seem OK. The only new thing I did was create a new vdisk folder, copy in a vdisk from another location, and make a new VM. This makes me think it might be a permissions thing. Is there a requirement here? The weird thing is I only added things beneath the share. I did notice some of the folders have my user account, and not nobody or root, as owner.
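For context, the userscript is essentially this flow. This is a simplified sketch, not the real script: the VM names, subvolume paths, and share name are placeholders, and setting RUN=echo turns it into a dry run that just prints the commands.

```shell
#!/bin/bash
# Sketch of the snapshot-backup flow: stop VMs, take a read-only btrfs
# snapshot of the VM subvolume, restart the VMs, then back up from the
# snapshot. All names/paths below are hypothetical placeholders.
VMS="win10 ubuntu"              # hypothetical VM names
SRC="/mnt/cache/VMs"            # btrfs subvolume holding the vdisks
SNAP="/mnt/cache/VMs_backup"    # read-only snapshot, exposed as a share

RUN="${RUN:-}"                  # set RUN=echo for a dry run

snapshot_backup() {
    local vm
    for vm in $VMS; do
        $RUN virsh shutdown "$vm"                 # request clean guest shutdown
    done
    $RUN btrfs subvolume delete "$SNAP"           # drop last run's snapshot
    $RUN btrfs subvolume snapshot -r "$SRC" "$SNAP"
    for vm in $VMS; do
        $RUN virsh start "$vm"                    # VMs come right back up
    done
    # the backup job then reads from $SNAP while the VMs keep running
}
```

A real version also needs to wait until each VM is actually powered off before snapshotting, since virsh shutdown only requests the shutdown and returns immediately.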