Everything posted by jonp

  1. No, if things are working fine, it is not vital to pass through the audio device together with the GPU. The audio device is only needed if you are using HDMI/DisplayPort to transmit audio, but most users have a separate audio device (PCI or USB) that they use for that purpose. In addition, passing through audio via HDMI/DisplayPort can cause weird issues if you don't apply the MSI interrupts fix.
  2. I could see us making this plugin ignore all system shares by default (domains, appdata, and system). These shares aren't exposed over SMB by default either, and frankly the only data being written to them should be data coming from within the system itself (not over SMB or any other method). I definitely agree that if you have a domains share with, say, 2TB free and you create your first vdisk at 1TB, then with this "floor" plugin you would never be able to create another vdisk, which would be unexpected and frankly confusing behavior. Maybe it's even simpler than that: by default, the plugin only applies this floor calculation to shares that are exposed over SMB. Meaning if SMB = No, the "floor" isn't managed by the plugin and instead becomes a user-managed variable.
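A minimal sketch of the proposed default rule (the function name, share names, and logic here are hypothetical illustrations, not the plugin's actual code):

```python
# Hypothetical sketch of the proposed "floor" rule: skip the system
# shares entirely, and only auto-manage minimum free space for shares
# that are actually exposed over SMB.
SYSTEM_SHARES = {"domains", "appdata", "system"}

def plugin_manages_floor(share_name: str, smb_enabled: bool) -> bool:
    """Return True if the plugin should auto-manage this share's floor."""
    if share_name in SYSTEM_SHARES:
        return False       # never touch system shares by default
    return smb_enabled     # SMB = No -> floor stays user-managed
```

Under this rule, a vdisk on the domains share would never be blocked by the plugin's floor calculation.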
  3. Awesome work by the team and just reiterating what @SpencerJ said above: THANK YOU to everyone who has helped test and report bugs for us! You guys are all rockstars!!
  4. So glad to hear this. Was in the middle of researching more on this last night and had to turn in before I could figure out a solution for you. You can probably imagine my surprise and excitement when today I see you have fixed it on your own!!
  5. @JSE thank you for taking the time to do the testing and provide this incredibly detailed write-up. As you can probably imagine, we are prepping for the release of 6.10 and I don't think we would want to try to make this change to the default behavior in this release, but this is something that deserves attention and review by our team and I will be sure to make that happen. Please be patient as this will likely require us to block off some time on the schedule to specifically address this, so I will get back to you once we've made that happen. Thanks again!!!
  6. Hi @Robot! Certainly seems like you've been doing your research when it comes to optimizing Unraid for 10gbps performance!! You've gone down all the paths that I did so many years ago (short of modifying sysctl.conf and adjusting all the values there, which we do by default in Unraid OS). That being said, I think you're actually getting pretty good performance out of the current configuration, all things considered. Let me expand.

     SHFS Overhead
     You are correct that SHFS does add a bit of overhead in order to operate. SHFS is what allows us to create "user shares" that can span multiple disks in the array and multiple cache pools. Every time you write data to a share managed by SHFS (read that as anything that goes through /mnt/user or /mnt/user0), SHFS is involved. User shares also dictate policies like minimum free space (which determines which disk will receive the next file written to that share), allocation methods, and cache participation. It is the very thing that makes managing data on Unraid so easy!! The downside is that there is a bit of overhead, but I'll be honest, you're seeing a fraction of the overhead I did years ago when I experimented with SMB and 10gbps connections for a LinusTechTips video. Over the years, SHFS has made a lot of progress in terms of optimizations and I'm glad to see that the hit isn't as severe as it used to be, though I can understand why you want more. Your use-case is very demanding.

     Writing directly to Cache Disks
     And once again you are correct that writing directly to the cache (bypassing SHFS entirely) will deliver full performance as expected. This is because SHFS doesn't have to step into every transaction and manage it.

     Weird halt times at the end of copying files
     This is news to me, but I have asked the team to look into it and attempt to reproduce it. If there is a bug, we'll have to reproduce it to figure out how to solve it.

     Terribly slow transfers of certain files/folders
     Please expand on test examples here: size of the file, length of the transfer in time, etc. This isn't something I've witnessed first hand.

     Longer Term Solution
     At some point, maybe we allow you to create a primary pool of storage that isn't dependent on SHFS? That'd be neat, eh?
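The minimum-free-space behavior mentioned above can be sketched roughly like this (a simplified illustration assuming a "most-free"-style policy; the function, disk data, and exact selection rule are not Unraid's actual allocator):

```python
def pick_disk(disks, min_free, file_size):
    """Pick a target disk for the next file written to a share.

    A disk only qualifies if the write would leave it at or above the
    share's minimum-free floor; among qualifying disks, the one with
    the most free space wins (simplified 'most-free' allocation).
    """
    candidates = [d for d in disks if d["free"] - file_size >= min_free]
    if not candidates:
        return None  # no disk can take the file without breaking the floor
    return max(candidates, key=lambda d: d["free"])["name"]
```

This is why setting minimum free space correctly matters: a file larger than any disk's remaining headroom simply has nowhere to land.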
  7. Aww thanks man! To be perfectly honest, I was getting a little burned out on the schedule and coming up with content that didn't overlap with our plethora of community content creators. Thanks to guys like SpaceInvaderOne, Ibracorp, LinusTechTips, and so many others, there are just so many good videos already out there covering the most interesting subjects. So having the Uncast happen on a regular bi-weekly schedule isn't something I'm going to do in 2022. Instead, we're going to pick specific moments to do episodes mainly around new releases or significant announcements. For example, you can definitely expect to see one soon that is coordinated with the release of 6.10. Who knows, maybe the man, the myth, the legend, @limetech will be a guest ;-).
  8. When you add new GPUs to your system, the PCI device addresses may change. You need to edit each VM that has pass-through devices assigned, reselect those devices, and save each VM. This is definitely something we need to address in the future because it isn't obvious to the user what to do in these scenarios, so that's on us for not making it easier to understand.
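Conceptually, the problem is that each VM definition stores fixed PCI addresses, and a hardware change can renumber them. A hypothetical sketch of the check involved (the function and address values are illustrations, not Unraid's actual config format):

```python
def stale_assignments(vm_devices, present_addresses):
    """Return the pass-through PCI addresses a VM references that no
    longer exist on the host (e.g. after adding or removing a GPU)."""
    return [addr for addr in vm_devices if addr not in present_addresses]
```

If this list is non-empty for a VM, that VM needs its devices reselected and saved.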
  9. Hi there and thanks for the report! For future reference, issues related to RC releases need to be posted as bug reports in the prerelease forum here: https://forums.unraid.net/bug-reports/prereleases/ This forum is more for general troubleshooting than for reporting software bugs. For this one, I'm going to try to move the topic for you.
  10. Wow thanks for catching this and reproducing. @Squid has made us aware and we will investigate.
  11. Hi there, Something is definitely amiss with your hardware. The type of issues you're experiencing are definitely not software bugs, as they would be affecting a much wider group of users if that were the case. Machine check errors are indicative of physical hardware problems. Since this is a Dell, I'm assuming it came pre-assembled. Did you then add a bunch of additional hardware to that base system? I'm curious whether that machine has sufficient power and cooling to support all that gear. I wish I had more insights for you, but unfortunately when it's hardware, it's very difficult for us to assist. The first thing I would try is detaching that eSATA solution you're using. Try without it and see if you can maintain connectivity and stability. If so, there's your problem. If not, keep reducing the gear until you reach stability. If you can't keep the system stable and accessible with any devices attached, that is a strong indication of problems with the underlying electronics. All the best, Jon
  12. Hi there, Unfortunately these types of issues can happen when you use AMD-based devices (CPU/GPU) for use with VFIO. There is just a lack of consistency with the experience across kernel and package updates. These issues don't seem to plague NVIDIA users. There is a limit to how much we can do to support AMD when they aren't even supporting this themselves. I wouldn't call this as much a "bug" with Unraid as it is with the kernel itself and from our perspective, having problems with AMD-based GPUs and GPU pass through is a known issue and limitation of AMD. Hopefully AMD will do a better job supporting VFIO in the future.
  13. I have to say, I'm at a loss here. You have quite an exhaustive setup, and for us to figure this out, we'd probably need to work with you 1:1. We do offer paid support services for such an occasion, which you can purchase here: https://unraid.net/services. Unfortunately there are no guarantees that this will fix the issue, but it's probably your best chance at this point to diagnose it. I would again try shutting down all containers and leaving only 1 VM running. Attempt your copy jobs and see if things work OK. If so, then your issue is related to all those services running in concert. But as far as this being related to the virtual networking stack, I have to say, I don't think that's the case. There are too many users leveraging that exact same virtual software stack without issue. If it were software-related, I would expect a much larger number of users complaining about this very issue, as this is a fairly common scenario. You also could try manually adjusting the XML for the VM to change the virtual network device in use. Take a look at the libvirt docs for more information: https://libvirt.org/formatdomain.html#network-interfaces
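For reference, the VM's virtual NIC is defined by an interface element in the libvirt domain XML. A minimal sketch of a bridged virtio NIC (the bridge name and MAC address below are placeholders, not taken from any specific setup):

```xml
<!-- Sketch: a bridged network interface using the virtio model.
     Swapping the model type (e.g. to 'e1000') is one way to test
     whether the virtual NIC driver is a factor. -->
<interface type='bridge'>
  <mac address='52:54:00:00:00:01'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```

See the libvirt domain format docs linked above for the full set of options.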
  14. Hi @BVD! Season 1 of the podcast definitely completed at the end of last year. This doesn't mean the show won't continue, but we are taking a hiatus for now.
  15. Ok, I can definitely tell you that the issues you're experiencing are hardware-specific in nature. These are not widespread, nor are they being reported by anyone else. I've thoroughly reviewed your diagnostics and haven't seen anything in the OS configuration itself that would be causing this problem, but you are right to be suspicious of those dropped packets. Another key indicator is that when you connect over VNC using the Unraid GUI itself, you don't have these performance issues. That is VERY interesting because even though it's local to the system, it is still using a virtual network interface to connect from the browser to the VNC session. This means that when your physical network/cables are out of the equation, things work as they should. My belief now is that you have either a bad network cable or an improperly configured network environment. I would try replacing the cables connecting your server to the network. In addition, what can you tell us about your network environment? What kind of router(s)/switch(es) do you have in place? Any advanced routing? Another question: your server appears to have two Ethernet adapters, an Intel and an ASRock branded controller. Which of these is currently being used to connect your server to the network? Maybe try switching to the other one?
  16. We need to be paying more attention to this specific pre-sales board. The forums in general are VERY active and offer far better support for consumer/prosumer use-cases than any other platform available. The big thing with Unraid is that we're DESIGNED for consumers, whereas most of our competition is designing their products for enterprise/business use-cases and simply offers a free option so that IT professionals download it, learn it at home, and then hopefully sell their bosses on bringing it into the office in a paid-support setting. When it comes to virtualization support, Unraid utilizes KVM, which is, just like the OS, Linux-based. Proxmox is another great example of a very powerful solution for businesses, but it is oftentimes overkill for consumers and they certainly don't specialize in optimization for VFIO (GPU pass through). That is where Unraid offers some clear advantages because the OS is tuned for that use-case. People run into problems with GPU pass through for a variety of reasons, but it is never because of Unraid OS itself. VFIO relies on hardware working a "certain way" and some manufacturers don't adhere to that standardized method. When they don't, problems can ensue, and there are limits to what we or even the Linux open source community can do about it. Sometimes you'll see quirks get made to address one-off hardware problems, but other times we need the manufacturer themselves to step in, and quite frankly, that can be a tall order. And as @Felixen says, it is far easier to just try it and see if it works than to ask here, because it all comes down to your specific setup. Last comments I'll make regarding hardware:
     1) Processor / mobo seem fine. Intel is generally preferred, as we see people having fewer problems with virtualization and networking.
     2) AMD-based GPUs can be more problematic with VFIO/GPU pass through. It has gotten a lot better over the years, but know that this again isn't something that will change by switching to Proxmox or any other solution.
     3) iGPU pass through is also challenging. It "should work," but since it's not a common enough use-case, there isn't a lot of support in the community for it.
  17. [IOMMU Help] This is very perplexing. Obviously it's not a general OS problem, otherwise everyone would be experiencing the issue. I know this is probably a tall ask, but is there any way you can use some type of extra storage device to try to load another OS on there that supports virtualization? What about running an older version of Unraid (6.8.3)?
  18. The number of VMs you have with overlapping core assignments is problematic. Unraid (by default) doesn't work like a traditional hypervisor. In a traditional setting, you just assign a quantity of CPUs to the VM and the hypervisor takes care of deciding where actual "work" needs to go on the physical CPUs. So if you have a VM in that setting with, say, 4 CPUs assigned (vCPU1, vCPU2, vCPU3, and vCPU4), those 4 vCPUs can technically "roam" across any of the physical CPUs in the system (pCPU0, pCPU1, pCPU2, pCPU3, pCPU4, pCPU5, pCPU...). In Unraid, the way you have this configured, that won't happen. Looking at your Linux Mint VM, vCPUs 0-5 are hard-bound to pCPUs 1, 17, 2, 18, 3, and 19. Those vCPUs will never shift. And if any of your numerous other VMs have "work" that needs to happen on the same pinned cores, you're going to create massive context-switching issues that will bring performance to a crawl. Then on top of that, you are not even fully isolating all the CPUs that VMs are using, and you are overlapping your CPU pinning not only across multiple VMs, but with your docker containers as well. Shut down all your containers and VMs except for 1 VM. Now try your copy test. Does the issue persist?
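A pinned layout like the one described above looks roughly like this in the libvirt domain XML (a sketch only; the core numbers mirror the pinning mentioned in the post, and the vCPU count is trimmed for brevity):

```xml
<!-- Sketch: hard-pinning vCPUs to specific physical cores. If several
     running VMs (or docker's cpusets) overlap on the same pcpus,
     they contend for those cores and context switching spikes. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='17'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='18'/>
</cputune>
```

With pins like these, vCPU0 only ever runs on physical core 1, so any other workload pinned to core 1 competes with it directly.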
  19. [IOMMU Help] Have you checked for a BIOS update? You may need to report this issue to the motherboard manufacturer as a BIOS bug. I doubt this is an OS issue.
  20. Also, please share with us your VM settings and CPU pinning. You can take a screenshot of the Tools > CPU Pinning tab to give us an easy way to see what is assigned to what.
  21. Ok, something may be seriously wrong with your gear or setup. VMs "just work," especially when there is no pass through of PCI devices involved. I would be very suspicious of your USB flash device (could be corrupted) or the memory (might be bad). That said, let's try starting from scratch and having you take screenshots along the way. First, let's delete your existing libvirt image and create a new one. Navigate to the VM Manager settings page (Settings > VM Manager). Stop the VM service (Enable VMs > No > Apply). Make sure the path for the libvirt image file is /mnt/user/system/libvirt/libvirt.img. Delete the libvirt image file (VM Manager Settings > libvirt storage location > click the "delete" checkbox and click "apply"). Start the VM manager service again (Enable VMs > Yes > Apply). Now if you go to the VMs tab, you should see no VMs defined there. Go ahead and try to create one. Before creating, take a screenshot of the settings you have configured for the VM. Once you click create, if any error message pops up, screenshot that as well. If no errors show up but the VM doesn't start, navigate to the Tools > Diagnostics page, download the diagnostics zip from there, and attach it to a new forum post reply here. We can then try to figure out what's going on.
  22. Hi there, Unfortunately we at Lime Tech cannot help diagnose or troubleshoot application-specific issues like this. The problem is likely in how the application is built, or some combination of that with your hardware. This is going to be an elusive problem to figure out, and it would most likely take a combined effort of the folks maintaining the container as well as the actual creators of the app themselves (that is not LinuxServer.io). If this were a wider-spread issue that could easily be recreated by multiple users, then perhaps something would be wrong at the OS layer that we could address, but given that you're the only person with this issue right now and we can't reproduce it, this is outside of our wheelhouse. My suggestion would be to hop on GitHub (or wherever the Radarr project is hosted) and ask the actual developers themselves for assistance. Be sure to include the errors and any logs/diagnostics so they can chase it down. All the best, Jon
  23. Check your BIOS settings and make sure that GPU is enabled and set to act as the primary GPU for your system. If it's not, the add-on GPU may get used by the host, where problems can then occur with GPU pass through.
  24. I mean, you can always try. The advanced GPU pass through video from SpaceInvaderOne is hit or miss when it comes to specific GPUs. Sometimes you get lucky, other times not so much. For the EVGA 970, I'm shocked you can't get that to work straight out of the box without downloading VBIOS firmware. Does the system you're using have an integrated GPU built into the processor or not?