
jonp

Members
  • Posts: 6,443
  • Joined
  • Last visited
  • Days Won: 24
  • Gender: Male

jonp last won the day on December 31 2022

jonp had the most liked content!


jonp's Achievements

  • Mentor (12/14)
  • Reputation: 702

Community Answers

  1. One thing I'd like to add: we are not, and cannot be, responsible for what someone does to their system via the command line. Root access completely eliminates our ability to guarantee anything about changes made that way. My view is that the solution must work for users who stay out of the command line; anything done there is outside that scope.
  2. No, if things are working fine, it is not vital to pass through the audio device and the GPU together. The audio device only matters if you are using HDMI/DisplayPort to carry audio, and most users have a separate audio device (PCI or USB) that they use for that purpose. In addition, passing through audio via HDMI/DisplayPort can cause weird issues if you don't apply the MSI interrupts fix; there's a quick way to check MSI status in the sketch below.
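     If you do pass the HDMI audio function through and want to confirm whether MSI interrupts are enabled for it, here's a rough sketch. It's illustrative only: it assumes lspci is available on the host, needs to run as root to show the capability flags, and the address 01:00.1 is just an example of a GPU's audio function.

        import subprocess

        def msi_enabled(pci_address: str) -> bool:
            """Return True if the device at pci_address reports MSI as enabled."""
            # lspci -vv prints a capability line such as:
            #   Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
            out = subprocess.run(
                ["lspci", "-vv", "-s", pci_address],
                capture_output=True, text=True, check=True,
            ).stdout
            for line in out.splitlines():
                if "MSI:" in line:
                    return "Enable+" in line
            return False  # device exposes no MSI capability at all

        if __name__ == "__main__":
            addr = "01:00.1"  # example: audio function of the GPU being passed through
            print(f"{addr}: MSI enabled -> {msi_enabled(addr)}")

     This only shows the host's view of the device, but it's a quick sanity check before digging into the guest-side fix.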
  3. I could see us making this plugin ignore all system shares by default (domains, appdata, and system). These shares aren't exposed over SMB by default either, and frankly the only data being written to them should be data coming from within the system itself (not over SMB or any other method). I definitely agree that if you have a domains share with, say, 2TB free and you create your first vdisk at 1TB, then with this "floor" plugin you would never be able to create another vdisk, which would be unexpected and frankly confusing behavior. Maybe it's even simpler: by default, the plugin only applies this floor calculation to shares that are exposed over SMB. Meaning if SMB = No, the "floor" isn't managed by the plugin and instead becomes a user-managed value. There's a rough sketch of that logic below.
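     Purely illustrative: it assumes share settings live in /boot/config/shares/*.cfg as simple key="value" lines with a shareExport key ("e" meaning SMB export = Yes). Verify those details against the real share config layout before building anything on it.

        from pathlib import Path

        SHARE_CFG_DIR = Path("/boot/config/shares")   # assumed location of share configs
        SYSTEM_SHARES = {"domains", "appdata", "system"}

        def read_share_cfg(path: Path) -> dict:
            """Parse a share .cfg file of key="value" lines into a dict."""
            cfg = {}
            for line in path.read_text().splitlines():
                if "=" in line:
                    key, _, value = line.partition("=")
                    cfg[key.strip()] = value.strip().strip('"')
            return cfg

        def shares_to_manage() -> list[str]:
            """Return the shares whose free-space floor the plugin would manage."""
            managed = []
            for cfg_file in sorted(SHARE_CFG_DIR.glob("*.cfg")):
                name = cfg_file.stem
                if name in SYSTEM_SHARES:
                    continue                               # system shares stay user-managed
                cfg = read_share_cfg(cfg_file)
                if cfg.get("shareExport", "-") == "e":     # assumed: "e" == exported over SMB
                    managed.append(name)                   # plugin calculates the floor here
                # otherwise SMB = No, so the floor remains a user-managed value
            return managed

        if __name__ == "__main__":
            print("Plugin would manage the floor for:", shares_to_manage())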
  4. Awesome work by the team and just reiterating what @SpencerJ said above: THANK YOU to everyone who has helped test and report bugs for us! You guys are all rockstars!!
  5. So glad to hear this. I was in the middle of researching this more last night and had to turn in before I could figure out a solution for you. You can probably imagine my surprise and excitement when I saw today that you had fixed it on your own!!
  6. @JSE thank you for taking the time to do the testing and provide this incredibly detailed write-up. As you can probably imagine, we are prepping for the release of 6.10 and I don't think we would want to try to make this change to the default behavior in this release, but this is something that deserves attention and review by our team, and I will be sure to make that happen. Please be patient, as this will likely require us to block off some time on the schedule to specifically address it, so I will get back to you once we've done that. Thanks again!!!
  7. Hi @Robot! Certainly seems like you've been doing your research when it comes to optimizing Unraid for 10gbps performance!! You've gone down all the paths that I did so many years ago (short of modifying sysctl.conf and adjusting all the values there, which we do by default in Unraid OS). That being said, I think you're actually getting pretty good performance out of the current configuration, all things considered. Let me expand.

     SHFS Overhead
     You are correct that SHFS does add a bit of overhead in order to operate. SHFS is what allows us to create "user shares" that can span multiple disks in the array and multiple cache pools. Every time you write data to a share managed by SHFS (read that as anything that goes through /mnt/user or /mnt/user0), SHFS is involved. User shares also dictate policies like minimum free space (which determines which disk will receive the next file written to that share), allocation methods, and cache participation. It is the very thing that makes managing data on Unraid so easy!! The downside is that there is a bit of overhead, but I'll be honest: you're seeing a fraction of the overhead I did years ago when I experimented with SMB and 10gbps connections for a LinusTechTips video. Over the years, SHFS has made a lot of progress in terms of optimizations, and I'm glad the hit isn't as severe as it used to be, though I can understand why you want more. Your use-case is very demanding.

     Writing directly to Cache Disks
     Once again, you are correct that writing directly to the cache (bypassing SHFS entirely) will deliver full performance as expected. This is because SHFS doesn't have to step into every transaction and manage it. If you want to quantify the difference on your own hardware, there's a rough benchmark sketch below.

     Weird halt times at the end of copying files
     This is news to me, but I have asked the team to look into it and attempt to reproduce it. If there is a bug, we'll have to reproduce it before we can figure out how to solve it.

     Terribly slow transfers of certain files/folders
     Please expand with test examples here: the size of the file, the length of the transfer in time, etc. This isn't something I've witnessed first hand.

     Longer Term Solution
     At some point, maybe we allow you to create a primary pool of storage that isn't dependent on SHFS? That'd be neat, eh?
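     That benchmark sketch, for what it's worth. It's illustrative only: it assumes a share named "scratch" that exists both as a user share (/mnt/user/scratch) and directly on a cache pool (/mnt/cache/scratch); adjust the paths and size to your setup, and don't point it at anything you care about. It only measures local write overhead through the SHFS/FUSE path versus the direct path, not the SMB side, but that's usually enough to see the delta.

        import os, time

        ONE_GIB = 1024 ** 3
        BLOCK = 1024 * 1024          # write in 1 MiB chunks

        def write_test(path: str, total_bytes: int = ONE_GIB) -> float:
            """Write total_bytes of zeros to path, fsync, and return MiB/s."""
            buf = b"\0" * BLOCK
            start = time.monotonic()
            with open(path, "wb") as f:
                written = 0
                while written < total_bytes:
                    f.write(buf)
                    written += BLOCK
                f.flush()
                os.fsync(f.fileno())     # make sure the data actually hit the disk
            elapsed = time.monotonic() - start
            os.remove(path)
            return (total_bytes / (1024 * 1024)) / elapsed

        if __name__ == "__main__":
            # Same underlying pool, two paths: one through SHFS, one direct.
            for target in ("/mnt/user/scratch/testfile", "/mnt/cache/scratch/testfile"):
                print(f"{target}: {write_test(target):.0f} MiB/s")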
  8. Aww thanks man! To be perfectly honest, I was getting a little burned out on the schedule and coming up with content that didn't overlap with our plethora of community content creators. Thanks to guys like SpaceInvaderOne, Ibracorp, LinusTechTips, and so many others, there are just so many good videos already out there covering the most interesting subjects. So having the Uncast happen on a regular bi-weekly schedule isn't something I'm going to do in 2022. Instead, we're going to pick specific moments to do episodes mainly around new releases or significant announcements. For example, you can definitely expect to see one soon that is coordinated with the release of 6.10. Who knows, maybe the man, the myth, the legend, @limetech will be a guest ;-).
  9. When you add new GPUs to your system, the PCI device addresses may change. You need to edit each VM that has passthrough devices assigned, reselect those devices, and save each VM. This is definitely something we need to address in the future, because it isn't obvious to the user what to do in these scenarios, so that's on us for not making it easier to understand. If you want a quick way to spot which VMs are pointing at stale addresses, there's a rough sketch below.
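     A rough way to spot stale assignments. It's only a sketch: it assumes you've dumped the VM definition first with virsh dumpxml <vmname> > vm.xml (the file name is just an example), and keep in mind that after a reshuffle an old address can also end up pointing at a different device, so double-check in the VM editor regardless.

        import xml.etree.ElementTree as ET
        from pathlib import Path

        def hostdev_addresses(domain_xml: str) -> list[str]:
            """Collect host PCI addresses assigned to a VM from its libvirt XML."""
            root = ET.parse(domain_xml).getroot()
            addrs = []
            for hostdev in root.findall(".//devices/hostdev[@type='pci']"):
                a = hostdev.find("./source/address")
                if a is not None:
                    addrs.append("{:04x}:{:02x}:{:02x}.{:x}".format(
                        int(a.get("domain"), 16), int(a.get("bus"), 16),
                        int(a.get("slot"), 16), int(a.get("function"), 16)))
            return addrs

        # Devices the host currently sees, named like "0000:01:00.0"
        present = {p.name for p in Path("/sys/bus/pci/devices").iterdir()}

        for addr in hostdev_addresses("vm.xml"):   # dumped via: virsh dumpxml <vmname> > vm.xml
            status = "present" if addr in present else "STALE - reselect this device in the VM editor"
            print(f"{addr}: {status}")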
  10. Hi there and thanks for the report! For future reference, issues related to RC releases need to be posted as bug reports in the prerelease forum (https://forums.unraid.net/bug-reports/prereleases/). This forum is more for general troubleshooting, not for reporting software bugs. For this one, I'm going to try to move the topic for you.
  11. Wow, thanks for catching this and reproducing it. @Squid has made us aware and we will investigate.
  12. Hi there! Something is definitely amiss with your hardware. The types of issues you're experiencing are definitely not software bugs, as those would be affecting a much wider group of users. Machine check errors are indicative of physical hardware problems (there's a quick way to check for them in the sketch below). Since this is a Dell, I'm assuming it came pre-assembled. Did you then add a bunch of additional hardware to that base system? I'm curious whether that machine has sufficient power or cooling to support all that gear. I wish I had more insights for you, but unfortunately when it's hardware, it's very difficult for us to assist. The first thing I would try is to detach that eSATA solution you're using. Try without it and see if you can maintain connectivity and stability. If so, there's your problem. If not, keep reducing the gear until you reach stability. If you can't keep the system stable and accessible no matter what you remove, that is a strong indication of problems with the underlying electronics. All the best, Jon
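     If you want to confirm whether machine check events are actually being logged, a quick sketch like this helps; it just scans dmesg output, so run it as root. The syslog in your diagnostics zip should show the same lines.

        import subprocess

        # Machine check events show up in the kernel log with markers like
        # "mce:" or "Machine check", often alongside "Hardware Error".
        KEYWORDS = ("mce:", "machine check", "hardware error")

        log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout

        hits = [line for line in log.splitlines()
                if any(k in line.lower() for k in KEYWORDS)]

        if hits:
            print(f"{len(hits)} possible machine check / hardware error lines:")
            for line in hits[-20:]:          # show the most recent ones
                print(" ", line)
        else:
            print("No machine check entries found in the current dmesg buffer.")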
  13. Hi there, Unfortunately these types of issues can happen when you use AMD-based devices (CPU/GPU) with VFIO. There is just a lack of consistency in the experience across kernel and package updates, and these issues don't seem to plague NVIDIA users. There is a limit to how much we can do to support AMD when they aren't even supporting this themselves. I wouldn't call this a "bug" in Unraid so much as in the kernel itself, and from our perspective, problems with GPU passthrough on AMD-based GPUs are a known issue and limitation of AMD. Hopefully AMD will do a better job supporting VFIO in the future.
  14. I have to say, I'm at a loss here. You have quite an exhaustive setup, and for us to figure this out, we'd probably need to work with you 1:1. We do offer paid support services for such an occasion, which you can purchase here: https://unraid.net/services. Unfortunately there are no guarantees that this will fix the issue, but it's probably your best chance of diagnosing it at this point. I would again try shutting down all containers and leaving only one VM running. Attempt your copy jobs and see if things work OK. If so, then your issue is related to all those services running in concert. But as far as this being related to the virtual networking stack, I have to say, I don't think that's the case. There are too many users leveraging that exact same virtual software stack without issue; if it were software-related, I would expect to see a much larger number of users complaining about this very issue, as this is a fairly common scenario. You could also try manually adjusting the VM's XML to change the virtual network device in use (a rough sketch of that is below). Take a look at the libvirt docs for more information: https://libvirt.org/formatdomain.html#network-interfaces
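     A rough illustration of that last suggestion. The VM name "Windows10" and the file names are just examples, and e1000 is only one model you could try; dump the definition first with virsh dumpxml and re-define it afterwards while the VM is shut down.

        import xml.etree.ElementTree as ET

        # Assumes the definition was dumped with:  virsh dumpxml Windows10 > Windows10.xml
        tree = ET.parse("Windows10.xml")

        for iface in tree.getroot().findall(".//devices/interface"):
            model = iface.find("model")
            if model is None:
                model = ET.SubElement(iface, "model")
            # Swap the virtual NIC model (e.g. virtio -> e1000) to rule the
            # paravirtual driver in or out as the culprit.
            model.set("type", "e1000")

        tree.write("Windows10-e1000.xml")
        # Apply it with:  virsh define Windows10-e1000.xml   (while the VM is shut down)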