AgentXXL

Members
  • Content Count: 240
  • Joined
  • Last visited

Community Reputation: 12 Good

About AgentXXL
  • Rank: Advanced Member


  1. Fix Common Problems could report the fact that /mnt/remotes has been created; that should be fairly easy to detect. It already scans paths in Docker containers, reporting errors like forgetting to set a path as 'R/W Slave', so it might not be a big ask to add this functionality.
  2. I agree with others that using /mnt/remotes is still a good plan. I was able to correct my mount issues after restarting unRAID and, of course, updating to the latest revision that reverts the change. The symlink idea also came to me while thinking about how to change all my Docker and VM paths that used the /mnt/disks mountpoints (see the sketch after this list). At least using symlinks shouldn't require users to unmount their remote shares before updating.
  3. Note that your update to UD has created a mess for remote shares that were already mounted on my media unRAID. The shares lost their connection and report no size or used/free info after applying the update. I am unable to unmount them now, so I will have to stop the array and/or restart to see if that will clear the issue. Alas, I have a large transfer underway, so the troubleshooting will have to wait. On my backup unRAID I unmounted the remote shares BEFORE applying the update, and once the update was done I was able to re-mount them. Your update script should have unmounted the shares before changing their mount location to /mnt/remotes. I'll post diagnostics tomorrow if I'm unable to restore the remote shares after the array stop and/or reboot.
  4. After a few different trials I now have my script working. In the long run I may have to build myself a custom nzbget container, as the changes I make to the container must be redone each time I shut down or restart the unRAID system. The two changes necessary are to install the Python package manager (pip) and then use it to install the lxml toolkit extension. It's fairly painless though. Prerequisite: Python 3.x must be used - the binhex container has version 3.7 installed so nothing needs to be added yet. Download the get-pip.py script from https://pip.pypa.io/en/stable/installing/ and place it inside the container; I used the 'scripts' folder located at /config/scripts. 1. Open the Docker container console and change to the folder where you placed the get-pip.py script. 2. Install pip by issuing the command 'python get-pip.py' in the container console. 3. Install the lxml extension by issuing the command 'pip install lxml' in the container console. That's it. My Docker knowledge grew a bit while working on this issue, but I'll admit I'm still a novice. If there is a way for me to make my changes permanent in the binhex-nzbget container, please let me know (a rough script to re-apply the changes is sketched after this list). Thanks again @binhex for the various containers that you provide.
  5. If I only had one backplane in my Supermicro CSE-847... this is a 36-bay unit with a 24-port backplane at the front (same as a CSE-846) and a 12-port backplane (same as a CSE-836) in the rear, so doing dual link on both backplanes requires 4 ports. I thought about re-using my Dell H310 in the other slot, which would give me the extra 2 ports needed to accomplish dual link, but that'll have to wait as the rebuild is already halfway done. In the syslog - it's the line that I copied into the post. But I just realized that since my board is only PCIe 2.0, the max possible is 5 GT/s. The message that unRAID reported says 32 Gb/s out of a possible 64 Gb/s - possible with PCIe 3.0 or higher, but not on PCIe 2.0. My mistake... Yes, now that it's a new day and I see my mistake, I agree that it's running an x8 PCIe 2.0 link. As I'm seeing 90 MB/s in actual use, it confused my muddled brain. So for now I'm running at the fastest I can given the age of the motherboard. I'm still puzzled as to how the heck I installed the 9217-8i in the x4 slot. I'm sure the Dell H310 was in the 1st x8 slot, but obviously I goofed when replacing it with the new 9217-8i. Thanks again for your help (and patience!) with my issues.
  6. The Tylersburg 5520 only supports PCIe 2.0. The Supermicro CSE-847 model I have only supports low-profile cards, and I only have the one LSI 9217-8i, which only has 2 x SFF-8087 miniSAS ports, so I can't do dual link cabling. I had tried dual link cabling before by using a 9201-16i with 4 miniSAS ports, but it's a full-height card so I couldn't put the chassis cover back on. My testing showed that dual link made minimal difference at that time, but it may help now as I've reset and reconfigured the motherboard BIOS (details later in this post). I may give it another try after the rebuild completes, but if it works I'll need to purchase a PCIe slot extender so I can lay the 9201-16i horizontally in the chassis (allowing me to put the cover back on). Here's the board layout: Regardless, I wanted to try the other x8 slot. However, upon opening the chassis, I noticed that the HBA was actually installed in the x4 slot (physical slot 5, bus 3). OOOPS! Not sure how I did that, but I then moved the HBA to the 1st x8 slot (physical slot 6, bus 9). In this config unRAID no longer sees my drives. The HBA itself is seen and the link speed is x4. I captured diagnostics while in this config and will start comparing the syslogs. Here's the link speed info from the 1st x8 slot: Oct 31 16:52:56 AnimNAS kernel: pci 0000:09:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x8 link at 0000:00:03.0 (capable of 63.008 Gb/s with 8 GT/s x8 link) I then moved the HBA to the 2nd x8 slot (physical slot 4, bus 8) and decided to look through the motherboard BIOS. I went ahead and loaded the defaults for the BIOS. I then went through the BIOS re-configuring items that didn't need to be enabled, like the floppy controller, the serial ports and the option ROM support for the PCI-X slots. Yes, this motherboard is so old it has both PCIe and PCI-X slots. After re-configuring the BIOS I restarted with the HBA still in the 2nd x8 slot (slot 4). This time the logs show that it also negotiated a link speed of x4, but now the drives were seen and passed through to unRAID. I captured the diagnostics with this config also. Oct 31 17:03:16 AnimNAS kernel: pci 0000:08:00.0: 32.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x8 link at 0000:00:05.0 (capable of 63.008 Gb/s with 8 GT/s x8 link) So at least re-loading the BIOS defaults and the changes I made now have the HBA running at x4. I've gone ahead and started the array and am seeing about 90 MB/s for the rebuild. That has reduced the time for the rebuild to 2 days, so at least that's some movement forward. I'll spend some time comparing the syslogs to see why the drives aren't seen when in the other x8 slot, even though the HBA is (and also at an x4 link speed). Note: after reloading and reconfiguring the motherboard BIOS I did retry the HBA in slot 6. It still negotiated an x4 link speed, but the drives were not seen or passed through to unRAID. Hopefully comparing the syslogs will reveal why the drives aren't seen when the HBA is in slot 6.
  7. Yes, that's what I've determined as well. The motherboard (a Supermicro X8DTN+) has 2 x8 slots and 1 x4 slot. Only one of the x8 slots is connected to the 5520 chipset; the other one is connected to the ICH10 PCH. I will be moving the card to the other x8 slot once the parity rebuild completes. If I could pause the parity rebuild and have it start up again after a system shutdown/reboot, I'd go ahead and try this right away, but it appears that's not possible. I don't want to have to start the parity rebuild from scratch, especially if moving the card to the other slot doesn't change the performance. So I'll (im)patiently wait for the rest of the parity rebuild to complete... still estimating just over the 3 day mark.
  8. I'm looking at the TR or Epyc as I want to consolidate my VMs that currently run on VirtualBox and VMware on another system. An i7 just won't cut it for the CPU and RAM resources I want to allocate to the VMs. And as I'm tired of using older hardware, I'm willing to keep saving my cash so I can go with the new TR or Epyc generation coming in 2021. I'm also still keeping a dual Xeon platform on the drawing board. While AMD certainly has my interest right now, I've always been an Intel guy and usually quite happy with them. But my needs have changed, as they have for many people impacted by working from home during the pandemic. My backup unRAID on a 9-year-old i7-980 hexacore is still usable and actually gets better I/O performance than my media unRAID, at least until I get this link speed issue resolved. Good luck with your build!
  9. But that's only if I can get the card to negotiate the x8 link speed. If moving to the other slot or isolating it in its own IOMMU group doesn't work, I'd hate to waste the 3 days' worth of rebuild time and have to wait another 6 days. This likely won't be an issue when I convert the CSE-847 to a DAS and use a new outboard PC with a new HBA that has external connections. I'm looking at building either a Threadripper or an Epyc based solution, but I still need to save up the cash. Thanks for the feedback!
  10. So if my performance is unlikely to improve, I'd rather just leave the rebuild running for the remaining 3+ days it estimates before completion. I'll have to try the other slot after the rebuild completes. Certain things have improved by replacing the older Dell H310 with the 9217-8i. Namely, I notice that the boot is faster, as it takes less time for the newer SAS2308 chip to recognize and pass through the drives. While still using the Dell, I also saw some unusual errors when trying to rebuild my parity after upgrading to the 16TB disks. I decided that the $50 CAD for the 9217-8i was worth a try, so I ordered it and ran my unRAID without the parity drives until it arrived. It came in quickly and my 1st attempt to build the new dual parity set worked well with no errors reported. It was just incredibly slow, taking almost 6 days to complete.
  11. There are only 2 PCIe x8 slots on the Supermicro X8DTN+. The block diagram in the manual shows that the one I'm plugged into should be connected via the 5520. That's why I want to shut down/reboot - to move it to the other slot to see if that improves the link speed. I still would prefer to know if the shutdown/reboot will mean restarting the data drive rebuild from scratch or if a clean shutdown will let the system continue the rebuild from where it left off.
  12. I've got another thread here where I'm trying to resolve slow parity rebuild/sync speeds - estimated 6 days to do a 16TB data disk rebuild from parity. It looks like my LSI HBA is being throttled to a 2.5 GT/s x4 link (about 8 Gb/s instead of the 63 Gb/s the card is capable of) even though it's in an x8 slot. The message I see in the syslog says this: Oct 27 02:54:54 AnimNAS kernel: pci 0000:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4 link at 0000:00:1c.0 (capable of 63.008 Gb/s with 8 GT/s x8 link) I want to try 2 things to see if I can resolve this (a quick lspci check for the negotiated link is sketched after this list): 1) Isolate the LSI HBA in its own IOMMU group - it currently shares a group with 3 other items, one in particular that the syslog says is limiting the PCIe link speed. 2) Move the LSI HBA to another x8 slot on the motherboard. Both of these will require me to power down/reboot the system. The data drive rebuild has been running for 3 days and estimates another 3 days to go. I just want to know whether the rebuild will continue from where it left off after the reboot or whether I'll have to start the rebuild from scratch. I know that preclears can be resumed after a reboot, so I'm hoping a data drive rebuild from parity can also be resumed. Note that the 16TB drive was replacing an empty 10TB data drive. In the 3 days that the rebuild has been running, 2TB of data have been copied onto the drive, even though other drives still had plenty of free space. Not something I'm too concerned about, but is there a setting in unRAID to prevent data copies to a rebuilding drive? I suppose I could have gone into each of my shares and added the disk to the 'Excluded Disks' section... TIA!
  13. I've had a quick read through that thread and am at least somewhat relieved by the fact that this is affecting others as well. One notable thing is that others report a single core (thread) sitting at 100% utilization while a parity check/rebuild is underway. As you can see from the attached pic, that doesn't seem to be the case for my system. Any thoughts on why? Is it my larger number of drives (26 spinning, 2 SSDs) that's the bottleneck? I also looked through my syslog and see this: Oct 27 02:54:54 AnimNAS kernel: pci 0000:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 2.5 GT/s x4 link at 0000:00:1c.0 (capable of 63.008 Gb/s with 8 GT/s x8 link) Is this what you're referring to? I also note that the 0000:00:1c.0 device is in the same IOMMU group as the LSI card. Perhaps splitting the LSI card into its own IOMMU group might help (a command to list the IOMMU groups is sketched after this list)? The LSI HBA is installed in an x8 slot on the motherboard, but that message indicates it's only getting about 8 Gb/s over a 2.5 GT/s x4 link. Thanks again!
  14. I'm dealing with a severe slowdown for parity checks/rebuilds on my media unRAID system. It used to run around 80 - 90 MB/sec until I added some new Exos 16TB drives just over a month ago. Since then my parity check/rebuild speed is around 30 MB/sec. I'm in the process of adding 2 more 16TB drives to replace my old 10TB parity drives that I had re-used for the data array. More details are in the following thread. @JorgeB has been helping, and so far it looks like my HBA and older motherboard/CPU combo are the limiting factors. In any case, I've also seen that there is an update available for the parity check tuning plugin. I haven't upgraded it yet as I'm not sure it can be done while a parity rebuild is still in progress. I'm not using the increments feature of the plugin at this time, but I see references to the plugin in my syslog with the current rebuild underway. Can I safely update the plugin if the parity rebuild is paused? Or should I wait for the rebuild to finish? Thanks again for your plugin @itimpi.
  15. Thanks... diagnostics attached. It's been running the rebuild of the empty drive for almost 2 days already and showing about 4 days left. I probably should have attempted this sooner, as the initial parity sync with the new 16TB drives also took almost 6 days total, and the same for the 1st of these 2 new 16TB drives - about 6 days. Note that I have occasionally paused the parity sync/rebuild when I needed a little more write performance for new data being added to the server; I know that increases the time it takes to complete. As an aside, is there a tutorial on how to use the various files in the diagnostics? I've occasionally looked at them myself but have really only concentrated on the syslog. I've seen it mentioned by others that you look at several things initially, which is why you want the full diagnostics and not just the syslog. I'm feeling fairly confident with unRAID these days, so I'd like to learn how to do this kind of troubleshooting myself. Thanks again for the assistance! animnas-diagnostics-20201029-1457.zip
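
Sketch for the symlink idea in post 2 above: a minimal example, assuming the remote shares now live under /mnt/remotes/<share> and the old Docker/VM mappings pointed at /mnt/disks/<share>. The loop and paths are illustrative only, not part of Unassigned Devices itself; adjust to your own mounts.

    # Recreate the old /mnt/disks paths as symlinks to the new /mnt/remotes
    # mount points, so existing Docker/VM path mappings keep working.
    for share in /mnt/remotes/*; do
        [ -d "$share" ] || continue   # skip if nothing is mounted there
        name=$(basename "$share")
        # Only link if nothing already exists at the old location
        if [ ! -e "/mnt/disks/$name" ]; then
            ln -s "/mnt/remotes/$name" "/mnt/disks/$name"
        fi
    done

Run it after the array and the remote mounts come up, e.g. via the User Scripts plugin; it avoids editing every container template right away, though the paths should still be migrated properly at some point.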
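
On post 4: a rough way to avoid retyping the two commands is to keep them as a small script in the container's persistent /config/scripts folder and re-run it whenever the container is recreated. The script name and the assumption that get-pip.py already sits in /config/scripts are mine; this doesn't make the change truly permanent, it just makes re-applying it a one-liner.

    #!/bin/sh
    # /config/scripts/install-lxml.sh (hypothetical name)
    # Re-apply the pip + lxml install inside the binhex-nzbget container.
    # Assumes get-pip.py was already downloaded into /config/scripts.
    cd /config/scripts || exit 1
    # Install pip for the container's Python 3 if it isn't there yet
    if ! python -m pip --version >/dev/null 2>&1; then
        python get-pip.py
    fi
    # Install the lxml extension used by the post-processing script
    python -m pip install lxml

From the unRAID console it could then be run with something like 'docker exec binhex-nzbget sh /config/scripts/install-lxml.sh' (container name assumed to be binhex-nzbget).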
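
On the link-speed question in posts 12 and 13: the negotiated width and speed can be confirmed at any time from the unRAID console with lspci, rather than relying on the boot-time syslog message. The PCI address below is the one from the quoted log line (0000:03:00.0); substitute the HBA's current address if it has moved slots.

    # Compare what the card can do (LnkCap) with what it negotiated (LnkSta)
    lspci -s 03:00.0 -vv | grep -E 'LnkCap:|LnkSta:'

If LnkSta shows 2.5 GT/s x4 while LnkCap shows 8 GT/s x8, the slot (or the bridge it hangs off) is the limit, which matches the 'limited by ... at 0000:00:1c.0' part of the syslog message.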
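
And on the IOMMU grouping in post 13: this lists every PCI device by IOMMU group, which makes it easy to see what shares a group with the LSI card at 03:00.0. It only reports the grouping; changing it generally means moving the card behind a different root port or using unRAID's PCIe ACS override setting.

    # List all PCI devices grouped by IOMMU group
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        group=$(basename "$(dirname "$(dirname "$dev")")")
        echo "IOMMU group $group: $(lspci -nns "${dev##*/}")"
    done | sort -V

Note that IOMMU grouping mainly affects device isolation for passthrough; the PCIe link speed itself is set by the slot wiring, so regrouping alone may not change the 2.5 GT/s x4 limit.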