AgentXXL's Posts
  1. Next time it happens I will grab a screenshot of the unRAID results from top. I just remember it being something related to Docker. That's what prompted me to start disabling containers to narrow down which one(s) might be causing the issue. One by one I disabled them and the Firefox container was the last one running while the CPU utilization was extremely high. As soon as I stopped the Firefox container, CPU utilization returned to normal.
  2. When this happens, top on unRAID itself reports Docker using the CPU. I've verified it's something specific to the Firefox container, as I've had it happen when all other containers were stopped. I don't run any VMs on that system, so the VM service isn't even enabled. I have let it run a few times, and eventually the 'setDefaultBrowser' process completes in the container itself, after which my CPU utilization returns to normal. I don't see it every day... it's quite random. At least I know that if I'm patient, the process will eventually finish what it's doing and the Firefox container will become usable again.
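The stop-containers-one-by-one hunt described above can be shortened with `docker stats`, which takes a one-shot sample of per-container CPU (a sketch; no container names are assumed here):

```shell
#!/bin/bash
# One-shot sample of per-container CPU so the hungry container can be
# spotted without stopping anything. Prints the busiest container.
top_cpu_container() {
    docker stats --no-stream --format '{{.Name}} {{.CPUPerc}}' 2>/dev/null \
        | sort -k2 -rn | head -n1
}
top_cpu_container
```

Run from the unRAID terminal; `docker exec <name> top -b -n1` can then show which process inside that container is responsible.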
  3. Here's what I saw before replacing the USB key and doing the 'clean' re-install: Here's with the Passed Through and Disable Mount Button switches turned on: And here's with only the Passed Through switch enabled for the 1st drive in the ZFS pool: This is all purely cosmetic, as everything is operating as expected, other than the recurring banner mentioned in my previous message. That banner hasn't reappeared since my last clean install of the UD plugins with the folder removed from the USB key. Puzzling, but not creating any real usability issues. Let me know if you need any additional info. Thanks!
  4. I'm experiencing some odd behavior with the UD plugins. I was having array parity issues, and during the course of troubleshooting the unRAID USB key started throwing read errors. I shut down, pulled the USB key and did a full format of it on my Mac. It reported no issues during the format, but I decided to proactively replace it anyway. Alas, I made a big goof and forgot to copy the diagnostics off the USB key before formatting it.
For the new USB key I decided to do a 'clean' install of unRAID rather than restoring from backup. This was relatively painless but has left me with a couple of annoyances. The first is a persistent banner indicating I should reboot unRAID to update UD. I've rebooted numerous times, yet the banner keeps reappearing. I made sure to clear my browser cache and cookies each time it reappeared, but that made no difference. I then tried uninstalling the UD plugins, rebooting, and reinstalling. That seemed OK, but then the banner reappeared. So I uninstalled again, deleted the 'unassigned.devices' folder at /boot/config/plugins/, and rebooted.
After reinstalling, the banner seems to be gone, but now my ZFS pool devices all show a 'Mount' button instead of PASSED. Since I deleted the folder, the settings were lost, so I went to each drive and set the Passed Through switch, but on returning to the Main screen the UD section still shows a 'Mount' button. I then went to each drive and set the 'Disable Mount button' switch; they then showed up with the UNMOUNT button greyed out. When I stop the array, this changes to PASSED. It also shows the Used and Free space incorrectly for the ZFS drives. Is there something else I can try to return it to the simple PASSED button and not show the drives' Used and Free space? Thanks!
  5. As @Synd reported, I started seeing very slow parity writes recently. I usually saw them in the 80 - 110MB/s range, but with the recent shuffling of data drives and upgrading parity to 20TB drives, they're now in the 30 - 50MB/s range, occasionally dropping to less than 10MB/s - I suspect that happened when data was being written to the array. While trying to figure it out, I tried moving the parity drives to the motherboard SATA ports. I also tried moving my ZFS pool to motherboard SATA instead of the HBA. Alas, with all the changes, my USB boot drive started throwing read errors, and I managed to lose the diagnostics I had been grabbing throughout the changes: I had copied them to the USB key and forgot to copy them off before formatting it to see if it was OK. It did re-format with no errors (full format, not quick).
Yesterday (Dec 20th, 2022) I decided to proactively replace the USB key. I changed it to another of the Eluteng USB 3 to mSATA SSD adapters like the one I've been using on my 2nd unRAID system for the last 5-6 months. I decided to do a 'clean' rebuild of my main unRAID, so I didn't restore my backup immediately. Instead I manually re-installed all the needed plugins and, where required, copied the config/support files for each plugin/container from the backup. Doing this returned my parity build speed (on the new 20TB drives) to ~100MB/s with single parity, and ~70MB/s with dual parity. Also of note: the 2nd unRAID system was upgraded with new 16TB parity and data drives, but its parity build was in the normal 100MB/s range.
The main unRAID uses an LSI 9305-24i to connect to a Supermicro CSE-847 36-bay enclosure that I've converted for use as a DAS shelf. It's been using this new HBA for about 5 months. The DAS conversion of the CSE-847 has been in use for over 2 years, originally with two HBAs (one internal and one external), both since replaced by the single 9305-24i.
The 2nd unRAID uses an LSI 9201-16i in a Supermicro CSE-846 24-bay enclosure. Specs of both systems are in my signature. One thing I did notice on the main unRAID is that the parity build seems to be single-threaded and is maxing out that single thread at 100%. Multi-threading would likely make little difference, as only one thread at a time can be reading the sectors from all drives. I did not notice this behavior on the 2nd unRAID system. I have currently stopped/cancelled the single-disk parity build, as I realized I had some more data to move between servers and writes are much faster when no parity calculation is involved. Once this data move is complete, I will re-add a single 20TB parity drive and let it build. If any additional info is required, let me know. EDIT: I also have my disk config set to Reconstruct Write.
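The single-thread observation above can be confirmed without guessing process names by listing the busiest threads system-wide while the parity build runs (a sketch; the md thread's name varies by unRAID release, so none is assumed):

```shell
#!/bin/bash
# List the five busiest threads on the box; during a parity build, one
# kernel thread sitting near 100 %CPU confirms the single-thread limit.
ps -eLo tid,pcpu,comm --sort=-pcpu | head -n 5
```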
  6. For some reason the old issue of the Firefox container becoming unresponsive has flared up again. Multiple CPU cores/threads are pegged at 100%, but top only shows the usage in the attached pic. It's always the `--setDefaultBrowser` task that's using the most CPU. I'm still puzzled as to why it goes unresponsive, though, as total CPU load is still quite low. Any thoughts?
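As an interim workaround for the pegged `--setDefaultBrowser` task, the process can be killed inside the container rather than waiting it out or restarting the whole container (a sketch; the container name is an assumption):

```shell
#!/bin/bash
# Kill the runaway task inside the running container; pkill -f matches
# against the full command line, so the flag name alone is enough.
kill_runaway() {
    docker exec "$1" pkill -f 'setDefaultBrowser'
}
# kill_runaway FirefoxContainer   # container name is an assumption
```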
  7. Is there a way to have the drives in a ZFS pool ignore UDMA CRC errors? Something odd started a few days ago: I've been seeing UDMA CRC errors on both my array drives and my ZFS pool. The pool is more severely affected, and the errors put it into degraded mode until I run zpool clear <poolname>. As UDMA CRC errors are usually cable errors, I'll need to shut down and replace the cables from the HBA to the expanders. Alas, I'm in the midst of moving a large amount of data onto larger drives for the array, so shutting down now is inconvenient. I'm watching it and running the zpool clear command every 4 - 6 hrs to keep the pool from going into a degraded state. These are older 10TB drives, so I've got a plan to replace them with a pool of newer 16TB drives; I just need an interim workaround to try and prevent degraded mode. Suggestions? EDIT: I suppose I could just create a script that runs every 15 minutes to clear the errors being logged. I don't care that the count continues to increase, as UDMA CRC errors are usually fully recoverable. zpool status reports that they were 'unrecoverable', but also states there are no known data errors. The errors are definitely not caused by the drives themselves, as they're happening randomly across all of the drives in the ZFS pool.
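The every-15-minutes idea in the EDIT above could be sketched as a User Scripts entry like this (the pool name 'media' is an assumption); gating on DEGRADED keeps genuine data errors visible instead of clearing blindly:

```shell
#!/bin/bash
# Clear the CRC-triggered error counters only when the pool has
# actually dropped to DEGRADED, so other problems still surface.
clear_if_degraded() {
    local pool="$1"
    if zpool status "$pool" 2>/dev/null | grep -q 'DEGRADED'; then
        zpool clear "$pool"
    fi
}
clear_if_degraded media   # pool name is an assumption
```

Scheduled in the User Scripts plugin with a custom cron of `*/15 * * * *`.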
  8. I was prepping for a possible new unRAID server to add to my existing two. I'm thinking of migrating my TV shows to their own server, leaving only movies on the one that currently serves both. I started looking at the plugins I would want and noticed that I can no longer find the CA Appdata Backup/Restore v2 plugin on the Apps tab. Has it been deprecated or pulled for some reason? Is there a recommended alternative if it's no longer available? I can write my own scripts, but the plugin just does the job quite well.
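For the write-my-own-scripts route mentioned above, a minimal sketch of the backup half could look like this (paths are assumptions; containers should be stopped first so databases inside appdata are quiescent):

```shell
#!/bin/bash
# Tar up the appdata share into a dated archive on the array.
backup_appdata() {
    local src="$1" dest="$2"
    local stamp
    stamp=$(date +%Y-%m-%d)
    tar -czf "${dest}/appdata-${stamp}.tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
}
# backup_appdata /mnt/cache/appdata /mnt/user/backups   # paths are assumptions
```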
  9. Cleared the cache/cookies for both of my unRAID servers, but it's still empty. I suspect it's related to an issue with my ISP - Shaw in Canada has had a major outage since 11am MT Friday. It's causing all sorts of network slowdowns, and for some reason is even impacting my local network performance. If I disconnect the WAN cable from my modem and reboot my systems, my internal LAN is as fast as expected, but not when my LAN is connected to the internet. Thanks for posting that screenshot - it'll suffice until the ISP issues get sorted.
  10. Just curious if there are release notes for version 2022.08.07? Nothing shows up except an empty window titled 'Release Notes' with the 'Done' button.
  11. UPDATE: Rolled back to 4.4.2-2-01 and it's now moving completed downloads off the temporary download disk. I'll stick with this version until the next release. Start of Original Message: Is anyone else experiencing issues with qBt moving files from the 'incomplete' folder on my scratch disk to the 'Complete' share on unRAID? The 'Complete' share does have an SSD cache pool, and the folders get created with 'placeholder' files. When I do a binary compare from the scratch disk to the share, the share files are not equal. If I try to play one of the media files from the share, it won't play, but the copy on the scratch disk does. If I manually move the files from scratch to the share, qBt reports them as missing. This seems to have started after I upgraded to unRAID 6.11 rc2.
I've tried running the 'Docker Safe New Permissions' tool, but that didn't help. Even if I manually move the completed torrents from the scratch disk to the share, qBt still sees them as missing, even though the 'Save Path' shows the correct location. A 'Force Recheck' sets them to 0% completed and they start downloading again. If I copy the files from the share back to the scratch disk and do another 'Force Recheck', they are shown as complete again. I've opened the console for the container and verified that my path mountpoints all point to the correct locations. qBt reports them as saved in Complete for the Save Path even with the files in both locations, but the moment they are removed from the scratch disk, they show as missing files and start to re-download.
I've been running the 4.4.x builds since the big glitch with 4.4.0, where it lost settings and reset the default network to LAN instead of VPN. Once I corrected those issues, the 4.4 releases worked fine up until the unRAID upgrade. I also just noticed that I somehow switched to an old template using OpenVPN instead of Wireguard. I'll try re-configuring with Wireguard, but I doubt that will make a difference to the save path. Thoughts? Ideas on what to try next? Any help appreciated!
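Checking the mount points from the container console, as described above, can also be done from the host side; `docker inspect` prints the host-to-container path mappings in one go (a sketch; the container name is an assumption):

```shell
#!/bin/bash
# Print "host path -> container path" for every mount, so the qBt save
# path can be checked against what the container actually sees.
show_mounts() {
    docker inspect -f \
        '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' "$1"
}
# show_mounts binhex-qbittorrentvpn   # container name is an assumption
```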
  12. I'm also using Firefox and occasionally seeing the Resend requester in the browser when starting the array on both of my servers. Both are updated to 6.10.3 stable. The array starts fine, but when the page reloads at the end of the startup procedure, I get the requester in the attached pic. If I click Resend, it tries to start the array again - sometimes it locks up the webgui, sometimes it continues normally. If I click Cancel, it always starts fine. I've cleared browser cache and cookies and tried a private tab in Firefox, but it doesn't always happen, so it's been hard to validate. Any thoughts on other ways to diagnose the cause and apply a fix? I've been reminded to take a look at the browser console next time it occurs.
  13. I also checked the qB GitHub for info, but as you said, they haven't updated the changelog since January. There are some recent notes in the News section of their main website, the latest being on May 22, 2022 for that release. There have been two, possibly three, updates of your container since then. After the 4.4.0 issues, I always pause all torrents and shut down the container before any updates are applied. Then, after restarting the container, I check to make sure my settings are all OK. Since correcting the 4.4.0 issues, my settings have persisted and the VPN tunnel is still correctly set. Only then will I resume all torrents. I'm having no issues with seeding or grabbing, but my OCD just HATES seeing that there's an update available with no idea of what the update is for. For example, the latest update came in yesterday, but I haven't applied it yet as I want to watch the various forums and sites I visit to see if others note any new (or old) issues. The sheer number of open issues is certainly troubling, but for now I'll continue to use it until something drastic happens that makes me want to switch to something else.
  14. @binhex Where do we find the changelogs for each release of your container? After the fiasco with the 4.4.0 release, I tend to wait a week or two to see if others are reporting any issues. It would be nice to know what's changed with each update.
  15. UPDATE: OK, I just managed to get the plugin to work with the H11SSL-i. I had to change from Network to Localhost. At least now I'm not getting bombarded with credentials notifications. It would still be nice to see the limited data from the DAS controller as well, but that's pretty minor and my OCD will just have to learn to tolerate it. Start of Original Message: Is there any reason why this won't work with a Supermicro H11SSL-i? This is an AMD Epyc single-CPU motherboard. I tried using the settings for an X11, but it constantly errors with an invalid user name/password. I've configured the user name and password with the same credentials used for the IP login to the IPMI management page. I had been using the plugin with my Supermicro CSE-PTJBOD-CB3 DAS conversion board using the X9 settings, and it worked fine for that system. But now that I have the H11 board, it has more sensors that I'd prefer to monitor. Any suggestions? Also, is there any way to monitor two IPMI systems at the same time? Since the H11SSL-i and my CSE-PTJBOD-CB3 are both part of my media unRAID, it would be nice to monitor both. Thanks!
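On monitoring two IPMI systems at once: outside the plugin, ipmitool can poll both BMCs in one pass over the network (a sketch; the addresses and credentials below are assumptions):

```shell
#!/bin/bash
# Read the sensor data repository from a BMC over the lanplus interface.
poll_ipmi() {
    local host="$1" user="$2" pass="$3"
    ipmitool -I lanplus -H "$host" -U "$user" -P "$pass" sdr list
}
# Addresses are assumptions: H11SSL-i BMC and the JBOD board's IPMI.
for host in 192.168.1.50 192.168.1.51; do
    echo "== $host =="
    poll_ipmi "$host" admin 'secret' 2>/dev/null
done
```

The loop could run from the User Scripts plugin on a schedule, pushing the output into a notification or log.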