Everything posted by AgentXXL

  1. I'm experiencing some odd behavior with the UD plugins. I was having array parity issues, and during the course of troubleshooting the unRAID USB key started throwing read errors. I shut down, pulled the USB key, and did a full format of it on my Mac. It reported no issues during the format, but I decided to proactively replace it anyway. Alas, I made a big goof and forgot to copy the diagnostics off the USB key before formatting it. For the new USB key I decided to do a 'clean' install of unRAID rather than restoring from backup. This was relatively painless but has left me with a couple of annoyances. First, I'm getting a persistent banner indicating I should reboot unRAID to update UD. I've rebooted numerous times, yet the banner keeps re-appearing. I made sure to clear my browser cache and cookies each time it re-appeared, but that made no difference. I then tried uninstalling the UD plugins, rebooting, and re-installing. That seemed OK, but then the banner re-appeared. So I uninstalled again, deleted the 'unassigned.devices' folder at /boot/config/plugins/, and rebooted. After re-installing, the banner seems to be gone, but now my ZFS pool devices all show a 'Mount' button instead of PASSED. Since I deleted the folder, the plugin lost its settings, so I went to each drive and set the Passed Through switch, but back on the Main screen the UD section still shows a 'Mount' button. I then went to each drive and set the 'Disable Mount button' switch; now they show a greyed-out UNMOUNT button. When I stop the array, this changes to PASSED. The Used and Free space is also shown incorrectly for the ZFS drives. Is there something else I can try to get back to the simple PASSED status without the drives' Used and Free space showing? Thanks!
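     For anyone wanting to try the same reset, the sequence I used boils down to the commands below (back the folder up first if you want to restore settings later; the path is the one on my system):

     ```bash
     # Preserve the current Unassigned Devices settings, just in case
     cp -r /boot/config/plugins/unassigned.devices /boot/ud-settings.bak

     # Remove the plugin's config folder so it rebuilds cleanly, then reboot
     # and re-install from the Apps tab
     rm -rf /boot/config/plugins/unassigned.devices
     reboot
     ```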
  2. As @Synd reported, I started seeing very slow writes with parity recently. I usually saw them in the 80 - 110MB/s range, but with the recent shuffling of data drives and upgrading parity to 20TB drives, it is now in the 30 - 50MB/s range. It was occasionally dropping to less than 10MB/s - I suspect this happened when data was being written to the array. As I was trying to figure it out, I tried moving the parity drives to the motherboard SATA. I also tried moving my ZFS pool to motherboard SATA instead of using the HBA. Alas, with all the changes, my USB boot drive started throwing read errors, and I managed to lose the diagnostics I was grabbing throughout my changes. I had copied them to the USB and forgot to copy them off before I tried formatting the USB key to see if it was OK. It did re-format with no errors (full format, not quick). Yesterday (Dec 20th, 2022) I decided to proactively replace the USB key. I changed it to another of the Eluteng USB 3 to mSATA SSD adapters like I've been using on my 2nd unRAID system for the last 5-6 months. I decided to do a 'clean' rebuild of my main unRAID, so I didn't restore my backup immediately. Instead I manually re-installed all needed plugins and, when required, copied the config/support files for each plugin/container from the backup. Doing this returned my parity build speed (on the new 20TB drives) to ~100MB/s when using single parity, and ~70MB/s when doing dual parity. Also of note: the 2nd unRAID system got upgraded with new 16TB parity and data drives, but its parity build was in the normal 100MB/s range. The main unRAID is using an LSI 9305-24i to connect to a Supermicro CSE-847 36 bay enclosure that I've converted to use as a DAS shelf. It's been using this new HBA for about 5 months. The DAS conversion of the CSE-847 has been in use for over 2 years using two HBAs, one internal and one external, with both now replaced by the single 9305-24i. The 2nd unRAID is using an LSI 9201-16i in a Supermicro CSE-846 24 bay enclosure. Specs of both systems are in my signature. One thing I did notice on the main unRAID is that the parity build seems to be single-threaded and is maxing out that single thread at 100%. Multi-threading would likely make little difference, as only one thread at a time can be reading the sectors from all drives. I did not notice this behavior on the 2nd unRAID system. I have currently stopped/cancelled the single-disk parity build, as I realized I had some more data to move between servers and the writes are much faster when no parity calculation is involved. Once this data move is complete I will re-add a single 20TB parity drive and let it build. If any additional info is required, let me know. EDIT: I also have my disk config set to Reconstruct Write.
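     For anyone watching the same thing, these are the two commands I leaned on during the build (the md_write_method values are my understanding of the Disk Settings tunable, so treat this as a sketch; the CLI change also doesn't persist across reboots):

     ```bash
     # Watch per-thread CPU to confirm the parity build is saturating one thread
     top -H

     # Toggle reconstruct write ("turbo write") from the console:
     # 1 = reconstruct write, 0 = read/modify/write
     mdcmd set md_write_method 1
     ```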
  3. For some reason the old issue of the Firefox container becoming unresponsive has flared up again. Multiple CPU cores/threads are pegged at 100%, but top only shows the usage in the attached pic. It's always the ``--setDefaultBrowser`` task that's using the most CPU. I'm still puzzled as to why it goes unresponsive, though, as the total CPU load is still quite low. Any thoughts?
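     If anyone wants to compare, I've been grabbing the container's process list from the host like this (the container name is what I use locally, and it assumes the image ships procps):

     ```bash
     # Snapshot the top CPU consumers inside the Firefox container
     docker exec firefox ps aux --sort=-%cpu | head -n 10
     ```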
  4. Is there a way to have the drives in a ZFS pool ignore UDMA CRC errors? Something odd started a few days ago: I've been seeing UDMA CRC errors on both my array drives and my ZFS pool. The pool is more severely affected, and the errors put it into a degraded state until I run ``zpool clear <poolname>``. As UDMA CRC errors are usually cable errors, I'll need to shut down and replace the cables from the HBA to the expanders. Alas, I'm in the midst of moving a large amount of data onto larger drives for the array, so shutting down now is inconvenient. I'm watching it and running the ``zpool clear`` command every 4 - 6 hrs to keep the pool from going degraded. These are older 10TB drives, and I've got a plan to replace them with a pool of newer 16TB drives, but I just need an interim workaround to prevent degraded mode. Suggestions? EDIT: I suppose I could just create a script that runs every 15 minutes to clear the errors being logged, as sketched below. I don't care that the count continues to increase, as UDMA CRC errors are usually fully recoverable. ``zpool status`` reports them as 'unrecoverable', but also states there are no known data errors. The errors are definitely not the drives themselves, as they're happening randomly across all of the drives in the ZFS pool.
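     The interim script I have in mind is trivial - something like this on a 15-minute schedule via the User Scripts plugin ('tank' is a placeholder for the actual pool name):

     ```bash
     #!/bin/bash
     # Clear the accumulated CRC error counters so the pool doesn't trip
     # into DEGRADED while I wait for a maintenance window to swap cables
     zpool clear tank
     ```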
  5. I was prepping for a possible new unRAID server to add to my existing two. I'm thinking of migrating my TV shows to their own server, leaving only movies on the one that currently serves both. I started looking at the plugins I would want and noticed that I can no longer find the CA Appdata Backup/Restore v2 plugin on the Apps tab. Has it been deprecated or pulled for some reason? Is there a recommended alternative if it's no longer available? I can write my own scripts, but the plugin just does the job quite well.
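     If the plugin really is gone, my fallback would be a script along these lines (paths are placeholders for my setup; it stops the running containers first so the appdata is quiescent):

     ```bash
     #!/bin/bash
     # Minimal appdata backup sketch: stop running containers, tar appdata
     # to the array, then restart only the ones that were running
     RUNNING=$(docker ps -q)
     [ -n "$RUNNING" ] && docker stop $RUNNING
     tar -czf "/mnt/user/Backups/appdata-$(date +%Y%m%d).tar.gz" -C /mnt/user appdata
     [ -n "$RUNNING" ] && docker start $RUNNING
     ```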
  6. Cleared the cache/cookies for both of my unRAID servers, but it's still empty. I suspect it's related to an issue with my ISP - Shaw in Canada has had a major outage since 11am MT Friday. It's causing all sorts of network slowdowns and, for some reason, is even impacting my local network performance. If I disconnect the WAN cable from my modem and reboot my systems, my internal LAN is as fast as expected, but not when my LAN is connected to the internet. Thanks for posting that screenshot - that'll suffice until the ISP issues get sorted.
  7. Just curious if there are release notes for version 2022.08.07? Nothing shows up except an empty window titled 'Release Notes' with a 'Done' button.
  8. UPDATE: Rolled back to 4.4.2-2-01 and it's now moving completed downloads off the temporary download disk. I'll stick with this version until the next release. Start of Original Message: Is anyone else experiencing issues with qBt moving files from the 'incomplete' folder on the scratch disk to the 'Complete' share on unRAID? The 'Complete' share does have an SSD cache pool, and the folders get created with 'placeholder files'. When I do a binary compare from the scratch disk to the share, the share files are not shown as equal. If I try to play one of the media files from the share, it won't play, but the one on the scratch disk does. If I manually move the files from scratch to the share, then qBt reports them as missing. This seems to have started after I upgraded to unRAID 6.11 rc2. I've tried running the 'Docker Safe New Permissions' tool, but that didn't work. Even if I manually move the completed torrents from the scratch disk to the share, qBt still sees them as missing even though the 'Save Path' shows the correct location. A 'Force Recheck' sets them to 0% complete and they start downloading again. If I copy the files from the share back to the scratch disk and do another 'Force Recheck', they are shown as complete again. I've opened the console for the container and verified that my path mountpoints all point to the correct locations. qBt reports them as saved in 'Complete' for the Save Path, and even with the files in both locations, the moment they are removed from the scratch disk they show as missing files and start to re-download. I've been running the 4.4.x builds since the big glitch with 4.4.0 where it lost settings and reset the default network to LAN instead of VPN. Once I corrected those issues, the 4.4 releases worked fine up until the unRAID upgrade. I also just noticed that I somehow switched to an old template using OpenVPN instead of Wireguard. I'll try re-configuring with Wireguard, but I doubt that will make a difference to the save path. Thoughts? Ideas on what to try next? Any help appreciated!
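     For the binary compares, I've been using something like this from the unRAID console (paths are placeholders for my mount points):

     ```bash
     # Byte-for-byte compare of a single file on scratch vs. the share
     cmp /mnt/disks/scratch/incomplete/file.mkv /mnt/user/Complete/file.mkv

     # Checksum whole trees on both sides and diff the results
     diff <(cd /mnt/disks/scratch/incomplete && find . -type f -exec md5sum {} + | sort -k2) \
          <(cd /mnt/user/Complete && find . -type f -exec md5sum {} + | sort -k2)
     ```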
  9. I'm also using Firefox and occasionally see the Resend requestor in the browser when starting the array on both of my servers. Both are updated to 6.10.3 stable. The array starts fine, but when the page reloads at the end of the startup procedure, I get the requestor in the attached pic. If I click Resend, it tries to start the array again - sometimes that locks up the webgui, sometimes it continues normally. If I click Cancel, it always starts fine. I've cleared the browser cache and cookies and tried a private tab in Firefox, but it doesn't always happen, so it's been hard to validate. Any thoughts on other ways to diagnose the cause and apply a fix? I've been reminded to take a look at the browser console next time it occurs.
  10. I also checked the qB github for info, but as you said, they haven't updated the changelog since Jan. There are some recent notes in the News section of their main website, the latest being May 22, 2022 for the 4.4.3.1 release: https://www.qbittorrent.org/news.php There have been two, possibly three, updates of your container since then. After the 4.4.0 issues, I always pause all torrents and shut down the container before any updates are applied. Then, after restarting the container, I check to make sure my settings are all OK. Since correcting the 4.4.0 issues, my settings have stayed put and the VPN tunnel is still correctly set. Only then will I resume all torrents. I'm having no issues with seeding or grabbing, but my OCD just HATES seeing that there's an update available with no idea of what the update is for. For example, the last update came in yesterday but hasn't been applied yet, as I want to watch the various forums and sites I visit to see if others note any new (or old) issues. The sheer number of open issues is certainly troubling, but for now I'll just continue to use it until something drastic happens that makes me want to switch to something else.
  11. @binhex Where do we find the changelogs for each release of your container? After the fiasco with the 4.4.0 release, I tend to wait a week or two to see if others are reporting any issues. It would be nice to know what's changed with each update.
  12. UPDATE: OK, I just managed to get the plugin to work with the H11SSL-i - I had to change from Network to Localhost. At least now I'm not getting bombarded with credentials notifications. It would still be nice to see the limited data from the DAS controller as well, but that's pretty minor and my OCD will just have to learn to tolerate it. Start of Original Message: Is there any reason why this won't work with a Supermicro H11SSL-i? This is an AMD Epyc single-CPU motherboard. I tried using the settings for an X11, but it constantly errors with an invalid username/password. I've configured the username and password with the same credentials used for the IP login to the IPMI management page. I had been using the plugin with my Supermicro CSE-PTJBOD-CB3 DAS conversion board using the X9 settings, and it worked fine for that system. But now that I have the H11 board, it has more sensors that I'd prefer to monitor. Any suggestions? Also, is there any way to monitor two IPMI systems at the same time? Since the H11SSL-i and my CSE-PTJBOD-CB3 are both part of my media unRAID, it would be nice to monitor both. Thanks!
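      On the two-systems question, I can at least poll the DAS board manually with ipmitool while the plugin watches the H11 (host and credentials here are placeholders):

      ```bash
      # Read sensor values from a second IPMI controller over the LAN
      ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P 'password' sensor list
      ```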
  13. I'm not sure if you've made any progress, but the P16.12 zip for Windows/DOS has the required sas3flash utility to install it. There doesn't appear to be a similar image for Linux, so I'll be using this one on my new 9305-24i. Of course, that requires a Windows system or a bootable DOS USB key. Here's the link: https://docs.broadcom.com/docs/9305_24i_Pkg_P16.12_IT_FW_BIOS_for_MSDOS_Windows.zip
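      Once booted into the DOS/Windows environment, the flash itself is just a couple of sas3flash commands (the firmware/BIOS filenames are from the P16.12 package as best I recall - verify against the zip contents before flashing):

      ```bash
      # Confirm the 9305-24i is detected before touching anything
      sas3flash -list

      # Flash the IT-mode firmware and the boot BIOS
      sas3flash -o -f SAS9305_24i_IT.bin -b mptsas3.rom
      ```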
  14. It seems to have resolved itself somehow. I know MKVToolNix had an update, and since then the issue hasn't recurred for it, Filebot, or the MakeMKV container.
  15. That will clear everything for EVERY site you've visited. Under Firefox Settings, I go to Privacy & Security, then under Cookies and Site Data, choose Manage Data. There you can search for 'unraid.net' and your server's IP address and clear the cache and cookies for just those URLs/addresses.
  16. When you remove cache/cookies, make sure you do it for any unRAID URLs, including the IP address(es) of your server(s). I found that some settings were cached under the IP address of my server, with others under unraid.net. Clearing both reset any oddities I was seeing, but that was done before I upgraded to 6.10 stable. And like @bonienl, my Firefox v100.0.2 is working fine with no page irregularities.
  17. Wow... major improvement in search speed. Excellent work! Thanks again.
  18. Thank you! The change to the browse function has eliminated the error, and I really appreciate the addition of the disk location. Two minor issues: the first, very minor, is the File Manager 'toolbar' being at the bottom of the chosen folder - with a library as large as mine, I have to scroll all the way to the bottom of the page to reach it. The other is that the search now takes much longer than it did prior to the change to the browse function. Regardless, I appreciate your work and have sent you a donation to thank you for this.
  19. Just tried the new search function but I get this error: Let me know if you need diagnostics and I'll DM them to you. UPDATE: it looks like the initial folder load for my very large Movies share is what produces the error. I can still manually click the Search button and it successfully searches the folder, so it's still usable - see the photo below. One option I would like to see for the search function is for it to tell me which disk(s) in the array the file(s) reside on.
  20. @Djoss - I've had an unusual problem recur over the last 4 days. The containers I use from your repository are MakeMKV, MKVToolNix, TSMuxer and, of course, Filebot. Once a day for the last 4 days, something seems to corrupt the cache/cookies for all 4 containers simultaneously. They usually work the first time I launch them, and I commonly have MakeMKV, MKVToolNix and Filebot open at the same time. When I exit the noVNC session, the next time I try to connect I get a blank page. The logs show the containers are fully started and running, but the app window never opens - it just sits there on a blank browser tab with the noVNC header bar. I narrowed it down to something in the browser cache or cookies, as a private window in Firefox or another browser will open and display the apps. Unfortunately there's no easy way to delete the cache and cookies for individual Docker containers, just for the main unRAID IP they're hosted on. Clearing cache and cookies for the unRAID IP lets the containers work again, but it also causes loss of settings for all of my other containers. I thought it might be a full or corrupt docker.img file, but each time it happens I've found docker.img to be only 35 - 40% full. I've deleted and rebuilt docker.img twice, but the blank-page issue always returns. Any thoughts on a way to prevent this? Thanks!
  21. I've been able to set my ZFS dataset to enable NFS sharing, as per the above quote. I've verified with ``zfs get sharenfs <POOLNAME>/<DATASET>`` and it lists the share with the settings I used. I also made sure the mountpoint/dataset is owned by 99:100 (nobody:users), but I still can't connect to the share from a client. I can mount the ZFS dataset via Unassigned Devices on unRAID, but my Mac and Linux boxes won't mount it. The reported error isn't very revealing - 'operation not permitted'. My Google-fu isn't helping much. Any thoughts?
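      For context, this is roughly what I set and how I'm trying to mount it (pool/dataset names, subnet and paths are placeholders for mine):

      ```bash
      # On the unRAID server: export the dataset read/write to the local subnet
      zfs set sharenfs='rw=@192.168.1.0/24,no_root_squash' tank/media
      zfs get sharenfs tank/media

      # Ownership as unRAID expects: 99:100 (nobody:users)
      chown -R 99:100 /mnt/tank/media

      # On a Linux client
      mount -t nfs 192.168.1.10:/mnt/tank/media /mnt/media
      ```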
  22. Fair enough. Two thoughts, though... I don't specifically need a switch to disable viewing of the disks that are Passed Through, so maybe just move disks that are set to PASSED to the table, i.e. when you set a disk to Passed Through, it moves to the table instead of appearing in the main UD section. The other option would be a way to sort the disks so that all disks set to Passed Through are grouped together. Here's a screenshot to show you how messy it can get... my OCD is constantly reminding me of this, so any ideas/implementation would be appreciated!
  23. Care to elaborate? Is it not as easy as duplicating/modifying the code for historical devices? It would seem to me that a simple 'if set to passed through, move to the Passed Through section' would work. Of course it's your code, so maybe there's some other issue this would create that I'm not seeing. I'm sure users who set UD-mounted devices to Passed Through for VMs would appreciate this too.
  24. Thanks again for your plugins! They make working with non-array and non-pool disks quite simple. One thing I'd like to request is a switch to disable display of any device that is set to 'Passed Through', and/or the ability to show them in a separate section. You're doing this with shares and historical devices already, so hopefully adding a similar switch for passed-through devices is possible.
  25. FYI - I have been seeing this issue (the disappearing user shares) since upgrading to 6.10 RC4. I am using NFS for my shares, as the majority of my systems are Linux or Mac. Macs in particular have speed issues with SMB shares, but NFS works great. The gotcha is that I don't use tdarr... in fact, I don't use any of the *arr apps. I've grabbed diagnostics just now, as it just happened again. I will send them via PM if anyone wants to look at them, but I prefer not to post them here: although I use anonymize, going through the diagnostics reveals they still contain information that I consider private. I'll be taking my own stab at the diagnostics shortly, but I've disabled hard links as suggested and will see if that helps.