AgentXXL

Everything posted by AgentXXL

  1. I posted this over in the official unRAID Discord for you... this is just a pic of that.
  2. @dlandon I see the issue on Firefox 91.0.2 but it doesn't happen in Safari 14.1.2. It's really not a major issue so I wouldn't fret over it. Just more an FYI.
  3. @dlandon I've still been following this thread through email notifications about new posts, but I haven't seen mention of a very minor bug I've encountered. I'm running 6.10 RC1, and when you click the refresh icon in UD, all disks are displayed with their partitions shown, even though all of the disks have the 'Show Partitions' switch set to off. A reload of the tab in the web browser restores the display to no partitions shown for the disks that have it disabled. Again, very minor, but I'm sure you know that those of us with OCD can become focused on the smallest of issues. 🤣
  4. I don't see anything called 'max entity' but I also recently removed all my Wireguard configs to troubleshoot a remote connection issue. So to be safe, we should uninstall the Dynamix Wireguard plugin before upgrading? And be sure to make another Flash backup after removal.
  5. While I would really like to see ZFS implemented natively, I also want it to be available as a second pool/array. With 6.9.x allowing up to 35 pools in the Pool Devices section, that's likely the place I would be happy with configuring a ZFS pool. Of course if it was available as a second array beside the main parity protected array, that would work too. Use the slower unRAID pool for storage but use the ZFS pool for disk I/O intensive tasks like video editing/scrubbing.
  6. As @TexasUnraid said, the CLI tools have no issue. So for now I select the Krusader Docker container and open a console instead of the web GUI. From the console it's easy enough to use 'unrar e myfile.rar', which has worked perfectly every time I've needed it. I'm curious though - since it's the same unrar inside the Krusader container, you'd think running it from the console would hit the same memory issue as the web GUI, but it doesn't. At least there's a simple workaround (rough steps below) and it's rarely needed anyhow. I'll also occasionally use Windows and the old standby QuickPAR to try and repair a rar set if the unRAID version of the par tools didn't work.
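     For anyone wanting to do the same, the rough steps from a terminal look like this. The container name and paths are just examples from my setup - adjust them for whichever Krusader container and volume mappings you're using:

        # Open a shell inside the Krusader container (or use its Console option in the webgui)
        docker exec -it Krusader /bin/bash

        # Inside the container, work against the paths as they're mapped there
        cd /media/downloads
        unrar e myfile.rar                     # extract everything into the current directory
        unrar x myfile.rar /media/extracted/   # or 'x' to keep the archive's folder structure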
  7. It's interesting to note that after finding the memory fault mentioned above, these messages are no longer showing up in the syslog. Hopefully that will remain the case and/or the author of the GPU Stats plugin is eventually able to correct it. Thanks!
  8. As an electronics tech who has worked in essentially an IT role for most of my career, I often tell friends/family/clients to do a RAM test if you start seeing crashes/lockups that started 'out of the blue'. I followed that advice myself after my panic posting here and in a FB group and sure enough, the issue appears to be a RAM failure. Memtest86 (v5.01 on the unRAID flash device) found errors almost immediately. Through a process of elimination I've tracked it down to 1 of the 4 x 16GB sticks in my system. As my system doesn't support operation with 3 DIMMs, I've pulled one of the others so I'm only running 2 DIMMs. I can limp along with the 32GB of RAM just fine until I get the bad stick replaced (still under warranty). Regardless, I still have the error noted in my first message just as unRAID finishes its bootup. I'll hopefully find a cause for that as well. At least 'panic mode' is over... a Sunday night with no Plex would have left me with a few messages from friends and family saying they can't seem to watch their shows! 😁
  9. UPDATE: it just happened again a few moments ago. The 1st sign of it not responding was Plex not being able to find the server. I checked my firewall and again it was listed as offline. Tried a momentary press of the power button and waited 5 mins but no shutdown, so I was again forced to hard reset.

     UPDATE 2: It crashed/locked up again within minutes of the last hard reset. I've now started the system in safe mode with no plugins loaded. If I can't retrieve the logs from the Flash drive before it crashes again, I'll try exploring the Flash drive on another computer.

     UPDATE 3: After booting into safe mode I've encountered yet another lockup. It doesn't seem to be related to a plugin. I've disabled both the Docker and VM services. I was unable to retrieve the logs from the Flash drive before it locked up again. I'm going to start with Memtest86 and see if that shows any issues. I did set my syslog to mirror to the Flash device, so I'll check that shortly and see if I can find the logs (rough commands below). If found, I'll attach them to this message. 😢
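     In case it helps anyone in the same boat, these are the sorts of commands I'm planning to use - I'm going from memory on the exact filenames the syslog mirror creates under /boot/logs/, so check what's actually there on your flash drive:

        # See what the syslog mirror has written to the flash drive
        ls -lh /boot/logs/
        tail -n 100 /boot/logs/syslog-*    # last entries captured before the lockup

        # If the webgui is unreachable but SSH/console still works, grab a full
        # diagnostics zip from the command line (I believe it lands on the flash drive)
        diagnostics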
  10. [SOLVED] Cause was RAM failure. Removed the faulty RAM and so far the problem has not re-occurred.

      Last night I went to bed with the Plex client on my Nvidia Shield Pro playing music from my media server, AnimNAS (specs in signature). At some point the music stopped, but as I was drifting in and out of sleep I just left it until this morning. Upon investigation, the server appeared to have crashed hard. The WebGUI was not accessible and my Plex and other Docker containers were offline. I also couldn't ping the IP, and the system was shown as 'offline' in the pfSense firewall (running on a separate PC, not via VM).

      I went to the system and unfortunately the attached keyboard/mouse wouldn't 'wake' it. The attached monitor also didn't see any signal. This means I was unable to grab any diagnostics before proceeding with a shutdown and restart attempt. I tried a momentary press of the power switch to try and do a clean shutdown, but there was no response from the system. With no other options I then did a 'long press' of the power switch to shut the entire system down. I waited a minute before trying to restart it.

      It seemed to start normally and the unRAID boot process looked relatively normal from what I could see. All my drives are found and unRAID appears to have booted successfully, albeit with an obvious 'unclean shutdown detected'. I have my system set to NOT autostart the array upon reboot. During the restart I noticed some messages that I wasn't used to seeing during a reboot. It did appear to start somewhat normally as I was able to access the system from the webgui. Without starting the array, I went to Tools -> Diagnostics and was able to grab them; they are attached.

      At the final stage of the boot process, I saw some messages on the monitor attached to the system. The area in the red rectangle shows some of the new messages I hadn't seen before, with an error on the 2nd line. Before starting the array and the parity check (due to the unclean shutdown), I decided to try one more restart of the system. This restart had the same error as shown above, but of course was a proper restart so it cleared the requirement for a parity check. Regardless, I will start a manual parity check, but I first want to troubleshoot the errors.

      I also noticed that my system log is now filling up with messages about my Nvidia card; here's another pic showing the messages that are spamming the syslog:

      Any recommended steps/actions I should take before starting the array and the parity check? Ideally I'd like to fix any errors and the cause of the syslog spam. Any assistance is appreciated! Thanks in advance... Dale
  11. [SOLVED] I waited until the preclears were finished before restarting. Both preclears were successful and after the reboot the drives have been added to the array. As expected after a reboot, my /var/log folder is now only using 1% of available space. Still wish I could figure out why the log was considered full even though I had plenty of available RAM.

      Start of Original Message: I started a zero-only pass of preclear on two new 16TB drives yesterday evening. When I woke this morning I had a message from Fix Common Problems about an error with my setup. That error is a full /var/log folder. When I checked the unRAID log, it's being flooded with the following:

      May 12 04:40:56 AnimNAS preclear_disk_ZL2E3RCH[24492]: tput: unknown terminal "screen"
      May 12 04:40:56 AnimNAS preclear_disk_ZL2E3RCH[24492]: tput: unknown terminal "screen"
      May 12 04:40:56 AnimNAS preclear_disk_ZL2E3RCH[24492]: tput: unknown terminal "screen"
      May 12 04:40:56 AnimNAS preclear_disk_ZL2E3RCH[24492]: tput: unknown terminal "screen"
      May 12 04:40:57 AnimNAS preclear_disk_ZL2E3RCH[24492]: tput: unknown terminal "screen"
      May 12 04:40:57 AnimNAS preclear_disk_ZL2E3RCH[24492]: tput: unknown terminal "screen"
      May 12 04:40:57 AnimNAS preclear_disk_ZL2E3RCH[24492]: tput: unknown terminal "screen"

      The last log entry was from 4:40:57 this morning, when the system declared the log as 100% full. This doesn't make sense, as my understanding is that /var/log resides in RAM when unRAID is loaded. I have 64GB of RAM on the server and it's only showing 16% used, but the dashboard tab also shows the Log at 100% full. The preclear is 72% complete on both drives and they're still in their USB enclosures until the zero completes successfully. I know a restart will clear the logs, but I haven't done one yet as I'm unsure whether paused zero runs will survive the reboot. Am I safe to reboot after pausing the zero runs? Any idea why my syslog is full of those errors? TIA for any assistance... (A quick way to check the log space from a terminal is shown below.)
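      For anyone hitting the same thing: as far as I understand it, /var/log on unRAID is a small RAM-backed tmpfs (around 128MB by default, if I remember right), so it can hit 100% long before you run out of actual RAM. A couple of commands to confirm what's eating the space:

         # /var/log is its own small tmpfs, separate from total system RAM
         df -h /var/log              # shows the tmpfs size and how full it is
         du -sh /var/log/*           # shows which log file is consuming the space
         tail -n 20 /var/log/syslog  # confirm it's the tput messages flooding it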
  12. Just to help anyone else looking to do a Fractal Design Define 7XL build, here's the build info I used to fully equip it for maximum storage:

      Solid black model with 6 HDD/SSD trays + 2 SSD brackets + 2 Multibrackets included
      5 x 2-pack HDD trays (10 additional trays)
      2 x 2-pack multibrackets (4 additional multibrackets)

      https://www.fractal-design.com/products/cases/define/define-7-xl/Black/
      https://www.fractal-design.com/products/accessories/mounting/hdd-kit-type-b-2-pack/black/
      https://www.fractal-design.com/products/accessories/mounting/universal-multibracket-type-a-2-pack/black/

      I bought from a local Canadian reseller - Memory Express. Total came to $420 CAD before taxes when I purchased but prices do fluctuate.

      Main Case (solid black model): https://www.memoryexpress.com/Products/MX81096
      Extra trays: https://www.memoryexpress.com/Products/MX00113324
      Extra multibrackets: https://www.memoryexpress.com/Products/MX00113330

      You can see in the attached pic that you can mount 15 x 3.5" drives in trays (1 spare tray left over). 3 x 3.5" are mounted at the top upside down with multibrackets. I added 2 more 3.5" drives on multibrackets with stick on legs at the bottom of the motherboard compartment where they show 3 x 2.5". I used my last spare multibracket to mount another SSD upside down above the column of 11 x 3.5" drives - there's just enough space, but not if you want to do a water cooling radiator setup as shown in the manual.
  13. Yes, both methods work to prevent the DNS rebinding issue. I did add the custom option lines to that section on my pfSense box, but it didn't resolve the DNS rebinding issue, even after waiting an hour for things to clean up. Only when I added it to the section I mentioned did the provisioning work. What's unusual is that one of my unRAID systems is being seen as available (green in My Servers) whereas the other one is still red, yet the port forward rules are identical other than the port number. I haven't had time to play with it any more yet, but will soon. I'll try the hash url and report back later. Thanks!
  14. I installed the plugin on both of my servers. When I went to Management Access under Settings, my initial attempt to provision the Let's Encrypt certificates failed, indicating that it was likely my firewall's DNS rebinding protection. To resolve the DNS rebinding issue I went into my firewall config (pfSense) and under DNS Resolver I added the unraid.net domain to the 'Domain Overrides' section. One thing I'm not sure about is where pfSense asks me to provide the DNS 'Lookup Server IP Address', so I just set it to a Cloudflare one for now, as shown in the attached pic. Cloudflare resolves unraid.net so I suspect I'm correct. (The rough resolver config is sketched below.)

      Then, with the DNS rebinding check corrected, I was able to provision the Let's Encrypt cert for both servers. I then enabled remote access and the flash backup. Flash backup is working for both servers. I also chose custom ports for each server and added port forwarding rules for them to the firewall.

      When I attempt the Check function, both servers respond with the 'Oops This Unraid Server was unreachable from the outside' message. When I go to the My Servers Dashboard, one server shows that it has Remote Access, but choosing it ends up at a browser window/tab that eventually times out before displaying the unRAID webgui. The other unRAID server still shows with a red X and 'Access unavailable'. Not sure what to try next other than the full reset procedure, which unfortunately takes time to ensure reset of user account passwords. That and it's Saturday night, so the Plex server is a little busy with users. Any other suggestions?
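      For anyone else on pfSense, here's roughly what I ended up with under Services -> DNS Resolver. The resolver IP is just the Cloudflare one I happened to pick, and I'm going from memory on the custom-option syntax, so double-check it before relying on it:

         # Domain Override (this is what worked for me):
         #   Domain:                   unraid.net
         #   Lookup Server IP Address: 1.1.1.1   <- any public resolver that answers for unraid.net
         #
         # The custom option that's usually suggested instead (the 'Custom options' box).
         # It didn't do the trick on my box, but I believe it's the usual way to relax
         # rebinding protection for a single domain:
         server:
         private-domain: "unraid.net"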
  15. So still no luck... the 'Check' function fails on both of my servers. Tried the 'unraid-api restart' on both, rebooted, still no go. One of the servers shows up as available for remote access when I go to the url using my cell network, but it won't actually connect. After a reboot both servers show up in the My Servers section, one with a checkmark and the other with a red X. The one with the checkmark is the one that shows remote access as available, but won't connect. I'll potentially try the full reset method mentioned but I'll need to let my users finish their Plex sessions.
  16. That's why you use a complex password and hopefully eventual 2FA.
  17. OK, have added the plugin to both of my servers, configured my firewall to port forward a custom port to each server, and added the 'unraid.net' domain to my DNS resolver. I was able to provision with Let's Encrypt and the flash drive backup activation appears successful. Alas, even after trying 'unraid-api restart' in a terminal on each server, I'm still unable to get remote access working. When I try the 'Check' function it fails. When attempting it from a phone using my cellular provider's network (WiFi turned off), I get a 'You do not have permissions to view this page' error for the https://forums.unraid.net/my-servers/ URL. Suggestions? (A couple of quick sanity checks from a terminal are sketched below.)
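     The restart command is the one mentioned earlier in the thread, and the curl line is just a generic reachability test - substitute your own WAN IP and the custom port you forwarded; the values below are placeholders:

        # On the unRAID box itself: restart the My Servers API
        unraid-api restart

        # From outside the LAN (cellular hotspot, VPS, etc.): confirm the forwarded
        # port actually reaches the server. 203.0.113.10:12345 is a placeholder.
        curl -vk https://203.0.113.10:12345/
        # A TLS handshake / HTTP response means the port forward is working;
        # a timeout points at the firewall/NAT rule rather than the plugin.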
  18. It only needs the bootable partition if your VM is set up to use it for booting. If it's just for data, that 200MB partition is not required.
  19. The disk was likely GPT-formatted with the 200MB vfat partition left on it. This partition is normally only needed for bootable media, i.e. it's commonly used as the EFI partition on UEFI-bootable devices. You can unmount the drive and then click on the red 'X' beside the vfat partition to remove it. To be entirely safe, you may want to back up your drive again, remove both partitions, reformat, and then re-copy your required data. Note that when you reformat, if the disk is going to be for data storage only, you can change it from GPT to MBR so you'll only have 1 partition. You'll likely want to do the formatting using a partition tool on Windows, Disk Utility on a Mac, or Disks on a Linux system. (A rough command-line example for Linux is below.)
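      Purely as an illustration of the Linux route - the device name below is a placeholder, so triple-check it with lsblk first, because this wipes the whole disk:

         # Identify the disk first - /dev/sdX is a placeholder!
         lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT

         # Wipe the GPT layout (including the 200MB EFI partition), create a single
         # MBR partition spanning the disk, then format it with your preferred filesystem.
         # Note: MBR only addresses up to 2TB, so stay with GPT for larger disks.
         sudo parted /dev/sdX --script mklabel msdos mkpart primary 1MiB 100%
         sudo mkfs.xfs /dev/sdX1      # or mkfs.ntfs / mkfs.exfat, depending on where you'll use it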
  20. Just a quick update - the 3rd 'false' parity check completed with 0 errors found, as I expected. I've increased the timeout to 120 seconds as @JorgeB suggested. I've also just successfully upgraded to 6.9.1 and hope that these 'false' unclean shutdowns won't re-occur. Also, just to confirm - 6.9.1 shows the correct colors for disk utilization thresholds on both the Dashboard and Main tabs. My OCD thanks you @limetech for correcting this. 🖖
  21. From what I know of how it works, it's based on the disk utilization thresholds, expressed as a percentage of each disk's capacity. The thresholds are set up in Disk Settings for both a warning level (color should be orange) and a critical level (red). Green should only be used when below the warning level threshold. As I prefer to fill my disks as completely as possible, my warning threshold is at 95% and my critical threshold is at 99%. This is just to alert me when I need to purchase more disks. (A tiny sketch of the logic as I understand it is below.) Regardless, it's unusual that it's displaying correctly on the Main tab, but not on the Dashboard tab. A very minor issue though, compared to my other reported issues with the flash drive disconnect and the 3 x 'unclean shutdowns' that all appear to be false. I'm OK with tolerating the issues as 6.9.0 stable has only been out for just over 1 week, and no matter what, there are always issues found in stable releases. Once exposed to users with all levels of knowledge, and a huge increase in the diversity and config of users' hardware platforms, more bugs are likely to be found. But in my experience, the Limetech and community devs are outstanding in their support availability.
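      Here's a tiny bash sketch of the color logic as I understand it - 95/99 are my own thresholds, and the df loop is just for illustration, not how the webgui actually computes it:

         #!/bin/bash
         # Hypothetical illustration of the warning/critical utilization thresholds
         WARN=95   # warning threshold (%) - orange
         CRIT=99   # critical threshold (%) - red

         for disk in /mnt/disk*; do
             used=$(df --output=pcent "$disk" | tail -1 | tr -dc '0-9')
             if   [ "$used" -ge "$CRIT" ]; then color="red"
             elif [ "$used" -ge "$WARN" ]; then color="orange"
             else                               color="green"
             fi
             echo "$disk: ${used}% used -> $color"
         done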
  22. That's something I'll try. It could indeed be related to a settings issue, just like the minor issue with the incorrect colors being shown for disk utilization thresholds. Although I just noticed a few minutes ago that the Dashboard tab has reverted back to all drives showing as green, while the Main tab shows them correctly. Regardless, I'll try resetting the timeout for unclean shutdowns and hopefully after this current parity check completes, I won't see another likely false one. Thanks!
  23. I assume that reply was meant for me, but yes, I had closed all open console sessions, and even took the proactive step of shutting down Docker containers manually before attempting the reboots. As I've got the pre and post diagnostics for this latest occurrence, I'll start doing some comparisons today. I just find it odd that I've experienced 3 supposed unclean shutdowns since the upgrade to 6.9.0 stable. I had rebooted numerous times while using 6.9.0 RC2 but don't recall a single unclean shutdown occurring. And as reported, I believe they're completely false errors, as all of my parity checks in the last year have passed with no errors found. When I 1st started with unRAID in 2019 using my original storage chassis (an old Norco RPC-4020) I had numerous actual errors, but not a single one since I've moved to the Supermicro CSE-847.
  24. @limetech Just an update on the issue with disk usage thresholds - my media unRAID system has been 'corrected'. As mentioned previously, it was showing all disks as 'returned to normal utilization' and displaying them as green after the upgrade to 6.9.0. As per the release notes, I tried numerous times to reset my thresholds in Disk Settings, along with a few reboots. Nothing had corrected it. After I was sure other aspects were working OK, I proceeded to add the 2 new 16TB disks to the array. After the array started and the disks were formatted, disk utilization has now returned to using the proper colors. The 2 new disks are both green as they're empty, and the rest are accurately showing as red as I let them fill as completely as possible.

      Note that the preclear signature was valid even though the disks may not have fully completed the post-read process, due to my previously reported USB disconnection error with the unRAID flash drive. I know they passed the pre-read and zero 100% successfully, so I just took the chance on adding them to the array, expecting that unRAID might need to run its own clear process, but that wasn't the case. Note that my backup unRAID, which was upgraded from 6.8.3 to 6.9.0, still shows the incorrect colors for disk utilization thresholds. I'll be adding/replacing some disks on it next month, so I'll watch to see if that also corrects the utilization threshold issue.

      Alas, I've now got another potential issue... sigh. Before adding the new drives I shut down all active VMs and Docker containers and then attempted to stop the array. This hung up the system, which kept reporting 'retry unmounting user shares' at the bottom left corner of the unRAID webgui. I then grabbed diagnostics and attempted a reboot, which it did. But upon restart, the system has again reported an unclean shutdown even though the reboot itself showed no errors on the monitor directly attached to the unRAID system.

      This is the 3rd time since the upgrade to 6.9.0 that the system has reported an unclean shutdown. On all 3 occasions I've been able to watch the console output on the monitor directly attached to the unRAID system, with no noticeable errors during the reboot. On the 1st occasion (immediately after the upgrade from 6.9.0 RC2), I did an immediate reboot and that cleared the need for a parity check. The 2nd unclean shutdown was reported after another apparently clean reboot on Mar 2nd. On that occasion I let unRAID proceed with the parity check and it completed with 0 errors found. I'm letting it proceed again with this most recent 'unclean shutdown' error, but I suspect it's a false warning and no errors will be found.

      I've got the diagnostics created just before adding the 2 new drives, and also a diagnostics grab from just a few moments ago. I can provide them if you wish, but prefer to do so directly through PM. Let me know if you have any questions or want the diagnostics, but hopefully these 'unclean shutdowns' are just a small quirk with the stable 6.9.0 release.
  25. I've reported the same earlier in the thread regarding the disk usage thresholds incorrectly showing as green. Upon 1st boot after upgrading, there were notifications from every array drive stating that the drives had returned to normal utilization. There's a note in the release notes about this, stating that users not using the unRAID defaults will have to reconfigure them, but as you, I and others have found, resetting the disk usage thresholds in Disk Settings hasn't corrected the issue. I and others are also seeing the IPv6 messages, but they seem pretty innocuous, so not a big concern at this time. We'll get these small issues solved eventually.