AgentXXL

Members
  • Content Count: 275
  • Joined
  • Last visited

Community Reputation: 17 Good

About AgentXXL

  • Rank: Member

  1. Just to help anyone else looking to do a Fractal Design Define 7XL build, here's the build info I used to fully equip it for maximum storage:
     Solid black model with 6 HDD/SSD trays + 2 SSD brackets + 2 Multibrackets included
     5 x 2-pack HDD trays (10 additional trays)
     2 x 2-pack multibrackets (4 additional multibrackets)
     https://www.fractal-design.com/products/cases/define/define-7-xl/Black/
     https://www.fractal-design.com/products/accessories/mounting/hdd-kit-type-b-2-pack/black/
     https://www.fractal-design.com/products/accessories/mounti
  2. Yes, both methods work to prevent the DNS rebinding issue. I did add the custom option lines to that section on my pfSense box (a sample of those lines is sketched after this list), but it didn't resolve the DNS rebinding issue, even after waiting an hour for things to clean up. Only when I added it to the section I mentioned did the provisioning work. What's unusual is that one of my unRAID systems is seen as available (green in My Servers) whereas the other one is still red, yet the port forward rules are identical other than the port number. I haven't had the time to play with it any more yet, but will soon. I'll t
  3. I installed the plugin on both of my servers. When I went to Management Access under Settings, my initial attempt to provision the Let's Encrypt certificates failed, indicating that the cause was likely my firewall's DNS rebinding protection. To resolve the DNS rebinding issue I went into my firewall config (pfSense) and, under DNS Resolver, added the unraid.net domain to the 'Domain Overrides' section. One thing I'm not sure about is where pfSense asks me to provide the DNS 'Lookup Server IP Address', so I just set it to a Cloudflare one for now, as shown in the attached pic. Cloudflare r
  4. So still no luck... the 'Check' function fails on both of my servers. I tried 'unraid-api restart' on both, rebooted, and still no go. One of the servers shows up as available for remote access when I go to the URL using my cell network, but it won't actually connect. After a reboot both servers show up in the My Servers section, one with a checkmark and the other with a red X. The one with the checkmark is the one that shows remote access as available, but it won't connect. I'll potentially try the full reset method mentioned, but I'll need to let my users finish their Plex sessions.
  5. That's why you use a complex password and hopefully eventual 2FA.
  6. OK, I've added the plugin to both of my servers, configured my firewall to port forward a custom port to each server, and added the 'unraid.net' domain to my DNS resolver. I was able to provision with Let's Encrypt and the flash drive backup activation appears successful. Alas, even after trying 'unraid-api restart' in a terminal on each server, I'm still unable to get remote access working. When I try the 'Check' function it fails. When attempting it from a phone using my cellular provider's network (WiFi turned off), I get a 'You do not have permissions to view this page' error for the htt
  7. It only needs the bootable partition if your VM is set up to use it for booting. If it's just for data, that 200MB partition is not required.
  8. The disk was likely GPT-formatted with the 200MB vfat partition left on it. This partition is normally only needed for bootable media, i.e. it's commonly used as the EFI partition on UEFI-bootable devices. You can unmount the drive and then click the red 'X' beside the vfat partition to remove it (a command-line alternative is sketched after this list). To be entirely safe, you may want to back up your drive again, remove both partitions, re-format and then re-copy your required data. Note that when you reformat, if the disk is going to be used for data storage only, you can change it from GPT to MBR so you'll only have 1 partition. You'll lik
  9. Just a quick update - the 3rd 'false' parity check completed with 0 errors found, as I expected. I've increased the timeout to 120 seconds as @JorgeB suggested. I've also just successfully upgraded to 6.9.1 and hope that these 'false' unclean shutdowns won't re-occur. Also, just to confirm - 6.9.1 shows the correct colors for disk utilization thresholds on both the Dashboard and Main tabs. My OCD thanks you @limetech for correcting this. 🖖
  10. From what I know of how it works, it's based on the disk utilization thresholds and is expressed as a percentage out of 100%. The thresholds are set up in Disk Settings for both a warning level (color should be orange) and a critical level (red); green should only be used when below the warning threshold (the comparison is sketched after this list). As I prefer to fill my disks as completely as possible, my warning threshold is at 95% and my critical threshold is at 99%. This is just to alert me when I need to purchase more disks. Regardless, it's unusual that it's displaying correctly on the Main tab, but not on the Dashboard tab. A very minor i
  11. That's something I'll try. It could indeed be related to a settings issue, just like the minor issue with the incorrect colors being shown for disk utilization thresholds. That said, I just noticed a few minutes ago that the Dashboard tab has reverted to showing all drives as green, while the Main tab shows them correctly. Regardless, I'll try resetting the timeout for unclean shutdowns, and hopefully after this current parity check completes I won't see another likely false one. Thanks!
  12. I assume that reply was meant for me, but yes, I had closed all open console sessions, and even took the proactive step of shutting down Docker containers manually before attempting the reboots. As I've got the pre and post diagnostics for this latest occurrence, I'll start doing some comparison today. I just find it odd that I've experienced 3 supposed unclean shutdowns since the upgrade to 6.9.0 stable. I had rebooted numerous times while using 6.9.0 RC2 but don't recall a single unclean shutdown occurring. And as reported, I believe they're completely false errors as all of my
  13. @limetech Just an update on the issue with disk usage thresholds - my media unRAID system has been 'corrected'. As mentioned previously, it was showing all disks as 'returned to normal utilization' and showing as 'green' after the upgrade to 6.9.0. As per the release notes, I tried numerous times to reset my thresholds in Disk Settings, along with a few reboots, but nothing corrected it. After I was sure other aspects were working OK, I proceeded to add the 2 x 16TB new disks to the array. After the array started and the disks were formatted, disk utilization has now returned to us
  14. I reported the same earlier in the thread regarding the disk usage thresholds incorrectly showing as green. Upon first boot after upgrading, there were notifications from every array drive stating that the drive had returned to normal utilization. There's a note in the release notes stating that users not using the unRAID defaults will have to reconfigure them, but as you, I and others have found, resetting the disk usage thresholds in Disk Settings hasn't corrected the issue. I and others are also seeing the IPv6 messages, but they seem pretty innocuous so not a
  15. [EDIT: SOLVED] I've corrected my flash drive error described below. Looking through the syslog showed that there was a USB disconnect on the port/controller that my unRAID flash drive was plugged into (a quick way to search for this is sketched after this list). Not sure how or why, but I had plugged it into a USB 3 port. I did a shutdown, checked the flash drive on a Windows system, and it was OK. After plugging it back into a USB 2 port, it booted successfully. The only issue created seems to be that the 2 x 16TB drives that reported as successfully precleared are not being seen as precleared. I guess I could go ahead and add them to the array, letting unR
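
For reference on the DNS rebinding fix discussed in posts 2 and 3 above: the 'custom option lines' people paste into pfSense's DNS Resolver (Services > DNS Resolver > Custom options) are usually Unbound's rebind-protection whitelist for the domain. A minimal sketch, assuming the default Unbound-based resolver; as noted in post 2, the Domain Overrides route may be what actually works in a given setup:

    # Unbound custom options: allow unraid.net (and its subdomains) to resolve to private IPs
    server:
    private-domain: "unraid.net"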
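
As a command-line alternative to the GUI removal described in post 8, the leftover vfat/EFI partition can also be deleted from an unRAID terminal. This is only a sketch: /dev/sdX and partition number 1 are placeholders for the actual device and partition, and the data should be backed up first as noted above.

    # Confirm the partition layout before changing anything (substitute your device for sdX)
    parted /dev/sdX print
    # Make sure the small vfat partition is not mounted (partition 1 is assumed here)
    umount /dev/sdX1
    # Delete the leftover 200MB vfat/EFI partition
    parted /dev/sdX rm 1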
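
To make the color logic from post 10 concrete: used space below the warning threshold should show green, at or above the warning threshold orange, and at or above the critical threshold red. A rough shell illustration of that comparison, assuming a mount point of /mnt/disk1 and the 95/99 thresholds mentioned above; this is not how unRAID itself implements it, just the same check:

    # Percentage used on one array disk (GNU df)
    used=$(df --output=pcent /mnt/disk1 | tail -1 | tr -dc '0-9')
    warn=95   # warning threshold -> orange
    crit=99   # critical threshold -> red
    if [ "$used" -ge "$crit" ]; then
        echo "critical (red)"
    elif [ "$used" -ge "$warn" ]; then
        echo "warning (orange)"
    else
        echo "normal (green)"
    fi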
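
For the USB disconnect in post 15, a quick way to search for similar events (a sketch, assuming unRAID's usual syslog location):

    # Look for USB disconnect events in the current log and the kernel ring buffer
    grep -i "usb disconnect" /var/log/syslog
    dmesg | grep -i "usb disconnect"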