Everything posted by AgentXXL

  1. Just to help anyone else looking to do a Fractal Design Define 7XL build, here's the build info I used to fully equip it for maximum storage:
       • Solid black model with 6 HDD/SSD trays + 2 SSD brackets + 2 Multibrackets included
       • 5 x 2-pack HDD trays (10 additional trays)
       • 2 x 2-pack multibrackets (4 additional multibrackets)
     https://www.fractal-design.com/products/cases/define/define-7-xl/Black/
     https://www.fractal-design.com/products/accessories/mounting/hdd-kit-type-b-2-pack/black/
     https://www.fractal-design.com/products/accessories/mounti
  2. Yes, both methods work to prevent the DNS rebinding issue. I did add the custom option lines to that section on my pfSense box, but it didn't resolve the DNS rebinding issue, even after waiting an hour for things to clean up. Only when I added it to the section I mentioned did the provisioning work. What's unusual is that one of my unRAID systems is being seen as available (green in My Servers) whereas the other one is still red, yet the port forward rules are identical other than the port number. I haven't had time to play with it any more yet, but will soon. I'll t
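     For reference, the custom option lines I tried were the usual unbound rebinding whitelist, roughly:
         server:
         private-domain: "unraid.net"
     In my case that alone wasn't enough; the Domain Override I mentioned is what finally let provisioning work.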
  3. I installed the plugin on both of my servers. When I went to Management Access under Settings, my initial attempt to provision the Let's Encrypt certificates failed, indicating that the likely cause was my firewall's DNS rebinding protection. To resolve the DNS rebinding issue I went into my firewall config (pfSense) and, under DNS Resolver, added the unraid.net domain to the 'Domain Overrides' section. One thing I'm not sure about is where pfSense asks me to provide the DNS 'Lookup Server IP Address', so I just set it to a Cloudflare one for now, as shown in the attached pic. Cloudflare r
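     Under the hood that Domain Override amounts to roughly an unbound forward-zone like this, with 1.1.1.1 standing in for whichever Cloudflare (or other working upstream) address you pick as the lookup server:
         forward-zone:
             name: "unraid.net"
             forward-addr: 1.1.1.1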
  4. So still no luck... the 'Check' function fails on both of my servers. Tried the 'unraid-api restart' on both, rebooted, still no go. One of the servers shows up as available for remote access when I go to the url using my cell network, but it won't actually connect. After a reboot both servers show up in the My Servers section, one with a checkmark and the other with a red X. The one with the checkmark is the one that shows remote access as available, but won't connect. I'll potentially try the full reset method mentioned but I'll need to let my users finish their Plex sessions.
  5. That's why you use a complex password and, hopefully, eventually 2FA.
  6. OK, I've added the plugin to both of my servers, configured my firewall to port forward a custom port to each server, and added the 'unraid.net' domain to my DNS resolver. I was able to provision with Let's Encrypt and the flash drive backup activation appears successful. Alas, even after trying 'unraid-api restart' in a terminal on each server, I'm still unable to get remote access working. When I try the 'Check' function it fails. When attempting it from a phone on my cellular provider's network (WiFi turned off), I get a 'You do not have permissions to view this page' error for the htt
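     A quick sanity check from outside the LAN (e.g. a phone on cellular with WiFi off, or a remote shell) is to hit the forwarded port directly; the WAN address and port here are just placeholders:
         curl -skI https://203.0.113.10:12345
     If no HTTP headers come back at all, the problem is the port forward itself rather than the unraid-api side.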
  7. It only needs the bootable partition if your VM is set up to use it for booting. If it's just for data, that 200MB partition is not required.
  8. The disk was likely GPT formatted with the 200MB vfat partition left on it. This partition is normally only needed for bootable media, i.e. it's commonly used as the EFI partition on UEFI bootable devices. You can unmount the drive and then click on the red 'X' beside the vfat partition to remove it. To be entirely safe, you may want to back up your drive again, remove both partitions, re-format and then re-copy your required data. Note that when you reformat, if the disk is going to be for data storage only, you can change it from GPT to MBR so you'll only have 1 partition. You'll lik
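     For anyone more comfortable in a terminal than with the UD buttons, the same cleanup looks roughly like this; the device name is a placeholder, so double-check it with lsblk first, and note these commands wipe the disk:
         lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
         umount /dev/sdX1 /dev/sdX2           # the 200MB vfat partition and the data partition
         parted -s /dev/sdX mklabel msdos     # switch the table from GPT to MBR (destroys both partitions)
         parted -s /dev/sdX mkpart primary 1MiB 100%
         mkfs.xfs /dev/sdX1                   # or whatever filesystem you want for the data copy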
  9. Just a quick update - the 3rd 'false' parity check completed with 0 errors found, as I expected. I've increased the timeout to 120 seconds as @JorgeB suggested. I've also just successfully upgraded to 6.9.1 and hope that these 'false' unclean shutdowns won't re-occur. Also, just to confirm - 6.9.1 shows the correct colors for disk utilization thresholds on both the Dashboard and Main tabs. My OCD thanks you @limetech for correcting this. 🖖
  10. From what I know of how it works, it's based on the disk utilization thresholds, expressed as a percentage of capacity. The thresholds are set up in Disk Settings for both a warning level (color should be orange) and a critical level (red). Green should only be used when below the warning level threshold. As I prefer to fill my disks as completely as possible, my warning threshold is at 95% and my critical threshold is at 99%. This is just to alert me when I need to purchase more disks. Regardless, it's unusual that it's displaying correctly on the Main tab, but not on the Dashboard tab. A very minor i
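     As a rough terminal illustration of the same logic (95 here is just my warning value, substitute your own):
         df --output=pcent,target /mnt/disk[0-9]* | awk 'NR>1 { gsub("%",""); if ($1 >= 95) print $2" is at "$1"%" }'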
  11. That's something I'll try. It could indeed be related to a settings issue, just like the minor issue with the incorrect colors being shown for disk utilization thresholds. Although I just noticed a few minutes ago that the Dashboard tab has reverted to all drives showing as green, while the Main tab shows them correctly. Regardless, I'll try resetting the timeout for unclean shutdowns and hopefully, after this current parity check completes, I won't see another likely false one. Thanks!
  12. I assume that reply was meant for me, but yes, I had closed all open console sessions, and even took the proactive step of shutting down Docker containers manually before attempting the reboots. As I've got the pre- and post-diagnostics for this latest occurrence, I'll start doing some comparison today. I just find it odd that I've experienced 3 supposed unclean shutdowns since the upgrade to 6.9.0 stable. I had rebooted numerous times while using 6.9.0 RC2 but don't recall a single unclean shutdown occurring. And as reported, I believe they're completely false errors as all of my
  13. @limetech Just an update to the issue with disk usage thresholds - my media unRAID system has been 'corrected'. As mentioned previously, it was showing all disks as 'returned to normal utilization' and showing as 'green' after the upgrade to 6.9.0. As per the release notes, I tried numerous times to reset my thresholds in Disk Settings, along with a few reboots. Nothing had corrected it. After I was sure other aspects were working OK, I proceeded to add the 2 x 16TB new disks to the array. After the array started and the disks were formatted, disk utilization has now returned to us
  14. I've reported the same earlier in the thread regarding the disk usage thresholds incorrectly showing as green. Upon 1st boot after upgrading, there were notifications from every array drive stating that the drives had returned to normal utilization. There's a note in the release notes about this, stating that users not using the unRAID defaults will have to reconfigure them, but as you, I and others have found, resetting the disk usage thresholds in Disk Settings hasn't corrected the issue. I and others are also seeing the IPv6 messages, but they seem pretty innocuous so not a
  15. [EDIT: SOLVED] I've corrected my flash drive error described below. Looking through the syslog shows that there was a USB disconnect on the port/controller that my unRAID flash drive was plugged into. Not sure how/why, but I had plugged it into a USB 3 port. I did a shutdown, checked the flash drive on a Windows system and it was OK. After plugging it back into a USB 2 port, it booted successfully. The only issue created seems to be that the 2 x 16TB drives that reported as successfully precleared are not being seen as precleared. I guess I could go ahead and add them to the array, letting unR
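     For anyone wanting to check their own syslog for the same thing, a grep along these lines will show it (the exact kernel message text can vary):
         grep -i 'usb disconnect' /var/log/syslog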
  16. EDIT: SOLVED - I don't understand why, but it took 2 reboots and 3 changes/refreshes of the root password before I could again use ssh from my Mac. Regardless, it's working for now. Original Message Starts: I'm having this issue as well, but only on my backup unRAID system. Refreshing the root password hasn't worked. I'm using macOS Terminal like I always have, and under the 'New Remote Connection' dialog it shows I'm issuing the command: ssh -p 22 root@AnimDL.local After refreshing the password I also tried using the built-in shell access from t
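     Not what ended up fixing it here, but for anyone hitting the host-key variant of this problem (ssh refusing the connection after a server change), clearing the stale known_hosts entry is worth ruling out first:
         ssh-keygen -R AnimDL.local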
  17. Yes, I saw that in the release notes. I noted in my post that I have re-configured my thresholds but the issue persists. Perhaps I need to reboot, so I'll try that. EDIT: After another reboot it still shows the drives in green on the Dashboard tab, and red on the Main tab. Another possible issue - again, after the reboot it reports an unclean shutdown, yet no errors were noticed during the reboot on the monitor directly attached to the system. I've finished my other maintenance so I've started the system and am letting it proceed with the parity check. Something tells me it'll be fin
  18. I've upgraded both of my systems to 6.9.0 with no major issues, the backup going from 6.8.3 and the media server from 6.9 RC2. Really the only issue I'm seeing is the one @S80_UK and @doron reported. Both systems showed numerous notifications that 'drive utilization returned to normal'. On my backup unRAID I still have the UI set to the default color (grey for space used), but on my media unRAID I changed it to show the color based on utilization. The Dashboard tab shows all drives as green, yet they are VERY HIGH on utilization. The Main tab shows them all as red, as I expect.
  19. Yes, I took a look and the 1050TI/1660TI/Super are highly inflated in price right now. I'll just have to stick it out until I can get the 3060 or 3070. Which could be a long while - there are reports surfacing that there's a pretty high failure rate on the new 3000 series cards. Combined with the crypto-miners grabbing everything they can, it's likely to be a LONG wait... As an option to try and resolve this, I'm going to re-purpose my old i7-6700K microATX system and try running it with just Plex on a stripped down Linux distro. The i7-6700K does have an iGPU but I'll likely put t
  20. I was having numerous issues with my old dual Xeon (X5650) setup so I broke down and bought an Intel i9-10940X and an X299 motherboard (the only chipset that supports the LGA2066 socket). Alas, the i9-10940X does NOT have an iGPU so I can't use Intel Quick Sync.... 😪 My mistake, as I didn't look closely enough to confirm the i9-10940X had Quick Sync support. I chose to go with a newer Intel CPU primarily because I needed more PCIe lanes, which only the 'X' series Core i9 offer. I also went with Intel as I knew Quick Sync could help, but alas not in my case for the processor I chos
  21. EDIT/UPDATE: while my GTX 970 supports H.265/HEVC with YUV 4:2:0 chroma subsampling, it does NOT support H.265/HEVC content with a bit depth higher than 8. A lot of my 4K content is 10 bit, so that's why it's the CPU doing the transcode and not the GPU. Thanks again regardless for the work you've done to assist in enabling HW transcoding. The files I'm transcoding are 4K remuxes ripped to an MKV container from UHD Blu-ray. I already have the GPU Statistics plugin installed but it didn't show me anything about transcoding. Combined with my CPU usage of around 40% across all cores, I'm
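     An easy way to confirm which device is doing the work is to watch nvidia-smi in a terminal while a transcode runs; when NVDEC/NVENC are actually in use, the Plex transcoder shows up in nvidia-smi's process list:
         watch -n 2 nvidia-smi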
  22. EDIT: Found the post where you say you had to disable the container for 6.8.3 by starting from this last page of the topic and then going backwards. Might be worth adding a note to the 1st post in the topic so others will know that the Docker container is only available after upgrading to 6.9.0 beta 35 or later. I've downloaded the prebuilt 6.8.3 with Nvidia drivers from the 1st post, moved the files into place on my USB flash drive and rebooted. Alas, I'm now stuck as I can't seem to find the GPU UUID that's needed to add parameters in the Plex Docker container. EDIT#2: OK, decid
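     For anyone else stuck at the same step, once the Nvidia driver is loaded the GPU UUID can be listed from a terminal:
         nvidia-smi -L
     which prints each GPU with its UUID (a GPU-xxxxxxxx... string) that can then be pasted into the container's GPU device variable (NVIDIA_VISIBLE_DEVICES on the usual nvidia-docker setups).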
  23. I've got an old laptop drive that I can format as APFS and send to you. If you want to contact me by PM with the details on where to send it to, I'll add it to my trip to the post office this week.
  24. I've decided to try an alternate method, a variation on 'Replace multiple smaller disks with a single larger one' - https://wiki.unraid.net/Replacing_Multiple_Data_Drives_with_a_Single_Larger_Drive. I've installed the 2 x 16TB drives (successfully precleared) as UD-mounted drives. I formatted each with XFS and then copied the data from the respective 10TB drives that they're going to replace. That took about 18 hrs to do both drives as both 10TB drives were almost full. I'm now in the process of shutting down and removing the 2 x 10TB drives. Then when I power up (my array is set to NOT autostart),
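     If anyone wants to do the same disk-to-disk copies from a terminal, a plain rsync along these lines works; the paths are placeholders rather than my actual disk numbers:
         rsync -avX --progress /mnt/disk3/ /mnt/disks/NEW16TB/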
  25. No rush at all... I was reminded by some users from another discussion that since most of us pass a dedicated USB hub through to the VM, setting UD to passthrough for APFS devices should let the Mac VM see them with no issue. I'll have to try that but there's still the occasional case where it might be useful to have UD+ be able to mount the partition(s).