Everything posted by Michael_P

  1. That's not going to work. You need to stand up a tunnel that will route incoming requests through it to your LAN, which means you need a VPN server 'in the cloud' that's not behind CGNAT. Most people use a VPS provider like Linode or AWS, chuck WireGuard or Tailscale on it, and call it a day (rough sketch below). Google is your friend here, but if you don't know what you don't know, it can bite you in the ass
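     A rough sketch of the WireGuard route, assuming wg-quick on both ends - the hostname, tunnel addresses, interface name (eth0), and forwarded port (443) are all placeholders, and the keys are whatever wg genkey gives you. On the VPS (needs net.ipv4.ip_forward=1), /etc/wireguard/wg0.conf:

        [Interface]
        Address = 10.8.0.1/24
        ListenPort = 51820
        PrivateKey = <vps-private-key>
        # DNAT incoming 443 to the home peer, and masquerade so replies go back down the tunnel
        PostUp = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.8.0.2
        PostUp = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
        PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.8.0.2
        PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE

        [Peer]
        PublicKey = <home-public-key>
        AllowedIPs = 10.8.0.2/32

     And on the home/unraid end:

        [Interface]
        Address = 10.8.0.2/24
        PrivateKey = <home-private-key>

        [Peer]
        PublicKey = <vps-public-key>
        Endpoint = vps.example.com:51820
        AllowedIPs = 10.8.0.1/32
        PersistentKeepalive = 25

     The home end has to dial out and keep the tunnel alive (PersistentKeepalive), since nothing can reach it directly behind CGNAT. Tailscale does all of this plumbing for you if you'd rather not touch iptables.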
  2. You can see what's lying around from the command line with docker ps -a, and clean up with docker image prune -a (example below - note that -a removes all unused images, not just dangling ones). More info here: https://docs.docker.com/config/pruning/
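     For example:

        docker ps -a            # list every container, running or stopped
        docker image prune      # remove only dangling (untagged) images
        docker image prune -a   # remove ALL images not used by a container - it asks before deleting
        docker system df        # see how much space images, containers, and volumes are eating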
  3. Have you tried pruning the unused containers? Is it still listed in the docker tab if you enable the advanced view?
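     If it's stopped containers you want gone, from the command line that's roughly:

        docker ps -a --filter status=exited   # list stopped containers
        docker container prune                # remove all stopped containers (prompts before deleting)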
  4. Almost always related to cabling - re-seat it or change it out. And the errors don't go away, so if the count doesn't increase, ignore it once you've checked the cables
  5. Try adding this to the conf:

        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffering off;
        proxy_max_temp_file_size 16384m;
        client_max_body_size 0;
  6. Is qbittorrent on libtorrent v2? If so, try v1: https://forums.unraid.net/bug-reports/stable-releases/crashes-since-updating-to-v611x-for-qbittorrent-and-deluge-users-r2153/page/8/?tab=comments
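     If it's the linuxserver.io image, I believe they publish a libtorrent 1.x build under a separate tag - check their docs/tags page to confirm the exact name, but it should look something like:

        lscr.io/linuxserver/qbittorrent:libtorrentv1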
  7. It was back on the 30th of July, so if it's not recurring I'd just reboot and monitor it. It appears to be Frigate on your HA setup - either limit the memory it's allowed to have or set up the swapfile plugin
  8. Have you done what the release notes suggest? https://docs.unraid.net/unraid-os/release-notes/6.12.3 - "If Docker containers have issues starting after a while, and you are running Plex, go to your Plex Docker container settings, switch to advanced view, and add this to the Extra Params: --no-healthcheck"
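     Extra Params just gets tacked onto the docker run command unraid builds, so under the hood it's the equivalent of something like (image name is only an example):

        docker run -d --name plex --no-healthcheck lscr.io/linuxserver/plex
        # ...plus your usual ports, volumes, and env vars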
  9. You can limit them to a specific amount - for me, Plex goes wonky during scheduled maintenance and gobbles up all the RAMs, so I limit it to 4GB
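     In the container's Extra Params that cap looks like this (4g is just what fits my setup - size to taste):

        --memory=4g --memory-swap=4g   # cap RAM at 4GB; matching swap value stops it spilling into swap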
  10. Heimdall is what I use, but there's also homer, homarr, dashy, and more
  11. Shot in the dark, but are you running a torrent docker container using the buggy libtorrent version 2.x?
  12. Parity/RAID is strictly for high availability, so you can continue using the system while the failed disk(s) rebuild. There is absolutely zero data security, as the data can be deleted/corrupted/changed at any time. Figure out what's really important and 3-2-1 back up that.
  13. I use Nextcloud - sync from and to any device with very little setup wizardry. To back up, I use this script, and you can set it up to run automatically when a USB drive is plugged into the server
  14. Doesn't work like that - parity has no idea what a 'file' is and wouldn't 'know' there's a difference in bits until a scan is done. If the parity calculation differs, it'll tell you something doesn't add up, but it won't know which side is correct, so it'll just fail the disk. Same with RAID - it'll just mirror the corrupted data. Parity/RAID is for rebuilding a failed disk. If you need data integrity, you need checksums and backups
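     A bare-bones way to get those checksums without any plugins - generate them once, then verify on whatever schedule suits you (paths are placeholders):

        # create checksums for everything under a share
        find /mnt/user/photos -type f -exec sha256sum {} + > /mnt/user/photos.sha256

        # later: verify, printing only files that changed or went missing
        sha256sum --check --quiet /mnt/user/photos.sha256

     The Dynamix File Integrity plugin does much the same thing with a GUI, if you'd rather not script it.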
  15. You should post diagnostics, else any help is just reaching around in the dark
  16. CRC errors are almost always cable related - replace the cables
  17. I've just finished dusting it out, re-seating the RAM, and changing out a power and SAS cable - going to re-build parity and cross my fingers.
  18. Since moving to 6.11 it's happened each time a drive has dropped offline - this is the 3rd time. The other two times were during parity re-builds after upgrading a drive, when one of the others dropped (I rarely get a clean parity check/re-build on the first try without losing a drive, likely due to power, which I'm working on again..)
  19. After 2 drives dropped offline, a segfault occurred in emhttpd, which resulted in inaccurate health and status displays on the dashboard and also caused the nightly health report email to erroneously report that the array was healthy. There was no indication from the server that anything was wrong, save for the entries in the syslog, which I happened to read by chance. When this happened back on 6.11.1 during a parity re-build, the correct FAIL email was sent, so I'm not sure if this is specific to 6.12.3 urserver-diagnostics-20230802-0520.zip
  20. It does it every time a drive drops - here's one from the last time I was re-building a new drive and another in the array decided to take a short nap (same disk btw, I suspect a cable or power delivery issue). The GUI no longer reports any progress on the parity re-build; I just have to wait for it to finish and re-boot to get the dashboard back. This one at least sent the correct [FAIL] email, which leads me to question whether it's a 6.12 issue?
  21. They both dropped offline, and after a restart they're both disabled - diags from before and after attached: urserver-diagnostics-20230802-0520.zip urserver-diagnostics-20230802-0730.zip
  22. Looking at my nightly email from unraid about the array's health report, it shows as 'Passed', but I noticed that all the drives were spun up (except for one, anyway). So I logged into the server to see if they still were - they were all still spun up - and I clicked the button to spin them all down to see if they'd stay down, but all the icons do is spin. I tried each drive individually with no effect on any of them, so I opened up the log to see if there were any spindown commands noted. Turns out two of the drives were disabled, and the emhttpd process had segfaulted shortly after the drives showed as disconnected and re-connected. The dashboard continued to show no issues and send an ALL OK Bro! email..... If I hadn't noticed the drives spun up and investigated, it might have been a while before I had any reason to look at it... What's the point of the health email if it doesn't report the actual health of the array?