Everything posted by plantsandbinary

  1. Sorry, I didn't see this reply. I'd kind of given up hope that someone could help. I checked this: https://www.dnsbl.info/dnsbl-database-check.php My IP isn't listed at all, so I don't think it's on any blacklist. The domain and TLD combo is brand-new: it's only been registered for one month and, as far as I can tell, has never been used or registered before. It's a pretty unique domain and TLD, too. I changed my router DNS back to <blank> and told it to use my ISP's DNS, and I still get the same problem. Do you have any other ideas? I was getting support from Cloudflare but they stopped responding... As I said, it only seems to be images and other things on the server, which is so weird. I'm using Cloudflare specifically because I like that they proxy my IP, so I don't need to give away my home IP. (There's a quick resolution check sketched below.)
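     One sanity check worth doing for a setup like this (a rough sketch, not from the thread: the hostname is a placeholder and the Cloudflare range list is deliberately partial, see https://www.cloudflare.com/ips/ for the full set): confirm the record really is proxied by checking whether the name resolves into Cloudflare's published ranges instead of straight to the home IP.

     ```python
     # Sketch: check whether a hostname resolves into Cloudflare's proxy ranges
     # or straight to an origin/home IP. The hostname below is a placeholder.
     import ipaddress
     import socket

     # A few well-known Cloudflare IPv4 ranges (NOT exhaustive; see
     # https://www.cloudflare.com/ips/ for the full, current list).
     CLOUDFLARE_RANGES = [
         ipaddress.ip_network(n)
         for n in ("104.16.0.0/13", "104.24.0.0/14", "172.64.0.0/13", "162.158.0.0/15")
     ]

     def is_proxied(hostname: str) -> bool:
         """True if every A record falls inside a known Cloudflare range."""
         _, _, addrs = socket.gethostbyname_ex(hostname)
         print(hostname, "->", addrs)
         return all(
             any(ipaddress.ip_address(a) in net for net in CLOUDFLARE_RANGES)
             for a in addrs
         )

     print("proxied:", is_proxied("deluge.mysite.tld"))  # placeholder hostname
     ```

     If this prints the home IP, the record isn't actually proxied (grey-cloud DNS-only mode) and the 522s are coming from somewhere else entirely.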
  2. I'm running an Unraid server with a couple of containers: Plex, Heimdall, Deluge, etc. I'm using the Nginxproxymanager container on Unraid to expose these to the web, handle SSL, and access them with my own domain. I have Cloudflare set up as my site's DNS provider. Whenever I try to browse to e.g. *https://deluge.mysite.tld* I get a whole bunch of 522 timeout errors. Most of the page resources (.css, .html, etc.) load fine immediately, but for some reason the **images** and other resources take forever to load, or never load at all. Here is an example: https://imgur.com/a/ONr17kC My lab setup is pretty simple: Router (AX88U) > Dual Gig Ethernet > Unraid Homelab > Containers (e.g. Deluge, Plex, etc.). Here are the router settings: https://imgur.com/a/yp72dc2 The only thing I've changed on my router is the DNS: I switched to an ad-blocking DNS just so I could block most ads on my home network without any extra fanciness. I have a pretty decent homelab and a 1-gig fiber connection, so it's weird that I'm getting these timeout errors. (A small timing sketch follows below.)
     Model: Custom
     M/B: ASRock X570M Pro4 Version - s/n: M80-XXXXXXXXXXXXX
     BIOS: American Megatrends Inc. Version P3.70. Dated: 02/23/2022
     CPU: AMD Ryzen 5 5600 6-Core @ 3500 MHz
     HVM: Enabled
     IOMMU: Enabled
     Cache: 384 KiB, 3 MB, 32 MB
     Memory: 16 GiB DDR4 Multi-bit ECC (max. installable capacity 128 GiB)
     Network: bond0: fault-tolerance (active-backup), mtu 1500
     Kernel: Linux 5.19.17-Unraid x86_64
     OpenSSL: 1.1.1s
     Uptime: 59 days, 3 hours, 20 minutes
     I'd appreciate any ideas on where I should start to debug this issue.
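     One way to narrow this down (a sketch; the asset URLs are placeholders for whatever the page actually loads): time each resource individually, once via the proxied hostname and once against the server's LAN IP. If the direct fetches are fast and only the proxied ones stall or 522, the bottleneck is between Cloudflare and the origin rather than the containers themselves.

     ```python
     # Sketch: time individual page resources to see which ones stall.
     # URLs are placeholders; swap in real asset paths from the browser's
     # network tab, and repeat with the LAN IP to compare direct vs. proxied.
     import time
     import urllib.request

     URLS = [
         "https://deluge.mysite.tld/",                # the page itself
         "https://deluge.mysite.tld/some/asset.css",  # placeholder asset
         "https://deluge.mysite.tld/some/image.png",  # placeholder asset
     ]

     for url in URLS:
         start = time.perf_counter()
         try:
             with urllib.request.urlopen(url, timeout=30) as resp:
                 body = resp.read()
                 print(f"{url}: HTTP {resp.status}, {len(body)} bytes "
                       f"in {time.perf_counter() - start:.2f}s")
         except Exception as exc:
             print(f"{url}: failed after {time.perf_counter() - start:.2f}s ({exc})")
     ```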
  3. Was this deleted from the app repository? It now says "Not available" when checking for updates.
  4. >AFAIK you can't use SSDs in the normal array as the TRIM function would break parity. This is a pretty huge setback, and I wonder how Unraid plans to mitigate it, because it will only get more pressing as SSDs become cheaper. (It makes sense when you think about it: TRIM lets the drive erase blocks behind the array's back, so the on-disk contents can change without parity ever being updated.) I guess I'll just look into having a much larger cache drive and keeping movies there that I need to watch before moving them onto the slower drives.
  5. I'm getting sick of the performance of my WD Reds. I have 4x 4TB of them (3 + parity) and the 130MB/s speeds just aren't cutting it. I've upgraded my local network and ethernet to 2.5GbE, and these drives, whilst insanely reliable, are now clearly the weakest link in my home server. I realise people will tell me to just use a bigger cache drive, but with the mover moving everything off the cache, I usually have to move files back onto it manually. I do realise I can also permanently set a specific share to "always" be on the cache, but for massive 80GB 4K movies that doesn't really seem like a suitable option for me either. I have a 2TB cache drive and I don't think it's worth buying an even larger one. The biggest problem I'm having is the lack of good I/O performance and read speeds. When it comes to watching movies via Plex etc., it's always complaining that the drives aren't fast enough, and with other things running (torrents downloading, etc.) the I/O performance is split between tasks. So I was thinking of grabbing 4x 2TB of these: https://store.patriotmemory.com/products/copy-of-new-patriot-p400-lite-m-2-2280-pcie-gen4-x4-solid-state-drive-1 They're 100€ a pop with free postage in my country. I also rationalised that I don't need as much storage; I'm only using about 1.2TB per drive at the moment, so they're largely under-utilised. They aren't the fastest SSDs, but they have pretty good reviews. I could get roughly 30MB/s faster reads and 90MB/s faster writes if I went with the Crucial MX500s, but they're also quite a bit more expensive per drive. Any arguments against them? I also checked the Crucial BX500, which is close in price, but I saw wildly varying views, with people saying they die very quickly. The Patriots seemed a bit slower overall but much longer-lasting (960TB of write endurance vs. around 640TB on most other cheap drives). Thoughts? Especially from anyone who is running all SSDs in their machine, they would be really appreciated. (Rough throughput numbers are sketched below.)
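     For what it's worth, the bottleneck reasoning checks out. A back-of-the-envelope sketch using the numbers from the post (the 85% usable-throughput factor and the 450 MB/s budget-SSD figure are assumptions, not measurements):

     ```python
     # Back-of-the-envelope: can the drive saturate a 2.5GbE link?
     # The 85% overhead factor and the SSD figure are rough assumptions.
     link_mbs = 2.5 * 1000 / 8 * 0.85    # ~265 MB/s usable on 2.5GbE
     drives = {"WD Red HDD": 130, "budget SATA-class SSD": 450}

     movie_gb = 80
     for name, speed in drives.items():
         effective = min(speed, link_mbs)            # slower of drive and link
         minutes = movie_gb * 1000 / effective / 60  # GB -> MB, MB/s -> minutes
         print(f"{name}: ~{effective:.0f} MB/s effective, "
               f"{minutes:.1f} min for an {movie_gb}GB movie")
     ```

     A single Red at 130 MB/s uses barely half of the 2.5GbE link, while even a budget SSD would be limited by the network rather than the disk.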
  6. In the last few weeks I've been experiencing extremely slow loading times on pages behind Nginx Proxy Manager. qBittorrent, rTorrent, Heimdall, Plex, etc., everything is really, really slow to load. It can take something like 2 minutes for a page to resolve. Some images don't load at all, and I basically have to refresh the page a dozen times before the site seems to "wake up" and actually load fully. I have an extremely fast connection, and these pages used to load almost instantly. I haven't changed any settings either. Could someone please give me some tips on how to debug this?
  7. I didn't get a manual with this router that explains port forwarding. Also, it says 0.0.0.0 - 255.255.255.255 are not valid IP addresses. Any more information?
  8. Can someone help with this? I am running the container in "host" mode, so there should not be any port forwarding needed? Yet I am getting these complaints in the logs:
     2023/01/02 10:48:55 portmapper: failed to get PCP mapping: PCP response not ok, code 8
     2023/01/02 10:48:55 magicsock: endpoints changed: 85.xxx.xxx.249:57588 (stun), [2001:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:f0e5]:51047 (stun), 172.17.0.1:54030 (local), 172.18.0.1:54030 (local), 192.168.1.50:54030 (local), [2001:999:484:47a3::19a]:54030 (local), [2001:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:2428]:54030 (local), [2001xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:f0e5]:54030 (local)
     2023/01/02 10:48:55 portmapper: failed to get PCP mapping: PCP response not ok, code 8
     2023/01/02 10:48:55 portmapper: PMP probe failed due result code: {OpCode:128 ResultCode:NetworkFailure SecondsSinceEpoch:2830 MappingValidSeconds:0 InternalPort:0 ExternalPort:0 PublicAddr:0.0.0.0}
     How do I fix this?
  9. I just got a new router. Any idea how I'm supposed to port forward NginxProxyManager? I'm confused about what it means by providing an external IP range, and all of the asterisked (*) options are required.
  10. P.S. If you check the Docker logs, you don't want to see this:
     2022-11-24T23:02:48.854 INF ../../nat/upnp/discover.go:58 > UPnP gateways detected: 0
     What you DO want to see is this:
     2022-11-24T23:04:31.264 INF ../../nat/upnp/discover.go:58 > UPnP gateways detected: 1
     2022-11-24T23:04:31.264 INF ../../nat/upnp/discover.go:60 > UPnP gateway detected map[deviceType:urn:schemas-upnp-org:device:InternetGatewayDevice:1 friendlyName:RT-AX88U-9B20 manufacturer:ASUSTeK Computer Inc. modelName:RT-AX88U modelNo:386.8 server:AsusWRT/4.1.51 UPnP/1.1 MiniUPnPd/2.3.0]
     That's my router; you want to see yours in there. EDIT: Mind you, I took my node offline very quickly after I saw how pitiful the earnings were and just how much of a waste of time it was. So I don't really recommend this container at all.
  11. Alright boiz. Here's how I went from "restricted cone" to "ALL", and here's the proof: That's the port range I chose. Pretty fair, 180 ports, nothing insane like the default option. I installed the docker container as "HOST" normally. I have everything on a specific network called "public" that I use to connect via the web thanks to Nginxproxymanager, but to begin with I kept it as Host since locust said it shouldn't require anything special. It's the only other container running the same way as Plex, though my Plex server lets me access it via Plex.tv so it doesn't need to be on my "bridge" network. The main thing that I think worked for me: in my router (Asus RT-AX88U) I changed my NAT from "Symmetrical" to "Full-Cone", which is blah blah less secure, but I don't give a rat's ass. I've already got my entire network locked down like a fortress anyway. I then went to port forward those ports above, 52820:53000 on UDP, in the WAN > Port Forwarding tab. I don't think the port forwarding step was necessary but I did it anyway. One way or another, the Myst container detects my router and immediately sets "fullcone" NAT type. That's literally all it took for me. I don't have any pfSense stuff or PiHole crap running on my network; this router is more than enough security for me, as is having isolated subnets. Anyway, I hope this helps someone. I think the most important thing is changing to full-cone NAT. (A quick way to verify the forwarded ports is sketched below.) EDIT: Trying later on my custom "public" network didn't work. I cannot get the ports open no matter what. Bridge mode doesn't work either, and neither does my default "br0" network. I can run the container on another internal IP (e.g. 192.168.1.67) and access it via that from my internal network, but not from outside my network. I'll try later with Tailscale and see what I can do. If you have a pfSense dohickey this might help you:
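     A simple way to verify the forwarded range actually reaches the box (a sketch, using one port from the range in the post): run a tiny UDP echo responder on the server, then send it a datagram from outside the LAN, e.g. from a phone on mobile data.

     ```python
     # Sketch: minimal UDP echo responder for checking a forwarded port.
     # Run on the server, then probe the port from OUTSIDE the network.
     import socket

     PORT = 52820  # one port from the forwarded range in the post

     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
     sock.bind(("0.0.0.0", PORT))
     print(f"listening on UDP {PORT}; send a datagram from outside the LAN")

     while True:
         data, addr = sock.recvfrom(1024)
         print(f"got {data!r} from {addr}")
         sock.sendto(b"pong: " + data, addr)  # reply so the sender sees it worked
     ```

     If the probe gets a pong via the public IP, the forward works and any remaining problem is in the container networking; if nothing arrives at all, suspect the router rule or CGNAT at the ISP.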
  12. I love your SearX docker mate. Works absolutely flawlessly! Any chance of you whipping up one for SearXNG? It's got a few more features that I'd like to make use of. Would be happy to throw a few bob your way. ❤️
  13. So randomly this QWANT error has gone away, and now it's been replaced with an error regarding Soundcloud:
     2022-10-26 11:39:45,726 ERROR:searx.engines.soundcloud: Fail to initialize
     Traceback (most recent call last):
       File "/usr/local/searxng/searx/network/__init__.py", line 96, in request
         return future.result(timeout)
       File "/usr/lib/python3.10/concurrent/futures/_base.py", line 448, in result
         raise TimeoutError()
     concurrent.futures._base.TimeoutError
     The above exception was the direct cause of the following exception:
     Traceback (most recent call last):
       File "/usr/local/searxng/searx/search/processors/abstract.py", line 75, in initialize
         self.engine.init(get_engine_from_settings(self.engine_name))
       File "/usr/local/searxng/searx/engines/soundcloud.py", line 69, in init
         guest_client_id = get_client_id()
       File "/usr/local/searxng/searx/engines/soundcloud.py", line 57, in get_client_id
         response = http_get(app_js_url)
       File "/usr/local/searxng/searx/network/__init__.py", line 165, in get
         return request('get', url, **kwargs)
       File "/usr/local/searxng/searx/network/__init__.py", line 98, in request
         raise httpx.TimeoutException('Timeout', request=None) from e
     httpx.TimeoutException: Timeout
     The same "Fail to initialize" traceback then repeats several more times (one of them failing on http_get("https://soundcloud.com") at soundcloud.py line 45 instead).
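     If SoundCloud isn't an engine you actually need, the simplest fix is to switch it off so it stops timing out at startup. A minimal settings.yml sketch (the key names follow SearXNG's documented engine settings; worth double-checking against the settings file bundled with your image):

     ```yaml
     # Sketch: disable the soundcloud engine so SearXNG stops trying (and
     # failing) to fetch its guest client id during initialization.
     engines:
       - name: soundcloud
         disabled: true
     ```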
  14. OK, so it seems that Startpage just sucks and makes searches take over 6 seconds, so I disabled it. I also disabled Brave because it has an average completion time of 3 seconds... which is too slow. With Google, Bing and DuckDuckGo the searches are much, much faster. Qwant has disappeared though, after I disabled the others, and now I get this error which says the config isn't set correctly for some reason. The container is also complaining that I am missing a line in my uwsgi.ini, which is not true. That line does exist, at the bottom.
     spawned uWSGI worker 11 (pid: 133, cores: 4)
     spawned 12 offload threads for uWSGI worker 10
     spawned uWSGI worker 12 (pid: 143, cores: 4)
     cache sweeper thread enabled
     spawned 12 offload threads for uWSGI worker 11
     spawned 12 offload threads for uWSGI worker 12
     2022-10-21 22:02:52,630 ERROR:searx.shared: uwsgi.ini configuration error, add this line to your uwsgi.ini
     cache2 = name=searxngcache,items=2000,blocks=2000,blocksize=4096,bitmap=1
     (the same configuration error repeats 12 times, once per worker)
     2022-10-21 22:02:53,370 ERROR:searx.engines: Missing engine config attribute: "qwant.qwant_categ"
     2022-10-21 22:02:53,372 ERROR:searx.engines: Missing engine config attribute: "qwant news.qwant_categ"
     2022-10-21 22:02:53,373 ERROR:searx.engines: Missing engine config attribute: "qwant images.qwant_categ"
     2022-10-21 22:02:53,374 ERROR:searx.engines: Missing engine config attribute: "qwant videos.qwant_categ"
     (this block of four "qwant*.qwant_categ" errors repeats for each worker)
     WSGI app 0 (mountpoint='') ready in 2 seconds on interpreter 0x155493bcfb50 pid: 72 (default app)
     (and the same "ready" line for the other 11 worker pids)
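     For reference, the line the error keeps asking for has to live in the section uwsgi actually parses, which is one plausible reason it can "exist at the bottom" and still not be picked up. A uwsgi.ini sketch (the placement is the assumption here; the cache2 line itself is verbatim from the log):

     ```ini
     # uwsgi.ini sketch: the cache2 line belongs under [uwsgi]. If it sits
     # after an unrelated section header, or in a file uwsgi never includes,
     # the cache is never created and searx.shared keeps logging the error.
     [uwsgi]
     cache2 = name=searxngcache,items=2000,blocks=2000,blocksize=4096,bitmap=1
     ```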
  15. I'm getting constant error messages from the SearXNG docker that all of the search providers I want to use are banning me. Basically I keep getting: BRAVE - Timeout (error), Qwant - Banned (error), Startpage - Timeout (error), etc. If I re-run the search or mash the enter button a few times, the search usually completes, but it's really unreliable. I am the ONLY one using this docker container, and I have it secured pretty heavily; it's not showing any other people connecting to it. Why do I keep getting timeout, ban, or captcha-failure errors from this container?
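     The "Timeout" half of this can sometimes be tuned away by giving slow engines more headroom; the "Banned"/captcha half is upstream rate-limiting, and no local knob fixes that. A settings.yml sketch (the keys are SearXNG's documented outgoing settings; the values are illustrative guesses):

     ```yaml
     # Sketch: raise the timeout budget for outgoing engine requests.
     # Values are guesses to illustrate the knobs, not recommendations.
     outgoing:
       request_timeout: 6.0       # default is around 3 seconds
       max_request_timeout: 15.0  # hard upper bound per request
     ```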
  16. Well, after a reboot, all of my images are now without links. How? So I fixed the above issue: it was a misconfigured PiHole. I just deleted the PiHole because I barely use it. However, that doesn't explain why my SearX container became orphaned. Thankfully, deleting and redownloading SearX made it work again, no problems.
  17. Try Googling a couple of ways to repair your superblock. The data is very likely still there, but you've lost your metadata and the system doesn't know what to do with the files (let alone what they even are). I had the same issue due to a freak error that occurred with my motherboard model, not due to an unclean shutdown during a rebuild. I'm just going to be really honest with you: if you cannot fix your superblocks, the only option you have is a professional data recovery service. An unclean shutdown during a rebuild is one of the worst things that can happen to your array, and you should always consult the pros before ever killing power. At an earlier stage you could at least have backed up your superblocks before killing the power. Oftentimes the best thing is to just stop the process, take a disk image, and then try again. It's a lot more difficult if you have totally borked disks, as you need to remember that during a rebuild everything is being overwritten with what's in parity. I "lost" 11TB of data to the same thing: irrecoverable superblocks. Those drives are in my cupboard now, waiting for the day I bother to fork over a few thousand dollars to have the data restored, or until some "magic" machine learning can fix it for me. The data is still there on the disks, but nothing can tell me what it even is, how to organize it, etc.; there's no metadata whatsoever. It's worse than an accidental delete, because I could recover from that. Good luck. P.S. Remember that every write operation potentially makes your data harder to get back, so pick your next steps incredibly carefully. P.P.S. I guess it doesn't need to be said, but Unraid is not a backup solution and you should always have a backup of the backup. The 3-2-1 method works.
  18. So I was downloading some torrents and the cache drive filled up (I forgot to disable some of them). Then:
     1. My docker containers all crashed due to no space.
     2. I stopped docker.
     3. Started the mover (but it immediately stops).
     4. Moved 190GB of files manually via the command line.
     5. Cache still shows 99% full.
     6. Started docker again.
     7. One missing/orphaned image (SearX) and 3 docker images that suddenly don't have links anymore.
     What happened? Normally, if the mover stops immediately it's because docker is using something, but if I stop docker it always works. Now the cache is still full and the mover is not working. There's nothing useful in the logs that I could see either. (A quick check for open files on the cache is sketched below.) tower-diagnostics-20220821-1925.zip
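     A quick way to see what's pinning the cache (a sketch; it assumes lsof is available and the pool is mounted at /mnt/cache, the Unraid default):

     ```python
     # Sketch: list processes holding files open under the cache mount.
     # Open files are the usual reason the mover exits immediately.
     import subprocess

     result = subprocess.run(
         ["lsof", "+D", "/mnt/cache"],  # +D recurses the whole directory tree
         capture_output=True,
         text=True,
     )
     lines = result.stdout.strip().splitlines()
     if len(lines) <= 1:  # header only (or nothing): no open files found
         print("nothing has files open on the cache")
     else:
         print("\n".join(lines))
     ```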
  19. How do I add multiple subdomains for the Cloudflare-DDNS container? @SelfHoster I've tried comma-separated ("www, subdomain") and one per line ("www" then "subdomain"), but it keeps telling me in the log "not found/failed to create subdomain".
  20. Fixed it by killing the container and reinstalling it entirely. Seems the GDPR acceptance expires after 1 year.
  21. Speedtest Tracker just spontaneously stopped working for me on the 20th of July. I haven't rebooted my machine, haven't installed anything, haven't changed anything. The graph just shows that after 3am on July 20th, every single speedtest has failed with "invalid date". I wiped the SQL database from the settings page, but this hasn't fixed the problem either. I can't even run a test; nothing happens. Has anyone else experienced this?
  22. Any idea how to fix this? I get this error every time after a reboot. Running Version: 6.10.3
     Model: Custom
     M/B: ASRock X570M Pro4 Version
     BIOS: American Megatrends Inc. Version P3.70. Dated: 02/23/2022
     CPU: AMD Ryzen 5 5600 6-Core @ 3500 MHz
     HVM: Enabled
     IOMMU: Enabled
     Cache: 384 KiB, 3 MB, 32 MB
     Memory: 8 GiB DDR4 Multi-bit ECC (max. installable capacity 128 GiB) Samsung M391A1G43EB1-CRC, 8 GiB DDR4 @ 2400 MT/s
     Network: bond0: fault-tolerance (active-backup), mtu 1500
     eth0: 1000 Mbps, full duplex, mtu 1500
     eth1: 1000 Mbps, full duplex, mtu 1500
     Kernel: Linux 5.15.46-Unraid x86_64
     OpenSSL: 1.1.1o