
Zuim

Members
  • Posts: 18
  • Joined
  • Last visited


Zuim's Achievements

Noob (1/14) · Reputation: 2

  1. I think there are several different issues with similar symptoms being discussed in this thread. The one I had (with the same log messages as the OP) was indeed "solved" by disabling automatic IPv6 DNS. I have not yet tested whether the latest update fixed the issue, since it has been stable this way for me. In my case, and for the others with that exact issue, only the web interface became unresponsive; SSH and the other services kept working normally. My issue also occurred with a completely fresh USB stick without any settings or storage set up. Since these issues seem similar and are quite severe, maybe the Unraid team should consider sorting them by type and providing commands to collect additional information (e.g. starting some services in verbose mode), since the logs in the diagnostics pack didn't contain much useful info when I looked into them.
  2. Description: The diagnostics zip files created using the "Anonymize Diagnostics" option in the UI still contain the public-facing IPv6 address, which can be used to roughly geolocate users or to gather lists of Unraid hosts on the internet, scan them for exposed services, and so on. dhcplog.txt contains a line like "eth0: adding address 2a02:9...." and syslog.txt contains "Registering new address record for ...". Since with IPv6 there is no NAT and every address directly identifies an individual device, this can be used to geolocate you to roughly a 10 km radius and to try accessing services running on the Unraid box if it is exposed to the internet (e.g. by enabling Connect?). For IPv4 this is much less of an issue, since pretty much everyone is behind NAT, so the logged addresses fall in the private ranges. See the screenshots for the full list of occurrences. I also checked a few diagnostics files from the forum to verify it's not just mine, and found the IPv6 address in the logs of other people who have it enabled.
     Reproduction:
     - Make a fresh 6.12.3 USB stick
     - Enable IPv6 in the IP settings
     - Create a diagnostic with anonymization turned on
     - Look in the files listed in the screenshots
     Suggested solution: Replace each unique IP address with a placeholder like IP1, IP2, etc., so they can still be told apart for debugging, but the actual addresses don't need to be included in the logs.
     tower-diagnostics-dhcp.zip
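The suggested placeholder approach could look something like the following minimal sketch. This is not Unraid's actual anonymizer; the candidate regex and helper name are my own illustration. It validates candidate tokens with Python's stdlib ipaddress module so timestamps like 01:54:54 are not mistaken for IPv6 addresses:

```python
import ipaddress
import re

# Rough superset of IPv4/IPv6 tokens; false positives are filtered
# below by actually parsing the candidate as an address.
CANDIDATE = re.compile(r"[0-9A-Fa-f:.]{7,}")

def anonymize(text: str) -> str:
    """Replace each unique IP address with a stable placeholder (IP1, IP2, ...)."""
    mapping: dict[str, str] = {}

    def repl(match: re.Match) -> str:
        token = match.group(0)
        core = token.rstrip(".")  # drop sentence punctuation after an address
        try:
            ipaddress.ip_address(core)  # validates both IPv4 and IPv6
        except ValueError:
            return token  # not an address (e.g. a timestamp): keep it
        if core not in mapping:
            mapping[core] = f"IP{len(mapping) + 1}"
        return mapping[core] + token[len(core):]

    return CANDIDATE.sub(repl, text)
```

Because the mapping is stable within one run, the same address always maps to the same placeholder, so log lines can still be correlated for debugging.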
  3. It has been stable for me over the last couple of days after just setting the IPv6 DNS to static servers, with a dynamic address. So the issue seems to be related to IPv6 DNS rather than IPv6 in general. I hope this helps, and I would really appreciate an official update on this.
  4. Are there any updates on the progress toward fixing this? Or would additional data/commands to gather information help? Also a bit off-topic, but maybe related, since it is another IPv6 issue: the management access menu shows the public IPv6 link in this format: http://[aaaa:aaaa:aaaa:aaaa::bbb]b/ when it obviously should be: http://[aaaa:aaaa:aaaa:aaaa::bbbb]/ So maybe some part of Unraid responsible for formatting IPv6 addresses is broken and gets reused?
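The misplaced closing bracket above looks like an off-by-one when wrapping the address. For reference, a minimal sketch of how such a URL could be built correctly (illustrative only, not Unraid's code; the function name is mine) is:

```python
import ipaddress

def management_url(host: str, scheme: str = "http") -> str:
    """Wrap IPv6 literals in brackets per RFC 3986; leave IPv4/hostnames as-is."""
    try:
        ip = ipaddress.ip_address(host)
        if ip.version == 6:
            # The whole literal goes inside the brackets, nothing after them.
            return f"{scheme}://[{host}]/"
    except ValueError:
        pass  # not an IP literal, treat as a hostname
    return f"{scheme}://{host}/"
```

Building the bracketed form from the full parsed address (rather than slicing strings) avoids the truncated-literal bug shown above.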
  5. Since setting the IPv6 config to static it has been stable! The NFS service also stopped attempting to start. Unfortunately I can't keep it static forever, since my provider semi-regularly changes the assigned prefix. I think the issue stanger89 has is different, since that did not occur for me; unlike those messages, when it crashes for me only the web UI is unresponsive. SSH and the other services still work after the web UI crashes.
  6. Yes, it comes back, and when setting it to auto again it goes away again. I attached a log where I switch from static to automatic at 0:47, which results in the nginx errors again, and then switch back at 0:49. nas-diagnostics-20230718-0050.zip
  7. In my other tests with the full OS I actually used the 1 Gbit NIC (PCI card) to connect to the router. In my usual setup the 2.5 Gbit one is in bridge mode to my PC, but I tested both individually with the other disabled. I don't know if you saw my last message yet, but it seems to be related to the DHCP function specifically.
  8. Yeah, the only reason I enabled SSH was to be able to use the diagnostics command and restart nginx without a reboot, since that would lose the syslog. The fresh install I just tested also did not have any storage devices set up, and I hadn't even activated the trial (so a bunch of stuff stayed disabled).
     --------------
     Big update! I just found that setting the address assignment to static for IPv6 stops the atd process from reloading nginx, and there are no more entries in the nginx error log. The address settings are unmodified from the suggested values. So this probably means there is an issue with the Unraid DHCP handling for IPv6.
  9. The file is attached. Btw, the web UI is accessible using both the IPv6 and the IPv4 address directly. That I can't get rpc/nfs not to start also seems very weird to me. Maybe I'll try making a completely clean install on another USB stick later to see if there is something going on with my install. servers.conf
     After testing the fresh install I just created with the USB Creator, I got the exact same issue! The only things I did were enable IPv4+IPv6 and SSH. diagnostics-fresh-install.zip
  10. So it just crashed in safe mode. I also disabled my VPN before booting to safe mode, to exclude that as well. The crash was at about 2:48 in the log. At about 2:32 I tried enabling and then disabling NFS again, since it also seems weird that the nfs/rpc stuff attempts to start even though it is disabled in the UI. The array was stopped during this. The nginx error log is also attached, since I think it is not included in the diag zip and it contains more lines than the syslog.
      -------
      I also just did another test with my second network interface disabled and only IPv6 enabled (instead of both). It had the same crash (see the attached file). There are also a bunch of these messages: emhttpd: error: get_limetech_time, 251: Connection timed out (110): -2 (7) which seems to be an unrelated bug, with some Unraid server not supporting IPv6. I'm out of ideas, since I've basically disabled everything possible and it still crashes. nas-diagnostics-20230717-0250.zip nginx_error.log nas-diagnostics-20230717-0412_only_ipv6.zip
  11. IPv6 connectivity definitely works in both directions. The NAS has internet access and can be reached from the local network over IPv6. The reason I want to use IPv6 in the first place is that it is more reliable and faster with my provider, since they use DS-Lite. Interesting that it is continuously changing IPs; I didn't see that in the log. I'll reboot it in safe mode with IPv6 on and will add the diagnostics if it crashes again.
  12. The Nginx Proxy Manager docker was just for other docker services, and I never used it to access the web UI. To be sure it is not docker-related, I disabled docker entirely in the UI and still had the same issue (see the attached diagnostics). That could be true; unfortunately it is an ISP-provided router with literally no IPv6 configuration options except a basic firewall. But this router worked with the same NAS that is now running Unraid back when I was running Debian on it. IPv6 also works fine with Android and Windows devices, as well as Debian on my notebook, so I don't think it is totally broken. Another thing I just noticed is that with IPv6 enabled there is always this /usr/sbin/atd running an nginx reload every three seconds. It stays there all the time, as in my screenshot (the rc.rpc update is not always there). Also, the task count roughly doubles when enabling IPv6: nas-diagnostics-20230716-2015.zip
  13. The attached diagnostic is with "IPv4 only" in the network settings: no issues and no logged nginx errors. I rebooted after changing to IPv4, and the system was running for about two hours. After I wrote the last message I also noticed that the IPv6 network went down (ping -6 gave an error). So it could be that the IPv6 connection breaks for some reason and that causes the nginx crash? Maybe something related to my network (a cheap ISP router that might not follow standards). I'll enable IPv6 again and try to find the cause of the connection issue when it happens again. -> After trying again I could not replicate this, so it was probably something else. The error from the screenshot above occurred again. nas-diagnostics-20230716-0607.zip
  14. After updating to 6.12.3 I thought I could enable IPv6 again, but after only a few minutes the web UI crashed again. The nginx log contains this error a bunch of times around when the crash occurred:
      2023/07/16 01:54:54 [alert] 5530#5530: shared memory zone "memstore" was locked by 4007
      ter process /usr/sbin/nginx -c /etc/nginx/nginx.conf: ./nchan-1.3.6/src/store/memory/memstore.c:705: nchan_store_init_worker: Assertion `procslot_found == 1' failed.
      2023/07/16 01:54:54 [alert] 5530#5530: worker process 4087 exited on signal 6
      Let me know if I can do anything else to help find the cause. nas-diagnostics-20230716-0206.zip
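For anyone wanting to check whether their nginx error log shows the same nchan "memstore" alerts and worker exits quoted above, here is a small illustrative helper (my own sketch, not part of Unraid or nginx) that pulls out the matching alert lines:

```python
import re

# nginx error-log alert lines look like:
#   2023/07/16 01:54:54 [alert] 5530#5530: <message>
ALERT = re.compile(r"^(?P<time>\S+ \S+) \[alert\] \d+#\d+: (?P<msg>.*)$")

def find_alerts(log_text: str) -> list[tuple[str, str]]:
    """Return (timestamp, message) for memstore-lock and worker-exit alerts."""
    hits = []
    for line in log_text.splitlines():
        m = ALERT.match(line)
        if m and ("memstore" in m["msg"] or "exited on signal" in m["msg"]):
            hits.append((m["time"], m["msg"]))
    return hits
```

Feeding it the contents of /var/log/nginx/error.log would show how often the crash recurs, which might help correlate it with the atd-driven reloads.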
  15. Since disabling IPv6 in the network settings (6 days ago) I haven't had another crash. Hopefully that helps narrow the issue down and fix it.