pkoci

Members
  • Posts: 24
Everything posted by pkoci

  1. I have checked the uptime of the containers and found no crashes.
  2. Any idea what's going on?
     Mar 26 13:48:09 UNRAID dhcpcd[22784]: eth0: adding address fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2/64
     Mar 26 13:48:09 UNRAID avahi-daemon[23854]: Registering new address record for fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2 on eth0.*.
     Mar 26 13:48:09 UNRAID dhcpcd[22784]: eth0: adding route to fd11:8a45:f39:54d:6700::/64
     Mar 26 13:48:15 UNRAID avahi-daemon[23854]: Withdrawing address record for fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2 on eth0.
     Mar 26 14:18:11 UNRAID dhcpcd[22784]: eth0: expired address fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2/64
     Mar 26 14:18:11 UNRAID dhcpcd[22784]: eth0: part of a Router Advertisement expired
     Mar 26 14:18:11 UNRAID dhcpcd[22784]: eth0: deleting route to fd11:8a45:f39:54d:6700::/64
     Mar 26 15:23:19 UNRAID dhcpcd[22784]: eth0: adding address fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2/64
     Mar 26 15:23:19 UNRAID avahi-daemon[23854]: Registering new address record for fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2 on eth0.*.
     Mar 26 15:23:19 UNRAID dhcpcd[22784]: eth0: adding route to fd11:8a45:f39:54d:6700::/64
     Mar 26 15:23:25 UNRAID avahi-daemon[23854]: Withdrawing address record for fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2 on eth0.
     Mar 26 15:35:43 UNRAID avahi-daemon[23854]: Registering new address record for fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2 on eth0.*.
     Mar 26 15:44:01 UNRAID avahi-daemon[23854]: Withdrawing address record for fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2 on eth0.
     Mar 26 15:44:06 UNRAID avahi-daemon[23854]: Registering new address record for fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2 on eth0.*.
     Mar 26 15:45:19 UNRAID avahi-daemon[23854]: Withdrawing address record for fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2 on eth0.
     Mar 26 16:15:18 UNRAID dhcpcd[22784]: eth0: pid 0 deleted address fd11:8a45:f39:54d:2d0:b4ff:fe02:2cf2/64
     Mar 26 16:15:19 UNRAID dhcpcd[22784]: eth0: part of a Router Advertisement expired
     Mar 26 16:15:19 UNRAID dhcpcd[22784]: eth0: deleting route to fd11:8a45:f39:54d:6700::/64
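     For reference, since the logs suggest the ULA address is simply expiring with its Router Advertisement, the SLAAC lifetimes can be checked directly on the host (eth0 as in the logs above; only a quick diagnostic, not a fix):
       # show valid_lft/preferred_lft of each IPv6 address on eth0
       ip -6 addr show dev eth0
       # show the IPv6 routes and their expiry for the same interface
       ip -6 route show dev eth0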
  3. Hi, I have been testing IPv6 for some time and found some issues. Macvlan Docker containers cannot communicate with the host via IPv6, although this option is enabled in the Docker settings; it does work via IPv4 (see the sketch below). Additionally, once in a while a ULA address is assigned to the Docker network, see below:
     Mar 24 18:12:10 UNRAID avahi-daemon[9726]: Registering new address record for fd11:8a45:f39:54d:2d0:XXXX:2cf2 on eth0.*.
     Mar 24 18:13:09 UNRAID avahi-daemon[9726]: Withdrawing address record for fd11:8a45:f39:54d:2d0:XXXX:2cf2 on eth0.
     Furthermore, I cannot visit the web UI over HTTPS using IPv6 and myunraid.net (a link similar to the one below). Again, IPv4 works, and even HTTP over IPv6 works.
     https://2a03-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxx.mycert.myunraid.net:64443/
     Any ideas?
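     For the host-to-container path, the usual workaround for macvlan's host isolation is a macvlan shim interface on the host. This is only a sketch: the shim name (mac0), the parent (br0) and the containers' IPv6 subnet are all assumptions, and you may need to route individual container addresses rather than the whole /64:
       # create a shim on the same parent interface the Docker macvlan network uses
       ip link add mac0 link br0 type macvlan mode bridge
       ip link set mac0 up
       # route the containers' IPv6 subnet (assumed) via the shim instead of the parent
       ip -6 route add fd11:8a45:f39:54d::/64 dev mac0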
  4. Hi, I purchased an ASM1166 SATA expansion card, updated its firmware and plugged it into the second M.2 slot (I have a Samsung NVMe SSD in the first M.2 slot). Unfortunately, when ASPM is enabled in the BIOS settings of the PCIe port used by the ASM1166 controller, the Unraid system does not see any connected HDDs, but C8 states are reached. See the logs below. When I disable ASPM in the BIOS settings for that PCIe port, Unraid sees the connected HDDs, but only C3 states are reached. I thought the ASM1166 controller with the correct firmware supports ASPM and therefore higher C-states.
     Mar 13 22:48:22 UNRAID kernel: ata3: failed to resume link (SControl FFFFFFFF)
     Mar 13 22:48:22 UNRAID kernel: ata3: SATA link down (SStatus FFFFFFFF SControl FFFFFFFF)
     Mar 13 22:48:22 UNRAID kernel: ahci 0000:02:00.0: AHCI controller unavailable!
     Any ideas?
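     For what it's worth, the ASPM state the kernel actually negotiated for the controller can be checked like this (the 02:00.0 PCI address is taken from the log above; adjust if yours differs):
       # show link capabilities and the ASPM state in effect for the ASM1166
       lspci -s 02:00.0 -vv | grep -i aspm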
  5. Hi, any idea if an Intel i226-V prevents higher C-states? Yes, it does.
  6. Hi guys, could you please recommend any NVMe SSD that supports low-power C-states? It does not have to be Samsung. Thanks
  7. Jellyfin does not have native 2FA support, but there is a workaround using Authentik + Duo. A VPN is an option as well.
  8. Hi, I have recently changed my ISP; the new one supports IPv6 and has assigned me a /56 prefix. I would like to make my NAS, including Docker containers, fully accessible over both protocols. Is it possible for the containers to reach the host via IPv6 the way they can via IPv4? I would almost say it doesn't work via IPv6. By the way, is there any way to prevent the NAS from getting a different IPv6 address (not prefix) after each reboot, other than static network settings? What are your recommended practices for IPv6? Thank you
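     One candidate for the stable-address question, assuming the address comes from SLAAC: pinning the interface identifier with an address token, so the host keeps the same suffix even when the prefix changes (the token value and interface name below are placeholders):
       # keep a fixed interface identifier across reboots and prefix changes
       ip token set ::1234:5678 dev eth0
       # verify the token that is now in effect
       ip token get dev eth0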
  9. In this case, the page won't open using my domain, because Safari cannot establish a secure connection. Any idea? I know that AdGuard's DNS rewrite works, because my domain resolves to Nextcloud's private IP.
  10. You mean to set the DNS rewrite in AdGuard like this: *.mydomain.com => NPM private IP, instead of nextcloud.mydomain.com => Nextcloud private IP?
  11. ITX motherboards with the N100 and N305 CPUs have finally appeared on AliExpress. I mean the ones with two M.2 slots, 6x SATA ports and 4x 2.5G Ethernet ports. https://www.aliexpress.com/item/1005006313184714.html?spm=a2g0o.productlist.main.15.462b24ff0Zl0kN&algo_pvid=d5ac8f21-5945-49b7-9f6a-f4f7aa2fb1f8&algo_exp_id=d5ac8f21-5945-49b7-9f6a-f4f7aa2fb1f8-7&pdp_npi=4%40dis!CZK!10559.58!5279.79!!!457.38!!%40210388c917022402004374163e6632!12000036721962358!sea!CZ!0!AB&curPageLogUid=8NHNEKK0b4OW
  12. Hi, I just installed Nextcloud. For external access I use a reverse proxy (NPM) and the Cloudflare proxy. Can you please advise me how to set up Nextcloud so that clients on the local network connect to the server directly when the communication goes through my domain? As it is, all traffic goes through Cloudflare even though I am on the local network. I use 2x AdGuard Home as my DNS servers. I tried rewriting the domain to Nextcloud's private IP address via the DNS rewrite function. However, if I try to connect to the Nextcloud server on the local network via my domain in Safari (Mac or iOS), the page won't open, with the error that it can't establish a secure connection. An external connection through the domain, however, works fine. What's odd is that if I try to connect through my domain on the local network using a private tab in Safari, the Nextcloud page loads without a problem. Any idea how to solve this? (A quick way to check what LAN clients actually resolve is sketched below.)
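     To confirm what the rewrite actually returns from a LAN client (the domain and the AdGuard IP below are placeholders):
       # ask the AdGuard instance directly; this should return the private IP, not Cloudflare's
       dig +short nextcloud.mydomain.com @192.168.1.53
       # compare with whatever resolver the browser is using
       dig +short nextcloud.mydomain.com
     If the rewrite points at NPM rather than at Nextcloud itself, TLS still terminates at the proxy with a valid certificate, which may be exactly what Safari is missing when it is sent to Nextcloud's plain-HTTP port directly.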
  13. Hi, in Unraid versions 6.11 (kernel 5.19.17) and 6.12 (kernel 6.1.x) on the Gemini Lake CPU architecture, a faulty OpenCL implementation leads to GPU hangs when Plex attempts hardware-accelerated HDR tone mapping. Plex has confirmed that the issue stems from Unraid's implementation or from kernel 5.19.17+. In contrast, in Unraid version 6.10.3 (kernel 5.15), HW-accelerated HDR tone mapping works without any issues in Plex. Normal SDR transcoding works perfectly on all versions of Unraid.
     Oct 25 21:59:27 UNRAID kernel: i915 0000:00:02.0: [drm] Resetting rcs0 for preemption time out
     Oct 25 21:59:27 UNRAID kernel: i915 0000:00:02.0: [drm] Plex Transcoder[32177] context reset due to GPU hang
     Oct 25 21:59:27 UNRAID kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 9:1:e757fefe, in Plex Transcoder [32177]
     See the older logs (v6.11.5): unraid-diagnostics-20230807-2045 2.zip
     Thank you!
  14. Hi, what is the best approach to downgrade from v6.12.4 to 6.10.3? It seems there is an OpenCL bug in both 6.12 and 6.11 (probably in the kernel implementation). It causes a GPU hang when attempting HW HDR tone mapping with Plex.
     Oct 25 21:59:27 UNRAID kernel: i915 0000:00:02.0: [drm] Resetting rcs0 for preemption time out
     Oct 25 21:59:27 UNRAID kernel: i915 0000:00:02.0: [drm] Plex Transcoder[32177] context reset due to GPU hang
     Oct 25 21:59:27 UNRAID kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 9:1:e757fefe, in Plex Transcoder [32177]
     Thanks for any advice
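     A minimal sketch of the manual route, assuming you have the 6.10.3 release zip on hand and the flash drive mounted at /boot (the zip file name below is an assumption; back up the flash first):
       # back up the current kernel/rootfs images on the flash drive
       mkdir -p /boot/previous && cp /boot/bz* /boot/previous/
       # overwrite them with the images from the 6.10.3 release zip, then reboot
       unzip -o unRAIDServer-6.10.3-x86_64.zip 'bz*' -d /boot
       reboot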
  15. Any idea? See the diagnostics unraid-diagnostics-20230807-2045.zip
  16. There's one more thing. Every time the system is rebooted, this shows up in the system log:
     Jun 19 19:15:40 UNRAID kernel: i915 0000:00:02.0: [drm] failed to retrieve link info, disabling eDP
     Any ideas?
  17. Hi, I'm using Unraid 6.11.5 with an ASRock J4105 motherboard (Gemini Lake), and I've been having a problem with the latest versions of PMS for a long time. Specifically, if it attempts HW HDR tone mapping, it fails every time and then falls back to software transcoding. At that point the following appears in the Unraid system log:
     Jun 17 15:23:34 UNRAID kernel: i915 0000:00:02.0: [drm] Resetting rcs0 for preemption time out
     Jun 17 15:23:34 UNRAID kernel: i915 0000:00:02.0: [drm] Plex Transcoder[23298] context reset due to GPU hang
     Jun 17 15:23:34 UNRAID kernel: i915 0000:00:02.0: [drm] GPU HANG: ecode 9:1:e757fefe, in Plex Transcoder [23298]
     HW transcoding works seamlessly if HDR tone mapping is turned off. Is it possible that this is caused by buggy Intel drivers that Plex needs for HW HDR tone mapping? Or is the bug on the Unraid side? Plex staff confirmed that HW HDR tone mapping works on systems with the same or a similar CPU (Gemini Lake). Any ideas?
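     Two checks that might help narrow it down, run on the host while reproducing the hang (device paths assumed):
       # confirm the iGPU render node exists and is the one passed to the Plex container
       ls -l /dev/dri
       # follow kernel messages live to catch the i915 reset the moment it happens
       dmesg -w | grep -i i915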
  18. I have a Cooler Master Elite 110 case with a single 120 mm fan.
  19. I have an ASRock J4105 motherboard with 2 SATA SSDs, 3 IronWolf 4 TB HDDs, an M.2-to-SATA expansion card, a Seasonic Gold 500 W ATX PSU, and 16 GB RAM. The system idles at 16 W and never exceeds 40 W.
  20. Forget a dedicated GPU and get a modern Intel CPU with an integrated GPU.
  21. A single download. Before I switched to Unraid, my NAS ran xpenology (DSM) with RAID 5 and was able to handle gigabit downloads.
  22. Hi, I am running version 6.11.2 on an ASRock J4105 with 2 SATA SSDs as a cache pool, 3 HDDs, and linuxserver.io qBittorrent on a gigabit internet connection. The download location is set to the cache pool. Unfortunately, downloads cannot exceed 40 - 50 MB/s, and CPU utilization reaches up to 100%, followed by "CPU_IOWAIT" warnings. I tested the network speed with both iperf3 and speedtest at around 900 - 940 Mb/s, so the issue is not network related. I guess this point is crucial: when a large file is moved from the array to the cache pool, the transfer speed is quite low (around 60 - 80 MB/s), yet the same file moves much faster from the cache pool to the array (130 - 180 MB/s). Strange, isn't it? I know the J4105 is not the fastest CPU, but it should be capable of handling gigabit speeds. Any idea what could be wrong? (A simple write test is sketched below.)
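     To separate raw disk throughput from torrent overhead, a sequential write to the cache pool can be timed directly (the /mnt/cache path is an assumption; the test file is removed afterwards):
       # write 4 GiB straight to the cache pool, bypassing the page cache
       dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=4096 oflag=direct status=progress
       # clean up the test file
       rm /mnt/cache/ddtest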