permissionBRICK

Everything posted by permissionBRICK

  1. Update: It's looking like I finally managed to fix the issue. The last thing I changed was uninstalling everything network-related from Nerd Tools and updating the rest. The server has been up without issues since the last post.
  2. (Not a support request.) I made a fork of this lancache repo and added the ability to define custom DNS entries in the Docker config. It's simple and pretty janky, but if you want to, for example, give your NAS a local DNS name without spinning up a separate DNS server, you can use this two-in-one and save resources (a rough sketch of the idea is at the end of this post list). Side note: I didn't make a pull request, since I don't think this is a feature most people would want, and the implementation is kind of lazy. https://github.com/permissionBRICK/lancache-bundle
  3. I thought the problem was fixed this time when I replaced the 10G card with an SFP+ card, but it happened again. This time, with the SFP+ card, it didn't take down the network and the server was still accessible initially; however, I got the same CPU stall errors in syslog again, and after a few minutes the server stopped responding. I have found several other topics, and nobody who has encountered these CPU stall errors seems to have found a solution, even after downgrading, and since I have been getting these issues for several versions now, I have no idea how far back I could even downgrade... I also managed to grab a Netdata snapshot when it happened: one core seems to stall on SOFTIRQ while another stalls on SYSTEM. Anyone got any more ideas for what I can try? Up to now I have replaced every single hardware component except the hard drives, the USB key, and the PSU, and the issue persists. Do I need to reinstall Unraid from scratch on a new USB key? (A couple of commands for pulling these stall reports out of a synced syslog are sketched at the end of this post list.) syslog.log
  4. Were you running any Docker containers when it happened, or are you not running any at all?
  5. The issue happened again this weekend; at exactly 2020-06-13 17:10 the network went down. This time I found an error in the NAS's syslog on the syslog sync server. The timestamp of the error is about two minutes before the lockup occurred, and seeing that it is the only message after hours of nothing, and that the lockup gradually gets worse until it takes out the entire network, this might very well be the cause, or at least related to the issue: syslog.log
  6. Can you post the hardware specs of your setup? If it is entirely different from mine, maybe we can rule out hardware as the cause completely... I have already swapped all components except the drives, the PSU, and the Unraid install itself on the stick. Can you also post your current diagnostics file so we can compare installed plugins etc.? (A quick way to list installed plugins is sketched at the end of this post list.) nethub-diagnostics-20200615-1146.zip
  7. It happened again, and it locked up the network as well, like usual. I guess if the 10G card is at fault, it is a driver issue; otherwise it has nothing to do with the 10G card at all. The only remaining components that I haven't swapped yet are the drives, the PSU, and the Unraid install itself.
  8. Now something happened, but I'm not sure if it is the same issue. The network is still up and the machine is still reachable, but all cores except one were stuck at 100% iowait, and the mover is running without making any progress. The syslog shows this error:
     Jun 5 00:39:29 Nethub shutdown[1443]: shutting down for system reboot
     Jun 5 00:40:33 Nethub kernel: rcu: INFO: rcu_sched self-detected stall on CPU
     Jun 5 00:40:33 Nethub kernel: rcu: 5-....: (240002 ticks this GP) idle=566/1/0x4000000000000002 softirq=18902205/18902205 fqs=58516
     Jun 5 00:40:33 Nethub kernel: rcu: (t=240004 jiffies g=37787229 q=503044)
     Jun 5 00:40:33 Nethub kernel: Sending NMI from CPU 5 to CPUs 4:
     Jun 5 00:40:33 Nethub kernel: NMI backtrace for cpu 4
     Jun 5 00:40:33 Nethub kernel: CPU: 4 PID: 30342 Comm: kworker/u16:2 Tainted: G B D W 4.19.107-Unraid #1
     Jun 5 00:40:33 Nethub kernel: Hardware name: MSI MS-7A63/Z270 GAMING PRO CARBON (MS-7A63), BIOS 1.90 07/03/2018
     Jun 5 00:40:33 Nethub kernel: Workqueue: btrfs-endio-write btrfs_endio_write_helper
     Jun 5 00:40:33 Nethub kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x11e/0x171
     Jun 5 00:40:33 Nethub kernel: Code: 48 03 04 cd 20 37 db 81 48 89 10 8b 42 08 85 c0 75 04 f3 90 eb f5 48 8b 0a 48 85 c9 74 c9 0f 0d 09 8b 07 66 85 c0 74 04 f3 90 <eb> f5 41 89 c0 66 45 31 c0 44 39 c6 74 0a 48 85 c9 c6 07 01 75 1b
     Jun 5 00:40:33 Nethub kernel: RSP: 0018:ffffc9000ce77908 EFLAGS: 00000202
     Jun 5 00:40:33 Nethub kernel: RAX: 0000000000140101 RBX: ffff88880b9a8a00 RCX: 0000000000000000
     Jun 5 00:40:33 Nethub kernel: RDX: ffff88884eb20740 RSI: 0000000000140000 RDI: ffff88880b9a8b60
     Jun 5 00:40:33 Nethub kernel: RBP: ffff8881076c61a0 R08: 0000000000000005 R09: 0000000000000000
     Jun 5 00:40:33 Nethub kernel: R10: ffff88880b9a8b60 R11: ffff88884e405301 R12: ffff888535e4f130
     Jun 5 00:40:33 Nethub kernel: R13: ffff8882e09824e0 R14: ffff8888475a2000 R15: 0000000000000000
     Jun 5 00:40:33 Nethub kernel: FS: 0000000000000000(0000) GS:ffff88884eb00000(0000) knlGS:0000000000000000
     Jun 5 00:40:33 Nethub kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Jun 5 00:40:33 Nethub kernel: CR2: 0000153317b04000 CR3: 0000000001e0a002 CR4: 00000000003606e0
     Jun 5 00:40:33 Nethub kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     Jun 5 00:40:33 Nethub kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     Jun 5 00:40:33 Nethub kernel: Call Trace:
     Jun 5 00:40:33 Nethub kernel: _raw_spin_lock+0x16/0x19
     Jun 5 00:40:33 Nethub kernel: btrfs_add_delayed_tree_ref+0x214/0x2a4
     Jun 5 00:40:33 Nethub kernel: btrfs_alloc_tree_block+0x483/0x510
     Jun 5 00:40:33 Nethub kernel: alloc_tree_block_no_bg_flush+0x45/0x4d
     Jun 5 00:40:33 Nethub kernel: __btrfs_cow_block+0x143/0x4ee
     Jun 5 00:40:33 Nethub kernel: btrfs_cow_block+0x105/0x113
     Jun 5 00:40:33 Nethub kernel: btrfs_search_slot+0x330/0x84a
     Jun 5 00:40:33 Nethub kernel: btrfs_lookup_file_extent+0x47/0x61
     Jun 5 00:40:33 Nethub kernel: __btrfs_drop_extents+0x16f/0xb12
     Jun 5 00:40:33 Nethub kernel: ? next_state+0x9/0x13
     Jun 5 00:40:33 Nethub kernel: ? __set_extent_bit+0x280/0x430
     Jun 5 00:40:33 Nethub kernel: insert_reserved_file_extent.constprop.0+0x98/0x2cc
     Jun 5 00:40:33 Nethub kernel: btrfs_finish_ordered_io+0x317/0x5d2
     Jun 5 00:40:33 Nethub kernel: normal_work_helper+0xd0/0x1c7
     Jun 5 00:40:33 Nethub kernel: process_one_work+0x16e/0x24f
     Jun 5 00:40:33 Nethub kernel: worker_thread+0x1e2/0x2b8
     Jun 5 00:40:33 Nethub kernel: ? rescuer_thread+0x2a7/0x2a7
     Jun 5 00:40:33 Nethub kernel: kthread+0x10c/0x114
     Jun 5 00:40:33 Nethub kernel: ? kthread_park+0x89/0x89
     Jun 5 00:40:33 Nethub kernel: ret_from_fork+0x35/0x40
     Jun 5 00:40:33 Nethub kernel: NMI backtrace for cpu 5
     Jun 5 00:40:33 Nethub kernel: CPU: 5 PID: 32176 Comm: kworker/u16:1 Tainted: G B D W 4.19.107-Unraid #1
     Jun 5 00:40:33 Nethub kernel: Hardware name: MSI MS-7A63/Z270 GAMING PRO CARBON (MS-7A63), BIOS 1.90 07/03/2018
     Jun 5 00:40:33 Nethub kernel: Workqueue: btrfs-endio-write btrfs_endio_write_helper
     Jun 5 00:40:33 Nethub kernel: Call Trace:
     Jun 5 00:40:33 Nethub kernel: <IRQ>
     Jun 5 00:40:33 Nethub kernel: dump_stack+0x67/0x83
     Jun 5 00:40:33 Nethub kernel: nmi_cpu_backtrace+0x71/0x83
     Jun 5 00:40:33 Nethub kernel: ? lapic_can_unplug_cpu+0x97/0x97
     Jun 5 00:40:33 Nethub kernel: nmi_trigger_cpumask_backtrace+0x57/0xd4
     Jun 5 00:40:33 Nethub kernel: rcu_dump_cpu_stacks+0x8b/0xb4
     Jun 5 00:40:33 Nethub kernel: rcu_check_callbacks+0x296/0x5a0
     Jun 5 00:40:33 Nethub kernel: update_process_times+0x24/0x47
     Jun 5 00:40:33 Nethub kernel: tick_sched_timer+0x36/0x64
     Jun 5 00:40:33 Nethub kernel: __hrtimer_run_queues+0xb7/0x10b
     Jun 5 00:40:33 Nethub kernel: ? tick_sched_handle.isra.0+0x2f/0x2f
     Jun 5 00:40:33 Nethub kernel: hrtimer_interrupt+0xf4/0x20e
     Jun 5 00:40:33 Nethub kernel: smp_apic_timer_interrupt+0x7b/0x93
     Jun 5 00:40:33 Nethub kernel: apic_timer_interrupt+0xf/0x20
     Jun 5 00:40:33 Nethub kernel: </IRQ>
     Jun 5 00:40:33 Nethub kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x6b/0x171
     Jun 5 00:40:33 Nethub kernel: Code: 42 f0 8b 07 30 e4 09 c6 f7 c6 00 ff ff ff 74 0e 81 e6 00 ff 00 00 75 1a c6 47 01 00 eb 14 85 f6 74 0a 8b 07 84 c0 74 04 f3 90 <eb> f6 66 c7 07 01 00 c3 48 c7 c2 40 07 02 00 65 48 03 15 80 6a f8
     Jun 5 00:40:33 Nethub kernel: RSP: 0018:ffffc9000ef4f908 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff13
     Jun 5 00:40:33 Nethub kernel: RAX: 0000000000140101 RBX: ffff88880b9a8a00 RCX: 0000000000004000
     Jun 5 00:40:33 Nethub kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff88880b9a8b60
     Jun 5 00:40:33 Nethub kernel: RBP: ffff88880226b8f0 R08: 0000000000000005 R09: 0000000000000000
     Jun 5 00:40:33 Nethub kernel: R10: ffff88880b9a8b60 R11: ffff88884e405301 R12: ffff88852d64ced8
     Jun 5 00:40:33 Nethub kernel: R13: ffff8881971515b0 R14: ffff8888475a2000 R15: 0000000000000000
     Jun 5 00:40:33 Nethub kernel: _raw_spin_lock+0x16/0x19
     Jun 5 00:40:33 Nethub kernel: btrfs_add_delayed_tree_ref+0x214/0x2a4
     Jun 5 00:40:33 Nethub kernel: btrfs_alloc_tree_block+0x483/0x510
     Jun 5 00:40:33 Nethub kernel: alloc_tree_block_no_bg_flush+0x45/0x4d
     Jun 5 00:40:33 Nethub kernel: __btrfs_cow_block+0x143/0x4ee
     Jun 5 00:40:33 Nethub kernel: btrfs_cow_block+0x105/0x113
     Jun 5 00:40:33 Nethub kernel: btrfs_search_slot+0x330/0x84a
     Jun 5 00:40:33 Nethub kernel: btrfs_lookup_file_extent+0x47/0x61
     Jun 5 00:40:33 Nethub kernel: __btrfs_drop_extents+0x16f/0xb12
     Jun 5 00:40:33 Nethub kernel: ? next_state+0x9/0x13
     Jun 5 00:40:33 Nethub kernel: ? __set_extent_bit+0x280/0x430
     Jun 5 00:40:33 Nethub kernel: insert_reserved_file_extent.constprop.0+0x98/0x2cc
     Jun 5 00:40:33 Nethub kernel: btrfs_finish_ordered_io+0x317/0x5d2
     Jun 5 00:40:33 Nethub kernel: normal_work_helper+0xd0/0x1c7
     Jun 5 00:40:33 Nethub kernel: process_one_work+0x16e/0x24f
     Jun 5 00:40:33 Nethub kernel: worker_thread+0x1e2/0x2b8
     Jun 5 00:40:33 Nethub kernel: ? rescuer_thread+0x2a7/0x2a7
     Jun 5 00:40:33 Nethub kernel: kthread+0x10c/0x114
     Jun 5 00:40:33 Nethub kernel: ? kthread_park+0x89/0x89
     Jun 5 00:40:33 Nethub kernel: ret_from_fork+0x35/0x40
     Jun 5 00:40:33 Nethub kernel: Sending NMI from CPU 5 to CPUs 6:
     Jun 5 00:40:33 Nethub kernel: NMI backtrace for cpu 6
     Jun 5 00:40:33 Nethub kernel: CPU: 6 PID: 30933 Comm: kworker/u16:0 Tainted: G B D W 4.19.107-Unraid #1
     Jun 5 00:40:33 Nethub kernel: Hardware name: MSI MS-7A63/Z270 GAMING PRO CARBON (MS-7A63), BIOS 1.90 07/03/2018
     Jun 5 00:40:33 Nethub kernel: Workqueue: btrfs-endio-write btrfs_endio_write_helper
     Jun 5 00:40:33 Nethub kernel: RIP: 0010:native_queued_spin_lock_slowpath+0x63/0x171
     Jun 5 00:40:33 Nethub kernel: Code: 2f 08 b8 00 01 00 00 0f 42 f0 8b 07 30 e4 09 c6 f7 c6 00 ff ff ff 74 0e 81 e6 00 ff 00 00 75 1a c6 47 01 00 eb 14 85 f6 74 0a <8b> 07 84 c0 74 04 f3 90 eb f6 66 c7 07 01 00 c3 48 c7 c2 40 07 02
     Jun 5 00:40:33 Nethub kernel: RSP: 0018:ffffc9000d90f9e0 EFLAGS: 00000202
     Jun 5 00:40:33 Nethub kernel: RAX: 0000000000000101 RBX: ffff888536993850 RCX: ffffc9000d90fb28
     Jun 5 00:40:33 Nethub kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffff888536993888
     Jun 5 00:40:33 Nethub kernel: RBP: ffff88877e7452f8 R08: ffff88839f210ba0 R09: ffffc9000d90fb2c
     Jun 5 00:40:33 Nethub kernel: R10: ffff88880b9a8b60 R11: 0000000000000000 R12: ffff88880b9a8b78
     Jun 5 00:40:33 Nethub kernel: R13: ffff888536993888 R14: ffffc9000d90fb28 R15: 0000000000000000
     Jun 5 00:40:33 Nethub kernel: FS: 0000000000000000(0000) GS:ffff88884eb80000(0000) knlGS:0000000000000000
     Jun 5 00:40:33 Nethub kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Jun 5 00:40:33 Nethub kernel: CR2: 0000153317b04000 CR3: 0000000001e0a002 CR4: 00000000003606e0
     Jun 5 00:40:33 Nethub kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     Jun 5 00:40:33 Nethub kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     Jun 5 00:40:33 Nethub kernel: Call Trace:
     Jun 5 00:40:33 Nethub kernel: _raw_spin_lock+0x16/0x19
     Jun 5 00:40:33 Nethub kernel: update_existing_head_ref.isra.0+0x32/0x111
     Jun 5 00:40:33 Nethub kernel: add_delayed_ref_head.isra.0+0x102/0x189
     Jun 5 00:40:33 Nethub kernel: btrfs_add_delayed_tree_ref+0x231/0x2a4
     Jun 5 00:40:33 Nethub kernel: btrfs_free_tree_block+0x86/0x1dd
     Jun 5 00:40:33 Nethub kernel: __btrfs_cow_block+0x4a0/0x4ee
     Jun 5 00:40:33 Nethub kernel: btrfs_cow_block+0x105/0x113
     Jun 5 00:40:33 Nethub kernel: btrfs_search_slot+0x330/0x84a
     Jun 5 00:40:33 Nethub kernel: btrfs_lookup_csum+0x4d/0x130
     Jun 5 00:40:33 Nethub kernel: ? _cond_resched+0x1b/0x1e
     Jun 5 00:40:33 Nethub kernel: ? kmem_cache_alloc+0xdf/0xeb
     Jun 5 00:40:33 Nethub kernel: btrfs_csum_file_blocks+0x8b/0x563
     Jun 5 00:40:33 Nethub kernel: add_pending_csums+0x40/0x5b
     Jun 5 00:40:33 Nethub kernel: btrfs_finish_ordered_io+0x3d2/0x5d2
     Jun 5 00:40:33 Nethub kernel: normal_work_helper+0xd0/0x1c7
     Jun 5 00:40:33 Nethub kernel: process_one_work+0x16e/0x24f
     Jun 5 00:40:33 Nethub kernel: worker_thread+0x1e2/0x2b8
     Jun 5 00:40:33 Nethub kernel: ? rescuer_thread+0x2a7/0x2a7
     Jun 5 00:40:33 Nethub kernel: kthread+0x10c/0x114
     Jun 5 00:40:33 Nethub kernel: ? kthread_park+0x89/0x89
     Jun 5 00:40:33 Nethub kernel: ret_from_fork+0x35/0x40
     I tried to reboot the NAS, but then it went to a fixed 80% iowait and 20% system on all cores, and even though it logged "System reboot NOW", it never rebooted. I tried to generate diagnostics, but they never finished. (A SysRq-based way to capture more state next time is sketched at the end of this post list.) syslog.log
  9. Alright, RAM is officially debunked: the same issue just happened while I had the old 8GB modules in the system. I have now swapped the 10G card with the one in my main workstation; let's see if that fixes the issue.
  10. I have been running memory tests on the 32GB RAM modules in the old board (where the same issue used to occur) for two weeks now: one week of MemTest86 and one week of Windows Memory Diagnostic on max settings. Both resulted in no errors. I even ran the memtest tool @Benson suggested, but again no errors. Here is a picture:
  11. No, sadly it doesn't. The 10G link was still connected, and it locked up the network again as well. Yeah, I have now put the old 4GB modules back into the system, so I have plenty of time to test the RAM for any issues. Thanks
  12. Okay, it happened again, this time while the 1G link was down. This resulted in a network lock-up as well, but once I reset the switch the network was fine again, although the server was still frozen. So I guess when the server freezes, the 1G NIC does re-establish its link if it is lost while the 10G card doesn't, but both lock up the network when it happens. I also found these threads, which seem to describe exactly my issue as well: The first one seems to indicate that a memory issue might have been the cause; however, I would find it strange if a generic memory defect caused such similar, specific issues for several people... Nevertheless, I will swap out the RAM again and run memtest on the current modules for a week to be sure...
  13. Yes, there is only one link cable between the two switches. When the NAS is plugged into the 1G switch, the uplink cable between the 1G and 10G switches flashes rapidly during this issue. The other ports don't flash as much, so I guess the buffers are overflowing rather than a broadcast storm overloading the network. I don't know enough about data-link-layer logic to guess why the switches blink as if they are receiving data, yet when I plug the cable directly into the PC, Wireshark doesn't show any packets. None of the VMs were started at any point when the issues occurred. As for Docker containers, I guess the only unusual thing is the lancache docker that I run on br0 with a separate IP, but running ifconfig inside that docker also shows a different MAC. (Some host-side commands to double-check that container's MAC and watch br0 for broadcast traffic are sketched at the end of this post list.)
  14. No, that didn't fix it either. I set bonding to Disabled and configured the second interface to not get an IP at all (I really only need it for Wake-on-LAN). However, today the exact same thing happened. I tried to unplug the cable from the 1G switch and plug it into the 10G switch while it was happening, but the lockup continued. The remote syslog files didn't contain any entries from around the time the issue happened. Anyone got any more ideas? I will now try to disable the link entirely to see if that changes anything. (A quick way to check and enable Wake-on-LAN on the spare NIC is sketched at the end of this post list.)
  15. Another wild idea: is it possible that the active-backup bond, plugged into two different switches, caused a switching loop if both interfaces used the same MAC for some reason, with each switch thinking the other one was connected to that MAC? However, if that were the cause, I'm not sure what happened the first time, since I think back then both bond members were plugged into the 10G switch. Although back then power-cycling the 10G switch completely fixed the network issues, whereas today that only made it a little better; it seemed both switches would lock up as long as one bond member was attached and the two switches were connected to each other. Perhaps the same thing happened inside the 10G switch if its 10G ports are switched separately from its 1G ports. I also tried repeatedly switching the active interface on the bond, but that didn't reproduce the error. (Some commands to inspect the bond's MAC handling are sketched at the end of this post list.)
  16. Well, sadly, it happened again. This time I didn't rush to restart the NAS and spent some time trying to analyze the issue live. Since last time, I have switched the mainboard to an MSI Z270 Gaming Pro Carbon, the CPU to an Intel i7-7700K, and the dual SATA SSD cache to a single NVMe SSD. The only components that stayed the same are the 10G card, the RAM, the PSU, the hard drives, and the Unraid config on my USB stick.
      First I checked with Wireshark whether garbled data was being sent over the network, but on both the 10G port and the backup 1G port of my PC I could only see packets from my own PC, plus, on the 1G link, some packets from devices on the second switch sitting between the main switch and that port. Like last time, plugging a monitor into the NAS did nothing.
      Then I tried unplugging the NAS's 10G link from the router. This allowed devices on the 10G switch to do limited communication. Next I unplugged the backup link to the 1G switch, which restored full communication for all other devices. Then I replugged the 10G link and nothing changed; however, when I replugged the 1G link, devices started to fail again. This might be because the 10G card does not reinitialize its connection. I even tried to rule out the 10G card hard-locking the system by hot-unplugging it, but that changed nothing.
      Then I powered down the NAS, reinstalled the 10G card, and restarted it. Luckily, this time there was no corruption. However, when I stopped the Docker service to check it and then restarted it, the same thing happened again right away. This is where it gets weird: all the devices slowly started to block or drop communication until nothing was working anymore. I used the opportunity to plug the 1G cable from the NAS directly into the PC to look for packets with Wireshark, but again no packets were received. Then I replugged the 1G cable back into the switch, which immediately caused all devices to fail again. Finally, I hard-reset the NAS, and since then I have not been able to reproduce this today.
      So apparently this issue happens with both 1G and 10G switches and with both Intel and AMD mainboards. It seems to lock up the NAS as well as any network switch connected to it, even if you unplug and replug the LAN ports, yet apparently no packets are being sent. I'm slowly approaching the end of my ideas here; can anyone help me out? Is it sending garbled data that isn't in packet form? If so, why would it propagate to the second switch?
      Edit: It just occurred to me to set up a remote syslog server in case it happens again (a minimal forwarding sketch is at the end of this post list).
  17. No, sadly it happened again tonight. The switch locked up and got power-cycled by the watchdog script I made, but the NAS was still stuck and I had to hard-reset it. Any more ideas? (Luckily, it appears there was no corruption this time.) Well, since I just went AMD on my main workstation and freed up an MSI board and an i7-7700K CPU, I will try swapping to that for now. (A rough outline of the watchdog idea is at the end of this post list.)
  18. I ran memtest for 24 hours and didn't find any errors. I guess I'll wait and see whether the problem occurs again or whether it was fixed by the BIOS update and settings.
  19. Based on your recommendation, I updated the BIOS and set Power Supply Idle Control to Typical. I checked the RAM speeds and they were on Auto (which for this RAM defaults to 2400 if I read the BIOS right; 2 sticks in 4 slots, 2nd-gen Ryzen), but I set them to a fixed 2400 anyway. I will keep you updated if the crash happens again.
  20. Hello fellow Unraid users, I have been running Unraid for a year now, but recently I have run into a weird issue: every once in a while (sometimes after a few days, sometimes after a few weeks) I notice my network slowly failing.
      My setup is roughly this: the Unraid PC and a Windows PC are plugged into a 10-Gbit switch (Netgear GS110MX) using Asus XG-C100C network cards. They are also both connected via their onboard gigabit ports as a backup; on Unraid I configured this as an active-backup bond. The NAS uses an ASRock B450 Pro4 (AM4) board. The Wi-Fi access point and internet gateway are also plugged into this switch, as well as the media streaming devices. All the other devices are plugged into a separate gigabit switch, which uplinks to this switch.
      Whenever the network issues start, it usually begins with the NAS disconnecting (Plex streams interrupt, the web GUI is down, SSH is down). Then, slowly, all the other devices on the 10G switch start to fail, until finally the whole network is down: I cannot connect from any device on the switch to any other device. However, all the devices on the second gigabit switch can still communicate fine. The way to resolve this is to power-cycle the 10G switch, after which communication is immediately restored for all devices. Except for Unraid: apparently my Unraid server is completely frozen after this, and even when I plug in a monitor and keyboard there is no response until I press the reset button. Almost every time I do that, the BTRFS on my SSD corrupts and I have to reformat it.
      I first sent the switch in to be replaced, but the new one has exactly the same issue, and since it starts with the Unraid server, and Unraid is also frozen after the fact, I now suspect the issue lies with Unraid rather than with my switch. While I have seen many customer reviews saying that the switch will sometimes fail under high load, all of my crashes happened under zero load, for example while I was asleep.
      Has anyone experienced anything similar? How could I go about diagnosing this further? Is the switch issue causing the NAS to freeze, or is the NAS freezing up and somehow locking up the switch? (Next time it happens I will try to unplug only the NAS from the switch to see if the issue resolves.) Thanks and best regards. nethub-diagnostics-20200402-1947.zip
  21. When setting up SSL to access the web GUI, the help tooltip/dropdown on the "Use SSL/TLS" selection (Yes/No/Auto) has a typo in the filename for the certificate bundle. This is where I got stuck for some time while trying to set up SSL: the help text says certficate_bundle.pem, but the file actually needs to be named certificate_bundle.pem for it to work. (A quick check/rename example is at the end of this post list.)
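
On the custom DNS entries from post 2: the linked fork is the place to look for the actual mechanism, but conceptually the feature boils down to handing the lancache DNS resolver an extra dnsmasq mapping. A minimal sketch only; the name, IP, and file path below are placeholders, not taken from the fork:

    # append a dnsmasq rule that resolves a local name to a LAN IP
    # (dnsmasq picks up /etc/dnsmasq.d/*.conf on restart if that conf-dir is enabled)
    echo 'address=/nas.lan/192.168.1.10' >> /etc/dnsmasq.d/10-custom.conf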
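
On the CPU stall errors from post 3: once the syslog is mirrored to another machine, the stall reports can be pulled out of that copy to compare crashes. A small example; the log path is a placeholder:

    # list every RCU stall report with two lines of context
    grep -n -A2 'rcu_sched self-detected stall' /var/log/remote/nethub-syslog.log
    # count how many distinct stall events were logged
    grep -c 'rcu_sched self-detected stall' /var/log/remote/nethub-syslog.log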
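
On comparing installed plugins from post 6: besides the diagnostics zip, a quick listing of the flash drive usually shows what is installed, assuming the stock Unraid layout where plugin files live under /boot/config/plugins:

    # per-plugin folders and .plg files on the flash drive
    ls -la /boot/config/plugins/
    ls /boot/config/plugins/*.plg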
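
On the hang in post 8, where the diagnostics never finished generating: if the server is still reachable over SSH the next time this happens, the kernel's generic SysRq interface can at least dump task and CPU state into the syslog before a hard reset. This is standard Linux tooling, not something from the thread:

    echo 1 > /proc/sys/kernel/sysrq   # enable all SysRq functions
    echo w > /proc/sysrq-trigger      # log all tasks stuck in uninterruptible (D) state
    echo l > /proc/sysrq-trigger      # log backtraces of all active CPUs
    echo t > /proc/sysrq-trigger      # (verbose) log the state of every task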
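
On the lancache container on br0 from post 13: a couple of host-side checks can confirm which MAC and IP the container actually presents, and whether br0 is being flooded with broadcast frames while the issue is happening. The container name is a placeholder:

    # MAC and IP that Docker assigned to the container on br0
    docker inspect -f '{{.NetworkSettings.Networks.br0.MacAddress}} {{.NetworkSettings.Networks.br0.IPAddress}}' lancache
    # watch the bridge for broadcast frames, printing source MACs (Ctrl-C to stop)
    tcpdump -i br0 -e -nn ether broadcast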
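
On keeping the second NIC only for Wake-on-LAN from post 14: ethtool can show whether the port will still honour magic packets once it has no IP. The interface name is a placeholder:

    ethtool eth0 | grep -i wake   # 'Wake-on: g' means magic-packet wake is enabled
    ethtool -s eth0 wol g         # enable magic-packet WoL if it is not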
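
On the switching-loop idea from post 15: Linux active-backup bonding does, with the default fail_over_mac=none setting, put the same MAC address on every slave, so it is worth checking what the bond actually reports. Two read-only checks:

    cat /proc/net/bonding/bond0                      # mode, currently active slave, slave MACs
    cat /sys/class/net/bond0/bonding/fail_over_mac   # 'none 0' = all slaves share the bond MAC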
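
On the remote syslog idea at the end of post 16: recent Unraid versions have a built-in syslog server option under Settings, but any Linux box running rsyslog can act as the receiver. A minimal sketch; the IP and port are placeholders:

    ## receiver: in /etc/rsyslog.conf, enable the UDP listener, then restart rsyslog
    module(load="imudp")
    input(type="imudp" port="514")
    ## sender: forward everything via UDP (a single @; use @@ for TCP)
    *.* @192.168.1.20:514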
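
On the watchdog script mentioned in post 17: the actual script isn't posted in the thread, but the idea can be sketched as a loop that pings the NAS and power-cycles the switch through a smart plug when pings fail. Everything here (IPs and the plug's HTTP endpoints) is hypothetical:

    #!/bin/bash
    # hypothetical switch watchdog -- not the script from the thread
    NAS=192.168.1.10            # placeholder NAS IP
    PLUG=http://192.168.1.50    # placeholder smart-plug base URL
    while true; do
        if ! ping -c 3 -W 2 "$NAS" > /dev/null 2>&1; then
            curl -s "$PLUG/off" > /dev/null   # cut power to the switch
            sleep 10
            curl -s "$PLUG/on"  > /dev/null   # and bring it back
        fi
        sleep 60
    done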
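
On the certificate filename from post 21: if the bundle was saved under the misspelled name from the help text, renaming it on the flash drive should be enough. The path below is the usual Unraid location but may differ by version:

    ls /boot/config/ssl/certs/
    mv /boot/config/ssl/certs/certficate_bundle.pem /boot/config/ssl/certs/certificate_bundle.pem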