danioj

Everything posted by danioj

  1. @ljm42 what are your / LT's initial thoughts on this report? I am happy to stay on the RC and help debug, but I think we need a pragmatic and consistent approach that multiple users can follow.
  2. My sentiments exactly. With something like this, there have to be more people impacted by it if it is related to unRAID itself. I actually saw @nuhll post some logs regarding the USB backup feature of My Servers that appeared to show similar regular disk spin-up events. I am open, of course, to the possibility that something else is causing it - just not sure what.
  3. Just checking in. Uptime is now 3 days 13 hours 9 minutes since I last issued the netfilter fix command. Not one call trace in the log or hard lock up / crash since.
  4. I am raising this bug report to formally capture discussions on the 6.10.0-rc1 release thread relating to a possible bug. In summary, I have found that emhttpd keeps issuing read SMART events to my array disks at consistent intervals, which is spinning them up. The effect is that my usually spun-down array is pretty much spun up 24x7. Here are the relevant discussions in that thread. There was some suggestion that the CA Turbo Write plugin or my Marvell controller (which has only 3 of my 16 disks connected to it) could be at fault, but I have since removed the plugin and have confirmed that this issue also appears to be experienced by another user, @Mathervius, who doesn't have a Marvell controller. Diagnostics are attached. I have chosen Minor as the priority, but it really feels like a bit more than that: having the array spun up all the time isn't desirable at all. unraid-diagnostics-20210902-1852.zip
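     A quick way to see the pattern from the console is sketched below (a minimal example, assuming the default syslog location of /var/log/syslog; adjust the path if yours differs):

     # Count how many times emhttpd has read SMART per device since boot
     grep 'emhttpd: read SMART' /var/log/syslog | awk '{print $NF}' | sort | uniq -c | sort -rn

     # Print the timestamp and device of each SMART read, to expose the roughly two-hourly cadence
     grep 'emhttpd: read SMART' /var/log/syslog | awk '{print $1, $2, $3, $NF}'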
  5. Posting this here in the hope that it assists someone in the future. I host my instance of Home Assistant in a VM on unRAID. I recently purchased a ConBee II USB gateway so I can add Zigbee devices. I added the USB device using the unRAID VM GUI, as I imagine most would, by just checking the tick box next to the device. This didn't work: while Home Assistant found the device, the integration would not add (there were communication errors). The trick was to add the device as a usb-serial device. AFAIK you cannot do this via the GUI, so I added the following to my VM config:

     <serial type='dev'>
       <source path='/dev/serial/by-id/<yourusbid>'/>
       <target type='usb-serial' port='1'>
         <model name='usb-serial'/>
       </target>
       <alias name='serial1'/>
       <address type='usb' bus='0' port='4'/>
     </serial>

     I was then able to add the integration easily. Interestingly, it didn't auto-discover, but that's just an aside. Note: <yourusbid> can be found via the command line - it contains the device serial, so it's not something to be posted.
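     For anyone hunting for the path to use, the stable by-id symlink can be listed from the unRAID console with something like the below (a sketch; the exact symlink name varies per device and contains its serial number, so redact it before sharing):

     # List serial devices by their persistent ID; the ConBee II appears as a symlink to a ttyACM/ttyUSB device
     ls -l /dev/serial/by-id/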
  6. Thanks. Since I posted earlier this morning (well, morning for me, in Australia) that I had spun my array down, this has been my log:

     Sep 3 09:09:48 unraid emhttpd: spinning down /dev/sdc
     Sep 3 09:09:49 unraid emhttpd: spinning down /dev/sdf
     Sep 3 09:09:49 unraid emhttpd: spinning down /dev/sdd
     Sep 3 09:09:50 unraid emhttpd: spinning down /dev/sdg
     Sep 3 09:09:50 unraid emhttpd: spinning down /dev/sdl
     Sep 3 09:09:51 unraid emhttpd: spinning down /dev/sde
     Sep 3 09:09:51 unraid emhttpd: spinning down /dev/sdq
     Sep 3 09:09:52 unraid emhttpd: spinning down /dev/sdr
     Sep 3 09:09:52 unraid emhttpd: spinning down /dev/sdm
     Sep 3 09:09:53 unraid emhttpd: spinning down /dev/sds
     Sep 3 09:09:53 unraid emhttpd: spinning down /dev/sdo
     Sep 3 09:09:54 unraid emhttpd: spinning down /dev/sdn
     Sep 3 09:09:55 unraid emhttpd: spinning down /dev/sdp
     Sep 3 09:09:55 unraid emhttpd: spinning down /dev/sdi
     Sep 3 09:09:56 unraid emhttpd: spinning down /dev/sdh
     Sep 3 09:09:57 unraid emhttpd: spinning down /dev/sdj
     Sep 3 09:23:24 unraid webGUI: Successful login user root from 10.9.69.31
     Sep 3 09:29:01 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Sep 3 09:29:01 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Sep 3 09:59:41 unraid webGUI: Successful login user root from 10.9.69.31
     Sep 3 10:06:42 unraid emhttpd: read SMART /dev/sdm
     Sep 3 10:06:52 unraid emhttpd: read SMART /dev/sdi
     Sep 3 10:07:21 unraid emhttpd: read SMART /dev/sdh
     Sep 3 10:07:21 unraid emhttpd: read SMART /dev/sdr
     Sep 3 10:07:21 unraid emhttpd: read SMART /dev/sds
     Sep 3 10:07:21 unraid emhttpd: read SMART /dev/sdn
     Sep 3 10:07:21 unraid emhttpd: read SMART /dev/sdq
     Sep 3 10:07:21 unraid emhttpd: read SMART /dev/sdo
     Sep 3 10:07:21 unraid emhttpd: read SMART /dev/sdp
     Sep 3 10:07:41 unraid emhttpd: read SMART /dev/sdj
     Sep 3 10:07:41 unraid emhttpd: read SMART /dev/sdc
     Sep 3 10:28:30 unraid emhttpd: read SMART /dev/sdg
     Sep 3 10:28:42 unraid emhttpd: read SMART /dev/sdl
     Sep 3 10:28:49 unraid emhttpd: read SMART /dev/sde
     Sep 3 10:28:56 unraid emhttpd: read SMART /dev/sdd
     Sep 3 10:34:43 unraid emhttpd: read SMART /dev/sdf
     Sep 3 11:17:57 unraid emhttpd: spinning down /dev/sdh
     Sep 3 11:18:58 unraid emhttpd: spinning down /dev/sdj
     Sep 3 11:18:58 unraid emhttpd: spinning down /dev/sdc
     Sep 3 11:34:04 unraid emhttpd: spinning down /dev/sdd
     Sep 3 11:34:35 unraid emhttpd: spinning down /dev/sde
     Sep 3 11:59:54 unraid emhttpd: spinning down /dev/sdf
     Sep 3 11:59:54 unraid emhttpd: spinning down /dev/sdn
     Sep 3 11:59:56 unraid emhttpd: spinning down /dev/sdm
     Sep 3 11:59:56 unraid emhttpd: spinning down /dev/sdg
     Sep 3 11:59:56 unraid emhttpd: spinning down /dev/sdq
     Sep 3 11:59:56 unraid emhttpd: spinning down /dev/sdo
     Sep 3 11:59:56 unraid emhttpd: spinning down /dev/sdl
     Sep 3 11:59:56 unraid emhttpd: spinning down /dev/sdi
     Sep 3 11:59:58 unraid emhttpd: spinning down /dev/sdr
     Sep 3 11:59:58 unraid emhttpd: spinning down /dev/sds
     Sep 3 11:59:58 unraid emhttpd: spinning down /dev/sdp
     Sep 3 12:00:36 unraid emhttpd: read SMART /dev/sdf
     Sep 3 12:03:02 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Sep 3 12:03:02 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Sep 3 12:03:04 unraid emhttpd: read SMART /dev/sdm
     Sep 3 12:03:29 unraid emhttpd: read SMART /dev/sdh
     Sep 3 12:03:29 unraid emhttpd: read SMART /dev/sdr
     Sep 3 12:03:29 unraid emhttpd: read SMART /dev/sds
     Sep 3 12:03:29 unraid emhttpd: read SMART /dev/sdn
     Sep 3 12:03:29 unraid emhttpd: read SMART /dev/sdq
     Sep 3 12:03:29 unraid emhttpd: read SMART /dev/sdo
     Sep 3 12:03:29 unraid emhttpd: read SMART /dev/sdi
     Sep 3 12:03:29 unraid emhttpd: read SMART /dev/sdp
     Sep 3 12:03:49 unraid emhttpd: read SMART /dev/sdj
     Sep 3 12:03:49 unraid emhttpd: read SMART /dev/sdc
     Sep 3 12:03:59 unraid emhttpd: read SMART /dev/sdg
     Sep 3 12:03:59 unraid emhttpd: read SMART /dev/sdd
     Sep 3 12:03:59 unraid emhttpd: read SMART /dev/sde
     Sep 3 12:03:59 unraid emhttpd: read SMART /dev/sdl

     Something appears (to me) to be causing unRAID to keep reading SMART on inactive devices and spinning them up. Not sure what it is. I think I will just raise this as a bug so we can track it separately from this announcement thread. I'll summarise these posts into one opening post. Happy to work with you closely on this @Mathervius. P.S. My array is spinning down quicker too, but that is to be expected - I think it is an effect of the Turbo Write plugin.
  7. Will you list which plugins you are using, and also whether you are using a Marvell controller in your setup, please? I have read that and it is good advice. I will probably retire the card when I upgrade the motherboard, CPU and RAM. It is almost impossible to get a motherboard with the same number of ports as the one I have, so a good controller card will be a must. For this issue, I find it hard to believe that the card is the cause. I base this on nothing other than observing that the spin-up is happening across all drives, not just the 3 that are on the controller. That is a good suggestion. I will do that. Thanks. So - what do we collectively think? Worth posting as a bug? EDIT: I have just disabled the Turbo Write plugin and spun the array down. I am heading to work now. It will be interesting to see if the SMART read commands are issued during the day.
  8. Last few hours of my logs:

     Sep 2 16:26:51 unraid webGUI: Successful login user root from 10.9.69.31
     Sep 2 16:27:59 unraid crond[1959]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
     Sep 2 17:29:37 unraid emhttpd: spinning down /dev/sdg
     Sep 2 17:29:37 unraid emhttpd: spinning down /dev/sdd
     Sep 2 17:29:37 unraid emhttpd: spinning down /dev/sde
     Sep 2 17:29:37 unraid emhttpd: spinning down /dev/sdl
     Sep 2 17:29:53 unraid emhttpd: spinning down /dev/sdf
     Sep 2 17:29:55 unraid emhttpd: spinning down /dev/sdm
     Sep 2 17:29:55 unraid emhttpd: spinning down /dev/sdr
     Sep 2 17:29:55 unraid emhttpd: spinning down /dev/sds
     Sep 2 17:29:55 unraid emhttpd: spinning down /dev/sdn
     Sep 2 17:29:55 unraid emhttpd: spinning down /dev/sdq
     Sep 2 17:29:55 unraid emhttpd: spinning down /dev/sdo
     Sep 2 17:29:55 unraid emhttpd: spinning down /dev/sdp
     Sep 2 17:30:20 unraid emhttpd: read SMART /dev/sdg
     Sep 2 17:30:26 unraid emhttpd: spinning down /dev/sdh
     Sep 2 17:30:39 unraid emhttpd: spinning down /dev/sdj
     Sep 2 17:30:39 unraid emhttpd: spinning down /dev/sdc
     Sep 2 17:30:39 unraid emhttpd: spinning down /dev/sdi
     Sep 2 17:31:35 unraid kernel: mdcmd (54): set md_write_method 0
     Sep 2 17:31:35 unraid kernel:
     Sep 2 18:30:22 unraid emhttpd: spinning down /dev/sdg
     Sep 2 18:44:34 unraid webGUI: Successful login user root from 10.9.69.31
     Sep 2 18:51:29 unraid emhttpd: read SMART /dev/sds
     Sep 2 18:51:39 unraid emhttpd: read SMART /dev/sdr
     Sep 2 18:51:50 unraid emhttpd: read SMART /dev/sdp
     Sep 2 18:52:01 unraid emhttpd: read SMART /dev/sdo
     Sep 2 18:52:12 unraid emhttpd: read SMART /dev/sdm
     Sep 2 18:52:23 unraid emhttpd: read SMART /dev/sdq
     Sep 2 18:52:42 unraid emhttpd: read SMART /dev/sdj
     Sep 2 18:52:59 unraid emhttpd: read SMART /dev/sdc
     Sep 2 18:52:59 unraid emhttpd: read SMART /dev/sdn
     Sep 2 18:53:18 unraid emhttpd: read SMART /dev/sdh
     Sep 2 18:53:38 unraid emhttpd: read SMART /dev/sdi
     Sep 2 18:56:40 unraid kernel: mdcmd (55): set md_write_method 1
     Sep 2 18:56:40 unraid kernel:
     Sep 2 19:09:03 unraid emhttpd: read SMART /dev/sdd
     Sep 2 19:09:41 unraid emhttpd: read SMART /dev/sde
     Sep 2 19:09:57 unraid emhttpd: read SMART /dev/sdf
     Sep 2 19:52:35 unraid emhttpd: spinning down /dev/sdj
     Sep 2 19:52:44 unraid emhttpd: spinning down /dev/sdc
     Sep 2 19:53:01 unraid emhttpd: spinning down /dev/sdh
     Sep 2 20:09:12 unraid emhttpd: spinning down /dev/sdd
     Sep 2 20:09:17 unraid emhttpd: spinning down /dev/sdq
     Sep 2 20:09:27 unraid emhttpd: spinning down /dev/sdm
     Sep 2 20:09:49 unraid emhttpd: spinning down /dev/sde
     Sep 2 20:09:50 unraid emhttpd: spinning down /dev/sdr
     Sep 2 20:09:52 unraid emhttpd: spinning down /dev/sds
     Sep 2 20:09:52 unraid emhttpd: spinning down /dev/sdo
     Sep 2 20:09:56 unraid emhttpd: spinning down /dev/sdp
     Sep 2 20:10:08 unraid emhttpd: spinning down /dev/sdf
     Sep 2 20:10:08 unraid emhttpd: spinning down /dev/sdn
     Sep 2 20:11:40 unraid kernel: mdcmd (56): set md_write_method 0
     Sep 2 20:11:40 unraid kernel:
     Sep 2 20:27:20 unraid emhttpd: read SMART /dev/sdr
     Sep 2 20:27:31 unraid emhttpd: read SMART /dev/sds
     Sep 2 20:27:44 unraid emhttpd: read SMART /dev/sdm
     Sep 2 20:27:54 unraid emhttpd: read SMART /dev/sdg
     Sep 2 20:28:02 unraid emhttpd: read SMART /dev/sdl
     Sep 2 20:28:09 unraid emhttpd: read SMART /dev/sdo
     Sep 2 20:28:19 unraid emhttpd: read SMART /dev/sdq
     Sep 2 20:28:30 unraid emhttpd: read SMART /dev/sdn
     Sep 2 20:28:45 unraid emhttpd: read SMART /dev/sdp
     Sep 2 20:28:58 unraid emhttpd: read SMART /dev/sde
     Sep 2 20:29:04 unraid emhttpd: read SMART /dev/sdd
     Sep 2 20:29:16 unraid emhttpd: read SMART /dev/sdf
     Sep 2 20:29:40 unraid emhttpd: read SMART /dev/sdj
     Sep 2 20:29:40 unraid emhttpd: read SMART /dev/sdc
     Sep 2 20:31:41 unraid kernel: mdcmd (57): set md_write_method 1
     Sep 2 20:31:41 unraid kernel:
     Sep 2 20:31:47 unraid emhttpd: read SMART /dev/sdh
  9. Predominantly ports on my motherboard; however, I do have one expansion card. My motherboard is a Supermicro X10SL7-F and my expansion card is a Marvell-based card. Here is how the disks are connected:

     [8086:8c02] 00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)
     [2:0:0:0] disk ATA CT1000MX500SSD1 023 /dev/sdb 1.00TB
     [3:0:0:0] disk ATA ST8000DM004-2CX1 0001 /dev/sdc 8.00TB
     [4:0:0:0] disk ATA WDC WD30EFRX-68A 0A80 /dev/sdd 3.00TB
     [5:0:0:0] disk ATA WDC WD30EFRX-68A 0A80 /dev/sde 3.00TB
     [6:0:0:0] disk ATA WDC WD30EURS-63R 0A80 /dev/sdf 3.00TB
     [7:0:0:0] disk ATA WDC WD30EFRX-68A 0A80 /dev/sdg 3.00TB

     [1000:0086] 02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
     [1:0:0:0] disk ATA WDC WD30EFRX-68A 0A80 /dev/sdl 3.00TB
     [1:0:1:0] disk ATA ST8000AS0002-1NA AR13 /dev/sdm 8.00TB
     [1:0:2:0] disk ATA ST8000VN0022-2EL SC61 /dev/sdn 8.00TB
     [1:0:3:0] disk ATA ST8000AS0002-1NA AR13 /dev/sdo 8.00TB
     [1:0:4:0] disk ATA ST8000AS0002-1NA AR13 /dev/sdp 8.00TB
     [1:0:5:0] disk ATA ST8000AS0002-1NA AR13 /dev/sdq 8.00TB
     [1:0:6:0] disk ATA ST8000AS0002-1NA AR13 /dev/sdr 8.00TB
     [1:0:7:0] disk ATA ST8000AS0002-1NA AR13 /dev/sds 8.00TB

     [1b4b:9230] 07:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe 2.0 x2 4-port SATA 6 Gb/s RAID Controller (rev 11)
     [8:0:0:0] disk ATA ST8000VN004-2M21 SC60 /dev/sdh 8.00TB
     [9:0:0:0] disk ATA ST8000VN004-2M21 SC60 /dev/sdi 8.00TB
     [10:0:0:0] disk ATA ST8000DM004-2CX1 0001 /dev/sdj 8.00TB
     [11:0:0:0] disk ATA CT250BX100SSD1 MU01 /dev/sdk 250GB

     From the diagnostics, I cannot see a pattern as to which drives get their SMART read, but the timing is conspicuously regular. unraid-diagnostics-20210902-1852.zip
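     For anyone wanting to produce the same controller-to-disk mapping on their own server, something like the below should do it from the console (a sketch; lsscsi should be available, since the diagnostics package already includes its output):

     lspci -nn | grep -iE 'sata|sas'   # the SATA/SAS controllers with their [vendor:device] IDs
     lsscsi -s                         # each disk with its SCSI host number (first digit) and size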
  10. I am posting this here just for some direction on whether this is a known bug or not (i.e. whether we know a rogue or non-updated plugin could be causing this). If the former, then I will raise a bug report and post diagnostics. In summary, on this release I am noticing that my array seems to be spun up more than usual. Anecdotal, I know. Querying the log, it appears that disks are spinning up and down all day long, even when there is little to no usage. What I am seeing is that the disks will spin down and then, at regular intervals, I'll get "unraid emhttpd: read SMART" on each disk, which causes them to spin up. Then, as per my settings, the disks spin down again after a period of time ("unraid emhttpd: spinning down"). Rinse and repeat every two hours. Has anyone else noticed this?
  11. I have to openly admit that I do not have the technical insight into how it works. What I can share is what I experienced. The server was stable ever since I issued the command initially. I had to shut the server off as I had an electrician in to install some smart light switches and we had to cut the power. When I turned it back on - well, 5 hours after I turned it back on - the server crashed. I hard reset it, went back in and issued the same command as above, and it has been stable again ever since. I concluded from that (and it will be interesting to see if I get a crash again over the coming week, noting they previously came daily) that the command has to keep being issued.
  12. I have just tested this. The command does not survive reboots.
  13. Is this right? My understanding is that no command I issue on unRAID would survive a reboot, as the OS is loaded into RAM each time from a baseline config. Given that I am setting the conntrack max to 131072 and this seemingly "fixes" the call traces, I would have to set this each time I reboot? My NICs are: 2 x Intel Corporation I210 Gigabit Network Connection (rev 03), onboard on a Supermicro X10SL7-F motherboard. Diagnostics attached. unraid-diagnostics-20210901-0925.zip
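     A couple of read-only commands can help here, both to confirm whether the value survived a reboot and to see how close the table actually gets to the limit (a sketch; these only read the current kernel values):

     sysctl net.netfilter.nf_conntrack_max           # the current table size limit
     cat /proc/sys/net/netfilter/nf_conntrack_count  # how many conntrack entries are in use right now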
  14. Update since I applied this "fix": no call traces in the log or hard crashes yet. If this works, I will probably need to add the command to a user script that executes on array start.
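     Should it come to that, a minimal sketch of such a script follows (assuming the CA User Scripts plugin with the schedule set to run at first array start; the value is the one used in the posts below):

     #!/bin/bash
     # Re-apply the larger conntrack table size, since sysctl changes do not persist across reboots
     sysctl net/netfilter/nf_conntrack_max=131072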
  15. Thanks for your input, guys. Rather than try to pick apart your advice, I thought I would just share my network config and Docker config. In short, I am using 2 interfaces. I am not sure how I could have hit the limit of my NIC, but I am also sure that I am not doing anything that wild either. eth0 is my primary unRAID interface and eth1 is used to allow Docker containers and VMs access to other VLANs on my network. I have minimal inter-VLAN routing established in pfSense to allow for things like administering containers on other networks and letting some of them interact with one another where needed.
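     If it helps to see the interface layout in text form, it can be pulled from the unRAID console with something like the below (a sketch; both commands are read-only):

     ip -br link show   # every interface, bridge and VLAN sub-interface, one per line
     ip -br addr show   # the addresses assigned to each of them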
  16. Update. I had another hard lock-up overnight. Same issues in the log. Tried the fix linked to me by @ljm42 above:

     sysctl net/netfilter/nf_conntrack_max=131072

     Let's see how it goes.
  17. Thanks for the suggestion @ljm42. Comparing the logs of the two crashes they look different - certainly the one in the post you linked does have many references to netfilter where mine doesn’t - but I guess there is no harm in giving it a go. It will only take 24-48 hours to test and find out.
  18. OK, I tried it anyway. Disabling host access to custom networks made no difference; Plex was unavailable. I tested further. It turns out that it is not just Plex that my Google clients (on another VLAN) can't access when I switch from macvlan to ipvlan - it is any container that is running on another network. So here is my setup to make it clearer (all my containers have their own IP, and I use the secondary interface on the server to give them access to VLANs):

     Dockers 1-5 on br1.66 (for VPN routing)
     Docker 6 on br1.77 (for main LAN segregation but normal gateway)
     Dockers 7-8 on br0 (for main LAN)
     Dockers 9-10 on the custom network proxynet (for external access)

     With ipvlan enabled, Dockers 1-5 can access each other. Docker 6 can't access Dockers 1-5. Dockers 7-8 can't access Dockers 1-5 or 6, but can access each other. There appears to be no issue with Dockers 9-10. Enable macvlan and all is well, until it crashes. It appears that traffic across networks doesn't flow when I enable ipvlan; traffic is fine within the same network. Not sure what is going on in the background between the two, but something is different.
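     One way to sanity-check what each custom network looks like after switching modes is sketched below (the network names are the ones from this post; the inspect format simply prints each network's driver and parent interface):

     docker network ls
     docker network inspect br0 br1.66 br1.77 --format '{{.Name}}: driver={{.Driver}} parent={{index .Options "parent"}}'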
  19. I am afraid I need to keep this enabled. This is critical to other aspects of my setup.
  20. Update. I made the change to ipvlan and things haven't worked as expected. Everything seemed fine with the change until I tried to watch Plex in the evening: my Plex clients could not connect. Rather than debug, I switched back to macvlan and, hey presto, Plex was working again. My setup is not as standard as most, but not overly unusual either:
     - Plex uses br0 for a custom IP and runs on my main VLAN
     - My Plex clients sit on my IoT VLAN
     - Firewall rules and mDNS are set to allow clients discovery of and access to Plex
     - Plex works fine from my phone, which is on my main VLAN
     Additional info: Host access to custom networks is set to Yes. Preserve user defined networks is set to Yes.
  21. Thanks @trurl. I think I must have missed that because I had not had macvlan crash issues previously. I will change to ipvlan mode and report back after a week (or if another crash occurs).
  22. Interestingly, I have also noticed this in the log this morning too:

     Aug 25 10:59:38 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:38 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:39 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:39 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:40 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:40 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:41 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:41 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:42 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:42 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:43 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:43 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:44 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:44 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:45 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:45 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:46 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:46 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:47 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:47 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:48 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:48 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:49 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:49 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:50 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:50 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:51 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:51 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:52 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:52 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:53 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:53 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:54 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:54 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:55 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:55 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:56 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:56 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:57 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:57 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:58 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:58 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 10:59:59 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 10:59:59 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 11:00:00 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 11:00:00 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 11:00:01 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 11:00:01 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 11:00:02 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 11:00:02 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 25 11:00:03 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 25 11:00:03 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs

     Only posting this as it appears just before the crash in the above post. Not sure if it is a precursor to a crash.
  23. I previously mentioned (on the release thread) that I had experienced a few random crashes since upgrading to 6.10.0-rc1. Each crash required a hard reset of the server, making capturing diagnostics problematic. I enabled syslog mirroring, though, and have been able to capture the following error:

     Aug 24 08:58:01 unraid kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000dc000-0x000dffff window]
     Aug 24 08:58:01 unraid kernel: caller _nv000722rm+0x1ad/0x200 [nvidia] mapping multiple BARs
     Aug 24 09:56:19 unraid kernel: ------------[ cut here ]------------
     Aug 24 09:56:19 unraid kernel: WARNING: CPU: 3 PID: 4821 at net/netfilter/nf_conntrack_core.c:1132 __nf_conntrack_confirm+0xa0/0x1eb [nf_conntrack]
     Aug 24 09:56:19 unraid kernel: Modules linked in: nvidia_modeset(PO) nvidia_uvm(PO) veth xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle xt_nat xt_tcpudp ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs md_mod nvidia(PO) nct6775 hwmon_vid jc42 ip6table_filter ip6_tables iptable_filter ip_tables x_tables igb x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm ast drm_vram_helper drm_ttm_helper ttm drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel drm crypto_simd cryptd rapl ahci ipmi_ssif agpgart intel_cstate mpt3sas syscopyarea intel_uncore sysfillrect sysimgblt i2c_i801 libahci fb_sys_fops input_leds intel_pch_thermal video i2c_algo_bit raid_class i2c_smbus scsi_transport_sas i2c_core led_class backlight thermal button acpi_ipmi fan ipmi_si [last unloaded: igb]
     Aug 24 09:56:19 unraid kernel: CPU: 3 PID: 4821 Comm: kworker/3:1 Tainted: P O 5.13.8-Unraid #1
     Aug 24 09:56:19 unraid kernel: Hardware name: Supermicro X10SL7-F/X10SL7-F, BIOS 3.2 06/09/2018
     Aug 24 09:56:19 unraid kernel: Workqueue: events macvlan_process_broadcast [macvlan]
     Aug 24 09:56:19 unraid kernel: RIP: 0010:__nf_conntrack_confirm+0xa0/0x1eb [nf_conntrack]
     Aug 24 09:56:19 unraid kernel: Code: e8 7e f6 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 92 f4 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 c6 ed ff ff e8 09 f3 ff ff e9 22 01
     Aug 24 09:56:19 unraid kernel: RSP: 0018:ffffc9000015cd20 EFLAGS: 00010202
     Aug 24 09:56:19 unraid kernel: RAX: 0000000000000188 RBX: 0000000000008091 RCX: 00000000b4e7b974
     Aug 24 09:56:19 unraid kernel: RDX: 0000000000000000 RSI: 000000000000033c RDI: ffffffffa0264eb0
     Aug 24 09:56:19 unraid kernel: RBP: ffff888390b048c0 R08: 00000000f0e26b20 R09: ffff88818087a6a0
     Aug 24 09:56:19 unraid kernel: R10: ffff88822f778040 R11: 0000000000000000 R12: 000000000000233c
     Aug 24 09:56:19 unraid kernel: R13: ffffffff82168b00 R14: 0000000000008091 R15: 0000000000000000
     Aug 24 09:56:19 unraid kernel: FS: 0000000000000000(0000) GS:ffff8887ffcc0000(0000) knlGS:0000000000000000
     Aug 24 09:56:19 unraid kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Aug 24 09:56:19 unraid kernel: CR2: 00007f9528a8af73 CR3: 000000000200a005 CR4: 00000000001726e0
     Aug 24 09:56:19 unraid kernel: Call Trace:
     Aug 24 09:56:19 unraid kernel: <IRQ>
     Aug 24 09:56:19 unraid kernel: nf_conntrack_confirm+0x2f/0x36 [nf_conntrack]
     Aug 24 09:56:19 unraid kernel: nf_hook_slow+0x3e/0x93
     Aug 24 09:56:19 unraid kernel: ? ip_protocol_deliver_rcu+0x115/0x115
     Aug 24 09:56:19 unraid kernel: NF_HOOK.constprop.0+0x70/0xc8
     Aug 24 09:56:19 unraid kernel: ? ip_protocol_deliver_rcu+0x115/0x115
     Aug 24 09:56:19 unraid kernel: ip_sabotage_in+0x4c/0x59 [br_netfilter]
     Aug 24 09:56:19 unraid kernel: nf_hook_slow+0x3e/0x93
     Aug 24 09:56:19 unraid kernel: ? ip_rcv_finish_core.constprop.0+0x351/0x351
     Aug 24 09:56:19 unraid kernel: NF_HOOK.constprop.0+0x70/0xc8
     Aug 24 09:56:19 unraid kernel: ? ip_rcv_finish_core.constprop.0+0x351/0x351
     Aug 24 09:56:19 unraid kernel: __netif_receive_skb_one_core+0x77/0x98
     Aug 24 09:56:19 unraid kernel: process_backlog+0xab/0x143
     Aug 24 09:56:19 unraid kernel: __napi_poll+0x2a/0x114
     Aug 24 09:56:19 unraid kernel: net_rx_action+0xe8/0x1f2
     Aug 24 09:56:19 unraid kernel: __do_softirq+0xef/0x21b
     Aug 24 09:56:19 unraid kernel: do_softirq+0x50/0x68
     Aug 24 09:56:19 unraid kernel: </IRQ>
     Aug 24 09:56:19 unraid kernel: netif_rx_ni+0x56/0x8b
     Aug 24 09:56:19 unraid kernel: macvlan_broadcast+0x116/0x144 [macvlan]
     Aug 24 09:56:19 unraid kernel: macvlan_process_broadcast+0xc7/0x10b [macvlan]
     Aug 24 09:56:19 unraid kernel: process_one_work+0x196/0x274
     Aug 24 09:56:19 unraid kernel: worker_thread+0x19c/0x240
     Aug 24 09:56:19 unraid kernel: ? rescuer_thread+0x2a2/0x2a2
     Aug 24 09:56:19 unraid kernel: kthread+0xdf/0xe4
     Aug 24 09:56:19 unraid kernel: ? set_kthread_struct+0x32/0x32
     Aug 24 09:56:19 unraid kernel: ret_from_fork+0x22/0x30
     Aug 24 09:56:19 unraid kernel: ---[ end trace 44186f4b6dd2c3e1 ]---

     After a reset, all is well, and there appears to be no obvious regularity or trigger for the above. EDIT: set to Urgent as per the priority definition instructions, given it was a server crash.
  24. Don't hate to ask - it's common to forget an obvious step sometimes, and it's nice to be reminded to check the basics. However, on this occasion, I had tried that. Also, as mentioned, I was able to reproduce the issue on an iPhone that had never accessed unRAID before.