Inenting

Everything posted by Inenting

  1. I am still having this problem on 6.11.1. I sometimes have Unraid tabs open on a VM, but I can't check every time whether I closed them or not. We need a real solution, because the auto-reload unfortunately doesn't seem to work right. zeus-diagnostics-20221022-1430.zip
  2. I can't seem to get a Tdarr node on another Unraid server to work; I keep getting the following errors:

     2022-04-03T19:42:01.799Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[Step W03] [C2] Analyse file
     2022-04-03T19:42:01.800Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:New cache file has already been scanned, no need to scan again
     2022-04-03T19:42:01.800Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Updating Node relay: Processing
     2022-04-03T19:42:01.800Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[1/2] Checking file frame count
     2022-04-03T19:42:01.801Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[2/2] Frame count 0
     2022-04-03T19:42:01.801Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Transcode task, determining transcode settings
     2022-04-03T19:42:01.801Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Plugin stack selected
     2022-04-03T19:42:01.802Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Plugin: Tdarr_Plugin_00td_action_re_order_all_streams_v2
     2022-04-03T19:42:01.802Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[1/5] Reading plugin
     2022-04-03T19:42:01.803Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[2/5] Plugin read
     2022-04-03T19:42:01.803Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[3/5] Installing dependencies
     2022-04-03T19:42:01.803Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[4/5] Running plugin
     2022-04-03T19:42:01.804Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Error TypeError: Cannot read property 'forEach' of undefined
     2022-04-03T19:42:01.804Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Worker config: {
     2022-04-03T19:42:01.804Z   "processFile": false,
     2022-04-03T19:42:01.804Z   "preset": "",
     2022-04-03T19:42:01.804Z   "container": "",
     2022-04-03T19:42:01.804Z   "handbrakeMode": "",
     2022-04-03T19:42:01.804Z   "ffmpegMode": "",
     2022-04-03T19:42:01.804Z   "error": true
     2022-04-03T19:42:01.804Z }
     2022-04-03T19:42:01.804Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Worker log:
     2022-04-03T19:42:01.804Z Pre-processing - Re-order all streams V2☒Plugin error! TypeError: Cannot read property 'forEach' of undefined
     2022-04-03T19:42:01.805Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Worker config [-error-]:

     I also uploaded the log. Both servers run Unraid, and I mapped the Plex and Temp shares. I tried both NFS and SMB; the shares are set to Export: Yes and Public. I don't see what could be wrong. VXk3kw6uhVs-log.txt C9Hx0dw7N-log.txt
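     My guess from the "Frame count 0" plus the forEach error is that the node can see the file entry but can't actually read its stream data, which would point at the share mapping rather than the plugin. A minimal way to check that both containers resolve the library path to the same files (container names and the path below are examples, not my exact config):

        # "tdarr" = server container, "tdarr_node" = node container; run each on its own host
        docker exec tdarr ls -l /media/movies | head
        docker exec tdarr_node ls -l /media/movies | head

     Both listings should show the same file names and sizes; if the node side is empty or errors out, the mount is the problem.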
  3. I've got Grafana installed with Prometheus, but I can't see anything out of the ordinary in the graphs (screenshot attached). Does anyone have an explanation for this?
  4. I woke up today the moment I got the warning and checked the dashboard, but the RAM was still fine (80% unused). I also checked my server health event logs via IPMI, but there is nothing recent there about RAM, so it's probably not a hardware fault either, right?
  5. I haven't run Tdarr nodes for a few months now. At 12 AM there is an SSD trim, and the mover runs every 3 hours; everything major should start after 2 AM. But if it happens at 11 PM, maybe Plex is the problem? I think I am using RAM as storage for Plex transcoding:

     --runtime=nvidia --device=/dev/dri --mount type=tmpfs,destination=/tmp,tmpfs-size=20000000000 --no-healthcheck --restart unless-stopped --log-opt max-size=50m

     But the error happens every day, and my watch times and the error times don't match up.
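     Note that tmpfs-size is in bytes, so the mapping above lets /tmp grow to roughly 20 GB, and tmpfs pages count against RAM, so a few simultaneous transcodes could plausibly eat most of the memory. A sketch of the same container with a smaller cap (the image name and the 8 GB figure are examples; adjust to your template):

        # same flags as my template above, just with a smaller tmpfs cap
        docker run -d --name=plex \
          --runtime=nvidia --device=/dev/dri \
          --mount type=tmpfs,destination=/tmp,tmpfs-size=8000000000 \
          --no-healthcheck --restart unless-stopped --log-opt max-size=50m \
          lscr.io/linuxserver/plex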
  6. Hi guys, I keep getting an "Out of memory errors detected on your server" warning every single day for the past week. The weird thing is that it happens at 4:46 AM every time; the only thing I could correlate it with is the automatic appdata backup, which starts at 4 AM. I did have problems with a kernel panic crash, but that hasn't happened in a month or two. That was also the reason I upgraded to 6.10.0-rc2, but it didn't fix it. How can I check where the problem lies? zeus-diagnostics-20220306-1852.zip syslog-10.50.0.254.log
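     One way to narrow this down: the kernel writes a full report when the OOM killer fires, including which process was killed and which one requested the memory. Assuming the syslog is at its default path, something like this pulls the reports out:

        # default Unraid syslog path; -B/-A show context around each hit
        grep -i -B5 -A30 "oom-killer" /var/log/syslog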
  7. Hey, see the attachment for the syslog: syslog-127.0.0.1.log. I've had the syslog on for a while, so it should contain all the crashes (if it registers them). Edit: I just noticed that it hadn't updated the syslog for a very long time... I disabled and re-enabled it, and I also enabled remote syslog, pointing it at my other Unraid server (it wrote something, so it should work). I'll wait for a crash now, I guess...
  8. I've been thinking: could I get a kernel panic if the hardware is defective? For example, if a RAM stick or the CPU is bad?
  9. The kernel panic happened again today; I restarted the server around 18:35. Could it maybe be a plugin? Would it help to delete every plugin and Docker container and start over? Is there a way to keep my data and cache pool when making a new Unraid USB? unraidserver-diagnostics-20220107-1946.zip
  10. Unfortunately I still have this problem, even after updating to 6.10-rc2 and changing to ipvlan. However, my whole network doesn't go down with it anymore, so that's an added bonus. I have a second Unraid server; is there something I can do with it to diagnose this problem further?
  11. Hey, don't worry about it! I'm just glad there is someone who replies, unlike on other forums. I'd rather not update to an RC build, but if there is no other choice I will have to. The server hasn't crashed since the last one; I will post an update the next time it happens. Thank you for the tip, and hopefully this will be my last message in this topic.
  12. Has anyone maybe had the same problem recently?
  13. Hi all, the past few days something very weird has been happening. This is the second time I've found my network down: somehow, when the server has a kernel panic, it takes the whole network down with it (maybe because of the link aggregation protocol?). A screenshot is attached, and the logs are in the attachments; I added one from before I started the array and one from after, because I'm not sure whether that matters. Does anyone know what it might be? The previous one happened over the weekend, and this one today at around 14:30 CET. unraidserver-diagnostics-20211129-1804.zip unraidserver-diagnostics-20211129-1811.zip
  14. I'm on Unraid version 6.9.2 now and this still keeps happening to me. I increased my log size to 1 GB, but it just keeps filling up, and since I don't want to keep restarting I just delete the old syslogs and nginx logs (both filled with the same errors):

      Jul 3 04:45:04 unraidserver nginx: 2021/07/03 04:45:04 [error] 3377#3377: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
      Jul 3 04:45:04 unraidserver nginx: 2021/07/03 04:45:04 [error] 3377#3377: *1125090 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
      Jul 3 04:45:04 unraidserver nginx: 2021/07/03 04:45:04 [crit] 3377#3377: ngx_slab_alloc() failed: no memory
      Jul 3 04:45:04 unraidserver nginx: 2021/07/03 04:45:04 [error] 3377#3377: shpool alloc failed

      I do keep multiple Chrome tabs with Unraid open for a very long time, but this was never a problem before; it started in December, fixed itself, and now it has come back? unraidserver-diagnostics-20210703-1357.zip
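      In the meantime, a way to clear the filled log partition without rebooting (paths are the Unraid defaults; truncating instead of deleting keeps nginx's open file handles valid, so it keeps logging):

         # see what is eating the space, then zero the biggest offenders
         du -sh /var/log/* 2>/dev/null | sort -h | tail
         truncate -s 0 /var/log/syslog
         truncate -s 0 /var/log/nginx/error.log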
  15. Ah, thank you! Do you perhaps know how I can check in Unraid which DIMM this is? "DIMM#0" doesn't tell me much, because on my motherboard they are labeled DIMM_A1, DIMM_B1, etc. Or should I just run MemTest86 (does that work with ECC RAM?)
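      In case it helps anyone else: dmidecode can map the kernel's DIMM numbering to the board's silkscreen labels; the "Locator" field should show the DIMM_A1/DIMM_B1 style names:

         # type 17 = memory devices; Locator ties each entry to a physical slot
         dmidecode -t memory | grep -E "Locator|Size|Serial Number"

      And as far as I can tell, MemTest86 runs fine on ECC systems; it was only the slot numbering I was unsure about.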
  16. Hey guys, I have the Fix Common Problems plugin installed, and it threw up an error and told me to post it here. The error I got: "Your server has detected hardware errors. You should install mcelog via the NerdPack plugin, post your diagnostics and ask for assistance on the unRaid forums. The output of mcelog (if installed) has been logged." The diagnostics are in the attachments. Thank you in advance! unraidserver-diagnostics-20210428-2250.zip
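      For reference, once mcelog is installed from NerdPack, the decoded events should already be in the log; something like this pulls them out (assuming the daemon is running and the syslog is at its default path):

         # query the running mcelog daemon, then check what was logged
         mcelog --client
         grep -i "machine check" /var/log/syslog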
  17. Hey guys, I have a very weird problem. When I set up a Docker app, for example mariadb, on br0.15 with an IPv4 and an IPv6 address, I cannot reach the IPv4 address from another VLAN, while I can with the IPv6 address. My situation is like this:
      - Untagged VLAN 10, tagged VLANs 15 and 20
      - Unraid is on VLAN 10 and has the extra VLANs (15, 20) added in the network interface settings
      - The Docker container gets a static IP, assigned by me, on the br0.15 (VLAN 15) interface
      - The client is on VLAN 10 and can ping Unraid + the router on IPv4 on VLANs 10, 15 and 20
      - The client can ping Docker containers on VLAN 10
      - The client can NOT ping Docker apps on VLAN 15 with IPv4
      - The client CAN ping Docker apps on VLAN 15 with IPv6, and also other clients/Unraid/the router with IPv4
      I just checked, but I can't ping IPv4 addresses from the Docker console either; I can't even ping 8.8.8.8, so it looks like something is going wrong in there. Does anyone know what might be going on?
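      Since even 8.8.8.8 is unreachable from inside the container, the first thing worth checking is what IPv4 gateway the container actually got. A minimal check (the container name is from my example above, the gateway IP is a placeholder, and this assumes iproute2 exists in the image):

         # the default route should point at the VLAN 15 gateway
         docker exec mariadb ip addr show
         docker exec mariadb ip route
         docker exec mariadb ping -c 4 <vlan15-gateway-ip>

      If the default route is missing or points somewhere else, the br0.15 setup is the place to look.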
  18. I have rebooted a few times, but that hasn't helped; the main VLAN has no problems. Would it help if I dedicated a NIC to that VLAN?
  19. Hi all, I have a very weird problem... after I removed some drives and used the "new config" option, my VMs seem to lose their internet and LAN connection every few seconds. When I ping the server I get bursts like this:

      Reply from 172.254.254.12: bytes=32 time=1ms TTL=63
      Reply from 172.254.254.12: bytes=32 time<1ms TTL=63
      Reply from 172.254.254.12: bytes=32 time<1ms TTL=63
      Reply from 172.254.254.12: bytes=32 time<1ms TTL=63
      Reply from 172.254.254.12: bytes=32 time<1ms TTL=63
      Request timed out.
      Request timed out.
      Request timed out.
      Request timed out.
      Request timed out.
      Reply from 172.254.254.12: bytes=32 time<1ms TTL=63

      ...and so on: long runs of normal replies interrupted by groups of two to five timeouts. Does anyone know what the problem might be? unraidserver-diagnostics-20200617-2100.zip
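      This pattern (runs of clean replies broken by short bursts of timeouts) can mean two machines are claiming the same IP and the ARP entry keeps flipping. From the Windows client that produced the pings above, one way to check is to note the MAC before and after a timeout burst; if it changes, two hosts share the address:

         arp -a | findstr 172.254.254.12
         ping -n 20 172.254.254.12
         arp -a | findstr 172.254.254.12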
  20. Hey, I have a quick question. When I started up my array I got a warning:

      unraidserver: Warning [UNRAIDSERVER] - Cache pool BTRFS missing device(s) KINGSTON_SH103S3480G_50026B724B05B2DE (sdi)

      But when I look at the cache pool, it looks fine, and the output of "btrfs fi show" is:

      Label: none  uuid: 0130b325-8ce1-4ceb-a715-467c12ccc4eb
          Total devices 4  FS bytes used 252.93GiB
          devid 2 size 447.12GiB used 148.03GiB path /dev/mapper/sdj1
          devid 3 size 465.75GiB used 167.03GiB path /dev/mapper/sdh1
          devid 6 size 447.12GiB used 149.00GiB path /dev/mapper/sdg1
          devid 7 size 447.12GiB used 148.00GiB path /dev/mapper/sdi1

      Label: none  uuid: 666f8abb-816d-4947-bcf4-58e359c735d0
          Total devices 1  FS bytes used 8.12GiB
          devid 1 size 50.00GiB used 11.52GiB path /dev/loop2

      Should I be worried? I can't restart the server at the moment because it is doing a rebuild and parity check. The server did restart 4 or 5 times because I was diagnosing multiple disks being offline (it is fine now). It also gives me a warning on the Shares page, "Some or all files are unprotected", even for the shares without a cache drive. Or is this because of the rebuild? I don't remember seeing it before...
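      To see whether btrfs itself recorded a dropped device, the per-device error counters are more telling than the one-off warning (assuming the pool is mounted at the default /mnt/cache):

         # non-zero write/flush errors mean a device really did drop at some point
         btrfs device stats /mnt/cache
         # would list "Some devices missing" if one were gone right now
         btrfs filesystem show /mnt/cache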
  21. I tried this out without DHCP and somehow it still didn't work. I set it up with a static IP, and there was no link-local address, but I still couldn't use custom interfaces.
  22. Hey, I forgot about this, but I recently managed to fix the high CPU usage with IPv6, so I decided to try this again. I wanted to open a ticket on the Ubiquiti forums, but considering I made a few tickets there and all of them took months to get a proper reply, I decided to ask here again; I hope you guys can help me. I tried adding the host address and SLAAC service with a command first, to test it:

      set interfaces ethernet eth0 vif 128 pppoe 2 dhcpv6-pd pd 0 interface eth1.50 host-address '::1'
      set interfaces ethernet eth0 vif 128 pppoe 2 dhcpv6-pd pd 0 interface eth1.50 service slaac

      Also, I'm not sure if this matters, but I have set the prefix-only option on dhcpv6-pd (to fix the high CPU usage). The Unraid server is connected with VLAN 50 native, and the others, including 1, are tagged. But I still get a link-local address as the gateway, even when I have no internet. In Unraid I have set up a static IP with the gateway (using the 2a02 address) with metric 1; however, in the routing table the fe80 address keeps coming back even after I delete it, and in the Docker settings it keeps using that as the gateway address. Does anyone know what I might be doing wrong? I also have 2 NICs (which are bonded); is it better if I just use one for IPv4 and one for IPv6?
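      For what it's worth, a link-local fe80:: gateway is normal for IPv6: SLAAC routes come from router advertisements, and an RA always carries the router's link-local address, so the kernel reinstalling it isn't by itself a fault. On the Unraid side, this shows what was actually learned (the interface name is an example; yours may be bond0 or br0):

         # check the learned default route and the addresses on the bridge
         ip -6 route show default
         ip -6 addr show br0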