Inenting

Members
  • Posts: 59
  • Joined
  • Last visited

About Inenting

  • Birthday: January 19

  • Gender: Male
  • Location: Netherlands

Inenting's Achievements

  • Rank: Rookie (2/14)
  • Reputation: 1

  1. I'm still having this problem on 6.11.1. I sometimes have Unraid tabs open on a VM, but I can't check every time whether I've closed them all. We need a real solution, because the auto-reload unfortunately doesn't seem to work right. zeus-diagnostics-20221022-1430.zip
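     In the meantime, a workaround that avoids a full reboot is to restart the web UI's nginx so its nchan shared memory is released, and to truncate whatever logs have filled up. A minimal sketch, assuming Unraid's stock rc script and log paths:

         # restart the web UI's nginx to reset its nchan shared-memory zone
         /etc/rc.d/rc.nginx restart
         # truncate (rather than delete) the logs that filled /var/log
         truncate -s 0 /var/log/syslog /var/log/nginx/error.log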
  2. I can't seem to get a Tdarr node on another Unraid server to work; I keep getting the following errors:

     2022-04-03T19:42:01.799Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[Step W03] [C2] Analyse file
     2022-04-03T19:42:01.800Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:New cache file has already been scanned, no need to scan again
     2022-04-03T19:42:01.800Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Updating Node relay: Processing
     2022-04-03T19:42:01.800Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[1/2] Checking file frame count
     2022-04-03T19:42:01.801Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[2/2] Frame count 0
     2022-04-03T19:42:01.801Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Transcode task, determining transcode settings
     2022-04-03T19:42:01.801Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Plugin stack selected
     2022-04-03T19:42:01.802Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Plugin: Tdarr_Plugin_00td_action_re_order_all_streams_v2
     2022-04-03T19:42:01.802Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[1/5] Reading plugin
     2022-04-03T19:42:01.803Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[2/5] Plugin read
     2022-04-03T19:42:01.803Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[3/5] Installing dependencies
     2022-04-03T19:42:01.803Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:[4/5] Running plugin
     2022-04-03T19:42:01.804Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Error TypeError: Cannot read property 'forEach' of undefined
     2022-04-03T19:42:01.804Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Worker config: {
     2022-04-03T19:42:01.804Z   "processFile": false,
     2022-04-03T19:42:01.804Z   "preset": "",
     2022-04-03T19:42:01.804Z   "container": "",
     2022-04-03T19:42:01.804Z   "handbrakeMode": "",
     2022-04-03T19:42:01.804Z   "ffmpegMode": "",
     2022-04-03T19:42:01.804Z   "error": true
     2022-04-03T19:42:01.804Z }
     2022-04-03T19:42:01.804Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Worker log:
     2022-04-03T19:42:01.804Z Pre-processing - Re-order all streams V2☒Plugin error! TypeError: Cannot read property 'forEach' of undefined
     2022-04-03T19:42:01.804Z
     2022-04-03T19:42:01.805Z VXk3kw6uhVs:Node[Zeus-Quadro-P620]:Worker[S4bMIcoD4]:Worker config [-error-]:

     I also uploaded the log. Both servers run Unraid, and I mapped the Plex and Temp shares. I tried both NFS and SMB, and the shares are set to Export: Yes and Public; I don't see what could be wrong. VXk3kw6uhVs-log.txt C9Hx0dw7N-log.txt
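     The "[2/2] Frame count 0" line right before the plugin's TypeError suggests ffprobe returned no stream data, which is what you'd expect if the node can't actually read the file over the mounted share. A quick sanity check from the node's host; the container name tdarr_node and the mount point /mnt/media are assumptions, so substitute your own:

         # can the node container list the mapped share at all?
         docker exec tdarr_node ls -l /mnt/media
         # does ffprobe (assuming the node image bundles it) see any streams in a sample file?
         docker exec tdarr_node ffprobe -v error -show_entries stream=index,codec_type /mnt/media/sample.mkv

     If the second command prints nothing, the plugin would see an undefined streams array, which would be consistent with the forEach failure in the log.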
  3. I've got Grafana installed with Prometheus, but I can't see anything out of the ordinary. Does anyone have an explanation for this?
  4. I woke up today the moment I got the warning and checked the dashboard, but the RAM was still fine (80% unused). I also checked my server health event logs via IPMI, but there is nothing there (nothing recent or RAM-related), so it's probably not a hardware fault either, right?
  5. I haven't run Tdarr nodes for a few months now. At 12 AM there is an SSD TRIM, and the mover runs every 3 hours; everything major should start after 2 AM. But if it happens at 11 PM, maybe Plex is the problem? I think I am using RAM as storage for Plex transcoding:

     --runtime=nvidia --device=/dev/dri --mount type=tmpfs,destination=/tmp,tmpfs-size=20000000000 --no-healthcheck --restart unless-stopped --log-opt max-size=50m

     But the error happens every day, and my watch time and the error time don't match up.
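     Worth noting: a tmpfs mount is backed by RAM, so with tmpfs-size=20000000000 up to roughly 20 GB of transcode data counts against system memory, and a busy transcode could plausibly push the box toward an OOM. A quick way to see how full it actually gets during playback; the container name plex is an assumption:

         # check tmpfs usage inside the Plex container while a transcode is running
         docker exec plex df -h /tmp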
  6. Hi guys, I've kept getting an "Out Of Memory errors detected on your server" warning every single day for the past week. The weird thing is that it happens at 4:46 AM every time; the only thing I could correlate it with is the following: automatic appdata backup (starts at 4 AM). I did have problems with a kernel panic crash, but that hasn't happened in a month or two. That was also the reason I upgraded to 6.10.0-rc2, but it didn't fix this. How can I check where the problem lies? zeus-diagnostics-20220306-1852.zip syslog-10.50.0.254.log
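     When the OOM killer fires, the kernel logs which process it killed and why, so the syslog around 4:46 AM should name the culprit. A minimal check, assuming the standard Unraid log location:

         # show the OOM events and which process the kernel killed
         grep -iA2 "out of memory" /var/log/syslog
         grep -i "killed process" /var/log/syslog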
  7. Hey, see the attachment for the syslog: syslog-127.0.0.1.log. I've had the syslog server on for a while, so it should have all the crashes (if it registers them). Edit: I just noticed that it hadn't updated the syslog for a very long time... I disabled and re-enabled it, but I also enabled remote syslog and pointed it at my other Unraid server (it wrote something, so it should work). I'll wait for a crash now, I guess...
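     One way to confirm the remote syslog target is really receiving entries is to send a test line by hand. A sketch, assuming the second server listens on the default UDP port 514 (the 10.50.0.254 address is taken from the earlier attachment name, so substitute your own):

         # send a test message to the remote syslog server, then check it shows up there
         logger -n 10.50.0.254 -P 514 "remote syslog test from Zeus"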
  8. I've been thinking: could I get a kernel panic if the hardware is defective? Like if a RAM stick or the CPU is bad?
  9. The kernel panic happened again today; I restarted the server around 18:35. Could it maybe be a plugin? Would it help to delete every plugin and Docker container and start over? Is there a way to keep my data and cache pool when making a new Unraid USB? unraidserver-diagnostics-20220107-1946.zip
  10. Unfortunately I still have this problem even after updating to 6.10-rc2 and changing to ipvlan. However, my whole network doesn't go down with it anymore, so that's an added bonus. I have a second Unraid server; is there something I can do with that to diagnose this problem further?
  11. Hey, don't worry about it! I'm just glad that there is someone who replies, unlike on other forums. I'd rather not update to an RC build, but if there is no choice I will have to do it. The server hasn't crashed since the last one. I will update the next time it happens. Thank you for the tip, and hopefully this will be my last message in this topic.
  12. Has anyone had the same problem recently?
  13. Hi all, the past few days something very weird has been happening; this is the second time I've found my network down. Somehow, when the server has a kernel panic, it takes the whole network down with it (maybe because of the link aggregation protocol?). This is the screenshot: And the logs are in the attachments; I added one from before I started the array and one from after, because I'm not sure whether that matters. Does anyone know what it might be? The last one happened over the weekend, and now it's today at around 14:30 CET. unraidserver-diagnostics-20211129-1804.zip unraidserver-diagnostics-20211129-1811.zip
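     If the bond is suspected, the kernel exposes the live bonding state, which at least shows the mode (e.g. 802.3ad/LACP) and whether any links flapped before the panic. The interface name bond0 is an assumption, though it is Unraid's default:

         # inspect bonding mode and per-slave link status
         cat /proc/net/bonding/bond0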
  14. I'm on Unraid version 6.9.2 now and this still keeps happening to me. I increased my log size to 1 GB, but it just keeps filling up, and since I don't want to keep restarting, I just delete the old syslogs and nginx logs (both filled with the same errors):

      Jul 3 04:45:04 unraidserver nginx: 2021/07/03 04:45:04 [error] 3377#3377: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
      Jul 3 04:45:04 unraidserver nginx: 2021/07/03 04:45:04 [error] 3377#3377: *1125090 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
      Jul 3 04:45:04 unraidserver nginx: 2021/07/03 04:45:04 [crit] 3377#3377: ngx_slab_alloc() failed: no memory
      Jul 3 04:45:04 unraidserver nginx: 2021/07/03 04:45:04 [error] 3377#3377: shpool alloc failed

      I do keep multiple Chrome tabs with Unraid open for a very long time, but this was never a problem before. It just started in December, fixed itself, and now it has come back? unraidserver-diagnostics-20210703-1357.zip
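      The error text itself points at nchan, the nginx module the Unraid web UI uses to push live updates (such as the /disks channel) to every open tab, so long-lived browser tabs each hold a subscription against that shared-memory zone. A sketch of where to look, assuming the stock config path; the 64m value is purely illustrative, and hand edits to this file do not survive a reboot on Unraid:

          # see how nchan is currently configured by the web UI's nginx
          grep -rn nchan /etc/nginx/
          # the log message asks for a larger reservation, i.e. a directive like this in the http block:
          #   nchan_max_reserved_memory 64m;
          # then restart nginx so the new zone size takes effect
          /etc/rc.d/rc.nginx restart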