Tristankin

Everything posted by Tristankin

  1. Thought I would give 6.12.6 a go, as I am starting to worry about the age and vulnerability of some of the containers I have to hold back to keep compatibility with the ancient version of Docker on 6.8.3. I got about 7 hours before the machine froze up. 6.8.3 has not frozen on me once, including the latest stint from April till now. I added "options i915 enable_dc=0" to the i915.conf file. I do not have Intel GPU Top installed. cat /sys/module/i915/parameters/enable_dc returns 0. Should I be looking at disabling the power well? Attached are the latest syslog and diagnostics. syslog-previous firefly-diagnostics-20231222-2218.zip
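On the power-well question: i915 exposes a disable_power_well parameter alongside enable_dc. A minimal sketch of what the i915.conf could look like, assuming Unraid loads module options from /boot/config/modprobe.d/ as in the post above — the parameter names are real i915 options, but whether disable_power_well=0 helps with these freezes is an untested guess, not a confirmed fix:

```shell
# /boot/config/modprobe.d/i915.conf  (path per Unraid convention)
# enable_dc=0          -- disable display C-states (already applied per the post)
# disable_power_well=0 -- keep display power wells always on (untested assumption)
options i915 enable_dc=0 disable_power_well=0
```

After a reboot, cat /sys/module/i915/parameters/disable_power_well should confirm the value took effect.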
  2. Try blacklisting the GPU and removing Intel GPU Top. You may be experiencing the following issue.
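A sketch of what the blacklisting could look like, assuming Unraid's /boot/config/modprobe.d/ mechanism (the file name is illustrative, not a required one):

```shell
# /boot/config/modprobe.d/i915-blacklist.conf  (illustrative name/path)
# Prevents the i915 driver from binding to the iGPU at boot.
blacklist i915
```

Remove the Intel GPU Top plugin separately from the Plugins page, then reboot so the blacklist takes effect.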
  3. I'm running 9th gen, so it probably won't help. That, and I have given up trying to fix the issue and am sticking with 6.8.3 for as long as I can keep my Docker containers going. I tried for 4 months of crashes (so far this year, plus many more months on 6.9.x) while being told it's my hardware, so I will have to budget for an upgrade of the platform some time in the future. (Funny how many Unraid systems are apparently running on bad hardware.)
  4. VT-d did work for a bit; it definitely decreased the number of freezes but did not stop them outright. Now that I am back on 6.8.3 I have had 3 weeks of uptime without a single crash. This is a repeat of what happened when the upgrade to 6.9.2 was causing the same crashes and I went back to 6.8.3 for 6 months, again without a single crash. The issue now is that the Docker version on 6.8.3 is so old that I am getting incompatibilities with containers, forcing me to go with different versions or roll back to ones ~6 months old (specifically the *arrs). If someone could repackage 6.8.3 with an updated version of Docker I am sure I could get a few more years out of the system (maybe call it 6.8.4?). So it seems I now have a choice between stability and security/new features. One possible difference is the IOMMU mode: for some time now Unraid has used pass-through mode, whereas before it used DMA translation mode; IIRC Ubuntu still uses that one. Is there anything I can do to test this? I imagine this is a kernel-level adjustment?
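On the IOMMU question: the kernel does expose this choice as a boot parameter, so it can be tested without a rebuild. A hedged sketch, assuming the stock Unraid syslinux.cfg layout — iommu.passthrough is a real kernel command-line option, but whether it has any bearing on these freezes is exactly the open question:

```shell
# In /boot/syslinux/syslinux.cfg, extend the kernel "append" line, e.g.:
#   append iommu.passthrough=0 initrd=/bzroot
# iommu.passthrough=0 selects DMA translation mode; 1 selects pass-through.
# After reboot, check which default domain type the kernel actually picked:
dmesg | grep -i iommu
```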
  5. I have moved to a completely different client and rolled back to the 4.3.9 version as listed here: https://forums.unraid.net/bug-reports/stable-releases/crashes-since-updating-to-v611x-for-qbittorrent-and-deluge-users-r2153/page/8/#comments They seem to get more in their logs though....
  6. I have tried 6.9.x, 6.10.x, and 6.11.x with the same result. If something was fixed I would have thought it would have happened across those versions. I have just changed from deluge to binhex/arch-qbittorrentvpn:4.3.9-2-01 after seeing the freeze reports under stable releases. I have been trying to find a solution to this issue for the past 2 years. None of this is particularly fun.
  7. Good for them, I guess. What about the people in this thread who are reporting the same issue? So my options now are: run out-of-date software, or replace my hardware.
  8. Would it be possible to run a 4.19 kernel on the latest build? It could help identify the cause.
  9. The system is 100% stable on 6.8.3: 6 months when it was the latest version, and then another 6 months after trying to upgrade to 6.9.x and downgrading to 6.8.3 again. So unless it is a hardware fault that only occurs with a 5.x kernel, then no.
  10. Does it have to be written to the log before tail can work? I'm getting nothing in the syslog on flash. Do you think the log in RAM isn't being written to flash quickly enough?
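If the flash copy does lag the RAM copy, comparing the tails of the two files shows how far behind it is. A sketch, assuming Unraid's "mirror syslog to flash" option writes under /boot/logs — the paths are assumptions, so check against your syslog settings:

```shell
tail -n 20 /var/log/syslog       # live log held in RAM
tail -n 20 /boot/logs/syslog     # mirrored copy on the flash drive
# If the flash copy ends well before the freeze, the mirror interval
# is too coarse to capture the final messages before the hang.
```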
  11. Turns out I didn't have to wait too long. Nothing. No response to input devices, and the reset button doesn't work; a long press of the power button is the only thing that shuts it down. root@Firefly:~# cat /sys/class/graphics/*/modes returns U:1920x1080p-0, and cat /sys/class/graphics/*/virtual_size returns 1920,1080. I checked the resolution on the box after rebooting and putting the dummy plug back in, so I assume this means the dummy plug is fine. Anything else I should be checking?
  12. Only the single physical NIC, plus 2 x Docker bridges, one for the swag reverse proxy. You are right, I will have to give this a go. I am just wary of the power usage of having a monitor on 24/7.
  13. And again today:
      Apr 1 08:33:04 Firefly crond[1067]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
      Apr 1 10:33:38 Firefly emhttpd: spinning down /dev/sde
      Apr 1 10:33:45 Firefly emhttpd: spinning down /dev/sdb
      Apr 1 10:35:39 Firefly emhttpd: spinning down /dev/sdh
      Apr 1 10:35:39 Firefly emhttpd: spinning down /dev/sdg
      Apr 1 10:35:39 Firefly emhttpd: spinning down /dev/sdc
      Apr 1 11:42:40 Firefly emhttpd: read SMART /dev/sdh
      Apr 1 11:42:40 Firefly emhttpd: read SMART /dev/sdc
      Apr 1 13:26:28 Firefly emhttpd: spinning down /dev/sdf
      Apr 1 13:41:59 Firefly emhttpd: spinning down /dev/sdj
      Apr 1 13:52:28 Firefly emhttpd: read SMART /dev/sde
      Apr 1 14:09:10 Firefly emhttpd: read SMART /dev/sdf
      Apr 1 15:37:59 Firefly emhttpd: spinning down /dev/sdh
      Apr 1 15:37:59 Firefly emhttpd: spinning down /dev/sdc
      Apr 1 16:25:51 Firefly kernel: microcode: microcode updated early to revision 0xf0, date = 2021-11-12
      Apr 1 16:25:51 Firefly kernel: Linux version 5.19.17-Unraid (root@Develop) (gcc (GCC) 12.2.0, GNU ld version 2.39-slack151) #2 SMP PREEMPT_DYNAMIC Wed Nov 2 11:54:15 PDT 2022
      Apr 1 16:25:51 Firefly kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot
      Attached diagnostics just in case. I am really starting to run out of ideas... firefly-diagnostics-20230401-1628.zip
  14. Not sure I am out of the woods yet; I got another freeze after 15 days of uptime. I have just turned on syslog to flash, so I will see if anything gets recorded, but I doubt it.
  15. That is a weird one; my system will always respond to a long power button push to force a power-off. I am up to 9 days of uptime now since I reinstalled Plex fresh and turned off HDR tone mapping. This has happened in the past: last time, after changing the mount point and turning off VT-d, it took about a month before it started freezing again. I am going to wait for 2 months and then try turning features back on to see if I can home in on the issue with my particular system.
  16. It could also be turning off HDR tone mapping, turning off VT-d, mounting directly to the cache drive, or a combination of factors. 0 issues with this install on 6.8.3 across a year and a half, with 0 crashes; it fails with anything on a 5.x kernel. I am pretty sure the install was fine, but I'm continuing to try. It would be interesting if it is just the tone mapping, but I can't be 100% sure until I know I have a stable system again and can start testing those features.
  17. linuxserver puts the PMS install in the following location, whereas binhex had it in the top-level directory. It is what it is; not the end of the world, and 100% less annoying than a server that needs restarting twice a day.
  18. I tried changing the repository to keep the config on the same container. The official one would not boot, and linuxserver required me to set up from scratch anyway. Lost all my play history, but meh. It hasn't crashed yet, though.
  19. Well, I was good for over a month, and then the system started freezing again every 12 hours or so. I have migrated from binhex-plex to linuxserver/plex. I have also turned off HDR tone mapping. Fingers crossed I can return to stability again.
  20. Alright, just to make things 100% clear, here is what I have done to my machine, now with 21 days of uptime:
      • All C-states turned off
      • RAM @ 2100 MHz
      • VT-d turned off
      • All unnecessary peripherals disabled, including serial and parallel ports
      • iGPU as first GPU
      • binhex-plex with transcode directory set to /dev/shm on the host, which translates to /transcode in the container
      • config directory pointed directly to cache, and appdata share set to cache only
      • MacVLAN -> IPVLAN
      • Everything commented out in the go file
      • Intel GPU Top installed
      I hope this helps anyone else having problems.
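On the /dev/shm transcode point: it works because /dev/shm is a RAM-backed tmpfs on stock Linux, so transcode writes never touch the cache SSD. A quick generic sanity check sketch — the /transcode mapping itself is done in the binhex-plex container template per the list above, so only the host side is shown here:

```shell
# /dev/shm is tmpfs on any stock Linux system; confirm before
# pointing a transcode directory at it.
grep ' /dev/shm ' /proc/mounts
# Files created here live in RAM and vanish on reboot:
mkdir -p /dev/shm/transcode-probe
echo ok > /dev/shm/transcode-probe/file
cat /dev/shm/transcode-probe/file
rm -r /dev/shm/transcode-probe
```

The practical upside is no SSD wear from transcoding and fast scratch I/O; the trade-off is that transcodes compete with everything else for RAM.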
  21. Sorry, you are right, I meant "only". Cache set to "only". My bad, I will fix my other post too.