phyzical

Community Answers

  1. Hasn't crashed. If you think you're running into this, try all the generic stuff first. Changing the FS from btrfs to xfs/zfs seemed to reduce the issues, as the cpu panics would avoid full system lockups, but it was not the root cause. It looks like my new cpu just kept panicking due to default power restrictions, even without any cpu boosting on... it's just a greedy mofo.
  2. Another small update: it started crashing once an hour under heavy cpu-intensive workloads, we're talking 95-100c for 3 mins straight, which made me think I might have a cooling issue. Turns out the i7-14700K just runs really hot. Anyway, that led me to this intel thread https://community.intel.com/t5/Processors/Unstable-i7-14700k/m-p/1569028 After applying method 2, which is:
```
Method 2
Access BIOS
select "Tweaker"
select "Advanced Voltage Settings"
select "CPU/VRAM Settings"
adjust "CPU Vcore Loadline Calibration"
recommend starting from "Low" to "Medium" until system is stable.
```
the same intense task ran for 12 hours straight without a crash... so my issues may have just been the cpu freaking out about not getting enough oomph due to a shitty mobo default. Will post back and close if it stays up for a week 🤞
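Not from the intel thread, but a quick way to eyeball whether those 95-100c spikes line up with the crashes is to poll the kernel's thermal zones. A rough sketch of mine; the `/sys/class/thermal` paths are standard Linux, but zone numbering and which zone maps to the cpu package varies per board:

```shell
#!/bin/sh
# Rough temp watcher (my own sketch): print each thermal zone in degrees C.
# The kernel reports temperatures in millidegrees under /sys/class/thermal.
millic_to_c() {
  echo $(( $1 / 1000 ))
}

for zone in /sys/class/thermal/thermal_zone*/temp; do
  [ -r "$zone" ] || continue
  echo "$zone: $(millic_to_c "$(cat "$zone")")C"
done
```

Run it in a `watch` loop while the transcode is going and see if a zone pins near 100 right before a panic.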
  3. Yeah thanks, I was about to do exactly this. I also noticed the logs I posted were missing, I think due to the timing around the syslog server starting up and what ends up in the saved file. syslog.txt attached is the same log but with the full startup of stuff in case it leads to anything else.
  4. Sure, nothing I can see that's different to the others, but attached: Explore-logs-2024-03-27 10 57 31.txt. The other reason I am suspicious of this directory is it could have to do with the startup loop issue, as that panic is right after jellyfin starts up and jellyfin has this in its logs
  5. Ah, actually spoke too soon... stopped the array juuust in case, and again the system locked up and took 3 restarts to come back on, so I'm kinda leaning towards it being bad data in the array causing issues? I have since noticed this, and if I use tab completion it comes up as pokmon but apparently doesn't exist? I'm wondering if I just have some bad blocks in the array? But I'm not sure how to remove these as they technically don't exist? Has anyone experienced this before?
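For what it's worth, when a directory entry's name won't round-trip through the shell, the usual trick is to delete it by inode instead of by name. A self-contained sketch (the demo filename is made up, the `ls -i` + `find -inum` pattern is standard coreutils/findutils):

```shell
#!/bin/sh
# Sketch: remove a file whose name is garbled and undeletable by name.
# Demo in a temp dir: create a stand-in file with a non-UTF-8 byte in its
# name, look up its inode with ls -i, then delete by inode via find -inum.
dir=$(mktemp -d)
: > "$dir/pok$(printf '\351')mon"      # hypothetical stand-in for the broken entry

ls -i "$dir"                           # shows the inode number next to the name
inum=$(ls -i "$dir" | awk '{print $1; exit}')

find "$dir" -inum "$inum" -delete      # delete by inode, bypassing the bad name
ls -A "$dir"                           # prints nothing: directory is empty again
rm -r "$dir"
```

If the entry still misbehaves after that, it's worth a filesystem check/scrub on that disk, since a name that tab-completes but "doesn't exist" can also mean directory corruption rather than a stray file.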
  6. Okay, has now been stable for almost a week, which I have not had since getting the new hardware. Again, the true cause of the panics I haven't found, but if you feel like you're experiencing something similar, the following should hopefully keep it chugging along:
* make sure your cache drive is not btrfs
* make sure your docker image is not btrfs
* make sure you're not using macvlan
* run memtest just in case
In my case I'm 99% sure I'm avoiding the lockups just by moving off btrfs.
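The first three checks above can be scripted. A rough sketch, assuming typical unraid paths (`/mnt/cache` for the cache pool, `/var/lib/docker` for docker's storage; adjust to your setup):

```shell
#!/bin/sh
# Sketch of the checklist: report the filesystem type backing the cache and
# docker storage, then look for any macvlan docker networks.
fs_of() {
  # stat -f prints filesystem status; %T is the fs type name (GNU coreutils)
  stat -f -c %T "$1" 2>/dev/null || echo unknown
}

for path in /mnt/cache /var/lib/docker; do
  echo "$path is on: $(fs_of "$path")"     # flag anything reporting btrfs
done

# macvlan check: list docker networks with their drivers (needs docker running)
docker network ls --format '{{.Name}} {{.Driver}}' 2>/dev/null | grep -w macvlan \
  && echo "macvlan network found - consider switching off it" \
  || echo "no macvlan networks (or docker not running)"
```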
  7. Small update: ran memtests for both sticks, all passed after 4 hours; ran it with 1 stick, all passed in about 2.5 hours. Then unraid took 6 reboots with constant crashes as soon as docker started; manually disabled the docker service and it came alive again. One thing I did notice is that my docker img was actually still a btrfs img, so I figured screw it, I'll try the directory filesystem instead, so it's now using the zfs cache disk instead of a btrfs docker img file. As soon as I started installing my images I noticed similar segfaults occurring, but it all came up fine and has been running for 2 days now. I have noticed more segfaults, but instead of eventually killing/locking up the entire docker process it's letting the containers that are crashing gracefully release the cpu threads? I'll post back next week if I don't need to intervene this time. So far at least this avoids the problem. Why I'm getting segfaults, idk, hardware issues? But if it keeps chugging along, what do I care 🤞
  8. Another small update: I changed my cache disk from btrfs to zfs, and now instead of the cpu panics crashing the entire unraid system, they seem to cause certain containers to fail. For example, the last crash I got seemed to kill jellyfin, spitting ffmpeg errors, but idk if it's container specific, as the entire docker process refuses to kill the container even with a docker rm. Then if I try to shut down the system it gets stuck after unmounting everything, saying "clean shutdown" "mounting /boot readonly"
  9. I'm not using either of those containers, but it is "isolated" to something docker adjacent. I'm more suspicious of heavy network + hard drive activity, i.e. I find it seems to occur when I'm transcoding via tdarr, but not always, like there's something fishy in a media file that causes a cpu panic? The weird part is the main server where tdarr and the shares live doesn't actually do any of the transcoding, but it does host a shared cache for the slave pcs to use, plus the media it will transcode. Though as files get changed it would incur load for the raid system and also any containers that are watching for file changes, i.e. jellyfin, sonarr, radarr. So this would incur lots of back n forth between 3 pcs, and would also cause load on multiple hard drives at once plus the cache drive. I also have two pci devices to support additional hard drives (maybe that plays into things). I wouldn't be surprised if there's just a lot going on at once and it gives up, though I had no issue on my old mobo until it was fried one morning (maybe my old mobo got fried by whatever is now crashing instead? 😆)
  10. Another crash, but there are about 4 in this one, and they don't even seem to be the same issue... so I am suspecting hardware more now... I updated the bios this morning just in case. Explore-logs-2024-03-15 09 06 18.txt
  11. Thanks for the suggestion, it is new ram and I have never memtested it, so I'll give that a go next time it crashes to rule it out. It's just like all the other threads like this, ofc, it's almost impossible to replicate; for example the server had been running for around a week and me spinning down the array is what locked it up. The even crazier part was I just kept getting served the boot screen, so I thought I had lost another usb within a week, but once I manually edited the config in a windows machine it started booting unraid again, though with crashes until I think the 3rd boot.
  12. Hey, I have been getting random crashes, and then usually it keeps crashing until I pull out the usb and manually edit the config to disable docker on boot; sometimes I will have to do this 2 or 3 times and then the system lets me turn the docker service back on. I also had 1 usb become corrupted (according to the unraid ui), I assume due to the hard shutdowns I have to perform when the entire system locks up. So I have been googling these issues, and the most common suggestions I see are:
* make sure you're not using macvlan (I'm not)
* a dodgy container - if this is the case, is there really no other way to test besides just not running containers for a while?
The other thing to note is this started occurring a lot more when I replaced my mobo and cpu (i7-14700K, Z790). Attached is a snippet of the panic when this lockup occurs; one thing I noticed was `kernel tried to execute NX-protected page - exploit attempt? (uid: 99)`, should I be worried? Also, I am being spammed by port reallocations and ipv6 reallocations for 1 port? Reading up on this, most people say it's normal and it'll just be 1 container causing pain, but is it normal to log this much non stop, we're talking 3 eth interface renames every second? (also attached) This ended up being a docker container stuck in a reboot loop, oops... I was getting an actual cpu panic a few weeks ago, and moving my appdata to an exclusive share seemed to fix that issue. Thanks! crash1.txt Explore-logs-2024-03-15 01 49 12.txt
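The "pull the usb and manually edit the config" step is a one-line edit; on unraid the docker service flag lives on the flash drive at `/boot/config/docker.cfg` (as far as I know - verify the path on your version). A sketch against a sample copy so the edit itself is easy to check:

```shell
#!/bin/sh
# Sketch: flip DOCKER_ENABLED to "no" so docker stays off on the next boot.
# The real file would be /boot/config/docker.cfg on the flash drive
# (assumption - check your unraid version); we demo against a temp copy.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DOCKER_ENABLED="yes"
DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker.img"
EOF

sed -i 's/^DOCKER_ENABLED=.*/DOCKER_ENABLED="no"/' "$cfg"
grep DOCKER_ENABLED "$cfg"   # now reads DOCKER_ENABLED="no"
rm "$cfg"
```

Same idea from a windows machine: open the file on the flash drive and change the `yes` to `no` by hand.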
  13. Just wanted to add I was having intermittent problems too, then one morning the docker service would just lock up the entire server with the cpu-stall related logs no matter how many times I rebooted. I had the syslog server writing logs to flash due to it becoming unresponsive probably 2 years ago and forgot; this eventually corrupted the usb (it was 6 years old) (well, unraid kept saying it was buggered). Transferred over to a new usb, blatted my docker img file and bam, the same issue started instantly occurring as soon as I started loading up all my existing container configs. Then I stumbled across this thread. Thanks @Dreytac, moving my appdata to exclusive has stopped the crashing.
  14. I was in the same boat as @Januszmirek, but it was going from 6.9.1 to 6.12.x. Took me a few hours to work out that having wireguard enabled broke the connectivity (webui access, ssh access and internet access). Disabling the WG tunnel fixes it for the time being and everything can talk again. I assume it relates to "remote access to lan" and all the new security stuff 6.12 introduced, but I just don't have the time to dig deeper atm. I don't have any crash issues, just network issues. edit: for me the issue ended up being the fact I was providing my local network's CIDR as an allowed ip; this used to work, now it seems to kill everything. Setting a csv of allowed local ips resolves the issue. The only issue left now is to make the vpn have access to my pihole dns for translation of container hostnames.
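To make the edit concrete: in a standard wireguard peer config it's the `AllowedIPs` line, swapping the whole-subnet CIDR for a comma-separated list of /32 hosts. The addresses below are made-up examples, not my actual lan:

```ini
[Peer]
PublicKey = <peer public key>
# before: whole-LAN CIDR, which killed connectivity for me on 6.12.x
# AllowedIPs = 192.168.1.0/24
# after: a csv of just the hosts the tunnel actually needs (example addresses)
AllowedIPs = 192.168.1.10/32, 192.168.1.20/32, 192.168.1.53/32
```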
  15. Thanks for that @L0rdRaiden, worked like a charm, no plugin needed.