phyzical

Posts posted by phyzical

  1. Still hasn't crashed.

     

    If you think you're running into this, try all the generic stuff first.

     

    Changing the FS from btrfs to xfs/zfs seemed to reduce the issues, as the CPU panics would no longer cause full system lockups, but it was not the root cause.


    It just looks like my new CPU kept panicking due to default power restrictions, even without any CPU boosting on... it's just a greedy mofo.

     

  2. Another small update,

    I started having it crash once an hour during heavy CPU-intensive workloads; we're talking 95-100°C for 3 minutes straight, which made me think I might have a cooling issue.

    Turns out the i7-14700K just runs realllly hot.

    Anyway, that then led me to this Intel thread: https://community.intel.com/t5/Processors/Unstable-i7-14700k/m-p/1569028

     

    After applying Method 2, which is:
    ```
    Method 2
    Access BIOS
    Select "Tweaker"
    Select "Advanced Voltage Settings"
    Select "CPU/VRAM Settings"
    Adjust "CPU Vcore Loadline Calibration"
    Recommend starting from "Low" to "Medium" until the system is stable.
    ```


    the same intense task ran for 12 hours straight without a crash... so my issues may have just been the CPU freaking out about not getting enough oomph due to a shitty mobo default.

    Will post back and close if it stays up for a week 🤞
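
    Side note for anyone wanting to sanity-check the same fix: hammer the CPU for a fixed window before and after the BIOS change and see whether the box survives. `stress-ng` is the proper tool if you have it; this fallback sketch only needs a POSIX shell plus `nproc`/`timeout` from coreutils (the `soak` function name is mine):

    ```shell
    # Hypothetical soak test: spin a busy loop on every core for a fixed time.
    # If the box makes it through the run (no panic/lockup), the tweak is holding.
    soak() {
      secs="${1:-60}"                      # how long to burn, in seconds
      n="$(nproc 2>/dev/null || echo 4)"   # one worker per core
      i=0
      while [ "$i" -lt "$n" ]; do
        timeout "$secs" sh -c 'while :; do :; done' &
        i=$((i + 1))
      done
      wait                                 # returns once every worker's timer expires
      echo "survived ${secs}s on ${n} workers"
    }
    ```

    Something like `soak 600` while watching temps (e.g. with `sensors`) is a reasonable repro of the 95-100°C load described above.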

  3. Ah, actually spoke too soon... stopped the array juuust in case, and again the system locked up and took 3 restarts to come back on, so I'm kinda leaning towards it being bad data in the array causing issues?

    I have since noticed this (screenshot attached):

     

    and if I use tab completion (screenshot attached)

    it comes up as pokmon but doesn't exist, apparently? I'm wondering if I just have some bad blocks in the array? But I'm not sure how to remove these, as they technically don't exist?

    Has anyone experienced this before?
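
    For what it's worth, one non-corruption explanation for a name that tab-completes but "doesn't exist" is a filename containing non-printable or non-ASCII bytes: completion inserts the real bytes, but typing "pokmon" by hand doesn't match them. A sketch to hunt those down (the function name is mine; point it at the folder from the screenshot):

    ```shell
    # List entries directly under a directory whose names contain bytes
    # outside printable ASCII -- a common cause of "ghost" tab completions.
    find_weird_names() {
      dir="${1:-.}"
      find "$dir" -maxdepth 1 -mindepth 1 -print | LC_ALL=C grep '[^ -~]' || true
    }
    ```

    If one shows up, you can delete it by inode rather than by name: `ls -i` to find the inode number, then `find <dir> -inum <N> -delete`.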

  4. Okay, it has now been stable for almost a week, which I have not had since getting the new hardware. Again, I haven't found the true cause of the panics.

    But if you feel like you're experiencing something similar, the following should hopefully keep it chugging along:

    * make sure your cache drive is not btrfs

    * make sure your docker image is not btrfs

    * make sure you're not using macvlan

    * run memtest just in case

     

    In my case I'm 99% sure I'm avoiding the lockups just by moving off btrfs.
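
    A rough way to check the first two items from a terminal; `stat -f -c %T` prints the filesystem type backing a path. `/mnt/cache` and `/var/lib/docker` are typical Unraid locations, so adjust if yours differ:

    ```shell
    # Print the filesystem type backing a path; "btrfs" means it's on the list above.
    fs_of() { stat -f -c %T "$1" 2>/dev/null || echo "unknown"; }

    fs_of /mnt/cache        # cache pool (want zfs/xfs here, not btrfs)
    fs_of /var/lib/docker   # docker image/directory backing store
    # macvlan check: any network listed here is a candidate to switch away from
    docker network ls --filter driver=macvlan 2>/dev/null || echo "docker not running"
    ```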

  5. Small update:

     

    Ran mem tests for both sticks, all passed after 4 hours; ran it with 1 stick, all passed in about 2.5 hours.

    Then unraid took 6 reboots with constant crashes as soon as docker started; manually disabling the docker service brought it alive again. One thing I did notice is that my docker img was actually still a btrfs img, so I figured screw it, I'll try the directory filesystem instead, so it's now using the zfs cache disk instead of a btrfs docker img file.

    As soon as I started installing my images I noticed similar seg faults occurring, but it all came up fine and has been running for 2 days now. I have noticed more seg faults, but instead of them eventually killing/locking up the entire docker process, it's letting the crashing containers gracefully release their CPU threads?

    I'll post back next week if I don't need to intervene this time. So far at least this avoids the problem. Why I'm getting segfaults, idk - hardware issues? But if it keeps chugging along, what do I care 🤞
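
    Since the segfaults keep appearing, it may help to count them between reboots instead of eyeballing the log. A small sketch (the function name is mine; the default syslog path is an assumption):

    ```shell
    # Count "segfault" lines in a log file, so recurrence can be compared
    # run-to-run. Default path is the usual syslog location (assumed).
    count_segfaults() {
      f="${1:-/var/log/syslog}"
      [ -r "$f" ] || { echo 0; return; }
      grep -ci 'segfault' "$f" || true
    }
    ```

    e.g. `dmesg > /tmp/k.log; count_segfaults /tmp/k.log` - a count that rises only under a specific workload points a finger at that workload.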

  6. Another small update,

    I changed my cache disk from btrfs to zfs, and now instead of the CPU panics crashing the entire unraid system, they instead seem to cause certain containers to fail. For example, the last crash seemed to kill jellyfin, spitting ffmpeg errors, but idk if it's container specific, as the entire docker process refuses to kill the container even with a docker rm.

    Then if I try to shut down the system, it gets stuck after unmounting everything, saying "clean shutdown" / "mounting /boot readonly".
     

  7. I'm not using either of those containers, but it is "isolated" to something docker adjacent.

    I'm more suspicious of heavy network + hard drive activity, i.e. I find it seems to occur when I'm transcoding via tdarr, but not always; like there's something fishy in a media file that causes a CPU panic?

    The weird part is that the main server where tdarr and the shares live doesn't actually do any of the transcoding, but it does host a shared cache for the slave PCs to use, plus the media it will transcode. Though as files get changed it would incur load on the raid system and also on any containers that are watching for file changes, i.e. jellyfin, sonarr, radarr.

    So this would incur lots of back and forth between 3 PCs; it would also cause load on multiple hard drives at once, plus the cache drive.

    I also have two PCI devices to support additional hard drives (maybe that plays into things).

    I wouldn't be surprised if there's just a lot going on at once and it gives up, though I had no issue on my old mobo until it was fried one morning (maybe my old mobo got fried by something that is now crashing instead? 😆)

  8. Thanks for the suggestion. It is new ram and I have never mem tested it, so I'll give that a go next time it crashes, to rule it out.

     

    It's just like all the other threads like this, ofc: it's almost impossible to replicate. For example, the server had been running for around a week, and me spinning down the array is what locked it up. The even crazier part was I just kept getting served the boot screen - I thought I had lost another usb within a week - but once I manually edited the config on a windows machine it started booting unraid again, though with crashes until I think the 3rd boot.
     

  9. Hey,

    I have been getting random crashes, and then usually it keeps crashing until I pull out the usb and manually edit the config to disable docker on boot; sometimes I have to do this 2 or 3 times before the system lets me turn the docker service back on.

    I also had 1 usb become corrupted (according to the unraid ui) due to the hard shutdowns I have to perform when the entire system locks up, I assume.

    So I have been googling these issues, and the most common suggestions I see are:
    * make sure you're not using macvlan (I'm not)

    * a dodgy container - if this is the case, is there really no other way to test besides just not running containers for a while?


    The other thing to note is that this started occurring a lot more when I replaced my mobo and cpu (i7-14700K, Z790).

    Attached is a snippet of the panic when this lockup occurs. One thing I noticed was `kernel tried to execute NX-protected page - exploit attempt? (uid: 99)` - should I be worried :/

    Also, I am being spammed by port reallocations and ipv6 reallocations for 1 port? Reading up on this, most people say it's normal and it'll just be 1 container causing pain, but is it normal to get this many logs non-stop? We're talking 3 eth interface renames every second (also attached).
    This ended up being a docker container stuck in a reboot loop, oops...

     

    I was getting an actual cpu panic a few weeks ago, and moving my appdata to an exclusive share seemed to fix that issue.

    Thanks!

    crash1.txt

    Explore-logs-2024-03-15 01 49 12.txt
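
    For anyone else stuck doing the "pull the usb and disable docker" dance, the edit itself can be scripted from any Linux box the flash drive is plugged into. This assumes the flag is `DOCKER_ENABLED` in `config/docker.cfg` on the flash drive - check your Unraid version before trusting it:

    ```shell
    # Flip the docker-on-boot flag in an Unraid flash config.
    # Path and key name are assumptions about where Unraid keeps this setting.
    disable_docker_on_boot() {
      cfg="${1:-/boot/config/docker.cfg}"
      sed -i 's/^DOCKER_ENABLED="yes"/DOCKER_ENABLED="no"/' "$cfg"
    }
    ```

    With the flash drive mounted at, say, `/mnt/flash`, that would be `disable_docker_on_boot /mnt/flash/config/docker.cfg`; flip it back to "yes" once the box boots cleanly.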

  10. Just wanted to add I was having intermittent problems too; then one morning the docker service would just lock up the entire server with the cpu-stall logs no matter how many times I rebooted. I had the syslog server writing logs to flash (set up when it became unresponsive probably 2 years ago) and forgot; this eventually corrupted the usb (it was 6 years old) (well, unraid kept saying it was buggered).

    Transferred over to a new usb, blatted my docker img file and bam, the same issue instantly started occurring as soon as I started loading up all my existing container configs.

     

    Then I stumbled across this thread. Thanks @Dreytac - moving my appdata to exclusive has stopped the crashing.


     

  11. I was in the same boat as @Januszmirek, but it was going from 6.9.1 to 6.12.x; it took me a few hours to work out that having wireguard enabled broke the connectivity (webui access, ssh access and internet access).


    Disabling the WG tunnel fixes it for the time being and everything can talk again; I assume it relates to "remote access to lan" and all the new security stuff 6.12 introduced, but I just don't have the time to dig deeper atm.

     

    I don't have any crash issues, just network issues.

     

    Edit: for me the issue ended up being the fact I was providing my local network's CIDR as an allowed ip; this used to work, but now it seems to kill everything. Setting a csv of allowed local ips resolves the issue.

    The only issue left now is to make the vpn have access to my pihole dns for translation of container hostnames.

     

     

  12. Hey, was just wondering: is there any way to increase the number of lines shown in the "show log" window? Or is this a limitation of unraid's popup windows?

     

    I know it's not hard to download the log and view it; it's just that when you do it 20 times it gets a lil tedious. Just one of the QOL things I was wondering about.

     

    Edit: I guess I could just look into getting log viewers set up if the above isn't possible - where do the logs get stored?

     

    thanks!
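
    While waiting for an answer: the popup aside, the same logs are reachable from a terminal, where the line count is whatever you ask for. `/var/log/syslog` is the usual system log location, and container logs come via standard `docker logs`; the helper name below is mine:

    ```shell
    # Tail the last N lines of any log file (path first, count second).
    last_log_lines() { tail -n "${2:-500}" "$1"; }

    # examples (not run here):
    #   last_log_lines /var/log/syslog 1000
    #   docker logs --tail 1000 <container-name>
    ```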

  13. Hey all,

    This might be a more generic unix question, but is there an easy way to enable timestamps for all logs that a "userscript" would produce?

    From quick googling I can pipe my commands into various things to achieve this, but I was wondering if there was an easy "turn on for everything in unraid" or maybe just "turn on for user script logs"?

    Thanks in advance!
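
    In the meantime, one generic shell-only way (moreutils' `ts` is the tidy option if you can install it): wrap the script's output in a small prefixing loop. The function name is mine:

    ```shell
    # Prefix every line read on stdin with a timestamp.
    with_timestamps() {
      while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
      done
    }
    # usage inside a user script:  some_command 2>&1 | with_timestamps
    ```

    Piping through a loop like this timestamps each line as it arrives, so long-running steps show real elapsed time rather than one stamp for the whole run.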

  14. @oko2708

    I might have done it wrong, but I got it working by exposing the docker port on my unraid box, then

    configuring a cloud tcp://192.168.XX.XX:XXXX

    then I just create a cloud agent template.

    This was also a bit annoying, as the docker image then needed the JNLP agent stuff applied,

    i.e `FROM jenkins/jnlp-slave`

     

    If you figure out how to "just use docker images" let me know, as it feels kinda iffy.

     

    @binhex

    I updated my image earlier today and then jenkins just died, prompting something along the lines of

    `libfreetype.so.6: cannot open shared object file:` in JDK 8. Is this just a jenkins issue? I assume you're auto-building based on tags in jenkins land or something.

     

    I just rolled back to 2.239 for now.

  15. Technically, it's two.

     

    but it seems that this second one only provides 1 usb3 and 1 micro c?

     

    Is this what you're referring to by "vfio-pci plugin"?

     

    Edit: looks like it. But now that I have the groups, what can I do with them? It seems to reflect what I saw when I was re-reviewing the custom vfio directive for the usb passthrough earlier today.

     

    (screenshots attached)

  16. Hey all,

     

    I finally got around to buying a UPS; out of the box it works great with the provided ups addon. My only issue is that when I pass through my usb controller to my windows VM, it passes through all 8/9 usb slots, leaving 1 for the unraid usb, so there is no spare usb for the UPS connection.

     

    So I tried going back to the oldschool method of assigning usbs manually, but for some reason the keyboard and mouse's "Logitech Unifying Receiver" just doesn't work in the vm. I've tried usb 2/3 modes on all 8 spare usbs, nothing... I've tried the libvirt "Hotplug USB" addon to detach and reattach, but no luck.

     

    There seem to be a few people who have had the same issue and have just got a "new keyboard" :D

     

    https://www.reddit.com/r/VFIO/comments/di4jto/logitech_unifying_receiver_not_working_in_windows/

     

     

    Just wondering if there is anything else I could do? Or why it's not detected with manual assign passthrough o.0?

     

    Thanks in advance!

     

    Edit: my mobo is an ASRock Z270 Extreme4.

     

     

  17. 1 hour ago, Leoyzen said:

    It's interesting that vt-x is not working... I'm on an AMD build, so I just use docker-machine + virtualbox for developing, so I don't know much about Intel builds.

    But it should work like linux or windows, so something must be wrong.

    Oh really? I am actually trying this on an amd cpu also... to confirm, are you using macinabox as well? Or is it a straight mac osx vm?

     

    That's pretty much the functionality I'm chasing.
