Posts posted by flyize

  1. On 12/17/2023 at 12:50 AM, alturismo said:

     OK, may you post the output from the Unraid terminal (where xteve_guide2go is YOUR docker name):

     docker exec xteve_guide2go crontab -l

     root@Truffle:~# docker exec xteve_guide2go crontab -l
     0  0  *  *  *  /config/cronjob.sh

  2. I've got an odd problem. I'm running xteve_guide2go, and cronjob.sh seemingly isn't working correctly unless I go in and run it manually. I can see the timestamps changing, so the script is running, but it isn't downloading anything new. Any ideas?
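
     In case it helps anyone debugging the same thing: cron runs the script with a much smaller environment than an interactive shell, so a minimal first step is to make the cron entry log its output and then read that log after the next scheduled run (the log path here is just an example):

     # edited crontab entry inside the container; captures stdout/stderr of every run
     0  0  *  *  *  /config/cronjob.sh >> /config/cronjob.log 2>&1

     # then, from the Unraid terminal, check what the last run actually did
     docker exec xteve_guide2go tail -n 50 /config/cronjob.log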

  3. 10 hours ago, KluthR said:

     I checked it, and I bet that /mnt/cache/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/ is being skipped during backup because there are no other folders left besides Cache, Media and Metadata? And therefore it's empty?

    That folder is full of stuff.
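
     For reference, a quick way to show what's actually in there besides the Cache, Media and Metadata folders (run on the host; purely illustrative):

     du -sh "/mnt/cache/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/"*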

  4. I got the following error when the backup ran last night:

     

    [13.11.2023 03:08:18][][Plex-Media-Server] tar verification failed! Tar said: tar: /mnt/cache/appdata/Plex-Media-Server: Not found in archive; tar: Exiting with failure status due to previous errors

     

    Debug log: 3c4431dd-b78f-4391-963a-871abb2ed18e

     

    Thanks for any help.
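
     If it helps with debugging, listing the archive contents would show whether that path ever made it into the tar at all. The backup file path below is only a placeholder; point it at whatever file the plugin actually wrote last night:

     tar -tf "/path/to/last-nights-backup/Plex-Media-Server.tar.gz" | head -n 20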

  5. On 10/5/2023 at 1:47 PM, JorgeB said:

     Not necessarily, but I recommend starting the array in maintenance mode to zero an array drive, or you need to first manually unmount that disk.

    Shouldn't it work even if I don't, since zeroing will nuke the partition making a write impossible? That said, I tried to unmount it, but got a 'target is busy' error.
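
     For anyone else who hits 'target is busy' at this point, these are the standard ways to see what still has the mount open (the disk number is just an example):

     fuser -vm /mnt/disk5          # list processes using anything on that mount
     lsof +f -- /mnt/disk5         # alternative: open files on that filesystem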

  6. On 3/18/2021 at 5:05 AM, mgutt said:

    I had the same problem:

    Mar 18 10:01:35 Thoth emhttpd: Retry unmounting disk share(s)...
    Mar 18 10:01:40 Thoth emhttpd: Unmounting disks...
    Mar 18 10:01:40 Thoth emhttpd: shcmd (32548): umount /mnt/cache
    Mar 18 10:01:40 Thoth root: umount: /mnt/cache: target is busy.
    Mar 18 10:01:40 Thoth emhttpd: shcmd (32548): exit status: 32

     

    I tried "lsof /mnt/cache", but it returned nothing. Finally I found out it was my test enabling a swapfile. After "swapoff -a" the cache was unmounted. Strange, that it did not return something through lsof.

    Thanks @mgutt and Google. This fixed a hang for me. Not sure why it didn't run swapoff on its own.
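
     For the record, roughly the sequence from mgutt's post; nothing Unraid-specific, just standard commands:

     lsof /mnt/cache        # came back empty, even though something held the mount
     swapon --show          # reveals any swap file/partition that is still active
     swapoff -a             # disable all swap, releasing a swap file on the pool
     umount /mnt/cache      # the retried unmount now succeeds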

  7. On 10/3/2023 at 8:18 AM, shaunvis said:

     I didn't get the swap file set up; I was reading about other people that set up a swap file, and the OOM just filled that up too. But your post sent me down a rabbit hole that I think FINALLY fixed my OOM errors.

     

     It looks like the "Unassigned Devices" plugin was causing avahi-daemon to eat up all my RAM until it was killed. I reinstalled it and it's been OK since.

     

     Now I'm wondering if your experience of 6.12 locking up from issues like that, instead of showing OOM errors in the log, is what I was seeing. I might attempt 6.12 again to see if it works now.

    How did you determine that it was Unassigned Devices? I too have that installed.

     

     edit: Also, after upgrading to the new 6.12 the other day, my server crashed again this morning. Same thing: OOM killed everything, but the server still responds to pings. I just downgraded back to 6.11, and since I'm headed out of town for a few days, I've set the RAM compact script (posted in the SO threads I linked) to run hourly.
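
     In case anyone wants it, the compaction trick from those SO threads boils down to poking the kernel's manual-compaction trigger; what I've scheduled hourly looks roughly like this (the exact script in the threads may differ, and it needs a kernel built with compaction support):

     #!/bin/bash
     # ask the kernel to defragment free memory across all zones
     echo 1 > /proc/sys/vm/compact_memory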

  8. If all my docker and VMs are on a cache drive, and I go into Global Share Settings and exclude a drive that I want to zero - can that be done online relatively safely? I really don't want to have to be without the server for two days while the drive zeroes out.

  9. @shaunvis As expected, @JorgeB seems to be correct in that memory fragmentation was causing my issues. 6.11 seems to handle this *much* better, in that it only seemed to kill a couple of things - and most importantly put it in syslog. 6.12 seemed to kill almost everything, and had ZERO logging of it.

     

    I solved it with a swap file, but it could seemingly also be solved by compacting memory according to these threads:

    https://stackoverflow.com/questions/62077590/why-can-a-user-process-invoke-the-linux-oom-killer-due-to-memory-fragmentation

    https://stackoverflow.com/questions/60079327/how-to-mmap-a-large-file-without-risking-the-oom-killer/62855363#62855363

     

     So maybe try 6.12 again but with swap and see if it helps you too.
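
     And for completeness, the swap file side of it is just the generic Linux recipe (size and path are examples; on a btrfs cache pool the file needs copy-on-write disabled first):

     dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=8192   # 8 GiB swap file
     chmod 600 /mnt/cache/swapfile
     mkswap /mnt/cache/swapfile
     swapon /mnt/cache/swapfile
     swapon --show                                             # confirm it is active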

  10. 52 minutes ago, shaunvis said:

    I'm assuming you're on 6.12, correct? If so, try 6.11.5.

     

     Lots of people, myself included, can't go a day on 6.12 without it doing this exact sort of thing. I have to do a hard reboot, and then it works for a little while again.

     

    I've tried each version of 6.12 and always end up back on 6.11.5 where I have no issues. 

    I'm kinda out of ideas. It's been running fine for weeks until this.