Posts posted by flyize
-
On 12/17/2023 at 12:50 AM, alturismo said:
ok, maybe post the output from the unraid terminal (where xteve_guide2go is YOUR docker name)
docker exec xteve_guide2go crontab -l
root@Truffle:~# docker exec xteve_guide2go crontab -l
0 0 * * * /config/cronjob.sh
-
On 12/12/2023 at 9:35 PM, alturismo said:
anything in the logs?
and when you run it manually, it works? Sample from the unraid terminal:
docker exec xteve_guide2go /config/cronjob.sh
Correct. Nothing in logs and it works manually.
-
I've got an odd problem. I'm running xteve_guide2go, and cronjob.sh seemingly isn't running correctly unless I go in and run it manually. I can see the timestamps changing, so the script is running, but it isn't downloading anything new. Any ideas?
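One way to see why the scheduled run differs from a manual one is to wrap the job so every cron invocation logs its environment, output, and exit status. A minimal sketch of my own (the wrapper function and log path are illustrations, not part of xteve_guide2go):

```shell
# Hypothetical logging wrapper: run a cron job and append its environment,
# output, and exit status to a log, so a silent failure leaves evidence.
run_logged() {
  job="$1"   # path to the script cron runs, e.g. /config/cronjob.sh
  log="$2"   # log file to append to
  {
    echo "=== run at $(date) ==="
    env | sort      # cron uses a minimal environment; PATH gaps break many jobs
    "$job"
    echo "exit status: $?"
  } >> "$log" 2>&1
}
```

Pointing the crontab entry at a wrapper like this instead of cronjob.sh directly would show whether the script exits nonzero only when run under cron.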
-
On 11/14/2023 at 3:51 PM, KluthR said:
Maybe the other mentioned patterns are matching?
I don't think so. Any chance you have a suggested pattern for backing up Plex?
-
10 hours ago, KluthR said:
I checked it and I bet that /mnt/cache/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/ is being skipped during backup because there are no other folders left besides Cache, Media and Metadata? And therefore it's empty?
That folder is full of stuff.
-
I got the following error when the backup ran last night:
[13.11.2023 03:08:18][❌][Plex-Media-Server] tar verification failed! Tar said: tar: /mnt/cache/appdata/Plex-Media-Server: Not found in archive; tar: Exiting with failure status due to previous errors
Debug log: 3c4431dd-b78f-4391-963a-871abb2ed18e
Thanks for any help.
-
36 minutes ago, JorgeB said:
It will likely cause a few sync errors due to the filesystem being mounted and not correctly unmounted.
How do I unmount it, then? lsof shows nothing.
-
On 10/5/2023 at 1:47 PM, JorgeB said:
Not necessarily, but I recommend starting the array in maintenance mode to zero an array drive; otherwise you need to first manually unmount that disk.
Shouldn't it work even if I don't, since zeroing will nuke the partition, making a write impossible? That said, I tried to unmount it but got a 'target is busy' error.
-
On 3/18/2021 at 5:05 AM, mgutt said:
I had the same problem:
Mar 18 10:01:35 Thoth emhttpd: Retry unmounting disk share(s)...
Mar 18 10:01:40 Thoth emhttpd: Unmounting disks...
Mar 18 10:01:40 Thoth emhttpd: shcmd (32548): umount /mnt/cache
Mar 18 10:01:40 Thoth root: umount: /mnt/cache: target is busy.
Mar 18 10:01:40 Thoth emhttpd: shcmd (32548): exit status: 32
I tried "lsof /mnt/cache", but it returned nothing. Finally I found out it was my test enabling a swapfile. After "swapoff -a" the cache was unmounted. Strange, that it did not return something through lsof.
Thanks @mgutt and Google. This fixed a hang for me. Not sure why it didn't run swapoff on its own.
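For anyone else who hits this: lsof only reports processes with open files, and a swapfile keeps a filesystem busy without any process being involved. A small helper of my own to check for that case (the second argument exists only so it can be tested against a sample file; the real table is /proc/swaps):

```shell
# Hypothetical helper: list swap files living under a given mount point.
# An active swapfile keeps the mount 'busy' even when lsof shows nothing.
swap_on_mount() {
  mnt="$1"                    # mount point, e.g. /mnt/cache
  swaps="${2:-/proc/swaps}"   # swap table; overridable for testing
  # /proc/swaps columns: Filename Type Size Used Priority (line 1 is a header)
  awk -v m="$mnt" 'NR > 1 && index($1, m) == 1 { print $1 }' "$swaps"
}
# Usage: swap_on_mount /mnt/cache   # any output -> run swapoff on it first
```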
-
On 10/8/2023 at 5:46 AM, JorgeB said:
Create the small script below with the user scripts plugin and schedule it to run hourly, it will output the memory stats to the syslog, then see if there's anything abnormal in the persistent syslog.
#!/bin/bash
free -h |& logger &
Just happened again. Memory seems fine?
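When reading those hourly entries back, the number that matters on modern procps is the 'available' column, not 'free'. A tiny filter of my own to pull it out (assuming the current `free` layout, where 'available' is the last field of the Mem row):

```shell
# Hypothetical filter: print the 'available' column from `free -m` output.
# 'available' estimates memory usable without swapping; 'free' alone undercounts.
mem_available_mb() {
  awk '/^Mem:/ { print $NF }'
}
# Usage: free -m | mem_available_mb
```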
-
Will do. I had set up netdata for that purpose, but it seems that it doesn't store data for very long.
-
But if I'm really not running out of RAM, and I think the fragmentation issue should be properly band-aided, there's nothing that *should* be causing it. Any ideas on how to further troubleshoot this?
-
On 9/28/2023 at 10:36 AM, JorgeB said:
yes, or
Any chance you have any other suggestions? I thought it was fixed, but OOM is still killing things. I can confirm it's not running out of RAM (80GB), I've added a 40GB swap file, and I've started running the memory compaction command to defrag the RAM.
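For anyone curious, the "memory compact command" is the kernel's standard compaction trigger. A sketch (the wrapper function and overridable path are mine, for illustration; the real path is /proc/sys/vm/compact_memory and writing it needs root):

```shell
# Hypothetical wrapper around the kernel's memory-compaction trigger.
# Writing 1 to /proc/sys/vm/compact_memory asks the kernel to defragment
# free pages in all zones; the path is a parameter only to keep this testable.
compact_memory() {
  trigger="${1:-/proc/sys/vm/compact_memory}"
  echo 1 > "$trigger"
}
```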
-
Any reason the standard wisdom is that the Docker and VM services need to be shut down to zero out a drive?
-
On 10/3/2023 at 8:18 AM, shaunvis said:
I didn't get the swap file set up, I was reading other people that set up a swap file & the OOM just filled that up too. But your post sent me down a rabbit hole that I think FINALLY fixed my OOM errors.
It looks like the "Unassigned Devices" plugin was causing avahi-daemon to eat up all my RAM until it was killed. I reinstalled it and it's been OK since.
Now I'm wondering if your experience of 6.12 locking up from issues like that, instead of showing OOM errors in the log, is what I was seeing. I might attempt 6.12 again to see if it works now.
How did you determine that it was Unassigned Devices? I too have that installed.
edit: Also, after upgrading to the new 6.12 the other day, my server crashed again this morning. Same thing: OOM killed everything, but the server still responds to pings. I just downgraded back to 6.11, and since I'm headed out of town for a few days, I've set the RAM compact script (posted in the SO threads I linked) to run hourly.
-
If all my docker and VMs are on a cache drive, and I go into Global Share Settings and exclude a drive that I want to zero - can that be done online relatively safely? I really don't want to have to be without the server for two days while the drive zeroes out.
-
Would it be possible for someone to add Threadfin to Unraid? Apparently it's based on xTeve and possibly in more active development.
-
@shaunvis As expected, @JorgeB seems to be correct in that memory fragmentation was causing my issues. 6.11 seems to handle this *much* better, in that it only seemed to kill a couple of things - and most importantly put it in syslog. 6.12 seemed to kill almost everything, and had ZERO logging of it.
I solved it with a swap file, but it could seemingly also be solved by compacting memory according to these threads:
So maybe try 6.12 again, but with swap, and see if it helps you too.
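For reference, the swap file setup amounts to the usual Linux steps. The path and size below are examples, not exactly what I ran; this needs root, and copy-on-write filesystems like btrfs need the file created with copy-on-write disabled first:

```shell
# Example swap file setup (illustrative path/size; run as root)
fallocate -l 40G /mnt/cache/swapfile   # preallocate the file
chmod 600 /mnt/cache/swapfile          # swap files must not be world-readable
mkswap /mnt/cache/swapfile             # write the swap signature
swapon /mnt/cache/swapfile             # enable it; verify with: swapon --show
```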
-
Interesting. I assume you can see that I have 80GB in there, and I'm pretty sure I'm not hitting that. The way you describe it, it's a Linux problem. That seems odd. In the real world, that's solved by adding a swap file?
-
So I'm running 6.11 now and the server has stayed up, which is great! Thanks for the suggestion @shaunvis!
However, my suspicion may be correct as I was able to capture an OOM issue in the logs. By chance can anyone determine what happened?
-
Yep, I'm running it now. Hopefully I can report back tomorrow that it's still up and running.
I still think it has to be some memory leak somewhere causing OOM to kill everything. That would explain the one time I was able to get into PiHole and see CPU/memory maxed. And sometimes the Home Assistant VM was still available. And *every* time, I could ping it.
-
52 minutes ago, shaunvis said:
I'm assuming you're on 6.12, correct? If so, try 6.11.5.
Lots of people, myself included, can't go a day on 6.12 without it doing this exact sort of thing. Have to do a hard reboot, then it works for a little while again.
I've tried each version of 6.12 and always end up back on 6.11.5 where I have no issues.
I'm kinda out of ideas. It's been running fine for weeks until this.
-
I sure did. The server crashed again but is still responding to pings. This is driving me crazy!
-
I'm probably going to curse myself by saying this, but I think it was Plex that was crashing everything. If it's still up tomorrow, I'll be confident that it's fixed.
[Support] Linuxserver.io - Nextcloud
in Docker Containers
Posted · Edited by flyize
Anyone else seeing this error when attempting to login from Chrome/Edge? Firefox works fine.
Refused to send form data to 'https://nc.xxxxxxxxx.com/index.php/login' because it violates the following Content Security Policy directive: "form-action 'self'".
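Haven't confirmed this myself, but the usual suspect for a form-action CSP rejection behind a reverse proxy is Nextcloud believing it is served over http. One commonly suggested check is the overwriteprotocol setting; the container name and the bare occ invocation below are assumptions for the linuxserver image (otherwise run occ via php inside the container):

```shell
# Unverified suggestion: tell Nextcloud it is served over https so the
# form-action CSP directive matches the proxied URL. 'nextcloud' is an example name.
docker exec nextcloud occ config:system:set overwriteprotocol --value="https"
docker exec nextcloud occ config:system:get overwrite.cli.url   # sanity-check the URL too
```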