volcs0
Posts posted by volcs0
-
I woke up to my server mostly unresponsive - all cores at 100%. I've had a problem like this in the past, and that time I noticed it was Navidrome using the CPU. So I managed to quit the Navidrome docker, and everything went back to normal. I've never been able to figure out how or why Navidrome causes this. I looked at the Navidrome logs from this morning and didn't see any entries overnight.
Diagnostics are attached.
I looked at the server.log and saw the set of output below. At 05:52, you can see where I logged in, slowly navigated to the Docker tab, and quit Navidrome.
Any thoughts are appreciated. Thanks for your advice.
Apr 30 00:20:10 Tower vnstatd[20426]: Warning: Writing cached data to database took 10.4 seconds.
Apr 30 00:20:17 Tower sSMTP[10259]: Creating SSL connection to host
Apr 30 00:20:17 Tower sSMTP[10259]: SSL connection using TLS_AES_256_GCM_SHA384
Apr 30 00:20:20 Tower sSMTP[10259]: Sent mail for [email protected] (221 2.0.0 closing connection r16-20020a05620a03d000b0078f108b6765sm10894383qkm.48 - gsmtp) uid=0 username=root outbytes=1590
Apr 30 00:25:49 Tower nginx: 2024/04/30 00:25:49 [error] 17972#17972: *16510429 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 00:25:49 Tower nginx: 2024/04/30 00:25:49 [error] 17972#17972: *16510431 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 00:25:49 Tower nginx: 2024/04/30 00:25:49 [error] 17972#17972: *16510432 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 00:25:49 Tower nginx: 2024/04/30 00:25:49 [error] 17972#17972: *16510433 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 00:31:06 Tower vnstatd[20426]: Warning: Writing cached data to database took 5.9 seconds.
Apr 30 00:49:52 Tower emhttpd: spinning down /dev/sdf
Apr 30 01:19:46 Tower nginx: 2024/04/30 01:19:46 [error] 17972#17972: *16531581 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:19:46 Tower nginx: 2024/04/30 01:19:46 [error] 17972#17972: *16531582 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:19:46 Tower nginx: 2024/04/30 01:19:46 [error] 17972#17972: *16531583 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:19:46 Tower nginx: 2024/04/30 01:19:46 [error] 17972#17972: *16531584 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:29:50 Tower nginx: 2024/04/30 01:29:49 [error] 17972#17972: *16535538 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:29:50 Tower nginx: 2024/04/30 01:29:50 [error] 17972#17972: *16535541 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:29:50 Tower nginx: 2024/04/30 01:29:50 [error] 17972#17972: *16535542 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:29:50 Tower nginx: 2024/04/30 01:29:50 [error] 17972#17972: *16535543 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:48:11 Tower nginx: 2024/04/30 01:48:11 [error] 17972#17972: *16544601 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:48:11 Tower nginx: 2024/04/30 01:48:11 [error] 17972#17972: *16544602 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:48:11 Tower nginx: 2024/04/30 01:48:11 [error] 17972#17972: *16544603 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:48:11 Tower nginx: 2024/04/30 01:48:11 [error] 17972#17972: *16544604 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:52:22 Tower nginx: 2024/04/30 01:52:22 [error] 17972#17972: *16546487 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:52:22 Tower nginx: 2024/04/30 01:52:22 [error] 17972#17972: *16546488 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:52:22 Tower nginx: 2024/04/30 01:52:22 [error] 17972#17972: *16546489 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:52:22 Tower nginx: 2024/04/30 01:52:22 [error] 17972#17972: *16546490 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:57:45 Tower nginx: 2024/04/30 01:57:45 [error] 17972#17972: *16548677 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:57:45 Tower nginx: 2024/04/30 01:57:45 [error] 17972#17972: *16548678 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:57:45 Tower nginx: 2024/04/30 01:57:45 [error] 17972#17972: *16548679 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 01:57:45 Tower nginx: 2024/04/30 01:57:45 [error] 17972#17972: *16548680 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 02:00:04 Tower emhttpd: read SMART /dev/sdf
Apr 30 02:04:36 Tower nginx: 2024/04/30 02:04:36 [error] 17972#17972: *16551620 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 02:04:36 Tower nginx: 2024/04/30 02:04:36 [error] 17972#17972: *16551624 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 02:04:36 Tower nginx: 2024/04/30 02:04:36 [error] 17972#17972: *16551626 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 02:04:36 Tower nginx: 2024/04/30 02:04:36 [error] 17972#17972: *16551627 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 03:01:25 Tower root: /var/lib/docker: 21.8 GiB (23410118656 bytes) trimmed on /dev/loop2
Apr 30 03:01:25 Tower root: /mnt/samsung_nvme: 616.6 GiB (662057234432 bytes) trimmed on /dev/nvme0n1p1
Apr 30 03:01:25 Tower root: /mnt/cache: 548.3 GiB (588752728064 bytes) trimmed on /dev/sde1
Apr 30 03:03:50 Tower emhttpd: spinning down /dev/sdf
Apr 30 04:32:12 Tower nginx: 2024/04/30 04:32:12 [error] 17972#17972: *16625055 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 04:32:12 Tower nginx: 2024/04/30 04:32:12 [error] 17972#17972: *16625056 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 04:32:12 Tower nginx: 2024/04/30 04:32:12 [error] 17972#17972: *16625057 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 04:32:12 Tower nginx: 2024/04/30 04:32:12 [error] 17972#17972: *16625058 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 04:35:43 Tower monitor: Stop running nchan processes
Apr 30 04:35:50 Tower nginx: 2024/04/30 04:35:50 [error] 17972#17972: *16626490 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 04:35:50 Tower nginx: 2024/04/30 04:35:50 [error] 17972#17972: *16626491 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 04:35:50 Tower nginx: 2024/04/30 04:35:50 [error] 17972#17972: *16626492 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 04:35:50 Tower nginx: 2024/04/30 04:35:50 [error] 17972#17972: *16626493 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 05:52:32 Tower webGUI: Successful login user root from 10.0.0.166
Apr 30 05:52:46 Tower monitor: Stop running nchan processes
Apr 30 05:53:50 Tower php-fpm[5924]: [WARNING] [pool www] child 31840 exited on signal 9 (SIGKILL) after 62.393102 seconds from start
Apr 30 05:53:54 Tower kernel: docker0: port 4(vethc2542b2) entered disabled state
Apr 30 05:53:54 Tower kernel: veth6456e96: renamed from eth0
Apr 30 05:53:55 Tower vnstatd[20426]: Interface "veth6456e96" added with 1000 Mbit bandwidth limit.
Apr 30 05:53:55 Tower vnstatd[20426]: Monitoring (104): veth6456e96 (1000 Mbit) veth9b65937 (10000 Mbit) vethccc4924 (10000 Mbit) veth0d1ba7b (10000 Mbit) veth089f88a (10000 Mbit) vetha6bffd1 (10000 Mbit) veth708b092 (10000 Mbit) veth7b2a77b (10000 Mbit) veth3940d1a (10000 Mbit) veth52aea6a (10000 Mbit) vethc2542b2 (10000 Mbit) vethed159e8 (10000 Mbit) veth2677478 (10000 Mbit) vetha69e1db (10000 Mbit) veth4392a41 (10000 Mbit) veth0f7ee3b (10000 Mbit) vethfde0868 (10000 Mbit) veth62612f3 (10000 Mbit) vethbb969e6 (10000 Mbit) veth1f80d05 (10000 Mbit) veth258460e (10000 Mbit) veth407cda6 (10000 Mbit) veth0bad994 (10000 Mbit) veth70e8181 (10000 Mbit) veth99f05dd (10000 Mbit) veth7a0bad4 (10000 Mbit) veth0facb37 (10000 Mbit) veth2f6bd85 (10000 Mbit) vethb4ceb0d (10000 Mbit) veth49ce358 (10000 Mbit) veth8575af4 (10000 Mbit) veth917f7f3 (10000 Mbit) veth01c8c50 (10000 Mbit) veth73926cc (10000 Mbit) vethc1c909d (10000 Mbit) vethce14198 (10000 Mbit) veth9415899 (10000 Mbit) vethd0e0f4d (10000 Mbit) vethdf311a5 (10000 Mbit) ...
Apr 30 05:53:55 Tower kernel: docker0: port 4(vethc2542b2) entered disabled state
Apr 30 05:53:55 Tower kernel: device vethc2542b2 left promiscuous mode
Apr 30 05:53:55 Tower kernel: docker0: port 4(vethc2542b2) entered disabled state
Apr 30 05:54:00 Tower vnstatd[20426]: Interface "vethc2542b2" disabled.
Apr 30 05:54:00 Tower vnstatd[20426]: Error: Unable to get interface "veth6456e96" statistics.
Apr 30 05:54:00 Tower vnstatd[20426]: Interface "veth6456e96" not available, disabling.
Apr 30 05:54:11 Tower nginx: 2024/04/30 05:54:11 [error] 17972#17972: *16634764 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 05:54:11 Tower nginx: 2024/04/30 05:54:11 [error] 17972#17972: *16634765 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 05:54:11 Tower nginx: 2024/04/30 05:54:11 [error] 17972#17972: *16634766 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 05:54:11 Tower nginx: 2024/04/30 05:54:11 [error] 17972#17972: *16634767 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1:80"
Apr 30 05:56:53 Tower monitor: Stop running nchan processes
-
Thanks.
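Edit: for anyone hitting something similar, a habit that helped me later - before quitting anything, grab a one-shot snapshot of per-container CPU so you know which container is actually the culprit. A minimal sketch using the standard Docker CLI (safe to run anytime; it just prints a notice if docker isn't reachable):

```shell
#!/bin/sh
# One-shot snapshot of per-container CPU and memory usage, so the
# runaway container can be identified before stopping it.
if command -v docker >/dev/null 2>&1; then
    docker stats --no-stream --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}' 2>/dev/null \
        || echo "docker daemon not reachable"
else
    echo "docker CLI not found - run this on the Unraid host"
fi
```

Running it from an SSH session during the next freeze would show whether it really is Navidrome at the top.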
1 hour ago, JorgeB said:
The Jellyfin container was killed twice because it was using a lot of RAM and making the server run OOM; check its config or limit its RAM usage.
Thank you - I've been googling how to manage the Jellyfin docker container on unRAID. If you have any insights on how to limit a particular container's memory usage, please let me know. I will keep searching.
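Edit, in case it helps anyone searching later: on Unraid you can cap a container's RAM by adding `--memory=4g` to the template's Extra Parameters field (Advanced View must be enabled). The `--memory` flag is standard Docker; the container name "jellyfin" and the 4g cap below are just example values, adjust to your setup:

```shell
#!/bin/sh
# Equivalent of putting "--memory=4g" in the Unraid template's Extra
# Parameters field; "docker update" applies a cap to a running
# container. "jellyfin" and 4g are example values.
if command -v docker >/dev/null 2>&1; then
    docker update --memory=4g --memory-swap=4g jellyfin 2>/dev/null \
        || echo "container 'jellyfin' not found - flags shown for reference"
else
    echo "docker CLI not found"
fi
```

With the cap in place, the container gets OOM-killed on its own rather than taking the whole server down.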
-
Server became mostly non-responsive. CPUs were all at 100% in the GUI (once it finally loaded).
The htop command shows high usage from containerd and /usr/bin/dockerd --log-level=fatal.
I noticed Navidrome using a lot of CPU at one point, so I killed it, and now the system is back to being responsive.
Screenshot and logs attached.
Navidrome was running fine for weeks.
Any advice on finding the root cause and fixing it would be appreciated.
Thanks.
-
On 7/15/2022 at 11:19 AM, Tredaptive said:
Hello guys,
I hope someone can help me with my problem; I am relatively new to Docker containers.
I could not find anything online that helps me find a solution to my problem.
Paperless-ngx always fills my docker.img instead of saving to the array. Currently it is 96% full and I don't know why. All files are at most 1.2 GB in total. I have already enlarged the docker.img to 40 GB in the docker settings. Here you can see the memory usage is OK, but the docker usage is nearly at the max.
These are the template settings I'm using for the storage (do I have them wrong?).
Update:
I think I found part of the problem. I removed and reinstalled everything and uploaded individual files as a test. The default allocation of the docker.img was about 8.26 GB. While uploading, it fluctuated between 8.26 and 8.27 GB.
Then I uploaded a 481 MB PDF, and usage slowly but steadily increased until it stopped at 20 GB. One file is responsible for a little over 11 GB even though the file is actually only 481 MB. On the Paperless interface, the file remains in the uploader with "Processing Document" without anything else happening.
Did you solve your problem? I have everything on the array, but my docker.img is totally filled up by Paperless. It's currently at 13 GB, and I just can't figure out why. My setup is like yours. Thoughts?
-
Around 9:30 AM CT this morning, my server became unresponsive - I could not get to the GUI, nor could I SSH in. Over the next hour, I could intermittently SSH in but could not do anything - the system was pretty much frozen.
A little after 10:10 CT, everything started working again. I resisted the temptation to restart during the freeze up, because I wanted to see (1) if it came back, and (2) if I could isolate the source of the problem.
Diagnostics are attached.
Thanks.
-
34 minutes ago, itimpi said:
Bear in mind that the value is only checked when a new file is created and does not take into account the size of the file about to be written so you may want a little more headroom. The normal recommendation is twice the size of the largest file you expect to write during day-to-day operation. BTRFS file systems seem a bit fragile when getting close to full.
OK - Terrific. Thanks for the tip.
-
10 hours ago, itimpi said:
You need to set the Minimum Free Space value for the cache pool as mentioned in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. The free space dropping below this value is what tells Unraid to stop writing to the cache and start writing directly to the array instead.
Thanks. Just did this - had to stop the array first. I made it 10GB, which should work pretty well for most things.
-
I have about 400 GB free on my 1 TB cache drive (NVMe).
Today, I was running a Python script to copy video files from my photo library to my photos share on unRAID. The total amount to be copied is about 1 TB. About halfway through, the script crashed because it was out of space - the cache drive was full. I had to invoke the mover and wait until it cleared some space before I could start the script again.
Is this the expected behavior? Do I just need to restrict downloads and copy functions to the available space on the cache drive?
Thanks for the help. Diagnostics attached.
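Edit: setting Minimum Free Space on the pool turned out to be the Unraid-side fix; on the script side, a cheap guard is to compare the batch size against the destination's free space before starting. A sketch of the idea (uses /tmp stand-in paths rather than my real library and share paths):

```shell
#!/bin/sh
# Refuse to start a copy that cannot fit in the destination's free
# space. SRC and DST are stand-ins for the real library/share paths.
SRC=/tmp/copy_src; DST=/tmp/copy_dst
mkdir -p "$SRC" "$DST"
need_kb=$(du -sk "$SRC" | cut -f1)
free_kb=$(df -Pk "$DST" | awk 'NR==2 {print $4}')
if [ "$need_kb" -lt "$free_kb" ]; then
    cp -a "$SRC/." "$DST/"
    echo "copied: needed ${need_kb} KB, ${free_kb} KB free"
else
    echo "skipped: need ${need_kb} KB but only ${free_kb} KB free"
fi
```

The same check is easy to do in the Python script with shutil.disk_usage before each batch.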
-
After the upgrade to 6.12.4, many of my dockers are not working due to permission errors. In some cases, I've gone in and changed the owner and group to nobody/users, and that has worked. In other cases, I've changed the entire folder to 655, and that has worked. In some cases, I just can't figure out how to fix it. I've tried deleting the appdata folder for that app and reinstalling it, but that hasn't worked.
Any thoughts as to why this happened and how to fix it?
Almost all of the folders in appdata now have both owner and group as "1000".
An example: my lidarr app is now throwing this error:
[v1.4.1.3564] code = ReadOnly (8), message = System.Data.SQLite.SQLiteException (0x800017FF): attempt to write a readonly database
attempt to write a readonly database
Diagnostics attached.
Any ideas are appreciated.
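For reference, Unraid's usual ownership for shares is nobody:users (99:100), and the New Permissions tool resets things roughly like the sketch below (note the Docker-safe variant deliberately skips appdata, since some containers need their own UIDs, so folders may need doing by hand). Demonstrated on a scratch directory so it's safe to run; substitute the real /mnt/user/appdata path yourself:

```shell
#!/bin/sh
# Manual equivalent of resetting ownership and permissions on an
# appdata folder. A scratch dir stands in for /mnt/user/appdata/<app>;
# chown needs root on the real share, so it is allowed to fail here.
APP=$(mktemp -d)
mkdir -p "$APP/config"; touch "$APP/config/app.db"
chown -R nobody:users "$APP" 2>/dev/null || true
find "$APP" -type d -exec chmod 777 {} +   # directories: rwxrwxrwx
find "$APP" -type f -exec chmod 666 {} +   # files: rw-rw-rw-
ls -ld "$APP/config" "$APP/config/app.db"
```

For the readonly-SQLite error specifically, the file the app writes (e.g. lidarr.db) is the one that needs to be writable by the container's user.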
-
Diagnostics attached.
I just upgraded my CPU, MoBo, and memory. It took a long time to get everything hooked up and running, but it finally worked and booted.
I had to rename the folder EFI- to EFI on the USB to get it to boot - based on others' comments in this forum.
When it did boot, I noticed that it went through some sort of update - I saw it downloading something - went to 100% and then finished booting. Now I realize that it was upgrading unRAID OS to 6.12.4.
When I started my array for the first time with the new hardware, I saw that there was no docker tab.
So, I went and enabled Docker, but no dockers are present.
I backed up my appdata folder right before doing this, but I did not back up my docker.img file...
So, given the number of things I changed, what do you think happened? And is there a recovery path short of reinstalling my docker containers?
Thanks for your help.
Edit: I went back to an old diagnostics, and I saw that docker vdisk location had been changed from
/mnt/cache/system/docker.img
to
/mnt/user/system/docker/docker.img
I changed it back, and now I'm in business again.
Why did this docker location folder change?
-
I'm trying to install Swingmusic -
The git clone command works fine.
But when I cd into the swingmusic directory and issue the "docker build . -t swingmusic" command,
I get this error:
Sending build context to Docker daemon 1.615MB
Step 1/8 : FROM ubuntu:latest
 ---> e4c58958181a
Step 2/8 : WORKDIR /
 ---> Using cache
 ---> 24c53c2f5562
Step 3/8 : COPY ./dist/swingmusic /swingmusic
COPY failed: file not found in build context or excluded by .dockerignore: stat dist/swingmusic: file does not exist
Any help is appreciated. Thank you.
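Edit: from what I can tell, that COPY error means dist/swingmusic doesn't exist inside the build context (or is excluded): the Dockerfile expects a pre-built dist output, which has to be produced first per the project's own build instructions. A quick pre-flight check, run from the repo root:

```shell
#!/bin/sh
# Verify the file the Dockerfile COPYs actually exists in the build
# context, and that .dockerignore is not excluding it.
if [ -e dist/swingmusic ]; then
    echo "dist/swingmusic present - docker build should get past step 3"
else
    echo "dist/swingmusic missing - build the app's dist output first"
fi
grep -n 'dist' .dockerignore 2>/dev/null || echo "no dist entry in .dockerignore"
```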
-
On 4/30/2018 at 1:46 PM, John_M said:
OSXFuse is available as a .dmg from here. Then you need the Fuse-XFS module from here, also a .dmg, so you won't have to compile anything from source. Note that it works in read-only mode - writes are not implemented - and the journal is ignored. The XFS module is alpha software and hasn't been updated for a couple of years but I used it a year or so ago and it worked on Mountain Lion. The disk won't automount but see the Readme that's included:
fuse-xfs /dev/rdisk1s1 -- /mnt/xfs
though the standard macOS mount point is below /Volumes rather than /mnt.
This is not working on my Mac running Ventura 13.4. If anyone knows of a current solution that can leverage my Mac to read a disk, please let me know. Thanks.
-
12 hours ago, JorgeB said:
If it really fails you can lose the data from the two failed disks.
Thanks. I shut it down and will replace the drive tonight and hope nothing else fails during the rebuild.
-
9 minutes ago, JorgeB said:
It's up to you; if the server is not being heavily used, it's probably the same risk to leave it on. BTW, the disk dropped offline - most likely a power/connection issue - but since it dropped, there's no SMART report.
But if another drive happens to fail, I'm screwed, right?
I have a full backup on a set of external drives offsite - but it would be a long painful process to rebuild and reconstruct everything...
-
Single parity system. One disk (8) is reporting an error. Diagnostics attached.
Should I shut down the array until I can replace the disk? I am out of town, so it will be at least 48 hours before I can replace it.
Thanks.
-
On 9/1/2023 at 7:36 AM, JorgeB said:
Nope
Sorry to pile on another question -
Since changing from macvlan to ipvlan, my cloudflare tunnel to my apps no longer works - getting DNS errors / SRV lookup errors.
If I change back to macvlan, it works again.
I'm reading about this and trying to troubleshoot and learn.
Any thoughts about where to start with this?
Thanks
-
7 minutes ago, JorgeB said:
It is:
Aug 31 09:18:58 Tower kernel: macvlan_broadcast+0x10a/0x150 [macvlan]
Aug 31 09:18:58 Tower kernel: ? _raw_spin_unlock+0x14/0x29
Aug 31 09:18:58 Tower kernel: macvlan_process_broadcast+0xbc/0x12f [macvlan]
Excellent. Thank you for taking a look. I will make the change you suggested.
Also, are all of those post disabled / blocked lines concerning?
Thanks again for your help.
-
On 8/28/2023 at 9:05 AM, JorgeB said:
Based on the screenshot it may be the macvlan issue, but post the persistent syslog when you have it to confirm. If it's the macvlan issue, switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right).
I didn't make any changes yet based on your suggestion above.
It happened again yesterday - around 16:30.
Here are the last 1000 lines of the syslog.
Thank you for your insights.
-
7 minutes ago, itimpi said:
You need to set the Remote syslog server to have the IP address of your Unraid server OR set the mirror to flash option.
That did it. Thanks.
-
1 hour ago, JorgeB said:
Post a screenshot of the syslog server settings, but most likely it's not correctly configured, you may want to read the instructions again.
Thank you.
Edit - It does not appear to be listening on port 514. Is that the problem?
-
23 hours ago, JorgeB said:
Based on the screenshot it may be the macvlan issue, but post the persistent syslog when you have it to confirm. If it's the macvlan issue, switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right).
I turned on syslog server (local) with /mnt/user/syslog as the share (cache only).
That was two days ago, and nothing has been written to that folder.
Are there supposed to be logs written periodically to the folder?
How does that work for a kernel panic - when would something get written to syslog?
I checked and group/user is correct (nobody/users) and permissions are 777 for the share.
Thanks for the advice.
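Edit: one way I found to sanity-check the syslog path end to end - confirm something is listening on the syslog port, then hand it a test message with util-linux `logger`. The 127.0.0.1/514 values assume the default local syslog server settings; adjust if yours differ:

```shell
#!/bin/sh
# Check for a UDP listener on 514, then send a test line; if the
# syslog server is working, the line should appear in the log share.
if command -v ss >/dev/null 2>&1; then
    ss -ulpn | grep ':514 ' || echo "nothing listening on UDP 514"
fi
if command -v logger >/dev/null 2>&1; then
    logger --server 127.0.0.1 --port 514 --udp "syslog end-to-end test" 2>/dev/null \
        || echo "this logger build lacks --server support"
    echo "test message attempted"
fi
```

On the kernel panic question: the local syslog file only contains what was flushed before the crash, which is why the mirror-to-flash option is usually suggested for panics.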
-
After years and years of stability, I'm now seeing occasional kernel panics during periods of high CPU usage. This is happening more frequently - I've had three this week. I have to physically turn the machine off and on.
I'd like to troubleshoot this.
I turned on Syslog Server with "local syslog" to a cache-only share.
Is this the preferred way to troubleshoot this?
In case it helps, I've attached a screenshot of the kernel panic, and I've attached my diagnostics.
Thanks.
-
1 hour ago, Jaybau said:
Might need to delete the appdata folders.
Might need to "update stack" for the changes.
6 hours ago, primeval_god said:
Looks like the database is stored in a docker volume (something that is not really best practice to use in unraid). The command "docker volume ls" should list the volumes on your system. According to the compose file the one with the postgres database is called pgdata, not sure it if will be named the same when listed on the command line. An alternative for finding it is to bring up the compose stack and then run a "docker inspect immich_postgres" and look for the volume section. Once you know the name of the volume you can bring the compose stack down and then run "docker volume rm <volume_name>" which should remove the volume.
So, someone on Reddit figured this out. I had to go into the immich_postgres docker and delete the /var/lib/postgresql/data folder.
Then when I rebuilt the stack, it recreated the database.
Simply deleting the docker containers didn't matter - somehow the data persisted.
Maybe I don't understand the difference between a docker container and a docker volume (likely).
Thanks for your help with this. I am learning new stuff every day...
-
I installed Immich (photo server) using the Docker Compose plugin, according to these instructions.
Here are the list of dockers it brings up.
I decided to wipe the install and start over with some new settings. So, I used the "Compose Down" feature.
I tried changing the Postgres password, but when I brought it back up (Compose Up), it wouldn't connect to the database. Changing the password back worked - and all of my picture data / thumbnails were still there - even though I thought I was starting over.
I tried to nuke everything - manually deleting the dockers, even going so far as to delete the Docker Compose plugin.
But even starting "from scratch" and doing everything over didn't work. When I bring up the dockers (Compose Up) - there it is - the same thumbnails and data from the Postgres database. And yes, I've deleted the Library folder, so the actual pictures are gone. It's just the database that remained.
So, I went into the console and manually deleted the Immich database using the DROP DATABASE command.
Now, I can't get anything to work again. The Docker Compose plug-in log says that since the database is still there, it doesn't have to re-create it.
How on earth can I delete the database? I know I have to delete the share with Postgres, but I don't see how or where to do that.
Thanks for the help and advice.
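Edit, for anyone finding this later: the answer turned out to be that Docker named volumes live outside the containers and survive both "Compose Down" and deleting the containers, which is why the database kept coming back. The volume name pgdata is an assumption from the Immich compose file; verify the actual name before removing anything:

```shell
#!/bin/sh
# List volumes and look for the Postgres data volume. The rm command
# is left commented out - removing the volume deletes the database
# for good, which here is exactly the point.
if command -v docker >/dev/null 2>&1; then
    docker volume ls --format '{{.Name}}' | grep -i pgdata \
        || echo "no volume matching pgdata"
    # docker compose down                # stop the stack first, then:
    # docker volume rm <volume_name>     # irreversibly drops the data
else
    echo "docker CLI not found"
fi
```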
Are htop CPU use and the unRAID dashboard supposed to correlate? They do not for me, and I can't figure out why.
in General Support
Posted
I'm trying to figure out what has all of my cores pegged and what has ground my server to a halt.
htop shows nothing using more than 10-15% of any core. And the advanced view in the dockers doesn't show anything either.
But the dashboard shows everything pegged, and my server is mostly unresponsive except for some command line and being able to see the dashboard.
So, if htop and docker advanced view don't show any docker using up the CPU, but I can see that everything is maxed out, what could the difference be?
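One common cause of exactly this mismatch (a hypothesis on my part, not confirmed from the diagnostics) is iowait: a core stuck waiting on disk I/O is shown as busy by the Unraid dashboard, while htop's per-process view shows no process consuming CPU. The kernel's counters make this quick to check:

```shell
#!/bin/sh
# Print the aggregate CPU time split from /proc/stat. If iowait
# dwarfs user+system, the "pegged" cores are blocked on I/O rather
# than computing - which per-process views will not show.
awk '/^cpu /{printf "user=%s nice=%s system=%s idle=%s iowait=%s\n",
             $2, $3, $4, $5, $6}' /proc/stat
```

htop can also be configured to show the iowait share per core (Setup -> Display options -> Detailed CPU time).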
Diagnostics attached. Will crosspost to unRAID forum as well.
Thanks.
tower-diagnostics-20240508-1642.zip